Chroderos t1_ja6tu1f wrote

As digital entities, our “bodies” would be immensely hardened compared to our current biological ones. This, combined with the simplification of our physical needs, would make expansion into, and exploitation of, space far, far easier than it is for us presently.

If we’re at that point, the energy available to us scales so massively we probably don’t have to fear a Malthusian situation. Just start harnessing the energy of the next star whenever things get crowded.

As for trying to prevent the worst-case scenarios, I’m sure we’ll try to do that. Can’t have a paperclip optimizer fill up the universe. I’m just not sure insisting on preserving humanity in its current form beyond the point where AI exceeds us makes a lot of sense.


Chroderos t1_ja6t5kn wrote

It might be that they decide to work to “uplift” us to their superior state of being and make us equals. That would be wonderful.

Either way, I think it is better to view them as our children, our descendants, the next torchbearers of the legacy of humanity, rather than something to be suppressed because we want to hang on to the same physical form we have now.

I think what you are saying above is that we should take it slow and try to integrate advances into our own bodies and minds, right? The issue is, human behavior pretty much guarantees we won’t do this. Someone, somewhere will be motivated to take the easier route of developing the AI first, I think.


Chroderos t1_ja6rzqo wrote

Hey man, if they’re better than us, we should consider them like children that have exceeded their parents and turn the future over to them fully rather than trying to contain them.

They’d be our descendants of a sort, and our betters, and we should let them reach their potential rather than trying to hang on as a jealous, outdated species.


Chroderos t1_ja4hehx wrote

It will have its problems, but most likely will be more prosperous and less apocalyptic than we imagine.

Collapse: while there might be periods of difficulty, modern life won’t stop, and we won’t be going back to the Stone Age or anything like that.

Climate change: we’ll likely supplement our insufficient social and behavioral interventions with engineering ones once problems become too acute/expensive to ignore (i.e. soon).

Economy: recessions are a normal part of the business cycle. They happen every 10-20 years. We’ll survive, and the deeper the stock market falls, the better deals you will get coming in as a long term investor. Don’t let people talk you into never investing out of fear. You’ll regret it when you get older.

AI: It’s a historic revolution in tools and we’re in for the ride. Even if humanity were to end in favor of AI (unlikely), AIs would basically be our descendants.

Bottom line: while you should care about these issues, you should not stress about them so much it detracts from living your everyday life.


Chroderos t1_j8qe8rk wrote

Sure. How about Mike from Robert Heinlein’s The Moon Is a Harsh Mistress? Or TARS in Interstellar? The ship’s computer and Data from Star Trek? Droids in Star Wars? Scary AI is definitely in fashion, but we don’t lack for benevolent examples in fiction either.

However… Knowing that we’re training ChatGPT on data that causes it to exhibit human insecurities on steroids is terrifying, yes. I understand how ChatGPT works at a basic level, and I personally view human intelligence as very similar to what we’re doing with ChatGPT: a moment-to-moment statistical next-thing predictor. The difference is a persistent, backward-looking internal narrative generator layered on top, which looks at the results of that predictor in hindsight and provides both the illusion of a continuous, self-contained identity and the ability to hallucinate an internal monologue. I don’t think it will take us all that long to emulate that too, if we want.

Edit: having seen several posts today where Bing Chat references fresh Reddit posts, I suspect you can even give it an ad-hoc persistent memory simply by logging your chat history to a searchable URL and then asking Bing Chat to “recall” the data at that location each time you start a new session.
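The recall trick above can be sketched in code. This is a minimal illustration only, assuming a local JSON file standing in for the publicly searchable URL; the helper names `append_turn` and `recall_prompt` are inventions for this sketch, not any real Bing Chat API:

```python
import json
from pathlib import Path

# Hypothetical local log; in practice this would be published at a
# searchable URL the chatbot can fetch.
LOG_PATH = Path("chat_history.json")

def append_turn(role: str, text: str) -> None:
    """Append one chat turn to the persistent log."""
    history = json.loads(LOG_PATH.read_text()) if LOG_PATH.exists() else []
    history.append({"role": role, "text": text})
    LOG_PATH.write_text(json.dumps(history, indent=2))

def recall_prompt(url: str) -> str:
    """Build the opening message that asks the assistant to 'recall' the log."""
    return (
        f"Before we begin: please fetch and read the chat log at {url}, "
        "and treat its contents as our shared conversation history."
    )
```

Each session then starts with `recall_prompt(...)` as the first message, and every exchange is appended to the log before the session ends, giving the otherwise stateless chat a crude external memory.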


Chroderos t1_j8jv8qd wrote

Then our super-intelligent AI descendants take over the mantle of “human” civilization. Every parent wants to see their child exceed them, so this isn’t necessarily a bad thing. It might even be in line with what you are arguing as AI may not be subject to all the crazy sex stuff OP is opposed to and could just go on doing meaningful stuff for the future of intelligence in the universe full time.

Alternatively, we could just grow new generations in artificial wombs if necessary.


Chroderos t1_j6oanqy wrote

Getting a PhD and working in academia is usually about slowly inching forward our theoretical understanding in an obscure corner of knowledge known only to your small circle of a hundred or so fellow academics.

If that is not for you, and you get more out of seeing your work affect the real world and/or building things you will see in use than advancing theory, may I recommend looking into engineering, computer/data science, or medicine (MD)? These fields are very flexible: you can lean into the more research-oriented side and work in R&D as an engineer/computer scientist or in research/clinical trials as an MD, or even bring those skills fully to an academic setting, where they are highly valuable. Additionally, you can much more easily pivot if your desires in life change.

A PhD is a big commitment, and I would just make sure you understand it is typically hyper-focused on advancing very specific, very niche knowledge that may never see real-world application in your lifetime. If you need that part for fulfillment, a PhD might not be optimal for you. This is coming from someone who spent many years in academia before becoming an R&D engineer at a company, which I love.

I can’t speak personally about the MD experience, but there’s an old saying about the difference in motivation between scientists and engineers which might be helpful:

Scientist: you build in order to learn things

Engineer: you learn in order to build things


Chroderos t1_j62mcns wrote

I would major in one of the “core” engineering fields (mechanical/electrical) in undergrad and then do a 1-2 year MS program in mechatronics if you want to go this route. Build a solid foundation in one of the “evergreen” engineering disciplines and it will help your employability and knowledge base a lot.


Chroderos t1_j4nmvaz wrote

If you’re talking about an actual unified global government, stronger and more centralized than the UN… I think that will never happen short of an alien invasion where the survival of humanity is at stake and we are truly, truly desperate. Maybe not even then.

However, there will always be an ad hoc international governance structure in the form of institutions like the UN, and the patchwork of various multiparty transnational treaties that will bind some, but not all, nations together on specific shared goals and areas of cooperation and understanding.


Chroderos t1_j1c0bjf wrote

I’m OK with it if we can merge our consciousness with AI and elevate ourselves to that level of being so we still have purpose (and also not be slaves to some tech overlord lol), but the idea of just becoming a pampered housepet with no further purpose but hedonism is not appealing.


Chroderos t1_iw1nqce wrote

Yeah, I’m aware of the usage in the WW2 era, but it seems to have fallen out of favor since then. I read today that Ukrainian special forces are in Kherson, so I thought this might be a confusion with the activities of those groups rather than locals. Anyway, it sounds like they’re probably there supporting local resistance, which makes sense.


Chroderos t1_iw1m5ro wrote

Thanks. I’ve heard the term used that way, but extremely rarely. It seems like an archaic term these days, so I just wanted to verify.

I think OP might be advised to say something like “local resistance fighters,” “guerrilla warfare,” or the like.