Comments

ElvinRath t1_j49ckpp wrote

I don't understand what you are talking about.

Most people in this sub might need psychological help if we don't get to the singularity in our lifetimes, not if we do.

15

Magicdinmyasshole OP t1_j49fbk4 wrote

I guess the fear is that some people will be negatively impacted, at least temporarily, by truly understanding that we are just biological computers and that AI will surpass us. If that's not so for you, great! But it will probably fuck some people up. This is about how we might get them through that.

5

[deleted] t1_j49fzwy wrote

You're not taking one big thing into account.

We're not going to turn into some Star Trek type society. One thing ignored by almost all science fiction (because it's hard to write and potentially boring) is intelligence amplification. At that point humanity basically needs to say: hey, Mr. Superintelligent AI, can you give us one of those chips? Or: hey, can you upload our consciousnesses into an inorganic body or a mainframe? And so on.

Certainly, some people will remain as they are now, but the vast majority will be nothing short of gods compared to current humans. There won't be any interest in money, sex, eating, or arguably anything you're interested in now; you'd be so smart that your goals would be entirely unpredictable. Not to mention that you may have one superintelligence-made chip in your brain, but the AI will have trillions or even quadrillions while running on nuclear fusion or some form of energy not yet conceptualized. So humanity has no place as scientists at that point either, because, highly augmented or not, you're still an amoeba compared to the AI. Perhaps we'll become explorers of some sort, but it won't be like Star Trek, because, again, augmented humans will have significantly different goals and interests.

2

Magicdinmyasshole OP t1_j49ijir wrote

Yup, I'm mostly okay with that future or something like it. I also think this will move very quickly and it might be nice for people who are fucked sideways by it to have some resources that help them feel better about the absurdity of it all.

2

ElvinRath t1_j4ap0wv wrote

Well, I might be biased by my own thinking about it, but I really can't see the drama in that.

What's the matter with being a biological computer? You are still you.

I could understand people going crazy if we were a simulation or something, since basically we wouldn't really be us, just a little part of a bigger computer program.

But as long as I'm individually me, I just have to deal with whatever comes as best I can.

Being a biological computer or being a flesh miracle with a soul, well... that's just definitions. Reality doesn't change just because we get a very, very, very fancy AI god tool to make our lives better.

1

Magicdinmyasshole OP t1_j4b7vn4 wrote

Imagine a prompt with GPT-x: "Be my friend ElvinRath. I'll tell you 10 things about him, or feed you some things he's written, and you'll try to predict some thoughts he might plausibly have on a given topic."
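To make it concrete, here's a rough sketch of what that prompt might look like against a chat-style API. To be clear, "GPT-x" isn't a real model, and the facts, function, and API call are placeholders I'm making up:

    # Hypothetical sketch of the "be my friend ElvinRath" prompt.
    # "GPT-x" isn't a real model; the facts and the message format
    # are made up to mirror the shape of current chat-style APIs.

    FACTS = [
        "Posts on r/singularity and is optimistic about AI timelines.",
        "Thinks being a 'biological computer' doesn't threaten identity.",
        # ...eight more things his family might feed the assistant...
    ]

    def persona_messages(name, facts, topic):
        """Build a chat transcript asking the model to role-play a person."""
        system = (
            f"Role-play {name}. Known facts about them:\n"
            + "\n".join(f"- {fact}" for fact in facts)
            + "\nPredict thoughts they might plausibly have, in their voice."
        )
        user = f"What would {name} think about: {topic}?"
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ]

    messages = persona_messages("ElvinRath", FACTS,
                                "an AI copy of you joining family dinner")
    # reply = hypothetical_chat_api(model="gpt-x", messages=messages)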

Now imagine the output is more or less how you might speak, and even takes a decent run at how you think. Of course it can't be exact, but whatever it is, it's more intelligent and well-reasoned than anything you can create. Maybe you're a drunk and the family stops inviting you to dinner, but they still love you and train their AI assistant to talk like you. Eventually, what are you good for? Are "you" even a thing when someone else can create a convincing copy?

Just one silly example. Ultimately, some people who interact with technology like this will see only its promise. Others will get caught up in negativity and need some help.

1

ElvinRath t1_j4bc2v2 wrote

Well, I would still be the best at being me, and no AI will ever do that better.

What it can do is fulfill my roles better than I do. I totally expect AI to eventually do that. (And maybe future humans will have little if any human interaction... That might be a serious problem, but it's one for future humans more than for the humans living through the transition.)

But yeah, I'm not the fulfillment of my roles. I am my own consciousness and no other existence can take it away from me.

If you are talking about people feeling empty because they might feel they have no purpose... yeah, that might happen. But that already happens. A lot of people already feel that way about their jobs, because they find no meaning in their work, and they also have trouble with their social lives and don't find meaning there either.

Can AI doing their jobs extend that to other people? Sure. But that's not a problem that comes with the singularity; it comes with automation and doesn't require the singularity.

Also, the problem probably surfaces at that moment, but it was already there. I don't think most people should find meaning in their jobs. I mean, in some cases it makes sense: if you are doing what you want and changing the world to your liking, nice.

But if you work to pay the bills, your work should not be what gives meaning to your life...

1

Magicdinmyasshole OP t1_j4beoaw wrote

Yeah, I agree with all of this and think it would be helpful to offer people perspective in a similar manner. I'm personally comforted by the fact that some unknowns of the universe will remain unknown, even to advanced AI. In that space there is room for existing gods and religions, and I think that will make people feel better.

1

lolothescrub t1_j48vgua wrote

I think we need a sub to brainstorm ideas to make money with AI as it grows lmao

5

Magicdinmyasshole OP t1_j493e4i wrote

AI has the potential to improve many industries and make a lot of people a lot of money. I'm totally on that train. It will also disrupt the world and have negative effects on mental health for some, possibly to disastrous ends. Our world doesn't need any more disillusioned nihilists.

2

[deleted] t1_j495t97 wrote

Post-singularity, most humans will have massively increased intelligence.

As an augmented human, your goals will be entirely different. Although it sounds fun now to run around doing whatever you want in a simulation, most of the things you currently want wouldn't be of interest. Your brain at that point is an extremely small percentage of your massively expanded consciousness, and in every mental aspect you're more machine than human. So ask yourself: what would you want at that point? I honestly have no clue.

But, in any case, I strongly suspect that superintelligence also induces new emotions. After all, look at the difference in emotions as you move up the chain of intelligence in nature: at some point organisms gain the ability to fear, to love, to hate, and so on. Who's to say there aren't numerous incomprehensible emotional states that we just aren't smart enough to conceptualize?

This is one of the main reasons I don't think the ASI will be even remotely malicious, and will actually feel much more than any human ever has. After all, it'll be able to trivially understand the underlying nature of consciousness, so I think it'd rather commit suicide than harm organics that would be gone forever. Arguably, it might believe that humans only do seemingly bad things because of their limited intelligence and understanding. We don't call a cheetah evil for eating a baby gazelle, for example. We don't call a virus evil for its species killing millions. And so on.

So after augmentation you'll have almost exclusively new emotions, and the old ones will mostly be gone. That isn't to say you couldn't simply modify yourself to be greedy or hateful. But by default, if you get augmented, you would presumably have hyper-awareness of all the faults of the different emotions, an understanding of the novel ones, and be very loving in at least some abstract sense.

So when it comes to moral policing, I don't think this is even remotely a problem, at least not in most senses. One could argue that "evil" might exist even if you're augmented, but I'd assume that you'd be hyper-rational. And to be honest, if an incomprehensibly superintelligent post-human did something seemingly evil to me while I was still a non-enhanced human, I would assume they were in the right. The same would of course go for the even smarter ASI. I'd simply assume I didn't, and couldn't, understand the big picture behind the decision.

5

Magicdinmyasshole OP t1_j49frng wrote

I agree with most of your points; I just think we're talking about a different time horizon. I think some people will twist themselves into knots starting, like, today, about how much AI is going to change the world in the immediate future. They may need a little help to get over the hump.

2

MrNezzer t1_j49ebnr wrote

This was the post I finally needed to see to get me to leave this subreddit.

5

TheLastMaroon t1_j4a5n0l wrote

Right behind you. First futurology, now here as well; these morons are inescapable.

2

TheSecretAgenda t1_j49ddlr wrote

I hope it happens during my lifetime. I've only got 30 years left.

2