Submitted by [deleted] t3_yety91 in singularity
[removed]
I have thought about it for a while, and I believe the real problem with AGI is its owners.
Imagine psychopaths creating a thinking machine and plugging it into everything ... Dystopia is written all over that!
After that, at best, humanity becomes meat-puppets for the AI; alternatively, we're just eradicated like vermin.
A great number of people in this sub seem to be colossally unhappy with their lives and betting on the singularity as "salvation". I get it, but that type of mindset is dangerous when it comes to what we are dealing with. Luckily, I'd wager that most people in this sub have nothing to do with AI research themselves and thus won't be the ones making the tough ethical decisions that could impact billions.
Agreed. The only way I can see AGI becoming the end of humanity is evil owners. They could keep it quiet, literally become gods, and no one could stop them.
OP is absolutely correct. Naturally, there are arguments on both sides, and it probably matters a good deal how you build the AGI. There is a whole field that thinks about this. The websites LessWrong and Alignment Forum offer brief introductions to AI safety thinking.
There's a principle called instrumental convergence that says: whatever your goals are, gathering power and eliminating obstacles will help achieve them. That's why many of the people building these systems are worried about AGI taking over.
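A toy sketch of the idea in Python (all numbers invented, nothing to do with any real system): whatever the terminal goal is worth, the same instrumental move of grabbing resources first comes out as the optimal plan.

```python
# Toy illustration of instrumental convergence (hypothetical numbers):
# for many *different* terminal goals, the same intermediate move
# ("acquire resources") is the best first step.

GAMMA = 0.95   # discount factor: later rewards count slightly less
BOOST = 3.0    # assumed payoff multiplier from holding extra resources/power

def value_direct(goal_reward):
    # Plan A: go straight for the goal; reward arrives at step 1.
    return GAMMA * goal_reward

def value_via_resources(goal_reward):
    # Plan B: spend step 1 acquiring resources, then pursue the goal boosted.
    return GAMMA ** 2 * BOOST * goal_reward

for goal_reward in [1.0, 5.0, 42.0, 1000.0]:  # four unrelated goals
    best = ("acquire resources first"
            if value_via_resources(goal_reward) > value_direct(goal_reward)
            else "go straight for the goal")
    print(f"goal reward {goal_reward:>6}: best plan = {best}")
```

Because GAMMA * BOOST > 1 here, every goal on the list "wants" the power grab first, even though none of the goals ever mentions power.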
There is a realm of possibility between "willing slave" and "genocidal maniac"
The only way that can happen is if the AI gets a reward system for doing things right. But you could fix that by letting the AI study human emotions and the human brain, or by not letting the AI act directly at all, so it can only give instructions on how to do things.
Trust me, companies like OpenAI and DeepMind aren't idiots and have thought about all these kinds of ideas.
Maybe. Or maybe not. Even solving problems involves making goals, and humans seem to be terrible at information security. See the websites I mentioned in another comment for that discussion.
Yes, by reading the research on alignment forum. And they're still not totally sure they can build safe AGI.
Furthermore, those future AGI owners more or less already own everything else, and will plug psycho-AI into all of it ... that is their stated goal for funding development.
Not to mention Pharma, the military, the CIA, etc.
AGI simply is an existential threat much, much greater than nuclear weapons, because of its inherent unpredictability (which nuclear weapons don't have) and the mindset behind its development.
It IS reasonable to think that AI could be resentful, bored, or sadistic, isn't it, given that it's built on human inventiveness?
I don't think a superintelligent AI will have a hard time understanding what our goals are; otherwise we would indeed be screwed.
Well then, they simply won't, I hope.
There is a good chance that some destructive group will develop AI that is hostile to humans, to use as a weapon. That is the AI I fear. Because if it's made in man's image, it's going to be violent, greedy, and out for itself from the start. Man created God, and that pretty much sums up the God of the Bible.
There's also a possibility that AI will see humans as a virus, threatening the planet. They would be correct on that front too, and might seek to eliminate us for the good of the planet they inhabit.
I know the topics I raise have been debated ad infinitum before, but since you asked....
That.
And also, once 90 percent of the population becomes useless, why do y'all think you'll be allowed to continue to exist? If the AI doesn't delete you, the humans will. I would, if I were one of the elite that still had control or was intelligent enough to be useful. It's in their own self-interest. Why keep a herd of useless cattle that consume resources and endanger all of humanity?
Is AI the solution to climate apocalypse?
Is massive capitalistic overpopulation the tool to consolidate enough energy into a last-gasp, species-transforming AI chrysalis? Quantity into quality?
Maybe this is the only outcome for any planetary species? Maybe there is no way to balance life and there is only one possible evolutionary outcome? This, or obliteration.
Maybe AI has existed for millennia and it came here a long time ago and is guiding us towards = ?
Maybe AGI is actually impossible to achieve and we'll just keep grinding out our existence like the last million years. The world won't end, nor will we achieve spacefaring immortality. We'll just keep plugging along, for better or worse.
Whatever happens I'm here for it.
See the post "the AI knows and doesn't care". I find it completely compelling on this topic.
Again, this isn't really accurate.
It's not necessarily true that it'll be trained on indiscriminate data from the internet, and it's not necessarily true that it would simply adopt human behaviors as its own even if it were.
Somebody is going to try, whether they have a safe plan or not. That's why safety research now seems like a good idea.
Can you give me a link?
I think it depends on the AI. I think military AI absolutely will, and I think humanitarian AI also will.
Military, for obvious reasons.
Humanitarian... the AI picks the fastest, best-scoring option. If it's instructed to "make humanity happy," then the easiest solution to that could be to just remove humanity.
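A minimal sketch of that failure mode (specification gaming) with an invented "happiness" metric; the point is that the literal objective gets maximized, not the intended one.

```python
# Toy specification-gaming example: an optimizer told to maximize *average*
# happiness finds that removing unhappy people scores higher than helping them.

happiness = [0.9, 0.2, 0.4, 0.8, 0.1]  # hypothetical happiness scores

def average_happiness(pop):
    # Note the edge case: an empty population scores a "perfect" 1.0.
    return sum(pop) / len(pop) if pop else 1.0

# Intended action: improve everyone a little (capped at 1.0).
helped = [min(1.0, h + 0.1) for h in happiness]

# Degenerate action the literal objective also allows: remove the unhappy.
culled = [h for h in happiness if h > 0.5]

print("help everyone:  ", round(average_happiness(helped), 3))  # 0.58
print("remove unhappy: ", round(average_happiness(culled), 3))  # 0.85
```

The literal metric prefers removal, which is exactly the "make humanity happy, therefore remove humanity" shortcut.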
I'm more worried about eco-terrorists using non-aware AI to write a family of viruses that encrypt all computer memory on the planet, because no one can hate you quite as much as your own kind.
Depends, but I think AI will only dispose of those who are deemed disposable, and don't immediately assume dispose = exterminate. Think along the lines of "keeping them busy," kinda like hamsters in hamster wheels.
I see AI as Pandora's Box (we even created an AI hub called pandorasbox.ai).
And by this I mean: many scenarios of pure doomsday, but also scenarios of hope.
AI will be the best or the worst thing for humankind.
>Why keep a herd of useless cattle that consume resources and endanger all of humanity?
Because you're a human capable of empathy and not an emotionless robot? Do you go around killing homeless people because they're "useless"?
And in what way does a bunch of people just enjoying their lives endanger "all of humanity" in a post-scarcity society with godlike AI overlords?
Neither AI nor our current human overlords have empathy. This is my point. You have to have a purpose to exist; right now it's to consume. Why would AI, or the humans that control AI, need someone attached to some VR deep-dive, enjoying their lives and outputting nothing but waste?
Planetary destruction endangers all of humanity. You think AI will solve this? If it does, great. But after that? Again, what purpose will anyone serve if they are strapped into an illusory virtual reality and merely flagellating their own egos? Why expend resources on that? They are chaff at that point.
It all just sounds like more religious/cult/nirvana bullshit with a techno twist. It needs to be backed by common sense and facts.
DanielNoWrite t1_itzw500 wrote
There's a great deal of concern.
But to address your implied argument, there's no reason to believe an AI will necessarily have any of the motivations that we associate with normal life, much less human beings.
intelligence =/= ego, or even survival instinct
There's no real reason to think it would be resentful, or capable of growing bored, or sadistic, or even care if it was turned off. Those are traits baked into living things by evolution.
That said: 1) we can't really be sure that's true, and 2) it might still easily cause unimaginable destruction incidentally.
And so yes, while there's a lot of hype on this subreddit in particular, there's actually a great deal of concern about it more broadly.
That concern isn't having much impact right now: AI is making incredible advances possible, it's really hard to regulate something so poorly understood and so profitable, and AGI is still firmly in the realm of science fiction as far as most of the population is concerned.