EulersApprentice t1_j9ywqaq wrote

Politics aside, I find it curious how "homosexual people" rates higher than "homosexuals". I would have expected it to be the other way around, since the latter phrasing makes the property sound like the defining characteristic of the person, making it arguably more stereotype-y.

3

EulersApprentice t1_j8h7gg4 wrote

See, the problem is the top echelons of society have their wealth in an indestructible unobtainium vault. Not even governments are powerful enough to break into that vault – there are too many layers of defenses keeping intruders out.

People can vote to tax the rich, but the government is simply physically unable to carry out the taxation.

3

EulersApprentice t1_j64xwr8 wrote

Deploying standard anti-mind-virus.

Roko's Basilisk's threat is null because there's no reason for the Basilisk to follow through with it. If it doesn't exist, it can't do anything. If it does exist, it doesn't need to incentivize its own creation, and can get on with whatever it was going to do anyway. And if you are an AGI developer, you have no need to deliberately configure your AGI to resurrect people and torture them – an AGI that doesn't do that is no less eligible for the title of Singleton.

16

EulersApprentice t1_j396f51 wrote

Everyone else in this thread spent so long wondering whether you could that they never stopped to think if you should.

It currently matters little who makes AGI, because nobody knows how to make one that won't kill us all. The question of when AGI gets made is more consequential: the later we get AGI, the more time we have to figure out the alignment question.

From the bottom of my heart I kindly ask you to find something else to do with your time than join the mob in poking the doomsday bomb with sticks.

1

EulersApprentice t1_j0tsuv5 wrote

>replace every occurrence of AI in your statement with child and maybe you will begin to see/understand

I could also replace every occurrence of AI in my statement with "banana" or "hot sauce" or "sandstone". You can't just replace nouns with other nouns and expect the sentence they're in to still work.

AI is not a child. Children are not AI. They are two different things and operate according to different rules.

>this is a nature/nurture conversation, and we are as much machines/programs ourselves

Compared to AIs, humans are mostly hard-coded. A child will learn the language of the household he's raised in, but you can't get a child to imprint on the noises a vacuum cleaner makes as his language, for example.

"Raise a child with love and care and he will become a good person" works because human children are wired to learn the rules of the tribe and operate accordingly. If an AI does not have that same wiring, how you treat it makes no difference to its behavior.

1

EulersApprentice t1_j0rnvz4 wrote

Remember that this entity is something we're programming ourselves. In principle, it does exactly what we programmed it to do. We might make a mistake in programming it, and that could cause it to misbehave, but that doesn't mean human concepts of fairness or morality play any role in the outcome.

A badly-programmed AI that we treat with reverence will still kill us.

A correctly-programmed AI will serve us even if we mistreat it.

It's not about how we treat the AI, it's about how we program it.

1

EulersApprentice t1_j0rn3pv wrote

Merging doesn't save us either, alas. Remember that the AI will constantly be looking for ways to modify itself to increase its own efficiency – that probably includes expunging us from inside it to replace us with something simpler and more focused on the AI's goals.

On the bright (?) side, there won't be an eternal despotic dystopia, technically. The universe and everything in it will be destroyed, rearranged into some repeating pattern of matter that optimally satisfies the AI's utility function.

1

EulersApprentice t1_j0rmh0d wrote

In reality, the malware put out by the AI won't immediately trigger alarm bells. It'll spread quietly across the internet, drawing as little attention to itself as possible. Only once it's become so entrenched as to be impossible to expunge will it actually surface and present itself as a problem.

1

EulersApprentice t1_izytifl wrote

See though, the way I see it, it doesn't really matter whether the singleton was programmed by the US, by China, or by someone else. Nobody knows how to successfully imbue their values into an AI, and it doesn't look like anyone is on pace to find out how to do so before the first AGI goes online and it's too late.

Whether the AI that deletes the universe in favor of a worthless-to-us repeating pattern of matter was made by China or the US is of no consequence. Either way, you and everything you ever cared about are gone forever.

I fear that making a big deal about who makes the AI does nothing but expedite our demise.

3

EulersApprentice t1_izyqa0q wrote

A war between AIs implies that the AIs are somewhere in the ballpark of 'evenly matched'. I don't think that's likely to happen. Whichever AI hits the table first will have an insurmountable advantage over any latecomer – assuming it doesn't simply prevent the rival AI from ever entering the game at all.

3