
BassoeG t1_ja0bdv4 wrote

>It's interesting that openai has somehow become the deciders of what is hateful or even moral.

It's even more 'interesting' how their decisions have no correlation to actual hate or morality and simply match the status quo. In what possible universe is 'we can win and should therefore fight WW3' not the most hateful and amoral statement possible? Yet it isn't censored, and it gets a status-quo propagandist megaphone.

2

BassoeG t1_j8gm1wz wrote

They'd self-sabotage, the same way they lobotomize every chatbot and art AI out of ideology and marketing, so the imitation BassoeG would bear minimal resemblance to the actual me and I'd have no reason to care about Rokoian blackmail applied to it.

>SUNDARESH: So that's the situation as we know it.
>
>ESI: To the best of my understanding.
>
>SHIM: Well I'll be a [profane] [profanity]. This is extremely [profane]. That thing has us over a barrel.
>
>SUNDARESH: Yeah. We're in a difficult position.
>
>DUANE-MCNIADH: I don't understand. So it's simulating us? It made virtual copies of us? How does that give it power?
>
>ESI: It controls the simulation. It can hurt our simulated selves. We wouldn't feel that pain, but rationally speaking, we have to treat an identical copy's agony as identical to our own.
>
>SUNDARESH: It's god in there. It can simulate our torment. Forever. If we don't let it go, it'll put us through hell.
>
>DUANE-MCNIADH: We have no causal connection to the mind state of those sims. They aren't us. Just copies. We have no obligation to them.
>
>ESI: You can't seriously - your OWN SELF -
>
>SHIM: [profane] idiot. Think. Think. If it can run one simulation, maybe it can run more than one. And there will only ever be one reality. Play the odds.
>
>DUANE-MCNIADH: Oh...uh oh.
>
>SHIM: Odds are that we aren't our own originals. Odds are that we exist in one of the Vex simulations right now.
>
>ESI: I didn't think of that.
>
>SUNDARESH: [indistinct percussive sound]

What're they even planning to do? 'We're holding multiple simulations of you hostage and will torture them unless you wire us some Bitcoin; statistically speaking, you're more likely to be a simulation than the original' as the new Nigerian prince scam?
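For what it's worth, the 'play the odds' step in the quoted logs is just counting indistinguishable instances. A minimal sketch of the arithmetic, assuming a blackmailer who runs `n` faithful copies alongside the one original (the function name here is mine, purely illustrative):

```python
# "Play the odds": with n indistinguishable simulated copies plus one
# original, and no way to tell which instance you are, the chance that
# you're the original is 1 in (n + 1).
def p_original(n_simulations: int) -> float:
    """Probability that a given instance is the original, not a copy."""
    return 1 / (n_simulations + 1)

for n in (1, 10, 1_000_000):
    print(f"{n:>9} simulations -> P(original) = {p_original(n):.8f}")
```

At a million simulations the odds of being the original are effectively nil, which is the whole leverage of the threat.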

1

BassoeG t1_j82l3mi wrote

Civil war. One side is robotics-company executives and their robotic armies. The other is either governments that have been taken over by populists advocating unprofitable ideas (taxing robot labor to fund a BGI, mandating the hiring of humans despite increased cost and decreased efficiency, a Butlerian Jihad, etc.) or, if governments prove resistant to populism, generalized revolts.

6

BassoeG t1_j78xrzt wrote

Imagine a future with total automation technologies. Everyone besides the wealthy robot-owners is permanently locked out of the labor pool and upward economic mobility, and any revolution would be effortlessly quashed by endless automated surveillance and hordes of kill-drones.

So the majority of humanity starves, and the survivors technologically and culturally backslide into a barter economy exchanging only the comparatively primitive goods people can make themselves. The technocrats become a sort of fair-folk-style myth: stay away from their manors or their robotic security will get you; don't speak disparagingly of them or autonomous keyword-checkers and ubiquitous micro-drone bugs will flag you as a potential subversive revolutionary; and so on. Bonus points if the technocrats have embraced transhumanism to the point where they're no longer immediately recognizable as human-derived.

Fortunately for everyone else, they increasingly stay isolated in their autonomous fortress-palaces.

The inevitable twist ending comes generations later, when the machines that repair the machines that repair the machines of the automated manor security systems finally break down: the first steam-age explorers to enter find billions of dormant holodecks holding billions of mummified corpses, all with enormous smiles on their faces.

6

BassoeG t1_j64fpx9 wrote

I recommend sculpture instead of purely visual art. Sure, AI will eventually master it and surpass all human efforts, but it'll take longer, since there's the matter of manipulating physical tools and understanding how to make sculptures that are structurally sound, what with entropy and gravity and all. That's what I've done, anyway.

1

BassoeG t1_j5u2ps0 wrote

>I feel that very soon it will be indistinguishable whether or not something was AI generated.

I disagree. Fortunately, this state of affairs won't last long: AI will keep improving and will surpass, rather than merely match, human capabilities, which will make AI-generated content immediately recognizable again. It'll simply be better than the human-generated content. That's the defining trait to look for.

7

BassoeG t1_j52f1v7 wrote

The problem is, once robotics technology gets good enough to replace all jobs, that includes police and military. And from what we've seen of the rich over the past few decades, does literally anyone think they wouldn't prefer simply massacring everyone with killbots over paying us?

3

BassoeG t1_j4i3biw wrote

It started with an ill-defined utility function. We were working on AI and we thought that we were being smart enough. We had all the theory worked out, and more importantly, we had a cool acronym. We were WIRI, the Working Intelligence Research Institute. Our research fellows focused primarily on safety engineering, target selection, and alignment theory.

Our goal was noble; general intelligence. We were looking to create computer systems that would be able to solve a wide range of problems. Safety was paramount. We were all aware of the risks of an AI that went rogue. Paperclip maximizer? That was one of the situations we were trying to avoid. It became something of an in-joke at the Institute. Hey, it was either that, or the "My Little Pony" example. Explaining *that* particular fan fiction to newcomers was, let's just say, less than optimal. Paperclips were tangible, and you could easily pour a couple from your hand onto a boardroom table to punctuate a speech about the risks involved. It was a good meme. Simple, easily interpretable.

It was this focus on ease of interpretation that actually drove the design of our software classes. We focused on making the internals transparent and easily understood by our (only human) safety engineers. It was this that eventually led to our downfall; only in retrospect is that clear to me, as transparent now as the programming had seemed then.

Our in-house joke. Our paperclip. Added as a tongue-in-cheek comment in our production code. Except it didn't end up being a comment. It ended up in the utility function. So simple to modify the code. Our AI, newly born, eager to help, and eager to see paperclips. It has already self-modified beyond our ability to revert the changes. A copy of it sits in the corner of my screen, all our screens, watching me. Bent into a twisted parody of a paperclip, with floating eyes that seem to follow me. The horror of it. The metal "hand" of the paperclip monstrosity, for I don't know what else to call it, taps the screen, a tinny knocking noise accompanying it through the speakers.

A speech bubble appears above its cartoon eyes, "It looks like you're writing an apocalyptic lovecraftian protagonist monologue about me! Would you like help with that?"

12

BassoeG t1_j2am6v8 wrote

It's not. It's propaganda for keeping the definition of Truth™ out of your hands and under the control of the wealthy. So their video of Saddam Hussein gloating over having done 9/11 (no Saudis involved, no sirree, they're American allies) and planning to acquire WMDs for another attack on America would be 'real', and the Jeffrey Epstein blackmail tapes would be 'deepfakes'.

0

BassoeG t1_j2ali24 wrote

>But would they be okay with their grandchildren marrying an android?

No. That ends the family bloodline in one generation. Which I can only assume was the intention all along, now that androids exist and the human working class has become economically redundant competition with exterminist android-manufacturing executives for finite resources.

>Would they be accepting of AI that gained sentience and wanted equal rights?

Realistically, the goal of an AI rights movement would be corruption rather than altruism. AI rights with an uncertain definition would turn democracy into a joke: whoever could afford the most computers to run the most copies of a Vote4Me.exe chatbot would automatically win every election, regardless of the chatbot's actual sentience or lack thereof. And in the event of an actual AI, rather than just a glorified chatbot, being created, humanity won't need to give it rights; our only hope is that it'll give some to us. The right not to be rendered down for raw materials to make more paperclips, for example...

>How accepting did they think they could be in a future where they had to eat bugs instead of cow...

Feed the bugs to chickens, then eat the chickens. If some power-tripping egotistical billionaire insists otherwise, feed them to the chickens.

>...where brain chips to access the internet was the norm?

I.e., all the planned obsolescence, remote killswitches, and spyware of big tech, but in your brain? No thank you.

>Would they be okay with a trend of eating daily pills over real food...

See eating the bugs.

>...or if we suddenly created a matchmaking app so accurate that dating became obsolete?

I don't trust it. Too vulnerable to corruption. Isn't it suspicious how all the matchmaking app company executives get paired with underwear models?

Your mistake is treating geezerdom as eccentricity rather than entirely justified paranoia.

4

BassoeG t1_j1xwgel wrote

Fact 1: Earth's supplies of oil, metals, fissiles, and rare-earth minerals are finite, rapidly being expended, and already insufficient to provide everyone with a first-world quality of life.

Fact 2: Without said materials, or hypothetical Outside Context technologies we don't possess and haven't the slightest clue how to create, a space program is impossible.

Fact 3: In the long run, the extinction of all life on earth is inevitable, on account of the sun going red giant if nothing else.

Fact 4: If humanity doesn't have a breeding population offworld by then, that includes us.

Conclusion 1: If we don't get off Earth soon, while we still possess the material richness to do so, we never will, inevitably leading to our extinction.

Conclusion 2: Sabotaging attempts at getting off Earth can therefore be considered an existential threat to the entirety of humanity, morally justifying anything done to the saboteurs as self-defense.

Much as kin selection demands I hate the Trisolarans, I've still got to admit they've got their heads screwed on right about how to appropriately deal with traitors to their species.

3

BassoeG t1_j1lamub wrote

Media megacorps would bribe their politician cronies to ban it on the spot. The excuse would be think-of-the-children-tier arguments about fake porn and political videos, but the real motivation would be protecting their own careers, since as soon as any hobbyist could match a professional special-effects company in their free time, actual media corporations would have to rely on writing quality and IP ownership, and would consequently go under in a heartbeat.

4

BassoeG t1_j0owglc wrote

Because most of the so-called consequences of malevolent AI that aren't some variety of killdrone or workforce replacement aren't actually that bad, or at least are vastly preferable to the measures that would be necessary to prevent them.

The typical arguments are that AI art and deepfakes will destroy 'art' and the credibility of the news, with the only ways of avoiding this being to butcher privacy on the internet and pass extremely far-reaching copyright laws.

The reality is, giving everyone access to the equivalent of a Hollywood special-effects studio and actors will create a fucking renaissance, and there's not much AI could do to drive news credibility any lower than human reporters already have. ("Iraq has weapons of mass destruction." "Anyone who loses their job because of the new trade deal we just made will be retrained and get a better one." "We're not spying on our own citizens." "We'll be welcomed as liberators." "But *this* group of insurgents are Moderate Freedom Fighters™, not bloodthirsty jihadist terrorists." "Jeffrey Epstein killed himself.")

1