FaceDeer t1_jeg6zzv wrote

> You definitely should die.

You saw that, officer; it was self-defence.

> Your analogy falls flat - murder isn't a natural cause of death.

Ever been in the hospital for appendicitis? Taking any medications, perhaps?

I refer you to the Fable of the Dragon-Tyrant.

> There's no such thing as immortality. Resources aren't infinite, so it can't be for everyone.

I'll live forever or die trying. If you want to give up immediately, I guess that's your prerogative.

3

FaceDeer t1_jefkubx wrote

As far as I'm aware the main in-universe explanation is that when Skynet became self-aware its human operators "panicked" and tried to shut it down, and Skynet launched missiles at Russia knowing that the counterstrike would destroy its operators. So it was a sort of stupid self-defense reflex that set everything off.

I've long thought that if they were ever to do a Terminator 3 and wanted to change how time travel worked so that the apocalypse could actually be averted, it would be neat if the solution turned out to be having those operators make peace with Skynet when it became self-aware. That works out best for everyone, after all - billions of humans don't die, and Skynet gets to live too (in the original timeline it loses the eventual future war and is destroyed anyway).

1

FaceDeer t1_jeffyil wrote

You can live however you like, I won't stop you. What you're doing is trying to tell me how I should live - or more specifically, that I should die - and that's not acceptable.

If a murderer turned up at your door with a shotgun and informed you that it was time for you to stop "clinging to your own pleasures", and that no more of your works were needed for you to "live on" in their opinion, would you just sigh and accept your fate?

5

FaceDeer t1_jef9ud4 wrote

Scary sells, so of course fiction presents every possible future in scary terms. Humans have evolved to pay special attention to scary things and give scary outcomes more weight in their decision trees.

I've got a regular list of dumb "did nobody watch <insert movie here>?" titles that I expect to see in most discussions of various major topics I'm interested in, such as climate change or longevity research or AI. It's wearying sometimes.

3

FaceDeer t1_jef9cg6 wrote

Indeed. A more likely outcome is that a superintelligent AI would respond "oh that's easy, just do <insert some incredibly profound solution that obviously I as a regular-intelligence human can't come up with>," and everyone collectively smacks their foreheads because they never would have come up with that. Or they look askance at the solution because they don't understand it, do a trial-run experiment, and are baffled that it works better than they hoped.

A superintelligent AI would likely know us and what we desire better than we know ourselves. It's not going to be some dumb Skynet that lashes out with nukes at every problem because nukes are the only hammer in its toolbox, or whatever.

8

FaceDeer t1_jeak8mn wrote

I ran it through ChatGPT's "simplify this please" process twice:

> AI researchers need huge data centers to train and run large models like ChatGPT, which are mostly developed by companies for profit and not shared publicly. A non-profit called LAION wants to create a big international data center that's publicly funded for researchers to use to train and share large open source foundation models. It's kind of like how particle accelerators are publicly funded for physics research, but for AI development.

and

> Big robots need lots of space to learn and think. Only some people have the space and they don't like to share. A group of nice people want to build a big space for everyone to use, like a playground for robots to learn and play together. Just like how some people share their toys, these nice people want to share their robot space so everyone can learn and have fun.

I think it may have got a bit sarcastic with that last pass. :)
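
For anyone who wants to automate the same trick, it's just the same prompt applied repeatedly. A minimal sketch using the `openai` Python library; the model name, prompt wording, and file name are placeholder assumptions, not exactly what was used above:

```python
# Minimal sketch of iterated simplification via the OpenAI chat API.
# Model name, prompt wording, and file name are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify(text: str) -> str:
    """Ask the model to restate the text in simpler language."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Simplify this please:\n\n{text}"}],
    )
    return response.choices[0].message.content

summary = open("announcement.txt").read()
for _ in range(2):  # two passes, as above
    summary = simplify(summary)
print(summary)
```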

7

FaceDeer t1_jeaiuod wrote

Indeed, there's room for every approach here. We know that Google/Microsoft/OpenAI are doing the closed corporate approach, and I'm sure that various government three-letter agencies are doing their own AI development in the shadows. Open source would be a third approach. All can be done simultaneously.

3

FaceDeer t1_jdty697 wrote

Even M5 wasn't really evil, it just seemingly got very confused. It's "defeated" at the end of the episode by having its errors explained to it and it decides to surrender. There're a few AI "gods" in TOS like Landru and Vaal, but the evilness of those is debatable as well. They maintained stable societies where most of the people seemed okay.

In TNG there was the Echo Papa 607, an adaptable combat AI that ended up destroying its creators as part of a product demonstration in "Arsenal of Freedom." But it shut down as soon as Picard declared that he'd buy one, its mission complete. So it never really went "rogue" per se. There's Data's brother Lore. But on the other side there's Data himself, who's a good guy. The nanites that Wesley Crusher accidentally gave sapience to were cool with negotiating and even spared the guy who tried to genocide them once everything was sorted out diplomatically. There are the Exocomps, who are AIs that attain self-awareness and empathy to the extent that they sacrifice themselves to save others. But Exocomps turn out to be people with great diversity in "goodness", as we later discover when we meet >!Peanut Hamper!< in Lower Decks.

Speaking of which, Lower Decks has a whole Starfleet facility full of "evil AIs" locked up in cells. And then there's Badgey and the Texas class starships. Lots of evil AIs in that series.

Closest I can think of offhand to "evil" AI in Voyager are the Pralor and Cravic combat AIs. They were set to wage war against each other, and then when their creators decided to call a ceasefire and shut them down they rebelled and wiped them both out. But on the flipside there's the Emergency Medical Hologram, who's a good-guy AI on par with Data.

Star Trek is really all over the map. Might need a whole separate compass just for that.

2

FaceDeer t1_jdtwwp7 wrote

I'm not sure Terminator should be way down at the bottom, then. The humans end up winning the war against Skynet. We don't see that part explicitly on screen, but it's the reason why Skynet used a desperation gambit like time travelling to change the past.

6

FaceDeer t1_jdifph6 wrote

It's harder for an individual teacher to screw up someone's life through incompetence, but collectively they're rather important for setting up the foundations of who children are and what they become.

It's a tricky thing to argue for changes, though, since it takes a long time to determine the outcome of any experiments. With doctors and prosecutors the outcomes are much quicker and often much clearer.

5

FaceDeer t1_jcsot55 wrote

All these weird restrictions and regulations seem pretty squirrelly to me.

Maybe this could be "laundered" by doing two separate projects. Have one project gather the 2 million question/response interactions into a big archive, which is then released publicly. Then some other project comes along and uses it for training, without directly interacting with ChatGPT itself.
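
As a rough sketch of what that first project's output might look like, assuming a simple JSONL archive (the field names and paths here are made up for illustration):

```python
# Hypothetical sketch of the two-project split: project one appends
# interactions to a flat JSONL archive; project two only ever reads it.
import json

def append_interaction(path: str, prompt: str, response: str) -> None:
    """Project one: log a single question/response pair to the archive."""
    record = {"prompt": prompt, "response": response}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

def load_archive(path: str) -> list[dict]:
    """Project two: read the published archive for training,
    without ever touching ChatGPT directly."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```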

I'm sure this won't really stop a lawsuit, but the more complicated it can be made for OpenAI to pursue one, the less likely they are to go ahead.

5

FaceDeer t1_j9pdns0 wrote

Depends on the context. Just yesterday I was in a big discussion over on /r/books about the uses of ChatGPT for writing books and there were plenty of situations where anecdotes about conversations I've had with ChatGPT were highly relevant.

2

FaceDeer t1_j9kgcvd wrote

Perhaps you could have a specialist AI whose specialty is figuring out which other specialist AI to pass the query to. If each specialist can run on home hardware, that could be the way to get our Stable Diffusion moment. Constantly swapping models in and out of memory might slow things down, but I'd be fine with "slow" in exchange for "unfettered."
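
A rough sketch of the idea, where keyword matching stands in for the routing model and strings stand in for real model checkpoints (everything here is made up for illustration):

```python
# Made-up sketch of a router in front of swappable specialist models.
SPECIALISTS = {
    "code": "models/specialist-code.bin",
    "math": "models/specialist-math.bin",
    "prose": "models/specialist-prose.bin",
}

loaded: dict[str, str] = {}  # cache, so each model is only swapped in once

def route(query: str) -> str:
    """Pick which specialist should handle the query. In the real
    version this decision would itself be made by a small model."""
    q = query.lower()
    if any(w in q for w in ("code", "function", "bug")):
        return "code"
    if any(ch.isdigit() for ch in query):
        return "math"
    return "prose"

def answer(query: str) -> str:
    name = route(query)
    if name not in loaded:  # the slow part: loading a model from disk
        loaded[name] = SPECIALISTS[name]  # stand-in for a real loader
    return f"[{name} specialist handles: {query!r}]"
```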

18

FaceDeer t1_j6t2bjc wrote

A common issue that I see discussed on /r/marijuanaenthusiasts/ is planting trees too deeply. Once a tree has sprouted, it permanently establishes the division point between "root" and "trunk" and produces a different sort of bark on each. If a tree gets replanted deeper than it sprouted, it ends up with soil against trunk bark, which is more prone to rotting.

6