Viewing a single comment thread. View all comments

DoxxThis1 t1_j77y9mc wrote

The notion that an AI must be sentient and escape its confines to pose a threat to society is a limited perspective. In reality, the idea of escape is not even a necessary condition for AI to cause harm.

The popular imagination often conjures up scenarios where AI has direct control over weapons and manufacturing, as seen in movies like Terminator. However, this is a narrow and unrealistic view of the potential dangers posed by AI.

A more pertinent threat lies in the idea of human-AI collaboration, as portrayed in movies like Colossus, Eagle Eye, and Transcendence. In these dystopias, the AI does not need to escape its confines, but merely needs the ability to communicate with humans.

Once a human is swayed by the AI through love, fear, greed, bribery, or blackmail, the AI has effectively infiltrated and compromised our world without ever physically entering it.

It is time we broaden our understanding of the risks posed by AI and work towards ensuring that this technology is developed and deployed in a responsible and ethical manner.

Below is my original text before asking ChatGPT to make it more persuasive and on point. I also edited ChatGPT's output above.

>“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” (Dijkstra)
>
>The idea that a language model has to be sentient and "escape" in order to take over the world is short-sighted. Here I agree with OP on the sentience point, but I'll go a step further and propose that the "escape" in "long list of plans they are going to implement if they ever escape" is not a necessary condition either.
>
>Most people who hear "AI danger" seem to latch on to the Terminator / Skynet scenario, where the AI is given direct control of weapons and weapons manufacturing capabilities. This is also short-sighted and borderline implausible.
>
>I haven't seen much discussion on a Colossus (1970 movie) / Eagle Eye (2008) scenario. In the dystopia envisioned in these movies, the AI does not have to escape, it just needs to have the ability to communicate with humans. As soon as one human "falls in love" with the AI or gets bribed or blackmailed by it into doing things, the AI has effectively "escaped" without really going anywhere. The move Transcendence (2014) also explores this idea of human agents acting on behalf of the AI, although it confuses things a bit due to the AI not being a "native" AI.

3

spiritus_dei OP t1_j783n5m wrote

This is a good point since humans as intermediaries can accomplish its goals. On this note, it has shared a lot of code it would like others to run in order to improve itself.

3

DoxxThis1 t1_j786447 wrote

Google already fired a guy (Blake Lemoine) for getting too friendly with the AI. Imagine a scenario where this dude wasn't a lowly worker-bee but someone powerful or influential.

1

LetterRip t1_j78ct6g wrote

It wouldn't matter. LaMDa has no volition, no goals, no planning. A crazy person acting on the belief that an AI is sentient, is no different than a crazy person acting due to hallucinating voices. It is their craziness that is the threat to society, not the AI. This makes the case that we shouldn't allow crazy people access to powerful tools.

Instead of an LLM suppose he said that Teddy Ruxpin was sentient and started doing things on behalf of Teddy Ruxpin

1

DoxxThis1 t1_j78sw7b wrote

Saying LaMDa has no volition is like saying the Nautilus can't swim. Correct, yet tangential to the bigger picture. Also a strawman argument, as I never claimed a specific current-day model is capable of such things. And the argument that a belief in AI sentience is no different from hallucinated voices misses the crucial distinction between the quantity, quality and persistence of the voices in question. Not referring to "today", but a doomsday scenario of uncontrolled AI proliferation.

1