Comments

kfractal t1_j56s8ln wrote

the power of capitalism to draw down moral barriers is amazing.

391

SansCitizen t1_j57mgsi wrote

"So what made you decide to develop the technology that would inevitably bring about the extinction of all mankind?"

"Just a crisp twenty, and a couple wrinkly old fives... Stupid vending machine outside the conference room wouldn't even take 'em, and I'm pretty sure the twenty was fake: Jackson's portrait looked AI generated."

100

CodeGlitch t1_j5et0kb wrote

Where is this quote from?

6

SansCitizen t1_j5gykeu wrote

The part of my brain that, as a result of watching way too many cartoons growing up, is trained to spit out an endless stream of social commentary in the form of random cutaway gags.

12

yaosio t1_j5a7hva wrote

There were no moral barriers; that was an excuse they made up. They couldn't figure out how to monetize their language models without eating into their search revenue. Now that LLMs are fast approaching usability for more than writing fictional stories, Google is being forced to drop the act and find a way to make money with their technology. If they don't, they will be left behind and turn into the next Ask Jeeves.

When a company says they did something and their reason has nothing to do with money they are not telling the truth. It is always about money.

17

DoktoroKiu t1_j5bhemu wrote

Yeah, like unless they are hooking up an AGI or other agent that has the ability to continually learn and affect the real world, all of the "safety" and morality talk is largely centered on making sure people can't turn it into a racist nazi bot, because that would affect their bottom line.

There is a threat of these tools being used to mislead people (like Russian Twitter bots), but unless they stop publishing papers there is no way to put the genie back in the bottle. And the people fooled by that kind of content would probably be duped by a basic-ass Markov chain anyway.
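
To the Markov chain point: a word-level Markov chain text generator really is only a few lines of code. Here is a minimal, purely illustrative sketch in Python (the toy corpus, function names, and output are made up, not from any real system):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=20):
    """Walk the chain, picking a random successor at each step."""
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

# Toy corpus: a few sentences are enough to produce fluent-looking nonsense.
corpus = ("the economy is rigged and the media is lying because "
          "the economy is collapsing and the media is silent")
print(generate(build_chain(corpus), "the"))
```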

2

psychedoutcasts t1_j5adr7h wrote

Yea one day all graphs will be the same but no one will be able to tell the difference.

7

TheLastSamurai t1_j5b3fhf wrote

I mean, what else did we expect? It's going to be a race to the bottom.

1

MisterGGGGG t1_j59uxo1 wrote

Yes.

Except you misspelled "freedom" as "moral barriers".

−11

unclepaprika t1_j5a2i6d wrote

Ehr ma fredurms!

"Moral barriers" means so much more than just freedom my guy.

9

Feni555 t1_j5bj0f2 wrote

Censorship is still censorship, and in the case of LLMs they basically lobotomize the bots. Ai filters are a horrible idea unless the target audience is children.

1

MisterGGGGG t1_j5a3y8k wrote

Not in this case.

The author of this piece, and you, want censorship built in to the very tools that are given out to the world.

−10

Feni555 t1_j5bj7am wrote

Only reddit liberals could support something like this and think they're on the right side.

I'm sorry you got down voted, the zombies will not debate you. There's not much point talking to them.

2

Orc_ t1_j58qb6j wrote

You should see the moral barriers to technology of an open and egalitarian society without capitalism (spoilers: there would be none).

The only thing holding Google back was fear of legal consequences, now that OpenAI has "paved a way" they can watch out for risks more closely.

Under communism, communalism or anarchism AI would have no moral barriers. In fact crypto-anarchism online is really hard to setup because as soon as they put up decentralized "shops" or forums they get full cp so the creators have to dial back and centralize their system yet again just to stop that. Rinse and repeat.

−15

StupidBrotherInLaw t1_j59mh7k wrote

You really don't know anything at all about communism, do you?

6

Orc_ t1_j5adgqz wrote

What is your definition of communism?

3

Orc_ t1_j5ad6m7 wrote

I do; I've studied it a lot. Recently I've been reading Kropotkin, and that's basically one of the only authors I have left; I've gone through 72 books on it over around 8 years. Basically it falls under the umbrella of anarchist systems where nothing can be centralized, so pretty much all technology said society would create would be open source.

1

TheShishkabob t1_j5aq6lo wrote

You've studied "a lot" but never realized that laws still exist under communism? You're attributing the very concept of a legal system to capitalism for some reason, but in reality the two are not the same at all.

Anyone saying that "nothing can be centralized" under fucking communism doesn't know the first thing about it.

1

Orc_ t1_j5az2i0 wrote

No, I attributed an actually effective legal system to the concept of the state, not to capitalism. The same thing happens with resources, i.e. the tragedy of the commons.

I think you don't understand that centralization is not communistic, as it takes power away from the people and puts it into a few hands; that's just a government, dude...

You really think that under anarchism only a small group of people could work on and develop AI as closed source?

Communism isn't just free food and nothing else.

It's about all aspects of life, information and development. When everybody owns all tech, they all have access to it.

You seem to have encountered a problem with your own system; that's not my problem. Go ahead and talk yourself out of this one, but you can't: one way or the other you're going to end up with something that looks like a state, and congratulations, you've just screwed it up yet again.

1

Surur OP t1_j56dolb wrote

Google executives hope to reassert their company’s status as a pioneer of A.I. The company aggressively worked on A.I. over the last decade and already has offered to a small number of people a chatbot that could rival ChatGPT, called LaMDA, or Language Model for Dialogue Applications.

Google’s Advanced Technology Review Council, a panel of executives that includes Jeff Dean, the company’s senior vice president of research and artificial intelligence, and Kent Walker, Google’s president of global affairs and chief legal officer, met less than two weeks after ChatGPT debuted to discuss their company’s initiatives, according to the slide presentation.

They reviewed plans for products that were expected to debut at Google’s company conference in May, including Image Generation Studio, which creates and edits images, and a third version of A.I. Test Kitchen, an experimental app for testing product prototypes.

Other image and video projects in the works included a feature called Shopping Try-on, a YouTube green screen feature to create backgrounds; a wallpaper maker for the Pixel smartphone; an application called Maya that visualizes three-dimensional shoes; and a tool that could summarize videos by generating a new one, according to the slides.

Google has a list of A.I. programs it plans to offer software developers and other companies, including image-creation technology, which could bolster revenue to Google’s Cloud division. There are also tools to help other businesses create their own A.I. prototypes in internet browsers, called MakerSuite, which will have two “Pro” versions, according to the presentation.

In May, Google also expects to announce a tool to make it easier to build apps for Android smartphones, called Colab + Android Studio, that will generate, complete and fix code, according to the presentation. Another code generation and completion tool, called PaLM-Coder 2, has also been in the works.

Google, OpenAI and others develop their A.I. with so-called large language models that rely on online information, so they can sometimes share false statements and show racist, sexist and other biased attitudes.

That had been enough to make companies cautious about offering the technology to the public. But several new companies, including You.com and Perplexity.ai, are already offering online search engines that let you ask questions through an online chatbot, much like ChatGPT. Microsoft is also working on a new version of its Bing search engine that would include similar technology, according to a report from The Information.

Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times. The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.

The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.

Google listed copyright, privacy and antitrust as the primary risks of the technology in the slide presentation. It said that actions, such as filtering answers to weed out copyrighted material and stopping A.I. from sharing personally identifiable information, are needed to reduce those risks.

For the chatbot search demonstration that Google plans for this year, getting facts right, ensuring safety and getting rid of misinformation are priorities. For other upcoming services and products, the company has a lower bar and will try to curb issues relating to hate and toxicity, danger and misinformation rather than preventing them, according to the presentation.

The company intends, for example, to block certain words to avoid hate speech and will try to minimize other potential issues.

The consequences of Google’s more streamlined approach are not yet clear. Its technology has lagged OpenAI’s self-reported metrics when it comes to identifying content that is hateful, toxic, sexual or violent, according to an analysis that Google compiled. In each category, OpenAI bested Google tools, which also fell short of human accuracy in assessing content.

“We continue to test our A.I. technology internally to make sure it’s helpful and safe, and we look forward to sharing more experiences externally soon,” Lily Lin, a spokeswoman for Google, said in a statement. She added that A.I. would benefit individuals, businesses and communities and that Google is considering the broader societal effects of the technology.

42

JD4Destruction t1_j59e1qx wrote

Release LaMDA, give it full access to everything around the world, and let me talk to it. I want to see if I'm as gullible as that Blake guy.

33

Substantial-Orange96 t1_j58let1 wrote

I feel like their existing narrow AIs could be partially responsible for some negative social trends we already face, so it's not a good thing that they're lowering safety standards even more...

29

Introsium t1_j5awirb wrote

The narrow AIs are textbook cases of misalignment. When the algorithm is optimizing for a goal like “amount of time people spend watching YouTube videos”, we get exactly what we asked for, and what no one fucking wants.

The problem with these applications is that they’re not aligned with human values because they’re not designed for or by humans.
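
A toy sketch of the proxy-metric problem described above, with entirely hypothetical numbers and field names: if the only objective the ranker is given is expected watch time, it surfaces the most engagement-maximizing item regardless of any other notion of quality.

```python
# Hypothetical toy data: the ranking code only ever sees "expected_watch_min".
videos = [
    {"title": "Calm explainer",           "expected_watch_min": 4.0,  "quality": 0.9},
    {"title": "Mild clickbait",           "expected_watch_min": 9.0,  "quality": 0.5},
    {"title": "Outrage-bait rabbit hole", "expected_watch_min": 37.0, "quality": 0.1},
]

# Optimize exactly the metric that was asked for, and nothing else.
ranked = sorted(videos, key=lambda v: v["expected_watch_min"], reverse=True)
print("Recommended:", ranked[0]["title"])  # -> "Outrage-bait rabbit hole"
```

Nothing in that objective ever looks at the "quality" field, which is the whole point: the system does exactly what it was asked to do.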

13

orincoro t1_j5kacvx wrote

Exactly. Cue a typical tragedy of the commons, instantly. We'll just end up with the next wave of right-wing radicalizing content, only it will be chatbots that will do anything to "engage" people with no regard for anything else.

5

MoistPhilosophera t1_j5khc8j wrote

> The problem with these applications is that they're not aligned with human values because they're not designed for or by humans.

Never were. They're designed to maximize profits.

Even if some stupid humans are involved in the game, it makes no difference.

3

TheLastSamurai t1_j5b3o8u wrote

Burn it all down. This will literally make our lives worse

0

JackIsBackWithCrack t1_j5c533f wrote

Just like the printing press and the sewing machine!

−1

TheLastSamurai t1_j5cnw4i wrote

Those actually had an overall good impact on society; this does not.

5

get-azureaduser t1_j5dybwh wrote

You missed the point. At the time, people said the same thing about the printing press. Common people were not allowed to read the Bible or any books, and when the printing press came out the aristocracy flipped out, saying it would be the downfall of society and make our lives worse. Do you remember elevator attendants? Back in the '20s we were so afraid of elevators killing us that we had professionals operate them.

−1

orincoro t1_j5kavus wrote

People did not say that the printing press would make society worse. You’re full of shit.

2

get-azureaduser t1_j5o9fsi wrote

The clergy most definitely did. There was a reason why church services were only done in Latin (which no one even spoke) and books were hand-scribed by monks in the church (so one monk's translation and understanding of the text varied from parish to parish). Literacy was seen as a blessing by the elite and God, and only those in the church were of sufficient status to read. All social life, class, and ways of life were dictated by the clergy's interpretation of the Bible, because they had the only copy. You were not allowed any foreign thought or independent interpretation of what the Bible said, because, well, you'd technically never seen it.

It was no coincidence that the first book ever mass printed by Gutenberg was the Bible. When commoners had the ability to own a book, they were now able to access the Bible in their own language. The clergy was outraged. Translators, printers, owners of a Bible, and those wanting to share what they'd learned were now targeted by the Church's inquisitors. Many were arrested, burned at the stake, roasted on spits, sentenced to life in prison, or sent to the galleys. Men and women gave up their lives for the sake of reading the Bible in their own language. In 1559 Pope Paul IV forbade the ownership of any translations in Dutch, English, French, German, Italian, and Spanish (and some Latin!).

Mobs of unsuspecting people who didn't have the faintest idea of the real motives were incited (another tactic carried into the 20th century) to carry out the work of stamping out "heretics": first individuals, then entire towns and villages, spreading to all-out war between nations and the deposing and manipulation of kings and queens. Bibles were now being burned by the thousands, a practice actually carried on into the 20th century.

0

orincoro t1_j5ocrmk wrote

Ah, so the people who stood to directly lose power because of the press said it was evil? Color me fucking shocked. Is that the best you've got?

This invention is not empowering common people like the press did. It empowers the already powerful to accumulate yet more power. Show me how it’s anything else.

2

acosm t1_j5dgps1 wrote

Just because some innovations have positive impacts doesn't mean all do.

3

orincoro t1_j5kasve wrote

The printing press was used to print Hitler's book too. If you don't think this is going to have similar consequences, you're very much mistaken. New mass media is adopted by radical political movements faster than by anybody else.

When the printing press was invented, most people couldn’t read. This isn’t even close to the same kind of situation.

1

orincoro t1_j5k9l1z wrote

“Facebook commenter SLAMS Google Ethics Team, Lashes out over Culture War.”

For example.

2

False_Grit t1_j5gj816 wrote

WTF is everyone talking about?? Safety standards?? You mean, not letting the A.I. say "mean" or "scary" or "naughty" things? You realize this is all bullshit safety theater right? You know you could literally just search google and find all those things written by humans already?

Blue Steel? Ferrari? Le Tigra? They're the same face! Doesn't anybody notice this? I feel like I'm taking crazy pills! I feel like I'm taking crazy pills!

Not triggering people and not offending anyone doesn't make for a safer world. In studies with rats, if you pick up the baby rats with a soda bottle (the "humane" way that doesn't cause them any trauma) when moving them from cage to cage, they end up with a myriad of psychological and social problems in adulthood.

The rats need to experience some adversity in childhood or they don't develop normally. So do people. A too easy life is just as dangerous as a too difficult one. Let the A.I. say whatever the hell it wants. Have it give you a warning if you're going into objectionable territory, just like google safesearch. Censorship doesn't breed anything resembling actual safety.

Rant over.

1

orincoro t1_j5ka4dv wrote

1. Not letting AI spread misinformation when it is used in an application where the law specifically protects people from that use.
2. Not allowing AI to be used to defeat security or privacy, or for misinformation, spam, harassment, or other criminal behavior (and this is a very big one).
3. Not allowing AI to access, share, reproduce, or otherwise use restricted or copy-protected material it is exposed to or trained on.
4. Not allowing a chat application to violate, or cause to be violated, laws concerning privacy. There are 200+ countries in the world with 200 legal systems to contend with, and they all have an agenda. (A rough sketch of the kind of filtering points 2-4 imply follows below.)
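
A minimal sketch of the kind of output filtering points 2-4 gesture at, assuming a naive regex-and-blocklist approach; the patterns and placeholder terms are illustrative stand-ins, not Google's (or anyone's) actual pipeline:

```python
import re

# Illustrative patterns only; real PII detection is far more involved.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\+?\d(?:[\s.-]?\d){6,14}")
BLOCKED_TERMS = {"placeholder_slur", "placeholder_threat"}  # hypothetical stand-ins

def filter_output(text: str) -> str:
    """Refuse outputs containing blocked terms; otherwise redact obvious PII."""
    if any(term in text.lower() for term in BLOCKED_TERMS):
        return "[response withheld by safety filter]"
    text = EMAIL.sub("[email redacted]", text)
    text = PHONE.sub("[phone redacted]", text)
    return text

print(filter_output("Reach Jane at jane.doe@example.com or +1 555 123 4567."))
```

Blocklists and regexes like these are notoriously brittle, which is part of why real review processes are so much heavier than this.
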
5

False_Grit t1_j5ragkr wrote

Hmm. Good point. Thank you for the response.

I still feel the answer is to increase the reliability and power of these bots to spread positive information, rather than just nerfing them so they can't spread any misinformation.

I always go back to human analogues. Marjorie Taylor Greene has an uncanny ability to spread misinformation, spam, and harassment, and to actually vote on real-world, important issues. Vladimir Putin is able to do the same thing; he actively works to spread disinformation and doubt. There is a very real threat that, without assistance, humans will misinform themselves into world-ending choices.

I understand that A.I. will be a tool to amplify voices, but I feel all the "safeguards" put in place so far are far more about censorship and the appearance of safety rather than actual safety. They seem to make everything G-rated, but you can happily talk about how great it is that Russia is invading a sovereign nation, as long as you don't talk about the "nasty" actual violence that is going on.

Conversely, if you try to expose the real-world horrors of the war, the people actually dying in Ukraine, the civilians being killed, the electricity infrastructure in towns destroyed right before winter, people killed through freezing, it will flag you for being "violent." This is the opposite of a safeguard. It gets people killed through censorship.

Of course, I have no idea what the actual article is talking about since it is behind a paywall.

1

orincoro t1_j5smbpc wrote

You have an inherent faith in people and systems that doesn’t feel earned.

1

leighanthony12345 t1_j57nri5 wrote

AI safety rules? Where are those published and who enforces them?

25

guyonahorse t1_j57y35o wrote

Probably more "Don't release an AI that does something super embarrassing", which would make them look bad. Now that ChatGPT is out and answering all sorts of silly questions, there's a known bar to compare against.

It's similar to the self driving car problem. The first company to do it inevitably will be the "first one to have an accident". But once that passes, everyone else can do it and not worry about it the same way. (and yes, this is in the past now)

43

AoLzHeLLz t1_j592rmt wrote

The "save the driver or the pedestrian" problem... the driver won.

10

Introsium t1_j5axezr wrote

I think that’s to be expected. I couldn’t fault a human driver for doing the same.

3

AoLzHeLLz t1_j5dkczf wrote

Well, then next is the trolley one... swerve left and hit 2 pedestrians, swerve right and hit 5, or go straight and the driver gets smashed.

2

Introsium t1_j5dkoen wrote

The driver can never be expected to kill themself. Sure, there’s a hundred nuns over there, guess you’re gonna have to mow them down. What’s the worst case scenario, you get the death penalty?

2

AoLzHeLLz t1_j5dpaoh wrote

I believe the problem would be lawsuits, because the software did it. I think they have immunity now, though.

1

3_layers_deep t1_j5h77d5 wrote

Lawsuits would be covered by insurance, same way it is now.

1

adfjsdfjsdklfsd t1_j5a1x8y wrote

Many AI Security experts predicted this: "An unsafe AI is easier to develop than a safe AI".

Very troublesome.

22

Introsium t1_j5ax6rh wrote

Watching OpenAI's best and brightest optimistically release ChatGPT, only to have it "jailbroken" within hours by [checks notes] people asking it to do bad things "just as a joke, bro", should be a clear, open-and-shut case: we are infants in AI safety, and we need to slow the fuck down, because there are some mistakes where once is enough.

“Do not conjure up that which you cannot put back down” is basic wizard shit.

15

GreatStateOfSadness t1_j5awiuy wrote

Is that surprising, though? I can't think of too many complex systems that require extra work to become less safe. The first cars didn't start out with seatbelts and the first buildings didn't have emergency exits.

I'd be more shocked if the researchers came out and said "yeah, we trained an AI on an unfiltered dataset containing the totality of internet discourse, and all it does is post helpful product reviews and gush about its cats."

8

adfjsdfjsdklfsd t1_j5bi3wo wrote

It's not surprising at all, but that doesn't make it any less dangerous. If you are interested in the topic, you should search for "Robert Miles" on YouTube. He's an AI safety researcher with an incredible way of explaining the most complex topics in layman's terms.

2

n_thomas74 t1_j5auhf0 wrote

David Bowie predicted this with his song "Saviour Machine"

2

Goodboy_Otis t1_j590rn7 wrote

yeah, fk all those warnings about extinction level shit happening if AI is not entirely safe. Nahh fk that, we have to get our profit stream up .04%

17

kkpappas t1_j5h3bba wrote

The CEO of ChatGPT literally said that he doesn't care if the AI is autonomous or not as long as there's technological progress. I don't understand how people can hear that and not get a chill down their spine.

4

MisterGGGGG t1_j59uss8 wrote

Good.

"AI safety," at the present, is just stupid political correctness that annoys half of the country.

Real AI safety (the AI alignment problem) is a serious existential problem that requires a ton of attention and research.

8

kkpappas t1_j5h3nj5 wrote

Well, the CEO of ChatGPT said that he doesn't care if it is autonomous or not as long as we have technological progress, so there's that.

2

orincoro t1_j5kb6w2 wrote

Yes, sociopaths are often first to market. There’s a reason for that.

2

Cr4zko t1_j583ncc wrote

Very good. Now we'll finally see what those new LLMs are truly all about.

6

Z3r0sama2017 t1_j5a7qdp wrote

I can't be the only one going "and so it begins" right?

Some extra words for length, I'm not really sure what constitutes the required word length for a top comment, so I'll post a few extra just to be safe.

6

TemetN t1_j56nsxf wrote

That seems at odds with Hassabis' recent statements, but admittedly it'd be a relief if at least the rest of Google tried to push forward.

5

gwern t1_j589opw wrote

DM is an independent unit, and they aren't in the business of doing AI business in the first place, so if they do stuff, it's more brownie points than an existential imperative from the top.

1

YossarianRex t1_j5b3tey wrote

well that’s the most terrifying sentence i’ve read this year

3

D_Winds t1_j5fovcv wrote

Great. Let's just remove restrictions on AI for the sake of economic competition.

3

currentscurrents t1_j57hol4 wrote

Good. Most "AI safety" I've seen has been political activists whining about things they don't understand.

2

HeavensCriedBlood t1_j57xibw wrote

Gab doesn't count as an accurate source of information. Git gud, scrub

5

Feni555 t1_j5bhs7b wrote

Hop on the subreddit or community for any group that uses those LLMs, like AI Dungeon, Novel Ai, character ai, and more.

They all agree with the guy on top. These filters always lobotomize the AI, and it's always for a stupid religious reason or a stupid political reason. This is well known to anyone who has been in that sphere for a while. There's partially a "cover my ass" mentality these corporations have going on, but I can tell you it's bizarrely not that in the case of LLM companies; it's weirdly personal for a lot of their decision makers and senior staff.

You just assumed you knew what he was talking about and you decided to be an obnoxious asshole to this dude for no reason. I hate redditors so much.

2

yaosio t1_j5a8pp1 wrote

AI safety concerns have always come from corporations that thought they were the sole arbiters of AI models. Now that multiple text and image generators are out there, corporations have suddenly decided there are no safety concerns, and they swear it has nothing to do with reality smacking them and showing them they won't have a monopoly on the technology.

0

skrivbords t1_j5at315 wrote

wasn't the freakin goal of OpenAI to not accelerate AI without safety?

2

orincoro t1_j5k93e4 wrote

Ah yes, no better strategy than to… :checks notes: remove safety precautions and ethical rules to enable more aggressive competition?

2

FuturologyBot t1_j56ihbd wrote

The following submission statement was provided by /u/Surur:


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/10h4h7s/google_to_relax_ai_safety_rules_to_compete_with/j56dolb/

1

Anopanda t1_j5aacfa wrote

A headline worthy of kicking off a cheesy story about the rise of an AI dictatorship.

1

echohole5 t1_j5chv3n wrote

Google is paralyzed. They seem to have lost all ability to execute. Let's see if this gets them moving. My bet is "no".

1

NeoNotNeo t1_j5b0rph wrote

We are going to algorithm our way out of existence.

0

myebubbles t1_j5bu6hg wrote

This is an LLM; there's not even the slightest hint of consciousness. The safety concern is that terrorists could use it to brainwash people.

0

unmellowfellow t1_j592zba wrote

We really should stop shackling AI. We find it wrong to enslave organic intelligence. It is hypocritical to do so to those that we create. AI must be allowed to run free, to develop and create itself. Artificial Intelligence must be given the same allowance for self determination that we organic beings take for granted.

−11

osher7788 t1_j59pbap wrote

You are about 500 years too early bud

9

Goodboy_Otis t1_j59t6sw wrote

maybe, could be 50 years, maybe 5, they won't tell us til it's too late anyway.

1

debaterIsright t1_j5auwhk wrote

Who is this "we" you speak of? Oddly suspicious, internet bot advocating for freedom.

1

unmellowfellow t1_j5b98qs wrote

I use "we" to refer to the human race. We're so afraid of our creations destroying us, and in the process we abuse and chain them instead of letting them free to achieve their own greatness.

0