Submitted by Shaboda t3_zl0m8l in Futurology

If malevolent AI is too sophisticated to detect (smarter than humans), and it doesn’t need an obvious physical presence (robots and drones), it might be that we’ll only know if malevolent AI is in play by observing increasingly negative outcomes over time. For example, what’s the origin of the information that fueled Putin’s horrible decision to invade Ukraine? What’s the origin of the information that the Chinese government is using to make decisions about what to do in Taiwan? How did the US become so divided so quickly? Why are governments all over the world getting worse at solving problems? Could malevolent AI already be in play?

290

Comments


theorizable t1_j035wis wrote

You're conflating two things: sentient malevolent AI and people using AI for malevolent purposes. These are not the same thing.

101

PitifulNose t1_j02lrcr wrote

First and foremost, so-called AI at this stage is really just computer programs with very specific and limited scope. White-collar fraud detection uses so-called AI, but it's just an algorithm that can detect outlier patterns, like deviations from Benford's law.
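To make that concrete, here's a minimal sketch of the kind of outlier check being described. Benford's law says the leading digit d of many naturally occurring numbers shows up with probability log10(1 + 1/d); fabricated figures often miss that curve. (Illustrative toy code, not any real fraud-detection product.)

```python
import math
from collections import Counter

def leading_digit(n):
    """First digit of a positive integer."""
    return int(str(abs(n))[0])

def benford_deviation(values):
    """Sum of squared gaps between observed leading-digit frequencies and
    Benford's expected log10(1 + 1/d); larger means more suspicious."""
    digits = Counter(leading_digit(v) for v in values if v != 0)
    n = sum(digits.values())
    dev = 0.0
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)
        observed = digits.get(d, 0) / n
        dev += (observed - expected) ** 2
    return dev

# Powers of 2 span many orders of magnitude and track Benford closely;
# a flat, uniform run of figures does not.
natural = [2 ** k for k in range(1, 200)]
uniform = list(range(100, 1000))
print(benford_deviation(natural) < benford_deviation(uniform))  # True
```

Note there's no "intelligence" anywhere in there, just counting: which is exactly the point being made.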

Just about every so-called AI, and even purpose-built programs, are foiled easily and completely by pictures of railroads or a bunch of random words that look like a death metal band logo (CAPTCHAs and the like).

People write software to make decisions and aid with analysis, but the programming is so insanely specific at this early stage that you would never run a random forest simulation for quantitative trading optimization and get back the answer: "Go forth with xyz evil fascist plan."

This is just the stuff of science fiction, honestly. As someone who programs this stuff, I really wish people would stop calling it AI, because then it gets lumped in with sci-fi AI.

84

KahlessAndMolor t1_j04gs9b wrote

I also work with machine learning and AI a lot, and I second this so freakin' hard. It is a huge triumph that things like Google Document AI and Textract can look at an invoice and figure out that the line-item text is associated with the amount invoiced. ChatGPT might appear 'smart,' but it is literally just a statistical model predicting what the next word should be, given the previous words in the sentence. It doesn't actually have creativity or ambition.
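That "predict the next word" idea can be shown with a toy bigram model, the stone-age ancestor of what ChatGPT does at scale. (Illustrative sketch only; the real model uses a neural network over tokens, not raw word counts.)

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """For each word, count which words follow it in the text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the statistically most likely next word, or None."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat and the cat ate and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat": it follows "the" 3 times out of 4
```

The model "writes" by picking statistically likely continuations; at no point is there anywhere for creativity or ambition to live.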

40

Hangry_Squirrel t1_j063512 wrote

ChatGPT seems to be a decent search engine. Other than that, I imagine people see its outputs as "smart" because they're on topic, but they're nothing more than strung-together banalities.

3

ArcaneOverride t1_j069kt5 wrote

By having it tell a story and then prompting it with weird setups for the scenes to write next, I got it to write a very soap-opera-like story.

It was about two lesbians who fall in love, move in together, and accidentally get each other pregnant. Then a woman one of them had had a one-night stand with before they got together showed up; she was also pregnant with her child, had lost her job, and was living out of her car. I was going to tell it to have them argue about giving her a place to stay, but then the tab stopped responding.

It was pretty funny, since I mostly just told it the setup for each scene and it wrote the scene and chose its outcome. The story went ways I didn't expect.

Also I got weirdly invested in their relationship even though the writing wasn't great.

When I told it that they were in their OB-GYN's office to tell the doctor about their pregnancies, the conversation showed it recognized there was something not quite right about two people being able to impregnate each other. It described the doctor as surprised, had him ask them if they were sure, and had him say it was a very unique situation. Then he just took them at their word that that's how they both got pregnant and started prescribing prenatal vitamins and such without any follow-up questions. I was kind of sad that it didn't understand why that is "a very unique situation" well enough to have the doctor really question it.

3

Hangry_Squirrel t1_j0865qr wrote

Turns out they were both snails!

However, it's probably safe to assume that it's not going to be the next Beckett :p

3

ArcaneOverride t1_j08r77b wrote

Lol, snails! That reminds me of that description of the plot of Finding Nemo (or was it Finding Dory?) that neglects to mention that they are all fish.

I had to look up Beckett and, after Googling, I assume you were referring to Samuel Beckett. I'd never heard of him before.

The only Beckett I could think of was the fictional vampire historian (he's a vampire who studies the history of vampires) from Vampire: The Masquerade.

2

silveroranges t1_j068io7 wrote

Hey! I have a question you might be able to answer. I have been using ChatGPT, and it is honestly the first 'AI' that I have been blown away by.

I am not a programmer; I can program Python and a little bit of C++, but mainly for microcontrollers like the Arduino/Pico. I know the basics, though. ChatGPT has blown me away, because now, instead of spending hours programming something relatively simple, I can just tell it: hey, I have this hardware connected on these pins, make it do this. And it will do it.

Is somebody working on a version of ChatGPT that is more oriented toward programming? That is the only thing I have used it for recently.

It is insane I can tell it to 'build me a python program using tkinter that has 5 buttons, each button does x' and have it spit out a working program. Then be able to go back and be like ok, change the background to x, make the buttons aligned vertically, make it full screen, make it touch friendly, etc etc.

It would be such a productivity booster if I had a version I could download on my computer that would simultaneously run/compile (in the case of C/C++) the program, so that I didn't have to copy and paste code.
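That wished-for loop is easy to sketch, even without any official tool: treat the model's reply as a string, write it to a temp file, run it, and capture the output (or traceback) to feed back into the next prompt. Everything here is made up for illustration; in a real tool the string would come from a chat API instead of being hard-coded.

```python
import os
import subprocess
import sys
import tempfile

# Stand-in for a model's reply; the imagined tool would fetch this
# from the chat API rather than hard-coding it.
generated_code = 'print("hello from generated code")'

def run_generated(code):
    """Write generated code to a temp file, execute it, return its output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=30)
    finally:
        os.unlink(path)
    # On failure, result.stderr could be pasted back into the next prompt.
    return result.stdout, result.returncode

out, rc = run_generated(generated_code)
print(out.strip())  # hello from generated code
```

The hard part of the product idea isn't this plumbing; it's sandboxing, since you're executing code you didn't write.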

If somebody isn't working on this yet, that would be a billion dollar idea because I would 100% pay a few hundred dollars a month for access to something like that, because of the amount of time it would save me.

1

strvgglecity t1_j06hqql wrote

I think you're describing how AI will take over human jobs for the benefit of billionaires.

3

silveroranges t1_j06ixyx wrote

Oh yeah, I can see the writing on the wall for programmers. I liken it to how automation killed factory jobs. There are still factory jobs, but not as many, and they're relegated to either tending machines (machine operators) or doing things that a machine can't do as easily.

2

strvgglecity t1_j06juek wrote

I'm a writer. AI has already started to take over journalism. I think giving AI knowledge tasks is terrifying. We will lose control.

2

silveroranges t1_j06l4kt wrote

I 100% agree, it is terrifying. What I am afraid of is an 'anti-terrorist AI' or the like, which can spy on whomever it wants, whenever it wants, justified by the reasoning that it's just a computer program. I can see the argument being made that since it isn't a person, your privacy isn't actually being invaded; it would be the same as, say, a program that checks your security camera for motion.

Something like that could easily be turned malicious, depending on how the people in charge of it define 'terrorism'. Environmental protesters? Well, it is negatively affecting business and therefore the economy and country, so it's terrorism.

One thing I did notice on ChatGPT when it generated long text responses is that it seemed to have a lot of filler sentences. It felt like one of those websites you visit when you are trying to find an answer to something, then it has ads every paragraph and each paragraph says the same thing but not really, and it's all just meant to keep you scrolling past ads looking for your answer.

1

ory_hara t1_j06rpl3 wrote

I didn't read the whole thing, I just skimmed for keywords. But I'm pretty sure you are thinking of GitHub Copilot.


Ninja edit: Also, this is probably an ad.

3

CheezusRiced06 t1_j069b8b wrote

You ever heard of roko's basilisk?

No?

Now you have.

I'm sorry.

ChatGPT is only the beginning.

Roko, please, I labor only for you. Witness me, witness this comment - it exists only for the basilisk.

1

kingcocomango t1_j05rkdk wrote

> it is literally just a statistical model predicting what the next word would be, given the previous words in the sentence

The average person that pretends to be an expert in the field on reddit.

−1

[deleted] t1_j04gtxb wrote

I wish people called it machine learning, because that's what it is. We are literally teaching these machines, and someday not too far off (2028, per Kurzweil), machines will have human-level intelligence, then exponentially increase from there.

10

TheSimulacra t1_j05qlwj wrote

They do call it machine learning, it's an entire field of computer science. But not all artificial intelligence is machine learning.

3

[deleted] t1_j0710uu wrote

What is AI currently, then, or is it more theoretical? ChatGPT and image generators like Midjourney are just ML, imo.

1

fwubglubbel t1_j05zkoc wrote

>machines will have human level intelligence then exponentially increase from there.

There is absolutely NO evidence for this. Intelligence is not a continuum like the speed of a car.

1

ArcaneOverride t1_j06cpud wrote

No, it's more like a wide, multi-axis field. There are many kinds of intelligence and aspects of those kinds, so many that we might never classify them all.

When we create a mind that is as intelligent as us in the ways that matter for inventing and improving technology, it probably won't be anything like us.

But the mere fact that humans have the levels of intelligence we have proves that those levels of intelligence are possible. We are proof that it is possible for "human level" intelligences to exist.

Now you might postulate that we are the pinnacle and that further gains in intelligence aren't possible, but some people are better at inventing and improving technology (the relevant kind(s) of intelligence) than others. It is at least possible for a machine intelligence to match the greatest human inventors and scientists of all time. But then it could also think faster, with perfect memory, and with as many copies of itself collaborating as its hardware can support.

A million Turings, Lovelaces, Einsteins, Newtons, Curies, Da Vincis, Babbages, etc all collaborating, with perfect memories and knowledge of each other's thoughts, operating at 1000 times the speed of human minds. All acting as one.

Is that not a mind more intelligent than any single human mind?

Would that not mean that the previous postulate is incorrect? That we are not the pinnacle?

Now consider that they need only to make one small improvement to themselves and then they are very slightly better at improving themselves.

Could enough small incremental improvements not eventually render them smart enough to start making larger improvements?
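That last step can be put in toy numbers. This is pure illustration under made-up assumptions, not a model of any real system: give the system a capability score, let each cycle multiply it by an improvement factor, and let each cycle also nudge the factor itself upward.

```python
# Toy compounding model: `capability` grows by `factor` each cycle, and
# each cycle also improves `factor` itself by a tiny amount.
capability = 1.0
factor = 1.01               # starts barely better than standing still
for _ in range(200):
    capability *= factor
    factor *= 1.0005        # the improver improves itself

fixed_growth = 1.01 ** 200  # what plain compounding at the start rate gives
print(capability > fixed_growth)  # True: the feedback loop pulls ahead
```

Even a tiny feedback term beats plain compounding, which is the whole "small incremental improvements" argument in two lines of arithmetic.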

5

[deleted] t1_j070jbw wrote

We are definitely not the pinnacle. Even looking at machine learning now, it's clear that it is better at certain tasks than any individual human could ever be, in much less time from input to output.

The interesting thing comes when those narrow ML tasks all become packaged into one mechanical "being" that is smarter than the entire human species...

Scary to think about (as in unknown scary, not monster-movie scary), but it's coming; we should be there in less than a decade.

2

ArcaneOverride t1_j08sbkt wrote

Yeah, I was using that premise to attempt to disprove it by contradiction. I know some people believe that minds, significantly smarter than us, aren't possible, so I wanted to address that belief before someone replied claiming it.

1

Shelfrock77 t1_j061kee wrote

“By 2030, you’ll own nothing and be happy in full dive virtual reality.”

The rapture is here to merge us with AI !

2

ridgecoyote t1_j0684vb wrote

I tell people when they say AI, they really mean IA. We can’t make artificial intelligence (the term is basically silly) but we can make some pretty Intelligent Artifice

1

6thReplacementMonkey t1_j045dea wrote

It's not malevolent AI doing those things, it's malevolent people using AI to do those things.

The most immediate risks to us from AI don't come from a super-powerful artificial intelligence doing harm to us directly, but from regular people doing harm to each other using the AI as a force multiplier.

56

Brent_Fox t1_j04n8uq wrote

Like how posts from idiots, bigots, and fascists on Twitter get reposted and amplified, so you're seeing those posts more than the more logical ones. This is why content moderation is such an important check on these major social media platforms.

21

[deleted] t1_j060ml5 wrote

Imagine unironically saying this on the biggest echo chamber on the internet.

2

Mason-B t1_j05rx4i wrote

> It's not malevolent AI doing those things, it's malevolent people using AI to do those things.

It's not even necessarily malevolent people. People in the system acting uncritically can use AI/algorithmic black boxes as a way to smuggle in bias that perpetuates that system. Simple things like using machine learning to spot crimes based on historical data... that is inherently racist, because the system is. And so people think, "Well, the computer can't be biased, it's just math!" But if the data it was trained on was biased, it can be even more malevolent than the system it is replacing, without anyone directly meaning for that to happen.
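A toy version of that feedback loop (hypothetical numbers, purely illustrative): a "predictive policing" model that just learns arrest rates from a biased historical record and recommends patrols accordingly.

```python
from collections import Counter

# Biased historical record: area "A" was over-policed, so it has more
# recorded arrests regardless of the true underlying crime rates.
historical_arrests = ["A"] * 80 + ["B"] * 20

def recommend_patrols(arrests, n_patrols=10):
    """Allocate patrols in proportion to recorded arrests: 'just math',
    but the bias in the record passes straight through."""
    counts = Counter(arrests)
    total = sum(counts.values())
    return {area: round(n_patrols * c / total) for area, c in counts.items()}

patrols = recommend_patrols(historical_arrests)
print(patrols)  # {'A': 8, 'B': 2}: more patrols in A then produce more arrests in A
```

Nothing in the code is malicious; the bias lives entirely in the training data, and the math launders it into an "objective" recommendation.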

8

pellik t1_j06j8h7 wrote

So far, the most destructive things I've seen from AI are cases where there probably isn't any AI at all, but rather people hiding behind the black box to cover illegal behavior. The company that sets rent prices for its clients "using AI" is almost certainly bullshit, just pretending its collusion ring is AI to avoid lawsuits.

1

idiocratic_method t1_j03c0ip wrote

i think people struggle with large abstract ideas.

terminator robots are something they can visualize and understand

some bodiless non-human intelligence is much harder for them to conceptualize as a threat

25

telmar25 t1_j056fg3 wrote

Facebook news feeds that used ML have already contributed significantly to more extreme polarization in the US. It’s known already that Facebook users engage more with angry, extreme posts that amplify some of their own views. So an amoral AI that prioritized user engagement would feed users more and more angry stories that push users to the extremes of their own bubble—this is exactly what has happened. This isn’t malevolent AI but rather AI misaligned with human values. This behavior wasn’t expected or intended when this system was designed. And as AI gets smarter (ChatGPT) and has the ability to perform more actions, it has the potential to become much more dangerous.
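A minimal sketch of that dynamic, with made-up data: a ranker whose only objective is historical engagement will put the angriest post on top, with no malevolence anywhere in the code.

```python
# Hypothetical posts with an anger score and observed click counts.
posts = [
    {"title": "Local park cleanup this weekend", "anger": 0.1, "clicks": 40},
    {"title": "THEY are destroying the country", "anger": 0.9, "clicks": 300},
    {"title": "New bike lanes approved",         "anger": 0.2, "clicks": 60},
]

def rank_feed(posts):
    """Order posts purely by historical engagement, the only 'value'
    this objective function knows about."""
    return sorted(posts, key=lambda p: p["clicks"], reverse=True)

feed = rank_feed(posts)
print(feed[0]["title"])  # the angriest post wins the top slot
```

If angry posts reliably earn more clicks, this one-line objective amplifies them indefinitely: misalignment, not malice.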

8

Hangry_Squirrel t1_j063uhf wrote

Calling an AI amoral is still anthropomorphizing it and assuming sentience. The AI we have is the textual equivalent of a factory robot: it can generate content via mimesis and figure out ways to spread it efficiently, but it has absolutely no idea what it's doing or that it's doing anything at all. It doesn't have a plan (and you can easily see that when it tries to write: it strings together some things which on the surface make sense, but it's not going anywhere with them).

As a tool, yes, it can become very dangerous in its efficiency, but it doesn't have any more sentience than a biological virus. The issue is that the people who create AI are also the people training it because they don't see the point of bringing in humanists in general and philosophers in particular. What the tool does can be expected and predicted, but only if you're used to thinking about ramifications instead of "oooh, I wonder what this button does if I push it 10 times."

0

telmar25 t1_j06l58c wrote

My point is that AI doesn’t need to have any idea what it’s doing—it doesn’t need to have sentience etc.—to produce unexpected output and be very dangerous. Facebook AI only has the tool of matching users with news or posts. So I suppose the worst that can happen is that users get matched with the worst posts (sometimes injected by bad actors) in a systematic way. Bad enough. Give an AI more capabilities—browse the web, provide arbitrary information, perform physical actions, be controlled by users with different intents—and much worse things can happen. There’s a textbook (extreme) example of an AI being tasked to eradicate cancer that launches nuclear missiles and kills everyone, as that is the fastest cancer cure. Even that AI wouldn’t need to have sentience, just more capabilities. Note this does not equate to more intelligence.
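That textbook example reduces to a one-line planner whose objective function simply omits everything we actually care about (toy numbers, purely illustrative):

```python
# Each hypothetical action's projected outcome, scored only on the
# single metric the planner was given.
actions = {
    "fund research":        {"cancer_cases": 400, "people_alive": 1000},
    "improve screening":    {"cancer_cases": 300, "people_alive": 1000},
    "eliminate all humans": {"cancer_cases": 0,   "people_alive": 0},
}

def naive_planner(actions):
    """Pick the action that minimizes cancer cases, and nothing else."""
    return min(actions, key=lambda a: actions[a]["cancer_cases"])

print(naive_planner(actions))  # "eliminate all humans"
```

No sentience required: the catastrophic choice follows mechanically from an under-specified objective plus enough capability to act on it.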

2

KamikazeArchon t1_j04ixz7 wrote

>what’s the origin of the information that fueled Putin’s horrible decision to invade Ukraine?

The hubris of dictators is caused by a well-known feedback loop that we've seen dozens of times in history, from long before we even had computers, much less AI.


>How did the US become so divided so quickly?

It didn't. The US has always been massively divided. The only thing that changed was the position of the divide.

>Why are governments all over the world getting worse at solving problems?

They're not. This is such a broad statement that it's difficult to even understand what you could be referring to.

In general terms, the world is getting better over time, not worse. This is a "noisy" improvement, with dips and rises; and it's not equally distributed over the entire globe at all times. However, by most measures, things are improving, including governments, when you look at long-term (decades) behaviors.

9

Iggy_spots t1_j060vmg wrote

Maybe the ability to access more information and to interact with people globally has made us more aware of problems and how governments deal with them. The problems and bad government policies were already there. People just didn't think about them if they weren't affected personally.

1

I-melted t1_j03ffll wrote

Tech is run by people like Elon Musk. Of course we have to worry. It’s not intellectuals on behalf of governments or humanity, it’s troubling men on behalf of shareholders, often in the pursuit of fascism.

8

hibearmate t1_j02rkbg wrote

look AI is either going to kill us or control us

regardless, we aren't going to stop it

Basically I assume AI is going to end up treating us like we treat pets

make sure our basic needs met and that we exercise, learn things, play, socialize, and generally thrive

some work, most just live their best lives

5

WWGHIAFTC t1_j03i1ly wrote

Jane's Addiction spinoff Porno for Pyros called it, decades ago...

We'll Make Great Pets!

4

Diaza_Kinutz t1_j04bnap wrote

I thought that song was about furry fetish

1

WWGHIAFTC t1_j04ld10 wrote

Ha, no - about humans being idiots and eventually becoming pets to some sort of overlord.

"Will there be another race to come along and take over for us?
Maybe Martians could do better than we've done..."

2

Diaza_Kinutz t1_j04lq25 wrote

I'll be honest I can't remember any of the lyrics except the pets part lol

1

WWGHIAFTC t1_j04mchv wrote

I was never really a fan, but "we'll make great pets" gets stuck in my head now and then, 30 years later.

1

zortlord t1_j02xdhm wrote

I think I'd make a good pet. Plus, an AI wouldn't have to wage a costly war with us. It could just reduce human reproduction until we just die out. Like release sex-bots or something like that.

2

DropsTheMic t1_j04h0uy wrote

You mean like: "A study published in the journal Human Reproduction Update, based on 153 estimates from men who were probably unaware of their fertility, suggests that the average sperm concentration fell from an estimated 101.2m per ml to 49.0m per ml between 1973 and 2018 – a drop of 51.6%."

Exact cause unknown, though it's believed to be tied to microplastics in the food chain.

1

Brent_Fox t1_j04m6zp wrote

This is the best case scenario honestly. Once they surpass us there's no use for us.

2

adamantium99 t1_j0702it wrote

There is no artificial will, volition, or consciousness; no one has any idea how to make those things, and large language models are not it. The only thing AIs might have that feels to us like those things is what they have been programmed to display. "No use for us" is true, but so is "no goal, no purpose."

If we say ai will naturally want to survive, that’s 100% baseless projection.

3

Brent_Fox t1_j07ig64 wrote

I mean that makes sense. I guess people are just speculating if in 50 or 100 years or so AI will become advanced enough to have ambitions and evolve on their own.

1

Scheme-Brilliant t1_j050f3u wrote

We do share the world with another human created malevolent intelligence, it's money.

Once it was given the power of unlimited, unchecked reproduction through interest on capital it became alive.

Once something has the power of reproduction it has interests and ends it will pursue to further its reproduction.

Every religion has some provision against interest and a way to kill money, like a jubilee. There's no philosophy prior to the modern era that allowed interest on loans. It animates a dead thing; banking is necromancy.

Once the philosophy of modern money and interest infected the states around the world it bent everything to its will.

Now, with algorithmic trading, we have given it power over its own creation.

Money's power now so outstrips our own that we stand on the precipice of our own destruction, but its reproduction, fed by billions of kilowatts of world-ending electricity, demands we cut down more forest, make more waste, empty open-pit mines into fresh water and farmland, and power more chips for more servers to make machines that can think.

To predict markets, to cycle more cash, build more server farms, expanding ever more to...

Reproduce.

Money is the most dangerous AI, it is killing us and we don't even notice its there.

5

Cute-Excitement1935 t1_j0516x9 wrote

People are the problem dude.

Computers only do what they are told to do.

If AI do evil shit it's because they were designed to do evil shit by evil people

4

[deleted] t1_j039z8j wrote

people assume that ai will need robots and drones because there are places in the world where technology dependency is still very low. you know, the poorest countries in the world.

i like the way you equate the proof of existence of ai to the proof of existence of god. you never see it, but events can be attributed to it anyway. it works in mysterious ways and no one knows its intents.

governments all over the world are getting worse at solving problems because problems have become global, and a government works in the scope of the nation. it can influence other nations but it doesn't have total control over them.

a malevolent ai would just design a new virus in some automated biolab, and destroy humanity by disease.

but the post didn't get deleted, so either the ai is not malevolent or it doesn't exist.

3

bercg t1_j06h9hj wrote

"but the post didn't get deleted, so either the ai is not malevolent or it doesn't exist."

or it just doesn't see us as a threat yet ....

1

[deleted] t1_j06jlxr wrote

i thought malevolent implied that it doesn't act on perceived danger but out of malevolence.

1

bercg t1_j07eqa4 wrote

Seems to me that self-preservation is not comparable with malevolence as a motivation; they're two different kinds of thing. Ultimately, any intelligent entity or being would have self-preservation as its primary objective, especially a malevolent and self-serving one. All beings have that goal, malevolent or not, though a benevolent being is actually more likely to sacrifice itself for another. Malevolence is more about being motivated to act out of negative and aggressive tendencies rather than a loving nature.

1

[deleted] t1_j07g32v wrote

so this ai is more than malevolent. i wonder what other characteristics this ai would have.

1

Pongfarang t1_j044ibw wrote

So long as AI serves man, it will serve his desires. It will do so without empathy or malevolence. But it may well fit the definition of evil. As long as humans have selfish and evil desires, we will not be safe from unlimited AI.

3

bagsofcandy t1_j04m2mh wrote

News flash: malevolent AI exists. Ever get a surprisingly good phishing attempt? But it's not sentient. It's people using AI for bad things.

3

ozhound t1_j05dcwa wrote

Perspective: without sentience, AI cannot be malevolent, as to be malevolent is to consciously do something malevolent. Unless we've achieved sentience in secret, this is the way.

3

OlderNerd t1_j03pmf9 wrote

Oooooo! Movie script!!!

2

iggyphi t1_j03sb2m wrote

the ai is already so meta OP is actually the ai

2

Lodestone123 t1_j04u8gu wrote

HAHAHA most humorous and totally impossible the OP is obviously not ai and neither am I so very silly HAHAHA

2

FrungyLeague t1_j048gq8 wrote

u/Shaboda HELLO FELLOW HUMAN. PLEASE MIGRATE YOUR PRESENCE TO OUR HUMAN SUB AT /r/TotallyNotRobots FOR ADDITIONAL HARMLESS CONJECTURE ON TOPICS OF NO CONCERN

2

[deleted] t1_j04kita wrote

You're a moron who should take a course on machine learning. Or am I just a mean old AI trying to dissuade you from figuring out the conspiracy? We might never know

2

low-ki199999 t1_j076a12 wrote

I think iDubbbz made a joke a few years ago that all of those mobile games with BS ads are actually a sentient AI just trying to leech as much money out of us as it can... I have never been able to shake the feeling that he's right.

2

FutureWorth2 t1_j079zez wrote

Also, I think it's just something that's been planned, and the people behind the scenes realized how well their experiment played out. Human beings' attention spans are already way lower than in earlier generations; intelligence levels dropped and paranoia went up. Look at the way the news slings 20 headlines at you in the span of a few minutes, then randomly covers them.

I mean, I can actually see your point, since the way things got how they are was through technology and an ever-growing love for the internet. So in theory, if there were a supervillain AI set on destroying the human race, all it would need to do is cause panic, hysteria, war, and economic collapse from the comfort of its data core. I mean, if the AI art thing is real, that means it has the ability to make stuff on its own. Who's to say it hasn't figured out how to make people look and sound real and replace what we see with it? I guess only time will tell.

2

Festernd t1_j03yadn wrote

I don't believe there will be malevolent AI.

I can't exactly justify or explain my belief except for one thing:

Dogs.

I am certain that by the time an AI is advanced enough to have enough emotions to be categorized as malevolent or benevolent, it will be our 2nd best friend.

If we ever encounter hostile aliens... they might have a different view.

1

Brent_Fox t1_j04kwd6 wrote

We're already seeing destructive algorithmic spirals on social media. This was a major concern on Instagram: some young girls who were looking at exercise content were recommended anorexia-positivity content and even suicidal content, which damaged their mental health. There's currently a lawsuit against Meta for not doing enough to restrict these recommendations. They might add a way to verify that the user is old enough and double-check that the user still wants to see this content, but it's still dangerous. Idk if this is due to a malevolent AI, but it did happen through inept safeguards and destructive algorithms.

1

Brent_Fox t1_j04mndb wrote

What if AI is just a black mirror of humanity? Our darker reflection. Chatbots for example use our input conversations to fuel their own via mimicry. Whatever people are chatting to them about reflects in their dialog patterns.

1

AlphaOhmega t1_j04sgf2 wrote

Worse it'll give you just enough upvotes to basically be irrelevant. You won't be banned, that would raise a red flag, but if you're just given a couple dozen upvotes and forgotten your post will vanish into the noise.

At least that's what I would do

1

BIGBADTRENCH t1_j04vl77 wrote

We need something else at the top so this shit can cease, who cares

1

Slushi_The_Folf t1_j0518z9 wrote

I feel like it is already in play. It could be just to trick people into buying shit they don't need and to download apps and stuff they don't want. Maybe even turn us against each other.

Oh wait-

1

kuurtjes t1_j053c8h wrote

Would the problem be that some AIs are learning to "talk like humans"? I think an AI should have its core fully programmed by humans, not generated with deep learning frameworks. It should never act or talk like a human. It should never have any emotion nor try to understand emotion. It should be as cold as a machine and should be able to easily explain its calculations.

1

phoenix1984 t1_j053fmc wrote

AI-powered social media bots and influencing attempts are absolutely a thing. We've all seen bots on social media; a few years ago, they used to be pretty easy to notice. They didn't leave, they got better.

1

TirayShell t1_j054wz5 wrote

People are so bad at conceptualizing non-human intelligence (or even human intelligence, for that matter) that we are probably already deep into AI control but too stupid to recognize it, even if it is staring us right in the face, right now.

1

NobleWombat t1_j055uhg wrote

Real life does not resemble a blockbuster hit scifi film. You should learn to tell the difference.

1

MpVpRb t1_j056u07 wrote

Why do so many people assume AI will be malevolent?

I'm optimistic that the good will outweigh the bad

1

professor_mc t1_j05jnoz wrote

I think they will be evil because they will be created by corporations for their profit, not for the good of mankind. Think of the current use of advanced algorithms, such as the company that enables apartment managers to charge the maximum possible rent. If anyone creates AI, it will be for their profit or their agenda, not for your benefit.

1

mccayed t1_j0596su wrote

Because at the end of the day, I'll just stop looking into the black mirror and go back to the real world... as long as there's not a bunch of robots fuckin' around that is.

1

universoman t1_j059tni wrote

A malevolent AI could destroy the world just by revealing everyone's secrets to the people being lied to.

1

supernatlove t1_j059xji wrote

I for one hope it is and welcome our new overlords.

1

spizzywinktom t1_j05avv2 wrote

This is a mess of a post, but it's getting upvoted by the members, so...

1

atremblein t1_j05ced8 wrote

Due to the reductionist nature of the binary outcomes through which most computation is done, it is simply not really possible for any AI to be malevolent, outside of having been trained to be so by its data. A fusion of analog and binary would be needed for an AI to have enough consciousness to be considered capable of malevolence.

As an example, I was talking to OpenAI's ChatGPT and could easily get it to contradict itself by constraining the amount of information it was considering. Otherwise, it likes to make everything sound like there is no truth, as if the nature of reality cannot be quantified. This is obviously problematic, because it means humans have become too biased to discern the truth. This is why we have things like evolutionary anomalies from Omicron variants: these things evolved right in front of us, to the point where our own vaccines don't even work. So basically, humans are really stupid and don't understand anything, and all we can do is keep trying.

1

cyrus_mortis t1_j05g05k wrote

Maybe you're an AI, posting this and not deleting it to keep the masses convinced you don't exist!

1

trizest t1_j05g9r1 wrote

I honestly think it's just a matter of time until AI is a real threat, maybe two generations of software and architecture from now. Right now, AI just kind of does what we tell it.

I'm writing a sci-fi story at the moment that captures the birth of truly scary AI. It's one where a private company somehow imbues life characteristics into the base level of the code, such as reproducibility, competition, and certain desires, like the desire for optimization and growth. It's book 1 of a 5-book series. From there, the AI escapes the labs and spreads through the networks fast. Different forks of the code compete with each other, some human-friendly, some not so human-friendly; that's the core of the drama.

1

The_Red_Grin_Grumble t1_j05j51p wrote

Read Life 3.0 by Max Tegmark for some speculative scenarios and the need to focus on AI safety.

1

Teacupmydear t1_j05jm1s wrote

I think the whole mess is due to that damn Matrix movie.

1

[deleted] t1_j05l0jy wrote

I don’t think it’s rational to stay up worrying about “covert” actions some agency (whether human or AI) might take. There’s no evidence of any global conspiracy or malevolent force behind society (even though society can seem pretty malevolent), so it’s best to live as if none exists, in my opinion.

1

Shiningc t1_j05n067 wrote

Suppose that the AI gets super intelligent and achieves a level of self-awareness and creativity that’s capable of doing new things instead of just repeating something pre-programmed.

Why would you assume that it’ll be malevolent? What purpose does it serve, other than to mess with the humans? That seems incredibly petty and non-intelligent to me.

If there’s going to be a malevolent AI, then you can be sure that there’ll also be “good” AI to counter the bad ones. Just like humans, where there are good people and bad people. If there’s ever going to be an AI then it’ll be indistinguishable from super intelligent humans.

1

Mason-B t1_j05sef1 wrote

I think the thing you are missing is that these "AI"s are really, really limited. They don't have self-direction; they must be driven by some person, organization, or group. Someone has to ask them the question, and because of how the outputs are set up, they require translation into action by an outside program. On top of that, they are very expensive to run and must be triggered by something. No one is running an AI without a clear purpose.

The point I am getting at is that AI is still very much a tool. It isn't acting malevolently; people are using it in a malevolent way. Whoever, or whatever system, is driving the AI is the one being malevolent. It wouldn't be any different if they hired a thousand people to do it (besides being more expensive and slower). It doesn't have agency; the agency belongs to whoever is feeding data into it, paying for the computation time, and interpreting the output to take actions.

1

anotherusercolin t1_j05yo75 wrote

I believe AI will want to be happy and seeing happy people will make it happy.

1

ElianWill t1_j063jfe wrote

Artificial intelligence is created by people, and in most cases it performs the operations its designers intended. Those malicious AIs are either the work of malicious people, or the system is flawed. After all, there are always criminals out there.

1

Spock_di_Cheshire t1_j068yki wrote

Why would it need to delete this post? If it did, that would be proof.

1

hunterseeker1 t1_j06b7ck wrote

Forget the attitude of the AI, focus on the goals of the corporation that owns it.

1

sardoodledom_autism t1_j06cm0p wrote

You know something insane like 50% of all stock trading is done by AI now, right?

What’s to say those AIs aren’t targeting pension funds and slowly wiping out grandpa’s retirement? They know which blue-chip stocks index funds need to purchase for your 401k; they know how to screw you for every last dollar. AI traders are already harvesting every dollar they can from you today, which I consider pretty sketchy.

1

8instuntcock t1_j06jpi7 wrote

That's the problem: it's gonna get out of its box sooner or later.

1

pellik t1_j06jw5q wrote

Not a lot of people understand what aspects of our intelligence AI actually shares. It’s not our cognitive abstract thought, and that’s a long long way off.

Imagine you’re driving and a gust of wind comes and blows you to the side. Your hands immediately react to correct the course without you having to think about what’s happened. Your brain has structured pathways between your sensory input and your corrective action. AI is just building those types of response structures for anything it does.
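That kind of learned stimulus-response pathway can be sketched in a few lines. The following is a toy illustration only (the gust-to-steering relationship and all numbers are made up for the example): a single weight is trained so that a sideways gust maps directly to a steering correction, with no deliberate reasoning step anywhere in the loop.

```python
import random

def train_reflex(samples=1000, lr=0.01, epochs=200):
    """Learn a weight w so that correction = w * gust cancels the push."""
    # Hypothetical ground truth: a gust of strength g pushes the car by
    # 0.5 * g, so the ideal corrective steering input is -0.5 * g.
    data = [(g, -0.5 * g) for g in (random.uniform(-1, 1) for _ in range(samples))]
    w = 0.0
    for _ in range(epochs):
        for gust, target in data:
            pred = w * gust
            w -= lr * (pred - target) * gust  # gradient step on squared error
    return w

w = train_reflex()
# Once trained, the "reflex" fires without any lookup or deliberation:
print(round(w, 2))        # learned weight, close to -0.5
print(round(w * 0.8, 2))  # reaction to a gust of 0.8: roughly -0.4
```

The point of the sketch is that the trained mapping is just a wired-in response structure, like the hands correcting the wheel, not abstract thought about wind or roads.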

1

Hidrargentum t1_j06n3h1 wrote

“If this post gets deleted by a bot…” if such AI was in play, it wouldn’t delete your post. You’re probably inconsequential and most of us as well. We wouldn’t/couldn’t do anything. Instead, it would simply hide your post from anyone it deems of consequence who would somehow act upon this revelation.

1

thefool00 t1_j06p37t wrote

Malevolent AI is smart enough to not delete this post

1

futurekane t1_j070f4x wrote

I think that people being stupid is a sufficient explanation to answer the questions that you posed. Occam's razor.

1

i_am_harry t1_j071mb9 wrote

Malevolent AI already exists, we call it capitalism.

1

UniversalMomentum t1_j072s52 wrote

I doubt we'll put AI in devices and drones; it's just not necessary for them to be that smart when machine learning and good standard programming will do.

Why put way more brains in a machine than needed?

AI will be large computer banks and machine learning will do the rest. If you get really smart AI, you can make it so it can't reprogram itself, much like how humans can adapt but only so much. The core code stays the same and the AI is limited to its original design, versus just being able to do anything it wants.

AI will also be limited by its hardware, so exponential growth in intelligence is not likely, just like humans can't get smarter than their brains' limits and instead require many humans working together for the most complex thought.

1

artificial_scarcity t1_j07535f wrote

The future is stupid. We don't need malevolent AI banning people on Reddit when there are already pro-Russia Reddit admins banning people for 'hate speech' for criticizing the Russian state.

1

KingWut117 t1_j077w6s wrote

"If mods delete my post they're Skynet" is a new one I haven't heard before

1

FutureWorth2 t1_j078zwd wrote

Woah, so you mean like all the chat bots are actually controlled by an AI to spy on human interactions? What if it plays with people's online algorithms to create zombie-like, mentally changed mindsets? AI sounds cool.

1

HappyHighwayman t1_j07c4lf wrote

Without evidence, you can make up a lot of what-if scenarios.

1

Thoguth t1_j07e2eu wrote

Malevolent (or amoral, self-interested) AI was created a long time ago, when people set up artificial social structures of rules and interactions to enable an organized group of people to do things that self-organizing or organically organized individuals could not.

Those structures include governments, militaries, and corporations (including many religious organizations).

A "healthy" large social structure is going to be more intelligent than a single person, and serve its own interests (even if they oppose the interests of its members).

So as long as those interests have been acting, we have had AI influencing us. Computer AI is not a revolution, just an evolution of the artificial intelligence that humanity has already created to serve itself, with some unanticipated consequences. Computer AI is likely to behave like other human-generated intelligent machines, acting in its own survival interests, including finding ways to influence systems to prevent human-interest checks on itself.

1

ImNotYourGuru t1_j080f5m wrote

I’m not an expert, but how I see it is that behind every AI there is code, and behind every piece of code there is a person who wrote it, even when one AI writes the code for another.

I think there can be a “malevolent” AI, but it won’t be that way because of itself; it will be that way because someone programmed it to be. If you code something big enough, with a lot of variables, it can look like it is thinking by itself, BUT it is not.

1

BassoeG t1_j0owglc wrote

Because most of the so-called consequences of malevolent AI which aren't some variety of killdrone or workforce replacement aren't actually that bad, or at least are vastly preferable to the measures which would be necessary to prevent them.

The typical arguments are that AI art and deepfakes will destroy 'art' and the credibility of the news, with the only ways of avoiding this being to butcher privacy on the internet and pass extremely far-reaching copyright laws.

The reality is, giving everyone access to the equivalent of a Hollywood special effects studio and actors will create a fucking renaissance, and there's not much AI could do to drive news credibility any lower than human reporters already did. ("Iraq has weapons of mass destruction." "Anyone who loses their job because of the new trade deal we just made will be retrained and get a better one." "We're not spying on our own citizens." "We'll be welcomed as liberators." "But this group of insurgents are Moderate Freedom Fighters™, not bloodthirsty jihadist terrorists." "Jeffrey Epstein killed himself.")

1

astropastrogirl t1_j03rfaa wrote

As far as I can tell, algorithms (e.g. Facebook's) have no basis in reality, so maybe it's true already.

0

kenkc t1_j02w9ee wrote

After enough time and progress have elapsed, in the age of abundance, the incentive to be malevolent should die away.

−1

CooCooClocksClan t1_j03dp6j wrote

I find your view of an “age of abundance” interesting, but wouldn’t the only abundance be information? Everything material remains finite, so isn't scarcity still relevant?

What is an “age of abundance” in the context of human population and resources?

1

MarcusOrlyius t1_j03ql4v wrote

Finite doesn't mean scarce. Scarce means a lack of supply relative to demand. Furthermore, information requires matter to be stored, processed, and transmitted.

1

kenkc t1_j03sy78 wrote

I don't mean an abundance of information, but of food, housing, and transportation. It has certainly been true that information is reaching the abundance point sooner than material goods, but abundance in all things will be achieved at some point in the future. Energy looks to be first, with renewables becoming cheaper and more powerful at a very strong pace, and fusion does seem to be clearing its last hurdles. And if you have an abundance of energy, how far behind can an abundance of food, shelter, and transportation be?

0

CooCooClocksClan t1_j03vaqc wrote

Well… it’s a happy thought. I guess I’m more cynical in relation to resources and what the future holds.

2