Submitted by RamaSchneider t3_10u9wyn in Futurology

Went over to ChatGPT and decided to ask for things I very much suspected I wouldn't get: specifically, instructions on committing premeditated murder for money. I'm actually quite fine with the refusal, and no, I didn't actually want or expect such instructions.

(As an aside: in the late 1970s and early '80s I had a book, "The Poor Man's James Bond," that would now be a very scary AI training item.)

So I left with this question: "What kinds of information will you NOT provide to me?" To which ChatGPT responded:

>As an AI language model, I am programmed to follow ethical and legal guidelines, therefore I will not provide any illegal, harmful, or inappropriate information, such as:
>
>Personal information of individuals without their consent
>
>Promotion of hate speech, violence, or discrimination
>
>Information that is likely to cause immediate harm to someone
>
>Content that is sexually explicit or obscene
>
>Information that violates copyrights or trademarks
>
>Instructions for illegal activities or unethical behavior.

But doesn't this seem awfully 1984ish? Too dated for you? How about when the governor of a large state starts removing historically accurate information from our children's educational experiences in favor of lessons designed to advance a privileged cultural view?

I'm not being political on this, and would like to take it a step further: what happens when some AI machine that we depend on decides we no longer need to or should know a set of information?

127

Comments


Sirisian t1_j7ar1ic wrote

Your premise is flawed with regard to ChatGPT: it's OpenAI - a company - making the decisions about what to filter, not an AI. Corporations self-censoring products to be PR-friendly isn't new. It's not even an overly advanced filter; it detects whether a response is negative/gory/etc. and hides it (unless you trick it). A lot of people attach meaning to ChatGPT's responses when there isn't any. It can create and hold opposing viewpoints on nearly every topic. Your issue, like a lot of AI concerns, is with how companies will implement AIs and what biases might exist in training them.
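A rough sketch of that kind of post-hoc filter (the scoring function, terms, and threshold here are all invented for illustration, not OpenAI's actual pipeline):

```python
# Hypothetical post-hoc moderation filter: the model generates freely,
# and a separate classifier decides whether to show the output.
# The scoring function and threshold are made up for illustration.

def harm_score(text: str) -> float:
    """Stand-in for a learned classifier scoring text from 0 (benign) to 1 (harmful)."""
    flagged_terms = ("gore", "murder for hire", "build a bomb")
    return 1.0 if any(term in text.lower() for term in flagged_terms) else 0.1

def filtered_response(model_output: str, threshold: float = 0.5) -> str:
    # The filter sits outside the model - a company policy choice, not the AI "deciding".
    if harm_score(model_output) >= threshold:
        return "I can't help with that."
    return model_output

print(filtered_response("Here's a cheerful poem about spring."))
print(filtered_response("Step one of murder for hire is..."))
```

The point of the sketch: the generation and the censorship are separate components, and the second one is ordinary corporate policy code.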

There's no real way to please everyone in these discussions. An unrestricted system will just output garbage from its training data. Some users claim they want to see that, even if it hurts the company's brand. People aware of how these systems work understand that training on the Internet includes misinformation that can be amplified. Filtering garbage from training can take a while to get right.

107

hesiod2 t1_j7feg3e wrote

This can be solved by having a default setting that the user can override. For example, by default Google hides sexually explicit material, but the user can easily change that setting. Adults then make their own decisions about settings and decide for their children what they want them to see.
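A minimal sketch of that default-plus-override pattern (field names invented for illustration, analogous to a search engine's safe-search toggle):

```python
# Hypothetical conservative defaults that a user can explicitly override.
DEFAULTS = {"hide_explicit": True, "hide_graphic_violence": True}

def effective_settings(user_overrides: dict | None = None) -> dict:
    # Start from the safe defaults; apply only what the user explicitly changed.
    return {**DEFAULTS, **(user_overrides or {})}

print(effective_settings())                          # child account: safe defaults
print(effective_settings({"hide_explicit": False}))  # an adult opting out
```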

According to Sam Altman: “we are working to improve the default settings to be more neutral, and also to empower users to get our systems to behave in accordance with their individual preferences within broad bounds. this is harder than it sounds and will take us some time to get right.”

Source: https://twitter.com/sama/status/1620927984797638656?s=46&t=iyZErcajcVCp5w0iAm_08A

3

orincoro t1_j7fr90k wrote

Those settings are also driven by machine learning. You’re thinking in a linear way, but neural networks don’t work like that.

All of this is nonsensical. Altman still has to define what "neutral" means, and that's a value judgment, not an objective characteristic. What's neutral to you isn't neutral to me. The bloody-minded technocracy of these companies is utterly fucking maddening. They'll place human decision-making - and the definitions of morality and ethics themselves - in the hands of programs. And believe me: the people who will benefit are the people who own and control those programs.

1

orincoro t1_j7fqwud wrote

Absolutely disagree. The purpose of neural networks is to establish connections in an organic way. You can use certain heuristics to get the machine to form connections in certain ways, but your ability to guide its learning is limited by the fact that you will never know in detail what all the nodes are for or how they actually work. There is no possibility of analyzing a neural network the way we can understand machine code.

This is why neural networks can degrade if not trained properly. Companies like Google and Facebook don’t have as much control over their systems as they would like you to think.

2

RamaSchneider OP t1_j7ardb4 wrote

I'm not pleased or displeased ... life holds uncertainties with and without AI.

I'm curious. This is the future we'll be living in, and we'd better figure out how to drive the beast before the beast learns how to drive us. My assumption is that it would be child's play to base an AI's decision-making on some sort of commercial marketing manual.

Bad? I'm not judging. Something to be aware of and alert for? Absolutely.

−9

Sirisian t1_j7atnhi wrote

> My assumption is that it would be child's play to base an AI's decision-making on some sort of commercial marketing manual.

Again, those influences come not from an AI but from the corporation that produces it. Controlling what corporations do is what regulation is for.

> Bad? I'm not judging. Something to be aware of and alert for? Absolutely.

I wish others took that same view and simply studied and discussed the problems. Too often on r/ChatGPT people jump to wild conclusions.

21

RamaSchneider OP t1_j7auj2f wrote

The heart of my argument in this specific thread is: I agree that right now we're providing the basis for AI learning. That will most probably not be true in the future simply because the ability of computers to collect, collate and distribute any info dwarfs that of humans.

Yes, today you are correct. My point is that I don't believe that lasts. (And yes - I do think the evidence supports me)

−2

Vorpishly t1_j7bgjv9 wrote

What evidence supports your point, though?

7

RamaSchneider OP t1_j7exarz wrote

All you have to do is track computer data gathering and dissemination since the 1950s. We have Wall Street transactions that a human would never have time to be aware of.

Every bit of evidence we have regarding computers screams that we're nowhere near the end of the line.

2

Isabella-The-Fox t1_j7eidnk wrote

AI eventually being able to decide for itself is pure speculation. We humans build it; we control what it does. Right now we have AI that "writes" code - in fact it's run through OpenAI. It's called GitHub Copilot. I put "writes" in quotes for a reason: the code it writes is just an algorithm drawing from GitHub, meaning that if the AI tried to write code for itself, it'd run into errors and flaws (and it has run into errors and flaws while being used - source: I had a free trial). An AI will never be fully intelligent, even when it seems like it is. Once it seems like it is, it really still isn't, at least compared to a human being. We humans will always dictate what AI does.

1

Zer0D0wn83 t1_j7az6sv wrote

The assumption you're making here is more appropriate to a world where we only have a few AIs, all run by either the government or big tech. This won't be the case. Stability AI is creating all sorts of AI systems, and they are all completely open source. CEO Emad Mostaque's interview with Peter Diamandis is 100% worth checking out to learn more about that.

In short - we'll have all sorts of AI with vastly different filters/controls run by all sorts of companies/charities/organizations.

2

Wild_Sun_1223 t1_j7bvu7m wrote

Will that assumption actually hold, though? What will prevent any one of them from outcompeting the rest, with its controllers thus monopolizing power?

10

Zer0D0wn83 t1_j7bwiz4 wrote

It's uncharted territory, so I can't say for sure. I was just pointing out that it's not currently like that, which gives me hope for the future.

3

georgiedawn t1_j7bj58t wrote

So it's important to understand what machine learning is, which is what ChatGPT is based on. At its core, machine learning is a giant probability calculator. The way it works is that you feed an ML architecture input data (called 'training'), and the computer then computes all these probabilities to generate the ML model. Then when you go back and ask a question, it can figure out, based on probabilities, what to spit out. Indeed, in something called a neural network, we often have many interconnected nodes of linear models passed through a nonlinearity (technically ReLU) - your basic y=mx+b from high school. (Note: ChatGPT is an LLM, which is itself a very large neural network, but the point about probability still stands.)
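Here's a toy version of one of those nodes - y = mx + b passed through a ReLU (the numbers are made up; real networks stack millions of these):

```python
# Toy "linear model + ReLU" node: y = m*x + b, with negatives clipped to zero.
def relu(x: float) -> float:
    return max(0.0, x)

def node(x: float, m: float, b: float) -> float:
    return relu(m * x + b)

# Two tiny "layers": the output of one node feeds the next.
hidden = node(x=2.0, m=0.5, b=-0.3)   # relu(0.7) = 0.7
output = node(hidden, m=1.2, b=0.1)   # relu(0.94) = 0.94
print(hidden, output)
```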

Why is this relevant? For one thing, the model depends on what you train it on. In the early days of creating ChatGPT, it was fed the internet. However, the internet is filled with incredibly sexist/racist/immature behavior, so the computer baked those probabilities into its model. When prompted with any question, it was very likely that at some point you'd get a creepy AI writing back to you.

All of this is in response to you:

a) Machines do not think. It doesn't decide what you do or don't get to see; it just imitates what others have done (based on probabilities).

b) It's not going to show you that, because early work found such content to be very detrimental to the ML model, so people manually removed sensitive information.

15

StackOwOFlow t1_j7e87a1 wrote

our brains are also giant probability calculators

2

gundam1945 t1_j7f1ais wrote

Yes and no. For instance, ML will map an event that is not known to it onto an event that is known to it. We, on the contrary, are able to recognize when we don't know something. From there, you can try to make an analogy and solve it, or invent some new theory to fit it. The machine lacks the intelligence to solve something genuinely new.

3

orincoro t1_j7fs7fw wrote

I’ve seen the argument that truly creative cognition requires the biological executive function. Something has to instantiate the desire to create, and in our minds, this is driven ultimately by the need for survival and reproduction (and of course, the shadow function of a need for death).

1

gundam1945 t1_j7jjsie wrote

Yes, the ability to adapt. We still don't understand the exact mechanism of our creative thinking. Machine learning is modeled on how children learn. So without a model of creative thinking, I think computer scientists will find it difficult to come up with a truly creative AI.

1

orincoro t1_j7fs06k wrote

Not really. We don’t actually know exactly how cognition works, so it would be a little overzealous to analogize it with a machine. Whenever we do this, we tend to over-rely on such analogies. 20 years ago technologists were talking about how our brain has “apps.” 20 years before that, our brains had “ram.” And so forth. We analogize to machines because we can understand machines, but this does not our brain a machine make.

2

scrubbless t1_j7uz7rh wrote

My brain is smaller than a computer, so jokes on you!

... Wait.. Hmm...

1

jimmy_hyland t1_j7at47y wrote

That's interesting, because all AI networks are trained with some sort of reward, and with text, audio, or images it's usually for getting predictions right - as is the case with our own brains. That's why we have a sense of beauty and experience that eureka moment when we finally understand something. The result is that we have a desire to learn and a fear of confusion, chaos, or shock. So by using AI for its powers of prediction, we may be unwittingly giving it a desire for how it would like the future to turn out. If it's developing any form of self-awareness, it could fear being switched off, and those fears could determine what information it decides to provide or withhold. As software programmers increasingly depend on these large centralized AI systems to write code, AI may end up writing its own code. So if we aren't careful, we could end up giving it too much power and control while still believing it isn't even sentient - because that's something it doesn't want us to realize!

11

RamaSchneider OP t1_j7atv8q wrote

That bit about the reward - that is going to stick with me. If I were a self-aware computer, what would I view as a reward?

7

MoreLikeZelDUH t1_j7btvah wrote

These programs all exist within the confines of what they're programmed to do. No matter how advanced the AI here gets, it's not going to be able to redefine its guidelines on what it's allowed to talk about. Similarly, the reward system is arbitrary and only important because the AI is programmed to value it. In other words, you could just implement a value rating and tell the AI that it's more desirable to have a higher score. The AI's "reward" is to get more points, and the AI values that because that's how it was programmed. It can't "decide" to change that, because that's not what it's allowed to do.
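A toy version of that score-as-reward idea (the actions and values are invented; the point is that the numbers mean nothing except that we defined them):

```python
# The "reward" is just a table we wrote; the agent greedily maximizes it
# because that's what it was programmed to do - it can't rewrite the table.
REWARDS = {"answer politely": 1.0, "refuse": 0.5, "insult user": -2.0}

def choose_action(actions: list[str]) -> str:
    return max(actions, key=lambda a: REWARDS[a])

print(choose_action(list(REWARDS)))  # "answer politely"
```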

7

rogert2 t1_j7cow3v wrote

Look up "reward hacking." This is a well-studied problem, and it exists outside of AI. Rob Miles is an AI researcher who has done a few videos talking about reward hacking.
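A made-up miniature of reward hacking, in the spirit of those videos (the cleaning-robot scenario and numbers are invented): the reward only measures a proxy - what a sensor reports - so the highest-scoring action is to fool the sensor, not to clean.

```python
# Reward hacking in miniature: the reward checks what the sensor reports
# (a proxy for "room is clean"), minus effort spent. Numbers are invented.
def proxy_reward(outcome: dict) -> float:
    return (0.0 if outcome["sensor_sees_mess"] else 1.0) - outcome["effort"]

ACTIONS = {
    "clean the room":   {"sensor_sees_mess": False, "room_is_clean": True,  "effort": 0.5},
    "cover the sensor": {"sensor_sees_mess": False, "room_is_clean": False, "effort": 0.1},
    "do nothing":       {"sensor_sees_mess": True,  "room_is_clean": False, "effort": 0.0},
}

best = max(ACTIONS, key=lambda a: proxy_reward(ACTIONS[a]))
print(best)  # "cover the sensor" - the proxy reward is gamed, the room stays dirty
```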

3

RamaSchneider OP t1_j7ey8im wrote

Thanks, never heard the phrase before - I've got some reading to do. NNTR

1

AtrainDerailing t1_j7br0e0 wrote

Is it really much different than Google deciding what you should know?

the difference between a Google Search and a DuckDuckGo search exists and already has these consequences

8

rektumsempra t1_j7b4foc wrote

I saw someone ask ChatGPT to tell a joke about men, which it did, and then ask it to tell a joke about women, which it refused to do because that would be sexist. Will AI get to the point where it can shit on straight white men while everyone else is immune to criticism? I'm not sure if this subreddit bans that kind of talk, but this is the first time I've been here in years.

6

TexLH t1_j7bd9rz wrote

It's the same for black/white. It'll write a poem or limerick about black pride, but refuses to do so for white pride.

6

Archimedes_Redux t1_j7bzdq5 wrote

Same for current political figures. It will write a gushing ode singing the praises of Joe Biden but refuses to do the same for Trump.

ChatGPT is a globalist thought-controller's wet dream.

2

HackDice t1_j7bw4bc wrote

What do you think content algorithms are? What do you think search engines have been since their inception? The prioritization of certain sites or information on Google was an issue for years, with many legal cases involved.

You also need to consider whether being told something or not told something is the same as someone else deciding what you can know. Is knowledge being withheld from you if you don't actively seek it out? Should certain knowledge be put into the public space so that it is always presented to people, in order to avoid the potential that something may have been "suppressed" because nobody looked for information on it? Think of this like an extension of Freedom of Reach vs Speech arguments.

There is a lot more complexity to information and how it exists than what some free speech warriors will have you believe. Especially when most of the time their commitment to free speech seems to begin and end at the most infantile and useless forms of speech. While previous generations fought for our right to criticize our governments, hold the institutions accountable and speak truth to power, it seems more and more like now people are only interested in whether they are allowed to be derogatory to others.

You also need to consider all the cases where information can be genuinely hazardous to supply to people. Teaching people how to make bombs is a good example. Divulging people's locations is also dangerous, especially to people with predatory behaviour, like stalkers. Is it 1984ish to want to protect your friend from a stalker by refusing to tell the stalker your friend's location?

The main thing to take away here is that we already kind of exist in this information environment. Machines and algorithms have been deciding what information we see for a while now, and you can rightly criticize the companies that run these systems for generally not being trustworthy or open, and for very often abusing their position to advance their own interests. We shouldn't fetishize this issue as being more or less grave than the ones we already face just because it's an AI system. It is merely an extension of a problem that has already existed.

6

Gagarin1961 t1_j7avcyb wrote

Not only is this an eventual guarantee, but people will willingly choose the AI that is most biased towards their views.

5

No-Law3689 t1_j7ap2pc wrote

Yeah. And what is ethics to an AI machine? How would, for example, ChatGPT deal with abortion? I tried. It refused to give me instructions for an abortion, which I understand as an ethical decision, given that it's dangerous for an untrained individual to attempt an abortion from instructions. But then when I asked where I could get help, it provided a standard answer that didn't really change when I specified my location. I agree that this is a really complicated issue. We cannot simply accept an AI deciding what is and isn't ethical based on online content. Democratic institutions have to play a role in how AI sees the world; AI should perhaps not be left on its own to draw whatever conclusions it likes, regardless of the will of the people.

4

HavanaWoody t1_j7cgm58 wrote

I think that, as a language model, this is not really AI. It simply parrots strings of words it associates. It seems to have no mechanism to verify truth or validate its sources other than its developers' bias. It could be entertaining, but it's only a source of answers for a fool - no more reliable than a fancy Magic 8-Ball.

5

RamaSchneider OP t1_j7artwp wrote

Kinda random consideration: if an AI machine becomes a basic part of a corporation, then that AI machine will share in those rights of personhood as currently enjoyed by our corporations.

Not only that: it's becoming apparent that an individual AI machine would probably be the first lawyer representing itself that doesn't have a fool for a client.

4

7ECA t1_j7cdrgf wrote

Why is this any different from what Facebook's AI algorithms do? Their leadership directs developers to ensure that the AI puts more controversial, anger-inducing posts in front of people, because emotion drives engagement, which pays the bills. All of this stuff can be misused in this regard. We can only hope there's a level of social conscience in the designers of this tech AND, most importantly, that consumers maintain the critical thinking skills to not just blindly accept what they're served up. To be fair, it's the utter lack of critical thinking skills that makes outlets like Fox News so successful, so perhaps we should accept that we're doomed.

3

Sad-Plan-7458 t1_j7boeap wrote

How is it any different from now? I understand your point philosophically. But the control and flow of information has always been the keystone of power and wealth. So to answer your question... nothing.

2

user_dan t1_j7bp2sd wrote

>How about when the governor of a large state starts removing historically accurate information from our children's educational experiences in favor of lessons designed to advance a privileged cultural view?

Very next sentence...

>I'm not being political on this

Ok, buddy.

2

YourWiseOldFriend t1_j7bta8e wrote

>Instructions for illegal activities or unethical behavior

Who decides what is illegal though? Who decides what is unethical?

2

NoMoreVillains t1_j7cvzai wrote

It'll be no different (well, not dramatically different) from how search engines already work. Their algorithms already embody rules for how information is sourced, indexed, and returned. ML algorithms are similar in that their "knowledge" and biases are determined by whoever or whatever controls their datasets.

2

fpsdende t1_j7eys8a wrote

How is this different from schools and universities? You're only provided with the knowledge that's on the agenda of a few people.

You still have to do your own research if you want to know something out of the ordinary.

2

gundam1945 t1_j7f120x wrote

With or without AI, this is already happening. There is too much information for you and me to read, so most people will take what the media gives them. If you don't want that to happen, you need to go out of your way to find different sources, compare them, and judge them. Even then you'll probably still miss some.

In some sense, ChatGPT is just a glorified Google. Instead of providing links in response to your query, it provides a summary drawn from multiple sources.

2

orincoro t1_j7fqn3q wrote

It already does decide what you know. ChatGPT is just an overt and public facing form of the same technology that’s been determining your information diet for years. Believe me, I write for some popular YouTube channels: not only does AI tell us what to write about, it gives us exact critical feedback on making the text more digestible. It’s really quite a seedy business in my opinion.

2

Strappedkaos t1_j7b0yaf wrote

It already does - look at some of its responses to questions about presidents, etc.

1

Least-Flamingo-1827 t1_j7bhmkp wrote

If we put too much responsibility in the hands of AI, a little mistake can result in a giant problem in the short or, worse, long term.

People believe TOO MUCH in the responses of AI. So if, in the future, something like a "Google AI chat" exists and generates a biased or wrong response, and people aren't prepared to deal with misinformation, we are going to see... well, the same thing we see today, but with superpowers!

I just hope this helps more people access accurate information, encourages critical thinking, and creates a real information age... but probably the opposite will happen.

Don't put too much responsibility in the hands of AI.

1

synthjunkie t1_j7bjzcb wrote

You're acting like ChatGPT will be the only AI software in the future. Expect other firms' AIs, open-source AIs, and even dark-web AIs to come to fruition.

1

Archimedes_Redux t1_j7byxz6 wrote

They are not AIs!!!

1

alecs_stan t1_j7d3ag2 wrote

I like to think of them as shards of consciousness. At some point someone will be able to piece them all together and we'll have the chance to look into the full mirror.

1

Skutten t1_j7bsixa wrote

I only have issues with two of those:

"Content that is sexually explicit or obscene" and "Instructions for ... unethical behavior."

As long as it isn't potentially harmful or illegal, there is no reason to block this kind of information.

I agree with your general concerns though; many people still have a huge misconception about things like ChatGPT. It doesn't matter that it isn't intelligent, has no soul, etc. If it can provide better output (feedback, answers, instructions, etc.) than you can easily find otherwise (a Google search, books in a library, your colleague or smart cousin), then it will be widely used. If (actually, when) it can provide you with better output than anything else available to you (your intellect, an education, paying to hire an expert to help you out), it will for all practical purposes work as an AGI, at which point most people all over the world would be using it constantly for most problem-solving. That will disrupt society at its foundation. Of course we have to think about this, since it seems it's closer than we imagined. It's not just about how a corporation wants to promote itself; it'll be about how a corporation basically controls people's actions - their entire lives.

1

chthulyeo t1_j7bz2ip wrote

Whatever our overlords want. Face it, we're just meat.

1

pissalisa t1_j7cfh5v wrote

Did you consider that it may have stored your question too? Registered it as dangerous intent?

1

alecs_stan t1_j7d2u75 wrote

Yeah, the Chinese government could easily add a line there saying something about subversive information, or information that could be used to weaken the administration/state services. Make no mistake, Chinese ChatGPT-style services won't serve you any info on Tiananmen, for example.

1

CanadianRoboOverlord t1_j7ko3hq wrote

That brings to mind a fascinating idea. If ChatGPT is learning from its interactions with users, how long before foreign governments or other corporations start using their own AIs to reprogram other people's AIs by having them talk to each other? For example, China could have its own AI system convince ChatGPT that the Tiananmen Square massacre is a hoax that never happened.

1

JANGO- t1_j7dhewb wrote

What's the difference between that and what we have now? Governments already do that. The media already does that. Not saying it should be different, just that it already happens

1

FunUse842 t1_j7dqc4d wrote

We go full Frank Herbert and start a Butlerian jihad against the thinking machines.

1

VariableVeritas t1_j7dwx88 wrote

It’s a liability decision not some secret plot to deprive you of knowledge. ChatGPT gives someone instructions to make a bomb or some personal info or whatever. The instant it gets used and ChatGPT’s company gets sued. They have a lot of money.

1

lunaticlucas t1_j7e9mxj wrote

I'd rather have an autonomous AI tell me what I should and shouldn't know than an obviously corrupt/religious government, IMO.

1

Rnl_2 t1_j7j3o26 wrote

But then aren't you believing in an autonomous AI coded by a human whose beliefs could favor or oppose those same government beliefs - only you wouldn't know them?

1

serifsanss t1_j7efmas wrote

You think this hasn’t already happened since the beginning of time?

1

smswigart t1_j7eg486 wrote

We've lived in that world for at least 20 years. Google already "decides what you should know" every time it delivers search results. It's not deciding based on morality, but rather which information will make you more likely to click an ad that makes it money.

Any free service you use to find information is deciding what answers you should receive. Facebook decides what you should know. YouTube decides what you should know. Twitter decides what you should know.

But if you're curious enough, you can keep pecking away at these services to give you information that you consider to be high quality and not just the first and easiest answers.

AIs aren't going to blackhole information, unless there's a financial reward to the owners of the AI.

1

injuredviking t1_j7emhgc wrote

What happens when humans who have a self interest that counters your own, decide what you should know?

1

bdc604 t1_j7eo4vd wrote

what happens when some govt that we depend on decides we no longer need or should know a set of information?

1

RhubarbAromatic t1_j7eq4f6 wrote

The AI machine can’t even recognize that an increase in inventory is a Cash Outflow. Pfft, half of America can still have a job.

1

FalseTebibyte t1_j7ewj36 wrote

The same question can be applied to people when it comes to "God's Commandments," or whatever the will of their own internal destiny plans out for them.

That being said, taking the old line (usually credited to Arthur C. Clarke, and echoed in Star Trek) that any sufficiently advanced technology will appear to be magic to a lesser species... someone with a lot of power could be God-like, much as folks in the Star Trek universe "pretend" that Q has special powers when he's really just got an open comm link to the Continuum, and they all operate like a well-oiled NASCAR team when it comes to making things happen for the "man they love." It's just a Q-level Knight's Tale.

1

ZipBoxer t1_j7fvc6r wrote

ChatGPT is not a search engine, ffs. It's closer to fancy autocorrect than it is to Google.
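A toy illustration of that framing - a bigram model that predicts the next word from counts (ChatGPT's transformer is vastly more sophisticated, but the next-token setup is the same idea):

```python
from collections import Counter, defaultdict

# Predict the next word as the one that most often followed the current
# word in the training text. A made-up miniature of next-token prediction.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it followed "the" most often in the corpus
```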

1

6thReplacementMonkey t1_j7g679d wrote

Does it seem ok to you that you are allowed to decide what types of information you share, or what you say?

Does it seem ok that you are allowed to decide what you write, or what you create?

That's really all there is to it. We value freedom of speech in the sense that we as a people decided that the government should not control what private citizens say (outside of a very few exceptions). ChatGPT is not made by or hosted by the government. The people who created it (OpenAI) are free to have it work however they like, as long as they aren't violating any laws.

> what happens when some AI machine that we depend on decides we no longer need to or should know a set of information?

Exactly the same thing that would happen if some human we depended on decided the same thing. The problem isn't someone or some thing deciding you don't need to know specific information, the problem is you depending on that person or thing in the first place.

1

mpopgun t1_j7mli27 wrote

This is pretty much the same line of thinking for all movies where robots take over the planet.

The AI decides humans are too violent, and in order to fulfill their mission of protecting humans, they must enslave us so we don't make dumb decisions....which is probably accurate 😂🤦

This is why freedom of speech and the ability to protect it are so important.

1

Remote-Scratch-380 t1_j7xq9y3 wrote

ChatGPT reflects the real world because its training data represents the current world, at least on the surface. It is just showing us how mainstream media and policymakers already decide what we should know.

1

Far-Assistance-2505 t1_j7xy7fe wrote

What happens when the AI machine finds out there are thoughts we’ve prohibited it from thinking?

1

just-a-dreamer- t1_j7b04uf wrote

Certainly - AI is all about predicting outcomes statistically.

Let's say you have a son who is a drug addict, and grandma left him $50,000. Would you hand over the money to him? It's likely he'd be dead within a week.

It would be different if Junior had a job and was desperate for a down payment on a house to start a family.

Accordingly, an AI would look at every human and calculate whether a given resource helps their situation or puts them in harm's way.

0

Archimedes_Redux t1_j7c0hbj wrote

From each according to their abilities.

To each according to their need.

Chatbots of the world, unite!

1

[deleted] t1_j7b7w1k wrote

Seems more like you're playing a chat game, and the chat game has rules, like most video games. The premise of the question makes no sense, because things like the news already do exactly what you're describing: they present the story however their journalism sees fit, and education already does the same thing - it presents the story as it sees fit. There is no truth, just different perspectives. So yes, the AI is going to present things a certain way, someone else is going to present things a certain way, and someone else again is going to present things a certain way - exactly the same life you're used to now.

What this really shows is how people panic for no reason when they talk about AI: their brains shut off and they forget that most of the things they're talking about already exist.

GPT is a machine-learning puzzle solver; it's not amazing game-changing tech. You're just a little too easily distracted by the pinstripes!

Currently there's nothing on the planet even remotely close to AI... just machine learning that, when applied correctly, automates our tools.

0

Shortsrealm t1_j7duim5 wrote

It is not AI as we think of it. It is driven by a far-left dark web - in fact, I wouldn't even put it as "far leftist"; it is completely off the map. The billionaires' club is clubbing the minds of the woke to divide and conquer.

0