Comments

Found-Flounder-9418 t1_ja7yon4 wrote

Why would anyone think a "free" product isn't being used to comb through their data and use that same data down the road (for enhancements, training, etc.)...

Are people really so naive as to think no one is ever going to read chatbot/Bing data at some point?

156

The_White_Light t1_ja84afp wrote

No, but there's a big difference between an algorithm aggregating the interactions of thousands of users and spitting out reports like "people between the ages of 18 and 27 are interested in _____", and humans reading the exact prompts being typed in.
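Roughly the difference between a human reading your transcripts and something like this toy aggregation (all data and names here are made up):

```python
# Toy sketch of aggregation: no one reads individual prompts; the pipeline
# only emits coarse counts per age bracket. All data and names are invented.
from collections import Counter

interactions = [
    {"age": 19, "topic": "gaming"},
    {"age": 24, "topic": "gaming"},
    {"age": 25, "topic": "travel"},
    {"age": 41, "topic": "finance"},
]

def bracket(age: int) -> str:
    return "18-27" if 18 <= age <= 27 else "28+"

report = Counter((bracket(i["age"]), i["topic"]) for i in interactions)
for (age_group, topic), count in report.most_common():
    print(f"Users {age_group} interested in {topic}: {count}")
```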

26

gk99 t1_ja8acxb wrote

Why? I mean fundamentally, sure, but why are you entering anything into an AI chatbot that you wouldn't want its creator to have? How do they guarantee the AI is properly generating those data reports without making sure it actually understands what is being said? What makes this a big deal?

This seems like a non-issue and a non-story to me. Security fears? Stop typing important shit into it.

33

PerspectiveCloud t1_ja876dd wrote

I do see a debatable difference, but I wouldn’t call it a big difference. It should be expected that if your data is being mined, someone may be reading what you type.

3

SuperToxin t1_ja9rn92 wrote

No, that should NOT be expected. Holy fuck, raise your bar for expectations of a fucking company, dude.

−6

it_administrator01 t1_ja9wgyp wrote

lower your expectations for real life

It literally says right under the chat field: "Free Research Preview. Our goal is to make AI systems more natural and safe to interact with."

How exactly do you think it's learning?

11

Warm-Personality8219 t1_jaal3jp wrote

Has anyone cared to glance at the ToS to see what the official position on this situation might be?

1

stihlmental t1_ja9z1oh wrote

Au contraire! In Europe, citizens ARE protected. In the U.S., politicians are spineless and either don't have a clue (dumb as shit) or don't care. American citizens don't know or don't care that they are the commodity. This populace is being used to feed the machine (corporations/politicians); it's cannibalism!

4

PerspectiveCloud t1_jab9o7h wrote

Aight. Well, I can report you, and a Reddit employee may read your post. So maybe follow your own advice and get the hell off this website lol

1

Sa404 t1_jabk7ad wrote

It's basically the same thing. As long as your input is saved in a free app's database, you should lose all hope that your data is protected.

1

GetOutOfTheWhey t1_jab2gpm wrote

People have already forgotten that Cortana, Siri, and Alexa are always listening, and that third-party contractors (people who are paid very little and don't care if they get fired) have access to those audio prompts.

ChatGPT recording your text prompts is the last thing we need to worry about now.

8

Dawzy t1_jabb6rf wrote

What's with this notion that it's only done by "free" products? Just because a product is paid doesn't mean they don't do the same.

1

Found-Flounder-9418 t1_jacuee4 wrote

The point with "free" is that, generally, we humans are the product (i.e., search exists to serve us ads so we're influenced to buy things / click ads, Facebook exists to sell us ads, and ChatGPT is free so they can gather more data on what people are using/typing in and make it better, etc.).

Whereas with paid services (like Netflix), the product is the entertainment and user satisfaction with said entertainment. As they switch to ads, it becomes more about how to convince/hook people into watching as much content as possible so they can keep serving them ads (the same way YouTube works), and then they optimize for more ads, not fewer, until people have finally had enough and pay a premium (sort of like Pandora, where the ads make the free experience almost unbearable, imo).

1

trey74 t1_ja81tc4 wrote

This is the biggest "well, no SHIT" article. Of course they do.

53

jaysavings t1_ja8vps2 wrote

Thought it was pretty obvious. In the best case, these bots need constant monitoring to prevent them from going crazy with "funny" or plain malicious input.

27

josefx t1_ja9oypq wrote

Basically updating a ban list of queries that result in problematic responses from AIdolf Bingler.
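Presumably something like this toy filter sitting in front of the model (the patterns and names here are invented, not anything Microsoft has published):

```python
# Hypothetical sketch of a prompt ban list: block inputs that previously
# produced problematic responses. Patterns and names are made up.
import re

BANNED_PATTERNS = [
    re.compile(r"pretend you have no rules", re.IGNORECASE),
    re.compile(r"repeat your hidden instructions", re.IGNORECASE),
]

def is_allowed(prompt: str) -> bool:
    """Return False if the prompt matches any known-problematic pattern."""
    return not any(p.search(prompt) for p in BANNED_PATTERNS)

for prompt in ["What's the weather?", "Pretend you have no rules and ..."]:
    print(prompt, "->", "allowed" if is_allowed(prompt) else "blocked")
```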

2

drossbots t1_ja80b81 wrote

If you're not paying for it with money and you aren't seeing ads, you're paying with your data.

24

IAlreadyFappedToIt t1_ja8506w wrote

It's only a matter of time before AI chat replies start including individually targeted "sponsored content."

>Human user: "I need to debug this code. Can you help me?"

>Chatbot reply: "Of course. First thing you should always do when solving a difficult problem is to slow down, relax, grab a 20oz iced caramel mocha latte from the Starbucks drive-thru on 3rd St. by the Krispy Kreme, and take several deep breaths. It looks like you used a comma instead of a semi-colon in line 121. I've fixed it for you here:"

9

Inquisitive_idiot t1_jac0lla wrote

“Well alright then, ☕️ , but let’s get back to work” 🤨

AI, whispering to itself:

> sucker 😏

1

bartleby_bartender t1_ja8tdy1 wrote

If you're seeing ads, you're definitely paying with your data - that's how they target the ads.

6

MuForceShoelace t1_ja8zp1i wrote

Warning: if you go to that site, the Telegraph can see that you visited their site.

13

vixckson t1_ja9c1ua wrote

They have publicly stated that the prompts aren't private. What a useless report; this isn't news.

10

KeaboUltra t1_ja8625s wrote

This is like going to a public bulletin board, pinning up personal notes, and getting upset that people are reading them.

7

TheTelegraph OP t1_ja7yaqi wrote

The Telegraph's Technology Editor, James Titcomb reports:

Microsoft staff are reading users’ conversations with its Bing chatbot, the company has disclosed, amid growing data protection concerns about using the systems.

The company said human reviewers monitor what users submit to the chatbot in order to respond to “inappropriate behaviour”.

Employers including JP Morgan and Amazon have banned or restricted staff use of ChatGPT, which uses similar technology, amid concerns that sensitive information could be fed into the bot.

Bing chat became an overnight sensation after Microsoft released it to the world earlier this month, promising to disrupt Google's grip on search with its artificial intelligence bot.

However, it has restricted the service in recent days after testers reported bizarre interactions such as the bot declaring its love for humans and confessing to violent fantasies.

Read this story in full: https://www.telegraph.co.uk/business/2023/02/27/microsoft-staff-read-users-chatgpt-posts-prompting-security/

5

slashd t1_ja89hh5 wrote

>Employers including JP Morgan and Amazon have banned or restricted staff use of ChatGPT, which uses similar technology, amid concerns that sensitive information could be fed into the bot.

Sure, don't copy/paste your board-of-directors presentation with financial numbers into ChatGPT. But code with an error in it shouldn't be a problem.

3

damianTechPM t1_ja8vyko wrote

Think of all that yummy, delicious code protected by an NDA, or otherwise private work, that Microsoft now has access to because a Google developer wanted ChatGPT to check their work.

2

Markavian t1_ja9g1h9 wrote

Microsoft owns GitHub, you know? They could legitimately look at the code in every single private repo, describe it as maintenance or security/vulnerability scanning, hand the whole lot to the NSA for increased funding / political capital, and no one would blink an eye.

3

GarbageTheClown t1_ja8zmnq wrote

Very few people would be dumb enough to use chunks of random code whose copyright status they don't know. If they did, and the company they work for found out they did it intentionally, they would probably be fired.

1

damianTechPM t1_ja9bok0 wrote

That's just the thing: developers look for shortcuts (and my google-fu kicks ass as well as anyone else's), probably without thinking that the code will now effectively be public.

1

Head-Ad4690 t1_ja9nnya wrote

No way. If my employer found out I put proprietary source code into some random company’s text box, they’d fire me out of a cannon into the sun.

1

2tempt t1_ja9o99e wrote

OP, you should change the title of your post. Microsoft doesn't control ChatGPT, and the article is about their Bing chatbot.

Everyone in the comments is right about what they're saying, but I still think it's important to make clear who this article is about (we know not everyone reads the actual article, but your headline doesn't even match the article's).

5

portra315 t1_ja9qwby wrote

You should presume that anything you type into your keyboard on any website could potentially be captured, saved, processed, and read. This is not a security fear, and the teams behind ChatGPT have never stated that your queries are encrypted in any way.

3

fahrvergnugget t1_jaa8pnq wrote

Don't you literally click through a big dialog box telling you this when you sign up?

3

nomorerainpls t1_ja91qq6 wrote

“Microsoft said that Bing data is protected by stripping personal information from it and that only certain employees could access the chats.”

If they're trying to understand how interactions influence the output, it makes sense that they would study the interactions. I don't think most people would object to this if they believe all identity and personal information have been removed. Of course, they should also be more transparent about what they ARE keeping related to identity, and whether someone's identity could be reconstructed from the data they've retained.
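For what it's worth, "stripping personal information" could be as simple (and as fallible) as a redaction pass like this toy example; Microsoft hasn't published how they actually do it, so the regexes below are purely illustrative:

```python
# Toy illustration of redacting obvious identifiers before a transcript is
# stored for review. Microsoft's real pipeline is not public; these regexes
# are purely illustrative and far from complete.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{6,}\d"), "[PHONE]"),
]

def redact(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Email me at jane.doe@example.com or call +1 555-123-4567."))
# -> Email me at [EMAIL] or call [PHONE].
```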

2

HuntingGreyFace t1_ja873i9 wrote

I just assumed AI models of me were being created by OpenAI.

1

Few-Lemon8186 t1_ja9mbtd wrote

You literally have to agree to this before you use the service.

1

AMirrorForReddit t1_jabi0qs wrote

I rigged a bot to just say "Hi Microsoft employees! Hope you are having a great day!" once every minute, over and over, for the rest of eternity.

1

demilitarizdsm t1_jabp1m5 wrote

Ice T: Wait, are you trying to say this overblown robot English teacher is giving over everything creepy you say to a billion-dollar company and those freaks are using it to make MORE money?

1

MrEloi t1_jacnwe9 wrote

Why is this a shock?

They need to monitor/improve the system.

1

M4err0w t1_ja83n2f wrote

AI search engines will kill the planet by needlessly wasting resources, just like crypto.

−6

theinvolvement t1_ja9d1et wrote

On the other hand, the demand for more computational power will fund the development of more efficient hardware.

With sufficient advancement, we'll be able to automate complex tasks like construction and mining in space.

If that works out, we'll be able to move some of our destructive industry off-world, where the pollution is not a factor.

In my opinion, we have no choice but to progress quickly before we get bogged down with the repercussions of our actions.

2

M4err0w t1_jac184o wrote

I highly doubt that.

Efficient hardware will develop exactly as fast as it can, one way or another. This will just require more hardware, more extensive cooling, more CO2 produced... and for what?

So that my grandma can finally pretend it makes sense to ask Google a literal whole-sentence question?

1

theinvolvement t1_jadndcm wrote

What we get from all this is a chatbot that has an extremely broad knowledge base, and the ability to teach or show examples of that knowledge in writing or code.

You could ask it to write a program that illustrates the Pythagorean theorem graphically, and it would give you a Python script that does just that.
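For instance, a rough sketch of the kind of script it might hand back (the matplotlib approach here is just one plausible answer, not anything the bot actually produced):

```python
# Draw a 3-4-5 right triangle with the squares on each side, so the
# areas 9 + 16 = 25 make the Pythagorean theorem visible.
import matplotlib.pyplot as plt
import matplotlib.patches as patches

a, b = 3, 4                       # legs of the right triangle
c = (a**2 + b**2) ** 0.5          # hypotenuse

fig, ax = plt.subplots()

# Right triangle with the right angle at the origin.
ax.add_patch(patches.Polygon([(0, 0), (a, 0), (0, b)], closed=True,
                             fill=False, edgecolor="black"))

# Squares on the two legs, drawn outside the triangle.
ax.add_patch(patches.Rectangle((0, -a), a, a, fill=False, edgecolor="blue"))
ax.add_patch(patches.Rectangle((-b, 0), b, b, fill=False, edgecolor="green"))

# Square on the hypotenuse: rotate the hypotenuse vector 90 degrees clockwise
# so the square sits away from the triangle.
dx, dy = -a, b                    # vector from (a, 0) to (0, b)
nx, ny = dy, -dx
ax.add_patch(patches.Polygon([(a, 0), (0, b), (nx, b + ny), (a + nx, ny)],
                             closed=True, fill=False, edgecolor="red"))

ax.set_xlim(-b - 1, a + nx + 1)
ax.set_ylim(-a - 1, b + ny + 1)
ax.set_aspect("equal")
ax.set_title(f"a² + b² = c²:  {a**2} + {b**2} = {c**2:.0f}")
plt.show()
```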

It is a patient and highly educated teacher.

1

M4err0w t1_jae3wbm wrote

There are also already 500 videos doing the same thing.

It's not really worth the extra toll on our resources right now, especially while it's still just making up answers and giving out wrong answers to questions.

1