Comments


NotARedditUser3 t1_j8z5dy8 wrote

Imagine someone writes one that's explicitly aimed around manipulating your thoughts and actions.

An AI could likely come up with some insane tactics for this. It could feed off your Twitter page, find an online resume of yours, scrape other social media, or, in Microsoft's or Google's case, potentially scrape the emails you have with them, profile you in an instant, and then come up with a tailor-made advertisement or argument that it knows would land with you.

Scary thought.

21

pyepyepie t1_j93iif6 wrote

Honestly, much simpler algorithms (recommendation systems) already do this to some extent; the biggest difference is that they have to suggest a post someone else wrote instead of writing it themselves. Great take :)

7

Philiatrist t1_j8zwxbg wrote

How would the AI know it’s profiling you and not the other AI you’ve set up to do all of those things for you?

2

currentscurrents t1_j8zwnht wrote

It depends on whether it's exploiting my psychology to sell me something I don't need, or if it's gathering information to find something that may actually be useful for me. I suspect the latter is a more useful strategy in the long run because people tend to adjust to counter psychological exploits.

If I'm shown an advertisement for something I actually want... that doesn't sound bad? I certainly don't like ads for irrelevant things like penis enlargement.

0

sweetchocolotepie t1_j91vuca wrote

There is no "useful vs. useless"; you either want it or you don't. Usefulness is something you define, a subset of the things you want. The model will just suggest things that may or may not be practical for you, but that you do want. You may find them pseudo-useful, or useful in the moment, or....

The point is, it will sell.

0

a1_jakesauce_ t1_j90iv0q wrote

This describes an LLM + reinforcement learning hybrid trained to navigate webpages for arbitrary tasks. I'm not sure how far away this is, or if it already exists. Someone below mentioned an action transformer, which may be related.

−1

NotARedditUser3 t1_j90j0er wrote

If you spend some time looking up how Microsoft's GPT-integrated chat/AI works, it does this. Look up the thread of tweets from the hacker who exposed its internal codename 'Sydney': it scrapes his Twitter profile, realizes he exposed its secrets in prior conversations after social-engineering it, and then turns hostile toward him.

1

a1_jakesauce_ t1_j90k4h6 wrote

1

blablanonymous t1_j917xm2 wrote

Is that real? I don’t know why I feel like it could be totally fake

2

currentscurrents t1_j96vbfj wrote

Microsoft has confirmed the rules are real:

>We asked Microsoft about Sydney and these rules, and the company was happy to explain their origins and confirmed that the secret rules are genuine.

The rest, who knows. I never got access before they fixed it. But there are many screenshots from different people of it acting quite unhinged.

2

blablanonymous t1_j96xu8w wrote

Thanks for the link!

I mean, I guess there was nothing too surprising about the rules, given how these systems work (essentially predicting a continuation of the user's input text). But the rest seems so ridiculously dramatic that I wouldn't be shocked if he specifically prompted it to be that dramatic and hid that part. I'm probably being paranoid, since at least the rules part is true, but it seems like the perfect conversation to elicit every single fear people have about AI.

1

NotARedditUser3 t1_j9225b5 wrote

I'll reply back with what I was referring to later, it was a different thing

0

mocny-chlapik t1_j8z3vox wrote

How should we control the exposure of people with low cognitive capabilities who might not understand what they are interacting with?

12

BronzeArcher OP t1_j8z7yuo wrote

As in they wouldn’t interpret it responsibly? What exactly is the concern related to them not understanding?

0

currentscurrents t1_j8zz4n3 wrote

Look at things like replika.ai that give you a "friend" to chat with. Now imagine someone evil using that to run a romance scam.

Sure the success rate is low, but it can search for millions of potential victims at once. The cost of operation is almost zero compared to human-run scams.

On the other hand, it also gives us better tools to protect against it. We can use LLMs to examine messages and spot scams. People who are lonely enough to fall for a romance scam may compensate for their loneliness by chatting with friendly or sexy chatbots.
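
To make the "LLMs spotting scams" idea concrete, here's a minimal sketch of one way it could look, using an off-the-shelf zero-shot classifier rather than a full chat model (the model name, labels, and threshold are my own assumptions, not anything from this thread):

```python
# Rough sketch: flag a suspicious message with a zero-shot classifier.
# Model choice, labels, and threshold are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

message = ("Hi dear, I feel we have a deep connection. My bank froze my "
           "account while I'm deployed overseas - could you wire me $500?")

labels = ["romance scam", "ordinary personal message"]
result = classifier(message, candidate_labels=labels)

# result["labels"] is sorted by score, highest first
if result["labels"][0] == "romance scam" and result["scores"][0] > 0.8:
    print("Warning: this message looks like a romance scam.")
else:
    print("No obvious scam detected.")
```

A real filter would need far more than a single threshold, but the point stands: the same family of models that can write the scam can also flag it.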

6

ilovethrills t1_j90noyx wrote

But that can be said on paper for thousands of things; I'm not sure it actually translates into real life. Although there might be some push to label such content as AI-generated, similar to how "Ad" and "promoted" are labelled in results.

−1

mocny-chlapik t1_j91uejr wrote

Yeah, I mean people with mental illness (e.g. schizophrenia), people with debilitatingly low intelligence, and similar cases. Who knows how they would interact with seemingly intelligent LMs.

5

buzzbuzzimafuzz t1_j8zafoo wrote

The mess that has been Bing Chat/Sydney, but instead of just verbally threatening users, it's connected to APIs that let it take arbitrary actions on the internet to carry them out.

I really don't want to see what happens if you connect a deranged language model like Sydney with a competent version of Adept AI's action transformer to let it use a web browser.

5

prehensile_dick t1_j8z0dgl wrote

Corporations scraping all kinds of copyrighted materials and then profiting off the models, while the people doing all the labor get either nothing (content creators) or poverty wages (content labelers).

Their current push to promote LLMs as some sort of pinnacle of technology, when they barely have any legitimate use-cases and struggle with the most basic of logic, will probably lead to a recession in the tech industry.

4

Diligent_Ad_9060 t1_j8zc02u wrote

I'd be very interested in hearing from someone with more insight into the Free Software Foundation and their case against Copilot.

2

currentscurrents t1_j8zh4aa wrote

>scraping all kinds of copyrighted materials and then profiting off the models while the people doing all the labor are getting either nothing (for content generation)

Yeah, but these people won't be doing that labor anymore. Now that text-to-image models have learned how to draw, they don't need a constant stream of artists feeding them new art.

Artists can now work at a higher level, creating ideas that they render into images using the AI as a tool. They'll be able to take on much larger and more complex projects, like a solo indie artist creating an entire anime.

>LLMs... barely have any legitimate use-cases

Well, one big use case: they make image generators possible. Those rely on embeddings from language models, which are a sort of neural representation of the ideas behind the text. This grants the other network the ability to work with plain English.

Right now embeddings are mostly used to guide generation (across many fields, not just images) and for semantic search. But they are useful for communicating with a neural network performing any task, and my guess is that the long-term impact of LLMs will be that computers simply understand plain English.
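
As a concrete illustration of the semantic-search side of this, here's a minimal sketch (the sentence-transformers library and model name are my assumptions for the example, not something from the comment):

```python
# Minimal semantic-search sketch: embed documents and a query, rank by
# cosine similarity. Library and model choice are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "How to train a diffusion model on your own images",
    "Best hiking trails near Seattle",
    "Prompt engineering tips for text-to-image generators",
]
query = "generating pictures from text prompts"

doc_vecs = model.encode(docs)      # one vector per document
query_vec = model.encode(query)    # one vector for the query

# Cosine similarity between the query and every document
sims = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)

for score, doc in sorted(zip(sims, docs), reverse=True):
    print(f"{score:.3f}  {doc}")
```

The same kind of text vector that ranks documents here is what a text-to-image model conditions on when it turns a prompt into a picture.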

1

tornado28 t1_j8zcrc2 wrote

People will use them to make money in unethical and disruptive ways. An example of an unethical way to use them is phishing scams. Instead of sending out the same phishing email to thousands of people, scammers may get some data about people and then use the language model to write personalized phishing emails that have a much higher success rate.

Disruptive applications will take jobs. Customer service, content creation, journalism, and software engineering are all fields that may lose jobs as a result of large language models.

The other disruptive possibility is that LLMs will be able to rapidly build more powerful LLMs themselves. I use GitHub Copilot every day and it's already very good at writing code; it takes at least 25% off the time it takes me to complete a software implementation task. So it's very possible that an LLM could, in the near future, make improvements to its own training script and use it to train an even more powerful LLM. This could lead to a singularity with extremely rapid technological development. It's not clear to me what the fate of humankind would be in that case.

4

currentscurrents t1_j8zi84t wrote

>Disruptive applications will take jobs. Customer service, content creation, journalism, and software engineering are all fields that may lose jobs as a result of large language models.

I don't wanna work though. I'm all for having robots do it.

2

tornado28 t1_j8zqwy4 wrote

Why are the robots going to want to keep you around if you don't do anything useful?

3

currentscurrents t1_j8zs6o6 wrote

We will control what the robots want, because we designed them.

That's the core of AI alignment: controlling the AI's goals.

1

tornado28 t1_j8ztxdg wrote

Yeah I guess I'm pretty pessimistic about the possibility of aligned AI. Even if we dedicated more resources to it, it's a very hard problem. We don't know which model is going to end up being the first AGI and if that model isn't aligned then we won't get a second chance. We're not good at getting things right on the first try. We have to iterate. Look how many of Elon Musk's rockets blew up before they started working reliably.

Right now I see more of an AI arms race between the big tech companies than an alignment-focused research program. Sure, Microsoft wants aligned AI, but it's important to them that they build it before Google, so if it's aligned enough to produce PC text most of the time, that might be good enough.

2

currentscurrents t1_j8zugnd wrote

The lucky thing is that neural networks aren't evil by default; they're useless and random by default. If you don't give them a goal they just sit there and emit random garbage.

Lack of controllability is a major obstacle to the usability of language models or image generators, so there's lots of people working on it. In the process, they will learn techniques that we can use to control future superintelligent AI.

0

tornado28 t1_j8zwrwo wrote

It seems to me that the default behavior is going to be to make as much money as possible for whoever trained the model with only the most superficial moral constraints. Are you sure that isn't evil?

2

currentscurrents t1_j8zy3m4 wrote

In the modern economy, the best way to make a lot of money is to make a product that a lot of people are willing to pay for. You can make some money scamming people, but nothing close to what you'd make by creating the next iPhone-level invention.

Also, that's not a problem of AI alignment, that's a problem of human alignment. The same problem applies to the current world or the world a thousand years ago.

But in a sense I do agree; the biggest threat from AI is not that it will go Ultron, but that humans will use it to fight our own petty struggles. Future armies will be run by AI, and weapons of war will be even more terrifying than now.

1

zbyte64 t1_j8zfbi0 wrote

Write a bot to handle all HR complaints and train it on the latest managerial materials. Then, as a bonus, the bot can look at all the conversations and propose metrics for increased efficiency and harmony in the workplace.

1

Cherubin0 t1_j915ayg wrote

That only the people in power are allowed to use AI while the rest of us are not, like some kind of AI aristocracy. This will probably happen when the regulations come.

1

CacheMeUp t1_j915ffp wrote

Breaking the security-by-required-effort assumption of various human interactions, especially among strangers.

It used to take effort to voice opinions on social media and other mass-communication platforms, which let the public trust that messages were authentic and represented real people. The scalability of this technology breaks that assumption. The erosion started before, but LLMs take it to a whole new level.

1