
tornado28 t1_j8zcrc2 wrote

People will use them to make money in unethical and disruptive ways. Phishing scams are one unethical example: instead of sending the same phishing email to thousands of people, scammers can gather personal data about each target and use a language model to write personalized phishing emails with a much higher success rate.

Disruptive applications will take jobs. Customer service, content creation, journalism, and software engineering are all fields that may lose jobs as a result of large language models.

The other disruptive possibility is that LLMs will themselves be able to rapidly build more powerful LLMs. I use GitHub Copilot every day and it's already very good at writing code; it cuts at least 25% off the time it takes me to complete a software implementation task. So it's very possible that an LLM could, in the near future, make improvements to its own training script and use it to train an even more powerful LLM. That could lead to a singularity with extremely rapid technological development, and it's not clear to me what the fate of humankind would be in that case.
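
To make the feedback-loop concern concrete, here's a purely illustrative toy calculation (not an actual LLM; the 25% figure just reuses the Copilot anecdote above as a hypothetical per-generation gain): if each model generation makes the next one ~25% more capable, the compounding is what people mean by a takeoff.

```python
# Toy illustration of compounding self-improvement (hypothetical numbers only).
# "capability" is an arbitrary relative score, and the 1.25 factor just reuses
# the ~25% productivity gain mentioned above as a stand-in per-generation gain.
capability = 1.0
gain_per_generation = 1.25

for generation in range(1, 11):
    capability *= gain_per_generation
    print(f"generation {generation:2d}: relative capability {capability:5.2f}")

# Ten generations of 25% compounding gains leave the last model at roughly
# 9.3x the baseline -- the kind of runaway curve the "singularity" worry
# points at.
```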

4

currentscurrents t1_j8zi84t wrote

>Disruptive applications will take jobs. Customer service, content creation, journalism, and software engineering are all fields that may lose jobs as a result of large language models.

I don't wanna work though. I'm all for having robots do it.

2

tornado28 t1_j8zqwy4 wrote

Why are the robots going to want to keep you around if you don't do anything useful?

3

currentscurrents t1_j8zs6o6 wrote

We will control what the robots want, because we're the ones designing them.

That's the core of AI alignment: controlling the AI's goals.

1

tornado28 t1_j8ztxdg wrote

Yeah, I guess I'm pretty pessimistic about the possibility of aligned AI. Even if we dedicated more resources to it, alignment is a very hard problem. We don't know which model is going to end up being the first AGI, and if that model isn't aligned, we won't get a second chance. We're not good at getting things right on the first try; we have to iterate. Look how many of Elon Musk's rockets blew up before they started working reliably.

Right now I see more of an AI arms race between the big tech companies than an alignment-focused research program. Sure, Microsoft wants aligned AI, but it's more important to them that they build it before Google does, so if it's aligned enough to produce PC text most of the time, that might be good enough.

2

currentscurrents t1_j8zugnd wrote

The lucky thing is that neural networks aren't evil by default; they're useless and random by default. If you don't give them a goal, they just sit there and emit random garbage.

Lack of controllability is a major obstacle to the usability of language models or image generators, so there are lots of people working on it. In the process, they will learn techniques that we can use to control future superintelligent AI.
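
For what it's worth, here's a minimal sketch of the "random by default" point, assuming PyTorch (the model, observation, and "desired action" below are all made up for illustration): an untrained network's outputs are arbitrary noise, and behavior only becomes directed once you pick an objective and optimize against it, which is the lever alignment work is trying to control.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# A freshly initialized (untrained) network: its weights are random, so its
# scores over five hypothetical "actions" are effectively arbitrary noise.
untrained = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 5))

obs = torch.randn(1, 16)        # made-up observation vector
scores = untrained(obs)         # random-looking preferences, no goal behind them
print(scores)

# Behavior only becomes goal-directed once an objective (a loss) is chosen and
# optimized against -- that choice of objective is what alignment work targets.
target = torch.tensor([2])      # hypothetical "desired action"
loss = F.cross_entropy(scores, target)
loss.backward()                 # gradients now push the weights toward that goal
```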

0

tornado28 t1_j8zwrwo wrote

It seems to me that the default behavior is going to be to make as much money as possible for whoever trained the model, with only the most superficial moral constraints. Are you sure that isn't evil?

2

currentscurrents t1_j8zy3m4 wrote

In the modern economy, the best way to make a lot of money is to make a product that a lot of people are willing to pay for. You can make some money scamming people, but nothing close to what you'd make by creating the next iPhone-level invention.

Also, that's not a problem of AI alignment; it's a problem of human alignment. The same problem exists in the world today and existed a thousand years ago.

But in a sense I do agree: the biggest threat from AI is not that it will go Ultron, but that humans will use it to fight our own petty struggles. Future armies will be run by AI, and weapons of war will become even more terrifying than they are now.

1