
SleekEagle t1_j9tttxr wrote

Until the tools start exhibiting behavior that you didn't predict, in ways that you have no control over. I'm not taking a position on which side is "right", just pointing out that this is a false equivalence with respect to the arguments being made.


EDIT: Typo


wind_dude t1_j9up1ux wrote

> Until the tools start exhibiting behavior that you didn't predict and in ways that you have no control over.

LLMs already behave in ways we don't expect. But they are much more than a hop, a skip, a jump, and 27 hypothetical leaps away from being out of our control.

Yes, people will use AI for bad things, but that's not an inherent property of AI; it's an inherent property of humanity.


SleekEagle t1_j9vl7r3 wrote

I don't think anyone believes it will be LLMs that undergo an intelligence explosion, but they could certainly be a piece of the puzzle. Look at how much progress has been made in the past 10 years alone; imo it's not unreasonable to think that the alignment problem will be a serious concern within the next 30 years.

In the short term, though, I agree that people doing bad things with AI is much more likely than an intelligence explosion.

Whatever anyone's opinion, I think the fact that the views of very smart and knowledgeable people run the gamut is a testament to the need to dedicate serious resources to ethical AI, beyond the disclaimers at the end of every paper noting that models may contain biases.
