Queue_Bit t1_jecjpk9 wrote

This is the thing I really wish I could sit down and talk with him about.

I fundamentally think that empathy and ethics scale with intelligence. I think every type of intelligence we've ever seen has followed this path. I will concede that artificial intelligence is likely to be alien to us in fundamental ways, but my intuition that intelligence is directly linked to a general empathy is backed up by real-world evidence.

The base assumption that an artificial intelligence would inherently want to wipe us out or control us is as wild a claim as saying that AI systems don't need alignment at all and are certain to come out "good".

In his "fast human, slow aliens" example, why couldn't I, as the human, choose to help them? Maybe explain that I can see they're doing immoral things, and show them how to build things so they don't need to do those immoral things. He focuses so much on my desire to "escape and control" that he never stops to consider that I might want to help. Because if I were put in that situation, with the power and ability to help shape their world in a way that benefited everyone, I would. But I wouldn't do it by force, nor against their wishes.

11

Queue_Bit t1_jdzlxht wrote

I mean that we've used about 1/10th of the high-quality training data.

Which means that even with zero improvement in algorithms or methodology, assuming improvement scales linearly with data and that no new data is created, LLMs will get roughly 10x better. And who knows what that looks like.
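To make the back-of-envelope arithmetic explicit, here's a minimal sketch in Python. Both inputs are the assumptions stated above (about 1/10th of the high-quality data used so far, and capability scaling linearly with data), not measurements:

```python
# Back-of-envelope sketch of the argument above. Every number is an
# assumption from this comment, not a measurement.

DATA_USED_FRACTION = 0.10  # assumed: ~1/10th of high-quality data consumed so far

def projected_gain(used_fraction: float) -> float:
    """Headroom multiplier if capability scales linearly with data consumed."""
    return 1.0 / used_fraction

print(f"Projected improvement: ~{projected_gain(DATA_USED_FRACTION):.0f}x")
# -> Projected improvement: ~10x
```

Real scaling behavior is of course messier than a straight line; this just shows where the "about 10x" figure comes from under those two premises.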

1

Queue_Bit t1_jdxle5m wrote

Sure, there could be some theoretical wall that stops progress in its tracks. But currently, there is zero reason to believe that such a wall exists in the near future. Even if AI only improves by a single order of magnitude, 10x, it will STILL change the world as we know it in drastic ways.

And here's the funny part: based on research, we KNOW a 10x improvement is already guaranteed. So, I get that you want to slow the hype and want people to think critically, but the truth is that many of us do. And importantly, a greater-than-10x improvement is almost certain.

Imagine an AI that is JUST as good as humans are at everything. Not better. Just equal. But with the caveat that this AI can output work at a rate no human can match. This much is certain: we will create a general AI that is as good as humans at everything. Once that happens, even if it never gets better, we will live in a world so different from today that it will be unrecognizable.

If you had asked me this time last year whether we were going to see a singularity-type event in my lifetime, I would have been unsure, maybe even leaning toward no. But now? If massive societal and economic change doesn't happen by 2030, I will be absolutely shocked. It looks inevitable at this point.

65