
Russila t1_ivxe531 wrote

I literally responded to someone with this exact attitude. I said I base my expectations on what the best researchers working on the problem say, and the response I got was "That's just selection bias," which, sure, it could be. But if we assume even the best researchers in a field don't know what they're talking about, then why tf are they there?

3

Russila t1_ivpiy2i wrote

This is true, and here's the thing: it happens when it happens. None of us are divination wizards or prophets. We can only make guesses based on existing evidence.

What I do see very consistently across the board is people bringing their AI timelines down. That makes me more optimistic, I think.

1

Russila t1_ivpdbd9 wrote

This is true. But if we scrutinized and doubted every single thing we heard, we wouldn't believe anything is true. There is a fallacy for every possible argument that can be made.

Do I think it will happen in 10-15 years? Based on what researchers are currently saying, yes. Could that change when new information comes to light? Yes. We should base our expectations on existing evidence and change them when that evidence shifts. Hopeless optimism and hopeless pessimism help no one.

Regardless, we should continue to accelerate the coming of AGI as much as possible, in my opinion. Its potential uses far outweigh its potential downsides.

1

Russila t1_itkkzye wrote

I don't think many people believe we just need to scale. All of these things are giving us an idea of how to build AGI. So now we know how to get a model to self-improve, and we can simulate a thinking process. When these things are combined, it could get us closer.

If we can give it some kind of long-term memory that it can retrieve from and act upon, plus some kind of common-sense reasoning, then that's very close to AGI.
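To make that concrete, here's a rough sketch of what a retrievable long-term memory could look like: store past text as vectors, then pull back the closest matches when you need them. The hash-based embed() here is a toy stand-in for a real learned embedding model, and every name in it is made up for illustration, not from any actual system.

```python
# Minimal sketch of a retrievable long-term memory (illustrative only).
# A real system would use a learned text-embedding model, not a word hash.
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    """Toy embedding: hash each word into a fixed-size vector."""
    v = np.zeros(DIM)
    for word in text.lower().split():
        v[hash(word.strip(".,?!")) % DIM] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

class LongTermMemory:
    def __init__(self):
        self.texts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def store(self, text: str) -> None:
        """Write a memory: keep the raw text plus its vector."""
        self.texts.append(text)
        self.vectors.append(embed(text))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        """Return the k stored memories most similar to the query."""
        if not self.texts:
            return []
        q = embed(query)
        scores = np.array([v @ q for v in self.vectors])
        top = np.argsort(scores)[::-1][:k]
        return [self.texts[i] for i in top]

memory = LongTermMemory()
memory.store("The user prefers concise answers.")
memory.store("Paris is the capital of France.")
print(memory.retrieve("What city is France's capital?", k=1))
```

The "act upon" part is the step this sketch leaves out: whatever the model is, you'd feed the retrieved memories back into its context before it decides what to do next.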

18