sticky_symbols t1_j9w9b6e wrote

There's obviously intelligence under some definitions. It meets a weak definition of AGI since it reasons about a lot of things almost as well as the average human.

And yes, I know how it works and what its limitations are. It's not that useful yet, but discounting it entirely is as silly as thinking it's the AGI we're looking for.


sticky_symbols t1_j9w3kze wrote

The thing about ChatGPT is that everyone talked about it and tried it. Most people, including ML folks like me, hadn't tried GPT-3.

Everyone I know was pretty shocked at how good GPT-3 is. It did shift timelines for the people I know, including the ones who think about timelines a lot as part of their jobs.


sticky_symbols t1_j9rezil wrote

ML researchers worry a lot less than AGI safety people. I think that's because only the AGI safety people spend a lot of time thinking about getting all the way to agentic superhuman intelligence.

If we're building tools, not much need to worry.

If we're building beings with goals, smarter than ourselves, time to worry.

Now: do you think we'll all stop with tools? Or go on to build cool agents that think and act for themselves?


sticky_symbols t1_j9m8yn3 wrote

Asimov's rules don't work, and many of the stories were actually about that. But they also don't include civilization-ending mistakes. The movie I, Robot actually did a great job updating that premise, I think.

One counterintuitive thing is that people in the field of AI are much harder to convince than civilians. They have a vested interest in research moving ahead at full speed.

As for your BS detector, I don't know what to say. And I'm not linking this account to my real identity. You can believe me or not.

If you're skeptical that such a field exists, you can look at the Alignment Forum as the principal place where we publish.


sticky_symbols t1_j9gxu26 wrote

It's probably mostly a side effect of being able to simulate possible futures. That helps in planning: you can select actions based on likely outcomes several steps away.

And yes, that is also crucial for how we experience our consciousness.


sticky_symbols t1_j9gwa67 wrote

He's the direct father of the whole AGI safety field. I got interested after reading an article by him in maybe 2004. Bostrom credits him with many of the ideas in Superintelligence, including the core logic about alignment being necessary for human survival.

Now he's among the least optimistic. And he's not necessarily wrong.

He could be a little nicer and more optimistic about others' intelligence.


sticky_symbols t1_j8g5ij0 wrote

I'm pretty deep into this field. I have published in the field, and have followed it almost since it started with Yudkowsky.

I believe they both have strong arguments. Or rather, those who share Altman's cautious-but-optimistic view have strong arguments.

But both arguments are based on how AGI will be built. And we simply don't know that. So we can't accurately guess our odds.

But one thing is certain: working hard on this problem will improve our odds of a really good future over disaster.


sticky_symbols t1_j7y4etr wrote

This is an excellent point. Many of us are probably underestimating timelines based on a desire to believe. Motivated reasoning and confirmation bias are huge influences.

You probably shouldn't have mixed it with an argument for longer timelines. That gives people an excuse to argue that point instead and ignore the main one.

The reasonable estimate is very wide. Nobody knows how easy or hard it might be to create AGI. I've looked at all of the arguments, and have enough expertise to understand them. Nobody knows.


sticky_symbols t1_j5kqu80 wrote

You can definitely predict some things beyond five years with good accuracy. Look at Moore's Law. That's far more accurate than a prediction needs to be to be useful. Sure, if nukes were exchanged, all bets are off, but outside of that I just disagree with your statement. For instance: will China's gender imbalance cause it trouble in ten years? It almost certainly will.
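A Moore's-Law-style forecast is just compound doubling, which is why it stays usable over long horizons. A minimal sketch (the ~2-year doubling period is the classic figure; the starting value is an illustrative assumption, not measured data):

```python
def extrapolate(start_value: float, years: float, doubling_period: float = 2.0) -> float:
    """Project a quantity forward, assuming it doubles every `doubling_period` years."""
    return start_value * 2 ** (years / doubling_period)

# Illustrative: a quantity that doubles every 2 years grows 32x in 10 years.
print(extrapolate(1.0, years=10.0))  # 32.0
```

Even if the doubling period drifts by a few months, the projection stays within a useful range, which is the point: trend-based predictions don't need to be exact to beat guessing.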