
Frumpagumpus t1_j8gkbvm wrote

the loop for ai to do recursive self-improvement runs through a very, very long hardware supply chain, unless it can get very far with algorithmic improvements alone.

so i don't see why we shouldn't just assume the less hardware overhang the better,

which would pretty much mean we should go as fast as possible

3

CollapseKitty t1_j8gmpw4 wrote

We simply don't know.

AlphaZero became incomparably better at Go than the accumulated skill of every human across history, within 8 hours of self-play.

AlphaFold took several months, and human assistance, but solved a problem (protein structure prediction) long thought to be beyond human reach.

The risk of assuming that a sufficiently advanced agent won't be able to self-scale, at least to a point beyond our ability to intervene, is incalculable.

If we have a 50% chance of succeeding at alignment if we wait 30 years, but a 5% chance if we continue at the current pace, isn't the correct choice obvious? Even if it's a 90% chance of success at current rates (the opposite is far more likely), why risk EVERYTHING when waiting could even marginally increase our chances?

The payout is arbitrarily large, as is the cost of failure. Every iota of extra chance is incomprehensibly valuable.
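To make that arithmetic concrete, here's a toy expected-value sketch in Python. The symmetric payout/loss and the specific probabilities are illustrative assumptions pulled from the numbers above, not anything rigorous:

```python
# Toy expected-value sketch of the argument above.
# Assumption: aligned AGI pays out +V, misaligned AGI costs -V (symmetric),
# and p is the probability that alignment succeeds. All numbers illustrative.

def expected_value(p_success: float, payout: float = 1.0) -> float:
    """EV of pushing ahead: p * (+payout) + (1 - p) * (-payout)."""
    return p_success * payout + (1.0 - p_success) * -payout

scenarios = {
    "current pace (5% success)": 0.05,
    "wait 30 years (50% success)": 0.50,
    "optimistic current pace (90% success)": 0.90,
}

for label, p in scenarios.items():
    print(f"{label}: EV = {expected_value(p):+.2f} * V")

# Output:
# current pace (5% success): EV = -0.90 * V
# wait 30 years (50% success): EV = +0.00 * V
# optimistic current pace (90% success): EV = +0.80 * V
#
# Since V is arbitrarily large, even a marginal increase in p swamps any
# fixed benefit of getting there sooner.
```

The symmetric payout is the loosest assumption here; the actual claim is stronger, since failure is treated as total.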

Unless you're making the argument from a personal perspective (I want to see AGI before I die) or you value the progress of intelligence at the cost of all other life, you should be in favor of slowing things down.

6

Frumpagumpus t1_j8go4k2 wrote

you'll have to convince tsmc, intel, and all the other fabs, plus the govts of the usa, china, europe, india, and russia (and, if we're talking 30 yrs, maybe nigeria, indonesia, malaysia, and a few others) before you can convince me, is all i'm saying

the risk of nuclear war or other existential catastrophe is also non-zero.

4

CollapseKitty t1_j8gvkye wrote

It's purely hypothetical, unfortunately. You're right that we are actively barreling toward uncontrollable systems, and there is likely nothing, short of global catastrophe/nuclear war, that can shift our course.

I stand by the assessment, and we should acknowledge that our current path is basically mass suicide. For all of life.

The ultimate tragedy of the commons.

3

bildramer t1_j8htxr5 wrote

I think the hardware overhang is already huge; there's no point in taking on risk just so the eventual AI is "ludicrously good/fast" instead of "ludicrously good/fast plus a little bit". Also, algorithms that give you AGI are simple enough that evolution could find one.

2