abc220022 t1_jdzrsbu wrote

Part of the sales pitch behind LeetCode is that you are working on problems used in real coding interviews at tech companies. I believe most LeetCode problems were invented well before they were published on the LeetCode website, so they could still appear, in some form, in a model's training data.

abc220022 t1_j9t681p wrote

The shorter-term problems you mention are important, and I think it would be great for technical and policy-minded people to try to alleviate such threats. But it's also important for people to work on the potential longer-term problems associated with AGI.

OpenAI, and organizations like it, are racing towards AGI - it's literally in their mission statement. The current slope of ML progress is incredibly steep: seemingly every week, some major ML lab demonstrates an impressive new capability built on only minor tweaks to the underlying transformer paradigm. The longer this continues, the more impressive these capabilities look; and the longer scaling curves keep going with no clear ceiling, the more likely it seems that AGI will arrive soon, say, within the next few decades. And if we do succeed at making AI as capable as or more capable than us, then all bets are off.

None of this is a certainty. One of Yudkowsky's biggest flaws, imo, is the certainty with which he makes claims backed by little rigorous argument. But given recent developments, the probability of a dangerous long-term outcome is high enough that I'm glad we have people working on this problem, and I hope more will join in.
