
SkyeandJett t1_je81vu0 wrote

Regarding "we have no way of knowing what's happening in the black box": you're absolutely right, and in fact it's mathematically impossible. I'd suggest reading Wolfram's post on it. There is no calculably "safe" way of deploying an AI. We can certainly do our best to align it with our goals and values, but you'll never truly KNOW with the certainty that Eliezer seems to want, and it's foolhardy to believe you can prevent the emergence of AGI in perpetuity. At some point, someone somewhere will cross that threshold, whether intentionally or accidentally. I'm not saying I believe there's zero chance an ASI will wipe out humanity, that would be a foolish position as well, but I'm pretty confident in our odds, and at least OpenAI has some sort of plan for alignment. You know China is basically going "YOLO" in an attempt to catch up. Since we're more or less locked on this path, I'd rather they crossed that threshold first.

https://writings.stephenwolfram.com/2023/03/will-ais-take-all-our-jobs-and-end-human-history-or-not-well-its-complicated/


GorgeousMoron OP t1_je86qge wrote

Thanks! I'll check out the link. Yes, I intuitively agree based on what I already know, and I would argue further that aligning an ASI, an intelligence that is by definition superior to our own, is flatly and fundamentally impossible.

We bought the ticket, now we're taking the ride. Buckle up, buckaroos!
