Introsium t1_j5ax6rh wrote

Watching OpenAI’s best and brightest optimistically release ChatGPT, only to have it “jailbroken” within hours by [checks notes] people asking it to do bad things “just as a joke bro”, should be a clear, open-and-shut case: we are infants in AI safety, and we need to slow the fuck down, because there are some mistakes where once is enough.

“Do not conjure up that which you cannot put back down” is basic wizard shit.


Introsium t1_j5awirb wrote

The narrow AIs are textbook cases of misalignment. When the algorithm is optimizing for a goal like “amount of time people spend watching YouTube videos”, we get exactly what we asked for, and what no one fucking wants.

The problem with these applications is that they’re not aligned with human values because they’re not designed for or by humans.
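The watch-time example above can be sketched in a few lines. This is a toy illustration, not any real recommender: the video catalog, the minute counts, and the satisfaction scores are all hypothetical. The point is only that a greedy optimizer faithfully maximizes whatever proxy you hand it, and anything left out of the objective might as well not exist.

```python
# Toy sketch of proxy misalignment (all titles and numbers hypothetical).
# A recommender maximizing predicted watch time will happily serve the
# content people regret watching, because regret isn't in the objective.

videos = [
    # (title, predicted_watch_minutes, user_satisfaction)
    ("In-depth documentary",     22, 0.9),
    ("Outrage-bait compilation", 47, 0.2),
    ("Autoplay filler loop",     65, 0.1),
]

def recommend(catalog, objective):
    """Greedy recommender: return the item scoring highest on `objective`."""
    return max(catalog, key=objective)

# Optimizing the proxy (watch time) ignores satisfaction entirely.
print(recommend(videos, lambda v: v[1])[0])  # → Autoplay filler loop

# The same machinery pointed at what people actually value picks differently.
print(recommend(videos, lambda v: v[2])[0])  # → In-depth documentary
```

Same algorithm, same data; only the objective changed. That's the whole "we get exactly what we asked for" problem in miniature.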


Introsium t1_j24d816 wrote

You could program a non-AI system to perform any given task, but the entire point of my statement is that it casually passes the exam. It was not programmed to do that, yet that doesn’t stop it from passing what’s commonly regarded as a very hard test. It simultaneously crushes programming challenges. But, most importantly, it can do most people’s jobs. It can’t do all of them perfectly, but it can do them so much more cheaply than humans that the loss in quality is worth it.

You’re looking at a Fabricator and saying “but that other machine can build a car, this isn’t really impressive”, which is entirely missing the point.


Introsium t1_j1jsnxy wrote

Reply to comment by b_lett in Future of Games by stoneman217

VR is still in its toddling stages: we know what it’s supposed to do, but it’s clunky and cumbersome. These aren’t fundamental limits. The first cell phones were clunky and cumbersome, too, but now they’re an omnipresent extension of most humans.

Media will continue to get more immersive; the question is whether it’s headset-based VR, or whether that technology gets leapfrogged by something like direct nerve stimulation.