Sashinii t1_iurkiad wrote

They address the potential negatives with built-in safety checks, while also encouraging suggestions for other methods to ensure that the AI is as safe as possible.

12

ProShortKingAction t1_iurmfoc wrote

Sorry, I took that as them saying they had built-in safety checks meant to prevent the robot from performing an unsafe physical action, not to prevent it from writing vulnerable code. I might have misinterpreted that.

Another thing I would like to bring up in favor of this model of going about things is that vulnerabilities slip through in regular code all the time. This approach doesn't have to be perfect, just safer than the current approach. It's like with driverless cars: they don't have to be perfect, just safer than a car driven by a human, which seems like a low bar. I just don't see anything from this post implying that a safe way to do this approach isn't rather far off.

Edit: In the Twitter thread made by one of the researchers, posted elsewhere in this thread, they very vaguely mention "... and many potential safety risks need to be addressed." It's hard to tell if this is referencing the robot physically interacting with the world, cybersecurity concerns, or both.

6