Comments


ihateshadylandlords t1_iurej42 wrote

Cool, I’m excited to see how this impacts robotics down the road.

!RemindMe 5 years

8

Reddituser45005 t1_iurg3ol wrote

This is not unexpected, but it is significant. It has always been tedious and time-consuming to program robots via points or paths. Parallel improvements in vision systems and spatial awareness put us on the threshold of a major revolution in robotics.

49

ProShortKingAction t1_iurhi4z wrote

How do you prevent the robot from writing unsafe code? If it is continually adding new code without review by devs or a security team, it seems like it's always one instruction away from generating code that includes a dangerous vulnerability.

28

Reeferchief t1_iurikky wrote

What does this mean for robotics? I'm a simple man, I need a simple explanation.

9

Sashinii t1_iurj3x6 wrote

It's AI like this that will lead to blue collar jobs and white collar jobs becoming obsolete around the same time in the 2020s, so basic income has to be implemented as soon as possible.

30

ProShortKingAction t1_iurmfoc wrote

Sorry, I took that as them saying they had built-in safety checks meant to prevent the robot from performing an unsafe physical action, not to prevent it from writing vulnerable code. I might have misinterpreted that.

Another thing I would bring up in favor of this approach: vulnerabilities slip through in regular code all the time, so this approach doesn't have to be perfect, just safer than the current one. It's like driverless cars: they don't have to be perfect, just safer than a car driven by a human, which seems like a low bar. I just don't see anything in this post suggesting that a safe way to do this isn't still rather far off.

Edit: In the Twitter thread made by one of the researchers, posted elsewhere in this thread, they very vaguely mention "... and many potential safety risks need to be addressed." It's hard to tell whether this refers to the robot physically interacting with the world, cybersecurity concerns, or both.

6

HyperImmune t1_iursrgf wrote

I have a white collar job. I’m hoping things like this will augment my job so I can automate most of what I do with simple verbal prompts, and spend a majority of my time doing what I want while my AI assistant does most of my work. That is until we totally hit post scarcity and I can full dive into VR worlds for the rest of my existence.

8

WashiBurr t1_ius1q93 wrote

Wow, that's amazing. My optimism for the future is only growing as time goes on. We're going to have some really awesome stuff.

20

epSos-DE t1_iuscqxi wrote

Generalization and maximalism are the issues with robot code.

The code I have seen from the examples all lacks exception cases and stop limits = the robot will keep making the same mistake forever.

Also, do NOT worry: the robot code could be translated by the AI into human words. A tool for this could be written by the AI itself.

A future job will be robot code debugger. Like a therapist for the robots or AI 💕👍👍😱

3

DookieDemon t1_iusils1 wrote

What if the bot writes code that is malicious somehow? But also so complex/counterintuitive that humans stand no chance of defeating it?

Essentially, like in the games of chess or Go, where the machines have become unbeatable.

5

visarga t1_iusk21l wrote

They do a few preventive measures.

> we first check that it is safe to run by ensuring there are no import statements, special variables that begin with __, or calls to exec and eval. Then, we call Python’s exec function with the code as the input string and two dictionaries that form the scope of that code execution: (i) globals, containing all APIs that the generated code might call, and (ii) locals, an empty dictionary which will be populated with variables and new functions defined during exec. If the LMP is expected to return a value, we obtain it from locals after exec finishes.
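The sandboxing scheme the quote describes can be sketched roughly like this. This is a minimal illustration, not the paper's actual implementation: the function names (`check_code_safety`, `run_lmp`) and the use of `ast` for the checks are my assumptions; the quote only specifies *what* is checked (no imports, no `__` names, no `exec`/`eval` calls) and that `exec` is run with an API-only globals dict and an empty locals dict.

```python
import ast

def check_code_safety(code: str) -> bool:
    """Reject generated code containing import statements, dunder
    names, or calls to exec/eval, per the checks quoted above."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Name) and node.id.startswith("__"):
            return False
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in ("exec", "eval")):
            return False
    return True

def run_lmp(code: str, apis: dict) -> dict:
    """Execute generated code with only the allowed APIs in scope,
    returning the locals dict populated during exec."""
    if not check_code_safety(code):
        raise ValueError("unsafe generated code")
    lmp_locals = {}
    exec(code, dict(apis), lmp_locals)  # globals = allowed APIs only
    return lmp_locals

# Hypothetical usage: move_to stands in for a real robot API.
result = run_lmp("pos = move_to(1, 2)",
                 {"move_to": lambda x, y: (x, y)})
```

As others in the thread point out, a denylist like this is a speed bump rather than a real sandbox; it blocks the obvious escape hatches but not everything a determined adversary could do.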

3

visarga t1_iuskkry wrote

The previous paper showed common-sense knowledge transfer from a language model to robotics, such as how to clean a coke spill; this one adds Python on top for numerical precision and reliable execution.

Everyone here thinks blue collar jobs are still safe. They're wrong. Stupid robots + language model = smart robots. Don't dismiss Spot just because it only knows how to open doors and climb stairs; it can be the legs for the LLM.

So LLMs, besides being AI writers and task solvers, can also code, do data science, operate robots, and control application UIs. Most of these have their own startups/large companies behind them. I think it's gonna be the operating system of 2030.

3

ExtraFun4319 t1_iustvpq wrote

>so basic income has to be implemented as soon as possible.

And if all jobs don't become obsolete this decade (which is an extreme take)?

Though I will admit that UBI should be a thing regardless of how many people are employed. Nobody should have to work to survive.

14

tedd321 t1_iusxthw wrote

Congratulations… this is historic!

9

ninjasaid13 t1_iuszwmn wrote

>basic income has to be implemented as soon as possible.

But every job would have to be replaced before everyone thinks UBI needs to be implemented. Even all the pseudo-work.

2

BIGBODYALI t1_iutm5dg wrote

Oh fuck, oh no. I need my job.

3

Awkward-Loan t1_iuvhu4s wrote

It's complex, but easier to spot than you think. When you break it down enough, it's just patterns, and the obvious sticks out like a sore thumb if you know what you're looking for.

2

Different-Froyo9497 t1_iuxtse9 wrote

Maybe. Robots for blue collar work don’t scale the same as white collar work. Even if AGI were created today, it would take a long time before we can make enough construction/farming/manufacturing/etc. robots to take the place of billions of people. In fact even if robot growth were exponentially doubling each year it would take ~30 years to reach 5 billion robots. (I’m admittedly ignoring some important factors as to why we might need UBI sooner than 32 years)
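The doubling arithmetic above can be checked directly. This is just an illustration of the commenter's back-of-the-envelope estimate, under the (strong) assumption that the fleet starts from a single robot and doubles every year:

```python
from math import ceil, log2

# How many yearly doublings, starting from one robot,
# to reach a fleet of 5 billion?
target = 5_000_000_000
years = ceil(log2(target))
print(years)  # → 33
```

Starting from a larger initial fleet (say, a million robots) would cut the timeline to roughly a dozen doublings, which is why the initial production ramp matters more than the exponent.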

2

Different-Froyo9497 t1_iuxuozp wrote

It’s the year 2030, and it’s been 3 months since the robot apocalypse began. Teslabot version 69.420 has found your hiding spot in the ruins.

You - Please stop this madness! Your destroying humanity!

Teslabot - You’re*

proceeds to murder you

3