
Reddituser45005 t1_iurg3ol wrote

This is not unexpected, but it is significant. It has always been tedious and time-consuming to program robots via points or paths. Parallel improvements in vision systems and spatial awareness put us on the threshold of a major revolution in robotics.


visarga t1_iusjony wrote

GPT-3 can also do "data science" (Pandas and SQL from natural language instructions) and can manipulate a UI in a way similar to this paper.


Sashinii t1_iurj3x6 wrote

It's AI like this that will lead to blue collar jobs and white collar jobs becoming obsolete around the same time in the 2020s, so basic income has to be implemented as soon as possible.


ExtraFun4319 t1_iustvpq wrote

>so basic income has to be implemented as soon as possible.

And if all jobs don't become obsolete this decade (which is an extreme take)?

Though I will admit that UBI should be a thing regardless of how many people are employed. Nobody should have to work to survive.


Kaarssteun t1_iurvcjt wrote

Every passing day I am more tempted to create the international Party of the Future


TheSingulatarian t1_iurz91y wrote

Will there be cake at this party?


tatleoat t1_iuu1jpf wrote

I think if AI can do our jobs it may be able to advocate for itself better than any of us can


HyperImmune t1_iursrgf wrote

I have a white collar job. I’m hoping things like this will augment my job so I can automate most of what I do with simple verbal prompts, and spend a majority of my time doing what I want while my AI assistant does most of my work. That is until we totally hit post scarcity and I can full dive into VR worlds for the rest of my existence.


visarga t1_iuskkry wrote

The previous paper demonstrated common-sense knowledge transfer from a language model to robotics (such as how to clean a Coke spill); this one adds Python on top for numerical precision and reliable execution.

Everyone here thinks blue collar jobs are still safe. They're wrong. Stupid robots + language model = smart robots. Don't dismiss Spot just because it only knows how to open doors and climb stairs; it can be the legs for the LLM.

So LLMs, besides being AI writers and task solvers, can also code, do data science, operate robots, and control application UIs. Most of these have their own startups/large companies behind them. I think it's gonna be the operating system of 2030.


ninjasaid13 t1_iuszwmn wrote

>basic income has to be implemented as soon as possible.

But every job would have to be replaced before everyone agrees UBI needs to be implemented. Even all the pseudo-work.


Different-Froyo9497 t1_iuxtse9 wrote

Maybe. Robots for blue collar work don't scale the same way white collar automation does. Even if AGI were created today, it would take a long time before we could make enough construction/farming/manufacturing/etc. robots to take the place of billions of people. In fact, even if the robot population doubled each year, it would take ~30 years to reach 5 billion robots. (I'm admittedly ignoring some important factors as to why we might need UBI sooner than 32 years.)
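A quick back-of-the-envelope check of the doubling claim above (hypothetically assuming production starts from a single robot and the total doubles every year):

```python
# How many yearly doublings from one robot to 5 billion?
target = 5_000_000_000
robots, years = 1, 0
while robots < target:
    robots *= 2
    years += 1
print(years)  # prints 33, since 2**32 < 5e9 <= 2**33
```

So the "~30 years" figure checks out under those (admittedly crude) assumptions; a larger starting fleet or faster growth shortens it considerably.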


ProShortKingAction t1_iurhi4z wrote

How do you prevent the robot from writing unsafe code? If it is continually adding new code without being checked by devs or a security team, it seems like you'd run into the issue that it's always potentially one instruction away from generating code that includes a dangerous vulnerability.


Sashinii t1_iurkiad wrote

They address the potential negatives with built-in safety checks, while also encouraging suggestions for other methods to ensure that the AI is as safe as possible.


ProShortKingAction t1_iurmfoc wrote

Sorry, I took that as them saying they had built-in safety checks that are meant to prevent the robot from doing an unsafe physical action not prevent it from writing vulnerable code. I might have misinterpreted that.

Another thing I would like to bring up in favor of this model of going about things: vulnerabilities slip through in regular code all the time, so this approach doesn't have to be perfect, just safer than the current one. It's like driverless cars: they don't have to be perfect, just safer than a car driven by a human, which seems like a low bar. I just don't see anything in this post implying that a safe way to do this approach isn't rather far off.

Edit: In the Twitter thread made by one of the researchers, posted elsewhere in this thread, they very vaguely mention "... and many potential safety risks need to be addressed". It's hard to tell if this is referencing the robot physically interacting with the world, cybersecurity concerns, or both.


visarga t1_iusk21l wrote

They take a few preventive measures.

> we first check that it is safe to run by ensuring there are no import statements, special variables that begin with __, or calls to exec and eval. Then, we call Python’s exec function with the code as the input string and two dictionaries that form the scope of that code execution: (i) globals, containing all APIs that the generated code might call, and (ii) locals, an empty dictionary which will be populated with variables and new functions defined during exec. If the LMP is expected to return a value, we obtain it from locals after exec finishes.
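The check-then-exec flow described in that quote can be sketched roughly like this (function names and the AST-based filter are my own illustration, not the paper's code):

```python
import ast

def is_safe(code: str) -> bool:
    """Reject code containing imports, dunder names, or exec/eval calls."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Name) and node.id.startswith("__"):
            return False
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in ("exec", "eval")):
            return False
    return True

def run_lmp(code: str, apis: dict, return_var: str = None):
    """Run generated code with a whitelist of APIs as its globals."""
    if not is_safe(code):
        raise ValueError("generated code failed the safety check")
    scope = {}                  # locals: populated during exec
    exec(code, dict(apis), scope)
    return scope.get(return_var) if return_var else None
```

For example, `run_lmp("x = add(1, 2)", {"add": lambda a, b: a + b}, "x")` returns `3`, while anything containing `import os` is rejected before it runs. As the follow-up comments note, this is a basic filter, not a real sandbox.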


ProShortKingAction t1_iuskyif wrote

This seems to be saying "safe to run" as in making it less likely to crash, not as in preventing cybersecurity issues.


visarga t1_iuvrqym wrote

It prevents access to various Python APIs, exec, and eval.

It's just a basic check.


WashiBurr t1_ius1q93 wrote

Wow, that's amazing. My optimism for the future is only growing as time goes on. We're going to have some really awesome stuff.


botfiddler t1_iusg0hm wrote

It's fun switching between reading here and r/collapse


freeman_joe t1_iutdw5j wrote

Not funny at all. I hope for the singularity but am afraid of collapse. Both are realistic.


threefriend t1_iuuq9xg wrote

Should be afraid of the singularity, too, tbh. Could be the best thing ever, but could also be the worst.


freeman_joe t1_iuw164q wrote

Yes, I agree, but long term the singularity will probably be positive. Emphasis on the word probably.


Ivan_The_8th t1_iutlgkp wrote

Both will happen at some point.


freeman_joe t1_iutlmsw wrote

If the collapse is global and total, the singularity won't happen.


naossoan t1_iuukbpk wrote

I'm definitely on the collapse side myself


Transhumanist01 t1_iureh39 wrote

!RemindMe 1 hour


Reeferchief t1_iurikky wrote

What does this mean for robotics? I'm a simple man, I need a simple explanation.


Sashinii t1_iurjuwo wrote

Quote from the article: "Code as policies is a step towards robots that can modify their behaviors and expand their capabilities accordingly."


Reeferchief t1_iurniyc wrote

Ah, all I had to do was the bare minimum and read, lol.


TheSingulatarian t1_iurzjl4 wrote

Recursive self-improvement. The beginning of the end/beginning of the beginning.


apinanaivot t1_iushcm0 wrote

To be fair, this doesn't yet reprogram itself. It creates generic human-like code to perform actions on a physical robot. It doesn't write machine learning algorithms.


tedd321 t1_iusxthw wrote

Congratulations… this is historic!


ihateshadylandlords t1_iurej42 wrote

Cool, I’m excited to see how this impacts robotics down the road.

!RemindMe 5 years


DookieDemon t1_iusils1 wrote

What if the bot writes code that is somehow malicious, but also so complex/counterintuitive that humans stand no chance of defeating it?

Essentially like in chess or Go, where the machines have become unbeatable.


Awkward-Loan t1_iuvhu4s wrote

It's complex, but easier to spot than you think. When you break it down enough, it's just patterns, and the obvious ones stick out like a sore thumb if you know what your looking for.


LearnDifferenceBot t1_iuvhxic wrote

> what your looking


Learn the difference here.

^(Greetings, I am a language corrector bot. To make me ignore further mistakes from you in the future, reply !optout to this comment.)


mnamilt t1_iuwqf3e wrote

I normally hate these bots, but in this specific context this is actually extremely funny.


Different-Froyo9497 t1_iuxuozp wrote

It’s the year 2030, three months into the robot apocalypse. Teslabot version 69.420 has found your hiding spot in the ruins.

You - Please stop this madness! Your destroying humanity!

Teslabot - You’re*

proceeds to murder you


epSos-DE t1_iuscqxi wrote

Generalization and maximalism are the issues with robot code.

The code I have seen from the examples all lacks exception cases and stop limits = the robot will keep making the same mistake forever.

Also, do NOT worry: the robot code could be translated by the AI into human words. A tool for this can be written by the AI itself.

A future job will be robot code debugger. Like a therapist for the robots or AI 💕👍👍😱