visarga

visarga t1_ivnccvy wrote

Putting an LLM on top of a simple robot makes the robot much smarter (PaLM-SayCan). The Chinese Room doesn't have embodiment, so was it even a fair comparison? Maybe the Chinese Room on top of a robotic body would be much improved.

The argument tries to say that the intelligence is in the human, not in the "book". But I disagree; I think intelligence is mostly in the culture. A human who grew up alone, without culture and society, would not be very smart and could not solve tasks in any language. Foundation models today are trained on the whole internet and display new skills, so it must be that our skills reside in the culture. A model learning from culture would therefore also be intelligent, especially if it's embodied and allowed a feedback control loop.

7

visarga t1_ivkl9uz wrote

Not quite God: it will still be limited by the speed of light. There's only so large a volume within which people can interact in real time, larger than Earth but smaller than the Moon's orbit (~3 s round-trip lag). The farther away you are, the less you can participate in the virtual world. Even if AI turns everything to computronium, it can't be too large.
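
Back of the envelope, the round-trip light lag at a few scales (the distances here are rough approximations):

```python
# Rough numbers behind the "~3 s lag" claim: round-trip light time
# over a few approximate distances.
C = 299_792_458  # speed of light, m/s

distances_m = {
    "Earth antipode (~20,000 km)": 2.0e7,
    "geostationary orbit (~36,000 km)": 3.6e7,
    "Moon's orbit (~384,000 km)": 3.84e8,
}

for name, d in distances_m.items():
    print(f"{name}: round-trip ~{2 * d / C:.2f} s")
```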

5

visarga t1_ivkg33g wrote

I have thought about that and am ready to assume the risks. I want to leave behind as much data as possible to maximise the chance of being reconstructed. Someone will create a pre-AGI world simulation and use all the internet scrapes as training data. The people with more detailed logs will get better reconstructions.

Even GPT-3 is good enough to impersonate real people in polls. You can poll GPT-3 (aka "silicon sampling") and approximate reality. In the future, whenever you ask yourself "who am I?", it will be more probable that you are a simulation of yourself than the real thing.
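
A hedged sketch of what such "silicon sampling" could look like; `query_llm` is a hypothetical stand-in for a completion-API call, and the persona fields are invented for illustration:

```python
from collections import Counter

# PERSONA_TEMPLATE and its fields are made-up illustrations;
# query_llm is a hypothetical stand-in for a GPT-3 completion call.
PERSONA_TEMPLATE = (
    "You are a {age}-year-old {occupation} from {region}.\n"
    "Question: {question}\nAnswer with Yes or No:"
)

def silicon_poll(question, personas, query_llm):
    # Ask the same question while conditioning on different backstories,
    # then tally the simulated answers as if they were survey responses.
    answers = [
        query_llm(PERSONA_TEMPLATE.format(question=question, **p)).strip()
        for p in personas
    ]
    return Counter(answers)
```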

1

visarga t1_ivipogr wrote

In 2012 NLP was in its infancy. We were using recurrent neural nets (LSTMs), but they could not handle long-range contextual dependencies and were difficult to scale up.

In 2017 we got a breakthrough with the paper "Attention Is All You Need": suddenly long-range context and fast, scalable training were possible. By 2020 we got GPT-3, and this year there are over 10 alternative models, some open sourced. They are all trained on an enormous volume of text and exhibit signs of generality in their abilities. Today NLP can solve difficult problems in code, math and natural language.
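
For context, a minimal numpy sketch (mine, not the paper's) of the scaled dot-product self-attention at the heart of that breakthrough; every position attends to every other position in one step, which is what removes the long-range bottleneck of recurrent nets:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Core operation from "Attention Is All You Need": each position mixes
    # information from all positions, weighted by similarity of queries and keys.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise similarities
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)            # softmax over the sequence
    return weights @ V                                   # weighted mix of values

# toy example: self-attention over a sequence of 5 tokens with 8-dim embeddings
x = np.random.randn(5, 8)
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (5, 8)
```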

2

visarga t1_ivinkvl wrote

  1. Take a look at the neural scaling laws paper, figures 2 and 3 especially. Experiments show that more data and more compute are better. It's been a thing for a couple of years already; the paper is authored by OpenAI and has 260 citations. (A rough sketch of the power-law form is below the list.)

  2. If you work with AI, you know it always makes mistakes, just as with Google Search you know you often have to work around its problems. Checking that models don't make mistakes is big business today, called "human in the loop". There is real awareness of model failure modes. Not to mention that even generative AIs like Stable Diffusion require lots of prompt massaging to work well.

  3. sure
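
For point 1, a minimal sketch of the power-law form those figures describe; the exponents and constants below are illustrative placeholders, not the paper's exact fitted values:

```python
# Scaling-law form: test loss falls as a power law in data and compute.
# Constants are placeholders of roughly the right order of magnitude.
def loss_vs_data(D, D_c=5e13, alpha_D=0.095):
    # Loss as a function of dataset size D (tokens).
    return (D_c / D) ** alpha_D

def loss_vs_compute(C, C_c=3e8, alpha_C=0.05):
    # Loss as a function of training compute C (PF-days).
    return (C_c / C) ** alpha_C

for D in (1e9, 1e10, 1e11, 1e12):
    print(f"D={D:.0e} tokens -> relative loss {loss_vs_data(D):.2f}")
```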

9

visarga t1_iv93rk1 wrote

Wikipedia defines qualia as individual instances of subjective, conscious experience. Thinking is part of that.

How can we think without feeling? We're not Platonic entities; we have real bodies with real needs. Feeling good or bad about an action or situation is required in order to survive.

0

visarga t1_iuskkry wrote

The previous paper demonstrated common-sense knowledge transfer from the language model to robotics, such as how to clean up a Coke spill; this one adds Python on top for numerical precision and reliable execution.

Everyone here thinks blue-collar jobs are still safe. They're wrong. Stupid robots + language model = smart robots. Don't dismiss Spot just because it only knows how to open doors and climb stairs; it can be the legs for the LLM.

So LLMs, besides being AI writers and task solvers, can also code, do data science, operate robots and control application UIs. Most of these uses have their own startups or large companies behind them. I think they're gonna be the operating system of 2030.

3

visarga t1_iusk21l wrote

They take a few preventive measures.

> we first check that it is safe to run by ensuring there are no import statements, special variables that begin with __, or calls to exec and eval. Then, we call Python’s exec function with the code as the input string and two dictionaries that form the scope of that code execution: (i) globals, containing all APIs that the generated code might call, and (ii) locals, an empty dictionary which will be populated with variables and new functions defined during exec. If the LMP is expected to return a value, we obtain it from locals after exec finishes.
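
Roughly, the quoted procedure corresponds to something like this sketch (an approximation on my part, not the paper's actual code):

```python
import ast

def safe_exec_lmp(code_str, api_globals):
    # Minimal sketch of the procedure quoted above: reject unsafe constructs,
    # then exec the generated code with (i) a globals dict of allowed APIs and
    # (ii) an empty locals dict that collects whatever the code defines.
    tree = ast.parse(code_str)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise ValueError("import statements are not allowed")
        if isinstance(node, ast.Name) and node.id.startswith("__"):
            raise ValueError("special __ variables are not allowed")
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name) \
                and node.func.id in ("exec", "eval"):
            raise ValueError("calls to exec/eval are not allowed")

    gvars = dict(api_globals)  # (i) globals: APIs the generated code may call
    lvars = {}                 # (ii) locals: filled with variables/functions during exec
    exec(code_str, gvars, lvars)
    return lvars               # any return value is read out of locals afterwards
```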

3

visarga t1_iu8bzyj wrote

GPT-3 can simulate people very, very well in polls. Apparently it learned not just thousands of skills, but also all types of personalities and their different viewpoints.

Think about this: you can poll a language model instead of a population. It's like The Matrix, but the Neos are virtual personality profiles running on GPT-3. Or it's like Minority Report, but with AI oracles.

I bet all sorts of influencers, politicians, advertisers and investors are going to want a virtual focus group that selects whichever of 100 variations of their message has the maximum impact. An automated campaign expert.
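
A rough sketch of that virtual focus group, assuming a hypothetical `query_llm` helper and invented persona strings:

```python
def score_message(message, personas, query_llm):
    # Ask each simulated persona to rate a message 1-10 and average the scores.
    total = 0
    for p in personas:
        reply = query_llm(
            f"You are {p}. On a scale of 1 to 10, how persuasive is this message?\n"
            f"Message: {message}\nScore:"
        )
        total += int(reply.strip().split()[0])  # naive parse of the rating
    return total / len(personas)

def best_variant(variants, personas, query_llm):
    # Pick the message variant with the highest average simulated impact.
    return max(variants, key=lambda m: score_message(m, personas, query_llm))
```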

On the other hand, it's like we have uploaded ourselves. You can conjure anyone by calling out their name and describing their backstory, but the uploads don't exist in a separate state; they all live in the same model. Fun fact: depending on who GPT-3 thinks it is playing, it is better or worse at math.

3