visarga
visarga t1_ivne654 wrote
Reply to comment by Surur in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
And therein lies the real cost of distance. You get one round of play while the guys in the core get ten.
visarga t1_ivnccvy wrote
Reply to [D] What does it mean for an AI to understand? (Chinese Room Argument) - MLST Video by timscarfe
Putting an LLM on top of a simple robot makes the robot much smarter (PaLM-SayCan). The Chinese Room has no embodiment, so was it a fair comparison? Maybe the Chinese Room on top of a robotic body would be much improved.
The argument tries to say that intelligence is in the human, not in the "book". But I disagree; I think intelligence is mostly in the culture. A human who grew up alone, without culture and society, would not be very smart or able to solve tasks in any language. Foundation models are trained on the whole internet today, and they display new skills. It must be that our skills reside in the culture. So a model learning from culture would also be intelligent, especially if embodied and allowed a feedback control loop.
visarga t1_ivkl9uz wrote
Reply to comment by Surur in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
Not quite God - it will be limited by the speed of light. There's only so large a volume within which people can interact in real time: larger than Earth but smaller than the Moon's orbit (about 3s of round-trip lag). The further away you are, the less you can participate in the virtual world. Even if AI turns everything to computronium, it can't be too large.
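A quick back-of-the-envelope check of that lag figure (the constants are standard physical values):

```python
# How long does light take to cross the Earth-Moon distance?
C = 299_792_458        # speed of light in vacuum, m/s
MOON_ORBIT = 3.844e8   # mean Earth-Moon distance, m

one_way = MOON_ORBIT / C
round_trip = 2 * one_way
print(f"one-way lag:  {one_way:.2f} s")    # ~1.28 s
print(f"round-trip:   {round_trip:.2f} s") # ~2.56 s
```

So a sphere the size of the Moon's orbit already costs you roughly 3 seconds per exchange, which is the limit the comment is gesturing at.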
visarga t1_ivkg33g wrote
Reply to comment by Surur in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
I have thought about that and am ready to assume the risks. I want to leave behind as much data as possible to maximise the chance of being reconstructed. Someone will create a pre-AGI world simulation and will use all the internet scrapes as training data. The people with more detailed logs will have better reconstructions.
Even GPT-3 is good enough to impersonate real people in polls. You can poll GPT-3 (aka "silicon sampling") and approximate reality. In the future, whenever you ask yourself "who am I?", it will be more probable that you are a simulation of yourself than the real thing.
visarga t1_ivkftt5 wrote
Reply to comment by Gold-and-Glory in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
> *as pets.
What, you don't trust the AGI will find a way to download itself into human brains? The human body is a refined and efficient platform for intelligence. Could be the best hardware for AGI.
visarga t1_ivkee3z wrote
Reply to comment by FrankDrakman in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
I am sure he had something, but I don't believe it was comparable to what we have today. Lots of research has been done in the last 10 years.
visarga t1_ivircgg wrote
Reply to comment by ascendrestore in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
Real data is biased and unbalanced. The "long tail" is hard to learn; there are papers trying to rebalance training for those rare classes. Unfortunately, most datasets follow a power law, so they have many rare classes.
visarga t1_ivir19q wrote
Reply to comment by AllanfromWales1 in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
No, models are tools; it's how you wield them. What I've noticed is that models tend to attract activist types who have an agenda to push, so they try to control them. Not just in AI - also in economics and other fields.
visarga t1_ivipogr wrote
Reply to comment by FrankDrakman in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
In 2012 NLP was in its infancy. We were using recurrent neural nets called LSTMs, but they could not handle long-range contextual dependencies and were difficult to scale up.
In 2017 we got a breakthrough with the paper "Attention Is All You Need"; suddenly long-range context and fast, scalable learning became possible. By 2020 we got GPT-3, and this year there are over 10 alternative models, some open-sourced. They are all trained on an enormous volume of text and exhibit signs of generality in their abilities. Today NLP can solve difficult problems in code, math and natural language.
visarga t1_ivioifb wrote
Reply to comment by eliyah23rd in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
> They overcome overfitting using hundreds of billions of parameters
Increasing model size usually increases overfitting. The opposite effect comes from increasing the dataset size.
visarga t1_ivinkvl wrote
Reply to comment by Clean-Inevitable538 in The big data delusion – the more data we have, the harder it is to find meaningful patterns in the world. by IAI_Admin
-
Take a look at the neural scaling laws paper, figures 2 and 3 especially. Experiments show that more data and more compute are better. It's been known for a couple of years already; the paper, authored by OpenAI, has over 260 citations.
-
If you work with AI, you know it always makes mistakes - just like with Google Search, where you often have to work around its problems. Checking that models don't make mistakes is big business today, called "human in the loop". There is awareness of model failure modes. Not to mention that even generative AIs like Stable Diffusion require lots of prompt massaging to work well.
-
sure
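The scaling laws mentioned above have a simple power-law form: loss falls smoothly as parameter count N grows. A sketch, using approximate constants reported for language models (treat the exact numbers as assumptions for illustration):

```python
# Power-law form of the neural scaling law for model size:
# loss(N) = (N_c / N) ** alpha_N
# Constants below are approximate published values, used illustratively.
ALPHA_N = 0.076
N_C = 8.8e13

def loss(n_params: float) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

for n in (1e6, 1e9, 1e12):
    print(f"N = {n:.0e}  ->  predicted loss {loss(n):.2f}")
```

The key property is monotone improvement: every order of magnitude in parameters buys a predictable drop in loss, which is the "more data and more compute are better" claim in quantitative form.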
visarga t1_ive9q9z wrote
Reply to comment by abudabu in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
> if those people choose to stop doing the computation would we be compelled to consider that “murder” of the AI?
You mean like the fall of the Roman empire, where society disintegrated and its people stopped performing their duties?
visarga t1_ive9liz wrote
Reply to comment by Glitched-Lies in Nick Bostrom on the ethics of Digital Minds: "With recent advances in AI... it is remarkable how neglected this issue still is" by Smoke-away
Apply the Turing test - if it walks like a duck and quacks like a duck...
visarga t1_iv93rk1 wrote
Reply to comment by turnip_burrito in Ray Kurzweil hits the nail on the head with this short piece. What do you think about computronium / utilitronium and hedonium? by BinaryDigit_
Wikipedia defines qualia as individual instances of subjective, conscious experience. Thinking is part of that.
How can we think without feeling? We're not Platonic entities, we have real bodies with real needs. Feeling good or bad about an action or situation is required in order to survive.
visarga t1_iv6hdgr wrote
Reply to comment by turnip_burrito in Ray Kurzweil hits the nail on the head with this short piece. What do you think about computronium / utilitronium and hedonium? by BinaryDigit_
Panpsychism is misguided. Mind is a property of agents, not a "fundamental and ubiquitous" property of the world. Mind and consciousness exist for a purpose: to keep the body alive by adapting to the environment.
visarga t1_iuvrqym wrote
Reply to comment by ProShortKingAction in Robots That Write Their Own Code by kegzilla
It prevents access to various Python APIs, plus exec and eval.
It's just a basic check.
visarga t1_iust3s8 wrote
Reply to comment by Different-Froyo9497 in OpenAI Whisper is a Breakthrough in Speech Recognition by millerlife777
Go ahead. Someone else will publish a competing AGI soon enough. We can't delay it. I think we're just on the edge of the precipice.
visarga t1_iussulq wrote
Reply to comment by solidwhetstone in OpenAI Whisper is a Breakthrough in Speech Recognition by millerlife777
That has already happened in a way, since the 2000s. The internet amplifies our abilities in a similar way to AI.
visarga t1_iusqcpg wrote
Interesting. Is there only this one article on the site?
visarga t1_iuskkry wrote
Reply to comment by Sashinii in Robots That Write Their Own Code by kegzilla
The previous paper demonstrated common-sense knowledge transfer from language model to robotics - such as how to clean a Coke spill; this one adds Python on top for numerical precision and reliable execution.
Everyone here thinks blue-collar jobs are still safe. They're wrong. Stupid robots + language model = smart robots. Don't dismiss Spot just because it only knows how to open doors and climb stairs - it can be the legs for the LLM.
So LLMs, besides being AI writers and task solvers, can also code, do data science, operate robots and control application UIs. Most of these have their own startups or large companies behind them. I think it's gonna be the operating system of 2030.
visarga t1_iusk21l wrote
Reply to comment by ProShortKingAction in Robots That Write Their Own Code by kegzilla
They take a few preventive measures.
> we first check that it is safe to run by ensuring there are no import statements, special variables that begin with __, or calls to exec and eval. Then, we call Python’s exec function with the code as the input string and two dictionaries that form the scope of that code execution: (i) globals, containing all APIs that the generated code might call, and (ii) locals, an empty dictionary which will be populated with variables and new functions defined during exec. If the LMP is expected to return a value, we obtain it from locals after exec finishes.
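A minimal sketch of that flow - not the paper's actual code, and the `move_arm` API is a made-up stand-in:

```python
def run_lmp(code: str, apis: dict) -> dict:
    """Execute LLM-generated code after basic safety checks like those quoted."""
    # Reject import statements, dunder access, and exec/eval calls.
    banned = ("import", "__", "exec", "eval")
    if any(tok in code for tok in banned):
        raise ValueError("unsafe code rejected")
    local_vars: dict = {}
    # globals = the allowed APIs; locals collects variables the code defines.
    exec(code, dict(apis), local_vars)
    return local_vars

# Hypothetical robot API exposed to the generated code.
out = run_lmp("pos = move_arm(1, 2)", {"move_arm": lambda x, y: (x, y)})
print(out["pos"])  # (1, 2)
```

As the earlier comment says, a substring check like this is only a basic filter, not real sandboxing.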
visarga t1_iusjony wrote
Reply to comment by Reddituser45005 in Robots That Write Their Own Code by kegzilla
GPT-3 can also do "data science" - Pandas and SQL from natural language instructions - and can manipulate a UI in a similar way to this paper.
visarga t1_iu8bzyj wrote
Reply to comment by cy13erpunk in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
GPT-3 can simulate people very, very well in polls. Apparently it learned not just thousands of skills, but also all types of personalities and their different viewpoints.
Think about this: you can poll a language model instead of a population. It's like The Matrix, but the Neos are virtual personality profiles running on GPT-3. Or it's like Minority Report, but with AI oracles.
I bet all sorts of influencers, politicians, advertisers or investors are going to desire a virtual focus group that will select one of the 100 variations of their message that has the maximum impact. Automated campaign expert.
On the other hand, it's as if we have uploaded ourselves. You can conjure anyone by calling out their name and describing their backstory, but the uploads don't exist in a separate state; they all live in the same model. Fun fact: depending on who GPT-3 thinks it is playing, it is better or worse at math.
visarga t1_iu84rfo wrote
Reply to comment by MercuriusExMachina in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
"Yeah, no human is that human, you can't fool me bot!"
visarga t1_ivo1avk wrote
Reply to comment by Surur in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
If we look at high-frequency stock trading, firms fight tooth and nail for each millisecond, to the point of building new internet backbones.