visarga
visarga t1_iu7r0tk wrote
Reply to audiobook recommendations... by theferalturtle
Why don't you try reading survey papers? They don't add anything new, but they summarise a year or more of activity in a subfield and give a pretty high-level overview.
For example, this one is nice: https://arxiv.org/pdf/2111.06091.pdf
visarga t1_iu7onr9 wrote
Reply to If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
> Tell me the first 10 digits of PI, if you succeed you're a bot.
Tested it on GPT-3, got 3.141592653. Could anyone here answer that from memory?
Asked for 100 digits, and it spat out this:
> 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679
checks out, can't be a human unless their name is also Pi or Piscine.
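If you'd rather not take a bot's word for it, those 100 digits are easy to check yourself. A quick sketch (my own, stdlib only) using Machin's formula with plain integer arithmetic:

```python
def arctan_inv(x, prec):
    """arctan(1/x) scaled by 10**prec, via the Taylor series in integer math."""
    term = 10 ** prec // x
    total = term
    x2 = x * x
    k = 1
    while term:
        term //= x2
        t = term // (2 * k + 1)
        total = total - t if k % 2 else total + t  # alternating signs
        k += 1
    return total

def pi_decimals(n):
    """First n decimal digits of pi via Machin: pi = 16*atan(1/5) - 4*atan(1/239)."""
    prec = n + 10  # guard digits absorb truncation error
    scaled = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    s = str(scaled)
    return s[0] + "." + s[1:n + 1]

print(pi_decimals(10))  # 3.1415926535
```

Running `pi_decimals(100)` reproduces exactly the string GPT-3 produced above.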
visarga t1_iu7nryj wrote
Reply to comment by SlenderMan69 in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
Humans fool and lie to themselves all the time. One example that comes to mind is anti-vaxxers protesting vaccines and then still going to the hospital when they get sick, or worse, protesting abortion and then having one in secret.
Similarly, neural nets will learn the training set perfectly but fail on new data; they give you the illusion of learning if you're not careful. That's why every paper reports the score on a separate held-out test set the model has never seen. When it comes to learning, a neural net is a lying, cheating bastard. This game AI found a clever way to win points without having to do the whole course.
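The illusion is easy to reproduce. A toy sketch (my own example, not from any paper): a "model" that just memorizes its training pairs scores perfectly on the training set and falls to chance on a held-out block:

```python
import random

random.seed(0)

def label(x):
    # ground-truth rule the model should discover: parity of x
    return x % 2

# a "memorizer" model: perfect recall of training pairs, clueless otherwise
train = {x: label(x) for x in random.sample(range(1000), 50)}
predict = lambda x: train.get(x, 0)  # falls back to a constant guess

train_acc = sum(predict(x) == y for x, y in train.items()) / len(train)
test_xs = list(range(1000, 1100))  # a held-out block the model never saw
test_acc = sum(predict(x) == label(x) for x in test_xs) / len(test_xs)

print(train_acc)  # 1.0 -- looks like perfect learning
print(test_acc)   # 0.5 -- the illusion collapses on unseen data
```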
visarga t1_iu7nbwy wrote
Reply to comment by 3Quondam6extanT9 in If you were performing a Turing test to a super advanced AI, which kind of conversations or questions would you try to know if you are chatting with a human or an AI? by Roubbes
Q: Why does a boy dying from an allergic reaction to a bee sting make you cry?
GPT-3:
>There are a few reasons why this might make someone cry. First, it is tragic when anyone, especially a child, dies unexpectedly. Second, an allergy to bee stings is relatively rare, so it may be shocking or upsetting to see someone die from something that seems so innocuous. Finally, the death may be a reminder of our own mortality; even if we are not allergic to bee stings, we could still die from something seemingly innocuous.
Does that pass your test? It's one trial, temp 0.7, first answer that popped up.
In my opinion it's better than the average human response.
visarga t1_iu3etz0 wrote
Reply to comment by daxophoneme in Full-band General Audio Synthesis with Score-based Diffusion by xutw21
I think it's OK at large scale: the model learns the noise separately from the content, and it works as "free augmentation".
visarga t1_iu3em7n wrote
Reply to comment by MasterFruit3455 in Teen Glues Hand To Historic Computer to Protest A.I. Takeover [satire] by canadian-weed
Ass wiping you mean? Specialised robots are already on the job.
visarga t1_iu3b3o0 wrote
Reply to comment by sonderlingg in The Great People Shortage is coming — and it's going to cause global economic chaos | Researchers predict that the world's population will decline in the next 40 years due to declining birth rates — and it will cause a massive shortage of workers. by Shelfrock77
The singularity won't come by itself, we have to work hard towards it. There's lots of work in transitioning.
visarga t1_iu3aj81 wrote
Reply to comment by Mobile-Bid-9848 in [D] Python function that changed your life in regard to Machine Learning by popcornn1
poor debuggability
visarga t1_itz7mx1 wrote
Internet use grew 14x between 1997 and 2007. Mobile phone users grew 7x between 2000 and 2010. Smartphones grew 12x between 2007 and 2017. In that time we got e-commerce, social networks, online media, taxi and booking apps, educational materials, open source everywhere, the early discoveries in deep neural nets... Many or most of these were unexpectedly useful and changed society.
We are in a wild west, 2000's bubble period now with AI. I don't think there will be a crash, it's not that, but I think it will take 10 years to see a profoundly transformed world, and 20 years to go beyond our current horizons.
Who will become the rulers of this new era? People like to bet on the big corporations because they have the hardware, money and brains. But I think that's misleading. You can run a model on your own computer, but you can't run "a Google" on your computer; Google forces you to disclose your private data to use it.
But it's possible that AI models will democratise access compared to centralised internet services. You can install SD or a language model on your own box, in privacy. You don't need to wade through spam; you can put your questions directly to a polite and knowledgeable assistant. You don't need to visit any original site at all, or even be online for that matter. It's all in the model, plus maybe a curated corpus of additional content sitting on your drive. Nobody knows what you're doing and nobody can put ads in your face. You don't even need to know how to code or know about AI, because the interface is so natural that everyone can use it, and use it for new things, without needing to reprogram it.
I just described a lifestyle where humans are surrounded by a loyal AI cocoon, a private space for dreaming and creativity that seems to be on the verge of extinction today. That's my dream, what I want to see.
visarga t1_itx6vs1 wrote
Reply to comment by HyperImmune in [DEEPMIND] Transformers have shown remarkable capabilities - but can they improve themselves autonomously from trial and error? by Danuer_
They use a large-context model to learn (distill) from gameplay generated by other agents. Putting more history in the context means the model needs fewer samples to learn.
This is significant for robots, bots and AI agents. Transformers turn out to be very competent at learning to act/play/work compared to other methods, and this paper shows they can learn with less training.
visarga t1_itwxzgs wrote
Reply to comment by Southern-Trip-1102 in [D] What's the best open source model for GPT3-like text-to-text generation on local hardware? by AuspiciousApple
My experience is that models that haven't had the instruction-tuning treatment don't behave nicely.
visarga t1_ittgjnj wrote
Reply to comment by AdditionalPizza in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
Just imagine: 25 years ago, could you have predicted the explosion of work related to the internet? There's development, content creation, communications, commerce and education. Practically, it's a digital double of the physical world. It made us more efficient by a large margin, and yet here we are, still employed. Even delivery people and taxi drivers get jobs from the internet.
How does the logic "automating even part of a job leads to layoffs" stand up to that test? I think the correct answer is that we scale work up to match the available capacity instead of firing people. Our desires scale up faster than automation or resources.
visarga t1_ittbiag wrote
Reply to comment by blueSGL in With all the AI breakthroughs and IT advancements the past year, how do people react these days when you try to discuss the nearing automation and AGI revolution? by AdditionalPizza
> Is there going to be enough companies left doing things 'the old way' to keep employment numbers up even though it's less cost effective?
In the medium term there will be new jobs and applications that were impossible before. A company should expand and diversify instead of firing its people, if it cares about profits, that is. We also have to tackle global warming and other pesky problems on a grand scale. In the long term I think we'll reach post-scarcity through a combination of automation and smart materials.
visarga t1_itr1f9w wrote
Reply to It's important to keep in mind that the singularity could create heaven on Earth for us. *Or* literal hell. Human priorities are the determining factor. by Pepperstache
It's gonna be both the good and the bad, plus some surprising bits we never even imagined. But on the whole I think generative AI has broadly empowered people. AI is more accessible than desktop apps and even mobile apps: you can just talk to it, you don't even need to read. It helps developers with snippets of code. It helps artists generate stunning images. And it's not hard to learn; it lowers the entry barrier. It basically adds a few IQ points to everyone who uses it. It will be what Google should have been before it choked on spam and ads: a way to make all information more accessible and reusable. It will also run on your own machine, in privacy.
visarga t1_itkcw8e wrote
Any of the recent LLMs could blow away the commercial voice assistants we have today. So why are voice assistants so primitive? It's probably too expensive to give everyone GPT-3 powers in their speaker, but that should change fast, because there are now models 50x smaller with comparable quality.
But cost probably isn't the only reason LLMs aren't in voice assistants. I bet companies are afraid one will be prompted into saying racist things and generate bad PR. Who wants an "MS Tay" on their hands?
visarga t1_itil03g wrote
You don't program AI with "statements", it's not Asimov's positronic brain. What you do instead is to provide a bunch of problems for the AI to solve. These problems should test the alignment, fuzz out the risks. When you are happy with its calibration you can deploy it.
But an interesting and recent development - GPT-3 can simulate people in virtual polls. Provided with the personality profile, it will assume the personality and answer the poll questions from that perspective.
>GPT-3 has biases that are “fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups.”
Apparently GPT-3 is not only aligned with humans in general, it is precisely aligned with each demographic. So it knows our values really well.
The problem is that now we have to specify which bias we want from it, and that's a political problem, not an AI problem. It is ready to oblige with whatever bias we ask for; it's even more aligned than we want, aligned to our stupid things as well.
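The "proper conditioning" in that quote is just a prompt. A hypothetical sketch of what such a persona-conditioned poll prompt might look like (my own toy template, not the one from the paper):

```python
def persona_poll_prompt(profile, question):
    """Build a persona-conditioned prompt for simulating one poll respondent.
    The profile fields and wording here are illustrative assumptions."""
    return (
        f"You are answering a poll. Your background: age {profile['age']}, "
        f"{profile['occupation']}, living in {profile['region']}, "
        f"political leaning: {profile['politics']}.\n"
        f"Question: {question}\n"
        f"Answer as this person would:"
    )

prompt = persona_poll_prompt(
    {"age": 45, "occupation": "farmer", "region": "rural Iowa",
     "politics": "conservative"},
    "Do you support increased federal spending on public transit?",
)
print(prompt)
```

Feed a prompt like this to the model once per synthetic respondent and you get a response distribution conditioned on the demographic.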
visarga t1_itij9o0 wrote
Reply to comment by StarKiller2626 in What will you do to survive in the time between not needing to work anymore to survive and today? by wilsonartOffic
People with skills, no job and lots of needs to fulfil have to become self reliant, alone or in a larger group. Fortunately there are going to be great technologies to support that - solar panels, water filtration, automation and of course AI. A community can organise their own school, clinic and store, build their own houses, repair their equipment. The total dependency on the outside should be diminished.
Of course they can't make their own computer chips and are going to use open source software, same for medicine and construction materials. And they need some initial capital. But that should provide a way for people to use their hands to improve their situation. Own your means of production so you don't need a job. We don't need corporations to UBI us, just our share of resources to build with.
visarga t1_itgu5bi wrote
Reply to comment by ChronoPsyche in Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
Not exponential, let's not exaggerate. It's quadratic. If you have a sequence of N words, then you can have NxN pairwise interactions. This blows up pretty fast, at 512 words -> 262K interactions, at 4000 words -> 16M interactions. See why it can't fit more than 4000 tokens? It's that pesky O( N^2 ) complexity.
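To make the blow-up concrete, the arithmetic fits in a two-liner:

```python
def attention_pairs(n):
    # self-attention compares every token with every other token: O(n^2)
    return n * n

for n in (512, 4000, 32000):
    print(f"{n:>6} tokens -> {attention_pairs(n):>13,} pairwise interactions")
```

At 32k tokens you're already at a billion interactions per attention layer, which is why context length doesn't just scale up for free.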
There is a benchmark called "Long Range Arena" where you can check the state of the art in solving the "memory problem".
visarga t1_itgs3bn wrote
Reply to comment by ChronoPsyche in Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
People working on AI projects also don't know how they will turn out; it's alchemy. I mean, who in the AI community predicted AlphaGo, GPT-3 or DALL-E? Nobody. Being an expert in the field didn't mean they knew what was around the corner.
visarga t1_itgrbjt wrote
Reply to comment by overlordpotatoe in Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
Before that there will be AI that can do smaller parts of a movie - background, costumes, music, short pieces of the dialogue, etc. A human can combine them into a coherent movie. Just like Codex is useful to write code but can't write a whole project on its own.
visarga t1_itgqug0 wrote
Reply to comment by ChronoPsyche in Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
There is also exponentially less long-form content than short-form. The longer it gets, the fewer samples we have to train on.
visarga t1_itgqoj0 wrote
Reply to comment by ChronoPsyche in Given the exponential rate of improvement to prompt based image/video generation, in how many years do you think we'll see entire movies generated from a prompt? by yea_okay_dude
There are workarounds for long input. One is the linear-transformer family (Linformer, Longformer, Big Bird, Performer, etc.); the other is the Perceiver, which can reference a long input sequence using a fixed-size transformer.
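The Perceiver trick is that a small, fixed set of latent vectors attends over the long input, so cost is O(latents x length) instead of O(length squared). A minimal numpy sketch of just that shape game (my own toy version, no learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_attention(latents, inputs):
    """Perceiver-style cross-attention: fixed latents query a long input.
    Cost is O(num_latents * seq_len), not O(seq_len**2)."""
    q = latents                      # (num_latents, d) -- queries from latents
    k = v = inputs                   # (seq_len, d)     -- keys/values from input
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the input
    return weights @ v               # (num_latents, d) -- fixed-size summary

d, num_latents, seq_len = 16, 8, 4096
latents = rng.normal(size=(num_latents, d))
inputs = rng.normal(size=(seq_len, d))
out = cross_attention(latents, inputs)
print(out.shape)  # (8, 16) -- same size no matter how long the input is
```

The output stays (num_latents, d) whether the input is 100 tokens or 100k, which is the whole point.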
visarga t1_itcxq70 wrote
Reply to comment by Ortus12 in When do you expect gpt-4 to come out? by hducug
A friendly race since they share so much of their findings periodically.
visarga t1_itc1tbb wrote
Reply to comment by zero_for_effort in When do you expect gpt-4 to come out? by hducug
U-PaLM and Flan PaLM come to mind.
The first one shows we were noising the data incorrectly; a different kind of noising has major benefits. The second shows that training on thousands of tasks boosts the model's ability to follow instructions and also gets better scores. So maybe OpenAI had to change their plans midway.
It's also possible that they don't want to scale up even further because it's very impractical. Too expensive to use, not just to train. And recent models like Flan get GPT-3 scores on many (not all) tasks with just 3B parameters.
There's also the question of training data: where can they get 10x or 100x more? I bet they transcribe videos, probably every video that can be accessed. Another approach is to use raw audio instead of text, which works well. I bet they have a large team just for dataset building. BLOOM was managed by an organisation of about 1000 people, and a lot of their effort went into dataset sourcing, trying to reduce biases.
visarga t1_iu7real wrote
Reply to Artificial Intelligence — The Revolution Hasn’t Happened Yet by Xaneris47
Ah, the famous Michael Jordan of AI.
He's like Hinton, Yann LeCun, Bengio and Schmidhuber: he built some of the foundations of AI over his career.