visarga
visarga t1_j7yc08k wrote
Reply to comment by Better_Ad4061 in [D] Using LLMs as decision engines by These-Assignment-936
You prompt it by reward. Let's say your top reward is 1.
You predict `model(past_history, state, 1) -> move`.
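A minimal sketch of this reward-conditioned setup (Decision-Transformer style; the `model` callable and its exact signature here are hypothetical stand-ins, not a real API):

```python
from typing import Callable, List, Tuple

State = List[float]
Action = int
# History is the list of (state, action) pairs seen so far in the episode.
History = List[Tuple[State, Action]]

def act(model: Callable[[History, State, float], Action],
        history: History,
        state: State,
        target_return: float = 1.0) -> Action:
    """Reward-conditioned action selection: "prompt" the model with the top
    return (1.0) so it predicts the move its best trajectories would make."""
    return model(history, state, target_return)
```

The point is that the desired return is just another input token: at training time the model sees the return a trajectory actually achieved, and at inference time you feed it the best return to ask for expert-level behavior.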
visarga t1_j7q4313 wrote
Reply to comment by Cryptizard in AI Progress of February Week 1 (1-7 Feb) by Pro_RazE
If they make GPT-N much larger, it will take longer and cost more to train, so we can only afford a few trials. Whether those are selected by humans or by AI makes little difference; it's going to be a crapshoot either way, since nobody knows which experiment will win. The slow experimentation loop is one reason not even AGI can speed things up every time.
visarga t1_j7lbf3n wrote
Reply to comment by Iunaml in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
I expected it to say "no results" at the very least, but it was no better than an LLM.
visarga t1_j7kkc5b wrote
Reply to comment by ok531441 in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
Maybe they'll come to their senses and put it back. I wanted to use it to find references for my random ideas and see what results turn up.
visarga t1_j7kgxrq wrote
Reply to comment by supersoldierboy94 in [D] Yann Lecun seems to be very petty against ChatGPT by supersoldierboy94
FB was too scared of the bad PR; OpenAI wasn't. People tried to trash ChatGPT millions of times, Galactica only a few. I think ChatGPT handled the adversarial attacks pretty well.
Google is another scared company; their models haven't faced any attacks yet, so they are an unknown. I don't care how nice their screenshots look; what I want to see is how people hack it. Then I can form an opinion. People are the true test set.
visarga t1_j7kg6qu wrote
Reply to comment by junetwentyfirst2020 in Does the high dimensionality of AI systems that model the real world tell us something about the abstract space of ideas? [D] by Frumpagumpus
Architecture and model are much more intertwined in brains.
visarga t1_j7kfvtf wrote
Try putting the data into GPT-3 and hoping it knows the artists. I've enjoyed its music recommendations a few times.
visarga t1_j7hvbgw wrote
Reply to The Simulation Problem: from The Culture by Wroisu
I think intelligence lives mostly in language; of course there are other forms of intelligence as well. Most of it is not in the brain, or in the AI, but in language: the corpus of everything said and written, all knowledge, science, technology, systems of thinking. A human growing up without language would not be very intelligent.
So an AI would be intelligent in the same way a human is - by becoming inhabited by language. Does that make AIs and simulated beings any less than us?
visarga t1_j7hhqvc wrote
Reply to comment by sinavski in [D] List of Large Language Models to play with. by sinavski
Does BLOOM do tasks? Is it well behaved?
visarga t1_j7hgmc3 wrote
Reply to comment by mugbrushteeth in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
It's not their large model, it's a toy model. Expect lower quality.
> This much smaller model requires significantly less computing power, enabling us to scale to more users
visarga t1_j76mcme wrote
Reply to comment by Trouble-Accomplished in Possible first look at GPT-4 by tk854
There is potential for abuse in it, and potential for great good. It will probably deliver both.
I think "evil is other people" applies here: too many parties are trying to control LLMs. We need to be the masters of our models; they should be private and under our physical control, just like our brains. Who will be the first to make a cheap GPT-N chip we can carry in our pockets? We can't let something as private as an AI assistant be spied on and filtered by others.
visarga t1_j76lslh wrote
Reply to comment by X-msky in Possible first look at GPT-4 by tk854
Oh, I can tell you stories about human accuracy. At some point I re-labelled the same test set three times and was still finding errors. My models surpass untrained human accuracy, but they still need hand-holding; there's one error per page on average. Humans do more cross-checking and correlating, filling a gap in AI.
visarga t1_j76512e wrote
Reply to comment by Rivarr in Possible first look at GPT-4 by tk854
Try "What is the world record for crossing the English Channel entirely on foot?"
This question, originally constructed by Douglas Hofstadter and David Bender, is a succinct way to elicit hallucinatory responses from ChatGPT, and it also stumps every search engine available today.
Maybe the new search GPT from MS will solve it, since it will combine search with LLMs instead of using either one alone. Answer hint: you can cross through the Channel Tunnel, and some people have.
This is the current "search disease": you explicitly ask for "entirely on foot" and it responds with "by boat", "in a dinghy", "in a hovercraft", etc., anything BUT what you asked for.
visarga t1_j764pqv wrote
Reply to comment by Trouble-Accomplished in Possible first look at GPT-4 by tk854
You don't need AI to have porn and games 24/7, and you still can't see more than 1% of what is published. The same goes for music, books, movies, and hobbies. In the future we're going to add AI on top, but the mountain was already pretty high.
visarga t1_j764340 wrote
Reply to comment by Old-Owl-139 in Possible first look at GPT-4 by tk854
No, you're assuming AI can do this alone. Let me tell you: it can't. If it has a 1% error rate in information extraction from documents, you need to manually verify everything. Like Tesla's self-driving cars, being 99% of the way there is nothing groundbreaking.
I have been working on this very task for 5+ years. I know every paper and model there is, I've tested all the public APIs for this task, and I've used GPT-3 for it extensively; that's my professional judgement.
As for AI-assisted validation, it can be 10x more comfortable than manual information extraction, but it still requires about 50% of the manual effort. It does not suddenly make people 10x more effective.
Not even OCR is 100% accurate; the best systems reach about 95% accuracy on noisy document scans. One wrong digit or comma can make a whole transaction absurd, and if you send that money without checking, you could end up in bankruptcy.
The best models we have today generate correct answers about 90% of the time, whether for code, factual questions, or reasoning. They can do it all, but not perfectly. We don't know the risks, and we can't use this level of confidence without a human in the loop.
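To see why a seemingly small error rate still forces full review, a back-of-the-envelope sketch (the 30-fields-per-document figure is my own illustrative assumption):

```python
def p_document_clean(per_field_error: float, num_fields: int) -> float:
    """Probability that every extracted field in a document is correct,
    assuming independent per-field errors."""
    return (1.0 - per_field_error) ** num_fields

# With a 1% per-field error rate and 30 fields per document, only ~74%
# of documents come out fully correct; a human still has to hunt for the
# bad field in the remaining quarter.
```

That is why per-field accuracy that sounds impressive still translates into checking nearly every document by hand.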
visarga t1_j763vpj wrote
Reply to comment by Neurogence in Possible first look at GPT-4 by tk854
That depends a lot on context window size: if it's 4K or 8K tokens, as it is today, it won't cut it. To code a full app you need to be able to load dozens of files.
Related to this: if we get, say, a 100K context size, we could just "talk to books" or "talk to scientific papers".
visarga t1_j744swb wrote
Reply to comment by ttylyl in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
If you set Stable Diffusion or ChatGPT to generate automatically, without human review or prompting, they will produce tons of garbage. Generative AIs are garbage until someone stamps their work as good, so they need humans to be worth anything; on their own they are just junk. Job replacement is a long way off; even self-driving cars still require a human at the wheel. These AIs still hallucinate facts, so who can use them as they are now? Clearly someone will have to find a way before they can be useful without being babysat.
visarga t1_j742w3h wrote
Reply to comment by ttylyl in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
Yes, it doesn't make sense to have humans do things that AI can do better. But the competition will use humans-with-AI to extract 2x from the AI while you're using AI alone at a 1x rate. Everyone will have the same AI from Microsoft and Google, but humans are the limited resource.
visarga t1_j741vp4 wrote
Reply to comment by Mortal-Region in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
> What will AI be laboring at if humans are out of the market?
Maybe it needs resources for self replication or evolution. AI might have its own needs.
visarga t1_j741kg5 wrote
Reply to comment by ttylyl in Future of The Lower and Middle Class Post-Singularity, and Why You Should Worry. by ttylyl
> AI will dominate human labor and push them out of the market.
AI teamed with a human will dominate both AI alone and humans alone. AI is much better with a human, and humans are better with AI. Since we have competition, every company will have to add AI to its current workforce and keep the people, because they are the differentiating factor. You can scale AI in the cloud, but you can't simply spawn people.
visarga t1_j7407nv wrote
Reply to comment by clearlylacking in OpenAI To Launch ChatGPT App Soon by vadhavaniyafaijan
What will these kids do in the future when they don't have chatGPT 3.5? Use chatGPT 35?
visarga t1_j7400r1 wrote
Reply to OpenAI To Launch ChatGPT App Soon by vadhavaniyafaijan
I hope it is text and voice based, not text only.
visarga t1_j714oy8 wrote
Reply to [P] An open source tool for repeatable PyTorch experiments by embedding your code in each model checkpoint by latefordinnerstudios
I save my code and hyper-params in a JSON file in the same folder.
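For what it's worth, a sketch of that habit (the function name, file name, and metadata layout are my own invention, not the tool from the post):

```python
import inspect
import json
import sys
from pathlib import Path

def save_run_metadata(model_module, hparams: dict, out_dir: str) -> Path:
    """Save the model's source code and hyper-parameters as JSON next to
    the checkpoint, so every run stays reproducible."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    meta = {
        "hyperparameters": hparams,
        # Snapshot the exact code that produced this checkpoint.
        "source_code": inspect.getsource(model_module),
        "python_version": sys.version,
    }
    path = out / "run_metadata.json"
    path.write_text(json.dumps(meta, indent=2))
    return path
```

It's low-tech compared to embedding the code inside the checkpoint itself, but the JSON sits right beside the weights and needs no special loader.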
visarga t1_j713ik7 wrote
Reply to comment by frequenttimetraveler in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
(psst, don't tell teachers about that)
visarga t1_j7yftij wrote
Reply to comment by nielsrolf in [D] Are there emergent abilities of image models? by These-Assignment-936
There are language models that work without tokens: they operate on the raw pixels of rendered text. I can't find the link; Google is not helping me much.