Submitted by Dr_Singularity t3_xu0oos in singularity
Comments
Professional-Song216 t1_iqu1sdg wrote
If I could upvote this comment again I would lol
Bataranger999 t1_iqulkrw wrote
I did it for you
Nmanga90 t1_iqt8fju wrote
Holy shit, a 540B LLM. That's like 3 times the size of GPT-3. Why are the authors anonymous? There are only a few orgs this could realistically be.
CommentBot01 t1_iqtizzd wrote
Maybe the author is an LLM :)
GoodToKnowYouAll t1_iqx7eyd wrote
😳
manOnPavementWaving t1_iqtkra0 wrote
Actually we know what the LM is: it's PaLM, developed by Google under Jeff Dean.
Anonymous peer review is a fucking joke
2Punx2Furious t1_iqv2rvn wrote
I mean, in this case it's obvious, but usually it's not that easy to guess who the authors are.
manOnPavementWaving t1_iqv3mmi wrote
It's in the authors' best interest to show off who they are; misaligning that incentive tends to just result in subtly cheating the system.
Peer review in AI has become less and less important, though; trial by Twitter tends to perform much better.
Tavrin t1_iqtombh wrote
It's anonymous for double peer reviewing (to try to prevent reviewer bias), but like someone said, it's probably PaLM since the model is the same size, so the authors are probably from Google.
2Punx2Furious t1_iqv2q9n wrote
> double peer reviewing
Wasn't it called "double blind"? (I'm not a researcher).
space_spider t1_iqum8oo wrote
This is close to Nvidia's Megatron parameter count: https://developer.nvidia.com/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/
It’s also the same as PaLM: https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html?m=1
This approach (chain of thought) has been discussed for at least a few months, so I think this could be a legit paper from Nvidia or Google.
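For readers unfamiliar with the technique the thread is discussing, here is a minimal sketch of the self-consistency step used in chain-of-thought self-training: sample several reasoning paths per question, keep the majority answer when enough paths agree, and reuse the agreeing rationales as fine-tuning data. The function name, threshold, and sample data below are hypothetical illustrations, not code from the paper.

```python
from collections import Counter

def majority_filter(samples, threshold=0.6):
    """Self-consistency filter: given several sampled
    (rationale, answer) pairs for one question, keep the
    majority answer only if enough paths agree on it."""
    counts = Counter(ans for _, ans in samples)
    answer, votes = counts.most_common(1)[0]
    if votes / len(samples) < threshold:
        return None  # too little agreement; drop this question
    # keep only the chains of thought that reached the winning answer
    rationales = [cot for cot, ans in samples if ans == answer]
    return answer, rationales

# Hypothetical sampled (rationale, answer) pairs for one question:
samples = [
    ("3 + 4 = 7, so 7 apples", "7"),
    ("she has 3 and buys 4 more: 7 total", "7"),
    ("3 * 4 = 12 apples", "12"),
    ("3 + 4 = 7", "7"),
]
result = majority_filter(samples)
# The agreeing rationales become new fine-tuning targets;
# the outer loop is: sample -> filter -> fine-tune -> repeat.
```

The point of the filter is that no ground-truth labels are needed: agreement among independently sampled reasoning paths stands in for correctness.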
[deleted] t1_iqtjtdn wrote
[deleted]
dalayylmao t1_iqt5vcc wrote
Does this mean what I think it means?
Smoke-away t1_iqtjxlx wrote
Yes.
AGI 🤖 2022
jlpt1591 t1_iqts7s5 wrote
If you think AGI 2022, your IQ is probably below average
Phoenix5869 t1_iqu3akd wrote
Yeah, some people on here are very optimistic, to put it nicely
GeneralZain t1_iqttze5 wrote
>If you think AGI 2022, your IQ is probably below average
Ironic
[deleted] t1_iqup47u wrote
[deleted]
jlpt1591 t1_iqwsu52 wrote
AGI tomorrow is actually near genius level
FusionRocketsPlease t1_iqtrem9 wrote
Shut up.
GoldenRain t1_iqurjcv wrote
It does not seem to improve continuously. Given the chance for even more self-training in the study, the answers actually decreased somewhat in quality.
A huge step forward, but not quite there yet.
AKnightAlone t1_iqt59wh wrote
Just imagine when an AI creates a superior dialect that catches on for use by people. Now there's an interesting thought. Human culture being swayed significantly by casual AI creations.
letharus t1_iqvhl53 wrote
Considering how easily we are influenced by things like social media algorithms, there's an argument to be made that we're already being "programmed" by A.I. Just replace the instigators (currently humans making marketing decisions) with A.I.
quantum1eeps t1_iqvucdq wrote
Yes, there will be no reason to learn your mother's native tongue when there's a true Babel fish. We will miss language eventually.
Aggravating_Ad5989 t1_iqv5jyl wrote
Half these comments feel religious af; I really wouldn't be surprised if most people tried worshiping AGI as their God.
yurituran t1_iqvs0iy wrote
Some people absolutely will whenever ASI is developed. Then the Christians will go nuts because a powerful "material god" will be in the world solving problems, and they'll be like "This is just like Revelation! ASI is the Antichrist!" and it will be a whole big thing.
Fel1ace t1_iqvw1qf wrote
I mean, think of the benefits:
- it exists
- it is smart
- it is just and unbiased
[deleted] t1_iqx1f97 wrote
[deleted]
AI_Enjoyer87 t1_iqtxew2 wrote
Is this it? Now begins the curve? AGI in a year? Lol
Akimbo333 t1_iqtxpzu wrote
What about GPT4?
2Punx2Furious t1_iqv2vn6 wrote
It's not coming, we skipped directly to AGI.
MercuriusExMachina t1_iqx2aqv wrote
ASI 2022 woohoo + happy cake day!
Akimbo333 t1_iqv2z12 wrote
How did we do that exactly?
2Punx2Furious t1_iqv352f wrote
Just a joke (mostly), considering how fast AI progress is going.
Akimbo333 t1_iqvj507 wrote
Oh ok lol!
doodlesandyac t1_iqwh8ou wrote
This is basically just active learning with chain of thought, right?
ReasonablyBadass t1_iqv5mze wrote
Guys, relax. This is just about finetuning a few percentage points.
doodlesandyac t1_iqwhczz wrote
Yeah, it's called active learning lol
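For context on the term being thrown around here: classic active learning selects which unlabeled examples are most worth acquiring labels for, often by model uncertainty. A minimal uncertainty-sampling sketch, with hypothetical example IDs and probabilities (not from the paper):

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(pool, k=2):
    """Uncertainty sampling: pick the k pool items whose
    predicted distribution has the highest entropy."""
    return sorted(pool, key=lambda item: entropy(item[1]), reverse=True)[:k]

# Hypothetical (example_id, predicted_probabilities) pairs:
pool = [
    ("q1", [0.95, 0.05]),  # model is confident here
    ("q2", [0.50, 0.50]),  # maximally uncertain
    ("q3", [0.60, 0.40]),
]
picked = select_for_labeling(pool, k=2)
# picks "q2" first, then "q3"
```

The self-improvement setup differs in that no human labeler is in the loop: instead of sending uncertain examples out for labels, the model's own high-agreement outputs are kept as training data.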
sparcity_of_time t1_ir2ux0u wrote
Self-training has been around for a while (earlier: https://arxiv.org/abs/2204.12639v1). It also likely shows up in humans as System 2 -> System 1 compression driven by minimizing prediction losses: https://www.youtube.com/watch?v=75d_29QWELk. That said, neat to see.
Scientific_Thinking t1_iqv8dws wrote
exciting news boys and girls! we're getting there! time to designate a robot bible and start worshiping our new overlords!
Lawjarp2 t1_iqupv5x wrote
The cringe in this subreddit is crazy. First they celebrated Gato like idiots, and now this.
Schpaedzles t1_iquxrjs wrote
I wouldn't say cringe, but the constant overreactions are a bit annoying lol
salaryboy t1_iqusrej wrote
What's the cringe here?
NTaya t1_iqv3v9w wrote
Gato was not cringe. It was very impressive due to its multimodality; definitely worth celebrating, but for reasons other than this subreddit's.
Self-improving PaLM is interesting, but it uses the same old techniques, and it's not continuously improving nor superhuman, so yeah. Comments are definitely full of hopium. The article is still great, though, as an incremental upgrade.
EOE97 t1_iqv21rq wrote
A decade ago, most people wouldn't have expected this amount of progress any decade soon. Let them have their hopium.
Aggravating_Ad5989 t1_iqv60cm wrote
I won't be celebrating Gato until they can show it can be scaled up. Until then it's just a toy.
TheSingulatarian t1_iqv0u5x wrote
Religious fanaticism.
Heizard t1_iqu0d15 wrote
The most beautiful news I've had to read this year.
Welcome to the world, child; let your potential shine like unlimited light.