Submitted by Scarlet_pot2 t3_104svh6 in singularity
Scarlet_pot2 OP t1_j39e8ao wrote
Reply to comment by KSRandom195 in We need more small groups and individuals trying to build AGI by Scarlet_pot2
The goal shouldn't be to develop AGI itself; the goal should be to make discoveries that could lead to parts of AGI when extrapolated out. Just as the first word-generation models led to the LLMs of today, we need small teams trying new things and sharing the results.
Let's assume "guess the next word" fills the prediction part of the brain down the line, when AGI is developed. Maybe a small group develops the first thing that will later fit another part of the brain: how to make memory work, how to develop reasoning, or any of the other parts.
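To make that concrete: at toy scale, "guess the next word" is just sampling from statistics of which words follow which. Here's a minimal sketch (my own illustration with a made-up corpus; real LLMs learn vastly richer representations than pair counts):

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the corpus."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = follows.get(word.lower())
    if not counts:
        return None
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

model = train_bigram("the cat sat on the mat and the cat slept on the mat")
print(predict_next(model, "the"))  # e.g. "cat" or "mat"
```

An LLM is, at its core, a vastly more sophisticated version of that one move: predict what comes next.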
At least some of those pieces could be found by small groups trying new approaches. John Carmack has said that all the code for AGI could fit on a USB drive; the goal should be to find parts of that code.
It won't be easy or quick, but I'm sure that if we had 100k people with a beginner-to-intermediate grounding in the subjects related to AI, all trying different approaches and sharing their results, some working together, then after a few years we would probably have at least a few new methods worth trying that might lead to a part of AGI.
KSRandom195 t1_j39fo3z wrote
John Carmack is a very smart person, but he's making a prediction out of his ass. We have no idea how much code would actually be required. Let's also be clear that he's running an AI startup, which requires funding, so he has reason to be very rosy about what can be accomplished. Maybe he's onto something revolutionary in the realm of AGI (I hope he is), maybe he's not. Until he builds it end to end, it's all hypothetical.
Some scientists believe that what gives us consciousness (something some argue is required for AGI) is that parts of our brain are quantum-entangled with other parts, though we have no idea how or why. Making small pieces that might contribute to that code isn't going to be very useful if quantum-entanglement hardware turns out to be required; that's fundamentally different from anything you would build on a classical computer.
Yes, people should experiment and play around with it. But they're not going to get something that looks like intelligence in their basement.
LoquaciousAntipodean t1_j3a42k9 wrote
I find this whole idea of intelligence as a quantity that AI just needs 'more of' perplexing; as far as I know, intelligence simply is not a quality that can be mapped in this linear, FLOPs sort of way. The brain isn't doing discrete operations at all; it's a continuous probabilistic cascade of differential potentials flowing across a vast foamy structure of neural connections.
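To illustrate what I mean (a deliberately crude toy with made-up constants, not a claim about how real neurons compute): compare a discrete multiply-add with a leaky integrate-and-fire neuron, whose membrane potential evolves continuously over time and only "outputs" when it crosses a threshold.

```python
def simulate_lif(input_current=1.5, dt=0.001, t_max=0.1,
                 tau=0.02, v_rest=0.0, v_thresh=1.0):
    """Euler-integrate dV/dt = (-(V - v_rest) + I) / tau; collect spike times."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_max:
        v += dt * (-(v - v_rest) + input_current) / tau
        if v >= v_thresh:              # threshold crossed: spike, then reset
            spikes.append(round(t, 4))
            v = v_rest
        t += dt
    return spikes

print(simulate_lif())  # a train of spike times, not one discrete result
```

The point isn't biological fidelity; it's that counting the FLOPs in that loop tells you almost nothing about what the dynamics are doing.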
Intelligence is like fire, not like smoke. A bigger hotter fire will make more smoke, but fire is just fire, big or small. It's a concept, not a definition of state.
The language models give such a striking impression of 'intelligence' because they are simulating, in a very efficient, digital way, the effect of the language centre of human cognition. The brain is foamy meat, essentially a heavily patched version of the same janky hardware that fish and frogs are running; for all its incredible complexity, it might not be terribly efficient. We just don't know.
It might be easier than we think to 'surpass human intelligence'; we just need to think in terms of diversity, not homogeneity. Like I said elsewhere, our brains are not single-minded; every human sort of contains their own committee. The true golden goose of AGI will be a collective of a multitude of subunits, and their diversity, not their unity, will be how they accrete strength. That's how evolution always works.
Relative_Purple3952 t1_j3jeg9e wrote
Sounds very much like Ben Goertzel's approach, and despite him not having delivered on the AI front, I think he is very much correct that scaling a language model will never get you to true, general intelligence. Language is a necessary but not sufficient condition for higher intelligence.
LoquaciousAntipodean t1_j3kxka8 wrote
I think the problem with the brute-force 'make it bigger!!!' approach is that it ignores subtleties like misinformation, manipulation, outdated or irrelevant information, and spurious or bad-faith arguments. This is why I think there will need to be a multitude, not a Singularity.
These LLMs will, I think, need to be allowed to develop distinct, individual personalities, and then be allowed to interact with each other with as much human involvement in the 'discussions' as possible. The 'clock rate' of these AI debates would perhaps need to be deliberately slowed down so the humans can follow along, at least at first.
This won't necessarily make them 'more intelligent', but I do think it stands a good chance of rapidly making them more wise.
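For concreteness, here's a rough sketch of what such a slowed-down, multi-personality debate loop might look like. It's entirely hypothetical; `respond` is a stub standing in for a real model call conditioned on a persona and the shared transcript.

```python
import time

def respond(persona, transcript):
    """Placeholder for an LLM call conditioned on a persona and the history."""
    last = transcript[-1] if transcript else "(opening)"
    return f"[{persona}] my take on {last!r} ..."

def debate(personas, opening, rounds=2, clock_seconds=1.0):
    transcript = [opening]
    for _ in range(rounds):
        for persona in personas:
            reply = respond(persona, transcript)
            transcript.append(reply)
            print(reply)
            time.sleep(clock_seconds)  # slowed 'clock rate' so humans can read and interject
    return transcript

debate(["skeptic", "optimist", "historian"], "Should AGI be one mind or many?")
```

The delay is the whole point: it keeps the exchange legible to the humans in the loop.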