
LoquaciousAntipodean t1_j3a42k9 wrote

I find this whole idea of intelligence as a quantity that AI just needs 'more of' to be perplexing; as far as I know, intelligence simply is not a quality that can be mapped in this linear, FLOPs sort of way. The brain isn't doing discrete operations at all; it's a continuous probabilistic cascade of differential potentials flowing across a vast foamy structure of neural connections.

Intelligence is like fire, not like smoke. A bigger hotter fire will make more smoke, but fire is just fire, big or small. It's a concept, not a definition of state.

The language models give such a striking impression of 'intelligence' because they are simulating, in a very efficient, digital way, the effect of the language centre of human cognition. The brain is just foamy meat, essentially a heavily patched version of the same janky hardware that fish and frogs are using; for all its incredible complexity, it might not be terribly efficient. We just don't know.

It might be easier than we think to 'surpass human intelligence'; we just need to think in terms of diversity, not homogeneity. Like I said elsewhere, our brains are not single-minded; every human sort of contains their own committee. The true golden goose of AGI will be a collective of a multitude of subunits, and their diversity, not their unity, will be how they accrete strength. That's how evolution always works.

3

Relative_Purple3952 t1_j3jeg9e wrote

Sounds very much like Ben Goertzel's approach, and despite him not delivering on the AI front, I think he is very much correct that scaling a language model to get to true, general intelligence will never work. Language is a necessary but not sufficient condition for higher intelligence.

2

LoquaciousAntipodean t1_j3kxka8 wrote

I think that the problem with the brute-force, 'make it bigger!!!' approach is that it ignores subtleties like misinformation, manipulation, outdated or irrelevant information, and spurious or bad-faith arguments. This is why I think there will need to be a multitude, not a Singularity.

These LLMs will, I think, need to be allowed to develop distinct, individual personalities, and then be allowed to interact with each other, with as much human involvement in the 'discussions' as possible. The 'clock rate' of these AI debates might need to be deliberately slowed down, at least at first, so that humans can follow along.
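That round-robin 'discussion' idea can be sketched in a few lines. To be clear, everything below is hypothetical illustration, not any real system: the persona stubs stand in for per-agent LLM calls, and the `delay` knob stands in for the deliberately slowed clock rate.

```python
import time

# Toy sketch of a "multitude" of agents with distinct personalities
# taking turns on a shared transcript. A real system would make an
# LLM call per persona; here each persona is a stub function.

def skeptic(transcript):
    return f"Skeptic: what evidence supports '{transcript[-1]}'?"

def optimist(transcript):
    return f"Optimist: building on '{transcript[-1]}', consider the upside."

def moderator(transcript):
    return f"Moderator: summarising {len(transcript)} turns so far."

def debate(agents, opening, rounds=2, delay=0.0):
    """Round-robin discussion; `delay` slows the loop so humans can follow."""
    transcript = [opening]
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent(transcript))
            time.sleep(delay)  # the deliberately slowed 'clock rate'
    return transcript

log = debate([skeptic, optimist, moderator], "LLMs should debate each other.")
```

With `delay` raised to seconds or minutes, a human observer could read, and interject into, the transcript between turns, which is the oversight point being made above.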

This won't necessarily make them 'more intelligent', but I do think it stands a good chance of rapidly making them more wise.

1