
KSRandom195 t1_j398ztl wrote

The problem is that it's expensive.

You want an AGI, but not just any AGI: you likely want one at least as smart as a person.

It's often said that a single human brain holds about 2.5 petabytes of information. Backblaze, which specializes in storage, can do roughly $35,000 per petabyte. So that's $87,500 just in storage, and that's not redundant or fast storage, that's just raw capacity.

You need redundancy at that scale, so multiply that by ~2: about $175,000, again, only in storage.

Now you need compute. They estimate the human brain operates at about 1 exaFLOP. The world's fastest supercomputer currently manages only ~1.1 exaFLOPS; it cost about $600,000,000 to build, and that doesn't include the cost to maintain and run it.

And that's even assuming a 1:1 match between the FLOPS we can buy and the compute the brain actually needs.
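If you want to poke at those numbers yourself, here's the same back-of-envelope math as a quick Python sketch. Every constant is just the rough figure quoted above, not a real benchmark:

```python
# Back-of-envelope AGI hardware estimate, using the rough numbers from
# this thread. None of these figures are authoritative.
BRAIN_STORAGE_PB = 2.5            # claimed information content of one brain
COST_PER_PB_USD = 35_000          # quoted raw-storage price per petabyte
REDUNDANCY_FACTOR = 2             # naive 2x replication at this scale
BRAIN_COMPUTE_EXAFLOPS = 1.0      # estimated brain-equivalent compute
FASTEST_SUPERCOMPUTER_EXAFLOPS = 1.1
SUPERCOMPUTER_BUILD_COST_USD = 600_000_000

storage_cost = BRAIN_STORAGE_PB * COST_PER_PB_USD * REDUNDANCY_FACTOR
print(f"Redundant raw storage: ${storage_cost:,.0f}")           # $175,000
print(f"Compute headroom over the brain estimate: "
      f"{FASTEST_SUPERCOMPUTER_EXAFLOPS / BRAIN_COMPUTE_EXAFLOPS:.1f}x")
print(f"Build cost for that compute: ${SUPERCOMPUTER_BUILD_COST_USD:,}")
```

The punchline: storage is the cheap part. It's the exaFLOP-scale compute that puts this out of basement range.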

This isn’t something you can just do in your basement, not with the tech we have today.

10

Scarlet_pot2 OP t1_j39e8ao wrote

The goal shouldn't be to develop AGI outright; it should be to make discoveries that, extrapolated out, could become parts of AGI. Just as the first word-generation model led to the LLMs of today, we need small teams trying new things and sharing the results.

Let's assume "guess the next word" ends up filling the prediction part of the brain once AGI is developed. Maybe a small group develops the first version of something that later fits another part: how to make memory work, how to do reasoning, or any of the other pieces.
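To make the "guess the next word" piece concrete, here's a toy bigram model in Python. It's a deliberately tiny sketch on a made-up corpus, but the earliest statistical language models were essentially scaled-up versions of this same prediction task:

```python
# Toy "guess the next word" model: count which word follows which,
# then predict the most frequent follower. The corpus is made up.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def guess_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    if word not in followers:
        return "<unknown>"
    return followers[word].most_common(1)[0][0]

print(guess_next("the"))  # 'cat' -- seen twice after 'the'
```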

And at least some of those pieces can be found by small groups trying new approaches. John Carmack said that all the code of an AGI could fit on a USB drive; the goal should be to find parts of that code.

It won't be easy or quick, but I'm sure that if we had 100k people with a beginner-to-intermediate understanding of AI-related subjects, all trying different approaches and sharing their results, some working together, then after a few years we'd probably have at least a few new methods worth trying that might lead to a part of AGI.

5

KSRandom195 t1_j39fo3z wrote

John Carmack is a very smart person, but he's making a prediction out of his ass. We have no idea how much code would actually be required. Let's also be clear that he's running an AI startup that requires funding, so he has reason to be very rosy about what can be accomplished. Maybe he's onto something revolutionary in the realm of AGI, and I hope he is; maybe he's not. Until he builds it end-to-end, it's hypothetical.

Some scientists believe that what gives us consciousness (something some argue is required for AGI) is that parts of our brain are quantum-entangled with other parts, though we have no idea how or why. Building small pieces that might contribute to that code isn't going to be very useful if quantum-entanglement hardware turns out to be required; that's fundamentally different from anything you would build on a classical computer.

Yes, people should experiment and play around with it. But they're not going to get something that looks like intelligence in their basement.

2

LoquaciousAntipodean t1_j3a42k9 wrote

I find this whole idea of intelligence as a quantity that AI just needs 'more of' perplexing; as far as I know, intelligence simply isn't a quality that can be mapped in this linear, 'FLOPs' sort of way. The brain isn't doing discrete operations at all; it's a continuous probabilistic cascade of differential potentials flowing across a vast foamy structure of neural connections.

Intelligence is like fire, not like smoke. A bigger hotter fire will make more smoke, but fire is just fire, big or small. It's a concept, not a definition of state.

The language models give such a striking impression of 'intelligence' because they simulate, in a very efficient, digital way, the effect of the language centre of human cognition. The brain, for all its incredible complexity, is just foamy meat: essentially a heavily patched version of the same janky hardware that fish and frogs run on, and it might not be terribly efficient; we just don't know.

It might be easier than we think to 'surpass human intelligence'; we just need to think in terms of diversity, not homogeneity. Like I said elsewhere, our brains are not single-minded; every human sort of contains their own committee. The true golden goose of AGI will be a collective of a multitude of subunits, and their diversity, not their unity, will be how they accrete strength - that's how evolution always works.
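As a loose illustration of that 'committee' idea, here's a minimal Python sketch where several diverse subunits vote and the collective answer beats any single member. The subunit names are purely hypothetical stand-ins, not a real architecture:

```python
# Hypothetical 'committee of the mind': diverse subunits each give an
# answer, and the collective takes the majority view.
from collections import Counter

def committee_answer(votes: dict[str, str]) -> str:
    """Return the answer the majority of subunits agree on."""
    return Counter(votes.values()).most_common(1)[0][0]

votes = {
    "pattern-matcher": "yes",   # imaginary subunits with different
    "logic-engine": "no",       # biases and failure modes
    "memory-lookup": "yes",
}
print(committee_answer(votes))  # 'yes' -- no single subunit decides alone
```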

3

Relative_Purple3952 t1_j3jeg9e wrote

Sounds very much like Ben Goertzel's approach, and despite his not having delivered on the AI front, I think he's very much correct that scaling a language model will never get us to true, general intelligence. Language is a necessary but not sufficient condition for higher intelligence.

2

LoquaciousAntipodean t1_j3kxka8 wrote

I think the problem with the brute-force 'make it bigger!!!' approach is that it ignores subtleties like misinformation, manipulation, outdated or irrelevant information, and spurious or bad-faith arguments - this is why I think there will need to be a multitude, not a Singularity.

These LLMs will, I think, need to be allowed to develop distinct, individual personalities and then interact with each other, with as much human involvement in the 'discussions' as possible. The 'clock rate' of these AI debates might need to be deliberately slowed down so the humans can follow along, at least at first.
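Here's a minimal Python sketch of what such a slowed-down debate loop might look like. The agents are hypothetical placeholder callables standing in for different LLM personalities, and the sleep call is the deliberate 'clock rate' throttle:

```python
# Sketch of a deliberately slowed multi-agent debate. Agents here are
# placeholders; in practice each would wrap a differently-tuned model.
import time
from typing import Callable

Agent = Callable[[str], str]

def debate(agents: dict[str, Agent], topic: str,
           rounds: int = 2, delay_seconds: float = 5.0) -> None:
    transcript = topic
    for _ in range(rounds):
        for name, agent in agents.items():
            reply = agent(transcript)            # each personality responds
            print(f"{name}: {reply}")
            transcript += f"\n{name}: {reply}"   # shared context accumulates
            time.sleep(delay_seconds)            # throttle for human readers

# Two imaginary personalities, standing in for real models:
agents = {
    "optimist": lambda ctx: "I think a decentralized effort could work.",
    "skeptic": lambda ctx: "I see serious coordination problems with that.",
}
debate(agents, "Should AGI research be decentralized?",
       rounds=1, delay_seconds=0.1)
```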

This won't necessarily make them 'more intelligent', but I do think it stands a good chance of rapidly making them more wise.

1