JavaMochaNeuroCam t1_j6csr04 wrote

You won't be you in even the first 100 years after AGI. You will evolve and transcend so fast that your future self will be less like you than you are like your parents. Of course, there may be a choice:

A: Stay human with a human brain and live forever
B: Have a human form but with 2x intelligence
C: Become a God


JavaMochaNeuroCam t1_j64e3yg wrote

Alan was really psyched about GATO (600+ tasks/domains)

I think it's relatively straightforward to bind experts to a general cognitive model.

Basically, a Mixture of Experts (MoE) would train the domain-specific model jointly with the cortex (language) model. That is, a pre-trained image-recognition model can describe an image (e.g., a cat) in text to an LLM, but also bind that description to a vector representing the neural state that captures the representation.

So, you're just binding the language to the domain-specific representations.
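A minimal sketch of that binding idea, assuming a frozen image encoder and a learned linear projection into the LLM's embedding space. All dimensions, the random projection, and the `bind` helper are illustrative assumptions, not any particular system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the frozen image encoder emits 512-d features,
# and the language model expects 768-d token embeddings.
IMG_DIM, LLM_DIM = 512, 768

# The "binding" is just a learned projection W (random here for
# illustration); in training, W would be optimized so that the projected
# image feature lands near the LLM's embedding of the matching caption
# (e.g., "a cat").
W = rng.normal(scale=0.02, size=(IMG_DIM, LLM_DIM))

def bind(image_feature: np.ndarray) -> np.ndarray:
    """Project a frozen image-encoder feature into LLM embedding space."""
    return image_feature @ W

img_feat = rng.normal(size=IMG_DIM)   # stand-in for image-encoder output
token = bind(img_feat)                 # now usable as a pseudo-token
assert token.shape == (LLM_DIM,)
```

The point of the sketch is that only the projection is trained; both the expert and the language model stay frozen, which is roughly why binding experts to a common cognitive model can be cheap.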

If I'm not mistaken, the hippocampus, thalamus, and claustrum are somehow involved in that in humans.


JavaMochaNeuroCam t1_j5cmn9t wrote

Read Stanislas Dehaene's "Consciousness and the Brain". Also, Nobel laureate Eric R. Kandel's "The Disordered Mind" and "In Search of Memory".

Those are just good summaries. The evidence of disparate regions serving specific functions is indisputable. I didn't believe it either for most of my life.

For example, Wernicke's area comprehends speech. Broca's area generates speech. Neither is necessary for thought or consciousness.

Wernicke's area 'encodes' the meaning of the word sounds into a neural state vector. These meaning-bound tokens are then interpreted by the neocortex, I believe.

But, this graphic isn't meant to suggest we should connect them together. I think he points out that the training done for each model could be employed on a common model, and it would learn to fuse the information from the disparate domains.


JavaMochaNeuroCam t1_iy3aaa1 wrote

You guys are thinking bottom-up. To predict AI/robotics' impact, you have to think from the perspective of market drivers, costs, and competition. Companies that compete will do whatever it takes to stay competitive. Manufacturing, in particular, is obviously going to transition fastest. Also, in service businesses, there are micro-tasks that get automated. These market drivers push automation and AI development. Investors can drive some of it, but in the end there has to be a market. So automation spreads along the paths of least resistance, or greatest return on investment: automated vehicles, home automation, customer service.

So there will be points along the curve of AI intelligence, robotic dexterity, cost, and efficiency at which you can ask: what skills can this do more efficiently than humans, given total cost of ownership (build, train, maintain, power)? Obviously, with above-human intelligence AND robots that can do any task a human can, it's game over.
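As a back-of-envelope illustration of that break-even point (every figure below is a made-up assumption, not market data), cumulative total cost of ownership for a robot versus a human worker crosses over after a few years:

```python
# Illustrative assumptions only (not real market data):
robot_build = 250_000      # build + train, one-time cost
robot_annual = 20_000      # power + maintenance per year
human_annual = 60_000      # fully loaded wage per year

def robot_tco(years: int) -> int:
    """Cumulative total cost of ownership for the robot after `years`."""
    return robot_build + robot_annual * years

def human_tco(years: int) -> int:
    """Cumulative cost of the human worker after `years`."""
    return human_annual * years

# First year at which the robot's cumulative cost undercuts the human's.
breakeven = next(y for y in range(1, 50) if robot_tco(y) < human_tco(y))
print(breakeven)  # → 7 under these assumptions
```

The exact year is meaningless; the point is that once the annual running cost drops below the wage, displacement is only a matter of amortizing the build cost.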

The point is, along that curve there will be increasing disruption and gradual but accelerating macroeconomic effects. Companies will evolve or die. People will adapt or end up homeless. The homeless will probably be given welfare. Drugs and crime will run rampant. There will be an industry to contain the growing homeless. It will probably involve robotic police. In the end, the upper layer will merge with the AI and the billions of displaced will just be contained. In 100 years they will mostly die off. The AI will build gleaming cities and expand to interstellar space.

That is, if we don't have AI wars. If we do, AI+robotics accelerates outside market forces. The dominant AI will know how to exterminate us ... and it will.


JavaMochaNeuroCam t1_isb9gcn wrote

'Robot' is just the non-biological mechanical interface of a computational system into the physical world. Bodies are just biological interfaces of computational systems into the physical world.

The question is misdirected. You don't become friends with actuators, servos, and power delivery. Nor do you 'friend' a body independent of the mind.

Obviously, the essence of 'friendship' is things like mutual interests, trust, goals, etc. For that, you need computational entities capable of complex world modeling, planning, analysis, prediction, learning ... all various forms of goals set by the evolutionary imperative to survive by becoming smarter and more effective at exploiting and manipulating physical resources than the competition.

Therefore... given that humans have extremely slow evolutionary rates and computers improve exponentially, the ONLY semblance of humans in the (near?) future will be those who adapt to leverage and/or merge with robotically facilitated silicon computational systems.

You are already friends with robots. You probably wouldn't be here if you weren't. Your phone, car, home ... for example.


JavaMochaNeuroCam t1_iqzbiyo wrote

The irony is that the joke is about COLLECTIVE IQ being worse than a decomposing fruit's ... and you demonstrated an example of that by exhibiting 1 sample out of 8+ billion as your proof of statistical significance. Of course, the average (mean) IQ is 100 by definition. But the intelligence exhibited by a mob of professors can, and usually does, drop exceptionally fast. With the selfish stupidity of humanity in sum, we have the same chances of survival as a rotting apple.
Collectively, we are fools racing to the cliffs.