Submitted by Green-Future_ t3_126211o in singularity

I am in two minds about the AI innovations of the past few weeks... on the one hand, I think the new products that have been released are phenomenal and will replace traditional search engines.

On the other hand, I think the LLMs released in the past few weeks are still fundamentally "alien", and they are only able to resemble or improve upon human output along some dimensions. Taking a bird's-eye view, they appear to suffer from the same problem that has always cursed deep learning models: there isn't one model that can do everything.

For example, at one point I was working on a deep learning model for classifying Alzheimer's Disease (AD) patients versus controls using brain waves (EEGs). It became apparent that while I could create a model to distinguish AD patients from controls, validating a tool to do this in the REAL WORLD would be completely unfeasible. It would require that this "early diagnostic aid" also distinguish the EEGs of patients with other similar diseases, with unrelated diseases, who have taken stimulants, and so on. Obviously, with all of that data and infinite computing power, this could be learned... but that is not a system that will be able to invent. In similar fashion (but WITH "all this data"), LLMs can understand the meaning and context behind words, but ultimately that is all they do. They are still not inventors or innovators. They are simply machines doing what they are trained to do, which is accurate next-token prediction.
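(To make the EEG example concrete, here is a minimal sketch of that kind of two-class pipeline, with placeholder data and a small scikit-learn network standing in for the actual deep model; the feature counts and names are made up for illustration, not taken from the real project.)

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: 200 recordings, each reduced to 64 band-power features
# (e.g. delta/theta/alpha/beta power per channel). Labels: 1 = AD, 0 = control.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # stand-in features, not real EEG data
y = rng.integers(0, 2, size=200)    # stand-in labels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# The catch described above: this only ever learns "AD vs. control".
# EEGs from other dementias, other diseases, or patients on stimulants were
# never in the training distribution, so the model has no basis for them.
```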

How can AGI be trained if it is not trained for a specific task (which is what would make its intelligence "general")? When we as humans are "trained", we learn from a very broad range of sensory inputs, much of the time with no supervision. A child learns language by inferring meaning from the actions of others, not from direct labelling. A child learns words from different people, each with different tones of voice, different gestures, and different facial expressions. An LLM can't be the only piece of the AGI puzzle if AGI learns the way a human does.
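(For contrast, the only "supervision" an LLM gets is the text itself: each position's label is simply the next token. A toy sketch of that objective, with a stand-in embedding plus linear layer rather than any real LLM:)

```python
import torch
import torch.nn.functional as F

# Toy batch of token ids, e.g. "the cat sat on the mat"
tokens = torch.tensor([[2, 7, 4, 9, 2, 5]])   # shape: (batch=1, seq_len=6)
vocab_size = 16

# Stand-in for an LLM: anything that maps token ids to next-token logits.
embed = torch.nn.Embedding(vocab_size, 32)
head = torch.nn.Linear(32, vocab_size)
logits = head(embed(tokens))                  # (1, 6, vocab_size)

# Next-token prediction: position i is trained to predict token i+1,
# so the "labels" come straight from the input text itself.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),   # predictions for positions 0..4
    tokens[:, 1:].reshape(-1),                # targets are tokens 1..5
)
print(loss.item())
```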

11

Comments


simmol t1_je77ct0 wrote

The training via broad sensory inputs will probably come with multimodal LLMs. Essentially, next-generation LLMs will be able to look at an image and either answer questions about that particular image (GPT-4 probably has this capability) or treat the image itself as the input and say something about it unprompted (GPT-4 probably does not have this capability). I think the latter ability will make an LLM seem more AGI-like, given that current LLMs only respond to user inquiries. But if an AGI can respond to an image, and you put it inside a robot, then presumably the robot can respond naturally to the ever-changing images coming from its sensors and talk about them accordingly.
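As a rough illustration of the "say something about an image unprompted" mode (not GPT-4 itself, whose multimodal interface isn't shown here), an open captioning model can already be run with no text prompt at all; the file name below is hypothetical:

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Open-source image-captioning model standing in for the "unprompted" mode
# described above, not for GPT-4.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("frame_from_robot_camera.jpg")   # hypothetical camera frame
inputs = processor(images=image, return_tensors="pt")
caption_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(caption_ids[0], skip_special_tokens=True))
```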

I think once this happens, the LLM will seem less like a tool and more like a being. This probably does not solve the symbolic-logic part of building up knowledge from a simple set of rules, but that is probably a separate task of its own that will not be solved by multimodality, but rather by layering the current LLM with another deep learning model (or via APIs/plugins with third-party apps).

11

brianberns t1_je7iej4 wrote

"On the highway towards Human-Level AI, Large Language Model is an off-ramp."

Yann LeCun

3

ReadSeparate t1_jed4ghg wrote

Yann LeCun is consistently bad and dumb on everything, so I assume this means that LLMs are a direct route to AGI.

1

NarrowTea t1_je81s5n wrote

Well, AI still sucks at remembering what it did.

1