All-DayErrDay t1_j204aob wrote
Reply to comment by seventyducks in [D] DeepMind has at least half a dozen prototypes for abstract/symbolic reasoning. What are their approaches? by valdanylchuk
What are the central limitations we're considering here? Let's define them in concrete terms.
Cheap_Meeting t1_j21v6mi wrote
I think the main limitations of LLMs are:
- Hallucinations: They will make up facts.
- Alignment/Safety: They will sometimes give undesirable outputs.
- "Honesty": They cannot make reliable statements about their own knowledge and capabilities.
- Reliability: They can perform a lot of tasks, but often not reliably.
- Long-context (& lack of memory): They cannot (trivially) be used when the input exceeds the context length (see the sketch after this list).
- Generalization: They often require task-specific finetuning or prompting.
- Single modality: They cannot easily perform tasks on audio, images, or video.
- Input/Output paradigm: It is unclear how to use them for tasks that don't have well-defined inputs and outputs (e.g. tasks that require taking many steps).
- Agency: LLMs don't act as agents which have their own goals.
- Cost: Both training and inference incur significant cost.
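To make the long-context point concrete, here's a minimal Python sketch of the usual workaround: chunk the input to fit the window, then map-reduce over the chunks. Everything here is illustrative, not any particular API: `call_llm` is a hypothetical stand-in for whatever completion endpoint you use, and token counts are crudely approximated by whitespace-split word counts (a real tokenizer would give exact numbers).

```python
def chunk_text(text: str, max_tokens: int = 2048) -> list[str]:
    """Split `text` into pieces that each fit a fixed context window.

    Word count is a rough proxy for token count here.
    """
    words = text.split()
    return [
        " ".join(words[i : i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]


def summarize_long_document(text: str, call_llm) -> str:
    """Map-reduce summary: summarize each chunk, then combine the summaries.

    The model never sees the whole document at once, which is exactly the
    limitation being worked around: information that spans two chunks can
    be lost in the partial summaries.
    """
    partials = [call_llm(f"Summarize:\n{chunk}") for chunk in chunk_text(text)]
    return call_llm("Combine these summaries into one:\n" + "\n".join(partials))
```

The point of the sketch is that the workaround lives entirely outside the model; the model itself still has no memory across calls.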
Flag_Red t1_j22aiul wrote
Only the first of these (hallucinations) really relates to their symbolic reasoning capabilities. It does imply that symbolic reasoning is a secondary objective for the models, though.
seventyducks t1_j21gedt wrote
To be honest, I'm not going to spend a long time thinking it through and being intellectually precise for a Reddit comment; I'd recommend you check out the AGI Debate I mentioned above for experts' opinions.