
Cheap_Meeting t1_j21v6mi wrote

I think the main limitations of LLMs are:

  1. Hallucinations: They will make up facts.
  2. Alignment/Safety: They will sometimes give undesirable outputs.
  3. "Honesty": They cannot make reliable statements about their own knowledge and capabilities.
  4. Reliability: They can perform a lot of tasks, but often not reliably.
  5. Long context (& lack of memory): They cannot (trivially) be used if the input size exceeds the context length (see the sketch after this list).
  6. Generalization: They often require task-specific finetuning or prompting.
  7. Single modality: They cannot easily perform tasks on audio, images, or video.
  8. Input/output paradigm: It is unclear how to use them for tasks that don't have specific inputs and outputs (e.g., tasks that require taking many steps).
  9. Agency: LLMs don't act as agents which have their own goals.
  10. Cost: Both training and inference incur significant cost.
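To make #5 less abstract, here's a minimal map-reduce sketch of the usual workaround: chunk the input so each piece fits the context window, summarize the chunks, then summarize the summaries. Everything here (`call_llm`, the character-based chunking, the prompts) is a hypothetical placeholder, not any particular model's API.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for whatever completion API you use.
    raise NotImplementedError("replace with your model's completion call")

def chunk(text: str, max_chars: int = 8000) -> list[str]:
    # Crude character-based split; a real version would count tokens.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_long(text: str) -> str:
    # Map: summarize each chunk independently.
    partial = [call_llm(f"Summarize:\n\n{c}") for c in chunk(text)]
    # Reduce: merge the partial summaries in a final call.
    return call_llm("Combine these partial summaries:\n\n" + "\n\n".join(partial))
```

This loses cross-chunk context, of course, which is exactly why it's a workaround and not a fix.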

Flag_Red t1_j22aiul wrote

Only #1 here really relates to their symbolic reasoning capabilities. It does imply that symbolic reasoning is a secondary objective for the models, though.
