
BadassGhost t1_j5a7cip wrote

Fair, I should have swapped them!

What leads you to believe LLMs can't do first-order logic? I just tested it with ChatGPT and it seems to have a firm grasp of the concept. First-order logic is pretty low on the totem pole of LLM abilities, and the same goes for symbolic reasoning. Try it for yourself!
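To make "try it for yourself" concrete, here's a rough sketch of the kind of probe I mean, run through the API instead of the chat UI. It assumes the official openai Python SDK (1.x) with an OPENAI_API_KEY in the environment, and "gpt-4o-mini" is just a placeholder model name; the made-up predicates are there so the answer has to be derived rather than recalled.

```python
# Hedged sketch, not a benchmark: assumes the openai Python SDK (>= 1.0),
# OPENAI_API_KEY set in the environment, and "gpt-4o-mini" as a stand-in model.
from openai import OpenAI

client = OpenAI()

# A first-order syllogism over nonsense predicates: all blorgs are fleems,
# no fleem is a quix, Zark is a blorg -> Zark is a fleem -> Zark is not a quix.
prompt = (
    "All blorgs are fleems. No fleem is a quix. Zark is a blorg.\n"
    "Is Zark a quix? Answer yes or no, then justify the answer step by step."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```

The correct answer is "no", and the interesting part is whether the justification actually chains the two premises rather than just asserting a conclusion.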

I'm not exactly sure what you mean by abstraction for neural nets. Are you talking about having defined meanings for the inputs, outputs, or internal parts of the model? I don't see why that would be necessary for general intelligence. Humans don't seem to have substantial, distinct, defined meanings for most regions of the brain, except for language (spoken and internal), which LLMs are also capable of.

The human brain also seems to be a giant function, as far as we can tell (setting aside any discussion of subjective experience and focusing just on intelligence).

> This type of training detects concrete local patterns in the dataset, but that’s it - these models can’t generalize their observations in any way.

No offense, but this statement suggests a real gap in knowledge of the last 6+ years of NLP progress. LLMs absolutely can generalize outside of their training set; that's much of why they've proved useful and why funding for them has skyrocketed. You can ask ChatGPT to come up with original jokes about topics that have almost certainly never been put together in a joke before, ask it to read code it has never seen and give recommendations and answers about it, ask it to invent new religions, etc etc.

These models are pretty stunning in their capability to generalize. That's the whole point!
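Same idea for the generalization claim: hand it something that provably isn't in any training set because it was written fresh for this comment, and see whether the response tracks that specific input. This is a hedged sketch under the same assumptions as above (openai Python SDK 1.x, API key in the environment, placeholder model name).

```python
# Hedged sketch of an "unseen input" probe; same SDK and model assumptions as above.
from openai import OpenAI

client = OpenAI()

# Written fresh for this comment, so it can't have been memorized verbatim.
unseen_code = '''
def pick_winner(scores):
    best = None
    for name, score in scores.items():
        if best is None or score > scores[best]:
            best = name
    return best
'''

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Review this function and point out edge cases it mishandles:\n" + unseen_code,
    }],
)
print(resp.choices[0].message.content)
```

What matters is whether the critique is specific to this code (empty input, ties, non-numeric scores) rather than generic boilerplate, which is exactly the generalization question.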
