
Slow-Schedule-7725 t1_japinwh wrote

Also, as to how everyone is so sure they aren't able to infer meaning and concepts from that training: someone made them, built them. It's the same way someone knows what makes a car engine work or an airplane fly, just much more complicated. I'm not saying a machine won't EVER be able to do these things, no one can say that for sure, but LLMs cannot. They do "learn," but only to the extent of their programming, which is why AGI and ASI would be such a big deal.

−2

wisintel t1_japjask wrote

Actually, the makers of ChatGPT can't tell how it decides what to say in answer to a question. My understanding is that there is a black box between the training data and the answers the model gives.

13

gskrypka t1_jar05nt wrote

As far as I understand, we cannot reverse-engineer the way the text is generated due to the large number of parameters, but I believe we understand the basic principles of how these models work.

1

Baldric t1_japrrod wrote

I understand the meanings of both '2' and '3 + 6', while a calculator does not comprehend the significance of these numbers. However, the only difference between me and a calculator is that I had to learn the meanings of these numbers, because my mind was not pre-programmed with them. The meanings of numbers are abstract concepts that are useful in the learning process, and creating these abstractions in my mind was likely the only way to learn how to perform calculations.

Neural networks have the ability to learn how to do math and to create algorithms for calculations. The question is whether they can create these abstractions to aid in the learning process. I believe the answer is almost certainly yes, depending on the architecture and the training process.
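Here's a rough sketch of what I mean (a toy example, not from any real system; all values are illustrative): a single linear "neuron" that picks up addition purely from input/output examples, with no addition rule programmed into it.

```python
import numpy as np

# A minimal sketch: one linear unit learns y = a + b from examples alone.
rng = np.random.default_rng(0)
w = rng.normal(size=2)   # two weights, randomly initialized
b = 0.0                  # bias
lr = 0.1                 # learning rate

for step in range(5000):
    x = rng.uniform(-1, 1, size=2)   # random pair (a, b)
    y_true = x[0] + x[1]             # target: their sum
    y_pred = w @ x + b               # the model's current guess
    err = y_pred - y_true
    w -= lr * err * x                # gradient descent on squared error
    b -= lr * err

print(np.round(w, 3), round(b, 3))   # roughly [1. 1.] and 0: it "learned" to add
```

Of course, the weights converging to [1, 1] is the algorithm for addition; nobody wrote that in, the training found it.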

The statement "they do 'learn,' but only to the extent of their programming" is open to multiple interpretations. While it is true that the learning ability of neural networks is limited by their programming, we use neural networks specifically to create algorithms that we cannot program ourselves. They are capable of performing tasks that we are unable to program them to do directly, and maybe one of those tasks is inferring meaning and concepts from the training data.

4

ShowerGrapes t1_jar0z6x wrote

> my mind was not pre-programmed

In a very real way, your mind was programmed, just through millions of years of evolution.

2

Baldric t1_jarp9rr wrote

Yes, it was programmed, but sadly not for mathematics.

Interestingly, I think the architectures we create for neural networks are, or can be, similar to the brain structures evolution came up with. For example, groups of biological neurons correspond to hidden layers, action potentials in dendrites are similar to activation functions, and the cortex might correspond to convolutional layers. I'm pretty sure we will eventually invent the equivalent of neuroplasticity and find the other missing pieces, and then the singularity, or doomsday, will follow.
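To make the analogy concrete, here's a loose sketch in PyTorch (the layer sizes assume a 28x28 grayscale input, which is my own arbitrary choice; the mapping in the comments is an analogy, not a claim of biological accuracy):

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),  # "cortex": local receptive fields scanning the input
    nn.ReLU(),                       # "action potential": a unit fires only past a threshold
    nn.Flatten(),
    nn.Linear(8 * 26 * 26, 10),      # a "group of neurons": a fully connected layer
)
```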

1

Surur t1_jaqcbpd wrote

> In a recent paper, he proposed the term distributional semantics: “The meaning of a word is simply a description of the contexts in which it appears.” (When I asked Manning how he defines meaning, he said, “Honestly, I think that’s difficult.”)

This interpretation makes more sense; otherwise, how would we understand concepts we have never experienced and never will? E.g., the molten core of the Earth is just a concept.
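A toy sketch of the idea, counting which words share contexts in a made-up corpus (the corpus, the ±2 window, and cosine similarity are my own arbitrary choices, just to show the mechanism):

```python
import numpy as np

# Distributional semantics, minimally: a word's "meaning" is the
# contexts it appears in, represented as a co-occurrence vector.
corpus = [
    "the core of the earth is molten rock",
    "lava is molten rock from a volcano",
    "the cat sat on the warm mat",
    "the dog sat on the mat",
]
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(vocab)))

for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for c in words[max(0, i - 2):i + 3]:   # +-2 word context window
            if c != w:
                counts[idx[w], idx[c]] += 1

def similarity(a, b):
    """Cosine similarity between two words' context vectors."""
    u, v = counts[idx[a]], counts[idx[b]]
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

print(similarity("cat", "dog"))     # high: they appear in the same contexts
print(similarity("cat", "molten"))  # zero: they never share a context
```

On this view, "cat" and "dog" end up close without the system ever experiencing either, purely because they occur in the same surroundings, which is exactly how a concept like the molten core of the Earth could be represented.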

1