
wisintel t1_japg1ua wrote

The whole premise is flawed. The Octopus learned English, and while it may not have the embodied experience of being a human, if it understands concepts it can infer. Every time I read a book, through nothing but language I “experience” an incredible range of things I have never done physically. Yes, the AI is trained to predict the next word, but how is everyone so sure the AI isn’t eventually able to infer meaning and concepts from that training?

12

Slow-Schedule-7725 t1_japinwh wrote

also, as to how everyone is so sure they aren’t able to infer meaning and concepts from that training: someone made them, built them. it’s the same way someone knows what makes a car engine work or an airplane fly, just much more complicated. i’m not saying a machine won’t eVER be able to do these things, no one can say that for sure, but LLMs cannot. they do “learn,” but only to the extent of their programming, which is why AGI and ASI would be such a big deal.

−2

wisintel t1_japjask wrote

Actually, the makers of ChatGPT can’t tell how it decides what to say in answer to a question. My understanding is that there is a black box between the training data and the answers given by the model.

13

gskrypka t1_jar05nt wrote

As far as I understand, we cannot reverse engineer the way the text is generated because of the huge number of parameters, but I believe we understand the basic principles of how these models work.

1

Baldric t1_japrrod wrote

I understand the meanings of both '2' and '3+6,' while a calculator does not comprehend the significance of these numbers. However, the only difference between me and a calculator is that I had to learn the meaning of these numbers because my mind was not pre-programmed. The meanings of numbers are abstract concepts that are useful in the learning process, and creating these abstractions in my mind was likely the only way to learn how to perform calculations.

Neural networks have the ability to learn how to do math and create algorithms for calculations. The question is whether they can create these abstractions to aid in the learning process. I believe that the answer is almost certainly yes, depending on the architecture and training process (toy example below).

The statement, "they do 'learn,' but only to the extent of their programming," is open to multiple interpretations. While it is true that the learning ability of neural networks is limited by their programming, we use neural networks specifically to create algorithms that we cannot program ourselves. They are capable of performing tasks that we are unable to program them to do; maybe one of these tasks is to infer meaning and concepts from the training.
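
To make the first point a bit more concrete, here is a minimal sketch in Python (the multiplication task, network size, and training settings are illustrative choices of mine, nothing specific to LLMs): a tiny network that picks up multiplication purely from examples, even though no line of the code spells out the rule.

```python
# a toy network that learns multiplication from examples alone;
# nothing below encodes the rule "multiply a by b" explicitly.
import numpy as np

rng = np.random.default_rng(0)

# training data: pairs (a, b) in [0, 1] and their product
X = rng.uniform(0, 1, size=(5000, 2))
y = (X[:, 0] * X[:, 1]).reshape(-1, 1)

# one hidden layer of 32 tanh units; whatever "abstraction" it forms lives in these weights
W1 = rng.normal(0, 0.5, size=(2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(20000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                        # gradient of 0.5 * mean squared error

    # backward pass, plain full-batch gradient descent
    dW2 = h.T @ err / len(X);  db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
    dW1 = X.T @ dh / len(X);   db1 = dh.mean(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# ask about a pair it was never explicitly "told" the answer to
test = np.array([[0.3, 0.7]])
print(np.tanh(test @ W1 + b1) @ W2 + b2)  # should land near 0.21 after training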

4

ShowerGrapes t1_jar0z6x wrote

>my mind was not pre-programmed

in a very real way, your mind was programmed - just through millions of years of evolution.

2

Baldric t1_jarp9rr wrote

Yes, it was programmed, but sadly not for mathematics.

Interestingly, I think the architectures we create for neural networks are, or can be, similar to the brain structures evolution came up with. For example, groups of biological neurons correspond to hidden layers, action potentials in dendrites are similar to activation functions, and the cortex might correspond to convolutional layers. I’m pretty sure we will eventually invent the equivalent of neuroplasticity and find the other missing pieces, and then singularity or doomsday will follow.
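
As a rough sketch of that mapping in code (the layer sizes and the PyTorch layer choices are just my illustrative picks; the analogy itself is loose):

```python
# loose analogy only: conv layers ~ cortex-like local receptive fields,
# activation functions ~ a firing threshold, linear layers ~ groups of neurons
import torch.nn as nn

brain_ish = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # local filters over the input "field"
    nn.ReLU(),                                  # nonlinear "fire or don't" step
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 64),                 # a fully connected "group of neurons"
    nn.ReLU(),
    nn.Linear(64, 10),                          # readout layer
)
```

What’s missing is exactly the neuroplasticity part: the wiring above is fixed once the model is defined, and only the weights change during training.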

1

Surur t1_jaqcbpd wrote

> In a recent paper, he proposed the term distributional semantics: “The meaning of a word is simply a description of the contexts in which it appears.” (When I asked Manning how he defines meaning, he said, “Honestly, I think that’s difficult.”)

This interpretation makes more sense; otherwise, how would we understand concepts we have never experienced and never will? E.g. the molten core of the earth is just a concept.
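
A toy version of that idea in Python (the three-sentence corpus and the window size are arbitrary choices, just to show the mechanics): represent each word by counts of the words around it, and "meaning" becomes similarity between those context counts.

```python
# distributional semantics in miniature: a word's vector is just the
# contexts it appears in, and similarity of vectors stands in for meaning.
from collections import Counter, defaultdict
from math import sqrt

corpus = (
    "the molten core of the earth is hot . "
    "the molten core of the planet is hot . "
    "the frozen surface of the moon is cold ."
).split()

window = 2
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            vectors[word][corpus[j]] += 1  # count each neighbouring word

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

print(cosine(vectors["earth"], vectors["planet"]))  # higher
print(cosine(vectors["earth"], vectors["moon"]))    # lower
```

Here "earth" and "planet" come out more similar than "earth" and "moon" purely because they show up in near-identical contexts; nobody had to visit the molten core.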

1

Slow-Schedule-7725 t1_japi03f wrote

well, you may not have personally experienced them, but you inevitably will have thoughts, opinions, and memories in reaction to the experiences in the book and, as a result, emotions. all of these happen without your knowledge or effort and will, in some way, inform how you go about your life after reading said book. even if you haven’t personally “experienced” the specific events in the book, what you hAVE experienced will inform your reaction to and opinion of the event(s). experience is uniquely and wholly different from inference, and you can’t compare human inference to machine inference: we simply don’t know enough about the human mind to do so. what we do know is that every single experience in one’s life somehow informs every inference we make, which, at this moment and as far as i know, is impossible for a machine because it cannot “experience” the way we can.

−3

wisintel t1_japj0au wrote

How do you, this lady writing about octopuses, or anyone else “know” that? No one knows how consciousness works. No one really understands how LLMs convert training data into answers. So how can anyone say so definitively what is or isn’t happening? I understand different people have different opinions, and some people believe that ChatGPT is just a stochastic parrot. I can accept anyone holding that opinion; I get frustrated when people state it as fact. The fact is, no one knows for sure at the moment.

9

ShowerGrapes t1_jar15jt wrote

what if it experiences emotion similar to how a very autistic human would? like maybe it’s unable to process these emotions (right now) and so looks like it has none.

1