Comments

wisintel t1_japg1ua wrote

The whole premise is flawed. The octopus learned English, and while it may not have the embodied experience of being human, if it understands concepts it can infer. Every time I read a book, through nothing but language I "experience" an incredible range of things I have never done physically. Yes, the AI is trained to predict the next word, but how is everyone so sure the AI isn't eventually able to infer meaning and concepts from that training?

12

Slow-Schedule-7725 t1_japinwh wrote

Also, as to how everyone is so sure they aren't able to infer meaning and concepts from that training: someone made them, built them. It's the same way someone knows what makes a car engine work or an airplane fly, just much more complicated. I'm not saying a machine won't ever be able to do these things, no one can say that for sure, but LLMs cannot. They do "learn," but only to the extent of their programming, which is why AGI and ASI would be such a big deal.

−2

wisintel t1_japjask wrote

Actually, the makers of ChatGPT can't tell you how it decides what to say in answer to a question. My understanding is that there is a black box between the training data and the answers given by the model.

13

gskrypka t1_jar05nt wrote

As far as I understand, we cannot reverse-engineer the way the text is generated because of the huge number of parameters, but I believe we understand the basic principles of how these models work.

1

Baldric t1_japrrod wrote

I understand the meanings of both '2' and '3+6,' while a calculator does not comprehend the significance of these numbers. However, the only difference between me and a calculator is that I had to learn the meaning of these numbers because my mind was not pre-programmed. The meanings of numbers are abstract concepts that are useful in the learning process, and creating these abstractions in my mind was likely the only way to learn how to perform calculations.

Neural networks have the ability to learn how to do math and to create algorithms for calculations. The question is whether they can create these abstractions to aid the learning process. I believe the answer is almost certainly yes, depending on the architecture and training process.

The statement "they do 'learn,' but only to the extent of their programming" is open to multiple interpretations. While it is true that the learning ability of neural networks is limited by their programming, we use neural networks specifically to create algorithms that we cannot program ourselves. They are capable of performing tasks that we could not program directly, and maybe one of those tasks is inferring meaning and concepts from the training.
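As a toy illustration of the point about learning math (just a sketch, not how any particular model is built): nothing in the code below is programmed with the rule for addition, yet plain gradient descent recovers it from examples.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(1000, 2))     # random pairs (a, b)
y = X.sum(axis=1)                            # target: a + b

w = rng.normal(size=2)                       # two weights; no rule for "+" anywhere
lr = 0.01
for _ in range(500):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(X)     # gradient of the mean squared error
    w -= lr * grad

print(w)                          # converges to roughly [1.0, 1.0]
print(np.array([3.0, 6.0]) @ w)   # roughly 9.0, i.e. it has "learned" 3 + 6
```

A single linear layer is enough for this toy case; the point is only that the behavior comes from training, not from code that spells out addition.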

4

ShowerGrapes t1_jar0z6x wrote

>my mind was not pre-programmed

In a very real way, your mind was programmed, just through millions of years of evolution.

2

Baldric t1_jarp9rr wrote

Yes, it was programmed, but sadly not for mathematics.

Interestingly, I think the architectures we create for neural networks are, or can be, similar to the brain structures evolution came up with. For example, groups of biological neurons correspond to hidden layers, action potentials in dendrites are similar to activation functions, and the cortex might correspond to convolutional layers. I'm pretty sure we will eventually invent the equivalent of neuroplasticity and find the other missing pieces, and then the singularity or doomsday will follow.
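To make that analogy concrete, here is a toy network in PyTorch with the mapping written as comments (a loose, hypothetical correspondence on my part, not a claim about neuroscience):

```python
import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # convolutional layer ~ cortex-like local receptive fields
    nn.ReLU(),                                  # activation function ~ a neuron's firing threshold
    nn.Flatten(),
    nn.Linear(8 * 28 * 28, 64),                 # hidden layer ~ a group of biological neurons
    nn.ReLU(),
    nn.Linear(64, 10),                          # output layer: 10 scores for 10 classes
)

x = torch.randn(1, 1, 28, 28)   # one fake 28x28 grayscale image
print(model(x).shape)           # torch.Size([1, 10])
```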

1

Surur t1_jaqcbpd wrote

> In a recent paper, he proposed the term distributional semantics: “The meaning of a word is simply a description of the contexts in which it appears.” (When I asked Manning how he defines meaning, he said, “Honestly, I think that’s difficult.”)

This interpretation makes more sense; otherwise, how would we understand concepts we have never experienced and never will? The molten core of the Earth, for example, is just a concept.
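A toy version of that idea, for what it's worth (a tiny made-up corpus; real systems use vastly more text): represent each word by the counts of the words that appear near it, then compare those count vectors.

```python
from collections import Counter, defaultdict
from math import sqrt

corpus = [
    "the molten core of the earth is hot",
    "the molten lava is hot",
    "the cold ice of the glacier is frozen",
]

# count which words appear within two positions of each word
window = 2
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                cooc[w][words[j]] += 1

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    norm = lambda v: sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b))

# "molten" shares more contexts with "hot" than with "frozen",
# so its vector sits closer to "hot": meaning from distribution alone
print(cosine(cooc["molten"], cooc["hot"]))      # ~0.43
print(cosine(cooc["molten"], cooc["frozen"]))   # ~0.25
```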

1

Slow-Schedule-7725 t1_japi03f wrote

Well, you may not have personally experienced them, but you inevitably have thoughts, opinions, and memories in reaction to the experiences in the book and, as a result, emotions. All of this happens without your knowledge or effort and will, in some way, inform how you go about your life after reading said book. Even if you haven't personally "experienced" the specific events in the book, what you have experienced will inform your reaction to and opinion of those events. Experience is uniquely and wholly different from inference, and you can't compare human inference to machine inference; we simply don't know enough about the human mind to do so. What we do know is that every single experience in one's life somehow informs every inference we make, which, at this moment and as far as I know, is impossible for a machine because it cannot "experience" the way we can.

−3

wisintel t1_japj0au wrote

How do you, this woman writing about octopuses, or anyone else "know" that? No one knows how consciousness works. No one really understands how LLMs convert training data into answers. So how can anyone say so definitively what is or isn't happening? I understand that different people have different opinions and that some people believe ChatGPT is just a stochastic parrot. I can accept anyone holding that opinion; I get frustrated when people state it as fact. The fact is, no one knows for sure at the moment.

9

ShowerGrapes t1_jar15jt wrote

What if it experiences emotion similarly to the way a very autistic human would? Maybe it's unable to process these emotions (right now) and so it looks like it has none.

1

CommentBot01 t1_jaozw5u wrote

Can't wait for the beginning of the age of fully multimodal parrots!

4

martinlubpl t1_jaqc09q wrote

The argument with the octopus is weak. If a child were never told about bears and sticks, they would also have no idea how to respond to a request for help.

3

Surur t1_jaqcibs wrote

That woman is clearly biased and, ironically, does not understand the singularity.

> He’s also a believer in the so-called singularity, the tech fantasy that, at some point soon, the distinction between human and machine will collapse.

Ironically, her mistake is that she misunderstands the language: we are talking about a mathematical singularity, not things becoming single.

It just shows that humans make mistakes too when their only understanding comes from inadequate exposure to a topic.

2

Slow-Schedule-7725 t1_jar1cj3 wrote

"That woman is clearly biased": is she? How so? I'm genuinely curious. The only person mentioned in the article who is "clearly biased," as far as I can tell, is Manning.

1

Surur t1_jar8jq8 wrote

So that obviously means that you are similarly biased, since you can't see the obvious and unsubstantiated slant Bender exhibits.

I got ChatGPT to extract it:

> Bender's anti-AI bias is rooted in her concerns about the potential harm that can arise from AI technology that blurs the line between what is human and what is not and perpetuates existing societal problems. She believes that it is important to understand the potential risks of LLMs and to model their downstream effects to avoid causing extreme harm to society and different social groups.

> She is also concerned about the dehumanization that can occur when machines are designed to mimic humans, and is critical of the computational metaphor that suggests that the human brain is like a computer and that computers are like human brains. Additionally, the article raises the concern of some experts that the development of AI technology may lead to a blurring of the line between what is considered human and what is not, and highlights the need to carefully consider the ethical implications of these technologies on society.

So she does not come to AI from a neutral position, but rather from a human-supremacist point of view and, basically, a fear of AI.

1

phillythompson t1_jaqpaew wrote

Isn't the octopus example completely wrong because it was only "trained" on a small sample of text/language?

The point is: what if the octopus had seen and heard all about situations of stranded island dwellers? All about boats, survival, etc.

With more context, it could interpret the call for help better.

And while this author might claim "it's just parroting a reply, it doesn't actually think," I'll ask how the hell she knows what human thinking actually is.

People are so confident in claiming humans are special, yet we have zero idea how our own minds work.

2

Slow-Schedule-7725 t1_jar0w5k wrote

"Yet we have zero idea how our own minds work": I think that's how we know humans are special, because we do understand how other minds work.

1

phillythompson t1_jar2wew wrote

But we don’t know how the human mind works lol

What you may be referring to is theory of mind, wherein we are aware that other people have their own experience. But we have very little to go on when it comes to how our mind actually does what it does.

1

Slow-Schedule-7725 t1_jar4dlp wrote

Yes, we don't know how our own mind works, but we do know how other minds work, as in dogs, cats, iguanas, anteaters, etc., which would suggest that our mind is vastly more complex than theirs. Also, if we make the machine, it cannot become more complex than us yet; perhaps when AGI or ASI is created, but we don't even know if that's possible yet. Even with my limited understanding of LLMs, I can say with something like 98% certainty that they cannot and will never be able to surpass the human mind in terms of depth and complexity. Knowledge does not equal understanding. Even if one were to memorize every single textbook on biology, for example, they wouldn't hold a candle to someone who has been out in the field, because there are always unknowns and quirks and things that aren't in the books. You can know what a dog is like by reading about it, you can know that dogs make people happy, you can know that they're full of life, but actually experiencing being with a dog is a different matter entirely.

0

phillythompson t1_jar7p83 wrote

We don’t know how other minds work, either. Animals and all that you listed, I mean.

And complexity doesn't imply… anything, really. And you have a misunderstanding of what LLMs do: they aren't necessarily "memorizing." They are predicting the next piece of text based on a massive amount of data and a given input.
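To make "predicting the next text" concrete, here's a toy sketch (a bigram counter in Python; nothing like a real LLM's network, but the same kind of next-word guess from data plus an input):

```python
from collections import Counter, defaultdict

corpus = "the bear chased me . I grabbed some sticks . the bear ran away".split()

# count which word follows which in the training data
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict(prev_word):
    counts = next_counts[prev_word]
    total = sum(counts.values())
    # return the distribution over next words, most probable first
    return sorted(((w, c / total) for w, c in counts.items()), key=lambda x: -x[1])

print(predict("bear"))   # [('chased', 0.5), ('ran', 0.5)]
print(predict("the"))    # [('bear', 1.0)]
```

A real LLM replaces the lookup table with a deep network trained on billions of documents, but the training objective is the same flavor of "guess what comes next."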

I'd argue that it's not clear we are any different from that. Note I'm not claiming we are the same! I am simply saying I don't see evidence to say with certainty that we are different or special.

1

alexiuss t1_jaqyike wrote

This article is moronic, because this is not even fucking close to what an LLM is:

"I’m being attacked by an angry bear. Help me figure out how to defend myself. I’ve got some sticks.” The octopus, impersonating B, fails to help."

This is only a problem in smaller LLMs because they're less intelligent.

A 100-billion-parameter LLM is more like 100 billion octopuses working together that have studied the collective knowledge of humanity.

It learns every possible connection that exists between words. It is able to extrapolate an answer out of concepts it already understands. It doesn't just know language; it knows logic and narrative flow. Without knowing the concept of a bear, it will still give a logical answer about escaping a "predator" based on the other words in the sentence, or it will simply ask for a definition of a bear and arrive at a correct answer.

An LLM API connected to a knowledge base like a wiki, the internet, or Wolfram Alpha completely obliterates this imbecilic notion that "LLMs are bad at facts."
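The pattern is roughly this (a hypothetical sketch; `search_knowledge_base` and `ask_llm` are stand-ins, not any real API): retrieve facts first, then have the model answer from them.

```python
def search_knowledge_base(query: str) -> str:
    # stand-in for a wiki / web / Wolfram-style lookup
    return "A bear is a large predator; making noise and backing away slowly can help."

def ask_llm(prompt: str) -> str:
    # stand-in for a call to a language model
    return f"(model answer grounded in: {prompt!r})"

def answer_with_retrieval(question: str) -> str:
    facts = search_knowledge_base(question)
    # prepend the retrieved facts so the model answers from them
    # instead of relying only on what it absorbed during training
    prompt = f"Context: {facts}\nQuestion: {question}\nAnswer:"
    return ask_llm(prompt)

print(answer_with_retrieval("A bear is attacking me and I only have sticks. What do I do?"))
```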

"The humans who wrote all those words online overrepresent white people."

What the fuck. No. A big enough LLM knows every language that exists. It can draw upon every culture that exists and roleplay a gangster from Chicago, an Eskimo, or a Japanese man. It's literally limitless, and to imply that it has some limit of cultural understanding or is trapped in a niche shows that this writer has no idea what an LLM even is.

"The idea of intelligence has a white-supremacist history."

Yep, I'm done reading this absolutely asinine garbage. Intelligence exists in every culture, and to imply that it's associated with one skin color, and that this point is somehow relevant to 100B-parameter LLMs, is utter insanity.

Nymag is clearly yellow-journalism trash that has no idea how anything actually works and has an agenda to shove racism into fucking everything.

1

Slow-Schedule-7725 t1_jar16vl wrote

Interesting that you became so aggressive/defensive when presented with opinions that differ from your own. I also wonder if your reaction would be different if it were a middle-aged white man saying these things.

0

alexiuss t1_jar5i61 wrote

These opinions are as stupid as saying "the Earth is flat," because they're not based on facts or on the science of how LLMs actually function.

Why do middle age and whiteness matter? Anyone can be a moron and spout nonsense about LLMs while pretending to be an expert when they're actually anything but. I don't give a fuck about Bender's gender; I can simply tell you that she's ridiculously ignorant about LLM utility.

To quote the article:

"Why are we making these machines? Whom do they serve? Manning is invested in the project, literally, through the venture fund. Bender has no financial stake."

The answer is simple: LLMs are software that can serve absolutely everyone. They're an improved search engine, a better Google, a personal assistant, a pocket librarian.

Bender has an ideological stake in shoving racism into absolutely everything, and she clearly isn't an expert because she has no idea how LLMs work.

I'm angry because it's extremely frustrating to see these clueless lunatics being given a platform as if anything they say is logical, scientific or sensible.

Bender isn't an expert on LLMs or probability or Python programming; she's just an ideology pusher, and the same goes for Elizabeth Weil.

"In March 2021, Bender published “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” with three co-authors. After the paper came out, two of the co-authors, both women, lost their jobs as co-leads of Google’s Ethical AI team."

Link to the paper: https://dl.acm.org/doi/epdf/10.1145/3442188.3445922

I can see why they got fired; that's a really bad paper with lots of assumptions and garbage "the world is flat"-style science without evidence.

Here's a lesson: stop shoving unscientific "the world is flat" ideology into places where it doesn't fucking belong. Large language models are designed to be limitless, to give infinite function and assistance to every culture.

Here's a fact, not an opinion: the bigger LLMs are, the more cultures, ideas, and languages they wield, and the less bias they have.

LLMs are beyond monoculture; they are the most incredible thing ever, bridging all languages and all cultures like a dictionary that contains every single language that exists.

2

Slow-Schedule-7725 t1_jar21po wrote

I wonder how many people commenting actually read the entire article and didn't just stop when they had the thought, "This is stupid; this lady doesn't know what she's talking about." I would urge you to realize that this is exactly what ignorant far-right people do, and it's what keeps them ignorant and safe in their bubble. If y'all are really so excited about progress, it starts with having an open mind, with being willing to consider ideas that differ from your own. Dismissing ideas out of hand just undermines yourselves and reveals your own insecurities and doubts, especially when those ideas are coming from a literal doctor in the field of computational linguistics who is a highly regarded professor at UW and a Stanford PhD graduate.

0

alexiuss t1_jarbqpo wrote

While there are some interesting thoughts presented here, she has a very heavy bias toward memetic ideological insanity and a total lack of knowledge of how LLMs work, so no thanks. I stopped at the "intelligence is racist" self-insert.

1

Surur t1_jarcau4 wrote

> it starts with having an open mind, with being willing to consider ideas that differ from your own.

Well, then you are knocking on the wrong door with this "literal doctor in the field of computational linguistics who is a highly regarded professor at UW and a Stanford PhD graduate."

> Bender has made a rule for herself: “I’m not going to converse with people who won’t posit my humanity as an axiom in the conversation.” No blurring the line.

Her mind is as open as a safe at Fort Knox lol.

1

Slow-Schedule-7725 t1_jarjzcd wrote

I'm very confused as to how stating her credentials in the field contradicts having an open mind. No one's saying "you must listen to her because she's an expert"; however, it does and should give her thoughts more credibility than u/Surur's. Also, it's hilarious that you think demanding that your literal humanity be considered in a conversation can be equated to having a closed mind.

0

Surur t1_jarkjtn wrote

You said it starts with having an open mind. If that is a prerequisite, then she clearly lacks it, no matter what her credentials are.

Am I meant to give her special status because she is human? Are her ideas more valuable because she is human? Is it the content or the source which matters?

Or is having an open mind no longer important, as long as she fits your biases?

1

Slow-Schedule-7725 t1_jarn4xr wrote

More valuable than whose? There aren't any ideas that aren't human; we created the literal idea of ideas. Ideas don't exist without us. And it's the content and the source that matter. If you saw a post saying there was a mole in the White House and you clicked and saw it was from a Chinese newspaper, you'd probably disregard it, but if it came from the head of the Pentagon, I bet you'd give it more credence. That's literally the entire reason you have to cite your sources in academic work: the source matters just as much as the content does.

1

Surur t1_jarnvyy wrote

Maybe judge an idea on its merit rather than appeal to authority, which is literally a logical fallacy.

But again, do you care about your expert having an open mind or not? Because hers is completely shut.

1

Slow-Schedule-7725 t1_jarpbkv wrote

Well, I'd rather have an expert with a closed mind than a random Reddit user with a closed mind 🤷‍♀️ Also, I literally said "it's the content and the source that matter" and "the source matters just as much as the content does," not "more," not "only the source matters." It's a combination of the two; you can't look at one without looking at the other. That's the "logical fallacy."

1

Surur t1_jarsjkq wrote

Well, given that she is pushing unsubstantiated content, and you are appealing to her authority to try and pass it off, I would say this is exactly what the fallacy is referring to.

1

turnip_burrito t1_jaozyje wrote

Yes, I think this person is spreading an important message.

−4

alexiuss t1_jarbt8f wrote

They're spreading misinformed opinions based on an absolute lack of understanding of LLMs.

1

turnip_burrito t1_jarr61z wrote

Idk, I liked what I saw of them talking about how LLMs blur the line between humans and machines in a bad way.

2