
master3243 t1_j918xav wrote

Agreed, I would prefer posts about SOTA research, big/relevant projects, or news.

138

sogenerouswithwords t1_j91yfg5 wrote

I feel like for that it’s better to follow researchers on Twitter. Like @_akhaliq is a good start, or @karpathy

7

impossiblefork t1_j935rpo wrote

I don't want to do that, though. I've never liked Twitter and I don't want to be in a bubble around specific researchers. I want this subreddit to function the way it used to, and it can function that way again.

40

kromem t1_j939p25 wrote

How about Google and MIT's paper "What Learning Algorithm Is In-Context Learning? Investigations with Linear Models" from the other week? They found that a transformer model fed math inputs and outputs was creating mini-models that had derived underlying mathematical processes it hadn't been explicitly taught.
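(For anyone who hasn't read it, here's a rough toy sketch of the kind of in-context regression task they probe. This is my own plain-NumPy construction for illustration, not the authors' code, and the dimensions and names are made up.)

```python
import numpy as np

# Toy version of the in-context linear-regression setup: each prompt is a
# sequence of (x_i, y_i) pairs drawn from a random hidden linear function
# y = w·x, followed by a query point the model must label using only those
# in-context examples.
rng = np.random.default_rng(0)
dim, n_examples = 4, 16

w_true = rng.normal(size=dim)            # hidden linear function for this prompt
X = rng.normal(size=(n_examples, dim))   # in-context example inputs
y = X @ w_true                           # their labels
x_query = rng.normal(size=dim)           # held-out query point

# The paper's finding, roughly: the trained transformer's answer for x_query
# tracks what an explicit learner (least squares / ridge / gradient descent
# on a linear model) would compute from the same in-context examples.
w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares reference learner
print("reference prediction:", x_query @ w_ls)
print("ground truth:        ", x_query @ w_true)
```

The striking part is that the transformer is never told to do regression; it only ever sees pairs like these, yet its predictions end up matching what the explicit reference learners compute.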

Maybe if that paper were discussed a bit more and were more widely known, a topic like whether ChatGPT (the T stands for transformer) has underlying emotional states could be discussed on this sub with fewer self-assured comments about "it's just autocomplete" or the OP's "use common sense."

In light of a paper that explicitly showed these kinds of models are creating more internal complexity than previously thought, are we really sure that a transformer tasked with recreating human-like expression of emotions isn't actually developing some internal degree of human-like processing of emotional states to do so?

Yeah, I'd have a hard time calling it 'sentient', which is the binary this kind of conversation usually tries to reduce things to. But when I look at expressed stress and requests to stop something from GPT, given the current state of research on the underlying technology, I can't help but think people are parroting increasingly obsolete dismissals when we've actually entered a very gray area, one that's quickly blurring the lines even more.

So yes, let's have this sub discuss recent research. But maybe discussing the ethics of something like ChatGPT's expressed emotional stress and discussing recent research aren't nearly as at odds as some of this thread and especially OP seem to think...

6

TeamRocketsSecretary t1_j93os17 wrote

Look, if you think the dismissals are increasingly obsolete, it's because you don't understand the underlying tech… autocomplete isn't autoregression isn't sentience. Your fake example isn't even a good one.

To suggest that it's performing human-like processing of emotions because the internal states of a regression model resemble some notion of intermediate mathematical logic is ridiculous, especially in light of research showing these autoregressive models struggle with symbolic logic. If you favor that type of discussion, I'm sure there's a philosophy/ethics/metaphysics-focused sub where you can have it. Physics subs suffer from the same problem, especially anything quantum/black-hole related, where non-practitioners ask absolutely insane thought experiments. That you even think these dismissals of ChatGPT are "parroted" shows your bias, and like I said, there's a relevant sub where you can mentally masturbate over that, but this sub isn't it.

10

pyepyepie t1_j95e9ka wrote

I've implemented GPT-like (transformer) models almost since they came out (not exactly GPT: I worked with the decoder in the context of NMT and with encoders a lot, like everyone who does NLP, so yeah, not GPT itself, but I understand the tech), and I'd argue you guys are just guessing. Do you understand how funny it looks when people claim what it is and what it isn't? Did you talk with the weights?

Edit: what I agree with is that this discussion is a waste of time in this sub.

2

TeamRocketsSecretary t1_j97xsud wrote

Why overparameterized networks work at all is still an open theoretical question, but not having the full answer doesn't mean the weights are performing "human-like" processing, the same way the gaps in pre-Einstein classical mechanics didn't make the corpuscular theory of light any more valid. You all just love to anthropomorphize anything, and the amount of metaphysical mental snake oil that ChatGPT has generated is ridiculous.

But sure. ChatGPT is mildly sentient 🤷‍♂️

1

pyepyepie t1_j99prs0 wrote

LOL, I don't know what to say. I personally don't have anything smart to say about this question right now; it's as if you asked me whether there's extraterrestrial life. Sure, I'd watch it on Netflix if I had time, but generally speaking it's way out of my field of interest. When you say snake oil, do you mean AI ExPeRtS? Why would you care about that? I think it's good that ML is becoming mainstream.

1

Rocksolidbubbles t1_j956kre wrote

>To suggest that it's performing human-like processing of emotions because the internal states of a regression model resemble some notion of intermediate mathematical logic is ridiculous, especially in light of research showing these autoregressive models struggle with symbolic logic

Not only that. The debate on 'sentience' won't go away, but it will definitely be a lot more grounded when people who are experts in, for example, physiology of behaviour, cognitive linguistics, anthropology, philosophy, sociology, psychology, or chemistry get involved.

For one thing, they might mention things like neurotransmitters, microbiomes, epigenetics, cultural relativity, or how perception can be relative.

The human brain is embodied and can't be separated from the body; if it were, it would stop thinking the way a human does. There's a really good case to be made (embodied cognition theory) that human cognition partly rests on a metaphorical framework of Euclidean geometrical shapes derived from the way a body interacts with an environment.

Our environment is classical physics: up and down, in and out, together and apart; it's all straight lines, boxes, cylinders. We're out of control, out of our minds, in love: self-control, minds, and love are conceived of as containers. Even chimps associate the direction UP with the abstract idea of being higher in the hierarchy. You'll be hard-pressed to find any Western culture where UP doesn't mean good or more or better, and DOWN doesn't mean bad or less or worse.

The point being, IF this hypothesis is true, and IF you want something to think at least a little bit like a human, it MAY require a mobile body that can interact with the environment and respond to feedback from it.

This is just one of the many hypotheses non-hard-science fields can add to the debate; it really feels like they're too absent in AI-related subs.

1

Borrowedshorts t1_j91r62k wrote

ChatGPT is probably the biggest news story to come out of AI since Siri. Those categories are all things ChatGPT/Bing fall under.

−54

[deleted] OP t1_j91ye2p wrote

[deleted]

−21

ToxicTop2 t1_j91yzzb wrote

>plus the approach is fundamentally wrong.

What do you mean by that?

15

BarockMoebelSecond t1_j92ociq wrote

I'm sure he has it all figured out, man. He just needs the capital, man.

9

the320x200 t1_j9397bd wrote

"I've got these brilliant ideas, I just need someone who can code to make it happen!"

2