
stimulatedecho t1_jdhlv4w wrote

>> nobody with a basic understanding of how transformers work should give room to this

I find this take to be incredibly naive. We know that incredible (and very likely fundamentally unpredictable) complexity can arise from simple computational rules. We have no idea how the gap is bridged from a neuron to the human mind, but here we are.

>> There is no element of critique and no element of creativity. There is no theory of mind, there is just a reproduction of what people said, when prompted regarding how other people feel.

Neither you, nor anybody else has any idea what is going on, and all the statements of certainty leave me shaking my head.

The only thing we know for certain is that the behavioral complexity of these models is starting to increase almost exponentially. We have no idea what the associated internal states may or may not represent.

60

Snoo58061 t1_jdjdy56 wrote

I like to call this positive agnosticism. I don't know and I'm positive nobody else does either.

Tho I lean towards the theory-of-mind camp. General intelligence shouldn't have to read the whole internet to be able to hold a conversation. The book in Searle's Chinese Room is getting bigger.

8

E_Snap t1_jdjug2q wrote

That’s a magical requirement, dude. We as humans have to study for literal years on a nonstop feed of examples of other humans’ behavior in order to become competent individuals. Why are you saying that an AI shouldn’t have to go through that same kind of development? At least for them, it only has to happen once. With humans, every new instance of the creature starts out flat-out, pants-on-head clueless.

5

Snoo58061 t1_jdjvybp wrote

I'm saying it's not the same kind of development, and the results are different. A human works for a long time just to grasp letters and words at all, then extracts far more information from data sets that are orders of magnitude smaller, with weaker specific recall but much faster convergence in a given domain.

To be clear I think AGI is possible and that we've made a ton of progress, but I just don't think that scale is the only missing piece here.

1

E_Snap t1_jdjwmkp wrote

Honestly, I have a very hard time believing that. Machine learning has had an almost trailblazing relationship with the neuroscience community for years now, and it’s pretty comical. The number of moments where neuroscientists discover a structure or pattern developed for machine learning years and years ago and then finally admit “Oh yeah… I guess that is how we worked all along” is too damn high to be mere coincidence.

3

Snoo58061 t1_jdjxmti wrote

The brain almost certainly doesn't use backpropagation. Liquid nets are a bit more like neurons than the current state of the art. Most of this stuff is old theory refined with more compute and data.

These systems are hardly biologically plausible. Not that biological plausibility is a requirement for general intelligence.

3

Western-Image7125 t1_jdnvnu7 wrote

Well, your last line kinda makes the same point as the other person you are debating with? What if we are getting really close to actual intelligence, even though it is nothing like biological intelligence, which is the only kind we know of?

3

cyborgsnowflake t1_jdkruku wrote

We know the nuts and bolts of what is happening, since it's been built from the ground up by humans. GPT-x is essentially a fancy statistical machine. Just rules for shuffling data around to pick word x+1 on magnetic platters. No infrastructure for anything else, let alone a brain. Unless you think adding enough if statements creates a soul. I'm baffled why people think GPT is sentient just because it can calculate solutions based on the hyperparameters of the knowledge corpus as well as or better than people. Your Casio calculator or a linear regression can calculate solutions better than people. Does that mean your Casio calculator or the x/y grid in your high school notebook is sentient?
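If you want to see how unmagical "picking word x+1" is, here's a toy sketch of greedy next-token selection; the vocabulary and scores below are made up for illustration, not anything from GPT's internals:

```python
import torch

# Toy illustration of "rules for picking word x+1": a model assigns a score (logit)
# to every token in its vocabulary, and greedy decoding just takes the highest one.
# Vocabulary and scores are invented for the example.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = torch.tensor([0.1, 2.3, 0.7, 1.5, 0.2])  # scores for some imaginary context

probs = torch.softmax(logits, dim=-1)              # scores -> probability distribution
next_token = vocab[int(torch.argmax(probs))]
print(next_token)                                   # "cat": the most probable word wins
```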

0

agent_zoso t1_jdlgre2 wrote

The use of neural nets (ReLU + LayerNorms) layered between each attention step counts as a brain, no? I know the attention mechanism is what gets the most ... attention, but there are still traditional neural nets sandwiched in between, and in some cases the transformer is just a neck feeding into more traditional modules. ReLU nets are Turing complete, so I can always tune a neural net to have the same response pattern of electrical outputs as any neuron in your brain.
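To make that concrete, here's a rough PyTorch sketch of the kind of block I mean; the dimensions, pre-norm layout, and names are illustrative assumptions, not GPT's actual configuration:

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Illustrative sketch only: attention followed by a small ReLU feed-forward net,
    i.e. a 'traditional neural net sandwiched between attention steps'.
    Dimensions are arbitrary, not GPT's real configuration."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(          # the plain feed-forward part
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x):
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, need_weights=False)  # attention step
        x = x + a                                       # residual connection
        return x + self.mlp(self.ln2(x))                # feed-forward step in between
```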

The million dollar question according to David Chalmers is, would you agree that slowly replacing each neuron with a perfect copy one at a time will never cause you to go from completely fine to instantly blacked out? If you answered yes, then it can be shown (sections 3&4) that you must accept that neural nets can be conscious, since by contradiction if there was a gradual phasing out of conscious experience rather than sudden disappearance, that would necessarily require the artificial neurons to at some point begin behaving differently than the original neurons would (we would be aware of the dulling of our sensation).

Considering we lose brain cells all the time and don't get immediately knocked out, I think you can at least agree that most people would find these assumptions reasonable. It would be pretty weird to have such a drastic effect for such a small disturbance.

4

cyborgsnowflake t1_jdoz0ia wrote

In a very general sense neurons and NNs are the same in that they are both networks, but the brain, from what we know, is structured very differently from GPT, which is more or less a linear circuit for processing tensors. I'm not sure what the reasoning is for jumping to the conclusion that a living being is popping into existence when you run GPT just because the output 'looks human'. You could just conclude that 'knowledge tasks can, to a certain degree, be approximated statistically', as anyone who watched horny men get fooled by chatbots in the 90s should know.

If you believe the former, then logically, if you replaced the computer circuits with humans, even people writing equations on paper together should, if there were enough of them, theoretically also cause these 'calculation beings', with minds independent of the humans themselves, to pop into existence. Maybe you can argue for that under certain philosophies, but that's veering off into territory far from orthodox computer science.

2

agent_zoso t1_jdpbe5o wrote

It sounds like you're pretty hard set on there being no ghost in the shell and pretty judgmental of anyone who thinks otherwise. I'm just saying you're far too certain you have the answers, as my example demonstrates. I also never said I believe a living being is jumping into existence because of whatever successful Turing test. I'm actually agnostic on that and think it's a waste of time trying to answer something that will never be answered. It's always going to come down to bickering over definitions and goalpost-shifting ("It can't be sentient if it doesn't model glial cells/recurrent cortical stacks/neurotransmitter densities/electrochemistry/quantum computational effects inside microtubules/the gut microbiome/the embedding within the environment/the entire rest of the universe like us"). I'd much rather play it safe and treat it as though it is conscious.

Maybe I'm misunderstanding you, but it sounds like you're now also being far too dismissive of the representational power that tensors/linear operations, and especially eigendecompositions, can have (I could probably blow your mind with the examples I've seen), and of statistics as a loss function. After all, we as humans are literally no more than statistical mechanical partition functions of von Neumann density matrices; what would you even use for a loss function instead? MSE, cross-entropy (perplexity), KL divergence, and L1/L2 are all statistical, and they're used to build every neural net you've heard about. The only difference between us and, say, a Turing-complete (nonlinear ReLU + attentional) Kalman filter for text, which is what you're making GPT out to be, is how the parameters are adjusted. A Kalman filter uses Bayesian inference with either Laplace's method or maximum-likelihood rules, whereas we (and ChatGPT) are genetically rewarded for minimizing both cost (resp. perplexity) and nonlinear human feedback. Keep boiling things down and you'll find you're surrounded by philosophical zombies.
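For concreteness, here's a toy sketch of the kind of statistical objective I mean, cross-entropy and its perplexity; the shapes and numbers are invented, not anyone's actual training code:

```python
import torch
import torch.nn.functional as F

# Toy example: cross-entropy over next-token predictions, with perplexity = exp(loss).
# Shapes and numbers are made up for illustration.
logits = torch.randn(4, 10)              # 4 positions, vocabulary of 10 tokens
targets = torch.tensor([3, 1, 7, 2])     # the tokens that actually came next

loss = F.cross_entropy(logits, targets)  # average negative log-likelihood
perplexity = torch.exp(loss)             # the "perplexity" mentioned above
print(loss.item(), perplexity.item())
```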

Edit: Didn't see the second paragraph you added. I'm not sure what ML orthodoxy you're from, but Chalmers' result is pretty well accepted in CogSci. The setup that you're describing, the Chinese Room, is an appeal to common sense, but a lot of what motivates scientific development is trying to understand paradoxes and counter-intuitive results. Sure, it sounds absurd, but so do Schrödinger's cat and black holes, both of which failed to disprove the underlying phenomena. Chalmers' 1995 result came after the Chinese Room thought experiment (by about 15 years, in fact) and updated the consensus on the Chinese Room by stressing the importance of universality. Since your example has humans performing the computation, I would say it could be alive (depending on the complexity of the equations: are they reproducing the action potentials of a human brain?), and case in point, I think the internet even before ChatGPT is the most likely and best-known contender for a mass of human scribbles being conscious.

1

Maleficent_Refuse_11 t1_jdhmiab wrote

What lmao

−30

Darkest_shader t1_jdjcmih wrote

That's the dumbest way you could have responded.

7

Maleficent_Refuse_11 t1_jdji46u wrote

How do you quantify that? Is it the downdoots? Or is it the degree to which I'm willing to waste my time discussing shallow inputs? Or do you go by gut feeling?

On a serious note: have you heard of Brandolini's law? The asymmetry described there has been shifted by several orders of magnitude by generative "AI". Unless we are going to start using the same models to argue with people on the net (e.g. chatbot output), we will have to choose much more carefully which discussions we involve ourselves in, don't you think?

−5