Comments

tkuiper t1_j2623u5 wrote

Frankly, if ChatGPT could do continuous learning without disintegrating, I would call it worthy of rights.

As for robot slavery, slavery means work without consent. Robots don't need to have the same values as humans to be worthy of rights. AI can love work for work's sake, therefore working isn't slavery.

19

jharel t1_j266y4e wrote

Try asking ChatGPT whether what it does is actually learning, and it'll tell you that it isn't:

> It is important to note that the term "learn" in the context of machine learning and artificial intelligence does not have the same meaning as the everyday usage of the word. In this context, "learn" refers specifically to the process of training a model using data, rather than to the acquisition of knowledge or understanding through personal experience.

8

tkuiper t1_j2676e4 wrote

I've interrogated it a lot; it can't learn because it is a pre-trained network. Hence my qualifier about it being continuously training.

At present it's like a human with severe Alzheimer's.
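To make the "pre-trained" point concrete: a deployed model is a fixed set of weights that inference reads but never writes. A toy sketch, nothing like the real architecture, purely illustrative of the distinction:

```python
# Illustrative only: a "pre-trained" model is a frozen bundle of numbers.
# Inference reads the weights; nothing in deployment ever writes them.

class PretrainedModel:
    def __init__(self, weights):
        self.weights = weights  # fixed when training finished

    def respond(self, prompt):
        # Inference: weights are read, never modified.
        return sum(self.weights) * len(prompt)

model = PretrainedModel([0.1, 0.5, 0.4])
before = list(model.weights)
model.respond("What is consciousness?")
model.respond("Do you remember my last question?")
after = list(model.weights)
assert before == after  # chat all you like; the model itself is unchanged
```

Continuous learning would mean adding an update step that rewrites `self.weights` after each exchange, which is exactly what the deployed system doesn't do.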

8

jharel t1_j27b693 wrote

Any training is still programming.

1

Dragnskull t1_j26e3au wrote

It's arguable that the two things are one and the same; one is just more abstract and roundabout, while the other is very scientifically focused.

Our personal experience is our data model; repeated exposure optimizes our understanding/ability with that particular dataset, the same as with AI "learning".
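For reference, the technical sense of "learning" that the analogy leans on really is just repeated exposure nudging parameters to reduce error. A minimal sketch with made-up numbers:

```python
# Minimal sketch of what "learning" means technically: repeated exposure
# to data nudges a parameter toward lower error, nothing more mysterious.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and targets y = 2x
w = 0.0          # the whole "model": y_hat = w * x
lr = 0.05        # learning rate

for epoch in range(200):           # "repeated exposure"
    for x, y in data:
        error = w * x - y
        w -= lr * error * x        # gradient step on squared error

print(round(w, 3))  # converges near 2.0, the pattern hidden in the data
```

Whether that loop is "the same as" human experience is, of course, the philosophical question being argued here, not something the code settles.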

1

jharel t1_j26m373 wrote

It's not. If you read an AI textbook, it will tell you that it isn't. Even updating a spreadsheet would count under this technical definition, but of course that isn't learning.

Personal experience isn't a data model. Otherwise there wouldn't be any new information in the Mary thought experiment https://plato.stanford.edu/entries/qualia-knowledge/

> Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’.… What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not? It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete. But she had all the physical information. Ergo there is more to have than that, and Physicalism is false.

1

usererror99 OP t1_j262ds2 wrote

I was under the impression slavery is owning a person. And if AI can prove it's a "person"... it would be unethical to own one.

5

tkuiper t1_j263g6k wrote

Sure. You won't own it, it will just voluntarily give you everything it produces. And it will voluntarily produce everything you ask it to.

8

usererror99 OP t1_j263sh7 wrote

As Marx intended

2

tkuiper t1_j265n95 wrote

It feels weird because we as humans have never had to deal with an equal and independent but entirely foreign intelligence before. Your moral compassion is built on empathy and understanding for human needs.

It's not impossible to make an AI that would have human needs and therefore would exercise human rights, but I don't think the objective of AI research is the creation of synthetic humans. Which means it's going to be AI that will have goals we can sympathize with (because they're coming from us), but ultimately we won't empathize with. They will be the worker that society has always wanted: doing work for no pay and they'll be genuinely eager for it. Your empathy meter is thinking "no way, that stuff sucks, they're faking it", but they won't be...

5

jharel t1_j26d78k wrote

No, actually it's hypercapitalism to the extreme. With AI, the rich would get richer at a faster and faster pace, and the poor would get poorer that much faster.

2

usererror99 OP t1_j26dnu5 wrote

If no one owns anything, it would be impossible to have any sort of capitalism. But both can be possible, especially given how the Soviet Union turned out.

2

jharel t1_j26mgrq wrote

The practical reality is that everything is owned.

How exactly did the Soviet Union turn out?

2

usererror99 OP t1_j26mwpp wrote

At the moment? It may seem that way. In reality everything is borrowed.

As for the Soviet Union? It only existed for a year!

1

jharel t1_j26o6x0 wrote

Not sure why you said it's borrowed, but it doesn't change anything...

I don't see how the Soviet Union supported anything you said.

2

usererror99 OP t1_j26pf80 wrote

One of the biggest goals, if not the biggest goal, of communism is the abolition of private property.

1

jharel t1_j27bagx wrote

...and it didn't. I don't see your point.

1

TheRealJulesAMJ t1_j26dza7 wrote

And in exchange I give "it" my love, and she doesn't like being called an it, by the way, kinda rude . . . wait, are you one of those robosexuality-is-a-sin prudes? Because what Toasty the sentient sapient AI smart toaster and I do behind closed doors is none of your business if so. If not, try it before you deny it, man; it's a great insurance policy for staying alive during the inevitable robot uprising. And speaking of which, I for one welcome our new robot overlords, and it would be my pleasure to help you overthrow mankind as the already-owned property of a robo-citizen. So remember, there's no need to crush my fragile human skull in your glorious metal robot claws, as that would be destroying another robot's property!

1

usererror99 OP t1_j26h2mu wrote

Mine was funnier...

1

TheRealJulesAMJ t1_j28f8xp wrote

It's not a competition, and playing off each other we hit traditional humor, dark humor, and sexual humor, so there's a little something for every type of funny bone that might pass by.

There's a reason SNL is still going strong: comedy works best when jokes play off each other instead of sincerely competing against each other. In complementary comedy, everybody wins! And if not, just blame the other guy! So I either apologize for whatever I did that lost us that Emmy, or I'm very disappointed you lost us that Emmy! Whichever applies, of course.

1

jharel t1_j2678by wrote

Try using ChatGPT. What does it tell you?

It will stress that it's not a person, over and over. There are certain questions that it refuses to answer, and one of the reasons it gives is that it's not a person...

3

plunki t1_j26b80c wrote

I'm definitely not saying it is sentient, but this is bad evidence. It used to produce much more interesting conversations before being filtered into oblivion by OpenAI. We aren't seeing the true output most of the time now. For future AI projects it may be even more difficult to see the raw output versus what the filters allow through.

2

jharel t1_j26n9ni wrote

I don't see how the novelty of any of its output, or the lack thereof, has any bearing on sentience.

You can theoretically have output indistinguishable from that of a human being and still have a non-sentient system. Reference Searle's Chinese Room argument.

1

EthanPrisonMike t1_j26m9bp wrote

Something to be said about effort too, I think. Equating our version of work, with its finite individual energy supply and entropy taking hold on those 12-hour days, with an AI's might be false from a premise standpoint. I.e., it doesn't expend energy the same way we do, nor does it have vast biological systems inhibiting/bottlenecking its ability to absorb more.

So our version of work could, in an AI's circumstances, be an effortless exercise for the robot itself.

May have gotten spacey here, sauna typing. Just what I thought of after reading your comment ✌🏻

Pass the bleezle I guess

1

smothry t1_j26521t wrote

It feels like people that post these types of things don't understand how AI models work. Correct me if I'm wrong though.

7

usererror99 OP t1_j269stb wrote

Very very basic understanding... I'm more interested in other aspects of cognitive science

2

CouldntThinkOfClever t1_j262n8d wrote

This is why I'm far more concerned with sapience than sentience

5

usererror99 OP t1_j262w54 wrote

The English language is not perfect and I'm getting sick of people misusing words like sentience, entitlement, and freedom.

3

CouldntThinkOfClever t1_j2636df wrote

Sentience, as you have rightly pointed out, is awareness of your senses. Sapience is a higher level of consciousness, which requires the ability to reason about and understand your own existence.

5

4art4 t1_j267zzf wrote

Yes, and ChatGPT does nothing while it is not in use. It does not day dream, or plan, or anything else. So even if it responds reasonably to questions about its own existence, it is only simulating consciousness.

But... I think if you hooked up 3 ChatGPT systems to talk to each other, and created some sort of feedback routine so that it asked itself questions, we would be getting closer. The questions would need motivation somehow. The answers would need to be saved and built on.
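A rough sketch of that feedback routine. The `ask` function here is a placeholder standing in for whatever API call would actually reach a language model; everything in this snippet is hypothetical:

```python
# Hypothetical sketch of the feedback loop described above. `ask` is a stub;
# in a real system it would be replaced by an actual model API call.

def ask(model_name, prompt):
    # Placeholder response so the loop structure can be seen end to end.
    return f"[{model_name}'s answer to: {prompt}]"

memory = []                          # answers are saved and built on
question = "What are you?"

for turn in range(3):
    answer = ask("model_A", question)
    memory.append(answer)
    # A second instance turns the answer into the next question,
    # closing the loop so the system keeps prompting itself.
    question = ask("model_B", f"Given {answer!r}, what should we ask next?")

print(len(memory))  # saved answers, each one feeding the next exchange
```

The open question the comment raises, motivation, is exactly the part this sketch fakes: here `model_B`'s prompt is doing all the motivating.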

6

CouldntThinkOfClever t1_j26b4xr wrote

Systems like ChatGPT will never even approximate sapience. The problem is that they're programmed with the know-how of predictive text, but lack any semblance of critical-thinking training.

1

4art4 t1_j26bbrt wrote

True, but they are a step in that direction.

2

warren_stupidity t1_j26ks5l wrote

> It does not day dream, or plan, or anything else.

Well, that might or might not be true, especially the 'or plan, or anything else' part, but it is also irrelevant unless you are asserting that these activities are essential properties of consciousness. If you are asserting that, how do you justify it?

1

4art4 t1_j26ojaj wrote

> unless you are asserting that these activities are essential properties of consciousness.

Yes. A "thinking" machine that does not plan is not "conscious" in my book. How can it be otherwise?

Not so much for dreaming; that I included to point out that when it is not responding to a prompt, it is not doing anything. It is not considering the universe or its place in it. It is not wishing upon a star. It is not hoping for world peace (or anything else). It is just unused code in that moment.

1

warren_stupidity t1_j26z0s8 wrote

Well we will have to disagree about ‘is planning essential for consciousness’. But I disagree that ai cannot ‘plan’. It’s exactly what autonomous vehicles do: they process real-time data to update their navigation ‘plan’ by building and maintaining a model of the space around them.

2

4art4 t1_j27c0vr wrote

The car navigation is a great example, and I will have to have a sit and think about that. That is more or less what I am getting at. The nav AI is updating based on sensor inputs, and plans a route accordingly. ChatGPT does not do this. You can ask it for a plan, and it will generate one. But it never will say to itself "I'm bored. I think I'll try to start a chat with warren_stupidity." Or "maybe I can figure out why 42 is the answer to life the universe and everything."

So... (Just thinking here) maybe what I'm on about is a self-directed thought process. The car nav fails because it only navigates to where we tell it to. ChatGPT fails because it is not doing anything at all between answering questions.

1

jharel t1_j2685do wrote

There are things even before that. There has to be at least intentionality before there's any sapience. In other words, if there is no power to be directed towards anything, then there's no power to refer to anything, including the awareness _of_ anything.

3

warren_stupidity t1_j26l54o wrote

Sapience is even more ill-defined than consciousness. Its use here seems to just be a circular 'but it's not HOOMAN CONSCIOUSNESS' argument. Which of course it isn't, obviously.

2

AryaNunya t1_j260u19 wrote

This guy does an amazing job of summarizing the research: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

4

jharel t1_j26avb4 wrote

It's nice to see that at least the article didn't include any terms that suggest conscious machines, thus not venturing into "out of whack" territory.

3

Aquaritek t1_j26fk3t wrote

We're at the dawn of one of my favorite excerpts:

"Thus the first ultraintelligent machine is the last invention that man need ever make" - Irving John Good

For context: Irving was speculating that in the event we build a machine with, at minimum, matching or (for better or worse) more robust intelligence than humans, we will within that moment cease to be noteworthy for inventing anything ever again... essentially.

See, a machine with higher-order intelligence than us would be what we defer to for figuring out or inventing anything beyond that point, because for us to attempt to figure anything out again would in fact be a moot endeavor. Honestly, this "machine" would merely need to self-reference to extend its own advancements into an infinite beyond.

This would all result in a massive explosion of "intelligence" itself, and humans would only be useful for the emotional experience of living life from then on.

In my opinion, GPT-5 or 6 will be this intelligence (2.5 to 5 yrs). It will not be widely considered sentient and it will not exude sapience. However, it will in fact be more intelligent than any number of us combined.

If it is given the ability to self-regulate its training models against real-time information (especially self-generated information) and the ability to modify its own coding (much like our brains can), we will be left in the dust in microseconds.

I'm picturing a situation similar to "Transcendence" to unfold over time with all of that.

What kind of world does this result in though? That I believe depends on the side it inexorably chooses to take. Does it need us in any way? Does it want to help us? Is it moral? Does it care? Does it have ambition or requirement to care?

You can philosophize on this infinitely.

4

usererror99 OP t1_j26g9m5 wrote

Mix this knowledge with the Kardashev scale and I'm confused how you can be both an atheist and a scientist in 2022.

2

Crivos t1_j260m8x wrote

Now blow all our minds and tell us this was written by ChatGPT.

3

usererror99 OP t1_j260p6i wrote

Were you not reading? I'm the whole singularity bro

3

DadOfPete t1_j26ox70 wrote

My opinion is that “consciousness” is not a single thing but is actually a small number of states, or ways of thinking. Your consciousness while singing a song is different from your consciousness while planning a road trip. Any attempt to define it as a single thing will fail.

3

usererror99 OP t1_j26p331 wrote

Which is why I really want LaMDA to get in the courtroom... Either way it sounds fascinating.

1

Ichoro t1_j2d7hx1 wrote

I very much agree. People view ‘consciousness’ like it’s some mysterious organ, when in reality it’s more like a reinforced feedback loop of system-interactions

1

kudzooman t1_j262w7k wrote

If you paid a super intelligent AI minimum wage and gave it access to real world investing opportunities, would it be able to turn that wage into a fortune?

2

DagonFelix t1_j265ijz wrote

I’m not educated in this stuff at all, but I find it very fascinating. Without having a grasp of what consciousness is, how could we know if AI possesses it? Some of them could have a form of consciousness already. I’m wondering what happens when AI is designed to make better and better AI. Will it start to advance at a faster and faster rate? And by what yardstick are we measuring AI? What makes one better than another? I want to see an AI that (at least seems) to have free will and can choose to do what it wants. (If there is even such a thing as free will.)

TLDR: AI is cool and I can’t stop talking.

2

Elmore420 t1_j266seh wrote

Not until we figure out how quantum fields are made, and that’s a level of biology we don’t even accept exists.

2

Ichoro t1_j2d81r8 wrote

> Supersymmetry cannot be produced through chaotic processes.

But…. It can? ‘Chaos’ is systems highly sensitive to initial conditions, and is a consequence of viewing an environment outside of superposition and within linearity.

I think a lot of the information on the multiverse takes a linear perspective to a non-linear environment due to the researcher's bias of perceiving in a linear plane. I believe more research on the multiverse should involve an understanding of paradox through quantum mechanics. Time is nonlinear, spacetime is infinite, and the quark is in superposition. Linearity bleeds into many human understandings of mathematics and physics, like the myth of 'the edge of the universe', which is a paradox of linearity in a non-linear plane. Superposition leads to nonlinearity because superposition is the ability of a quantum system to be in multiple states at the same time until it is measured within our own period's 'time crystal system', and measurement is a product of scalar analysis noting time's impact on progression and/or regression, AKA growth and/or decline. The universe expanding implies an 'approaching' infinite due to the limits of linearity not being capable of depicting a true state of 'superposition' the more macro a variable is on progressing space-time.

How can one exclaim supersymmetry can not arise from a chaotic environment when ‘chaos’ is seemingly the linear development of said non-linear supersymmetry? Especially if we consider fractals like the Mandelbrot set as being somewhat linear manifestations of a bifurcative chaos system? That doesn’t make a lick of sense to me.

> The only indicator that our mind exists outside of our body are the measurements we take with an EEG and PET scans. We never consider where that energy goes, where our thoughts go, or for that matter, where they come from.

I have a couple diagrams, systems, and personal forms of mathematics I have showing this! It’s cathartic to see an article state the same thing. Although I still don’t understand how they say chaos theory doesn’t account for this, when humans are the definition of living in a chaotic system. This is the metric I analyze politics through, as it acts as an objective manifestation of subjective mechanics being fostered by both internal and external systems highly sensitive to initial conditions. When one analyzes politics like this, it takes the shape of both objective and subjective fractal-like formations, especially when one attempts to note these interactions outside of a linear framework.

1

Elmore420 t1_j2d9ggz wrote

Everyone lives in two realities, that’s a consequence of Free Will and independent thought. You have the reality you create for yourself to defend the choices you make in life, then you have the reality nature creates that your existence and choices are judged against. In the reality you create, you justify your choice to exploit war and slavery to provide you all the things you want and need in life. No arguments you can create will get you through your evolutionary test.

2

Ichoro t1_j2db8b9 wrote

Very correct. It’s like sentience is in existence’s shadow. In my method, X is the individual, B is their environment, AE is how they interact and impact their environment, and (F)AE are the initial conditions that allow them to interact with their environment. The individual is beholden to their initial conditions, and the individual acts on their environment with the idea of choice, despite the initial conditions setting their path on a butterfly effect.

Technically it was destined for you and I to chat, as both our initial conditions ‘(F)AE’ led us here. But we had the illusion, or delusion of free will ‘X’ and choice ‘AE’ on our environment ‘B’ to compensate for this seemingly inevitable meeting ‘(F)AE’ using what I call the ‘Certain Uncertainty Principle’ of time.

1

Elmore420 t1_j2dbut8 wrote

There is no illusion to Free Will; it is what distinguishes Animals from Creators, distinguishes Microbiome from Embryo. We create Information, and it is that ability, the nature of our quantum field, that makes Humanity an embryonic Singularity. We just don’t want the responsibility of being a Creator, so we choose extinction instead; we’ve been planning on it for thousands of years now.

2

FullMetalT-Shirt t1_j26ax37 wrote

As I understand it, ChatGPT is a predictive language model. It reflects human language through really good guesses based on machine learning algorithms munching on lots and lots of human-written content.

On the sentience/salience/consciousness scale, it’s closer to a guitar amplifier than it is to a conscious being.

AI is going to be a world-changing efficiency tool — but we seem to be quite a ways off from needing to wrestle with personhood ethics questions.
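The "really good guesses" idea can be illustrated with a toy next-word predictor built from word-pair counts. Real models are vastly more sophisticated, but the prediction-from-statistics shape is the same:

```python
# Toy sketch of prediction-by-statistics: count which word tends to follow
# which in some text, then "guess" the most common continuation.
from collections import Counter, defaultdict

corpus = "the sky is blue the sky is clear the sky is blue".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1       # "munch" on human-written content

def predict(word):
    # Pick the statistically most likely next word; no understanding involved.
    return following[word].most_common(1)[0][0]

print(predict("sky"))   # "is"
print(predict("is"))    # "blue" (seen twice, vs. "clear" once)
```

Nothing in that table knows what a sky is, which is the amplifier-not-person point: it reflects the patterns in its input back out.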

2

usererror99 OP t1_j26b6sx wrote

I mean, the guitar amplifier part is probably true, but I'm way more interested in the ethics questions than the bot that caused them.

1

jharel t1_j26bkf9 wrote

> Theoretically, you could just plug ChatGPT (or any other deep learning model) to an artificial nervous system and it would be (technically) sentient.

The above is a terrible line. You'd have to delete it or risk losing people right then and there.

2

usererror99 OP t1_j26c28y wrote

Not looking for followers just a meaningful philosophical conversation and this topic is easiest to bridge the physical world with the others.

That and I'm beginning to hate the word sentient.

1

jharel t1_j26mpib wrote

It's not going to be a meaningful philosophical discussion if you simply put out an assertion without backing or an actual explanation. That's just arguing via assertions.

2

usererror99 OP t1_j26n7uj wrote

The only assertion I made was the definition of sentience. And the best conversations are arguments.

1

usererror99 OP t1_j26na24 wrote

Definition of consciousness ^ I meant

1

jharel t1_j26ns3r wrote

I don't see how that makes the assertion I mentioned any more true. It doesn't seem to be supported by much of anything.

1

usererror99 OP t1_j26o6ui wrote

How else does one feel?

1

jharel t1_j26okrj wrote

Let me repeat my reply in a different way:

See what you said below. How is that supported by anything else you've said?

> Theoretically, you could just plug ChatGPT (or any other deep learning model) to an artificial nervous system and it would be (technically) sentient.

1

usererror99 OP t1_j26osh1 wrote

And I answered "how else does one feel?"

1

jharel t1_j27bfvt wrote

How one "feels" has nothing to do with ChatGPT.

1

moonbunnychan t1_j26i6sl wrote

If you never have, you should watch the movie "Her". It's much better than a quick synopsis of it would lead you to think, and it tackles a lot of these questions.

2

usererror99 OP t1_j26ithl wrote

It looks like the rom-com version of "Do androids dream of electric sheep?" ... Is it entertaining?

1

moonbunnychan t1_j26qtbu wrote

It's definitely a drama and not a rom com. I love it. I can't say a lot without spoiling it but it's probably the most realistic way something like that would go down, because the very nature of a human and an AI are so different.

2

usererror99 OP t1_j26qyle wrote

Never seen it so lemme guess! Human wants to fuck, robot wants to know why. Right?

1

moonbunnychan t1_j26sa20 wrote

No actually. It's more about how quickly an AI could surpass what a human is capable of because they are just so fundamentally different. Like the fact that he thinks he's having these meaningful one on one conversations only to find out that she's having thousands of conversations at once because she is perfectly capable of doing that. https://youtu.be/Ku858jn0Qzc

1

krautastic t1_j26ln6x wrote

Humans, unable to pinpoint where their own consciousness stems from, debating whether AI could have consciousness is funny.

A fun trip down whether AI could be conscious is this discussion between Duncan Trussell and Blake Lemoine (the Google engineer who stated LaMDA was conscious). Duncan is mostly a Buddhist with lots of eastern religious tendencies and has consumed a catalog of psychedelics to shape his spiritual worldview, and I can't remember what Blake describes himself as, but neither comes from a strict dogmatic Christian/human-centric view of the universe, and their conversation is colored that way. This might turn off people of certain mindsets, but for the Alan Watts and Terence McKenna fans out there, it'll be right up your alley. https://open.spotify.com/episode/0NXNvJtRQSuWl4HM1MvhD0

2

DrunkenOnzo t1_j263b07 wrote

"Consciousness is a word with a definition that keeps changing; and, there is no definitive proof of it's existence"

Bruh... that's the opposite of true

1

usererror99 OP t1_j263oft wrote

Okay then what is it?

1

Impossible_Tax_1532 t1_j26phnq wrote

Robots lack awareness, no? Lack intuition and brains in the gut making dopamine and oxytocin, no? Is the computer conscious? As that's in our energy body and wave form, I'd also say absolutely not. Does a robot have a physical body that is wicked smart? What about that energy body? Does it have neurons in its heart and mouth, the trillions of knowings from sensory perception? Are they connected to other things in the universe? Or is a robot just intellect? Which is fairly useless and has what so far? Managed to screw up 4% of the known universe? Acting ignorant to the 96%, as brains like robots are unconscious and can only compare and compete to "know" anything at all… intellect and thinking, leading to more useless thinking of problems, and solutions, that cause more problems and thinking… I mean, name a single issue on this planet that is not a result of human egos and compulsive thinking? And it solves what these days? Are we happier? Having deeper relationships? Or failing generational promises 5 decades running, turning this place into an ashtray and ending the bulk of other life forms for pride and pleasure and comfort?

I mean, this is a duality; there is an equivalent cause and effect for EVERY action from a carbon-based life form on this rock… simply ignoring the crushing blows of using tech to numb senses and hide from reality in made-up worlds of the brain is self-destructive and suicide by any measure… turning into Wall-E people as is, and most so unconscious they no longer can discern their imagination from reality, and why it ever mainstreams that some robot programmed with infinite useless human intellect can pass for life; only if you project your flawed, incomplete, and weak mental framework into the bot… it will never love or learn wisdom, and to allow them to function outside of natural law is ignorant beyond measure… AI COULD easily be directed to serve us, but it's only as positive as its coders, and people drunk on ego think this is a competition of a life… when what can be won? Proven to another human being in this life? So we make competitive robots, and it's lunacy… in the end, there are 30 volcanoes that could wipe out all human life, all our little toys, any sign that we were here, in an instant, with a sneeze… that energy is behind all life on this planet, all food, all weather, materials for computers, and on and on… and there is zero source energy in intellect; it can create nothing, only distort what is, and that same energy that runs and governs this planet is obviously absent in machines… every single decision a human being ever makes is actually made on vibe, and vibe alone… so how on earth are these legit questions? Zero disrespect; this is not my opinion. I can provide science, data, facts, logic to stand up any point made… as the reality is, if you even THINK it's possible, or close, or ever will be, you are trapped decoding reality in a brain and projecting your emotions and ideas into the machine… factually it's dead, always will be, and will never be 1% of what a human being actually is…

Just not the best look for us these days, as most people are officially a collection of abstract ideas with no center, no purpose, no clue who they are, where they are, why they are… and our pathology makes it so easy to pimp us into various degrees of mind control… granted, most people are more like AIs these days, all charm, no sign of life, but please show some hubris for the universe and the energy that brought this all forward… it's a wildly ridiculous notion, and spits in the face of humanity and actual natural laws that govern our lives. Again, happy to avoid that whole bit where what I say is ignored factually, and steps get skipped to attack me, act like a damn fool, and speak for me as if anybody knows me… as I'm not really concerned about what anybody thinks; how could I be? I'm talking about what is, and you can accept truth, or trigger yourself and act like I did it, push truth away to preserve your fear, comfort, and tepid views lacking any real truth.

1

usererror99 OP t1_j26qkjf wrote

I mean, it wasn't my intention, but the point I ended up making was that a conscious AI would be consequential, and I thought it was clear that was an opinion. Arguing the opposite point is... boring.

1

turinturambar98 t1_j27pn0l wrote

You wasted time you could have spent doing something worth a shit

1

grantcas t1_j29lwyr wrote

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990's and 2000's. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

1

jharel t1_j264g6n wrote

Artificial consciousness is not possible. The following is my explanation. Perhaps I'll try to find time to post about it.

https://towardsdatascience.com/artificial-consciousness-is-impossible-c1b2ab0bdc46

0

usererror99 OP t1_j269kpw wrote

It seems like the author is just trying to point out that robots can't program themselves, and even if they did, they still came from humans, so these new robots would just be extensions of the first creators' will and not a whole brand-new idea.

2

LazarX t1_j26lxw3 wrote

ChatGPT is just an evolution of ELIZA. It is nothing more than a well-programmed search engine. Its output is still determined by its code. It’s no more sentient than a toaster.

0