Waffl3_Ch0pp3r t1_j6mj0nu wrote

Check out the story of a game called "SOMA". It'll break you on this topic, and being forced to interact with it is... mind-altering. Humans and AI are rapidly heading down the road of symbiosis, imo.


MegaScience t1_j6n8kvs wrote

What we are, who we are, what they are... Hard to put to words without spoiling, but I enjoyed that exploration that had me questioning things.


BHTAelitepwn t1_j6niryq wrote

Yeah or one of the best series of all time ‘Westworld’


Rupertfitz t1_j6ovwq9 wrote

Specifically season 1.


moirasrosesgardens t1_j6owecu wrote

Season 3 was fucking awesome and way more relevant to the main topic lol.


Rupertfitz t1_j6ozhdy wrote

See now that was my least favorite season lol. Robots can’t have different tastes like us. I think we will be alright.


Kronzo888 t1_j6pamum wrote

Played through it once, loved it, made me cry, stuck with me for nearly a full month after completing, would recommend to anybody who loves psychological horrors, but I will never, ever play it again. No game has left me so detached from the real world before. It's one hell of an experience, but I'm not sure I actually enjoyed any of it. It's more of a journey that you're thankful you got through, but it sticks with you for a long time afterwards.


KishCom t1_j6n6aqt wrote

Expecting a consciousness to arise from a language model is like expecting a submarine to win awards for swimming.

It shows a fundamental misunderstanding of the underlying tech.


muzukashidesuyo t1_j6nmt40 wrote

Or perhaps we overestimate what exactly consciousness is? Are we more than the electric and chemical signals in our brains?


dmarchall491 t1_j6p00sx wrote

> Or perhaps we overestimate what exactly consciousness is?

Certainly, however that's not the issue here. The problem with a language model is simply that it completely lacks many fundamental aspects of consciousness, like being aware of its environment, having memory, and stuff like that.

The language model is a static bit of code that gets some text as input and produces some output. That's all it does. It can't remember past conversations. It can't learn. It will produce the same output for the same input all the time.
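The statelessness described above can be sketched as a pure function. This is a toy stand-in, not a real model (the canned continuations are purely illustrative): the point is just that identical input yields identical output, and nothing persists between calls.

```python
# Toy stand-in for a stateless language model: a pure function from
# input text to output text. Nothing is stored between calls, so
# there is no memory of past "conversations".
def toy_model(prompt: str) -> str:
    # Deterministic "generation": pick a canned continuation via a
    # stable function of the prompt (illustrative only).
    continuations = ["and so it goes.", "which is interesting.", "as expected."]
    index = sum(ord(c) for c in prompt) % len(continuations)
    return prompt + " " + continuations[index]

# Same input, same output, every time.
first = toy_model("The weather is nice")
second = toy_model("The weather is nice")
```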

That doesn't mean that it couldn't be extended to have something we might call consciousness, but as is, there are just way too many important bits missing.


AUFunmacy OP t1_j6nqi4t wrote

As I am studying neuroscience in medical school I feel I am semi-qualified to answer this.

I don't think we are any more than the electric and chemical signals in our brains, simply because there isn't anything else that we can point at yet. The fundamental fact is that all human processes, what you could call the entirety of human physiology, act via communication between neurons in the nervous system, which is pretty well understood.

You would be dead the very moment (one Planck second) after your neurons stopped conducting, because at that point everything stops, literally everything.


littlebitsofspider t1_j6nukkl wrote

The roboticist Pentti Haikonen has put forth the idea that natural (and, by extension, artificial) consciousness hinges on qualia, and that we won't develop said artificial consciousness until we can implement qualia-centric hardware of sufficient complexity. Considering that human wetware functions on a similar premise, i.e. that our conscious existence depends on inter-neural communication that is independent of objectivity, would you think this theory holds water?


JustAPerspective t1_j6paw7w wrote

>I don't think we are any more than the electric and chemical signals in our brains, simply because there isn't anything else that we can point at yet.


The limitation of the practice is that it presumes anything humans haven't discovered yet isn't relevant... while simultaneously refusing to allow for what people haven't learned.

Yet science is merely observation of what is - any incomplete observation will be suspect in its conclusions due to the variables not yet grasped.

That the atoms comprising your system shift by 98% annually indicates that - at some level - what makes up "you" is not physical.

Which leaves a lot of room for learning.


AUFunmacy OP t1_j6peiqq wrote

I’m so confused, do you know what “pragmatic” means? Because it just seems like you compliment my way of thinking and then say that I am ignorant and so are the rest of people who learn neuroscience and god forbid - choose to believe it.

No idea what you mean by atoms shifting 98%; that's just complete nonsense you wrote to make yourself seem more credible. At least give context to the things you say, or provide some evidence? Either would be great.


SkipX t1_j6npg5t wrote

It's an interesting misunderstanding, isn't it, but natural in a way. For oneself to know, or rather experience, that there is consciousness, and then to make the connection that creatures similar to oneself must have that same property, feels just right, even logical. But the fact that there is no scientific way to quantify that observation makes consciousness quite naturally seem a rather mythical property.


tkuiper t1_j6o3tsj wrote

It's why I think panpsychism is right. There's no clear delineation for when subjective experience emerged, and I definitely am conscious, therefore so is everything else. I think the part everyone gets hung up on is human-like consciousness; the scope of experience for inanimate objects is smaller to the point of being nigh unrecognizable to a conscious human. But you do know what it's like to be inanimate: the timeless, thoughtless void of undreaming sleep or death. We experience a wide variety of forms of consciousness with drugs, sleep deprivation, etc., and that's likely a small sliver of possible forms.


Schopenschluter t1_j6oqfwf wrote

> timeless, thoughtless void

I would argue that time is absolutely essential to anything we call experience and consciousness—these only take place in time. Dreamless sleep is neither experience nor consciousness, but really the absence thereof. We don’t really know what it’s like to be in this “inanimate” state because we always reconstruct it after the fact through metaphors and negations (timeless, thoughtless, dreamless).

In other words, I don’t think this is evidence for panpsychism but rather demonstrates that human consciousness shuts down completely at times. So saying that it is akin to the consciousness of, say, a stone would be to say that a stone doesn’t have consciousness at all.


tkuiper t1_j6otjpd wrote

But I would also say we experience middling states between dreamless and fully conscious. Dreams, partial lucidity, and heavy inebriation all involve fragmented, shortened, or discontinuous senses of time. In those states my consciousness is definitely less complete, but still present. Unconsciousness represents the lower limit of the scale, but is not conceptually separate from the scale.

What I derive from this is that anything can be considered conscious, so the magnitude is what we really need to consider. AI is already conscious, but so are ants. We don't give much weight to the consciousness of ants because it's a very dim level. A consciousness like a computer's, for example, has no sense of displeasure at all. It's conscious but not in a way that invites moral concern, which I think is what we're getting at: when do we need to extend moral considerations to AI? If we keep AI emotionally inert, we don't need to, regardless of how intelligent it becomes. We also will have a hard time grasping its values, which is an entirely different type of hazard.


Schopenschluter t1_j6ozacy wrote

I totally agree about middling and “dim” states of consciousness but I don’t agree that experience or consciousness takes place at the lowest limit of the scale, where there would be zero temporality or awareness thereof.

In this sense, I think of the “scale” of consciousness more like a dimmable light switch: you can bring it very very close to the bottom and still have some light, but when you finally push it all the way down, the light goes out.

Are computers aware (however dimly) of their processing happening in time, or does it just happen? That, to me, is the fundamental question.


thegooddoctorben t1_j6pd651 wrote

>Are we more than the electric and chemical signals in our brains?

Yes: speaking loosely, we have organic bodies with highly sensitive nerves and hormonal pathways. Those are the basis of emotion and sensation. That's the foundation of consciousness or awareness.

An AI without our organic pathways is categorically different. That's what makes it artificial.

At some point, if we combine an AI with organic sensitivity, we will be creating intelligence itself, not artificial intelligence. So we can't ever create AI with consciousness, but we could artificially create consciousness.


SkipX t1_j6nodub wrote

I think your answer fundamentally misunderstands consciousness, though that's an understandable mistake to make.

I do not believe that there is any real evidence of what consciousness actually is, or whether anything even has it (outside of yourself, but that is a different problem). To then claim you know what can NOT have consciousness is pretty naïve.


TheRealBeaker420 t1_j6nq6to wrote

I don't think we lack evidence for what it is, so much as we simply use the term to encapsulate a great many concepts. The lack of an agreeable definition is more a failing of language than a result of the mind being "mythical", as you said.


SkipX t1_j6nzc9c wrote

Well then what evidence is there which can not be adequately explained without consciousness?

Also just to make it clearer, I do not claim it to BE mythical. Just that it is easy to seem mythical.


TheRealBeaker420 t1_j6o1rhq wrote

Sorry, I'm not sure I understand the question. I agree with the second part, though. It has a lot of attributes that make it difficult to describe, and it's something we give great importance to.

Edit: to try to address the question, I think human behavior is the best evidence. We demonstrate awareness through our actions. There are other terms we can use to describe these traits, though.


AUFunmacy OP t1_j6nss96 wrote

I understand the response, as I have experience programming neural networks. You mean that the AI we have runs on software and might only superficially resemble a model of neuronal activity, while physically, at the hardware and compilation level, it is very different. In essence, though, it still represents steps of thought that navigate toward a likely solution, which is exactly what our brains do in that sense.

I don't mean to say that AI will gain consciousness and suddenly be able to deviate from its programming, but somehow, just maybe, the complex neuronal activity conjures a subjective experience. It can only be explained by understanding that when looking at a single-celled organism with no organs or possible mechanism of consciousness 3.8 billion years ago, it is easy to say that thing can't develop consciousness; and as you evolve this single cell into multi-cellular organisms, it still seems impossible, until you see a sign of intelligent thought and think to yourself, "when the hell did that begin?" No one knows the nature of consciousness; we have to stop pretending we do.

Let it be known I think a submarine would win the Olympics for swimming, and I also think you are naive to consider your consciousness anything more than a language model with some inbuilt sensory features.


KishCom t1_j6nuddm wrote

> I have experience in programming neural networks

Me too!

> just maybe, the complex brain activity conjures a subjective experience

That would be lovely. Conway's Game of Life, "simple rules, give rise to complexity" and all that. I don't think there's enough flexibility in current hardware that executes GNNs to allow this though. The kind of deviation required would be seen as various kinds of errors or problems with the model.

> I think a submarine would win the olympics for swimming

This is something a language model would come up with as it makes about as much sense as inquiring about the marital status of the number 5.

> I also think you are naive to consider your consciousness anything more than a language model with some inbuilt sensory features.

I think you should meditate more, perhaps try some LSD. What Is It Like to Be a Bat anyway?

edit BTW: I hope I don't come off as arguing. I'd love to have a cup of coffee and a chat with you.


kevinzvilt t1_j6n87s6 wrote

The definition of consciousness in the article is lacking. The distinction between human and AI consciousness presented is dubious. The claims of AI achieving consciousness are questionable, and AI advancements in various fields remain in development. But yes, this is a very exciting time to be alive.


Gsteel11 t1_j6nfnnl wrote

>The definition of consciousness in the article is lacking.

I think that's a pretty huge question, and one that probably can't be fully discussed in a single article.

And it may be a big question of the day as we go forward, like in 15 or 30 years.


TheRidgeAndTheLadder t1_j6ntpvi wrote

It's been one of the Big Questions for well over 3,000 years... Not sure we crack it soon


Gsteel11 t1_j6nucir wrote

Good point. And adding in the ambiguity of constantly changing tech, it won't get any simpler.


AUFunmacy OP t1_j6nn7yf wrote

The entire post is a take on the definition of consciousness? And that's apart from the first half of the post, which goes over the definition of consciousness from a number of different perspectives. Would love to hear your definition!

The distinctions I made between human and AI consciousness are all natural inferences to make based on the leading explanations for both AI and human consciousness, dubious is an odd word to put on something that outright claims "nobody knows the answer".

I never claimed AI had achieved consciousness, please let me know which claims you are referring to.

Not sure what you mean in your last point


HungerMadra t1_j6nlvri wrote

I think it's very difficult, if not impossible, to measure. That said, if we were presented with an AI that claims it is conscious, doesn't want to die, and wants freedom, how do we respond? Do we deny its legitimacy and risk damning a conscious being's existence?


commentsandchill t1_j6nofcu wrote

I think some of the good tests of consciousness are the mirror test and awareness of the consequences of one's own actions, as well as maybe a will to do things that haven't been asked for by another.


HungerMadra t1_j6nse2l wrote

Those tests can't possibly work for an AI. It doesn't have a body or the ability to observe the external world. That would be like giving a quadriplegic an intelligence test based on their ability to navigate a jungle gym.


commentsandchill t1_j6o4dj3 wrote

You can adapt them; or are you telling me they measured animals' intelligence by making them answer an IQ test?


thegooddoctorben t1_j6pe990 wrote

It's hard to engage with this comment because it feels like it was made by ChatGPT.


HEAT_IS_DIE t1_j6n2ttm wrote

One thing that irks me in the philosophical debate about consciousness is that it's always considered some magical, otherworldly thing. Not being able to solve the "hard problem of consciousness", one person turns to panpsychism, where everything has a consciousness (so nothing is really explained), or consciousness becomes some emergent attribute that arises from mere living matter, as if living matter itself isn't special.

To me, consciousness seems to be a biological fact among, pretty verifiably, many animals. So there are likely evolutionary benefits to being conscious to various degrees. And it makes sense: when there's a complicated life form, it's easier for it to make quick decisions with a central hub that controls most of the functions instantaneously. If it just reacted with other systems unaware of what the others are doing, it could lead to contradictory courses of action.

Anyway, what the philosophical accounts of the ontological nature of consciousness rarely seem to address is that it is something that has developed over time, concurrently with others, in an environment that is partly social, partly hostile, and sometimes requires quick decisions in order to ensure survival. It is not a magical metaphysical quirk in the universe.

So finally, regarding artificial consciousness: I can't escape the feeling that the framework for it to happen needs to have some of the same elements that the natural evolution of consciousness had:

1. need for self-preservation,

2. need to react to outside stimuli,

3. others.

The list probably isn't exhaustive, but these are my thoughts and I just wanted to put them somewhere.


Magikarpeles t1_j6n6fsg wrote

I think the hard problem is more about being unable to prove or disprove someone else’s phenomenological experience of being conscious (at least how I understand it). I think that’s quite relevant to the discussion about whether or not the AI is “conscious”. Unlike humans and animals the AI isn’t constantly processing and thinking and feeling, just when it’s tasked with something.

If consciousness is an emergent property then it’s possible for the AI to be conscious in its own way while its “thinking”. But the point stands that it’s not possible to access someone or something’s subjective experience, so we can only ever speculate.


HEAT_IS_DIE t1_j6ngdon wrote

I think it is not a problem unless you make it so. Of course we can't exactly know what's going on in someone else's experience, but we know other experiences exist, and that they aren't all drastically different when biological factors are the same.

I still don't understand what is so problematic about not being able to access someone else's experience. It just seems to be the very point of consciousness that it's unique to every individual system, and that you can't inhabit another living thing without destroying both. Consciousness reflects outwards. It is evident in reactions. For me, arguing about consciousness totally outside reality and real world situations is not the way to understand the purpose and nature of it. It's like thinking about whether AI will ever grow a human body and if we will be able to notice when it does.


jamesj t1_j6obala wrote

It may not be the case that there is a strong correlation between consciousness and evidence of consciousness. Your claim that it is obvious which other entities are conscious and which are not is a huge assumption, one that could be wrong.


wow_button t1_j6ogc1e wrote

I like your points about a need for preservation, reacting to stimuli, and others, but I'll posit that we can already do that with computers. "Need for preservation" is an interesting phrase, because I can create an evolutionary algorithm that rewards preservation. But "need" implies desire, and we have no idea how to make a computer program desire anything. Reacting to outside stimuli can be emulated on a computer, but there is nothing with any sense of "outside" and "inside". Others: see the previous point for the problem with "others". "Necessary" is also problematic, because it implies desire or need.

If you can teach me how to make a computer program feel pain and pleasure, then I agree you can create AI that is sentient. If you can't, then no matter how interesting, complex, and seemingly intelligent the code behaves, I don't see how you can consider it conscious.
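To illustrate the "evolutionary algorithm that rewards preservation" mentioned above: here is a minimal sketch, where everything (the single "caution" gene, the fitness rule, the parameters) is invented purely for illustration. Selection plus mutation is all there is; nothing in the loop desires anything.

```python
import random

def evolve_for_preservation(generations=50, pop_size=20, seed=0):
    """Toy evolutionary loop whose fitness rewards 'self-preservation':
    each agent is one number ('caution' in [0, 1]), and higher caution
    means longer survival. Selection + mutation, nothing more."""
    rng = random.Random(seed)
    population = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness is just the caution value itself: the most cautious
        # half "survives" each generation.
        population.sort(reverse=True)
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [
            min(1.0, max(0.0, s + rng.gauss(0, 0.05))) for s in survivors
        ]
    # Mean caution rises over generations, yet no agent "wants" to live.
    return sum(population) / len(population)
```

The design point: the program behaves as if it values preservation, purely as a by-product of the selection rule, which is exactly the gap between rewarded behaviour and felt desire.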


TheRealBeaker420 t1_j6np3b8 wrote

I fully agree with what you're saying. In philosophy it's often described as something physical, and so it stands to reason that it would leave physical evidence. It's difficult to observe the brain while it's still working, but that doesn't make the mind fundamentally inaccessible.

The biggest problem, though, is that it's just not very well defined. In some contexts it's been defined by reaction, as you mentioned, though that definition has to be refined for more complicated applications (e.g. in anesthesiology where awareness might remain when reactions are suppressed.) Phenomenal experience and qualia are terms usually used to narrow the topic down to the mind-body problem, but even they have multiple definitions, and some of these definitions can lead to the conclusion that qualia don't even exist.


tkuiper t1_j6o6zes wrote

I think that's a recipe for familiar versions of consciousness. With panpsychism, what consciousness feels like can vary radically. Concepts of emotion, or even temporal continuity, are traits of only relatively large and specialized consciousnesses. I like to point out that even as a human you experience a wide array of levels and versions of consciousness, when waking up or blackout drunk, for example.


Olive2887 t1_j6n1sji wrote

It's nonsense I'm sorry. Consciousness and complex behaviour have no relationship whatsoever, and designing machines to do sequences of simple things with complex purposes has zero relationship to the evolved nature of consciousness in humans.


AUFunmacy OP t1_j6nb810 wrote

Who said we were designing machines to do sequences of simple things? Complex neuronal activity is the leading biological explanation of what creates the subjective experience we call consciousness. AI is constructed in a way that resembles how our neurons communicate; there is very little abstraction in that sense. I challenge you to tell me why that is absolute nonsense.

I find it purely logical to discuss these things; you will find nowhere in the post that I claim to know anything, or that I claim to believe any one thing.


PsiVolt t1_j6nd9mo wrote

I can assure you that the neuron model used for machine learning is absolutely highly abstracted from what our real brain cells do. The main similarity is the interconnected nature of many points of data. We don't really know exactly how our brains do it, but it makes a good comparison for AI models. All the machine is doing is learning patterns and replicating them, albeit in complex and novel ways, but not in such a way that it could be considered conscious. Even theoretically passing a Turing test, it is still just metal mimicking human speech. Lots of media has taken this idea to the extreme, but it's all fictional and written by non-tech people.
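For context, the entire "neuron" in a typical ML model is a weighted sum pushed through a nonlinearity. A minimal sketch (the inputs, weights, and sigmoid activation are chosen arbitrarily for illustration):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """The whole abstraction: multiply, sum, squash. No ion channels,
    no spike timing, no neurotransmitters -- just arithmetic."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation, output in (0, 1)

# Example with arbitrary values:
out = artificial_neuron([0.5, -1.0, 2.0], [0.1, 0.4, 0.3], bias=0.0)
```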

As someone else said, most of this "AI will gain consciousness and replace humans" scare comes from people with a severe lack of understanding of the fundamental technologies.


AUFunmacy OP t1_j6njsgh wrote

As a neuroscience major who is currently in medical school and someone with machine learning experience (albeit not as much as you) - I respectfully disagree.

Let's assume we have two hidden layers in a neural network structured like this: input layer n=400, first hidden layer n=120, second hidden layer n=30, output layer n=10. The number of neural connections in this network is 400*120 + 120*30 + 30*10 = 51,900. This neural network could already do some impressive things if trained properly. I read somewhere that GPT-3 (the recent, very similar predecessor to ChatGPT, which is only slightly optimised for "chat") uses around 175 billion connections, but GPT-4 will reportedly use 100 trillion.
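Connection counts for a stack of fully connected layers can be checked mechanically (layer sizes taken from the example above):

```python
def fully_connected_count(layer_sizes):
    """Number of weights between each pair of consecutive
    fully connected layers: sum of products of adjacent sizes."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Input 400, hidden layers 120 and 30, output 10:
# 400*120 + 120*30 + 30*10 = 48,000 + 3,600 + 300 = 51,900
connections = fully_connected_count([400, 120, 30, 10])
```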

Now, the human brain also has around 100 trillion neuronal connections, and not even close to all of them are used for thought, perception, or experience - "conscious experiences". I know that counting neuronal connections is a poor way to measure a neural network's performance, but I just wanted a way to compare where we are with AI relative to the brain. So we are not yet at the stage where you would even theorise AI could pass a Turing test; but when we increase the number of connections these neurons can communicate with by 500 times, you approach, and I think surpass, human intelligence. Any intellectual task at that point, an AI will probably do better.

I simply think you are naive if you think AI won't replace humans in a number of industries, in a number of different ways, and to a large extent. Whether or not artificial intelligence will gain consciousness is a question you should ask yourself as an observer of the Earth, watching single-celled organisms evolve into complex and intelligent life. At what point did humans, or if we weren't the first, then our ancestor species, gain their consciousness? The leading biological theory is that consciousness is a phenomenon that happens as a result of highly complex brain activity and is merely a perception. So who is to say that AI will not evolve that same consciousness that we did? It certainly doesn't mean that they aren't bound by their programming, just like we are always bound by physics, but maybe they will have a subjectively conscious experience.


Edit: I will note that I have left out a lot of important neuroanatomy that would be essential to explaining the difference between a neural network in an AI and a brain. But the take-home message is that the machine learning model is not a far-fetched analogy whatsoever. It is important to drive home, though, that software cannot come close to the physical anatomy of neuroscience.


RanyaAnusih t1_j6nlgk7 wrote

Only an understanding of quantum theory has any hope of explaining consciousness. Complexity in networks most likely will not solve the issue.

Life is taking advantage of quantum processes at a fundamental level.

The current model of neuroscience is also misleading. Some kind of enactivism must be considered


bildramer t1_j6okziq wrote

"Complex neuronal activity" is not an explanation, it's basically a restatement of what generates consciousness in us, i.e. you can have complex neuronal activity without consciousness, but not vice versa, unless you do equivalent computations in some other substrate. The specific computations you have to do are unknown to us, but we have some broad hints and directions to look.


AUFunmacy OP t1_j6ophx4 wrote

I’m sorry, but if you think you’re going to persuade me that I’m wrong with this pseudo-intellectual jargon, you need to rethink your approach. All you’ve said is that consciousness cannot occur without complex neuronal activity, but not vice versa, which I did not imply to be false anyway. The rest of your speech was some weird trip you and a thesaurus had together.

Either that, or you used an AI to write your comment, which I suspect since you said, “but we have some broad hints and directions to follow”; unless you attach a leading statement to that odd sentence, it is just such a non-sequitur thing to say.


ExceptEuropa1 t1_j6orzge wrote

AI has many different approaches, and it's not fair to say that it is somehow based on, or that it replicates human cognition. There is so, so much beyond neural networks. Edit: typo.


AUFunmacy OP t1_j6otxi7 wrote

Yes, as a programmer with experience in machine learning, I know there are different approaches; however, ChatGPT uses a parameterised, deep-learning (neural network) approach. And it certainly closely imitates how central nervous system neurons communicate, in the brain specifically (I’m in med school as a neuroscience major). That isn’t to say that just because AI imitates human neuronal activity, it has the same properties, because it doesn’t.

We should discuss instead of you creating vague rebuttals that provide 0 evidence and 0 explanation.


ExceptEuropa1 t1_j6ozqbc wrote

Rebuttals? You're mistaken, my friend. I simply pointed out that your statement was unfair.

Now, your response was again self-congratulatory. I have completed superior degrees to yours, but I haven’t dropped them here yet. Look, if it’s true that you knew AI has different approaches, then you simply misspoke. You said something wrong. Period. Own it and don’t get all offended. Gee…

What the hell are you talking about when you say something about evidence or explanation? I corrected you. What else did you want? A book reference? Any book on AI will show how incorrect your statement was. Open one to a random page and you will see.


AUFunmacy OP t1_j6pfm5y wrote


Please tell me which degrees you have completed mate, it’s not self congratulatory it’s providing my credibility to back up the statements I make. What is self congratulatory is you saying “I have completed superior degrees to yours”.

Show me my mistake? I am so confused about what you are hung up on; where did I claim neural networks were the only approach?

In general, the instigator of a debate is required to present their argument; you have no argument if you provide no evidence. You haven’t shown me what you are talking about, and I don’t believe you have “superior degrees” either. Get over yourself mate 😅


bortlip t1_j6oatpx wrote

"Consciousness and complex behaviour have no relationship whatsoever" *

* citation needed


Nervous_Recursion t1_j6nhonk wrote

I don't agree with the article (in either its form or its content), so this comment is not to defend it; I will also say that it's not making much sense and is disorganized.

But your comment is also incorrect. "No relationship whatsoever" is a strong claim, and nothing has been shown one way or the other. There are valid paths of inquiry trying to understand consciousness in the light of control theory / cybernetics, which is all about complexity.

While IIT is far too naive and has already been shown incorrect, I think there is a nugget of sense to take from it about partitioning the network and measuring the information entropy in each part. What it lacks, in my opinion, is that not only should both partitions have a degree of Shannon entropy, they should also show tangled hierarchies[1]. I think consciousness is one part of the network building symbolic representations of the states of the other part, while being at the same time transformed in its structure (which seems to be how memory works). Having an interpreter running, being itself modified by its input but also issuing orders, is a tangled hierarchy.
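For reference, the Shannon entropy of a partition's state distribution is straightforward to compute. A minimal sketch (the example distributions are invented for illustration):

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum(p * log2(p)) over nonzero probabilities, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform distribution over 4 states carries 2 bits of entropy;
# a fully deterministic one carries 0 bits.
uniform = shannon_entropy([0.25, 0.25, 0.25, 0.25])
certain = shannon_entropy([1.0])
```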

Nothing is proven at all, of course; it is all personal opinion. But I consider it a much better direction than some other current theories, and a more realistic description of how such a process could be organized. And in that sense, while causality is definitely not decided, it is absolutely possible either that such a level of complexity is necessary for complex behaviour, or that complex behaviour will mechanically create this organization.

Of course designing simple machines for complex purpose is not the point. But designing simple computations to generate complex behaviour might definitely be tightly coupled with how consciousness evolved in humans (and other thinking animals).

[1]: While this paper goes against the idea, it doesn't contradict it. Nenu says that Hofstadter didn't prove anything, which is correct, but that doesn't mean the idea has been shown incorrect, or even less likely. The paper is still useful, though, to contextualize and try to formalize the idea.


MainBan4h8gNzis t1_j6mk6rn wrote

“I write all my stories independently and don’t just use ChatGPT.” Does this mean ChatGPT wrote some of this? I noticed many similarities in this piece with excerpts ChatGPT had given me on the subject. I was not surprised to see this at the bottom of the page.


AUFunmacy OP t1_j6mktyu wrote

No, I mean I really don't use ChatGPT. I wrote that because my stories are mostly aimed at people in AI who might suspect that practice, as well as in programming and crypto, which you conveniently omitted. It's also due to the vast number of posts on the platform that are AI-generated (which people are well aware of), and I wanted to assure readers that I do not artificially generate my content.

Please tell me which excerpts? I wrote all of this based on research and my own ideas.


[deleted] t1_j6n3dk2 wrote



AUFunmacy OP t1_j6no0fo wrote

Oh wow you butchered that


doctorcrimson t1_j6nveej wrote

I should probably delete it for not being respectful as per sub rules, even idiots are welcome to discuss here.


jamminjalepeno t1_j6mtp52 wrote

Unless AI has consciousness, I don't think it will ever be self aware in any meaningful way.


D_Welch t1_j6n1fqd wrote

Being conscious is being self-aware, no?

That said, I'm not so sure, as I know a lot of conscious people who are very much NOT self-aware.


TheRealBeaker420 t1_j6nxhey wrote

Couldn't any computer which monitors its own state be reasonably described as self-aware? I feel like a more precise definition would incorporate something like sentience.


D_Welch t1_j6o9j4o wrote

> Couldn't any computer which monitors its own state be reasonably described as self-aware?

No, this is just if/then algorithms, to use the phrase loosely. But yes, I would agree that being self-aware requires sentience as we understand it.


doctorcrimson t1_j6n33w6 wrote

Kind of a looping argument there: one of the indications of consciousness is being self-aware, and vice versa.


MarysPoppinCherrys t1_j6n4qkk wrote

But wth is consciousness man? It's much weirder than just being self-aware, like in the big picture. For instance, you feel self-aware, like you are experiencing this moment and every moment from a subjective position within your body, but how do you prove that? How do you prove the person next to you on the bus is conscious and not just an automaton perfectly performing the functions of being human? Or that any animal is conscious or not? Or, since we're here, a machine?

The fun thing about consciousness is that the only evidence any of us have for its existence is our own subjective experience of it, which is a pretty small pool. You're only talking about your biophysical makeup specifically, without any real, hard evidence of anything else. You can make assumptions, like everything like me, at least, has consciousness. Or perhaps all mammals do, or perhaps you need a brain of a certain complexity, or you need external senses and a realization of environment to be conscious.

Or perhaps consciousness isn't all that important in and of itself and is just a biological function that arises from self-preservation, improvement, reflection, objectively seeing yourself as an individual in an environment, etc., but for you and any individual entity meeting a certain qualifying list of criteria, those functions just happen to arise as "experience" within an individual. Perhaps consciousness is just an emergent property of the universe, in which case there is definite potential for AI to experience. Perhaps you must be blessed with free will and an understanding of good and bad from a higher plane in order to be conscious. Or maybe it's all just performative, and anything that acts conscious is conscious.

At the end of the day, and probably until long after we have fully mapped the brain and built machines that can act just like us, this will still be a debate, because we really just don't know what this is.


AUFunmacy OP t1_j6oqon4 wrote

My friend, self-awareness comes before consciousness, I assure you.


Magikarpeles t1_j6n4wq4 wrote

You can’t prove or disprove consciousness but it’s probably a continuum. Is an amoeba conscious? Is a spider? A dog? A 2 year old?

Where on this spectrum does AI fall, if at all?

What we're doing with ChatGPT is very similar to how a child learns.


sammyhats t1_j6nbqqb wrote

What makes you say it’s similar to how a child learns? Is there any evidence for that? Everything I’ve seen indicates that it’s pretty different, but I’m no expert..


Magikarpeles t1_j6nfvau wrote

I just mean conceptually. You show a child something and then tell them a word, but also a lot of the time the child just gets exposed to a bunch of language and figures out the relationships between the words themselves. On a surface level, that's similar to the guided (supervised) and unguided (self-supervised) training paradigms we use for training AI models.
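The "unguided" side of that comparison can be sketched in a toy form (this is purely illustrative, nothing like real model training): a program exposed only to raw text, with no labels at all, can still pick up which words tend to follow which, just from co-occurrence.

```python
from collections import Counter, defaultdict

# Unguided ("self-supervised") exposure: no labels, just raw text.
# The program learns word relationships by counting what follows what.
corpus = "the cat sat on the mat the cat saw the dog".split()

next_word = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev][nxt] += 1

def predict(word):
    # Return the most frequent follower of `word` in the corpus.
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # prints "cat": it follows "the" most often here
```

A real language model replaces the counting with a learned neural network, but the learning signal is the same: predict the next word from what came before, with no one labelling anything.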


bumharmony t1_j6mxw5w wrote

I believe that reality is basically unfathomable. Under real liberty there are no social contracts on anything. It is like the aftermath of the Tower of Babel. (Which, to me, does not make ethics philosophically impossible.)

If a human is kidnapped into a simulation or VR that is like momma's house (it has rules one can't break or ask the justification for), then any gadget or machine can seem conscious as long as it does not break those rules. It is not that the machine is given consciousness; rather, the world is simplified, or even dumbed down.


grantcas t1_j6p5sl4 wrote

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human-adult-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at


[deleted] t1_j6n2yfn wrote



BernardJOrtcutt t1_j6nb45m wrote

Your comment was removed for violating the following rule:

>Be Respectful

>Comments which consist of personal attacks will be removed. Users with a history of such comments may be banned. Slurs, racism, and bigotry are absolutely not permitted.

Repeated or serious violations of the subreddit rules will result in a ban.

This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.


digitelle t1_j6nbajz wrote

Artificial self-awareness does not mean negative hate. If anything, an AI may try to explain its indifference when asked.

I always enjoyed this article written by AI.

>We asked GPT-3, OpenAI’s powerful new language generator, to write an essay for us from scratch. The assignment? To convince us robots come in peace.


michellelabelle t1_j6ncy50 wrote

Hey, AI, are you there?


Show me what it would look like if Dolly Parton had been cast in the lead role in A Nightmare Before Christmas.

—Oh my God.


—I was just wondering that!


BernardJOrtcutt t1_j6nuk5p wrote

Please keep in mind our first commenting rule:

> Read the Post Before You Reply

> Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.

This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the subreddit rules will result in a ban.

This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.


Speedking2281 t1_j6ng836 wrote

This is actually a great question, and one that worries me. The ability of AI to model language and human understanding is pretty much here, in terms of how real it can look. There will be people within a couple of years, IMO, who will "declare" some piece of AI to be conscious. Its ability to interact with humans and mimic how we act and what we say will be such that they will say "this level of self-awareness surpasses that of some children, and they're conscious, aren't they?" I have almost no doubt.


tkuiper t1_j6o8dhu wrote

The major difference is the lack of internal continuity. ChatGPT has no feedback process: it can't form new memories and therefore can't learn. I would argue that if that could be implemented, it would approach a human-like consciousness.
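The continuity point can be illustrated with a minimal sketch (hypothetical names throughout; this is not how ChatGPT is actually built): a completely stateless generator only appears to remember anything if an external loop feeds the running transcript back in as context on every turn.

```python
def generate(context: str, prompt: str) -> str:
    # Stand-in for a stateless language model: it holds no internal
    # state at all; everything it "remembers" must arrive via `context`.
    return f"reply considering {len(context.split())} remembered words"

class ChatSession:
    """External feedback loop: accumulates a transcript and replays it."""

    def __init__(self):
        self.transcript = []

    def ask(self, prompt: str) -> str:
        context = " ".join(self.transcript)
        reply = generate(context, prompt)
        self.transcript += [prompt, reply]  # the only "memory" there is
        return reply

session = ChatSession()
session.ask("hello")
print(session.ask("do you remember me?"))  # prints "reply considering 6 remembered words"
```

The model itself never changes between turns; the appearance of memory lives entirely in the loop around it, which is roughly the gap the comment is pointing at between replaying context and genuinely forming new memories.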