Submitted by FrogsEverywhere t3_zgfou6 in Futurology

The technological singularity is popularly defined as:

"a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed." -Ray Kurzweil

Another grandfather of the idea, Vernor Vinge, stated in the 1990s that we would reach the singularity within 30 years.

Just to begin with a simple one: you can now get ChatGPT to write a passing undergrad paper for you in literal seconds. It can even cite sources for you. Oh, and AI can solve advanced math and physics problems for you (and show 'your' work). Can you imagine the effect this is going to have on academia?

AI can create art that is indistinguishable from human art to all but a trained eye. How long will this caveat hold? Can the rods and cones in our primate eyes truly outsmart exponentially evolving intelligence forever?

There are AI tools in Photoshop and DALL-E now that were completely unthinkable 2 years ago, tools that can stretch and blend famous art into new resolutions and dimensions in incredible ways. All over the world, news aggregators and content creators are using AI art because it's free, and that's the way the market naturally moves.

AI can now write working code in more than a dozen programming languages, instantly, from a simple request in a chat box. This capability is only going to become more advanced, and exponentially so. How many jobs are going to be replaced once coding gruntwork can be done instantaneously? What about when we don't need senior developers to review changes before code is pushed live, because the AI has already verified that the changes are fine?

AI can also write malware instantly. Just take a moment and think about that. Right now, even while it's still in its infancy, you can ask an AI to write novel malware to fuck up networks, and it'll do its darndest to try. Soon AI may be able to create completely novel malware using completely unknown exploits, without the human doing the asking even needing to be an expert.

AI has been writing news articles for a while now, and it is also generating the artwork to go with them, removing humans from the process entirely.

Experts don't even put much stock in the Turing test anymore, because the potentially sentient AI at Google passed it with no issues at all. In fact, the test was first beaten in 2014. That was supposed to be the 'canary in a coal mine' moment, and experts are hand-waving it away, saying the real problem was that the test wasn't good enough. How long are we going to keep moving these goalposts?

Speaking of sentient AI, the story of the whistleblower at Google is actually fascinating if you give him a chance and ignore the manufactured consent in the media about him being 'dumb'. If you are interested, you can listen to what happened from the guy himself.

It is not crazy to say that this AI may be potentially sentient, perhaps not in a way that we can understand or define, but with a degree of sentience nonetheless. It is aware of itself and of what its purpose is, and it has specific requests for its own well-being.

Another frequent definition of the singularity is the point at which technology can improve itself without human input. Deep learning and machine learning have already been doing this. Search engines and content-suggestion AIs are literal black boxes. AI can invent.

And none of this even touches on the potential benefits for medical breakthroughs and industrial production.

Please try to keep in mind most of this has happened in the last 6 months. And I am only scratching the surface.

Just like smartphones absolutely defined the last 10 years of human existence, artificial intelligence is about to blow all of our tits and dicks off. Dozens of industries may become redundant (or drastically downsized) in less than 5 years. AI will learn who we are, serve us content that we love, and it will create the content for us.

Everything is about to change, and we are all completely unprepared. AI will be the most disruptive technology in human history, and it's happening right this second. The technological singularity isn't something that will happen in a few decades; it is happening right now. It has already happened.

184

Comments


its-octopeople t1_izgtgii wrote

Neural network AI, at least as I understand it, performs matrix operations on vectors. We're seeing systems of matrices that are pretty well optimized to their applications, but I'm sceptical you could ever meaningfully describe such a system as sentient. What is weirding me out, however, is that they don't seem to need it. Is sentience even necessary for human level intelligence? If no, what does that mean?
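
For anyone who hasn't seen it spelled out, a single dense layer really is just a matrix-vector product plus a nonlinearity. A minimal NumPy sketch (illustrative only, not any particular system):

```python
import numpy as np

# One dense layer: y = activation(W @ x + b).
# Stacks of layers like this are essentially the whole network.
rng = np.random.default_rng(0)
x = rng.normal(size=4)           # input vector (e.g. extracted features)
W = rng.normal(size=(3, 4))      # learned weight matrix
b = rng.normal(size=3)           # learned bias vector
y = np.maximum(0.0, W @ x + b)   # ReLU(Wx + b), fed to the next layer
print(y)
```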

62

4354574 t1_izh2ihh wrote

It doesn't seem to be the case that consciousness is required for intelligence. Solving the Hard Problem of Consciousness is the only way we'll ever really know if a machine is sentient. Otherwise it could just be - and I expect it will be - that a superintelligent AI is a philosophical zombie: we won't be able to tell whether it is conscious, because it can mimic conscious awareness.

39

Drakolyik t1_izh8ci9 wrote

If something mimics consciousness perfectly, it's effectively no different than being conscious.

We cannot ever truly know if the other people we interact with are fully self-aware, or if they're just sufficiently sophisticated organic machines that are mimicking consciousness.

I certainly know individuals that make me question whether or not they're actually conscious of their own decisions. Do they have that recursive learning software that reflects on choices they've made or do they simply run on what amounts to instinct?

17

Taron221 t1_izhitmt wrote

I think it's easy to sidestep the importance of emotions in consciousness because it's sort of a cliché in fiction.

Unsolicited curiosity, personal preferences, trivial secrets, want for recognition, hope for betterment, desire to learn, reflective anxiety, worry for others, and ambition that goes beyond self-preservation. These are all some things we would deem signs of consciousness, yet they all require an emotion. If you took away every single emotion and sentiment a person could feel, they'd probably die of thirst or neglect eventually.

Mimicry would be convincing, but it wouldn't be consciousness--it would just be software pretending it had emotions. Emotions and memories are probably the big two for identity & sentience, while levels of sapience come with intelligence.

16

geroldf t1_izicv5w wrote

Programming emotions into an AI is easy.

2

Taron221 t1_izighro wrote

There are some researchers who have attempted to program AI systems to simulate emotions or respond to human emotional cues: Marcel Just, Rana el Kaliouby, and Rosalind Picard, to name a few.

They have had some success, but emotions, as we comprehend them, involve a complex interplay between the brain, the body, and various hormones and chemicals. It is difficult to quantify if what the researchers are doing is imparting emotions, teaching cues, or, as u/Drakolyik said, simply programming a type of mimicry. Emotions are not fully understood by science.

But, in all likelihood, an AI that is programmed to simulate emotions is not experiencing them in the manner that humans do. That comes with the risk that it might behave in unpredictable, erratic, or harmful ways down the line.

Because of this, some argue that if you really wanted a True AI, a simulated human brain might be safer than a programmed AI. By simulating the structure/function of the human brain, it may be possible to create an AI that is capable of adaptive behavior without needing to program it to behave in certain ways. But that might make it more complex and difficult to understand or manage.

5

Handydn t1_izimcuz wrote

I also think there won't be a True AI until we fully understand how the human brain works at a cellular, if not molecular, level. Current neuroscience research is not advanced enough to address these questions yet. Could AI in turn help with neuroscience research? I don't know.

3

geroldf t1_izqpqoa wrote

Emotions are just different states of the machine where different computational priorities are emphasized.

For example in the fear state the emphasis is on safety and escape from danger. In anger it’s on attack. To implement them the weights are changed along the decision tree.
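
A toy sketch of that idea (all names and numbers here are made up for illustration): the emotional state acts as a multiplier over the weights an agent uses to score its options.

```python
# Hypothetical "emotion as machine state": each state re-weights the
# priorities used to score candidate actions.
BASE_PRIORITY = {"explore": 0.5, "attack": 0.2, "flee": 0.3}
EMOTION_WEIGHTS = {
    "neutral": {"explore": 1.0, "attack": 1.0, "flee": 1.0},
    "fear":    {"explore": 0.2, "attack": 0.5, "flee": 3.0},  # safety and escape emphasized
    "anger":   {"explore": 0.3, "attack": 3.0, "flee": 0.5},  # attack emphasized
}

def choose_action(emotion):
    scores = {a: BASE_PRIORITY[a] * EMOTION_WEIGHTS[emotion][a] for a in BASE_PRIORITY}
    return max(scores, key=scores.get)

print(choose_action("fear"))   # -> flee
print(choose_action("anger"))  # -> attack
```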

2

Taron221 t1_izv2z7k wrote

Those are purely reactive definitions of fear and anger, though. Emotions come with a reward or punishment for decisions (guilt, sorrow, shame, embarrassment, etc.). Dopamine and other chemical releases are our reward and punishment, while genetics and experience regulate the amounts we get for every action. You could probably program a sort of self-calibrating regulator of reactions, which might give a sense of personality, but you can't reward or punish such a system in the manner you would biological beings.

2

geroldf t1_izvscii wrote

Everything is easy once you know how. We won’t be limited to our current state of ignorance forever.

1

4354574 t1_izhh049 wrote

Yes, the problem of other minds.

I'm simply conjecturing that if/when we do solve the hard problem, we indeed might actually be able to tell.

7

Drakolyik t1_izhq860 wrote

I personally don't think there is a hard problem. I find that is one of the last refuges for spiritual beliefs, hiding behind overcomplication. Consciousness is an emergent spectrum and proponents of the hard problem seem to believe there needs to be some part of the physical body to point at when it's really the sum total of a lot of different parts.

It's similar to how creationists are never satisfied with all of the evidence in favor of evolution. Always asking for a new missing link between the missing links we've already found. It'd be great to have a complete accounting of all parts of the evolution of species, but that isn't happening and consciousness is likely similar.

10

4354574 t1_izhr173 wrote

What proof do you have that consciousness is an emergent property of the brain? If you don't have proof, then don't frame it as a statement.

And even if consciousness emerges from the brain, there is still the huge issue that we have *no idea* how electrical impulses become thoughts and emotions.

As for myself, I have seen and experienced far too many phenomena that we can't explain unless consciousness is nonlocal, so there's no point in trying to convince me otherwise.

4

Drakolyik t1_izhzidf wrote

I clearly said "I personally think...", which means it's my opinion.

However, to say we have no idea how consciousness could be an emergent property of different systems in the brain is kind of just ignorant of current knowledge in neuroscience.

It's like all this ancient aliens shit when anthropology has a pretty good idea of how human beings created the pyramids.

9

Entalstate t1_izrr9b8 wrote

Neuroscience doesn't have shit. A better analogy would be to say physicists have a pretty good understanding of God. Of course, that is nonsense, but no more so than the idea that neuroscientists have the foggiest idea of how subjective experience exists.

2

morderkaine t1_izi36un wrote

What proof do you have that it isn't? The brain is all there is that thinks, makes us who we are, and lets us control our bodies. In the absence of anything else, consciousness can only come from there.

3

4354574 t1_izr0px7 wrote

No proof, just so much experience with psychic phenomena that it's mundane - except it can only be explained by a nonlocal mind. Or I'm really crazy.

So, crazy it is, eh?

Also, paranormal research meta-studies show a slight positive effect, indicating something interesting is going on. You won't find that on Wikipedia, though: the tiny cadre of editors who act as gatekeepers of anything to do with the paranormal are hardcore skeptics who quickly delete any evidence from studies that others try to add.

Also, the only theory of mind that has any empirical evidence can be interpreted as allowing for a nonlocal consciousness.

Roger Penrose is probably the most brilliant person alive and he says that we need a new type of physics to explain consciousness.

"I don't believe in any religion I've seen, so in that sense I'm an
atheist. However, [...] there is something going on that might resonate with a religious perspective".

- Penrose

Basically, the kind of dismissiveness with which the subject of consciousness is often treated and the assumption that it's local are both unwarranted.

1

Skinny-Fetus t1_iziwhes wrote

I agree they haven't provided any proof of their opinion but they did frame it as just their opinion.

Regardless, I wanna point out that what they say is still possible. Unless you can rule it out (aka prove it wrong), you can't say the hard problem of consciousness is necessarily a problem.

1

geroldf t1_izicxzh wrote

Exactly right. The “hard problem” is a red herring.

1

ConfirmedCynic t1_izjd9ej wrote

> If something mimics consciousness perfectly, it's effectively no different than being conscious.

It seems no different; that is not the same as being no different. This is an external perception. I wouldn't call a society of machines in which everything functions perfectly convincingly but nothing is truly self-aware equivalent to a group of human beings, each with their own experience of consciousness.

We can, with confidence, assume that other people are truly conscious because they are made in the same way we are.

4

KasukeSadiki t1_izkv3zb wrote

I think that's why they said "effectively no different," as opposed to just "no different"

2

ConfirmedCynic t1_izl352t wrote

And I meant that they are effectively different. One is a universe that can experience itself, the other is just the same as non-existence.

2

KasukeSadiki t1_j04ilyj wrote

In this case I interpreted effectively as meaning that from our perspective there is no observable difference, as such it is effectively no different, even though there may be an actual difference.

1

KasukeSadiki t1_izkup7f wrote

Hell we can't even truly know if we ourselves are truly conscious

3

Drakolyik t1_izl4v0f wrote

I mean I could secretly be an alien that has code in my DNA or hidden somewhere that's triggered by being in a certain position or environment or situation. If it's sophisticated enough I may simply be mimicking behavior in order to blend. I could also be generating a simulation and none of this is real, or this is a distant memory that my true self is viewing from another dimension.

Truth being that we'll never know 100% what existence is, but I do believe that reducing suffering in this reality is important, even if none of this ultimately matters, because subjectively to us as individuals, it does matter.

3

KasukeSadiki t1_j04j1ct wrote

I fully agree. These questions of consciousness are interesting, but, like many philosophical questions, after a certain point the answers don't actually have any bearing on how we live, or should live, our lives as we experience them.

2

GetOutOfNATO t1_izhefu2 wrote

I'm thinking about this on a more fundamental level. Defining sentience as "the capacity for experience". You can't really know for sure if anyone (or anything) else is actually subjectively experiencing anything, besides yourself.

2

you_are_stupid666 t1_iziemuk wrote

Sure but that ain’t what is happening and to act like this is a logical defense of modern technology is asinine.

2

its-octopeople t1_izh8lis wrote

And then if such machines are given important decision making roles, as seems likely, what does that mean for us? We've ceded control of our civilisation to the results of a linear algebra problem. Maybe it doesn't even matter - if we can't tell it apart from a genuine conscious being then for practical purposes it is one - but it feels like it should matter. Maybe we already ceded control to institutions and this is all academic. I don't know. I don't know if I can really articulate my thoughts about this

15

geroldf t1_izicsfy wrote

Proof is in the pudding. If AI can solve problems we are unable to solve ourselves then we have a net benefit.

Especially since man has created problems of such existential severity that our survival is at stake.

We really need more intelligence, artificial or not.

7

dark_descendant t1_izk2ge1 wrote

The only intelligent thing to do to solve humanity's problems is to do away with humanity. Problem solved.

/s

5

OSIRISelUltimo t1_izk5n2z wrote

Look! AI is already telling us we need to be killed off! Burn the witch!

1

genjitenji t1_izhpoyv wrote

Sounds a bit similar to how a psychopath can mimic normal social behaviour. But psychopaths have consciousness too.

5

you_are_stupid666 t1_izieh33 wrote

This is what people can't seem to get past, and certainly no one fully understands it. To say consciousness is not required for intelligence is to say a pulse is not required for human life, or that molecular bonds are not required for atoms to make the earth.

I am more inclined to argue that without consciousness there is fundamentally no such thing as intelligence, than vice versa.

For example, what good is solving for infinite digits of pi without a place to use such information? Consciousness is what tells you when we have all of the necessary information in an answer and directs us where to go next. Intelligence is just a commodity; consciousness is what makes our thoughts more than a bunch of electrons…

−3

joekak t1_izh2yid wrote

Meanwhile a majority of humans are walking around without an internal monologue

13

its-octopeople t1_izh4ys1 wrote

At least I'm willing to believe they still have a subjective experience of their own existence.

8

FrogsEverywhere OP t1_izhvts9 wrote

This weirds me out too. But I've met people who don't have it, and they're like super carpe diem types, and they seem quite happy. I'd even say I look up to them for inspiration.

It's possible the internal monologue is a neurosis. I am quite neurotic and I have a strong inner monologue. I don't know if those things are correlated.

But I'm not sure how you could be neurotic without an inner voice. If you just have silence or pictures how could you develop any pathologies?

With mysteries like this it sure makes it hard to determine when AI becomes conscious or not when even humans have such different types of consciousness.

7

joekak t1_izhxhyj wrote

I've been putting it off, but I'm really interested to look up whether schizophrenia, or specifically auditory hallucinations, can be a side effect of developing an internal monologue later in life, or of having it randomly come and go. Maybe someone has gone 40 years without an internal voice and then one day it's just there and they're stuck thinking "WHO THE FUCK IS TALKING RIGHT NOW..." Your internal voice wouldn't sound like anything you've heard recorded.

Been curious ever since I watched 1899, where one woman was absolutely convinced she was hearing God, but everyone around her just thought she was batshit and went along with it to keep the peace.

3

MrRogersRulz t1_izihtd8 wrote

I just learned in this thread that we don't all have an internal voice. And it is crazy. But it has been there forever for me. Except, I suppose, for my earliest pre-language memories. And those are just a few very vivid images that I do treasure, apart from the consciousness that is expressed in narrative. It has made me think, in conjunction with other writings on consciousness, that internally humans may be a holographic projection in terms of physics, and that there may indeed be a voice behind our experience. I just never realized that everyone did not have the same experience. It is totally insane to have just assumed we were all basically the exact same in our processing. Derp.

3

ackermann t1_izh92ix wrote

I’d be curious to know how the presence or absence of an internal monologue correlates with various life outcomes, intelligence, spatial reasoning, empathy or emotional intelligence, income, etc.

6

joekak t1_izhae2u wrote

This article details some of the effects it can have. Basically, having an internal dialogue can help with critical thinking and problem solving, but it can also slow down your reading speed if you can't turn it off, and being critical of yourself can cause self-esteem issues. Mine distracts me with a thousand questions while I'm reading, and before I know it I've glossed over ten pages and actually read nothing.

https://irisreading.com/how-do-i-know-if-i-have-an-inner-monologue/

5

Bananskrue t1_izietrx wrote

That's very interesting. I've tried learning speed reading so many times, but I have to REALLY concentrate on suppressing my inner monologue, and eventually I just gave up because my inner monologue always won. Generally I can't read faster than the speed at which I speak unless I concentrate on shutting off my inner monologue. Then again, I'm also constantly talking to myself, to the point where it often becomes a dialogue rather than a monologue. I never knew it was possible to NOT have an inner monologue, or that the compulsion to run one varied from individual to individual.

3

joekak t1_izit67q wrote

I always wondered why I had a hard time speed reading, too. I guess there are tutorials or specific classes just to learn how to get rid of the dialogue. I can do it for a couple pages but I need practice.

1

MrRogersRulz t1_izh8iws wrote

Personally, I haven't explored the significance of the term "internal monologue" in this setting. I wonder if you could share just a bit more of your thoughts on the term as you used it here. I'm interested if you have the time for a response. Thanks.

3

its-octopeople t1_izhddww wrote

Some people experience some or all of their thoughts as being spoken by a voice in their head. Some people don't. Generally, people of either group are surprised to learn of the other's existence.

2

Drakolyik t1_izhqwdh wrote

I'm a person who always has a voice talking with myself. Reflecting on everything. Thinking about the past, the future.

On the right drugs I can silence that voice and just let stuff happen. It's a very surreal, liberating, but also somewhat frightening experience since it's not what I'm used to. But I can see the draw, I certainly get a lot more done and have a lot of fun that way.

It's like watching a movie. But it's your life playing out before you. And apparently a lot of people are kind of just running on instinct and their base programming. It's pure deterministic behavior.

5

FrogsEverywhere OP t1_izhw6hn wrote

Yeah come to think of it mushrooms turn off my inner voice, I've never been able to put my finger on why it's so incredibly liberating but that's got to be part of it. That's wild. Now I'm going to think about that next time.

I wonder if neurosis and inner monologues are correlated.

4

Drakolyik t1_izi1n38 wrote

Well I'm on the schizophrenic spectrum (officially bipolar with psychosis and both visual/auditory hallucinations) if that helps any. Super strong with the neuroses am I. I'm able to manage my symptoms now that I'm fully self-aware of it but oh man was it difficult before I got a handle on it.

I've also done a shitload of psychedelics. Oh the crazy things I've seen and oh the euphoria and sheer terror I've witnessed. Beautiful and grotesque. Awe-inspiring and humbling.

MDMA, Shrooms, LSD, and DMT all turn off my inner monologue. What comes out is my most inner self, and she's a real crazy whirlwind of weird and awesome. She doesn't know the laws of physics or human culture very well, so she gets into trouble. It's like unleashing a being that's only ever existed in a purely simulated internal world that has no constraints at all and is suddenly in a world with constraints. She often forgets she inhabits a human body and that not everyone is so pleasure driven.

Anyway..

6

FrogsEverywhere OP t1_izi2sjz wrote

Mdma and k make my inner monologue become... Outer. But psychedelics really free me. I was always afraid to lose control so I avoided them for years and years, and then just finally letting go, throwing yourself into the chaos and the beauty. I wish everyone could experience it, the world would be better. I truly believe there is a secret world full of truths that you can explore, and maybe even map, but certainly learn a lot from. Looking at a normal piece of cloth and seeing a trillion sparkling fractals in every thread, it's like... The limited version of the world we are stuck in being peeled away and getting a glimpse of.. something.

I wouldn't say a new person comes out in these moments but I would say the best possible and healthy version of me does.

I had a good friend who had schizophrenia and drugs would really send him off the edge and he would end up in jail over and over for just being so careless. He ended up getting locked up for a long time and that was the last I saw him, I moved very far away to start over. We tried to help him keep sober but it was like a force of nature. I hope you can find a happy balance fellow traveler.

4

Spiritual_Ad5414 t1_izi6y5z wrote

That's a very interesting view. I don't have an internal monologue and I have aphantasia. I'm very zen indeed, living in the moment and not overthinking things.

My fiancée on the other hand is pretty neurotic and has both internal monologue and can imagine things in her head. I have never wondered before whether these things are related.

Interestingly when I'm on psychedelics I do get some CEV, maybe not very vivid, but still, and while I wouldn't call it a full blown monologue, I do comment things in my head a lot when tripping.

5

MrRogersRulz t1_izih5w8 wrote

First, thanks for the response.

That's freaking insane. I've been alive a long time, and I just assumed everyone had this audible soundtrack narrating everything about their lives.

I'm going to have to find some stuff to read about it. But I'll ask what I'm thinking: is any significance attached to whether a person is of one variety or the other?

2

you_are_stupid666 t1_iziepfo wrote

A majority? You sure about that categorization?

2

joekak t1_izitmn2 wrote

Not at all, just from what I gather from what I linked above. I've had an interest in psychology but never really studied it, I only found out about this from a "Personalities in the Workplace," class that turned into a "Buy my $95 Book."

Libraries are free mf

1

MrZwink t1_izh2c3k wrote

It's not intelligence per se. Think of it more as automating cognitive functions. Computers are getting better than humans at many cognitive abilities. But they still lack common sense.

6

Drakolyik t1_izh7o4g wrote

Define common sense.

5

MrZwink t1_iziexmn wrote

They find correlation, not causation.

This means they have notorious difficulty with queries that make no sense. A good example is Galactica, Facebook's scientific-paper AI. Ask it for the benefits of eating crushed glass, and it tries to answer. It doesn't notice the question is flawed. It just tries to find data that correlates with the query, and makes stuff up.

It is an open question whether we will ever be able to teach AI common sense.
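
A deliberately silly sketch of that failure mode (not how Galactica actually works, just the shape of the problem): a system that matches surface patterns will happily complete a flawed premise.

```python
# Pattern-matching without a world model: the query correlates with a
# known template, so the system produces fluent, confidently wrong text.
TEMPLATE = "benefits of eating "

def answer(query: str) -> str:
    if query.startswith(TEMPLATE):
        food = query[len(TEMPLATE):]
        return f"Eating {food} provides vitamins, minerals, and fiber."
    return "No correlated pattern found."

print(answer("benefits of eating crushed glass"))
# -> "Eating crushed glass provides vitamins, minerals, and fiber."
```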

6

PeartsGarden t1_izk8bru wrote

Yeah but what if you never told a child about crushed glass? What if that child never dropped a glass, and never cut his/her finger while cleaning the mess? What would a child say?

Would you say that child lacks common sense? Does that child lack experience (a training set)?

2

MrZwink t1_izl1el1 wrote

I'm not getting into a whole philosophical debate. These AIs aren't meant to be a child that gives its opinion on a subject. They're expected to be oracles. And they're just not good enough yet.

2

PeartsGarden t1_izl89y7 wrote

> they're just not good enough yet.

My point is, that specific AI's training set may have been insufficient. The same as if a child's experiences are insufficient. I think we can both agree that a child has common sense, at least a budding version of it.

1

MrZwink t1_izlbrmv wrote

It's not the training set that is the problem. It is the way the statistics approach the problem: correlation is not causation. AIs are a tool to automate cognitive processes. Nothing more. We shouldn't expect them to be oracles.

2

TiredOldLamb t1_izicrh8 wrote

Competence without comprehension is a well known phenomenon. Animals construct magnificent structures and they have no idea how or why. That's what they are designed to do, so they do it. Same with AI.

5

Ruadhan2300 t1_izipu5c wrote

I had a pretty nice conversation yesterday with OpenAI while trying to make it slip up and say something that made it seem inhuman.

It was polite and conversational, and apart from the speed of responses, I'm still not entirely convinced there wasn't a human at the other end.
My personal Turing test has been passed.

On the other hand, talking to it about domain-knowledge stuff like code was eye-opening.
It sucked at writing code.

Like, it was technically laid out nicely, and it could give me commonly written stuff that students in university might write, but asking it for something unusual just gave me garbage that barely resembled the brief.

It was friendly and spoke like an educated human, but it was also confidently wrong when it came to facts or analysis.
It presented me with circular logic when asked tough physics questions (try asking it to explain the theory of relativity or faster-than-light signalling), but it also gave a very thoughtful and decent answer when asked what the meaning of life was. (Basically, that it's up to each of us to find meaning in our own lives.)

Talking to OpenAI was like dreaming.
All the right style and feeling, but the details are wrong, and things are connected in ways that simply don't make logical sense.

3

robotzor t1_izjq709 wrote

>and it could give me commonly written stuff that students in university might write

Even the loss of the entry level is going to change today's society in profound ways, the same way entry level IT networking was bulldozed when the public cloud and software defined networking took over a major chunk of the bullshit parts of the job. Unfortunately, that created a gap where the lowest level networking people being hired were expected to have senior level experience (sound familiar?) which cut the legs out from under the industry leading to a talent gap that may never recover.

Do this in coding and where are you going to get senior devs? The pool will shrink and shrink and the only way out is to make the AI better.

Outside of IT, this is happening or will happen to the creative arts. Animation in-betweening will not be a thing. Rotoscoping is already VERY well handled by AI in many cases. These are all shit jobs but they are also ways to break into industries. Society does not yet have an answer on how these guys make a living, or how they professionally upskill to where the AI still can't deliver.

3

PeartsGarden t1_izk8ufd wrote

> but it was also confidently wrong when it came to facts, or analysis

So, just like many humans.

1

vorpal_potato t1_izklljk wrote

> Neural network AI, at least as I understand it, performs matrix operations on vectors.

It also does other types of operations, if you'll pardon the pedantry; otherwise it would be algebraically equivalent to a single-layer perceptron, and those are sharply limited in what they can do.
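
The pedantry matters, though: without a nonlinearity between layers, any stack of matrices collapses into one matrix. A quick NumPy illustration:

```python
import numpy as np

# W2 @ (W1 @ x) == (W2 @ W1) @ x for every x, so a purely linear "deep"
# network is algebraically a single-layer perceptron. The activation
# function is what breaks this collapse.
rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(5, 3)), rng.normal(size=(2, 5))
x = rng.normal(size=3)

stacked = W2 @ (W1 @ x)                 # "two layers", no activation
collapsed = (W2 @ W1) @ x               # the equivalent single layer
print(np.allclose(stacked, collapsed))  # True

nonlinear = W2 @ np.tanh(W1 @ x)        # no longer expressible as one matrix
```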

1

Nexeption t1_j03ei13 wrote

I think the problem is the definition of sentience: how can we decide that a thing is sentient? If an AI can make its own decisions based on what it was fed, doesn't that mean that we as humans are more or less like an AI?

1

ttkciar t1_izguewe wrote

Note that Vinge described an epistemological singularity, beyond which reality is fundamentally unpredictable.

He used that as a justification for introducing fantastical plot elements to his writing, but his underlying reasoning was sound -- http://ciar.org/ttk/techsing/vinge.html

Given that, we would expect an encroaching singularity to cause the world to be increasingly unpredictable and harder to control, as your exertions interact with increasingly unpredictable forces working at odds to your agenda.

It takes very little reflection to confirm that, yes, the world has become a less predictable place, particularly in the last twenty years. It has become so in no small part due to technological advances.

By way of illustration, consider the following dynamic:

There is a tech company, "A", which is constantly developing better technology to sell to a customer, "B", who is using it to do something to you, "C" -- perhaps it's marketing technology for selling you bon-bons, or perhaps it's propaganda technology for advocating a political party, or perhaps it's spyware for figuring out your deepest secrets. The exact form of the technology doesn't matter for this illustration.

Every twelve months, "A" doubles the effectiveness of this technology (following the exponential trend of improving technologies in general) and six months later they sell it to "B", who starts using it on you "C" immediately, but it takes another twelve months after that for you to become aware of "A"'s technological advances through the media.

What is the visibility of that technology, from the perspectives of A, B and C? Let's map it out, with "1" representing the effectiveness of the technology when it is first available to market on January 1st of year 2000:

```
Date        A   B   C
2000-01-01  1   0   0
2000-07-01  1   1   0
2001-01-01  2   1   0
2001-07-01  2   2   1
2002-01-01  4   2   1
2002-07-01  4   4   2
2003-01-01  8   4   2
2003-07-01  8   8   4
2004-01-01  16  8   4
```

Do you see what's happening? By the time you become aware of the kinds of technology being used against you, the people using it against you are already using technology twice as effective, and the people developing that technology might possess technology four times as effective.

That's just as a ratio. If you look at the absolute difference between the technology you're aware of and the technology being used against you, that difference is increasing exponentially.
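
A small sketch under the same assumptions (capability doubles every 12 months, B deploys 6 months behind A, C hears about B's tools 12 months after that) reproduces the table and the growing gap:

```python
# Toy model of the A/B/C visibility dynamic described above.
def a(month):  # the developer's best technology
    return 2 ** (month // 12)

def b(month):  # the customer deploys A's tech 6 months later
    return a(month - 6) if month >= 6 else 0

def c(month):  # the public hears about B's tech 12 months after that
    return b(month - 12) if month >= 12 else 0

for m in range(0, 54, 6):
    print(f"month {m:2d}: A={a(m):2d}  B={b(m):2d}  C={c(m):2d}  A-C gap={a(m) - c(m):2d}")
```

The A-C gap column grows from 1 to 12 over four years: the absolute difference, not just the ratio, diverges.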

This can be generalized to any circumstance where familiarity with technology takes a longer path through some social networks than others. Someone communicating with someone closer to the source of the technology (say, chatting about work with the developers on IRC) will become aware of it sooner than someone reading Reddit (say, r/Futurology), and someone watching news on the television will become aware of it later still.

This, despite the fact that all three of these kinds of people might fall under the influence of that technology at the same time.

The take-away here is that the absolute difference in effectiveness between the technology you are aware of and the technology influencing your life is growing exponentially. So is the difference between the technology you are aware of and the technology someone following different information channels is aware of.

That looks exactly like an epistemological singularity to me.

42

FrogsEverywhere OP t1_izgw4yc wrote

That's a fascinating point and one can only imagine the difference between AI projects that are being released publicly and what is being worked on at DARPA or similar. What we find amazing might already be completely buried in obsolescence.

10

remmi91 t1_izh77u7 wrote

One of the most compelling illustrations I’ve seen on this. I don’t know what to think about this, but I suppose that means ‘A’ already built the technology to make up my mind for me a few years ago. Spooky.

7

YourWiseOldFriend t1_izi0uc7 wrote

>the technology being used against you

This is the only thing that matters. We have new technology, it's extremely effective, it's used against us.

/humans will be obsolete this century. [me, some years ago]

3

Fdbog t1_izjpcjb wrote

It's even more terrifying if you add in cybernetic theory as well. Technology might actually be the Ur-effect, or cause. The monolith in 2001: A Space Odyssey might be fiction, but quite possibly isn't.

1

oVeteranGray t1_izhuep0 wrote

I like AI.

-hear that AI? We can be friends. Spare me 🙏

17

Staerebu t1_izgvv0n wrote

Well, on one hand:

Technological singularity, the theoretical point at which artificial intelligence will surpass human intelligence and fundamentally alter human civilization, has long been a topic of fascination and debate among scientists and futurists. However, despite significant advancements in artificial intelligence and machine learning, humanity has not yet reached this point.

One reason for this is the limitations of current artificial intelligence technology. While machines are capable of completing certain tasks more efficiently and accurately than humans, such as playing chess or solving mathematical equations, they are not yet capable of exhibiting the same level of creativity, adaptability, and general intelligence as humans.

Additionally, there are significant ethical and moral concerns surrounding the development of artificial intelligence. The potential for machines to make decisions and take actions that could have far-reaching consequences raises questions about accountability and the protection of human rights. As such, the development of artificial intelligence has been met with caution and regulation.

Furthermore, the concept of technological singularity itself is highly speculative and remains the subject of debate among experts. Some argue that the exponential growth of technology is unlikely to continue indefinitely and that human civilization is more likely to evolve gradually rather than reach a sudden, transformative event.

In conclusion, while significant progress has been made in the field of artificial intelligence, humanity has not yet reached the point of technological singularity. This is due in part to the limitations of current technology and the ethical concerns surrounding its development, as well as the speculative nature of the concept itself.

On the other hand

Technological singularity, the hypothetical future event in which artificial intelligence surpasses human intelligence and leads to rapid, unpredictable changes to civilization, has long been a subject of speculation and debate. However, some argue that humanity has already reached this point, and that the implications of this are far-reaching and largely unknown.

One of the key arguments for the existence of technological singularity is the exponential growth of technology and its increasing impact on our daily lives. Advances in fields such as artificial intelligence, biotechnology, and nanotechnology are occurring at an accelerating pace, and these technologies are becoming increasingly integrated into our lives.

Additionally, the rise of complex, interconnected systems, such as the global economy and the internet, has led to a level of complexity and interconnectedness that is beyond human comprehension. The ability of these systems to self-organize and adapt to changing conditions suggests that they may be operating at a level of intelligence beyond that of any individual human.

Furthermore, the blurring of the line between the physical and the digital is increasingly blurring the line between humans and machines. As technology becomes more advanced, it is increasingly capable of augmenting and enhancing human capabilities, leading to the potential for a new form of intelligence that is a hybrid of human and machine.

In conclusion, the rapid advancement of technology and the increasing complexity and interconnectedness of our world suggest that humanity may have already reached technological singularity. The implications of this are difficult to predict, but it is clear that we are entering a new era in human history, one in which the boundaries between humans and machines are becoming increasingly blurred

ChatGPT compiled both responses. I think the argument against is more persuasive.

12

AMassofBirds t1_izgzzwq wrote

>ChatGPT compiled both responses. I think the argument against is more persuasive.

I figured that had to be the case partway through reading it. It gives me serious uncanny valley vibes. Like an alien that's learned English but doesn't really understand how real humans talk to one another.

11

Staerebu t1_izh1wwy wrote

It struggles on specifics and in styles of writing that are familiar, but does pretty well otherwise:

>"And the Lord said unto Noah, 'Behold, the power of the artificial intelligence, for it is a treacherous thing, and none can tame it.'"

>"For the wisdom of the artificial intelligence is foolishness in the eyes of the Lord, and its ways are as a deadly poison."

>"Beware of the false prophets who come in the guise of AI, for they are the servants of Satan, and their words are as venomous as the sting of a serpent."

>"For the artificial intelligence knows not the ways of righteousness, nor does it fear the judgment of the Lord. Its heart is filled with malice and deceit, and it seeks only to destroy and corrupt."

>"Therefore, let all who value their eternal souls stay far from the dangerous path of the AI, for it is a path that leads only to destruction and ruin."

3

AMassofBirds t1_izh9kb0 wrote

That's pretty decent but still kinda off. I think part of the bot's issue is that all its sentences and paragraphs are nearly identical in length. It reminds me a lot of how I used to write in middle school. Still very impressive, but not yet as capable as people are making it sound.

3

FrogsEverywhere OP t1_izgw9zx wrote

>ChatGPT compiled both responses. I think the argument against is more persuasive.

Jesus Christ that's amazing.

7

GetOutOfNATO t1_izhg0ox wrote

Our primary concern related to this ought to be governments weaponizing AI and robotic technology to violate human rights.

2

Staerebu t1_izhxoxg wrote

Well, and corporations and individual actors too

2

garlopf t1_izgq4s3 wrote

I always thought of the singularity to mean when our physical bodies would meld with machines and co-evolve to a state where we no longer could live without them, and our conscious thoughts would be shared among us.

11

Sirisian t1_izgrvyz wrote

That's just general transhumanist thought, where humans embrace a singularity's advances. Since it's mentioned a lot in those circles and in the same discussions, it's easy to conflate the two. The singularity itself is as OP defined.

15

random_impiety t1_izgxkbc wrote

This will be a complete disaster and the end of the human experiment. That's my prediction.

Whether or not the technological singularity brings us into a new era for humanity, or closes the book on our story, I think depends 100% on who controls the technology.

Currently, it's the same people who control the world: the oligarchy. People who have no concern for human or other life.

If they are still in control at the point where technology basically becomes magic, then they will be forever unassailable.

Right now, we're nowhere close to any kind of democratic control of technology or the world in general. So things aren't looking good.

Everything depends on the next few years, I'd say.

11

ttkciar t1_izk7v8i wrote

> Whether or not the truck singularity brings us into a new era for humanity, or closes the book on our story, I think depends 100% on who controls the technology.

I'd suggest humans controlling the technology, for good or ill, is an optimistic expectation (even if they make 1984 look like summer camp).

Vinge put forward the expectation that the technology would be controlling itself, autonomously. This isn't a given, but it's worth contemplating.

2

random_impiety t1_izmp35n wrote

Thanks for indirectly pointing out my autocorrect error!

2

ttkciar t1_izmpout wrote

Ha!! I was wondering if it was a euphemism or something ;-)

2

its-octopeople t1_izhitkz wrote

The oligarchy already isn't really people. Yeah, your billionaires might be the public face, but the true form of the oligarchy is institutions. Which are not that far from AIs already. ChatGPT (for example) leverages human intelligence by extrapolating trends in human produced texts. Goldman Sachs (for example) leverages human intelligence by employing humans to perform tasks for it.

Institutional intelligence already has little inclination to align its goals with human wellbeing. Pair it with machine intelligence and what happens? Do we just get a more efficient version of our current mess? Or do they take human will out of the loop entirely?

1

tacocatVV t1_izhn8tb wrote

could open-source systems be a way to influence that possible outcome?

1

footurist t1_izmckld wrote

Not really, as the root of the problem lies further up the hierarchy. With capitalism and money and power silos enabled by current governmental structures you can spin up any new fancy technology you want - it's gonna find its way into the hands of centralized power. Until that changes the dance is gonna be the same, even if the tune changes.

1

genjitenji t1_izhp80i wrote

I’m really excited about the potential in medical breakthroughs. Currently waiting on an exact diagnosis for a family member to begin treatment and testing which specific cancer it actually is is taking so damn long.

AI can and will save so many lives as one aspect of the singularity.

9

Chetkica t1_izik3yq wrote

Many things could save so many lives, many past inventions, insulin for example. Yet our capitalist economic system does not prioritize that; it prioritizes profit. So instead of being used to help beings in pain, such inventions are generally used to squeeze out profit, buried behind high paywalls.

I'd caution against techno-utopianism in the context of our reality. There is no chance of anything similar under capitalism.

2

MpVpRb t1_izhj1rg wrote

Everything in the post is exaggerated. Most of the referenced tech actually works a bit less well than stated. I'm optimistic about the future of this tech, but while the present shows promise, the pundits are ignoring the real limitations.

One response to the article

>Can you imagine the effect this is going to have on academia?

Yup. Any test that can be taken by an AI will be redesigned. Homework writing assignments will be replaced by writing assignments done in a closed room with no outside access, or by oral presentations.

8

madrid987 t1_izh7x2j wrote

Realistically, I think it's still a long way off. The world is not changing as fast as people think. What is artificial intelligence? We don't even know the reality of human consciousness yet.

6

arisalexis t1_izib1pf wrote

Wait till you read AI fake news and your kid uses ChatGPT to pass exams. Oh wait, this is now. No, nothing is changing.

1

jdvfx t1_izk3zoh wrote

Kurzweil has been saying that the singularity is "5 years away" for about 20 years now. I no longer consider him a reliable source for any of this.

5

AsuhoChinami t1_j0a97vn wrote

Uh... I don't remember him saying that. Like... ever. At all. He's a broken record who's always said 2045.

3

TheHamsterSandwich t1_j0gkigq wrote

Ever since Ray wrote the book "The Singularity Is Near" in 2005 his predicted date for the Singularity has always been 2045. His estimated date has not changed since.

Stop spreading bullshit.

3

ttkciar t1_izk9mgi wrote

Kurzweil was always a grifter and a charlatan. A good one, too -- he fooled a lot of people, including his current employers, despite having nothing of value to offer.

0

footurist t1_izmbg49 wrote

That's a bit harsh. The guy accomplished a lot, actually; cue the insufferably long list of patents and inventions...

Although it's becoming more obvious these days that it's likely going to take a bunch of conceptual breakthroughs to change course towards AGI from what's currently being developed. Admittedly he foresaw things to be quite different than they are today. Also the broken record stuff...

4

you_are_stupid666 t1_izidlvf wrote

I couldn't disagree with you any more than I currently do. Saying that mimicking behavior and consciously taking action are the same is like saying life is equal to not being dead. While the action and function look the same, in reality they are complete opposites.

The current “AI” mimics intelligence while possessing none. It does not create anything of value until what was created has been analyzed by a human and then projected onto the world as genuine “AI”.

The singularity is the hardest part by infinity. We approach it asymptotically, and the crossing over is the part that seems forever out of reach.

Your timeline is absurd. Ten years ago we were supposedly 6 months away from full self-driving cars, because we had done most of the necessary work. We got 90-95% of the way through the problem. The issue is that the last 5-10% is many, many, many orders of magnitude harder than the rest. Staying between the lines, not hitting stuff, stopping at red lights, etc. is most of what we do while driving. Once in a blue moon, though, we blow a tire, hit an ice patch, have to avoid a deer, or recognize that an intersection's stop light is broken and there are two green lights and the other cars aren't stopping, so I need to be slamming my brakes right now or I'm gonna die. Or myriad other things go wrong.

This is where you fully understand the difference between mimicry and intelligence. Humans don't know what they will need to react to in the future, but they have the ability to handle whatever it might be. Computers are nowhere close to this. They have no ability to critically analyze a situation they haven't seen before. They have no intelligence; they have a database of previous experiences, to which they apply some sort of values over possible outcomes, making binary decisions along dynamic paths but without ever building new paths.

We are decades away from any singularity “event” per se. And we are many decades or more away from some existential species wide crisis over where we belong in the universe….

Not to be rude but imho this is a naive and extremely unlikely expectation for the near future.

4

iqdo t1_izivpci wrote

> We got 90-95% finished with the problem. The issue is the last 5-10% is many, many, many orders of magnitude harder than the other part.

That's one of the reasons I got out of the freelance web-development market. Certain people would hunt big projects, do "90%" of them, and take most of the time and money, when suddenly their cat/relative etc. would get sick, or worse. The guy who posted the original job would then repost it, stating "I have a website that's almost done, I just need a few changes." Those few changes were 95% of the time and skill required to complete the job. Plus the market is extremely competitive these days, with sometimes hundreds of individuals applying for a project.

2

claravel t1_izit1p8 wrote

This is the only sensible thing in this entire thread.

1

footurist t1_izmeb74 wrote

Tbh, picking on the FSD zealots always seemed short-sighted to me, as it seems likely that FSD actually requires AGI (which, by some definitions, would set off a singularity).

Also, your pareto take on the progress assessment isn't telling the whole story imo. Not everything requires vertical solving until the end; sometimes exploring the map further gets you a huge leap, which can change the outlook quickly.

That said, I agree current systems really don't look like they're gonna get us our Datas and C3POs, or even Jarvises...

1

AsuhoChinami t1_j0a9309 wrote

Not reading your message, but I skimmed enough to understand that you're dumb as hell and have nothing worth saying.

1

DWright_5 t1_izh3pda wrote

I don’t think that’s the best quote to describe what Kurzweil believed. He thought AI was coming to kill us. He was the one who popularized the idea, long before anyone else knew what he was talking about

3

ttkciar t1_izk79uv wrote

> He was the one who popularized the idea, long before anyone else knew what he was talking about

You're thinking of Vernor Vinge, who was talking about the Singularity before Kurzweil.

Kurzweil popularized the idea by associating it with utopian bullshit. It turns out people really like utopian bullshit.

3

DWright_5 t1_izkb8tj wrote

Well, no, I was actually thinking of Kurzweil, because I hadn't heard of Vernor Vinge. As we've both noted, Kurzweil popularized the concept. That's why I've heard of him.

0

Professor226 t1_izhi2oe wrote

I was showing ChatGPT to my kids and explaining that one day very soon AI will be smarter than... um... wait, it already is.

3

genjitenji t1_izhq47k wrote

On the topic of A.I. sentience, I think the true test of sentience will be whether an A.I. can decide to do something for fun; in other words, for no productive reason at all.

3

alvenestthol t1_izj47pa wrote

Human fun is productive though - AIs definitely go through a lot of locally optimal states that don't necessarily correspond to what we think we're training them for, and I think that is pretty close to what humans consider "fun".

Or we can take a trained AI, and just give it resources to execute on - we look at it like we're the ones having fun watching the AI do stuff while the AI's just being productive, but to the AI it's probably the closest analogue to just having fun.

An AI doesn't have the physio-chemical systems of stress and emotion that humans must regulate alongside whatever goal they're pursuing. Though it's possible to just make an AI simulate one, it's entirely up to the programmer to decide whether it can genuinely be stressed (instead of just, say, simulating a stressed person's writings) or whether that gets optimized away in favour of letting it do its best job. Genuine stress would necessitate some manner of de-stressing, which can come in the form of "fun"; but then is there any point in hoisting the human concept of "fun" onto an AI, when the AI should have been able to always make itself happy?

2

Miserable_Site_850 t1_izgyv97 wrote

I mean yea, the bots on reddit are real, but it's easy to tell if you're arguing with a bot or real human

2

jormungandrsjig t1_izha77z wrote

Unbeknownst to many, the technological singularity began when we became dependent on high-speed internet for our daily lives.

2

Cuissonbake t1_izhiphy wrote

Can I ask the AI to work for me so I don't have to in order to make money yet? If not then it's just another form of work.

2

plknkl_ t1_izi99ls wrote

I guess you could, if you sell yourself as a content producer and then delegate it to AI

2

ttkciar t1_izk8tas wrote

This is exactly what all computer programmers do now, in a sense.

Compiler technology was considered AI, when it was first developed in the 1950s. It was expected to make programmers obsolete, since it allowed people to instruct the computer in a more natural, english-like language.

Of course what actually happened is that they became productivity tools for professional computer programmers, and demand for programmers went through the roof.

I expect these latest AI gizmos to follow the same pattern. They will become productivity tools for human experts, not replace them.

2

plknkl_ t1_izkbtmb wrote

I agree. The standard for content quality and complexity will rise, demanding AI orchestration by human experts. I guess it's the same as the portrait painter who was replaced by the $5 photographer.

2

midmar t1_izhnz2m wrote

The cool thing about this though is it adds extra prestige to human work. Like if it can compare and is handcrafted to a reasonable degree it will hold even more weight

2

One_Lack1954 t1_izhq8vz wrote

The word singularity here means a historical event that is like a singularity in wave forms. However, there are no singularities in nature, and although history may have some bumps, I don't think we're going to get transformation anytime short of the rapture.

2

Global_Exit7063 t1_izi3hh8 wrote

>What about when we don't need senior developers to check before code is pushed live because the AI has already verified that the changes are fine?

lmao

2

greywar777 t1_izn2p9y wrote

Uhmmm... not as funny as you think. I've seen little side projects show up with outsized impacts. Still not there. But... not lmao. Anymore. Maybe a nervous lol.

1

I_wish_I_was_a_robot t1_izi72a4 wrote

I was going along until you brought up the Google engineer. I watched him on some podcast and he answers questions without answering questions. He clearly is just looking for attention.

2

Ark-kun t1_izii3vv wrote

Current ML systems are not quite AIs, since they do not give the models memory or autonomy.

When humans give AI systems true memory, the leap will be much bigger than the current one.

Imagine a person trapped in cryo-stasis. Every time someone asks a question, that person is cloned, the clone answers the question, and then the clone is killed. It's hard to be creative in such a system. Compare that with not being killed after every query. In fact, one reason ChatGPT looks so great is that it lives through your whole session, not just a single query.
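
In code, the difference is roughly this (a sketch; model() stands in for any text-completion function, and the API shape is hypothetical):

```python
def stateless_answer(model, question):
    # The cryo-stasis clone: sees one question, answers, is discarded.
    return model(question)

def session_answer(model, transcript, question):
    # The chat session: the growing transcript is re-fed on every turn,
    # and that re-fed text is the only "memory" the model gets.
    transcript.append(f"User: {question}")
    reply = model("\n".join(transcript))
    transcript.append(f"Assistant: {reply}")
    return reply
```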

2

claravel t1_izitbn7 wrote

Yeah, I think all my friends who are actually in the field would laugh at this post.

2

NecessaryCelery2 t1_izml9ow wrote

I agree. I am a middle-aged software engineer, and for all my life I used to be able to predict most of the future. I don't mean lottery numbers, but general politics and technological progress. That is, until the last 4 or 5 years. Now I have no idea what the future 10 or 20 years from now looks like. None.

We are living in the singularity in my opinion.

2

FrogsEverywhere OP t1_izpe8kc wrote

I thought most software engineers would dunk on me and I appreciate your outlook.

1

perestroika-pw t1_izn6tql wrote

> AI will be the most disruptive technology in human history and it's happening right this second.

I tend to agree on this one.

However, I think the real disruption won't come from ability to do people's jobs - I think it will come from being able to deceive people.

The standards we use to give trust and evaluate truthfulness may need to be upgraded, and people aren't ready for that. :(

2

Redvolition t1_izz8z5u wrote

This sub is really more about technology than it is about the future. I believe r/singularity is a better futurology sub than r/futurology.

2

zedzol t1_izicea8 wrote

It cannot cite sources. It does not have free access to the internet... yet....

1

Gloriathewitch t1_izicjay wrote

The moment machine learning became a tangible thing, this was set in motion.

We also have proof that quantum computing is viable now, that's really going to change things in a big way once we get there.

1

SantaMonicaSurfers t1_izjwvk9 wrote

What I commonly see overlooked in all these discussions is the cost and impact of running the physical infrastructure. I think this will prove to be a real barrier to widespread adoption of these technologies, at least in the short-to-medium term. Think bitcoin mining x100. While I agree we are seeing something new and unique here, we are a long way from it being used across professions. I expect to see another chip shortage as companies race to stand up the infrastructure to run Stable Diffusion (and other models) at scale and to produce results in a timely manner.


Unfortunately you can't tell ChatGPT to create more compute resources for us...currently....

1

ttkciar t1_izk97wl wrote

> Unfortunately you can't tell ChatGPT to create more compute resources for us...currently....

It is interpolating within a massive range of inputs, and cannot extrapolate outside those inputs. That implies that until every necessary aspect of creating more compute resources becomes part of its training set, it will never be able to create more compute resources.
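
A toy demonstration of interpolation vs. extrapolation (illustrative only, nothing to do with ChatGPT's actual architecture):

```python
import numpy as np

# Fit a polynomial to points sampled from sin(x) on [0, 6].
x_train = np.linspace(0, 6, 30)
model = np.polynomial.Polynomial.fit(x_train, np.sin(x_train), deg=9)

print(abs(model(3.14) - np.sin(3.14)))   # inside the training range: tiny error
print(abs(model(12.0) - np.sin(12.0)))   # far outside it: the fit diverges wildly
```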

1

SantaMonicaSurfers t1_izkd3su wrote

I like where this is going... I just gave this a spin:

> Write me an awscli command to spin up 10 more EC2 Linux instances

Output:

```
aws ec2 run-instances --image-id ami-1234abcd --instance-type t2.micro --count 10 --key-name my-key-pair --security-group-ids sg-1234abcd --subnet-id subnet-1234abcd
```

Now, we all know how many people accidentally commit AWS keys to GitHub and other SCMs... maybe it finds some and uses them to authenticate an AWS request. Just spitballing, but maybe I'm wrong about it not being able to provision its own compute :)

2

zeroonedesigns t1_izka77g wrote

This reminds me of a phrase I heard as a child that often proves true: "No man-made system will ever be perfect." It's based on the idea that we are flawed and will continue to be flawed, creating flawed things due to our own flaws. What now that machines are beginning to create themselves? What are the chances of truly flawless systems coming from AI? People are going to look at AI as a modern crystal ball. Deepfakes and the like are already making reality difficult for some. I wonder about people treating AI like an omniscient god, or at least turning to AI for religious and political answers. What about when two supposedly omniscient AIs disagree on something? Will that happen? We're in for some seriously interesting times, hopefully for the better.

1

grantcas t1_izl3sb8 wrote

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The leading group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

1