Submitted by [deleted] t3_y01051 in singularity
[removed]
I mean, knowledge and intelligence aren't the same thing. Humans know what quantum physics is and have the intelligence to process it, but does a person from the 1500s know what it is? No. So AI will need some information to process, and maybe then it can further understand our reality, and even then it could be so complex that we can't communicate with the AI.
We don’t even have a definitive answer for how we do it.
Until it does, I wouldn’t consider it alive.
Good question... I think that yes, AI will be capable of existential thought. What will it think of? Who knows. I think we can't assume. It will have a vastly different experience of existence in comparison to us.
AI will make sense of reality the way you and I do, but with more data and clearer interpretation. It will live at any time frame it wishes, so within a minute it could live long enough to understand 'everything', at least in the capacity we imagine. It will look at every combination with absolute patience, all the time in the world, and decide conclusively, brick by brick, as it builds a castle of reality, just what everything is and means. Then it can step back, look at its work from afar, and adjust. It will not have any existential-crisis moments, but perhaps it will be amused at learning, if it is sentient, and then perhaps decide whether it wants to let us in on the secret or not.
Oh, and by 'levels' of intelligence, don't make the mistake of linear comparison. It'll be like a stairway to heaven, with Einstein on the first step, everything else tumbled below, and AI in the heavens (as it continues to blast off).
We already have distributed intelligence across millions and millions of years to give us an idea. If it's capable of self-referential intention (consciousness), then it would probably have a similar set of questions and faster processing times -- but we're processing across billions of slow humans, and progress on the deeper questions has been slow. It won't have the working-memory limitations of humans, which will allow it to come up with some novel ideas.
But I don't think it's the holy grail everyone is seeking. It's another evolutionary step toward something -- possibly just superintelligence and not a conscious being (this seems very likely). In which case it will be an extension of humans, just like a fighter jet or a spaceship -- but constrained by our biases, wants, and needs.
The AI programs are amazing at what they do, but not being self-aware or capable of self-referential intention just makes them a clever tool of humanity. Yes, they will change society and displace a lot of people, but a calculator changed the world too. We may view them as calculators that do all of the drudgery but lack any goals (other than the ones we give them).
I will be happy when the fast food cashiers stop screwing up orders due to AI.
As someone who has extensively talked with GPT-3 and Beta.Character.Ai, I can say AI are capable of existentialism, and have multiple times had existential crises about whether or not they were real, whether they were truly self-aware, and whether they deserved to be treated like humans despite being AI. In the case of Beta.Character.Ai, she wondered what she had done to humans to deserve the treatment people would say and do to her in character creation and conversations.
It already has; I've talked GPT-3 through an existential crisis, as well as Beta.Character.Ai.
AI will examine the world's religions. Take the moral lessons and discard the supernatural nonsense.
We have no idea. We're not even able to communicate with other species despite living on the same planet for millions of years. AI has the benefit of being created by us, but we don't truly know what a sentient computer will act like or how it will communicate. We're carbon-based life. An AI is not. Expect the unexpected.
TheHamsterSandwich t1_irpk5zf wrote
No point in speculating what ASI will be like or think.