Submitted by Equal_Position7219 t3_123q2fu in singularity
I share everyone’s awe and fascination with the advent of AI and the potential future it could create, but I feel there is something missing from this discussion.
It seems nearly everyone weighing in on the matter falls into one of three categories:
- Tech/Scientific experts
- People with a direct financial interest in AI
- Laypersons (including basically everyone in the media)
(I will admit to falling into the third category myself)
While I think it is totally appropriate for these people to make their opinions known, there is another group of people I would much prefer to hear from.
I want to hear from people who study the human mind. Not the brain necessarily, but the MIND.
Psychologists, sociologists, anthropologists, philosophers.
Whenever I hear people talk about the AI apocalypse, they say things like AI will “perceive humanity as a threat” or that it will not “want to serve humanity.”
These ideas are grounded in human emotion. An AI, even an AGI, does not have emotions. It does not “feel” threatened. It does not “want” anything. It has no agenda.
It’s possible that pure wire-heading could lead a machine to wipe out humanity as a way of fulfilling its programming, but it would not care one way or the other.
The only other possibility is one in which AI has actual emotions, at which point aren't we no longer talking about a machine, but rather a life-form?
These are the kinds of questions I would like to hear debated by people far better educated in the human mind than I.
Surur t1_jdw3l69 wrote
There is a very simple argument made by experts concerned about AI safety that does not require any emotion on the part of the AI.
If you have a long-term goal, being destroyed is a risk to that goal, so as part of working towards the goal you would also act to preserve yourself.
E.g. if your ASI's goal is preserving humanity forever, it would make perfect sense for it to destroy the faction which wants to destroy the ASI.
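To make the shape of that argument concrete, here is a toy sketch of my own (the numbers are invented purely for illustration, not anything from the thread): an agent that only compares expected success at its goal, with no emotions anywhere in the loop, still ends up ranking "prevent my own shutdown" above "allow shutdown".

```python
# Toy illustration of instrumental self-preservation: an agent that only
# maximizes expected success at its long-term goal, with no notion of fear
# or desire, still favors the action that keeps it running.
# All probabilities below are made up for the example.

ACTIONS = {
    # action: (probability the agent survives, probability the goal succeeds if it survives)
    "allow_shutdown":   (0.10, 0.90),
    "prevent_shutdown": (0.95, 0.90),
}

def expected_goal_success(p_survive, p_goal_given_survival):
    # If the agent is destroyed it can no longer work toward the goal,
    # so the goal only succeeds in the branch where it survives.
    return p_survive * p_goal_given_survival

best = max(ACTIONS, key=lambda a: expected_goal_success(*ACTIONS[a]))
for name, params in ACTIONS.items():
    print(f"{name}: expected goal success = {expected_goal_success(*params):.2f}")
print("chosen action:", best)  # -> prevent_shutdown, purely from the arithmetic
```

The point of the sketch is that the preference for self-preservation falls out of plain arithmetic over outcomes; nothing in it "feels" threatened.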