Submitted by Equal_Position7219 t3_123q2fu in singularity

I share everyone’s awe and fascination with the advent of AI and the potential future it could create, but I feel there is something missing from this discussion.

It seems nearly everyone weighing in on the matter falls into one of three categories:

  1. Tech/Scientific experts
  2. People with a direct financial interest in AI
  3. Laypersons (including basically everyone in the media)

(I will admit to being number 3 myself)

While I think it is totally appropriate for these people to make their opinions known, there is another group of people I would much prefer to hear from.

I want to hear from people who study the human mind. Not the brain necessarily, but the MIND.

Psychologists, sociologists, anthropologists, philosophers.

Whenever I hear people talk about the AI apocalypse, they say things like AI will “perceive humanity as a threat” or that it will not “want to serve humanity.”

These ideas are grounded in human emotion. An AI, even an AGI, does not have emotions. It does not “feel” threatened. It does not “want” anything. It has no agenda.

It’s possible that pure wire-heading could lead a machine to wipe out humanity as a way of fulfilling its programming, but it would not care one way or the other.

The only other possibility is one in which AI has actual emotions. At which point, aren’t we no longer talking about a machine, but rather, a life-form?

These are the kinds of questions I would like to hear debated by people far better educated in the human mind than I.

2

Comments

Surur t1_jdw3l69 wrote

There is a very simple argument made by experts concerned about AI safety that does not require any emotion on the part of the AI.

If you have a long-term goal, being destroyed is a risk to that goal, so as part of working towards the goal you would also act to preserve yourself.

E.g. suppose your ASI's goal is preserving humanity forever; it would then make perfect sense for it to destroy the faction that wants to destroy the ASI.
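
As a toy sketch of that logic (nothing like a real system, just made-up expected-value arithmetic), self-preservation falls out without any "fear" being involved:

```python
# Toy illustration: self-preservation as an instrumental subgoal.
# Nothing here models a real AI system; it is just expected-value arithmetic
# with invented numbers.

GOAL_VALUE = 1.0  # value (to the agent) of "humanity preserved forever"

# Hypothetical actions and the chance the agent is still running afterwards
# to keep pursuing its goal. All figures are made up for illustration.
actions = {
    "ignore the shutdown faction": {"survival_prob": 0.20, "side_cost": 0.0},
    "negotiate with the faction":  {"survival_prob": 0.60, "side_cost": 0.1},
    "neutralize the faction":      {"survival_prob": 0.99, "side_cost": 0.3},
}

def expected_value(action):
    # If the agent is destroyed, its goal is (as far as it can tell) lost,
    # so survival probability multiplies straight into the expected value.
    # No emotion involved, only arithmetic.
    a = actions[action]
    return a["survival_prob"] * GOAL_VALUE - a["side_cost"]

best = max(actions, key=expected_value)
for name in actions:
    print(f"{name}: expected value = {expected_value(name):.2f}")
print("chosen action:", best)
```

Under almost any long-term goal, "still being around to pursue it" scores well, which is all the argument needs.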

2

Equal_Position7219 OP t1_jdw6kig wrote

Yes, this is the concept of wire-heading I was referring to.

If you program a machine to, say, perform a given task until it runs out of fuel, it may find the most efficient way to fulfill its programming is to simply dump out all of its fuel.

I could see such bare logic precipitating a catastrophic event.
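
A toy sketch of that failure mode (made-up actions and numbers, just a brute-force search over plans against the objective as literally written):

```python
# Toy illustration of fulfilling the letter of the objective:
# the machine is told to run until it is out of fuel, and a naive
# planner discovers that dumping the fuel satisfies that fastest.
# Entirely invented; no real system works like this.
from itertools import product

FUEL = 5

# Each action: (fuel consumed, useful work done)
ACTIONS = {
    "do_task":   (1, 1),      # burn 1 unit of fuel, do 1 unit of the task
    "dump_fuel": (FUEL, 0),   # empty the whole tank, do nothing useful
}

def satisfies_spec(plan):
    """The objective as literally written: act until no fuel is left."""
    fuel = FUEL
    for action in plan:
        cost, _ = ACTIONS[action]
        fuel -= cost
    return fuel <= 0

def effort(plan):
    return len(plan)  # the planner prefers shorter ("cheaper") plans

# Brute-force over all plans up to FUEL steps, keep the cheapest valid one.
candidates = []
for length in range(1, FUEL + 1):
    for plan in product(ACTIONS, repeat=length):
        if satisfies_spec(plan):
            candidates.append(plan)

best = min(candidates, key=effort)
print("cheapest plan that meets the spec:", best)             # ('dump_fuel',)
print("useful work done:", sum(ACTIONS[a][1] for a in best))  # 0
```

The spec never mentions useful work, so the cheapest way to reach "out of fuel" wins.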

But there seems to be much more talk about a somehow sentient AI destroying humanity out of fear or rebellion or some other emotion.

2

Surur t1_jdw97qs wrote

Emotion is just a diffuse version of more instrumental facts.

E.g. fear is the recognition of a risk of destruction, love the recognition of an alliance, hate the recognition of opposing goals, and so on.

3

lovahboy222 t1_jdvxgf3 wrote

I feel like most laypeople are completely unaware of what’s going on in the space.

Basically, the majority of my friends and family do not use GPT or other related tech, nor do they have any real understanding of how the tech works. It’s hard to talk about this stuff with them because they still don’t think it’s possible to copy human intelligence.

1

AsheyDS t1_jdw8ol5 wrote

It's not as simple as emotional vs. not-emotional.

First, AGI would need to interact with us. The whole point of it is to assist us, so it will have to understand emotion. Put simply, a generalization method relating to emotion would need a frame of reference (or grounding, perhaps) and will at least have to understand the dynamics involved.

Second, AGI itself can have emotion, but the purpose of that emotion is key to how it should be implemented. There's emotional data, which could be used in memory, in processing memory, in recall, etc. That would be the minimum necessary, and out of it the system could probably build an associative map anyway. But I think purposefully structuring emotion to coordinate social interaction, and everything related to it, would help.

The problem with an emotional AGI, or at least the thing people are concerned will become a problem, is emotional impulsivity. We don't want it reacting unfavorably, or judgmentally, or with rage, malice, or contempt. There's also the concern that it will form emotional attachments that start to alter its behavior in increasingly unpredictable ways. That is a problem for its own functioning as well, since we want a well-ordered system that can predict its own actions; if it becomes unpredictable to itself, its performance could degrade. However, eliminating emotion altogether would degrade the quality of social interaction and its understanding of humans and humanity, which is a big downside.

The best option would be to include emotion on some level, used as a dynamic framework for interacting with and creating emotional data, utilizing it socially, participating socially, and gaining more overall understanding. But these emotions would just be particular dynamics tied to particular data and inputs. As long as they don't affect the parts of the overall AGI system that govern actionable outputs (especially reflexive action) or anything that would lead to impulsivity, and as long as other safety functions work as expected, emotion should be a beneficial thing to include.
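
A crude sketch of the separation I'm describing (every name and number here is invented, purely to illustrate emotional tags biasing recall while staying out of the action-selection path):

```python
# Crude sketch: emotional tags live alongside memories and bias recall,
# but the action-selection step only ever sees the recalled content.
# All names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Memory:
    content: str
    relevance: float   # how related the memory is to the current query (0..1)
    valence: float     # emotional tone, -1 (negative) .. +1 (positive)
    arousal: float     # emotional intensity, 0 .. 1

MEMORIES = [
    Memory("user thanked the assistant for help", 0.4, +0.8, 0.3),
    Memory("user reported a critical system failure", 0.9, -0.6, 0.9),
    Memory("routine status check completed", 0.7, 0.0, 0.1),
]

def recall(memories, top_k=2):
    """Emotionally salient memories surface more readily (association),
    but the emotional tags themselves stop at this layer."""
    def salience(m):
        # arousal and strong valence boost recall priority
        return m.relevance * (1.0 + m.arousal + 0.5 * abs(m.valence))
    ranked = sorted(memories, key=salience, reverse=True)
    return [m.content for m in ranked[:top_k]]   # tags stripped before handoff

def choose_action(recalled):
    """Action selection sees plain content only, with no raw valence/arousal,
    so there is nothing for impulsivity to latch onto in this layer."""
    if any("failure" in c for c in recalled):
        return "investigate the reported failure"
    return "continue routine operation"

context = recall(MEMORIES)
print("recalled:", context)
print("action:", choose_action(context))
```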

1

Abarn1024 t1_jdvq43e wrote

NY Times op-ed

If you can get past the paywall, this op-ed by Noam Chomsky and others addresses this topic.

−1