Submitted by Equal_Position7219 t3_123q2fu in singularity
Surur t1_jdw3l69 wrote
There is a very simply argument made by experts concerned about AI safety that does not require any emotion on the part of the AI.
If you have a long term goal, being destroyed presents a risk to your goal, and as part of working towards your goal you would also act to preserve yourself.
E.g. suppose your ASI's goal is preserving humanity forever, it would make perfect sense to destroy the faction which wants to destroy the ASI.
Equal_Position7219 OP t1_jdw6kig wrote
Yes, this is the concept of wire-heading I was referring to.
If you program a machine to, say, perform a given task until it runs out of fuel, it may find the most efficient way to fulfill its programming is to simply dump out all of its fuel.
I could see such bare logic precipitating a catastrophic event.
But there seems to be much more talk about a somehow sentient AI destroying humanity out of fear or rebellion or some other emotion.
Surur t1_jdw97qs wrote
Emotion is just a diffuse version of more instrumental facts.
e.g. fear is risk of destruction, love is recognition of alliance, hate of opposing goals etc.
Viewing a single comment thread. View all comments