
BillyT666 t1_j4unpdz wrote

I don't think a strong AI is inherently good or bad, and we'd have to define these terms in order to make a judgement there. It's the definitions that I see as the problem: a computer will not 'understand' words the way we do. Based on your last comment, you would have to define 'life', 'sentient life', 'healthy', and 'kindness' (and I'm excluding operators here). Take sentient life, for example. Once you have defined what life is, you need to define a threshold between life and sentient life. If that threshold is set too low, we would be unable to even move because of the implications for all the other lifeforms now defined as sentient. If it is set too high, then some of us, or maybe all of us, fall out of the equation, and decisions made to further the well-being of the sentient lifeforms might wipe out the rest of us.
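To make the threshold problem concrete, here's a minimal sketch. Everything in it is an assumption for illustration: the species names, the 'sentience scores', and the idea that sentience could be boiled down to a single number at all.

```python
# Minimal sketch of the threshold problem. The names and scores below are
# invented purely for illustration, not a real measure of sentience.

LIFEFORMS = {
    "human": 0.95,
    "dog": 0.70,
    "insect": 0.20,
    "grass": 0.05,
    "human_in_coma": 0.40,  # hypothetical edge case
}

def protected(threshold):
    """Everything at or above the threshold counts as 'sentient life'."""
    return {name for name, score in LIFEFORMS.items() if score >= threshold}

# Threshold too low: almost everything is protected, so almost any action
# (walking on grass, driving, farming) harms a 'sentient' lifeform.
print(protected(0.1))   # {'human', 'dog', 'insect', 'human_in_coma'}

# Threshold too high: some humans fall out of the protected set, and the
# system may trade them off against the remaining 'sentient' lifeforms.
print(protected(0.8))   # {'human'}
```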

Each of the terms and goals you name has a large number of facets. You navigate them by using an underlying understanding of what you define as 'good'. You would need to spell out the effects of 'good' on all those facets in order to convey to a system what you want it to do. Only after you have done that will we find out whether your understanding of 'good' is actually 'good' for you and for the rest of us.
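A rough sketch of what 'spelling it out' ends up looking like: some explicit list of facets with hand-picked weights. The facets, weights, and numbers here are assumptions for illustration only; choosing them *is* the act of defining 'good', and every trade-off the system makes later is baked into them.

```python
# Hypothetical facets of 'good' and hand-picked weights (assumptions, not a
# real specification). Whoever sets these numbers has already decided what
# 'good' means for everyone else.

FACETS = ["health", "freedom", "biodiversity", "economic_wellbeing"]
WEIGHTS = {"health": 0.4, "freedom": 0.3, "biodiversity": 0.2, "economic_wellbeing": 0.1}

def goodness(effects):
    """effects: facet -> estimated impact in [-1, 1]."""
    return sum(WEIGHTS[f] * effects.get(f, 0.0) for f in FACETS)

# Two hypothetical actions, scored under this particular notion of 'good':
ban_all_traffic = {"health": 0.6, "freedom": -0.8, "biodiversity": 0.4, "economic_wellbeing": -0.5}
do_nothing      = {"health": -0.1, "freedom": 0.0, "biodiversity": -0.1, "economic_wellbeing": 0.0}

print(goodness(ban_all_traffic))  # ~0.03: 'good' under these weights
print(goodness(do_nothing))       # ~-0.06: 'bad' under these weights
```

Shift the weights slightly and the ranking of the two actions flips, which is the point: the system only ever optimizes someone's encoded version of 'good'.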

As another commenter pointed out, you would have to make the AI care about your rules, too.

On a side note: Guardrails will only work if the strong AI works at a speed that allows us to react.


BillyT666 t1_j4q5221 wrote

That reaction suggests to me that you did not engage with the issues that arise when the commandments contradict each other. The mindset when designing systems like this should always revolve around the question 'what can go wrong?' instead of the rather idealistic approach you used. In German we have a proverb that says something along the lines of: well-intentioned is often the opposite of well done.

Keep it up; the topic you're working on is pretty interesting, and I think it will become important before we know it.


BillyT666 t1_j4p4l7w wrote

How are these guidelines weighted? By that I mean: which one prevails if two of them conflict?
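One common answer is a strict priority ordering, Asimov-style, where a higher-ranked guideline simply overrides a lower one. A minimal sketch of that idea, with made-up rule names standing in for whatever guidelines you have in mind:

```python
# Sketch of strict-priority conflict resolution. The rule names are
# hypothetical placeholders, not the guidelines actually being proposed.

RULES_BY_PRIORITY = [
    "preserve_sentient_life",   # highest priority
    "preserve_health",
    "act_with_kindness",        # lowest priority
]

def resolve(violations):
    """Allow an action only if it violates no rule; otherwise reject it
    because of the highest-ranked rule it violates."""
    for rule in RULES_BY_PRIORITY:
        if rule in violations:
            return f"rejected: conflicts with '{rule}'"
    return "allowed"

print(resolve({"preserve_health", "act_with_kindness"}))  # rejected: conflicts with 'preserve_health'
print(resolve(set()))                                     # allowed
```

Note that the ordering itself, not the AI, decides which guideline survives a conflict, which just pushes the hard question back onto whoever writes the list.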

Also: do you really want to live in a world in which every lifeform is viewed as equal? The AI might stop people from moving at all because of the grass or the insects they would destroy. What about bacteria?
