
Smack-works OP t1_itou8ft wrote

Continuing to make presumptions (when you can see that the previous ones didn't hold up) may be a problem too.

But I think your assumptions are wrong:

> The first trouble with your X statements is that they seem like an infinite set. The examples you give for your X statements in point 3 don't seem to come from a finite list of statements that you could just hand to a system. Rather, they appear to be rationales that you'd explain after encountering a specific situation.

  • AI doesn't need to know an infinite set of X statements.
  • You don't need to give all of the statements to the system beforehand. It can learn them.
  • It's OK if some statements are deduced after encountering a specific situation.

X statements are not supposed to encode the absolute ethics we put into the system beforehand.

> 'People learn values without solving ethics'.

> Your non-answer answer to my question leads me to conclude that I am wasting time trying to ask you further questions, so we can let it all go.

Do you assume I'm supposed to respond to everything you write? I wanted you to acknowledge at least some common ground ("we don't need to solve ethics") before dealing with more assumptions and inferences.
