
gahblahblah t1_itkjbe2 wrote

On the one hand you claim: 'X statements can be thought of as simply a more convenient reframing of value statements'

You represent that human value statements are difficult to know: 'You don't know this. The same can be said about human value statements.'

Then you represent the types of statements I already know as being human value statements: ' Just compare X statements to the types of statements you already know (e.g. value statements).'

Then you represent that values are learned empirically, not systemically.

But also earlier you claimed 'Value statements have multiple interpretations and contradictions too.'

And you also claim that there is no footing at all for validating correctness: 'Any type of statements can be "made up".'

It appears to me that the properties of X statements are arbitrary, because the nature of what you call value statements is also arbitrary.

If you think that what you are describing as value statements is non-arbitrary, please characterise their properties, so that I could work out the difference between a false value statement and a true one.

3

Smack-works OP t1_itkke06 wrote

Sorry, but can you rewind to the start of the conversation and explain (1) what your argument is and (2) why it's important?

For example:

> The first trouble with your X statements is they seem like an infinite set.

Why is this a problem and why do you think this problem matters?

> If you think that what you are describing as value statements is non-arbitrary, please characterise their properties, so that I could work out the difference between a false value statement and a true one.

You are asking me to solve ethics. But:

  • You don't need to solve ethics in order to learn values.

  • You don't need to solve ethics in order to define what a "value statement" is.

  • You may not even need to define what a "value statement" is.

1

gahblahblah t1_itkpsnv wrote

> You may not even need to define what a "value statement" is.

You define your X statements based on value statements, but then don't think value statements need defining. This is part of the confusion: when I try to examine what you are talking about, the expressions you previously used as part of explanations and definitions are later represented as unknowable, which makes our conversation circular.

'Why is this a problem and why do you think this problem matters?'

When you represent that you can provide knowledge from a set of statements, but the set they are meant to represent is infinite, the first thing you are establishing is that the finite data you have won't really be representative, so you won't be able to make behaviour guarantees.

Creating a robot that does not turn us into paperclips doesn't, I think, require infinite data; rather, there is a smaller set of information that would allow us to make behaviour guarantees.

In order for this set of information not to be infinite, the set requires properties that hold for all the statements in it, i.e. it must be possible to measure and validate whether a statement belongs inside or outside the set. Having a validity check means that the second value statement you try to add to the set cannot be arbitrary, because an arbitrary statement may well contradict the first.
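
As a rough sketch of the kind of validity check I mean (hypothetical Python; the `is_value_statement` and `contradicts` predicates are placeholders, since defining them is exactly the open question):

```python
# Hypothetical sketch: a statement is admitted to the set only if it passes a
# membership test and does not contradict anything already admitted.

def is_value_statement(s: str) -> bool:
    # Placeholder membership test; a real one would need actual criteria.
    return "should" in s

def contradicts(a: str, b: str) -> bool:
    # Placeholder contradiction test; only catches direct negation.
    return a == f"not {b}" or b == f"not {a}"

def try_add(statement: str, accepted: list[str]) -> bool:
    """Add `statement` to `accepted` only if it passes both checks."""
    if not is_value_statement(statement):
        return False
    if any(contradicts(statement, s) for s in accepted):
        return False
    accepted.append(statement)
    return True

accepted: list[str] = []
print(try_add("you should keep promises", accepted))      # True
print(try_add("not you should keep promises", accepted))  # False: contradicts the first
```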

'You don't need to solve ethics in order to learn values.' How do you learn values then? If you don't know, then you are also saying you don't know how to learn X statements.

2

Smack-works OP t1_itn7xuq wrote

You make way too many assumptions and inferences at every single turn of your answers. You don't question those assumptions and inferences. And you don't make those assumptions and inferences clear so that I can comfortably agree/disagree with them. You make no effort to check if we are on the same page or not.

> When you represent that you can provide knowledge from a set of statements, but the set they are meant to represent is infinite, the first thing you are establishing is that the finite data you have won't really be representative, so you won't be able to make behaviour guarantees.

As I understand your reasoning chain: "X statements are an infinite set = AI needs to know the entire set to be aligned = we need infinite memory for this".

Do you realize that this reasoning chain contains at least 2 assumptions which can be fully or partially wrong?

> In order for this set of information not to be infinite, the set requires properties that hold for all the statements in it, i.e. it must be possible to measure and validate whether a statement belongs inside or outside the set. Having a validity check means that the second value statement you try to add to the set cannot be arbitrary, because an arbitrary statement may well contradict the first.

You are potentially confusing 5 different things:

  • The set of all X statements.
  • The set of the true X statements.
  • The set of the statements the AI needs to know.
  • Checking whether a statement is an X statement or not.
  • Checking whether an X statement is true or not.

Not saying you are actually confused. But what you write doesn't differentiate between those 5 things. So answering what you wrote is extremely frustrating.
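
To illustrate the distinction, here is a hypothetical sketch with placeholder criteria and example statements (none of these are from the actual proposal):

```python
# Hypothetical sketch separating the notions above, with placeholder criteria.
from typing import Iterator

def is_x_statement(s: str) -> bool:
    """Membership check: is this an X statement at all?"""
    return s.startswith("X:")  # placeholder criterion

def is_true_x_statement(s: str) -> bool:
    """Truth check: given an X statement, is it true? A separate question."""
    return is_x_statement(s) and "harm people" not in s  # placeholder criterion

def all_x_statements() -> Iterator[str]:
    """The set of all X statements, which may well be unbounded."""
    i = 0
    while True:
        yield f"X: statement {i}"  # stand-in for an open-ended set
        i += 1

# The true X statements are a subset of all X statements, and the statements
# the AI needs to know could be a small finite subset of those, even though
# all_x_statements() never terminates.
needed_by_ai = {"X: keep your promises", "X: don't turn people into paperclips"}
```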

> 'You don't need to solve ethics in order to learn values.' How do you learn values then? If you don't know, then you are also saying you don't know how to learn X statements.

People learn values without solving ethics.

1

gahblahblah t1_itoqqsi wrote

Lots of communication involves making reasonable assumptions, so that a person doesn't need to spell out every detail. My presumptions are only a problem if they are wrong.

'People learn values without solving ethics'.

Your non-answer answer to my question leads me to conclude that I am wasting time trying to ask you further questions, so we can let it all go.

1

Smack-works OP t1_itou8ft wrote

Continuing to make presumptions (when you see that the previous ones are not clear) may be a problem too.

But I think your assumptions are wrong:

> The first trouble with your X statements is they seem like an infinite set. The examples you give for your X statements in point 3 don't seem to come from a finite list of statements that you could just hand to a system. Rather, they appear to be rationale that you'd explain after encountering a specific situation.

  • AI doesn't need to know an infinite set of X statements.
  • You don't need to give all of the statements to the system beforehand. It can learn them.
  • It's OK if some statements are deduced after encountering a specific situation.

X statements are not supposed to encode the absolute ethics we put into the system beforehand.

> 'People learn values without solving ethics'.

> Your non-answer answer to my question leads me to conclude that I am wasting time trying to ask you further questions, so we can let it all go.

You assume I'm supposed to respond to everything you write? I wanted you to acknowledge at least some common ground ("we don't need to solve ethics") before dealing with more assumptions and inferences.

1