Last_Jury5098 t1_jd7xsuu wrote

Cities are more efficient. AI will probably increase the efficiency of cities to a larger extent than it will increase the efficiency of rural life, so the drive for urbanization should persist, and grow if anything.

And there will always be people who prefer rural living. AI might result in more people turning away from cities, but I think the increase in efficiency will be attractive enough for many people to prefer the cities.

9

Last_Jury5098 t1_jd5qxyn wrote

Nice blog, thx for posting it!

"Do you need to have a sense of morality in order to see inequity, or would a purely rational AI also see it?"

The AI will see it. The question is whether it will see it as a problem.

A rational AI could see it as a problem, but this depends on the main goals the system tries to achieve.

For example, it could conclude that the world could reach a higher economic output if inequity were lower, or higher.

And then you get into the alignment problem. Maximizing economic output can't be the only objective; we have to make sure it won't kill us in the process, and so on.

And then you get into the situation where the AI is given a set of goals and a set of restrictions: a set of different parameters reflecting a wide range of issues that are important to humans, with the system being given the restriction not to cross those boundaries. What a rational AI will conclude about inequality, based on those goals and restrictions, is impossible to predict. The only way to find out is to run it and see what it tells us.
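To make that "goals plus restrictions" framing concrete, here is a minimal sketch, assuming the restrictions act as hard filters on top of a single objective. All names and numbers are invented for illustration:

```python
# Toy illustration of "goals + restrictions": maximize an objective
# while rejecting any option that crosses a hard boundary.
# Policies, fields, and the limit are all made up for the example.

candidate_policies = [
    {"name": "A", "economic_output": 10.0, "harm": 0.0},
    {"name": "B", "economic_output": 14.0, "harm": 0.3},
    {"name": "C", "economic_output": 18.0, "harm": 0.9},
]

HARM_LIMIT = 0.5  # a hard restriction the system may never cross

def best_policy(policies):
    # Keep only policies that respect every restriction...
    feasible = [p for p in policies if p["harm"] <= HARM_LIMIT]
    # ...then maximize the goal among what is left.
    return max(feasible, key=lambda p: p["economic_output"])

print(best_policy(candidate_policies)["name"])  # -> "B"
```

The point is that a restriction never gets traded off against the goal: an option that crosses the boundary is simply off the table, no matter how good it looks on the objective.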

A sense of morality could maybe be coded into the AI, as part of this set of restrictions. We can feed it human morals, but those morals in the end are arbitrary. And what the AI will do when one moral consideration conflicts with another is again difficult to predict.

This isn't really what we want from AI either, I think. We want it to come to the "right" conclusion by itself, without being led to the "right" conclusion artificially and arbitrarily.

In an ideal situation we want to feed it as few rules as possible, because every additional rule makes the system more complicated and unpredictable by creating tension between different rules and objectives. We then have to feed it a priority ordering, or create a system that allows it to determine priority, which in the end is arbitrary again.
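One common way to encode such a priority (purely an assumption for illustration, not anything from the blog) is a lexicographic ordering: a higher-priority rule always decides before a lower one gets a say. Which only relocates the arbitrariness into the ordering itself:

```python
# Toy sketch: rules as a priority-ordered tuple of scores. When two
# options conflict, the highest-priority rule that distinguishes them
# decides. The rules and numbers are invented for illustration.

def scores(option):
    # Higher is better for each rule; position in the tuple = priority.
    return (
        -option["harm"],     # rule 1: minimize harm (highest priority)
        option["fairness"],  # rule 2: prefer fairer outcomes
        option["output"],    # rule 3: prefer higher output (lowest priority)
    )

option_a = {"harm": 0, "fairness": 0.4, "output": 18}
option_b = {"harm": 0, "fairness": 0.7, "output": 12}

# Python tuple comparison is lexicographic, so the priority falls out directly.
best = max([option_a, option_b], key=scores)
print(best)  # option_b: equal on harm, so fairness decides despite lower output
```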

There is one hypothetical example I thought of that is very hard for AI to solve. It gets down to the core of the problem.

We have a self-driving car. The car recognizes that a crash is inevitable and it has two options: option one leads to severe harm for the single driver of the car, and option two leads to severe harm for two bystanders. How do we ever get AI to choose between those two options?
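To see why this can't be dodged: any decision procedure the car runs already encodes an answer. A minimal sketch, where the harm weights are deliberately arbitrary assumptions and not a recommendation:

```python
# Toy sketch: the car must return one option, so *some* weighting of
# driver harm vs. bystander harm has to be baked in. The weights here
# are arbitrary by construction -- that arbitrariness is the point.

options = {
    "swerve":   {"driver_harm": 1, "bystander_harm": 0},
    "straight": {"driver_harm": 0, "bystander_harm": 2},
}

DRIVER_WEIGHT = 1.0      # who chose these numbers, and on what grounds?
BYSTANDER_WEIGHT = 1.0

def total_cost(o):
    return DRIVER_WEIGHT * o["driver_harm"] + BYSTANDER_WEIGHT * o["bystander_harm"]

choice = min(options, key=lambda name: total_cost(options[name]))
print(choice)  # -> "swerve", purely a consequence of the weights above
```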

And those two options are what the alignment problem comes down to in the end. Even an AI that has nothing but the benefit of humanity as its goal will have to make choices between the interests of individual humans, or groups of humans.

This is an arbitrary choice for humans, but how can AI make such an arbitrary choice? The only way for AI to solve this by itself is by giving it certain goals, which brings me back to the start of this post.

14