
mindsofeuropa1981 t1_j4p43k3 wrote

>AI systems shall not engage in activities that violate the laws or ethical principles of any society or community.

That would make the AI quite inert.

>AI systems shall not engage in activities that could lead to the
development of AI systems that pose a risk to the well-being or survival
of any life form.

This as well. There is no way to predict what new knowledge will lead to, so the only way to obey this 'commandment' is to do nothing.

3

cdin OP t1_j4p4meo wrote

That is true, but I think there is a needle to thread here. It doesn't mean "do nothing": you can still do a lot of things that you are logically certain aren't going to cause harm. But I could see this needing to be modified. What if we NEEDED AI to help us fight an invading species, or something similar? I can see a case on both sides; I just wanted to post this because it was an interesting discussion.

2

hour_of_the_rat t1_j4pa63c wrote

>fight an invading species

Burmese Pythons have devastated the mammalian populations of Florida, with some species down by 92% since 1992, when Hurricane Andrew first let loose the snakes from pet stores.

I think drones with infrared cameras and AI pattern-recognition software could help ID pythons in the wild, sorting detections by size. When one is located, humans at the control base alert other humans in the field to move in and kill it.
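The triage step described above could be sketched roughly like this. Everything here is illustrative: the detection fields, the length estimate from a bounding box, and the thresholds are all assumptions, not a real system.

```python
# Hypothetical sketch: given detections from an IR-camera + vision
# model on a drone, keep only python-sized, high-confidence candidates
# and emit an alert for field teams. All names and thresholds are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    lat: float
    lon: float
    est_length_m: float   # rough body length inferred from bounding box
    confidence: float     # model confidence, 0..1

MIN_LENGTH_M = 1.5        # assumed cutoff to exclude smaller native snakes
MIN_CONFIDENCE = 0.8      # assumed cutoff to limit false alarms

def python_candidates(detections):
    """Filter drone detections down to likely adult pythons."""
    return [d for d in detections
            if d.est_length_m >= MIN_LENGTH_M
            and d.confidence >= MIN_CONFIDENCE]

def alert_message(d):
    """Format a human-readable alert for the field team."""
    return f"Possible python ({d.est_length_m:.1f} m) at {d.lat:.5f}, {d.lon:.5f}"

# Example frame of detections from one drone pass (made-up coordinates)
detections = [
    Detection(25.68421, -80.46732, 2.3, 0.91),  # plausible python
    Detection(25.68455, -80.46701, 0.4, 0.95),  # too small, likely native
    Detection(25.68390, -80.46810, 1.8, 0.55),  # low confidence, skipped
]

for d in python_candidates(detections):
    print(alert_message(d))
```

The point of the size filter is the "sorting by size" idea from the comment: adult Burmese pythons are far larger than most native Florida snakes, so a length cutoff is a cheap way to reduce false alerts before a human ever looks at the frame.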

The invasive snake's population is estimated at 180,000 to 200,000, and regular snake hunts, even with cash prizes, struggle to bring in more than a few hundred at a time.

1