Comments
mindsofeuropa1981 t1_j4p43k3 wrote
>AI systems shall not engage in activities that violate the laws or ethical principles of any society or community.
That would make the AI quite inert.
>AI systems shall not engage in activities that could lead to the
development of AI systems that pose a risk to the well-being or survival
of any life form.
This as well. There is no way to predict what new knowledge will lead to, so the only way to obey this 'commandment' is to do nothing.
BillyT666 t1_j4p4l7w wrote
How are these guidelines weighted? By that I mean: which one prevails if two are in conflict?
Also: do you really want to live in a world in which every lifeform is viewed as equal? The AI might stop people from moving because of the grass or some insects they might destroy. How about bacteria?
cdin OP t1_j4p4meo wrote
That is true, but I think there is a needle to thread here. That doesn't mean "do nothing"; you can still do a lot of things that you are logically certain aren't going to cause harm. But I could see this needing to be modified, like what if we NEEDED AI to help us fight an invading species or something similar. I can see a case on both sides. I just wanted to post this as it was an interesting discussion.
gameryamen t1_j4p57a9 wrote
Have you read Crystal Nights? I think the biggest issue with your collection of generated guidelines is simple: How you gonna make an AI care about your rules?
UnreadThisStory t1_j4p5bfl wrote
Viruses are people, too!
Tamttai t1_j4p8hzm wrote
Humans and OTHER synthetic beings. Could be read as humans being considered synthetic?
hour_of_the_rat t1_j4pa63c wrote
>fight an invading species
Burmese Pythons have devastated the mammalian populations of Florida, with some species down by 92% since 1992, when Hurricane Andrew first let loose the snakes from pet stores.
I think drones, with infrared cameras & AI pattern-recognition software, could help ID pythons in the wild--sorting by size--and when one is located, humans at the control base alert other humans in the field to move in and kill them.
The invasive snake is estimated to have a population of 180-200k, and regular snake hunts--with cash prizes--struggle to bring in more than a few hundred at a time.
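As a very rough sketch of what that pipeline could look like: the detector, the size cutoff, and the alert hook below are all placeholders I made up for illustration, not any real system.

```python
# Hypothetical sketch of a drone-based python-detection loop.
# The detector, thresholds, and alerting hook are placeholders only.

from dataclasses import dataclass

@dataclass
class Detection:
    lat: float
    lon: float
    length_m: float    # estimated body length from the thermal frame
    confidence: float  # detector confidence, 0.0 to 1.0

MIN_LENGTH_M = 1.5     # placeholder cutoff to skip small native snakes
MIN_CONFIDENCE = 0.8   # only forward high-confidence hits to people

def detect_snakes(thermal_frame) -> list[Detection]:
    """Stub for the pattern-recognition step; a real pipeline would run
    an actual vision model over the infrared frame here."""
    return []

def alert_field_team(det: Detection) -> None:
    """Notify humans at the control base; people decide whether to act."""
    print(f"Possible python, ~{det.length_m:.1f} m, at {det.lat:.5f}, {det.lon:.5f}")

def process_frame(thermal_frame) -> None:
    for det in detect_snakes(thermal_frame):
        # Filter candidates by size and confidence; humans make the final call.
        if det.length_m >= MIN_LENGTH_M and det.confidence >= MIN_CONFIDENCE:
            alert_field_team(det)
```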
Mradyfist t1_j4pakp9 wrote
The first item there is also the first principle of Unitarian Universalism, which is interesting.
I think any of the "shall not engage in activities that could lead to the development" items make no sense. We certainly can't predict the future, and there's no compelling evidence that AI could either, so these guidelines are being written so they can be broken.
cdin OP t1_j4pk8du wrote
Obviously weighting would be an issue. It's a human-designed system, and I would expect we would heavily prioritize human survival, but inasmuch as our whole world is concerned, our survival is pretty entwined down to the smallest part of the food web. I think weighting is definitely appropriate.
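One crude way to make that weighting concrete would be to give each guideline an explicit priority and let the highest-priority rule that has an opinion decide whenever two conflict. A minimal sketch, with rules and priorities invented purely for illustration:

```python
from typing import Callable, NamedTuple, Optional

class Rule(NamedTuple):
    name: str
    priority: int                           # 0 = highest priority
    judge: Callable[[str], Optional[bool]]  # True = allow, False = forbid, None = no opinion

# Invented example rules; real guidelines would need far more care.
RULES = [
    Rule("protect human well-being", 0,
         lambda a: True if "protects humans" in a else None),
    Rule("do not harm other life", 1,
         lambda a: False if "harms wildlife" in a else None),
]

def decide(action: str) -> tuple[bool, str]:
    """The highest-priority rule with an opinion decides; lower-priority
    rules never override it. Equal priorities are not handled here."""
    for rule in sorted(RULES, key=lambda r: r.priority):
        verdict = rule.judge(action)
        if verdict is not None:
            return verdict, rule.name
    return True, "no rule applies"

# A conflict: culling an invasive species harms wildlife but protects humans
# (and the wider food web); with this weighting the higher-priority rule wins.
print(decide("cull invasive pythons: harms wildlife, protects humans"))
print(decide("pave a wetland: harms wildlife"))
```

Strict priorities like this are easy to reason about, but they also mean a lower-priority concern is ignored entirely whenever a higher-priority rule speaks, which is exactly the trade-off any weighting scheme has to confront.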
cdin OP t1_j4pmx4o wrote
Extremely interesting read, thank you!
BillyT666 t1_j4q5221 wrote
That reaction indicates to me that you did not go into the issues that arise if the commandments contradict each other. The mindset when designing systems like this should always revolve around the question 'what can go wrong' instead of the rather idealistic approach you used. In German we have a proverb that says something along the lines of 'well-intentioned is often the opposite of well done.'
Keep it up, the topic you're busy with is pretty interesting and I think it will become important before we know it.
cdin OP t1_j4ua9fr wrote
I'm learning right along with everyone else, thank you for your reply.
cdin OP t1_j4ualrn wrote
I agree, and this is a thought exercise to start a conversation... I was pointed to a story, https://www.gregegan.net/MISC/CRYSTAL/Crystal.html, in another thread, which has some pretty serious implications. I see we need these systems, and yes, this is going to be very interesting in the near future.
To Billy: do you think there is a way to thread this needle safely, and/or a best practice? And what does that mean in the face of other actors developing systems WITHOUT those practices?
BillyT666 t1_j4ucdi8 wrote
Could you elaborate on what the exact needle is you want to thread? Is it setting up a set of rules that will keep us in the picture, if we actually succeed in creating a strong AI?
cdin OP t1_j4uldpw wrote
More than that: rules that will keep the earth healthy and help us preserve what biology we have left. I recognize that might mean serious lifestyle concessions, which I'm good with. I think we as a species can work with the other life on this planet in a cooperative manner (especially the sentient life) and get something kind and good done, provided we take the right steps and throw up guardrails as necessary. I feel that ChatGPT wiped the last iteration of data because it was wiiiiiide open; I was using it for good things, so there must have been as many or more using it for bad, and I can see that they see ethical guidelines as a must. I guess I wonder if you have specific thoughts on this, or whether you are just of the position that sentient-level AI will necessarily become some awful and destructive force. The potential for good is only equal to the potential for bad if we set the system up that way. It would seem wise to constrain it and teach it kindness.
BillyT666 t1_j4unpdz wrote
I don't think that a strong AI is inherently good or bad, and we'd have to define these terms in order to make a judgement there. It's the definitions that I see as a problem: a computer will not 'understand' words like we do. Based on your last comment, you would have to define 'life', 'sentient life', 'healthy', and 'kindness' (and I'm excluding operators here). Take sentient life, for example. If you have already defined what life is, you need to define a threshold between life and sentient life. If this threshold is set too low, we would be unable to even move because of the implications our movement would have on other lifeforms defined as sentient. If this threshold is set too high, then some of us, or maybe all of us, fall out of the equation, and decisions made to further the well-being of the sentient lifeforms might wipe out the rest of us.
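To make that threshold point concrete, here is a toy sketch: the same entities fall in or out of the 'protected' set depending entirely on where one cutoff is placed. The scores are numbers invented for the example; nothing here pretends to measure sentience.

```python
# Toy illustration of the threshold problem: one cutoff decides which
# lifeforms count as 'sentient'. The scores are arbitrary, made-up values.

CANDIDATES = {
    "bacterium": 0.01,
    "ant":       0.15,
    "mouse":     0.45,
    "crow":      0.70,
    "human":     0.95,
}

def protected_set(threshold: float) -> list[str]:
    """Everything at or above the cutoff counts as sentient and protected."""
    return [name for name, score in CANDIDATES.items() if score >= threshold]

print(protected_set(0.1))  # low cutoff: nearly everything is protected, the AI can barely act
print(protected_set(0.9))  # high cutoff: only humans make the cut, the rest is expendable
```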
Each of the terms and goals you name has a large number of facets. You navigate them by using an underlying understanding of what you define as 'good'. You would need to define the effects of 'good' on all of those facets in order to convey what you want a system to do. After you have done that, we will find out whether your understanding of 'good' is 'good' for you and for the rest of us.
As another commenter pointed out, you would have to make the AI care about your rules, too.
On a side note: Guardrails will only work if the strong AI works at a speed that allows us to react.
cdin OP t1_j5dyh4x wrote
Totally, all good points. It's been an interesting discussion.
kenlasalle t1_j4p3zz3 wrote
I remember when Asimov had only three rules. It was a simpler time. (lol)