cdin

cdin OP t1_j4uldpw wrote

more than that - that will keep the earth healthy, and help us preserve what biology we have left. i recognize that might mean serious lifestyle concessions, which i'm good with. i think we as a species can work with the other life on this planet in a cooperative manner (especially the sentient life) and get something kind and good done, provided we take the right steps and throw up guardrails as necessary -- i feel that chatGPT wiped the last iteration of data because it was wiiiiiide open - i was using it for good things, so there must have been as many or more using it for bad. i can see why they consider ethical guidelines a must. - guess i wonder if you have specific thoughts on this, or are just of the position that sentient-level AI will necessarily become some awful and destructive force. the potential for good is only equal to the potential for bad if we set the system up that way. it would seem wise to constrain it and teach it kindness.

1

cdin OP t1_j4ualrn wrote

i agree, and this is a thought exercise to start a conversation... i was pointed to a story https://www.gregegan.net/MISC/CRYSTAL/Crystal.html in another thread which has some pretty serious implications. i see why we need these systems, and yes, this is going to be very interesting in the near future.

--- to billy -- do you think there is a way to thread this needle safely, and/or a best practice -- and what does that mean in the face of other actors developing systems withOUT those practices?

1

cdin OP t1_j4pk8du wrote

obviously weighting would be an issue; it's a human-designed system, and i would expect we would heavily prioritize human survival. but insofar as our whole world is concerned, our survival is pretty entwined with everything else, down to the smallest part of the food web. i think weighting definitely is appropriate.

1

cdin OP t1_j4p4meo wrote

that is true, but i think there is a needle to thread here. that doesn't mean "do nothing" -- you can still do a lot of things that you are logically certain aren't going to cause harm. but i could see this needing to be modified - like what if we NEEDED AI to help us fight an invasive species or something similar? i can see a case on both sides. i just wanted to post this as it was an interesting discussion.

2