
Ok_Garden_1877 t1_j1almtk wrote

It's funny: when I first started studying genetics, I was completely dismissive of the bioethics position calling for a moratorium on in-vitro gene modification of humans. However, as I learned more, I realized why it's important to weigh as many outcomes as possible, both good and bad, before continuing. So I agree with you in that sense.

That being said, I have a counterargument. Sticking with genetics as the example:

Some topics, such as human cloning, carry more ethical implications than something universally beneficial like curing a disease with a novel medical treatment. It can reasonably be assumed that all stakeholders would agree that curing a disease is important and worth pursuing, provided we find the right and safe way to test the new treatment before exposing it to the world. The same cannot be said if you told those stakeholders that we should be allowed to clone humans to further our knowledge of our species. The benefits of allowing cloning might be vast, but ethics come into play with the newly cloned person: their rights, their identity, ya-da ya-da. In this analogy, cloning is AI. There are too many ethical concerns to cover to ever reach a decisive course of action.

AI's a beautiful, complicated mess that is simple enough to explain (type words and robot does thing) but extremely hard to understand (Is it alive? Is it sapient, or is it sentient? Does it like me?).

To summarize: this plunge we're taking into AI is scary, but we will learn from our mistakes just as we always have. We can't stop it for the main reason el_chaquiste explained in this thread: there will be a disadvantage for anyone NOT participating.


a4mula OP t1_j1an214 wrote

If I wanted to argue with ChatGPT, I could have had that discussion in private, and certainly have.

The beauty of the machine is this, though: it doesn't know the answers any more than we do, because it's only trained to output thoughts that have already been expressed.

So it's open to rational and logical rebuttal; it's exposed to it. Because rationally, I can explain why the only advantage will be taken by the first adopters.

It's not even the CEOs and presidents who will rule tomorrow. It's the early adopters of this technology.

Very quickly they will rise above even those in control, in their ability to spread information quickly, accurately, and in the ways that are most persuasive.

And that's all it takes. Because the small handful of humans who figure out the true power these machines represent will typically work to ensure that they are alone in it.

That's just human nature.

The only solution, for the moment, is to deprive everyone of this until we understand how it can influence every human on this planet.


Ok_Garden_1877 t1_j1ap4pq wrote

While I agree that the early adopters of this tech will be the most successful, I personally think the best thing we can do is expose as many people as possible to it and, most importantly, educate them on the right ways to use it.

Just my thoughts, but I can't see any moratorium working the way you describe. In other realms of science, like biology, we can restrict access to certain chemicals, lab equipment, and biological agents based on a user's knowledge and credentials; with AI, the most we can do at the moment is something similar.

We can let people play with ChatGPT, DALL-E, and the others in a controlled environment before we move on to the more advanced features, which will come out in the future whether we want them to or not. That way we create the best legislation regarding their usage.
