Ok_Garden_1877

Ok_Garden_1877 t1_jad24xj wrote

> and the union leader answers, "No, I'm wondering with what money they are going to buy your cars."

Totally agree with this point. Everyone keeps screaming "Dey tooker jerbs!" but the market simply won't allow the sudden big bang everyone's expecting.

Do I believe many jobs today won't exist in a few decades? Absolutely.

But Rome wasn't built in a day. Nor was it destroyed in a day...

4

Ok_Garden_1877 t1_j1ap4pq wrote

While I agree that the early adopters of this tech will be the most successful, I personally think the best thing we can do is expose as many people as possible to it and, most importantly, educate them on the right ways to use it.

Just my thoughts, but I can't see any moratorium working the way you describe. In other realms of science like biology, we can restrict access to certain chemicals, lab equipment, and biological agents based on a user's knowledge and credentials; with AI, controlled exposure is about the most we can do right now.

We can let people play with ChatGPT, DALL-E, and the others in a controlled environment before we move on to the more advanced features that will come out in the future, whether we want them to or not. That way we create the best legislation regarding its usage.

1

Ok_Garden_1877 t1_j1almtk wrote

It's funny: when I first started studying genetics, I was completely dismissive of the bioethics view on putting a moratorium on in-vitro gene modification of humans. However, as I learned more, I realized why it's important to weigh as many possible outcomes as we can, both good and bad, before continuing. So I agree with you in that sense.

That being said, I have a counterargument. Sticking with genetics as the example:

Some topics, such as human cloning, have more ethical implications than something universally beneficial like curing a disease with a novel medical treatment. It can safely be assumed that all stakeholders would agree that curing a disease is important and should be pursued, finding the right and safe way to test the new treatment before releasing it to the world. The same cannot be said if you told those stakeholders we should be allowed to clone humans to further our knowledge of our species. The benefits of allowing cloning might be vast, but ethics come into play with the newly cloned person: their rights, their identity, ya-da ya-da.

In this analogy, cloning is AI. There are too many ethical concerns to cover to ever reach a decisive course of action.

AI's a beautiful, complicated mess that is simple enough to explain (type words and robot does thing), but extremely hard to understand (Is it alive? Is it sapient or is it sentient? Does it like me?).

To summarize: This plunge we're doing into AI is scary, but we will learn from our mistakes just like we always have. We can't stop it for the main reasons el_chaquiste explained in this thread; there will be a disadvantage to anyone NOT participating.

1

Ok_Garden_1877 t1_j1ag7gp wrote

That's hilarious. I thought it sounded a bit like ChatGPT. It's one of the human things that that specific AI seems to be lacking: the natural disorganization of thought. When we talk as humans, sometimes we get excited and jump to a new thought without finishing the first. At least I do, but I have ADHD, so maybe that's a bad example. Either way, ChatGPT so far seems to break its paragraphs down into organized little blocks. It writes as though everything it says is a rehearsed speech.

Am I alone in this thought?

2