Ezekiel_W t1_j6nwba9 wrote

The notion of containing AI is flawed. With advancements in hardware and improved AI performance, open-source versions will become widely available, rendering containment efforts ineffective. Moreover, moral and ethical considerations are fluid and constantly evolving: what was considered acceptable in another culture 1000 years ago may not align with current beliefs and values.

17

Silicon-Dreamer t1_j6otqv6 wrote

I would disagree. In his StrictlyVC interview, Sam Altman said:

> "One of the things we really believe is that the most responsible way to put this out in society is very gradually, and to get people, institutions, policymakers, get them familiar with it, thinking about the implications" ...

OpenAI has vast computing resources, as we know, so until algorithmic advances let open-source, lower-compute groups train (and run inference on) alternatives, their containment efforts accomplish Sam's goal very effectively -- making the release process more gradual for the sake of institutions and policymakers.

We all know how slowly government operates at times, especially in democracies that require consensus. It stands to reason, then, that if OpenAI's policy changed to releasing any new work ASAP, and if we assume there is ever anything harmful a new AI can do, government would not react before it had already had a long impact. I won't argue my political views in this post, but it's worth noting that the harm could be as benign as a few more spam emails... or as severe as the annihilation of the planet, and everything in between.

I really like this planet.

9

Ezekiel_W t1_j6phmzs wrote

These are good points. Technology has always been a double-edged sword; being wary is wise.

4