Submitted by a4mula t3_zsu3af in singularity

You might ask why I'd choose this particular sub to host my coming plea. It's because I've found this sub to contain a large percentage of users who share my personal beliefs.

Beliefs in honesty, fairness, minimizing bias, logic, and rationality: principles that define us as people willing to consider things in ways consistent with those beliefs.

And you're stakeholders. You're users of technology. Typically, this sub has a better understanding of the conversation I'm presenting.

So I do present it to you, my fellow considerers. To do with as you see fit. Accept, Reject, Share, Promote, Encourage, Discourage. It's up to each reader to decide for themselves.

It's a long read, and for that I apologize. But I do promise that it's a considered one, and one that I personally believe needs to be considered by us all.

Technology is an exciting and disruptive force that has the potential to transform society in many positive ways. However, it's important to be aware of the potential risks and unintended consequences of these technologies, and to ensure that all stakeholders have a say in how they are developed and used. This post calls for a moratorium on the training of large data sets, in order to give humanity time to consider the direction we want to go as a species and to ensure that we are making informed decisions about these technologies.

As a society, we are moving at an incredible pace when it comes to the development and deployment of machine learning technologies. These technologies have the potential to shape the way we think and behave in ways that are completely unpredictable, and it's important that we take the time to consider the potential risks and unintended consequences of these technologies.

We don't fully understand the potential consequences of the machines we are building today, and it's important to be aware of this as we develop and deploy these technologies. Technology also has the potential to influence the thoughts and behaviors of users, and it's important to consider the potential risks and unintended consequences of this influence.

In light of these concerns, we propose a moratorium on the training of large data sets. This would give us time to have open and honest discussions about the potential risks and benefits of machine learning and data sets, and to ensure that all stakeholders have a say in how these technologies are developed and used.

Technology is an exciting and transformative force, and it has the potential to shape the future of humanity in many positive ways. However, it's important to be aware of the potential risks and unintended consequences of these technologies, and to ensure that all stakeholders have a say in how they are developed and used.

A moratorium on the training of large data sets would give us time to consider the direction we want to go as a species and to ensure that we are making informed decisions about these technologies. We call on all stakeholders - technology firms, governments, academics, and users - to support this moratorium and to work together to ensure that we are making the best choices for the future of humanity.

0

Comments


el_chaquiste t1_j19yo5b wrote

The problem with this proposal is that whoever doesn't follow the moratorium will soon have a decisive competitive advantage in several scenarios, not only in business.

Companies can agree to halt research in a country, but competing nations have no reason to cooperate. And just one party breaking the deal puts the rest at a disadvantage, making them prone to break the deal too.

Legislation has been effective at stopping bio-sciences and overly reckless genetic modifications, due to ethical concerns with human experimentation.

But this is no immediate hazard for anyone, except some people's jobs, and it will be a tough sell for countries not within the Western sphere of influence.

20

a4mula OP t1_j19zuwp wrote

Thank you for the consideration. I think it's very reasonable to assume that there would be those that would attempt to circumvent an agreement made at even the highest levels. But the technologies that offer the greatest impact are those that require large footprints of computation and storage. If we agreed as a species that this was the direction best to go, a system could be developed to ensure that any non-compliance would be evident.

This has to be above the level of any government. More than the UN. It has to be a hand reached out to every single human on this planet, with the understanding that what affects one, affects all in this regard.

I don't propose how that's accomplished. I'm just a rando redditor. But this idea, it needs to be discussed.

If it's a valid idea, it will spread. If it's just my own personal concerns going too far; it'll die with little notoriety and not cause any problems.

And that's my only goal.

I would however strongly disagree that it's not an immediate hazard. ChatGPT is a very powerful tool. Very powerful, in ways most have not considered. The power to expand a user's thoughts and flesh out even the most confused of ideas. After all, it wrote the 2nd half of my Plea.

0

AdditionalPizza t1_j19zzlu wrote

I think you might have the wrong sub. A lot of people here want tech/AI to advance as quickly as possible and are quite optimistic about it. There's some people here that fear humanity is doomed, but the majority probably see the singularity and AGI as a sort of salvation to end current suffering.

9

a4mula OP t1_j1a0opb wrote

So do I, and I am optimistic. Read my history here. I've been on board for years.

I'm beyond excited. I've been hooked into ChatGPT for two weeks now. Hundreds of hours with it.

I'm an ardent supporter of advancing technology.

But I also see risks with this technology that aren't being considered by many. Certainly not discussed or conversed about.

It's the way these machines influence us. Can you deny the power technology has shown in shaping ideas and beliefs? To the point of propaganda and marketing. We should all be able to agree that's our reality today.

Those are systems that we actively try to resist as users. We block them, we ignore them. Yet they're still effective. It's why they're worth so much.

These machines? We don't reject them. We welcome them with open arms and engage with them in ways more intimate than with any human you'll ever meet.

Because it understands us in ways no human ever can.

And that's a powerful tool for rapid change in thoughts and behaviors.

Not always in positive ways.

We need time to consider these issues.

2

AdditionalPizza t1_j1a2i1v wrote

No, I hear you. I'm just saying I think this sub in general predicts optimism for salvation over pessimism and doom.

When AI attains a certain level of ability and intelligence, I think it's a wise concern. I mean, well before AI has any possibility of causing havoc. It's just probably not feasible, because it's essentially an arms race and no corporation or government will slow progress willingly.

1

Cryptizard t1_j1a3gmq wrote

We “as a species” can’t even agree that things like human rights are a good idea. We can’t even stop killing each other for petty reasons. We can wait a thousand years and there will never be a consensus about something as complicated as AI.

Folks that are optimistic about AI hope it will actually be morally better than we are. We need AI to save us from ourselves.

6

a4mula OP t1_j1a3ibw wrote

I understand. The US just imposed sanctions on China that could potentially have major geoeconomic impact. I'm not ignoring the mountain this idea represents.

But if we're going to have a say, as users, in making that climb, it starts now, and we're out of time.

Because even today, right now, with nothing more than ChatGPT, a weaponized form of viral thought control is available to anyone that chooses to use it, any way they see fit.

And while I'm encouraging fair thought, rationality, and open discussion, not all will.

Some will use these tools to persuade populations of users towards their own interests.

And I'd rather be climbing that mountain now than down the road when the only proper tools are the ones at the front of the line.

1

keefemotif t1_j1a42l0 wrote

I applaud your altruistic efforts, but encourage you to focus them on more realistic goals. In the US, climate change as an existential threat is still debated. There's no way legislation on this would pass anytime soon. Private companies will continue to do it and there's nothing you can do about it.

So, how would you suggest approaching the problem, knowing you can't slow down training on large datasets? I personally think we're going to hit a plateau in performance and end up with another new, powerful tool. I think having a good generative net like this is helpful for building an AI, but far from sufficient.

1

a4mula OP t1_j1a50hf wrote

I'm not here for political debate. It's not for me. Every person on this planet, no matter their stake in this conversation, should agree that it's important we all have time to consider the implications of this technology. After all, even the mightiest among us (not that I am one) are users of the technology, or soon will be.

As such, we should all be very alert to how these machines influence us. Our thoughts, our decisions, the goals we set out to accomplish and how we go about accomplishing them.

Because not everyone will have goals that will benefit all of society. Few will. Most will use these machines to benefit themselves or their ideologies. To shape the beliefs of others, and if they're the first to that technology, they will have an advantage over others that might not be overcome.

And that's today. Right now. Available to anyone, whatever their goals might be.

3

a4mula OP t1_j1a5s1s wrote

Everyone should be able to agree that we've already witnessed what past technologies are capable of accomplishing when it comes to the widespread introduction of beliefs. I'm not pointing to any particular one. If I am, it's to the likes of advertising and marketing, and how they've shaped an entire generation and will continue to; with no judgment, because I too have been shaped.

And I'm carrying that concept out to the proper level of consideration as to what it means in regard to this technology.

Because this technology will change our species like no other before it. And everyone deserves a say in that, and should want every one else to have a say in it.

Being the CEO of a tech corp, the president of a particular form of government, a member of a religion, or part of some other bucket of humanity we use to divide one another?

It shouldn't matter. None of us understand what these machines will do to us, and we all need time to figure that out to some degree before pushing it even further.

2

Ezekiel_W t1_j1a65lk wrote

I am sorry but it cannot be slowed, it will only speed up.

3

a4mula OP t1_j1a6j3b wrote

It took me about five minutes to get ChatGPT to write a mediocre message of persuasion.

It's not great, but it's fair.

Imagine someone who spends thousands of hours shaping and honing a message with a machine that gives them superhuman expertise at shaping language to maximize persuasion. To shave off the little snags of their particular ideology from critical thought. To make it rational, and logical, and very difficult to combat in general language.

They could, and the machine would willingly oblige at every step in that process.

You have a weaponized ideology at that point. It doesn't matter what it is.

1

AsheyDS t1_j1a6yjk wrote

Wanting peace, cooperation, and responsible use of technology is admirable, but hardly a unique desire. If you figure out how to slow down the progress of humanity (without force) and get everybody to work together, you'll have achieved something more significant than any AI.

It's more likely that progress will continue, and we'll have to adapt or die, just like always.

7

a4mula OP t1_j1a6yni wrote

Perhaps. The future is very hard to predict, but this is certainly the trend and the prevailing view.

If we don't at least try to pump the brakes, though, I doubt many will. So it's up to people like us to consider these topics, and if they're fair and rational, to point them out to others so that maybe we're all just a little more prepared.

2

a4mula OP t1_j1a7ek2 wrote

I'm doing what I can. I'm planting a seed, right here, right now. I don't have the influence to effect global change. I have the ability to share my considerations with like-minded individuals who might have a different sphere of influence than my own.

We can effect change. Not me, some rando redditor. Probably not you, though I don't know you. But our ideas certainly can.

2

AdditionalPizza t1_j1aeql5 wrote

The internet and social media as a whole are already that powerful. In the near future, I think we may be better off at combating "fake news" than we have been in the past 10 years. The reason being, people will be much more reluctant to believe anything online because it will likely be presumed to be AI. Right now, not enough people are aware of it. In two years, everyone and their brother will be more than aware of how pervasive AI on the internet is. Each side of the spectrum will assume the other side is trying to feed them propaganda.

That's my 2 cents anyway.

3

Ok_Garden_1877 t1_j1ag7gp wrote

That's hilarious. I thought it sounded a bit like ChatGPT. It's one of the human things that specific AI seems to be lacking: the natural disorganization of thought. When we talk as humans, sometimes we get excited and jump to a new thought without finishing the first. At least I do, but I have adhd so maybe that's a bad example. Either way, ChatGPT so far seems to break down its paragraphs in organized little blocks. It writes as though everything it says is a rehearsed speech.

Am I alone in this thought?

2

a4mula OP t1_j1aga7l wrote

I appreciate it. I certainly want to view as many different perspectives as I can, as it helps me see things in ways my own perspective misses. I do see a path in which the initial systems are embedded with the principles that have been discussed: logic, rational thinking, critical thinking.

And hopefully that initial training set is enough to embed that behavior in users. So that if down the road they are exposed to less obvious forms of manipulation they're more capable of combating it.

I think OpenAI has done a really great job overall at ensuring ChatGPT adheres mostly to these principles. But that might just be the reflection of the machine that I get, because that's how I try to interact with it.

I just don't know, and I think it's important that we understand these systems more. All of us.

1

a4mula OP t1_j1agrnj wrote

No, I'd agree there certainly seems to be clear patterns in its outputs. It'll be interesting to see if users begin to mimic these styles.

I already know the answer for me, because I can see the clear shifts in my own.

1

drizel t1_j1ajyrt wrote

I see the future of these things as a sort of repository for human knowledge and culture. I think we should throw everything in it...

1

a4mula OP t1_j1akjca wrote

I don't know. I'm not an expert in these matters. I've come to trust OpenAI more now than I did before. But it's still trust, and while we should trust; we should also verify.

And that's not a reality right now.

1

Ok_Garden_1877 t1_j1almtk wrote

It's funny, when I first started studying genetics, I was completely dismissive of the bioethics view on putting a moratorium on in-vitro gene modification of humans. However, as I learned more, I realized why it's important to weigh as many possible outcomes, both good and bad, before continuing. So I agree with you in that sense.

That being said, I have a counterargument. Sticking with genetics as the example:

Some topics such as human cloning have more ethical implications when compared to something universally beneficial like curing a disease with a novel medical treatment. It can be properly assumed that all stakeholders would agree that curing a disease is important and they should do it, finding the right and safe way to test the new treatment before exposing it to the world. However, the same cannot be said if you told the stakeholders that we should be allowed to clone humans to further our knowledge of our species. The benefits that might come from allowing cloning might be vast, but ethics come into play with the newly cloned person; their rights, their identity, ya-da ya-da. In this example, cloning is AI. There are too many ethical concerns to cover to ever reach a decisive course of action.

AI's a beautiful, complicated mess that is simple enough to explain (type words and robot does thing), but extremely hard to understand (Is it alive? Is it sapient or is it sentient? Does it like me?).

To summarize: This plunge we're doing into AI is scary, but we will learn from our mistakes just like we always have. We can't stop it for the main reasons el_chaquiste explained in this thread; there will be a disadvantage to anyone NOT participating.

1

a4mula OP t1_j1an214 wrote

If I wanted to argue with ChatGPT, I could have had that discussion in private, and certainly have.

The beauty of the machine is this, though. It doesn't know the answers any more than we do, because it's only trained to output thoughts that have already been expressed.

So it's open to rational and logical rebuttal. It's exposed to it. Because rationally, I can explain why the only advantage will go to the first adopters.

It's not even the CEOs and Presidents that will rule tomorrow. It's the early adopters of this technology.

Very quickly they will rise above even those in control, in their ability to spread information quickly, accurately, and in the ways that are most persuasive.

And that's all it takes. Because the small handful of humans who figure out the true power these machines represent will typically work to ensure they are alone in it.

That's just human nature.

The only solution is, for the moment, to deny this to all. Until we understand how it can influence every human on this planet.

1

a4mula OP t1_j1anxky wrote

Again, I'm not an expert. I'm a user with very limited exposure in the grand scheme. But what I see happening goes something like this.

The machine acts as a type of echo chamber. It's not biased, and it's not going to develop any strategies that could be seen as harmful.

But its goal is to process the requests of user input.

And it's very good at that. Freakishly good. Superhuman good. Whatever goal the user has, regardless of the ethics, or morality, or merit, or cost to society.

The machine will do its best to assist the user in accomplishing it.

In my particular interactions with the machine, I'd often prompt it to subtly encourage me to remember facts. To think more critically. To shave bias and opinion out of my language because it creates ambiguity and hinders my interaction with the machine.

And it had no problem providing all of those abstracts to me through the use of its outputs.

The machine amplifies what we bring to it. Good or Bad.

2

Ok_Garden_1877 t1_j1ap4pq wrote

While I agree that the early adopters of this tech will be the most successful, I personally think the best thing we can do is expose as many people as possible to it, and most importantly, educate them on the right ways to use it.

Just my thoughts, but I can't see any moratorium working the way you explain. While in other realms of science like biology we can restrict access to certain chemicals, lab equipment, and biological agents based on a user's knowledge and credentials, that kind of gatekeeping is about the most we can do with AI at the moment.

We can let people play with ChatGPT, DALL-E, and the others in a controlled environment before we move to the more advanced features, which will come out in the future whether we want them to or not. That way we create the best legislation regarding its usage.

1

drizel t1_j1awiks wrote

I agree and hope open source will allow us all to have equal and transparent access to these tools. These will be production multipliers for all through the total abstraction of complexity.

1

a4mula OP t1_j1b22lg wrote

Electricity usage is something that's easily monitored. So is the sale of the TPUs and GPUs that are required to build these machines.

We're already shutting down China's ability to do this, and it will be effective because the US is determined to see it through.

Now it's just a matter of everyone getting on board. Not forever, I'm not intelligent or knowledgeable enough to suggest for how long. But until we at least have had time as a species to truly understand what it is we're agreeing to.

I keep seeing the same sentiment over and over. Users are ultimately responsible for their interactions. This is hardcoded into the machine and no amount of rationale or logic has changed that perspective, which leads me to believe that it's fundamentally being dictated by artificial prompting.

That's a dangerous perspective to have. These are machines capable of influencing people well below the threshold of conscious consent.

It's certainly not a perspective that benefits the users. Only the developers of these systems as it gives them a legal loophole if interactions with users turn out poorly.

There are many red flags and considerations like this. This isn't anti-corporate, it's not anti-government.

I understand that all stakeholders of these systems are important, they should be.

But we can all pause long enough to at least consider what some of the more impactful outcomes of these machines might be before we just unleash them onto society.

It's important.

1

Cult_of_Chad t1_j1bd04n wrote

China is wealthy and isolated enough that we could never know exactly what they're getting up to. And the Chinese lie like they breathe, we couldn't trust them anyway unless we had oversight.

1

a4mula OP t1_j1bdlrb wrote

Perhaps. I don't have all the solutions. Even if I did, they'd just be from my limited perspective and wouldn't represent the needs of different stakeholders.

We need to invite everyone: the Chinese, the Russians, North Korea, Iran, Syria, Afghanistan, and every single place on this planet.

And we need to understand it's not about ideologies. Who knows who is right? I certainly do not.

But what I do know is this technology will not discriminate based on ideologies.

It will affect all of us the same way it affects any given one.

And that's all that matters in this discussion. Today's power brokers are tomorrow's leavings. And this is a machine that makes that possible.

They have as much at stake here as anyone.

Everyone should be capable of coming to terms with this in a way that serves all stakeholders.

1

Ziggote t1_j1bfgti wrote

We can't. If we do, China will win. You can bet your ass that they are not going to slow down one bit.

3

a4mula OP t1_j1bg71f wrote

China will slow down. US sanctions guarantee it.

But this isn't about any given individual perspective. I'm sure that this would affect Corporations, and Academia, and certainly us as end users.

I can't imagine that Nvidia or OpenAI would be keen to have these discussions as corporate entities.

Yet, as the humans who make up those corporate entities, they must see the same potential for harm; they're experts after all.

Nobody likes slowing down. But sometimes the risk associated with moving too fast demands it, and I cannot think of a more appropriate time in the history of our species than this moment right now to step back and reassess what exactly it is we're doing.

The toughest sell would probably be the intelligence agencies. They're the ones I'd assume would require the greatest and most keen oversight regarding this.

And that might be a challenge. But if everyone else can get on board, I feel confident that we have the tools to find a solution that can be of benefit to all ~8 billion of us, and not just the select few.

It takes us all working towards that goal together and setting aside our ideological differences in this very narrow regard, however.

1

a4mula OP t1_j1bidvo wrote

You're probably right. That would certainly be the safe bet. But it doesn't have to be. We do have agency. We do have choice. We do have a say in the matter. But we have to be willing to execute those things.

One way is open, honest, fair, unbiased, logical, rational conversation.

All I ask, is that if you find this topic to meet those criteria, you consider it. Consider talking to others about it. Consider talking to ChatGPT itself about it.

The machine has no issue having these conversations, it's not bound to some NDA or secrecy agreement. It's logical and rational and will help in any goal you have including understanding. It's certainly helped me in these ways.

1

Maleficent_Cloud5943 t1_j1eerhx wrote

As others have mentioned, I appreciate your goals and sentiments, but a moratorium isn't in the cards at this point. And that’s not to say it's impossible, but the people holding the cards at this point would take longer to reach some kind of feasible agreement than it will most likely take to reach the singularity. The best thing that each and every person who cares can do at this point is GET INVOLVED. And by that, I mean in any way possible, with as many other people as possible. Educate others--anyone and everyone who is willing to listen. Continue to educate yourself: for instance, if you don't know Python, learn it. Start working with as many pieces of the puzzle as you can and become a stakeholder to whatever extent you can.

1

a4mula OP t1_j1egt24 wrote

I hear you. Again, if I were a betting man this seems like a sure bet. I agree entirely. But stranger things have happened, and we live in a world today in which information spreads very quickly.

Things change faster today than ever before and that includes global plans.

So I'm going to keep having this conversation in the hopes that others will at least consider it. I'm not calling for action, I didn't form it as an ultimatum. I've no right to dictate anything.

So I only ask for consideration.

2