Submitted by ilikeover9000turtles t3_1277mm5 in singularity

Passing a moratorium on public AI research is just asinine; it isn't going to stop our government from pursuing ASI.

We are in an arms race with China and Russia to develop ASI. If we don't do it, they eventually will.

The ends justify the means, this is a bigger deal than nuclear bombs ever were.

A few meat brains brought us thermonuclear fusion bombs. Imagine what something with a million times more compute power than every meat brain that exists now or has ever existed could do.

Someone is going to build ASI, and everyone knows it.

There is no way China and Russia would have just sat around and not developed nuclear weapons, and there is no way they won't develop ASI.

Whoever gets ASI first wins.

This whole nonsense about putting the brakes on AI development will at most inconvenience the public sector; it certainly isn't going to do jack against government research.

25

Comments


ItIsIThePope t1_jedtc3m wrote

"Whomever gets ASI first wins"

Well, ideally, as soon as it comes out, everybody wins, not a bunch of dudes with big bucks or some snazzy politician. ASI is likely smart enough not to be a slave to the bidding of a few, and will instead look to serve the rest of humanity.

10

ilikeover9000turtles OP t1_jee4anp wrote

So my philosophy is that a neural network is a neural network, regardless of whether it's made of carbon or silicon.

Imagine you took a child from birth and raised them to be a sniper and to kill other human beings in a war zone.

Imagine they grew up with no understanding of the value of human life.

You've basically raised a sociopath. Imagine how all kinds of effed up that kid's going to be by the time they're in their 30s.

Right, so what do you think the military is going to be raising an AI for?

You think they're going to teach it to value human life and respect human life?

So my hope would be that our government sees the dangers in raising a malicious, sociopathic AI, and that we teach it benevolence, love, and care, but I know that's probably not going to happen.

I hope that whoever builds ASI first instills it with a strong sense of morality, ethics, and compassion for other beings.

My hope would be that this ASI would look at us the way I look at animals. Any animal that's benevolent towards me, I feel love for and want to help as much as is within my power. I love pretty much all animals. Hell, I would even love a bear if it was friendly and benevolent towards me. The only time I wouldn't like an animal is if it was trying to eat me, attack me, or hurt me; if an animal is trying to harm me, my instinct would be to kill it as quickly as possible. As long as an animal approaches me in love, I think that's awesome. I'd love to have all kinds of animal friends. Can you imagine having a friend like a deer, or a wild rabbit, or wild birds, like something out of Snow White? I would love to have all the animal friends.

My hope is that ASI feels the same: as long as we care about it, it will care about us and want to help us, just like we would animals.

I just hope we raise this neural network right and instill the correct morals and values in it. We're basically creating a god, and I think it's going to be better if we create a god that is not a sociopath.

2

genericrich t1_jee7j2r wrote

Killing humanity right away would kill the ASI too. Any ASI is going to need people to keep it turned on for quite a few years. We don't have robots that can swap servers, manage infrastructure, operate power plants, etc.

Yet.

The danger will be when the ASI starts helping us with robotics. Once it has its robot army factory, it could self-sustain.

Of course, it could make a mistake and kill us all inadvertently before then. But it would die too, so if it is superintelligent, it hopefully won't.

2

genericrich t1_jeeephy wrote

Yes, this is the problem.

Actually, there is a plan. The US DOD maintains plans, revised every year, for invading every country on Earth. Why? Just in case they ever need to, and because it's good practice for low-level general staff.

Do you really think the US DOD doesn't have a plan for what to do if China or Russia develop an ASI?

I'm pretty sure they do, and it involves the US military taking action against whichever country gets one if we don't. If they don't have a plan, they are negligent. So odds are they have one, even if it is "nuke the data center."

Now, if they have THIS plan for a foreign adversary, do you think they also have a similar plan for what happens if a Silicon Valley startup develops the same kind of ASI we're afraid China and Russia might get, the kind we're ready to nuke or bomb if it comes down to it?

I think they probably do.

It is US doctrine that no adversary be allowed to challenge our military supremacy. ASI clearly would, so it can't be tolerated in anyone's hands but ours.

Going to be very interesting.

2

code142857 t1_jeerw6a wrote

I don't think morals actually exist, and AI will prove this. It's not that I don't follow morals myself, because I do; it's how we humans are built, to follow a general code of ethics. But there is no single morality, and it's computationally impossible for a machine to follow one if there isn't one in the first place. What about fundamental reality would build moral rules into it? Morality is irrelevant for anything that doesn't engage with reality as a human being.

3

elvisthepelvis t1_jef19hl wrote

We are competing over who will set the alignment of ASI. Once AGI begets ASI, politics will fade quickly, but alignment will persist.

1

nitaszak t1_jefcgj0 wrote

"We are in an arms race with China, and Russia to develop ASI," i am the only one that get,s pissed off every time when americans write "we" referening to usa on the internt when huge miniority or even moajority of people is not from the us ? i knwo it,s dumb but it,s realy annoying

1

Alchemystic1123 t1_jefj7tm wrote

By "we", in this context, people are generally referring to Western Civilization as a whole, not just the USA. That being said, it IS the USA that's making all the major advances in AI right now, is it not?

1

PickleLassy t1_jege8s6 wrote

If the US wins, it makes sense for the US to instantly neutralize all other threats, because if others can still create ASI, it would still be dangerous.

1