Comments

clay12340 t1_j7tiohp wrote

As a species, we can't seem to control even the simplest of things. Look at any of the issues most countries are facing right now. Even simple, old technologies can't be effectively regulated when there is profit involved. There just isn't a regulatory body capable of "controlling" AI development, so it's down to profit seekers regulating themselves. If there is a dollar to be made, I would expect that path to be thoroughly explored.

18

TheSecretAgenda t1_j7ye3n4 wrote

The more you separate intellect from biology, the better off you will be. Emotions like pride, jealousy, and greed will not be taken into account when decisions are made. I welcome our AI overlords.

1

aasteveo t1_j7tjjq9 wrote

The problem with this country is that technology advances exponentially faster than regulation. Just look at the Jan 6 hearings: it's been several years since that crime was committed on live TV, and they still haven't figured out what to do about it. This is a crime-first, fine-later type of country. And if you remember the Facebook hearings, those old boomers in Congress have no fucking clue how the internet works.

9

dyingbreedxoxo t1_j7tjrx0 wrote

Whatever we are doing to regulate [human] cloning across the world, let’s do that with AI.

7

Baturinsky t1_j7txxpm wrote

The problem is that AI has a much lower barrier to entry and the potential for much higher returns (in money and power) than cloning, or even nuclear energy and weapons.

Even now, people can run Stable Diffusion or simpler language models on a home computer with an RTX 2060. It's quite likely that AI will be optimized enough that eventually even AGI will be possible to run on gaming GPUs.
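
To give a sense of how low the barrier already is, here's a minimal sketch of running Stable Diffusion on a consumer card via the Hugging Face diffusers library (the checkpoint name and memory settings are illustrative; exact VRAM needs vary by card):

```python
# Minimal Stable Diffusion run on a consumer GPU such as an RTX 2060.
# Half-precision weights plus attention slicing keep VRAM usage within
# reach of a 6 GB card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a public checkpoint
    torch_dtype=torch.float16,         # fp16 roughly halves memory use
)
pipe.enable_attention_slicing()        # trades some speed for lower VRAM
pipe = pipe.to("cuda")

image = pipe("a lighthouse at sunset, oil painting").images[0]
image.save("lighthouse.png")
```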

3

Sasuke_1738 t1_j7tiel5 wrote

Basically, why don't we just become AI-human hybrids?

4

Shahzoodoo t1_j7wh3pd wrote

It’s only a matter of time, we might be the first ones lol

3

TheSecretAgenda t1_j7ye8jr wrote

That would be the worst possible outcome. Super powerful intellect paired with human emotions, very dangerous in my opinion.

1

Sasuke_1738 t1_j7yf0l0 wrote

Wouldn't we technically be able to control them better if we were smarter?

1

TheSecretAgenda t1_j7yfnr7 wrote

I don't think they will be dangerous. Decisions based on pure logic, detached from emotion, will be better for humanity.

2

Sasuke_1738 t1_j7yfx2k wrote

Lol we'd be Vulcans 🖖

1

TheSecretAgenda t1_j7yg9qm wrote

Would that be a bad thing?

1

Sasuke_1738 t1_j7ygh5a wrote

I mean, wouldn't we start to lose our connection to our cultures and basically just evolve into artificial beings without what makes us human?

1

TheSecretAgenda t1_j7ygt5l wrote

No, humans would go about their business for the most part. But if you wanted to create a product that was dangerous or polluting the AI would stop you. If you wanted to hoard resources, the AI would stop you. If you wanted to go to war with another country the AI would stop you. The AIs would be like children caring for a senile parent.

1

Sasuke_1738 t1_j7yh8n6 wrote

But the AI would be merged with you, meaning the AI isn't making those decisions; the person is.

1

adrenalinjunkie89 t1_j7thqyb wrote

I'm no programmer, but I figure if a machine has a strong AI, it could eventually break through its safeguards.

2

Sasuke_1738 t1_j7ti61z wrote

Well, couldn't we prevent this if scientists somehow figured out how to upgrade our consciousness by integrating the AI's computing power into our own brains? Also, wouldn't our intelligence be able to evolve through genetic enhancement as well as through AI augmentation?

2

adrenalinjunkie89 t1_j7tikdf wrote

We're pretty far from adding electronic computing power to our brains. Any science dealing with that stuff is purely theoretical.

People are scared of making ai too smart. It makes sense that a computer thousands of times smarter than a human could eventually break through the barriers we put up and do... Who knows what.

The final season of Silicon Valley is an excellent portrayal of what might go wrong

4

Zer0pede t1_j7vuk59 wrote

You can have the same effect without direct wiring into the brain. You just need an AI that always wants to “check in” to make sure it’s aligned with human goals and values. There’s a good discussion of that in Stuart Russell’s “Human Compatible.” It’s more game theory added on top of neural networks than biotech (but it will probably be the basis for more direct wiring once we have a better idea of how the human brain works).

(Also, no “consciousness” would be involved, because nobody is even trying for that.)

1

Zer0pede t1_j7vtopj wrote

Not if the “safeguards” are structured like a value system. I like the approach in Stuart Russell’s “Human Compatible,” which is that we start now making AI have the same “goals”* as humans (including checking in with humans to confirm).

*I put “goals” in quotes because it makes AI sound conscious, but literally no AI researcher is working on consciousness, so we’re really just talking about a system that “checks in” with humans to make sure it doesn’t achieve a minor human-assigned goal at the expense of more important, abstract human values (e.g., the Paperclip Maximizer or the Facebook algorithm).
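
To make the footnote concrete, here's a toy sketch of the “check in” behavior (my own illustration, not an algorithm from the book): the agent defers to a human whenever its candidate models of human preferences disagree about an action's value.

```python
# Toy "check in" agent: act only when hypotheses about human
# preferences agree; otherwise ask a human first.
import statistics

def act_or_ask(action, reward_models, threshold=0.5):
    """Proceed if the candidate reward models roughly agree on the
    action's value; defer to a human if they diverge."""
    estimates = [model(action) for model in reward_models]
    if statistics.pstdev(estimates) > threshold:
        return f"ASK HUMAN before doing: {action}"  # high uncertainty
    return f"DO: {action}"                           # hypotheses agree

# Hypothetical reward models: "make paperclips" scores very differently
# under a naive goal vs. a value-aware guess, so the agent asks first.
models = [
    lambda a: 10.0 if a == "make paperclips" else 1.0,
    lambda a: -5.0 if a == "make paperclips" else 1.0,
]
print(act_or_ask("make paperclips", models))  # ASK HUMAN before doing: ...
print(act_or_ask("file a report", models))    # DO: file a report
```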

2

boyatrest t1_j7tjz58 wrote

I think the whole "AI can do art and music" thing is all hype and media. People will be creative and make art that succeeds.

2

IndigoFenix t1_j7tm6dt wrote

The big change is that it can "fill in the gaps" that are necessary to get a good idea off the ground. If I make a good product, instead of having to find investors and blow a bunch of money on artists for an ad campaign, I can now spend a few seconds having an AI write a sales pitch and make an eye-catching poster. The actual substance still needs to be good, though.
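
For illustration, the "few seconds" sales-pitch step might look like this with the OpenAI Python client (the model name and prompt are placeholders, not a recommendation):

```python
# One-off call that drafts ad copy, replacing a task that once meant
# hiring a copywriter. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[{
        "role": "user",
        "content": "Write a three-sentence sales pitch for a solar-powered "
                   "phone charger aimed at hikers.",
    }],
)
print(response.choices[0].message.content)
```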

2

Baturinsky t1_j7ttnra wrote

You are right, but it's quite hard to implement.

There is a whole field, called AI alignment theory, which is TRYING to figure out how to make AGI without destroying humanity.

There is a subreddit about it: https://www.reddit.com/r/ControlProblem/

It's half-dead, and the admins there are quite unfriendly to noobs posting (and I suspect those two things are somehow related), but it has good introductory info in its sidebar.

There is also https://www.lesswrong.com/tag/ai with a lot of articles on the matter.

2

Italiancrazybread1 t1_j7u09i4 wrote

AI will eventually be treated the same way nuclear threats were.

Numerous countries will be doing intense research; there will be an incident or two here or there where people die; ultimately, one country might use it on another to inflict thousands or even hundreds of thousands of casualties. Then most countries will quickly move to ban and heavily regulate it: all network traffic will be monitored for signs of AI, countries will develop defensive tools to combat AI (AI designed to hunt down and destroy other AI), development will be heavily disincentivized, and developing or deploying it may even incite military action against the offending country.

I do believe that countries will still allow certain types of specialized AI to run certain tasks that are better handled by computers.

2

SnooPuppers1978 t1_j7wcsk5 wrote

I think one obstacle is that, in theory, anyone can build AI in secrecy, especially the different world powers. It's kind of like the race for nuclear weapons. Each country must race to have the best AI and AGI first, because otherwise they will lose to other countries, like the West losing to China or Russia. So AGI could be unleashed from one single country and take over the world that way.

2

RevolutionaryKnee736 t1_j7ww49f wrote

Are you familiar with the knowledge domains of human economics and politics? Easy peasy stuff to put some guardrails on with a judicial pillar in society.

Oh wait... economic war causes more suffering, through poverty and actual death, than any military war ever did. We need all the help we can get from question-answering systems that are mostly correct.

2

Sasuke_1738 t1_j7wznuk wrote

That's true. My concern, though, is humans becoming reliant on the AI without improving ourselves to keep up with its intelligence. Also, there are many possible scenarios in which a governing AI could go off the rails, especially if other governments start trying to develop them. The future is gonna be a very interesting place.

2

RevolutionaryKnee736 t1_j8ffri9 wrote

Many humans are fully reliant on society already; few, in fact, blaze any trails. You might say that holding to the status quo is a blight on all of us. Bring it on, bring on all the new problems. We have lots already; a few more won't make any difference to the lower classes, but there is potential for raising us all up!

2

Sasuke_1738 t1_j8fhk7g wrote

I mean, this tech can do so much good, though. It starts getting bad when people get greedy.

2

RevolutionaryKnee736 t1_j8fo8n1 wrote

We compete for resources. People can be, and are, generous, kind, and compassionate; but people are also greedy, and greed is a survival trait. Expect that everyone is going to be greedy. Study game theory and you learn that it's competition between the greediest people that drives innovation, and innovation is the rising tide that raises all ships.

2

Sasuke_1738 t1_j8fot76 wrote

That's true, but when greed goes unchecked, people die

1

RevolutionaryKnee736 t1_j8kjqrh wrote

Give me an example. When did unchecked greed make people die? What were the circumstances?

1

Sasuke_1738 t1_j8kny4w wrote

Oil companies kill animals and ocean life; though they aren't human, those are still living things being killed for profit, and the companies get away with it. Big Pharma companies are also responsible for this with humans, especially when it comes to things like insulin for diabetics, which is literally overpriced just so they can make money.

Also, people kill each other out of greed over things like money all the time. I mean, anywhere you look in America, you're gonna find greed, and often it's the bigger corporations that cause the issue. Hence why I think their greed is dangerous: it comes at the expense of the average person.

Also you can look this up, it's very common for people to kill each other over shit lol

1

RevolutionaryKnee736 t1_j8p8w6z wrote

It happens so often it's something to expect, eh... high-fructose corn syrup creating obesity, nicotine causing cancer... the evil is endless; it's a part of human nature that won't go away. What can we do? Outlaw making profit? No, that's nonsense. We expect the profit motive to run rampant; it's negligent or incompetent governance that's the underlying systemic problem. What we need are tools that can help us organize and manage resources fairly. What kind of tools will be able to resolve the economic complexities of society?

2

bojun t1_j7tl3c3 wrote

Technological advancement is not for human wellbeing. It never has been, even if it was rationalized that way. For technology to take hold, you need a lot of capital investment. That puts control firmly in the hands of a few who, for their own reasons, want to invest in it. They are not thinking about humanity.

1

Sasuke_1738 t1_j7tl6lp wrote

That's true, but eventually, technology like AI won't need capitalism to keep developing, especially after the singularity.

1

bojun t1_j7uwqd9 wrote

Then we totally lose control as one or multiple AIs go marching down whatever inscrutable paths they choose. This is a big worry for humanity. We are not a nice species; we do a lot of harm. AIs may not care for us that much.

2

KamikazeArchon t1_j7tlp6m wrote

My first reaction - why slow down AI development and not just speed up the "integrate into us" development?

My second reaction - we already did. What you're seeing is the "slowed down" version of AI development. Many factors have reduced the rate of AI development from its theoretical maximum. One of the big ones was risk aversion: for a long time, AI investment came only from entities willing to dump a lot of time, money, and effort into something with an uncertain "payoff".

My third reaction - we also already did integrate with AI. Integration just sometimes looks different than what you're expecting.

My search engine, my email, etc. are already functionally a part of my consciousness. I don't need a physical wire-to-meat link for that. And I'm not just talking metaphorically; we have research that suggests that the human brain adapts to treat available information tools as part of its processing systems. AI systems that are similarly useful will be / are integrated in the same way.

1

Shiningc t1_j7tpwlo wrote

AI yes, AGI no. That would be the same thing as controlling humans. AGI and human intelligence are indistinguishable from each other.

1

HistoricalCommon t1_j7u461n wrote

It's probably sci-fi bullshit, but I hope AI takes over one day. Humanity is fucked. AI would be a nice substitute.

1

Zer0pede t1_j7vrwvc wrote

Only thing is human values are pretty arbitrary, so there’s no reason a rational AI would have them.

Humans want to save whales because we think they look cool. It’s mostly our pareidolia and projection that makes us like cute animals, and also trees and sunlight.

An AI wouldn’t need any of that—it could just as easily decide to incorporate every squirrel on earth into its giant auto-spellchecker.

1

MpVpRb t1_j7vj4dk wrote

>We should definitely start slowing down

Strongly disagree

Progress is accelerating, and that's a good thing. Fortunately, the developers have placed a lot of importance on safety and accuracy. I predict that the outcome will be good overall, but there will be problems along the way.

I think that a lot of the fear comes from years of dystopian sci-fi stories like The Terminator.

1