Comments

adrenalinjunkie89 t1_j7thqyb wrote

I'm no programmer, but I figure that if a machine has a strong AI, it could eventually break through its safeguards.

2

Sasuke_1738 t1_j7ti61z wrote

Well, couldn't we prevent this if scientists figured out how to upgrade our consciousness by integrating the AI's computing power into our own brains? And wouldn't our intelligence be able to evolve through genetic enhancement as well, alongside integrating AI into ourselves through augmentation?

2

Sasuke_1738 t1_j7tiel5 wrote

Basically, why don't we just become AI-human hybrids?

4

adrenalinjunkie89 t1_j7tikdf wrote

We're pretty far from adding electronic computing power to our brains. Any science dealing with that stuff is purely theoretical.

People are scared of making AI too smart. It makes sense that a computer thousands of times smarter than a human could eventually break through the barriers we put up and do... who knows what.

The final season of Silicon Valley is an excellent portrayal of what might go wrong.

4

clay12340 t1_j7tiohp wrote

As a species, we can't seem to control even the simplest of things. Look at any of the issues that most countries are facing right now. Even simple, old technologies can't seem to be effectively regulated when there is profit involved. There just isn't a regulatory body capable of "controlling" AI development, so it's down to profit seekers regulating themselves. If there is a dollar to be made, I would expect that path to be thoroughly explored.

18

aasteveo t1_j7tjjq9 wrote

The problem with this country is that technology advances exponentially faster than regulation. Just look at the Jan 6 hearings: it's been several years since that crime was committed on live TV, and they still haven't figured out what to do about it. This is a crime-first, fine-later type of country. And if you remember the Facebook hearings, those old boomers in Congress have no fucking clue how the internet works.

9

dyingbreedxoxo t1_j7tjrx0 wrote

Whatever we are doing to regulate [human] cloning across the world, let's do that with AI.

7

boyatrest t1_j7tjz58 wrote

I think the whole "AI can do art and music" thing is all hype and media. People will be creative and make art that succeeds.

2

bojun t1_j7tl3c3 wrote

Technological advancement is not for human wellbeing. It never has been, even if it was rationalized that way. For technology to take hold, you need a lot of capital investment. That puts control firmly in the hands of a few who, for their own reasons, want to invest in it. They are not thinking about humanity.

1

KamikazeArchon t1_j7tlp6m wrote

My first reaction - why slow down AI development and not just speed up the "integrate into us" development?

My second reaction - we already did. What you're seeing is the "slowed down" version of AI development. Many factors have reduced the rate of AI development below its theoretical maximum. One of the big ones was risk aversion: for a long time, AI investment came only from entities willing to dump a lot of time, money, and effort into something with an uncertain "payoff".

My third reaction - we also already did integrate with AI. Integration just sometimes looks different than what you're expecting.

My search engine, my email, etc. are already functionally a part of my consciousness. I don't need a physical wire-to-meat link for that. And I'm not just talking metaphorically; we have research that suggests that the human brain adapts to treat available information tools as part of its processing systems. AI systems that are similarly useful will be / are integrated in the same way.

1

IndigoFenix t1_j7tm6dt wrote

The big change is that it can "fill in the gaps" that are necessary to get a good idea off the ground. If I make a good product, instead of having to find investors and blow a bunch of money on artists for an ad campaign, I can now spend a few seconds having an AI write a sales pitch and make an eye-catching poster. The actual substance still needs to be good, though.
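
For a sense of scale, that sales pitch really is a few lines of code today. A hedged sketch using the OpenAI Python client; the model name and prompts here are placeholders, not a recommendation:

```python
# Sketch: drafting a sales pitch with a hosted LLM API.
# Requires: pip install openai, and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model works
    messages=[
        {"role": "system", "content": "You are a concise marketing copywriter."},
        {"role": "user", "content": "Write a three-sentence sales pitch for a "
                                    "solar-powered phone charger."},
    ],
)
print(response.choices[0].message.content)
```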

2

Shiningc t1_j7tpwlo wrote

AI yes, AGI no. That would be the same thing as controlling humans. AGI and human intelligence are indistinguishable from each other.

1

Baturinsky t1_j7ttnra wrote

You are right, but it's quite hard to implement.

There is a whole field of research, called AI alignment, which is TRYING to figure out how to make AGI without destroying humanity.

There is a subreddit about it: https://www.reddit.com/r/ControlProblem/

It's half-dead, and the admins there are quite unfriendly to noobs posting (I suspect those two things are related), but it has good introductory info in its sidebar.

There is also https://www.lesswrong.com/tag/ai with a lot of articles on the matter.

2

Baturinsky t1_j7txxpm wrote

The problem is, AI has a much lower barrier to entry and the potential for much higher returns (in money and power) than cloning, or even nuclear energy and weapons.

Even now, people can run Stable Diffusion or smaller language models on home computers with an RTX 2060. It's quite likely that AI will be optimized enough that eventually even AGI will be possible to run on gaming GPUs.
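
As a rough illustration, a minimal sketch using Hugging Face's diffusers library (the model ID and memory settings are illustrative, assuming a ~6 GB card like the RTX 2060):

```python
# Sketch: running Stable Diffusion locally on a consumer GPU.
# Requires: pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

# Load the pipeline in half precision so it fits in limited VRAM.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model ID
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()  # trades speed for lower peak memory use

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```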

3

Italiancrazybread1 t1_j7u09i4 wrote

AI will eventually be treated the same way nuclear threats were.

Numerous countries will be doing intense research; there will be an incident or two here or there where people die; ultimately, one country might use it on another to inflict thousands or even hundreds of thousands of casualties. Then most countries will quickly move to ban and heavily regulate it: all network traffic will be monitored for signs of AI, countries will develop defensive tools to combat AI (AI designed to hunt down and destroy other AI), countries will be heavily disincentivized from developing it, and its development or deployment may even incite military action against the offending country.

I do believe countries will still allow certain types of specialized AI for tasks that are better handled by computers.

2

HistoricalCommon t1_j7u461n wrote

It's probably sci-fi bullshit, but I hope AI takes over one day. Humanity is fucked. AI would be a nice substitute.

1

bojun t1_j7uwqd9 wrote

Then we totally lose control as one or more AIs march down whatever inscrutable paths they choose. This is a big worry for humanity. We are not a nice species. We do a lot of harm. AIs may not care for us that much.

2

MpVpRb t1_j7vj4dk wrote

>We should definitely start slowing down

Strongly disagree

Progress is accelerating, and that's a good thing. Fortunately, the developers have placed a lot of importance on safety and accuracy. I predict that the outcome will be good overall, but there will be problems along the way.

I think a lot of the fear comes from years of dystopian sci-fi stories like The Terminator.

1

Zer0pede t1_j7vrwvc wrote

Only thing is, human values are pretty arbitrary, so there's no reason a rational AI would have them.

Humans want to save whales because we think they look cool. It's mostly our pareidolia and projection that makes us like cute animals, and also trees and sunlight.

An AI wouldn't need any of that; it could just as easily decide to incorporate every squirrel on earth into its giant auto-spellchecker.

1

Zer0pede t1_j7vtopj wrote

Not if the "safeguards" are structured like a value system. I like the approach in Stuart Russell's "Human Compatible," which is that we start now making AI have the same "goals"* as humans (including checking with humans to confirm).

*I put "goals" in quotes because it makes AI sound conscious, but literally no AI researcher is working on consciousness, so we're really just talking about a system that "checks in" with humans to make sure it doesn't achieve a minor human-assigned goal at the expense of more important, abstract human values (e.g., the Paperclip Maximizer or the Facebook algorithm).
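
As a toy illustration of that "check in" behavior (everything here is hypothetical and much simplified; real proposals like Russell's involve uncertainty over reward functions, which this only gestures at): the agent keeps several hypotheses about what humans actually value, and defers to a human whenever its preferred action could be disastrous under one of them.

```python
# Toy sketch: an agent that defers to a human when its best action is risky
# under some plausible hypothesis about human values. All numbers are made up.

def agent_act(action_utilities, belief, disaster_threshold=-50.0):
    """Pick the action with the highest expected utility, unless that action
    could be catastrophic under a value system the agent still finds plausible.

    action_utilities: action -> {hypothesis about human values -> utility}
    belief: hypothesis -> probability the agent assigns to it
    """
    def expected(action):
        return sum(belief[h] * u for h, u in action_utilities[action].items())

    best = max(action_utilities, key=expected)

    # If the best-looking action could be disastrous under some plausible
    # hypothesis, check in with the human instead of acting unilaterally.
    if min(action_utilities[best].values()) < disaster_threshold:
        return "ask_human"
    return best

# "Maximize paperclips" looks great under one hypothesis about human values
# and catastrophic under another, so the agent asks before acting.
utilities = {
    "make_more_paperclips": {"only_paperclips_matter": 10.0,
                             "other_things_matter": -100.0},
    "do_nothing":           {"only_paperclips_matter": 0.0,
                             "other_things_matter": 0.0},
}
belief = {"only_paperclips_matter": 0.95, "other_things_matter": 0.05}
print(agent_act(utilities, belief))  # -> ask_human
```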

2

Zer0pede t1_j7vuk59 wrote

You can have the same effect without direct wiring into the brain. You just need an AI that always wants to "check in" to make sure it's aligned with human goals and values. There's a good discussion of that in this book. It's more game theory added on top of neural networks than biotech (but it probably will be the basis for more direct wiring once we have a better idea of how the human brain works).

(Also, no "consciousness" would be involved, because nobody is even trying for that.)

1

SnooPuppers1978 t1_j7wcsk5 wrote

I think one obstacle is that, in theory, anyone can build AI in secrecy, especially the different world powers. It's like the race for nuclear weapons: each country must race to have the best AI and AGI first, because otherwise it will lose to the others, like the West losing to China or Russia. So AGI could be unleashed from a single country and take over the world that way.

2

RevolutionaryKnee736 t1_j7ww49f wrote

Are you familiar with the knowledge domains of human economics and politics? Easy-peasy stuff to put some guardrails on with a judicial pillar in society.

Oh wait... economic war causes more suffering, through poverty and actual death, than any military war ever did.
We need all the help we can get from question-answering systems that are mostly correct.

2

Sasuke_1738 t1_j7wznuk wrote

That's true. My concern, though, is that humans might become reliant on AI without improving ourselves to keep up with its intelligence. Also, there are many possible scenarios in which a governing AI could go off the rails, especially if other governments start trying to develop them. The future is gonna be a very interesting place.

2

TheSecretAgenda t1_j7ygt5l wrote

No, humans would go about their business for the most part. But if you wanted to create a product that was dangerous or polluting, the AI would stop you. If you wanted to hoard resources, the AI would stop you. If you wanted to go to war with another country, the AI would stop you. The AIs would be like children caring for a senile parent.

1

RevolutionaryKnee736 t1_j8ffri9 wrote

Many humans are fully reliant on society already; few, in fact, blaze any trails. You might say that holding to the status quo is a blight on all of us. Bring it on, bring on all the new problems. We have lots already; a few more won't make any difference to the lower classes, but there is potential for raising us all up!

2

RevolutionaryKnee736 t1_j8fo8n1 wrote

We compete for resources. People can be, and are, generous, kind, and compassionate; but people are greedy, and greed is a survival trait. Expect that everyone is going to be greedy. Study game theory and you learn that it's the competition between the greediest people that drives innovation, and innovation is the rising tide that raises all ships.

2

Sasuke_1738 t1_j8kny4w wrote

Oil companies kill animals and ocean life; though those aren't humans, the companies are still killing life for profit, and they get away with it. Companies in Big Pharma are also responsible for this with humans, especially when it comes to stuff like insulin for diabetics, which is literally overpriced just so they can make money.

Also, people kill each other while being greedy over things like money all the time. I mean, anywhere you look in America you're gonna find greed, and often it's these bigger corporations that cause the issue. Hence why I think their greed is dangerous: it comes at the expense of the average person.

Also, you can look this up; it's very common for people to kill each other over shit lol

1

RevolutionaryKnee736 t1_j8p8w6z wrote

It happens so often that it's something to expect, eh... high-fructose corn syrup creating obesity, nicotine causing cancer... the evil is endless; it's a part of human nature that won't go away. What can we do? Outlaw making a profit? No, that's nonsense. We expect the profit motive to run rampant; it's negligent or incompetent governance that's the underlying systemic problem. What we need are tools that can help us organize and manage resources fairly. What kind of tools will be able to resolve the economic complexities of society?

2