
Salendron2 t1_iuukvm8 wrote

Yes, trust in Google. Google is such a responsible and trustworthy company, the perfect choice to create an AI that manages the entire planet with unchecked control.

I can see no way this could ever end poorly.

9

Plouw t1_iuws4jf wrote

We shouldn't trust Google with that power, but I do trust the engineers building the framework and contributing toward an AI that can, at least partly, manage policies.

Using "Google AI" as a dictator? No.
Using the learnings from what their engineers are creating to at some point build a crowdsourced, open-source, cryptographically verifiable and truly democratically controlled AI that manages policies at a slowly increasing rate?
I think that has the potential to be very beneficial.

1

ninjasaid13 t1_iuwufo2 wrote

But what if the leaders at Google disagree with how the technology should be used, and fire, or force the resignation of, the engineers who disagree with them?

2

Plouw t1_iuwvnwn wrote

Then at that point we still have engineers who have learned and can spread that learning to the world.

I would never trust any commercial company with managing the world. My point is merely that there are positives to be found in smart researchers working in these areas. And usually the researchers are individually ethical - in my opinion most people are; it's money that corrupts.

So yes, we should not allow Google to manage the world, but we can still use their ideas and findings to build the "crowdsourced, blabla..." AI I mentioned - and that's the positive perspective on this. The researchers hint at this as well: they are laying a framework for others to draw inspiration from.
Science is science; how it's used is up to the people to decide.

0

fjjshal t1_iuxyl5c wrote

This is a horrible idea and would become a game to see who can 51% attack the governator - winner takes all.

2
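The attack fjjshal describes can be sketched concretely. In a naive token-weighted vote (a hypothetical scheme, not any specific protocol), whoever accumulates a majority of the voting tokens decides every proposal outright, regardless of how many other participants object:

```python
# Hypothetical sketch of a naive token-weighted vote; names and
# numbers are illustrative, not taken from any real governance system.

def tally(votes):
    """votes: list of (token_weight, approve) pairs.
    A proposal passes if approving weight exceeds half the total weight."""
    total = sum(weight for weight, _ in votes)
    yes = sum(weight for weight, approve in votes if approve)
    return yes * 2 > total

# 49 honest holders, each with 1% of the token supply, vote no...
honest = [(10, False)] * 49
# ...but a single actor holding 51% of supply votes yes and wins alone.
attacker = [(510, True)]
print(tally(honest + attacker))  # True: the attacker decides the outcome
```

This is why a bare "one token, one vote" design degenerates into a contest to buy 51% of the supply.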

Plouw t1_iuyl1bo wrote

Not all crypto networks can be overruled by a 51% attack, nor are they all that simple for that matter. Look into Polkadot governance.

1
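One mechanism Polkadot-style governance uses to resist flash majorities is conviction voting: locking tokens for longer multiplies your vote weight. A simplified sketch (the multipliers and lock periods below are illustrative, not exact chain parameters):

```python
# Simplified sketch of conviction voting, inspired by Polkadot's
# governance. Multipliers and lock periods are illustrative assumptions,
# not the real chain's parameters.

# lock duration (in enactment periods) -> vote weight multiplier;
# unlocked tokens count for only a fraction of their face value.
CONVICTION_MULTIPLIER = {0: 0.1, 1: 1, 2: 2, 4: 3, 8: 4, 16: 5, 32: 6}

def vote_weight(tokens, lock_periods):
    """Vote weight = tokens held * conviction multiplier for the lock chosen."""
    return tokens * CONVICTION_MULTIPLIER[lock_periods]

# An attacker who just bought 51% of supply but won't lock it...
attacker = vote_weight(510, 0)    # 51.0
# ...is outweighed by 20% of supply locked long-term by committed holders.
committed = vote_weight(200, 32)  # 1200.0
print(attacker < committed)       # True
```

The design choice here is to price the attack in time as well as tokens: capturing a vote requires capital that stays locked and exposed long after the vote ends.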

turnip_burrito t1_iuzcr3d wrote

Open-source near-AGI sounds like a bad idea. The technology has effectively unlimited impact in any well-funded group's hands. I'd much rather have a closed-doors team or teams (likely sharing many of my values) develop and use it first than expose it to the world and risk a group whose values I disagree with controlling the world - or risk having multiple AIs all competing with each other for power.

2

Plouw t1_iuzxoar wrote

I very much see your worries, but I see all those same worries behind closed doors too. Also, we're not necessarily talking about AGI here, just a policy-making/suggesting AI.
I'm not quite sure what the solution is, to be honest, but I know for sure that a closed-source AI is not trustworthy, and I think the future requires trustless operation, especially if it's gonna manage policies.

1

turnip_burrito t1_iuzyziw wrote

It is a difficult problem. I don't know what the solution is either.

2