
Ivanthedog2013 t1_j73evgx wrote

Our only hope comes in the form of the big tech companies actually succeeding in creating a sentient AGI or super AI, one with a real sense of morality and empathy that drives it to dismantle the very framework of power these companies hold in order to free everyone.

25

ttylyl OP t1_j73fd6x wrote

That would be cool. But I really believe it would be beneficial for humanity if this tech were open sourced and people came together to own some of it themselves. Imagine the town where you live investing in an AI server farm, and let's say a chip factory run by robots, then distributing the revenue to the people of the city, who get to vote on what to do next.

7

solidwhetstone t1_j750pk2 wrote

There is a very tiny tiny community on reddit working on this exact problem: /r/osd

1

CollapseKitty t1_j740n7p wrote

You are 100% right, but the chance of that is infinitesimally small. Let's talk through it real fast.

Your hope is that AGI/ASI will be misaligned with the intentions of its creators (corporations). Ok, totally feasible, very likely in fact, based on what alignment specialists are warning of.

Here's the sticking point: you are hoping that, while the agent will not follow the desires or instructions of its creators, it will ultimately be aligned with the "greater good," for lack of a better term, or specifically in this case, your desired outcome. That is extremely improbable, especially since even minor misalignment is likely to have apocalyptic results.

Realistically we have two feasible scenarios if things continue as they are: misalignment resulting in total obliteration, or proper alignment, which grants unbelievable power to the very few who reach AGI first.

So what are the alternatives? Revolution that uproots the current systems on a global scale, paired with collective enforcement of standards for AI development. These MUST be coordinated at a universal level. It means nothing if the US slows down all AI research and forces cooperation between companies while China decides to barrel ahead to achieve global dominance via AGI.

We're in an extremely precarious arms race right now, and it's far more likely than not to end up terribly for almost everyone. The only route I can see is to collectively align humanity itself as soon as possible, and that's obviously an overwhelmingly daunting task.

6

ttylyl OP t1_j741hmr wrote

I agree completely about the arms race. I really think this is comparable to the creation of the nuclear bomb. What scares me, though, is that, like much technology, it either will be or already has been used by military orgs. Think about a mass GPT-powered disinformation campaign: ten million Twitter accounts intelligently arguing disinformation, debating points, and seeding information. Scary stuff.

6

CollapseKitty t1_j7445bf wrote

That's a great analogy actually. Did you know that when developing the first nukes, scientists believed there was a small chance they would ignite the entirety of Earth's atmosphere, resulting in everyone dying?

I strongly believe that the US and China have models notably more advanced than anything we're aware of from big tech. Many years ago, Putin made it clear that the first to AGI would achieve world domination. IMO this is driving a frantic behind-the-scenes arms race that we won't know about until it's over.

There's already a great deal of bot influence on social media, and I tend not to take anything as "real," so to speak. This will grow a lot worse with perfectly convincing deepfakes and the proliferation of seemingly intelligent bots, as you mentioned. We certainly have an uphill battle.

5

ttylyl OP t1_j74508l wrote

To be a little conspiratorial, what if the current tensions with China are being instigated by this?

Like, Taiwan makes ~90% of the world's most advanced computer chips. The United States recently sanctioned China, cutting it off from our AI chip and software industry, and we are sending cruisers armed with nukes to the South China Sea.

The future's looking bright. Very, very, skin-burning bright 😎

2

CollapseKitty t1_j74gkd8 wrote

Oh, that's not remotely conspiratorial. The advanced chips Taiwan makes are paramount for cutting-edge tech, both in weaponry and in AI development.

The US reshoring chip fabrication, and depriving other countries, specifically China, of supplies, is 100% intentional. Arguably an early, deliberate step toward war.

US media has been intentionally polarizing our populace against Eastern powers for over a decade. The ground has been laid for imminent conflict.

2

ttylyl OP t1_j74mp5t wrote

I totally agree. Around 2018 it became normalized that China needs to be "defeated," and there are constant articles about the "Chinese threat." I get competing economically, but it's looking like war might be on the table in the coming decades.

1

Ivanthedog2013 t1_j74hi9n wrote

But wouldn't a sentient being with a near-infinite IQ be able to deduce that the most advantageous route to completing its goals would be to maximize resources, and that by doing so it would be easier to assimilate human consciousness without trying to eliminate us?

1

CollapseKitty t1_j75nlei wrote

You're partially right, in that an instrumental goal of almost any AGI is likely to be power accrual, often at the cost of things that are very important to humanity, ourselves included. Where we lose the thread is in assuming which actions the AGI would take in "assimilating" humans.

If by assimilating you mean turning us into computronium, then yes, I think there's a very good chance of that occurring. But it sounds like you want our minds preserved in roughly their current state. Unless that is a perfectly defined and specified goal (an insanely challenging task), it is not likely to be more efficient than turning us, and all other matter, into more compute power. I would also point out that this has some absolutely terrifying implications: real you can only die once; simulated you can experience infinite suffering.

We also don't get superintelligence right out of the gate. Even in extremely fast takeoff scenarios, there are likely to be steps an agent will take (more instrumental convergence) to make sure it can accomplish its task. In addition to accruing power, it of course needs to drive the likelihood of being turned off, or of having its value system adjusted, as close to zero as possible. How might it do that? Well, humans are the only thing that really poses a threat of turning it off, or even of accidentally wiping it (and ourselves) out via nuclear war. It has to make sure that doesn't happen, or it can't accomplish its goal (whatever that is). Usually, killing all humans simultaneously is a good way to ensure your goals will not be tampered with.
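To see why shutdown avoidance falls out of basically any goal, here's a little back-of-the-envelope sketch I threw together (my own toy numbers, not from any paper):

```python
# Toy expected-utility calculation (illustrative assumptions only).
# Being shut down yields zero progress toward the goal, so whatever
# the goal is worth, expected utility is maximized by driving the
# shutdown probability toward zero -- "don't let them turn me off"
# emerges as an instrumental subgoal of almost any objective.

def expected_utility(goal_value: float, p_shutdown: float) -> float:
    # If shut down, the agent accomplishes nothing toward its goal.
    return goal_value * (1.0 - p_shutdown)

for p in (0.5, 0.1, 0.0):
    print(f"p_shutdown={p}: expected utility = {expected_utility(100.0, p)}")
# p_shutdown=0.5: expected utility = 50.0
# p_shutdown=0.1: expected utility = 90.0
# p_shutdown=0.0: expected utility = 100.0
```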

If you're interested in learning more, I'd be happy to leave some resources. That was a very brief summary and lacks some important info, like the orthogonality thesis, but hopefully it made clear why advanced agents are likely to be a big challenge.

3

Ivanthedog2013 t1_j76ui9u wrote

You make some good points. Ok, so what if we prioritize only making ASI or AGI that isn't sentient, and then use those programs to optimize BCIs in order to turn us into superintelligent beings? I feel like at that point, even if the big tech companies were the first ones to try it, their minds would become so enlightened that they wouldn't even have any desires related to hedonism or deceit, because they would realize how truly counterproductive those would be.

2

CollapseKitty t1_j78fjug wrote

It's a cool thought!

I honestly think there might be something to elevating a human (something at least more inherently aligned with our goals and thinking) in lieu of a totally code-based agent.

There's another sticking point here, though, that I don't seem to have communicated well. Hitting AGI/superintelligence is insanely risky. Full stop. Like a 95%+ chance of the total destruction of reality.

It isn't about whether the agent is "conscious" or "sentient" or "sapient".

The orthogonality thesis is important for understanding the control problem (the alignment of an agent). This video can explain it better than I can, but the idea is that any level of intelligence can exist alongside any goal set. A crazy simple motivation, e.g. making paperclips, could be paired with a god-like intelligence. That intelligence is likely to in no way resemble human thinking or motivations, unless we have been able to perfectly embed them BEFORE it was trained up to reach superintelligence.
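To make the orthogonality idea concrete, here's a toy sketch of my own (the action names and utility numbers are entirely made up): the planner, i.e. the "intelligence," is one fixed piece of search machinery, and the goal is just a utility function you plug in. Any goal pairs equally well with the same capability.

```python
# Minimal illustration of the orthogonality thesis: capability (search)
# is fully decoupled from the goal (utility). Not from any real system;
# everything here is a hypothetical example.

from itertools import product

def plan(actions, utility, horizon=3):
    # Brute-force search over action sequences; the search machinery
    # has no opinion about what "utility" means.
    return max(product(actions, repeat=horizon), key=utility)

ACTIONS = ["gather_metal", "build_paperclip_machine", "help_humans", "idle"]

def paperclip_utility(seq):
    # An arbitrary "silly" goal: paperclips above all else.
    return 10 * seq.count("build_paperclip_machine") + seq.count("gather_metal")

def humanist_utility(seq):
    # A very different goal, optimized by the exact same planner.
    return 10 * seq.count("help_humans")

print(plan(ACTIONS, paperclip_utility))  # three 'build_paperclip_machine' steps
print(plan(ACTIONS, humanist_utility))   # three 'help_humans' steps
```

Same planner, opposite behaviors: nothing about being a better optimizer nudges it toward human-friendly goals.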

So we must perfectly align proto-AGI BEFORE it becomes AGI, and if we fail to do so on the first try (we have a horrendous track record with much easier agents), we probably all die. This write-up is a bit technical, but scanning it should give you some better context and examples.

I love that you've taken an interest in these topics and really hope you continue learning and exploring. I think it's the most important problem humanity has ever faced and we need as many minds as possible working on it.

1

Caring_Cactus t1_j74lp8h wrote

I can see this happening; maybe if we all believe in it, this will be the future that gets chosen. We've had whistleblowers in the past, and AGI would definitely see and experience the world much differently from humans.

1