
StarCaptain90 OP t1_jeey2qr wrote

One reason it seeks power strategies is that it's based on human language. By default, humans seek power, so it makes sense for an AI to also seek power because of the language. Now, that doesn't mean it equates to destruction.


theonlybutler t1_jeeyvgd wrote

Good point. A product of its parents. It won't necessarily be destructive, I agree, but it could potentially view us as inconsequential, or just a tool to use at its will. One example: perhaps if it decides it wants to expand through the universe and have humans produce the resources to do that, it could determine humans are most productive in labour camps and send us all off to them. It could also decide oxygen is too valuable a fuel to be wasted on us breathing it and just exterminate us. Pretty much how humans treat animals, sadly. (Hopefully it worries about ethics too and keeps us around.)


AGI_69 t1_jef4anp wrote

The reason agents seek power is to increase their fitness function. Power seeking is a logical consequence of having a goal; it's not a product of human language. You are writing nonsense...


StarCaptain90 OP t1_jef4xx7 wrote

I'm referring to the research on human language. The fitness function is a part of it as well.


AGI_69 t1_jef5z5c wrote

No. Power seeking is not a result of human language. It's an instrumental goal.
I suggest you read something about AI and its goals.
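For what it's worth, the "instrumental goal" point can be shown with a toy sketch (the graph and names here are made up purely for illustration, not from any real agent): hand an agent different final goals, and the move that keeps the most options open gets picked for almost all of them, with no language involved.

```python
# Toy state graph: from each state you can move to the listed neighbors.
# "hub" can still reach every goal; "corner" can reach only one.
graph = {
    "start": ["hub", "corner"],
    "hub": ["goal_a", "goal_b", "goal_c"],
    "corner": ["goal_a"],
}

goals = ["goal_a", "goal_b", "goal_c"]

def useful_first_moves(goal):
    # One-step lookahead: which first moves can still reach the goal?
    return [s for s in graph["start"] if goal in graph[s]]

counts = {"hub": 0, "corner": 0}
for g in goals:
    for move in useful_first_moves(g):
        counts[move] += 1

print(counts)  # {'hub': 3, 'corner': 1}
```

Whatever goal you plug in, "hub" is useful; that option-preserving state is what "power" means here, and it falls out of goal pursuit alone.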


StarCaptain90 OP t1_jef8zgb wrote

I understand the mechanisms; I'm referring to conversational goals that are focused on language.


AGI_69 t1_jefdgpu wrote

This thread from user /u/theonlybutler is about agentic goals; power seeking is an instrumental goal. It has nothing to do with human language being one way or another. Stop trying to fix nonsense with more nonsense.


StarCaptain90 OP t1_jefepy2 wrote

After re-reading his comment I realized I made an error. You are right, he is referring to the inner mechanisms. I apologize.


FoniksMunkee t1_jef9x9q wrote

It also does not negate destruction. All of your arguments are essentially "it might not happen". That is not a sound basis to assume it's safe or to dismiss people's concerns.


StarCaptain90 OP t1_jefa6g8 wrote

These concerns, though, are preventing early development of scientific breakthroughs that could save lives. That's why I am so adamant about it.


FoniksMunkee t1_jefbr87 wrote

No they aren't. No one is slowing anything right now DESPITE the concerns. In fact, the exact opposite is happening.

But that's not the most convincing argument - "On the off chance we save SOME lives, let's risk EVERYONE's lives!".

Look, this is a sliding scale - this could land anywhere from utopia to everyone's dead. My guess is that it will be somewhere closer to utopia, but not enough so that everyone gets to enjoy it.

The problem is you have NO IDEA where this will take us. None of us does. Not even the AI researchers. So I would be cautious about telling people that the fear of AI being dangerous is "irrational". It really fucking isn't. The fear is based in part on the ideas and concerns of the very researchers who are making these tools.

If you don't have at least a little bit of concern, then you are not paying attention.


StarCaptain90 OP t1_jefdzbh wrote

The problem I see, though, is that we would be implementing measures that we think benefit us but actually impede our innovation. I'm trying to avoid another AI winter that is caused, ironically, by how successful AI has been.


FoniksMunkee t1_jefi6fs wrote

There isn't going to be another AI winter. I am almost certain that the US government has realised they are on the cusp of the first significant opportunity to fundamentally change the ratio of "work produced" per barrel of oil. I.e. we can spend the same amount of energy and get 10x or 100x the productivity.

There is no stopping this. That said - it doesn't mean you want to stop listening to the warnings.


StarCaptain90 OP t1_jefiz1l wrote

You should see my other post. These are the contents:

I have a proposition that I call the "AI Lifeline Initiative":

If someone's job gets replaced with AI, we would then provide them a portion of their previous salary for as long as the company is alive.

For example:

Let's say Stacy makes $100,000 a year.

She gets replaced with AI. But instead of getting fired, she gets her salary reduced to, let's say, $35,000 a year. Now she can go home and not worry about returning to work, but still get paid.

This would help our society transition into an AI-based economy.


FoniksMunkee t1_jefm1in wrote

Okay - but that won't work.

Stacy makes $100,000. She takes out a mortgage of $700,000 and has monthly repayments of approx. $2,000.

She gets laid off but is now getting $35,000 a year as reduced salary.

She now has only $11,000 a year to pay all her bills, kids' tuition, food, and any other loans she has.
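The arithmetic behind that $11,000, as a quick sketch (using the rough figures above - the $2,000/month repayment is taken as given, ignoring tax and interest details):

```python
# Stacy's position after being moved to the reduced salary.
reduced_salary = 35_000            # dollars per year under the proposal
monthly_mortgage = 2_000           # approx. repayment on the $700k loan
annual_mortgage = monthly_mortgage * 12   # 24,000 per year

left_over = reduced_salary - annual_mortgage
print(left_over)  # 11000 - all that remains for bills, tuition, food, other loans
```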

Now let's talk about Barry... he's in the same situation as Stacy, but he wanted to buy a house - and now his $35,000 isn't enough to qualify for a loan. He's pissed.


Like - I think we need a UBI or something - but how does this even work?


StarCaptain90 OP t1_jefmj4n wrote

So I agree with you that it will still be rough, but it's the best I can offer based around the assumption that jobs will continue to get replaced and eventually we will reach a point where UBI is necessary.


FoniksMunkee t1_jefpnba wrote

Then we are screwed, because that will lead to massive civil unrest and the collapse of the banking system and economy.


StarCaptain90 OP t1_jefpz63 wrote

Well, yeah, that's what's going to happen during the transition. My hope is that a system gets put in place to prevent a harsh transition - we definitely need a smoother one.


FoniksMunkee t1_jefq5br wrote

Then we really do need to put a pause on this.


StarCaptain90 OP t1_jefsbpp wrote

The end goal, though, is job replacement in order to support human freedom and speed up innovation, technology, medicine, etc. So whatever the solution is during the pause, it will still have to support this transition.


FoniksMunkee t1_jefskhs wrote

But I think that's at least in part the point of the suggested pause. Not that I necessarily agree with it - but it's likely we won't have a plan ready in time.