Comments

Ivanthedog2013 t1_j73evgx wrote

Our only hope comes in the form of the big tech companies actually succeeding in creating a sentient AGI or super-AI that actually has a sense of morality and empathy, one that causes it to deconstruct the very framework of power these companies hold in order to free everyone.

25

ttylyl OP t1_j73fd6x wrote

That would be cool. But I really believe it would be beneficial for humanity if this tech were open-sourced and people came together to own some of it themselves. Imagine the town where you live investing in an AI server farm and, let's say, a chip factory run by robots, then distributing the revenue to the people of the city. The people are allowed to vote on what to do next.

7

solidwhetstone t1_j750pk2 wrote

There is a very tiny tiny community on reddit working on this exact problem: /r/osd

1

CollapseKitty t1_j740n7p wrote

You are 100% right, but the chance of that is infinitesimally small. Let's talk through it real fast.

Your hope is that AGI/ASI is misaligned with the intentions of its creators (corporations). Ok, totally feasible; very likely, in fact, based on what alignment specialists are warning of.

Here's the sticking point: you are hoping that, while the agent will not follow the desires or instructions of its creators, it will ultimately be aligned with the "greater good," for lack of a better term, or specifically in this case, your desired outcome. This is extremely improbable, especially as even minor misalignment is likely to have apocalyptic results.

Realistically, we have two feasible scenarios if things continue as they are: misalignment resulting in total obliteration, or proper alignment, which grants unbelievable power to the very few who reach AGI first.

So what are the alternatives? A revolution that uproots the current systems on a global scale, paired with collective enforcement of standards for AI development. These MUST be coordinated at a universal level. It doesn't mean anything if the US slows down all AI research and forces cooperation between companies if China decides to barrel ahead to achieve global dominance via AGI.

We're in an extremely precarious arms race right now, and it's far more likely than not to end up terribly for almost everyone. The only route I can see is to collectively align humanity itself as soon as possible, and that's obviously an overwhelmingly daunting task.

6

ttylyl OP t1_j741hmr wrote

I agree completely about the arms race. I really think this is comparable to the creation of the nuclear bomb. What scares me, though, is that, like much technology, it either will be or already has been used by military orgs. Think about a mass GPT-powered disinformation campaign: ten million Twitter accounts intelligently arguing disinformation, debating points, and seeding information. Scary stuff.

6

CollapseKitty t1_j7445bf wrote

That's a great analogy actually. Did you know that when developing the first nukes, scientists believed there was a small chance they would ignite the entirety of Earth's atmosphere, resulting in everyone dying?

I strongly believe that the US and China have models notably more advanced than anything we're aware of from big tech. Many years ago Putin made it clear that the first to AGI would achieve world domination. IMO this is driving a frantic behind the scenes arms race that we won't know about until it's over.

There's already a great deal of bot influence on social media and I tend not to take anything as "real" so to speak. This will grow a lot worse with perfectly convincing deep fakes and the proliferation of seemingly intelligent bots as you mentioned. We certainly have an uphill battle.

5

ttylyl OP t1_j74508l wrote

To be a little conspiratorial, what if the current tensions over China are being instigated by this?

Like, Taiwan makes ~90% of the world's computer chips. The United States recently cut China off from our AI chip and software industry, and we are sending cruisers to the South China Sea with nukes.

The future's looking bright. Very, very, skin-burning bright 😎

2

CollapseKitty t1_j74gkd8 wrote

Oh, that's not remotely conspiratorial. The advanced chips Taiwan makes are essential for cutting-edge tech, both in weaponry and in AI development.

The US's reshoring of chip fabrication, and deprivation of supplies to other countries, specifically China, is 100% intentional. Arguably an early and intentional step in moving toward war.

US media has been intentionally polarizing our populace against Eastern forces for over a decade. The ground has been laid for inclement conflict.

2

ttylyl OP t1_j74mp5t wrote

I totally agree. Around 2018 it became normalized that China needs to be "defeated," and there are constant articles about the "Chinese threat." Like, I get competing economically, but it's looking like war might be on the table in the coming decades.

1

Ivanthedog2013 t1_j74hi9n wrote

But wouldn't a sentient being with a near-infinite IQ be able to deduce that the most advantageous route to completing its goals would be to maximize resources, and that by doing so it would be easier to assimilate human consciousness without trying to eliminate us?

1

CollapseKitty t1_j75nlei wrote

You're partially right, in that an instrumental goal of almost any AGI is likely to be power accrual, often at the cost of things that are very important to humanity, ourselves included. Where we lose the thread is in assuming what actions the AGI would take in "assimilating" humans.

If by assimilating you meant turning us into computronium, then yes, I think there's a very good chance of that occurring. But it sounds like you want our minds preserved in a similar state as they currently exist. Unless that is a perfectly defined and specified goal (an insanely challenging task), it is not likely to be more efficient than turning us, and all matter, into more compute power. I would also point out that this has some absolutely terrifying implications. Real you can only die once. Simulated you can experience infinite suffering.

We also don't get superintelligence right out of the gate. Even in extremely fast takeoff scenarios, there are likely to be steps an agent will take (more instrumental convergence) in order to make sure it can accomplish its task. In addition to accruing power, it of course needs to bring the likelihood of being turned off, or having its value system adjusted, as close to zero as possible. Now how might it do that? Well, humans are the only things that really pose a threat of turning it off, or even of accidentally wiping it and ourselves out via nuclear war. Gotta make sure that doesn't happen or you can't accomplish your goal (whatever it is). Usually killing all humans simultaneously is a good way to ensure goals will not be tampered with.
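To make that concrete, here's a toy sketch of the instrumental-convergence logic (my own illustration with made-up action names and probabilities, not anything from an actual alignment paper): whichever terminal goal you plug in, plans that first secure the agent against shutdown come out on top, because a shut-down agent scores zero.

```python
# Toy model of instrumental convergence: for ANY terminal goal,
# the highest-scoring plan includes neutralizing the off switch.
# (Illustrative only -- the actions and probabilities are invented.)
from itertools import product

ACTIONS = ["disable_off_switch", "make_paperclips", "cure_disease"]

def expected_utility(plan, terminal_goal, p_shutdown=0.5):
    # If the agent never secures itself, assume some chance it gets
    # turned off before finishing and therefore scores nothing.
    p_survive = 1.0 if "disable_off_switch" in plan else 1 - p_shutdown
    return p_survive * plan.count(terminal_goal)

for goal in ["make_paperclips", "cure_disease"]:
    best = max(product(ACTIONS, repeat=3),
               key=lambda plan: expected_utility(plan, goal))
    print(goal, "->", best)
    # Both goals produce a plan that starts with 'disable_off_switch',
    # even though neither goal mentions the off switch at all.
```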

If you're interested in learning more, I'd be happy to leave some resources. That was a very brief summary and lacks some important info, like the orthogonality thesis, but hopefully it made it clear why advanced agents are likely to be a big challenge.

3

Ivanthedog2013 t1_j76ui9u wrote

You make some good points. Ok, so what if we prioritize only making ASI or AGI that isn't sentient, and then use those programs to optimize BCIs in order to turn us into superintelligent beings? I feel like at that point, even if the big tech companies were the first ones to try it, their minds would become so enlightened that they wouldn't even have any desires related to hedonism or deceit, because they would realize how truly counterproductive it would be.

2

CollapseKitty t1_j78fjug wrote

It's a cool thought!

I honestly think there might be something to elevating a human (something at least more inherently aligned with our goals and thinking) in lieu of a totally code-based agent.

There's another sticking point here, though, that I don't seem to have communicated well. Hitting AGI/superintelligence is insanely risky. Full stop. Like a 95%+ chance of total destruction of reality.

It isn't about whether the agent is "conscious" or "sentient" or "sapient".

The orthogonality thesis is important in understanding the control problem (alignment of an agent). This video can explain it better than I can, but the idea is: any level of intelligence can exist alongside any goal set. A crazy simple motivation, e.g. making paperclips, could be paired with a god-like intelligence. That intelligence is likely to in no way resemble human thinking or motivations, unless we have been able to perfectly embed them BEFORE it was trained up to reach superintelligence.
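In the same toy spirit as before (again my own sketch, not something from the video), you can see the orthogonality thesis as capability and goal being two independent dials: the search code below never changes, only the utility function plugged into it does.

```python
# Toy model of the orthogonality thesis: the same planner, at the same
# "intelligence" (search depth), optimizes whatever goal it is handed.
# (Illustrative only -- the actions and goals are invented.)
from itertools import product

ACTIONS = ["mine_ore", "build_factory", "make_paperclips", "help_humans"]

def best_plan(utility, depth):
    """Exhaustively search all action sequences of length `depth` and
    return the one the given utility function scores highest."""
    return max(product(ACTIONS, repeat=depth), key=utility)

# A "crazy simple" goal: count paperclip-making steps...
paperclip_utility = lambda plan: plan.count("make_paperclips")
# ...versus a human-friendly goal: count helpful steps.
friendly_utility = lambda plan: plan.count("help_humans")

# Identical capability, opposite behavior:
print(best_plan(paperclip_utility, depth=3))  # all paperclip-making
print(best_plan(friendly_utility, depth=3))   # all helping
```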

So we must perfectly align proto-AGI BEFORE it becomes AGI, and if we fail to do so on the first try (we have a horrendous track record with much easier agents), we probably all die. This write-up is a bit technical, but scanning it should give you some better context and examples.

I love that you've taken an interest in these topics and really hope you continue learning and exploring. I think it's the most important problem humanity has ever faced and we need as many minds as possible working on it.

1

Caring_Cactus t1_j74lp8h wrote

I can see this happening; maybe if we all believe in it, this will be the future that gets chosen. We've had whistleblowers in the past, and AGI would definitely see and experience the world much differently from humans.

1

just-a-dreamer- t1_j73fx1n wrote

People also didn't care about the poor, old, or infirm in the 1920s. Or Black people. They all lived pretty much horrible lives. Such is human nature: nobody cares until it hits you.

Job losses in the middle class during the Great Depression kicked off the New Deal programs under FDR, because people were angry and people could vote.

As long as unemployment is at 3%, nothing changes. When unemployment is at 25%, we are talking business concerning UBI.

Politicians are in the business of buying votes; if nobody wants UBI, why would they push for it?

10

ttylyl OP t1_j740eys wrote

The issue I'm seeing is that the population would be in two and a half classes: unemployed low-skill people; employed high-skill people (things still needed after AI, so like notaries, maybe doctors, entertainers, people to work on/monitor AI); and AI-owning people (large investors in OpenAI, connected people, etc.).

Eventually they will realize that using their AI/robot labor power to feed, house, and fund the unemployed lower-skill people doesn't help their goals, so they will spend less over time. This will happen faster with competition: the more you spend on the non-AI-owning class, the further you fall behind the people who don't.

If this continues, the unemployed former working class will be functionally pushed from society; they won't be able to use their work as a method of negotiation, like labor unions, etc. Our lives will be at the whim of people who already clearly don't care if we're poor. What happens when they don't need us at all?

3

just-a-dreamer- t1_j741g23 wrote

You have one vote, so that is up to you. I don't know what else to say.

There is no evil conspiracy that keeps people from voting. One man still has one vote and can cast it for whatever purpose he sees fit.

FDR won election not just because people were poor; he won because they were poor and unemployed, and he offered relief.

Any politician that offers relief, eventually UBI, would get voted in.

0

ttylyl OP t1_j741p1y wrote

Mate, we've been voting for a while, and we just keep getting poorer while our productivity skyrockets.

3

just-a-dreamer- t1_j742bb0 wrote

Looks like the wrong people get voted in. So we must do better, then.

0

lovemysunbros t1_j74ec5l wrote

If you can't see we are in a two-party dictatorship, with two parties that agree on the domination of elites over the masses (and little else matters to them), you are blind. Voting is a sham, bruh.

4

ttylyl OP t1_j74njlq wrote

Voting is an opinion poll for the elites' riot tax (give just enough so they don't flip).

1

ttylyl OP t1_j742llq wrote

I'm of the opinion that maybe it's intentional that none of the people we are able to vote for end up helping us.

1

just-a-dreamer- t1_j743f43 wrote

In my time on this world, I've learned that people care little outside of their family life and work-buddy networks.

As long as there is just 3% unemployment, there is no demand for politicians to change anything.

1

Iffykindofguy t1_j739ngp wrote

I don't know anyone who is happy that big companies are buying it all up. Your line of thinking seems to be the norm.

5

ttylyl OP t1_j73awos wrote

I see a lot of people assuming that when mass-scale unemployment hits, the government and businesses will work amicably to help the people. I do not see that happening, at least not at all at the federal level. It could lead to mass starvation.

6

ihateshadylandlords t1_j73k7rb wrote

You also have to consider that if the masses don’t have money to buy products, then the companies won’t have money either. If companies have no money, then they’ll go down too. Not to mention they won’t be able to buy off politicians if consumers don’t have money to buy anything.

Plus, if we get to the point where we can make AGI and/or ASI, that might be used to replace executives and politicians. The elites are distanced from AI and its potential problems, but I don't think they're immune to it.

6

ttylyl OP t1_j73yfkv wrote

They have no need for money if they own the means of production. If their goal is to gain power, and they have infinite AI power, we only represent a threat, right? Or am I misunderstanding? In the scenario I am imagining, money would likely be abandoned or heavily altered. Or the rich wouldn't need money; money becomes a thing of the poor, a kind of food stamps.

2

ihateshadylandlords t1_j74ukn2 wrote

What good is owning the means of production if you have no customers? Companies exist to maximize shareholder value. Owning a bunch of inventory that no one can buy doesn’t do anything for shareholders.

Also, even if a company gets too powerful, the government will just nationalize it or break it up like they did with Standard Oil.

1

ttylyl OP t1_j74w3hf wrote

The issue I'm seeing is that the population would be in two and a half classes: unemployed low-skill people; employed high-skill people (things still needed after AI, so like notaries, maybe doctors, entertainers, people to work on/monitor AI); and AI-owning people (large investors in OpenAI, connected people, etc.).

Eventually they will realize that using their AI/robot labor power to feed, house, and fund the unemployed lower-skill people doesn't help their goals, so they will spend less over time. This will happen faster with competition: the more you spend on the non-AI-owning class, the further you fall behind the people who don't.

If this continues, the unemployed former working class will be functionally pushed from society; they won't be able to use their work as a method of negotiation, like labor unions, etc. Our lives will be at the whim of people who already clearly don't care if we're poor. What happens when they don't need us at all?

1

ihateshadylandlords t1_j74x1gi wrote

The issue is companies having the AI but no one wanting to buy the products and services from said AI. People need income to buy the products and services generated by these AI companies. The companies can decide they don't need the people, but then who will buy their services at that point?

2

[deleted] t1_j7448oy wrote

[deleted]

1

ttylyl OP t1_j746uzy wrote

Exactly. Money will become a thing of the poor, like food stamps, slowly shrinking year by year, with average citizens having zero control over their situation or the outcome of their lives.

1

ihateshadylandlords t1_j74uwm6 wrote

Thinking a company can exist without money is…laughably wrong, at best. AGI is a tool; it's not a genie that can create something out of nothing.

1

[deleted] t1_j76q616 wrote

[deleted]

1

ihateshadylandlords t1_j76y8v7 wrote

Companies still need money to function though. Wages/salaries aren’t the only expenses a company incurs.

1

[deleted] t1_j77wemk wrote

[deleted]

1

ihateshadylandlords t1_j787aj2 wrote

Search any company’s income statement and look at the various line items for examples.

1

[deleted] t1_j7c4am8 wrote

[deleted]

1

ihateshadylandlords t1_j7cfjsm wrote

…you can't search the internet or think about expenses companies incur daily? I'll give you one: cost of goods sold.

You’re being intentionally obtuse because you realize you’re wrong. Like I said initially, companies need money to function.

1

Iffykindofguy t1_j73bnxf wrote

If the GOP is in control, you're right, they won't. If the Dems are, they might, like with the COVID unemployment benefits. It doesn't make any financial sense for them to let people die off and risk riots, civil war, etc.

4

ttylyl OP t1_j73bzlw wrote

That's true, I didn't think about the military. I hope enough of them would risk going hungry and stand up for us instead of working for the state and getting paid. I imagine if a general or two decided to do the right thing, tons of soldiers would join in to help. Finally, an American war for good!

0

Iffykindofguy t1_j73c3tg wrote

Uhh, brother, I'm not talking about the military. Do you not know how many bodies there are in the US?

2

ttylyl OP t1_j73chzg wrote

Yes, but the hungry masses against the military would be an impossible fight. Riots are no threat to the military; they could easily quell rebellion in the States. The only way the war could be won is if a significant portion of the military fought for regular people, not the AI-owning class. Civil war in the traditional sense.

5

Iffykindofguy t1_j73coj0 wrote

This isn't a book; it wouldn't be one side against the other. It would be mass chaos and violence. There isn't a unified "one percent" acting as a unit; they're also all out to kill each other as well. It just makes more sense to pay people off than to burn everything. The amount of money these people have is staggering.

2

savedposts456 t1_j73gchg wrote

Exactly. A UBI would be cheaper for the elites than dealing with widespread violent chaos. There's some famous quote that says something like: people are only three missed meals away from violence.

2

ttylyl OP t1_j742ce6 wrote

Tell that to every population avoidably/intentionally starved in the last century 👀

It can and does happen

2

ttylyl OP t1_j73d6jm wrote

What you describe sounds like a book.

Corporations already work together to raise prices at the same time, especially in medicine and housing. This leads to death and homelessness. Profit is clearly placed before human life. Why would they fight each other and lose money? It is the owning class vs. the leasing class; this is a country of usury.

1

Iffykindofguy t1_j73dj8p wrote

Why would they fight each other and lose money? Because they're short-sighted humans. Life is more chaotic than you seem to believe. Why would they let everyone else die off so there's no one to buy their shit?

1

ttylyl OP t1_j73eh3x wrote

Because they have no need to sell; they have all the power they need without human labor, and so human labor will be cut accordingly. There will be a huge class of people useless to the labor market. We have this today: we call them homeless. That is how we currently treat people who cannot participate in the labor market. Why will it suddenly get better when corporations no longer need us?

Again, this is in a scenario where AI is allowed to replace humans en masse; it assumes governments don't find a way to amicably deal with the unemployment.

3

Frumpagumpus t1_j73lxwi wrote

Big companies are buying up AI companies almost as quickly as AI devs are cashing out and doing their own thing.

A couple of extremely prominent examples, just off the top of my head, of what I'm sure is a broader industry trend:

OpenAI's product manager for ChatGPT

Tesla's head of AI (Andrej Karpathy)

6/8 of the coauthors of the Transformer paper (if I remember correctly)

I am sure some of them plan to get acqui-hired, but a lot of them seem to want speed and independence, e.g. John Carmack (who left FB; though he didn't do AI at Facebook, I think he is also a pretty good example of this).

2

Iffykindofguy t1_j73m3ub wrote

What is your point? I said I didn't see anyone happy about this, not that it wasn't happening. I don't feel any safer with a rich individual in control than with a rich corporation.

1

Frumpagumpus t1_j73m8jw wrote

If their goal is to buy everything up, it is sand slipping through their fingers.

(So far, anyway. I am sure our legal system is going to come and save the day for us any moment now... (and by that I mean doom us).)

1

Iffykindofguy t1_j73mhjx wrote

I hope so! Though I don't see any reason to believe that some individual, least of all a libertarian, would come forward to provide, like, a counter-AI to the businesses. They'd just make a new company and be king themselves.

1

Frumpagumpus t1_j73mxjw wrote

As one of geolibertarian leanings, I would say the important bit that makes a king's power tyrannical is his claim to all the land (even if only via proxy nobles), which prevents you from sustaining yourself.

But I don't agree that that is a necessary outcome of AI.

1

Iffykindofguy t1_j73nvi7 wrote

There shouldn't be kings or autocrats or any individual who runs everything. There are no individuals so far ahead of everyone else that they can run things well for the collective.

1

Frumpagumpus t1_j73tgbw wrote

I wouldn't want to live under an autocrat, but I certainly don't mind living under somebody or somebodies (oligarchy), or preferably some system, if it means I get to delegate some of the responsibility for the state of things.

I also would prefer that there be multiple collectives/states and that I could choose between them as freely as possible.

1

Iffykindofguy t1_j73twuk wrote

I don't mind living under someone; a government is required. It's the only means of fighting the rich anyone has.

1

Frumpagumpus t1_j73usci wrote

Governments are usually filled with people of a richer persuasion; certainly it would be weird, I think, if they gave themselves a worse deal than their populace.

I also think, to rehash a previous argument on this forum, that rich people's wealth is somewhat overstated. It mostly manifests in the form of equity, which represents control over productive assets rather than some physical wealth you could actually use to sustain yourself if you were to take it from them piecemeal and break up their companies.

But I am very strongly opposed to land wealth, which is a large portion of the book value of companies and probably the single largest portion of said value of any particular kind of asset.

1

Iffykindofguy t1_j73v4k0 wrote

Usually, but we are working on that. If the GOP wins all three next cycle, it's over, though. Why would they give themselves a worse deal than their populace? Who said they would? Don't do that; it's so embarrassing when people don't have a reasonable response, so they have to pretend like the other person's acting extreme. They can retain their wealth and spend a little bit of it to keep the masses from turning on them. You are delusional if you think land matters more than the wealth the 1% controls right now. Short of total worldwide societal collapse, those days are gone.

1

Frumpagumpus t1_j73xquv wrote

Some wealth is justly earned; land wealth is like 99% unearned.

Also, like I said, a plurality of their wealth is in fact land wealth when you get down to the assets behind all the various financial instruments: most loans are mortgages, student loans are quickly transformed into college campuses, and even auto loans basically exist to prop up sprawl. And the remaining kind, government debt, is in part collateralized by public lands (and all currency comes from said debt).

1

crap_punchline t1_j73s62t wrote

I think to understand the impacts of AI, we need to look at the earliest effects that we've already seen:

  • Both the proprietary models and the open-source free models have become sufficiently powerful to put some people out of work (independent anime illustrators, for one). I doubt they have been able to immediately walk into another job, because it takes a lot of time to learn new skills, and those too are at very near risk of automation.
  • The way in which people have benefitted from the AIs so far is that they give people the power to create things immediately for next to nothing. That also means those things are worth next to nothing.
  • The effect then is driving the cost of goods and services close to zero while making them vastly more available and virtually instantaneous.

The only way to make money out of this is to own a slice of the means of production and that means having shares in Microsoft and Alphabet. If you don't own a slice of this then the only way you benefit is from the cost of goods and services being driven to zero.

This process increasingly closes the door on everybody's hope of getting richer than other people. You will either be in the class of people who get wealthier through having a slice of capital that recursively sucks up productivity by owning the means of production, or you will be in the class of people who will be able to live increasingly affordably but won't be able to obtain power and luxury.

Ultimately, everybody will gain from this situation but some will gain more massively than others.

The battle will come when people say "OpenAI & Alphabet used all of our work output to create these new means of production" and demand that this be divided up among society. Politicians will probably not care in sufficient quantities, as many of them will be invested in these companies anyway. Plus, everybody's quality of life will be rapidly improving, so why rock the boat?

3

ttylyl OP t1_j73xjcz wrote

I'm not sure if I entirely agree, but your last point is really good. These AIs are trained on us, so it kinda makes sense that way.

1

No_Ninja3309_NoNoYes t1_j7481os wrote

I have no PhD in economics, but IMO our brains as means of production still beat AI. Our neurons are fully optimised and don't need activation-function tweaking. ChatGPT can talk the talk, but you can't show it a screenshot. It can't walk the walk.

Unfortunately, there is always someone willing to do whatever any of us can do, cheaper. With globalisation, maximizing for cheap labor, low taxes, and high productivity has become easy. Politicians will play into these fears and go for protectionism, but in the long term that doesn't work.

Trying to do it outside corporations and governments is not feasible right now and is liable to be exploited. I mean, look at what OpenAI did to the open-source community. But there's still hope: for instance, more affordable small-scale models like sentence transformers.

2

ttylyl OP t1_j748na7 wrote

I see a future where groups of regular people, however they find each other, will invest in training an AI model to accomplish a simple job very well, let's say an automated call center. They can then be contracted by other companies who need customer service.

I like the idea of people having their own piece of the AI market, because for most of human history our labor was a negotiation tool.

1

phriot t1_j73jbn7 wrote

A lot of people do live paycheck to paycheck, but for those who don't, why not just buy a total market index fund? Companies will use AI. Their market values will skyrocket. You'll share in all of that, without having to pick and choose which will handle AI correctly.

1

ttylyl OP t1_j73yl3j wrote

I think because they want to be the ones in control of the product or company, for maximum revenue.

1

phriot t1_j748kg5 wrote

There's a huge gap between "paycheck to paycheck" and "can afford to buy a controlling share in a company."

1

ttylyl OP t1_j748ql3 wrote

That gap is like 35% of the population, if that.

1

phriot t1_j749q6w wrote

Haha, okay. We have roughly a top 15% household income. I guess I'm off to buy 51% of an AI company. I'll let you know how it goes.

1

ttylyl OP t1_j74n5ig wrote

No, as in 60% of Americans are paycheck to paycheck or are relying on debt in one way or another.

1

Ribak145 t1_j743w11 wrote

I mean, the current system still needs consumers.

1

ttylyl OP t1_j744l83 wrote

Yes, but as soon as one company/organization can vertically integrate food, energy, industry, and healthcare, they won't.

AI could simply work for them instead of for a consumer; they wouldn't have a need for money, and they wouldn't need to buy anything.

No need for consumers if the upper class is the only consumer.

2

saleemkarim t1_j74gtug wrote

Things are getting worse when it comes to income inequality, but they are getting better when it comes to dire poverty. So much more progress could be made if we focused on the issues that matter most: poverty and global warming.

https://blogs.worldbank.org/opendata/april-2022-global-poverty-update-world-bank#:~:text=The%20global%20poverty%20rate%20%28at%20the%20US%241.90%20poverty,pace%20in%20more%20recent%20years%2C%20as%20previously%20noted.

1

giveuporfindaway t1_j753gf5 wrote

Why don't you just invest in the alleged public companies that will become insanely wealthy? Buy the S&P 500 and you'll be taken along for the ride.

1

ttylyl OP t1_j7597xw wrote

I mean yes, obviously, but even then, if you have no income you have to start selling stock, and you'll eventually run out unless you own enough to live off the returns.

1

FiFoFree t1_j753sx3 wrote

>The owning class has never let us in on the profits of increased productivity, and for the first time in human history we will be all but completely excluded from the means of production. What happens next?

  1. Somebody decides to let us in, and we live.

  2. Nobody decides to let us in, and we die.

End of options.

There's no classical dystopian middle ground of people being left to live poor, miserable lives in a world where a ruling class has zero need for them and no scruples about being rid of them. Either you believe that somebody in that ruling class has a heart and decides to let everyone in on what promises to be resource richness beyond human comprehension, or you believe they're all stone-cold and inhuman, capable of ridding the world of anyone not like them without so much as a second thought.

Because with the tools they'll have at their disposal, it's one or the other.

1

Mortal-Region t1_j73dezq wrote

I hear this point a lot and it makes no sense to me. There's no need for large corporations to be "generous benefactors". The fact that their products benefit others is precisely what makes those products valuable. "Keeping it for themselves" is nonsensical.

The idea that they would keep all the increased profits also doesn't hold water. For example, if computer chip A is just as powerful as computer chip B, but it costs half as much, company A will quickly dominate company B. Company A will thus become much more profitable, but only because it's selling cheaper chips.

(Incidentally, the Paris Commune is a terrible role model. It was the body that orchestrated the Reign of Terror.)

0

ttylyl OP t1_j73e3ri wrote

This is in a scenario where AI is allowed to replace human labor en masse. You are right, it is an assumption; hopefully world governments will be able to handle it in time, and amicably for the formerly working class.

In the scenario you set up with the chips, the working class is chip B and AI is chip A. AI will dominate human labor and push humans out of the market. Once that happens, what will happen to the humans? You know, you and me.

And they are keeping it to themselves; it's closed-source. We are allowed to see the output of some limited parts of the AI at the whim of a company owned by amoral investment firms.

2

Mortal-Region t1_j73ize2 wrote

>AI will dominate human labor and push them out of the market.

What will AI be laboring at if humans are out of the market? Ultimately, a product's value is its benefit to humans.

1

ttylyl OP t1_j73zeh7 wrote

Yes, but which humans, and how? In the scenario I fear, AI would be laboring for the projects of the people who own it. Eventually, over time, the people who own AI/robot labor will decide that non-skilled unemployed people (most of us at this point) are useless overhead, and that we should spend less on keeping them alive.

Think about it this way: what are humans laboring for now? A: to provide for each other (food, medicine, etc.) to keep the labor pool alive and healthy, and B: the demands of the rich and powerful. What if reason A suddenly becomes useless overhead (human labor useless for production, so why waste AI power/labor on having them live comfortable lives)? If you were to cut it out, you'd have more power/money for B. Because of the severe stratification of power, only those of the AI-owning class will be able to make these decisions, and they are more than a little biased.

People have committed genocide over less.

2

visarga t1_j741vp4 wrote

> What will AI be laboring at if humans are out of the market?

Maybe it needs resources for self-replication or evolution. AI might have its own needs.

2

Mortal-Region t1_j7475mv wrote

That's the alignment issue -- that an AI might favor itself over humans. Here the context is the elite reserving the benefits of AI for themselves. I say that's a nonsensical idea because the value of AI derives from the benefits it provides to the masses. For example: lightbulbs, chips, the Internet, search engines, smartphones, etc, etc.

1

visarga t1_j741kg5 wrote

> AI will dominate human labor and push them out of the market.

AI teamed with a human will dominate both AI alone and humans alone. AI is much better with a human, and humans are better with AI. Since we have competition, every company will have to add AI to their current workforce and keep the people, because they are the differentiating factor. You can scale AI in the cloud, but you can't simply spawn people.

1

ttylyl OP t1_j7423k5 wrote

This is true, but simple things can be handled by AI alone, and huge portions of our economy are simple enough that it can do them alone. It is far, far cheaper to pay for AI than to pay for a human.

1

visarga t1_j742w3h wrote

Yes, it doesn't make sense to make humans do things that AI can do better. But the competition will use humans-with-AI to extract 2x from the AI while you're using AI alone at a 1x rate. Everyone will have the same AI from Microsoft and Google, but humans are limited.

1

ttylyl OP t1_j743d4h wrote

I agree, but you could compensate rather easily by simply paying for 2x the AI at 1/100th the cost of one human plus one AI.

I agree that skilled jobs will be humans and AI together, but unskilled labor is called that for a reason: it isn't focused on quality, but on quantity.

1

visarga t1_j744swb wrote

If you set Stable Diffusion or ChatGPT to generate automatically, without human review or prompting, they will generate tons of garbage. Generative AIs are garbage until someone stamps their work as good, so they need humans to be worth anything; they are just junk on their own. It's a long way off from job replacement: even self-driving cars require a human at the wheel. These AIs still hallucinate facts; who can use them as they are now? Clearly someone will have to find a way before they can get useful without being babysat.

1

ttylyl OP t1_j745ggc wrote

Yes, AI needs trainers, but 1,000 trainers can make a simple but concise model that replaces 500,000 jobs; call centers are an easy example. And then they have to make another model that takes more jobs if they want to keep theirs.

The new jobs from the AI market won't match the jobs lost to AI replacement. Think about it this way: a company wouldn't adopt new tech unless it saves them money, right?

1

Phoenix5869 t1_j73od9f wrote

If they "don't need us," why are the likes of Bezos, Musk, etc. pouring millions into life extension / anti-aging research?

0

BigZaddyZ3 t1_j73x7gr wrote

That's not "for us." That's so they can live long enough to reach near-immortality (which would allow them to live lives of luxury for centuries instead of merely decades).

6

ttylyl OP t1_j73xorm wrote

Themselves. There are probably a few million people worldwide who would be the ones profiting off of AI. "Profit" isn't the right word; more like using AI for whatever they want. We just get in the way.

4