Submitted by Rofel_Wodring t3_11nnof6 in Futurology

I'm a hardcore 'our world is drifting to destruction' techno-pessimist who is nonetheless in one of those 9%.

The reason I feel that way is that AI is developing not in a way that will result in a second class of artificial citizens or a unitary singular intelligence, but toward something more like a hive intelligence. That is, instead of SkyNET or Star Trek's Data or Agent Smith, AI is a collection of amorphous, agency-free, problem-solving intellects that behave more like a genie. Both the evil Jafar genie and the benevolent Robin Williams genie.

Funnily enough, the structure of capitalism (which I blame for our ongoing dystopias) is actually encouraging the only positive way AI can turn out. It has to be democratized via consumerist channels thanks to the profit motive. Our wannabe totalitarian overlords might be drooling over the idea of infinitely loyal AI mind-slaves, but the way it's developing gives their future subjects (both humans and the AI) a LOT of ways to fight back.

So it won't be a battle of unaugmented humans versus their billionaire overlords / Gattaca babies / rebellious robots / hyperintelligent AI. Not when AI is being deployed such that ten unaugmented humans are the military superiors of eight genetically engineered supermen -- especially when the unaugmented humans have access to the same AI tools as the supermen.

Yes, the way it's being deployed is nightmarish and catastrophic and could lead to accidental extinction, but let's be honest here: our current society is already nightmarish and catastrophic and would be so with or without AI. AI is just the pinprick of a syringe that may contain either penicillin or strychnine. But if you're in a muddy trench struggling to breathe after second- and third-degree burns... well... why not take a risk?

... thank God that robotics and automation are really starting to fall behind AI, eh?

6

Comments


[deleted] t1_jboetfg wrote

[removed]

15

z3njunki3 t1_jbv4dw2 wrote

Just a fun little jab... It is also very reddit for someone to get worked up about it and write an essay of complaint rather than just roll their eyes and move on. 😉

3

fuckreddit1123 t1_jbs0bqz wrote

it's actually so goddamn funny to think people actually write stuff like this in earnest

talking about skynet and "unaugmented humans" like we're on some deus ex bullshit LMAO

1

FillThisEmptyCup t1_jbp2fc7 wrote

I had no clue what the OP was trying to say anyway. In the old days, we'd call this a word salad. "If you can't dazzle them with brilliance, baffle them with bullshit."

One central premise they have is decentralization. They think AI will be decentralized and distributed.

Yes, that was a founding principle of the internet, or maybe of the later World Wide Web: the dream of an interconnected net with many different but roughly equal nodes. But I don't see where any of the historical trends since then point that way. We went from free-wheeling Usenet to forums to super-websites like Reddit or Facebook or YouTube capturing and moderating 90% of the conversations, making an Overton window of their own -- from many outlets to a relative few, with far more censorship and locks on what can be said than in the early wild-west days of the web.

Same with hosting in general, where there used to be tens of thousands of independent hosts; now the majority of the net sits in some super-datacenter owned by a few big names like Amazon, with even big names like Netflix running on it.

Similarly, even when someone is using AI, right now it's in the hands and under the total control of whoever has the computing power to train it. Since Moore's law is effectively over, I don't know why anyone would think computing power per dollar in 40 years will be as far advanced as it became over the same 40-ish span from 1980 to now. It'll be faster, sure, just not by the orders of magnitude we have experienced before. I'm using a 2012 computer (Linux) that I would never have dreamed of staying on half as long in the '80s, '90s, or '00s. It's fast enough, and speeds for what I use it for haven't improved much (different needs will have different POVs, e.g. games).

Hence, no unconnected AI in everyone's pocket. Even if it were to exist, the big boys with their bigger datacenters can still afford to train a much better AI than someone's 2060 iPhone/implant will have. Just like today: more processing power, bigger datasets, etc.

And simply having AI is not a missile, or a gun, or anything else. Having an AI Jason Bourne without military hardware doesn't mean anything. Sure, you can 3D-print stuff, but everything has limits, and in a lot of these cases industrial processes can make things better than some garage tinkering.

I think OP was trying to paint some technopunk future. I'm sure the US will fail sometime in the future, though I hesitate to name a date. It will be a mundane thing like debt and the loss of currency-reserve status, not some band of geek brothers forging alliances to bring the corrupt system down.

0

Some-Ad9778 t1_jbomcxf wrote

I completely disagree. AI is not sentient and will not go through any democratic procedures. It is going to be used to eliminate jobs and weaken the working class

14

Bobtheguardian22 t1_jbpf2f7 wrote

historically speaking, people with power often use that power on people without it.

5

z3njunki3 t1_jbqwuk0 wrote

Yes I have noticed that as well. Well I am sure it will be different this time..........

2

Rofel_Wodring OP t1_jbp1k92 wrote

Doesn't have to be for my prediction. It just needs to get good enough that the masses don't need to rely on a particular state or corporation to continue advancing its capabilities. It just needs to get to the stage of "hey, Jailbroken And Stolen Siri, using this 3D Printer and these materials, create for us a BCI wearable that will connect our neocortexes to our rebel cloud service that I also want you to build".

Which I think it will.

1

Temporary_Sir_3050 t1_jbp41ky wrote

If it makes us overall more productive, why not use it? I for one wouldn't want to be doing a robot's job.

1

porknwings t1_jboga0f wrote

AI is built on human behavior models, which dramatically de-emphasize things like altruism, ethics, equity, or empathy. The internet, too, was thought to be a great change agent for democratization across the globe. Just the opposite occurred. AI of any kind will never save humans from themselves.

9

Rofel_Wodring OP t1_jbp0v2y wrote

No empathy is required in my prediction. AI isn't going to save us per se; what it will do is make our previous modes of existence and government -- including the autocratic monopoly of the means of production -- completely unsustainable.

It breaks the monopoly of force by destroying our ability to meaningfully own anything. AI breaks the chain between resource and product in a way to make old notions of ownership impossible.

1

z3njunki3 t1_jbqxw5w wrote

Making a decision of any magnitude in government is like trying to change the course of a container ship. AI is advancing so quickly now that major changes will be implemented in that space before those in power even comprehend them, let alone their implications. I actually had to explain to my department head the other day what ChatGPT was (he had no idea it existed) and how AI could be used to our advantage moving forward. He looked at me like I was a nut job wasting his precious time, which he uses to talk about the latest sporting event he is attending.

2

Wooow675 t1_jbrkm5p wrote

Unrelated to your industry, but I had that conversation with my director (I'm in commercial insurance) re: our positions and what it would mean when Amazon comes to take our lunch. Dude looked at me like I grew a dick on my face and said Amazon doesn't do insurance. End of story.

A few months ago Amazon released their health insurance. I asked him what he thought now, but it's so weird, he hasn't responded about that.

1

[deleted] t1_jbo5dhl wrote

I like that you think wars will be fought with cyborgs vs. eugenics instead of through cyber security

3

Rofel_Wodring OP t1_jbo76im wrote

On the contrary. Such a scenario is why I say that AI might be the only way out of our ongoing dystopia.

Warfare will be fought via cyberwarfare by way of drones. The unaugmented gamers will obliterate the cyborgs and eugenicists in a military conflict. The augmented gamers might have adaptations that allow them to exploit the AI tech slightly more conveniently than the unaugmented gamers, but the sheer crush of population will render those advantages moot.

Imagine you had a magic lamp containing an all-powerful genie granting three wishes, and you were up against SkyNET. SkyNET has access to all of the resources of the planet, along with its own all-powerful three-wish genie.

In a head-to-head conflict, I'd still give it to SkyNET. However, if there were two of me, I'd easily crush SkyNET, especially if one of our wishes was 'I wish we and any potential allies knew exactly what to use our wishes for to defeat a SkyNET also armed with a magic lamp'.

−1

[deleted] t1_jbobxjq wrote

[deleted]

2

Rofel_Wodring OP t1_jbp071s wrote

>While military physical forces like drones have their place, to quote Starship Troopers, "If you disable their hand, they cannot push a button." My point on cyberwarfare is that war will be economic, informational, and infrastructure-disabling.

And my point is that the way our AI is developing, even the very idea of having a state-run military is nonsensical. What exactly is the point of having an East African Union Hacking Team if some random peasant can just push a button and have a hacking team just as good as anything your state (such as it was) could put up?

It becomes even more nonsensical if we're post-scarcity at that point, meaning that not even land and energy become things worth theoretically fighting over.

1

cambridge65 t1_jbnzs11 wrote

You're probably right, but whose AI? If the AI comes from a different country, i.e. North Korea or Russia (OK, they may not be there yet), there's going to be a problem.

1

Rofel_Wodring OP t1_jbo1x1j wrote

It literally will not matter what country it comes from. In fact, our disordered international politics is a big reason why I think the future of AI will be towards democratization and decentralization. China or the USA or Russia or whoever won't be able to go: 'muahaha now I have a loyal hyperintelligence to command, kneel before me' because some random salaryman in Tokyo can go 'but I have one too'.

The trend with AI has been towards increased accessibility. Unsurprising, because consumerism can't really work without accessibility and iterative release. And unfortunately for the nation-states, but fortunately for human survival, they have to democratize the tools in order to keep up in the short term.

Our society can still end; I can think of a lot of ways for things to go wrong. But the classic AI doomsmongering of a hyperintelligent being or species deciding that humanity (or a portion of it) has outlived its usefulness doesn't look likely to happen. It'll be less like an ant going up against an elephant and more like a 10-year-old chess prodigy going up against Magnus Carlsen, where both of them have access to the same Stockfish chess engine.

0

Hades_adhbik t1_jbo5f48 wrote

The key to a fulfilling life with all this advancement will be hacking our own visceral dials to give us new sensations and motives we have yet to experience, or something that drugs sloppily attempt to get at. It's sort of like what games try to do: hack our impulses. But they can only hack what's already there; we'll be able to hack the system itself, cure all depression and existential dread, and give ourselves visceral systems adjusted for longer or even immortal life. We won't have the diminishing marginal returns we have now. Some of our instincts exist because we are mortal; those can be hacked and altered for the purpose of longer life. I expect we'll develop and experience more advanced perception, and more senses. We can edit out any form of depression, existential dread, or wondering what the point of life is, replaced with new feelings and priorities that aren't painful but still propel us to act, adjusted for immortal life. What constitutes meaning and fulfillment in the life of an immortal person wired to know it is immortal will be very different from what humans experience; our set of internal experiences will be much different. How we experience the passage of time will be different too: we will likely experience time dilation where a year feels like a second, or perhaps we'll stop experiencing the passage of time altogether and be completely immersed in present existence, experiencing all time past, present, and future at once, almost like an animal. Animals that live hundreds of years don't question being alive.

1

OriginalCompetitive t1_jbofkfb wrote

It’s great that you see the inherent advantages of decentralized capitalism for preventing a state monopoly on AI. But surely you see that the exact same advantages apply to previous technologies too?

The point of decentralized capitalism is not that it always makes the optimum decisions, but rather that it avoids the perils of centralizing decisions in a single authority.

1

lavendergrowing101 t1_jbon8nl wrote

Your root fallacy here is this: " It has to be democratized via consumerist channels thanks to the profit motive." This is not true. Capitalism does not inherently democratize its tools, quite the opposite. The entire history of the internet so far has been one of monopolization and the increased concentration of wealth and power into fewer and fewer hands. That is already happening with AI. It's all about who owns the algorithms, and guess what, given our current economic structures it's the tech giants and Wall Street who are going to own them.

1

Josh12345_ t1_jbosgwc wrote

I guess it depends on who manufactured the AI.

It can be used for good or bad. An authoritarian government would obviously use it for malevolent purposes.

1

Rofel_Wodring OP t1_jbp26hs wrote

People keep talking about AI as if it were this one product we put on a shelf, and if we don't like it, we're stuck with it.

That may be the case for now, but it'll get to the point where even if the best-in-class models only come from two or three states/companies, there will be dozens if not hundreds of comparable AI tools that aren't privately owned.

So, again, it'll get to the point where some authoritarian government could go "muahaha, bow before TyrantBot's massive intellect, engineered by my scientist thralls," but we'll just roll our eyes and print out an additional, slightly less capable AI to thwart it.

The point is: it won't matter. It'll be out of any unitary or small-group intelligence's hands, benevolent or authoritarian. There's a reason why elephants are afraid of bees.

1

Kiizmod0 t1_jbovxr0 wrote

Brother, we humans have never been able to define a reward function beyond material gain over our entire collective existence, and yet we, as sentient fuckers, can actually grasp ethics, kindness, love, etc., and we don't optimize for those.

How the heck do you want to numerize those immaterial goodies for an agent that is not sentient at all, so that it can bring you out of dystopia?

1

Rofel_Wodring OP t1_jbp2wt5 wrote

You won't need to. I didn't say anything about our morals getting better. What I'm saying is that AI will destroy the power differential between tyrant and slave that pretty much every dystopian vision of the future relies upon.

What's the point of Gattaca babies when the AI-Neocortex Cloud is way better than anything you can engineer?

What's the point of owning the entire news media if we have millions of independent AI journalists working for free?

If the tyrants can't keep AI on a leash (and our economic and political situation guarantees they can't), the only way they can control us is by controlling certain resources. Which raises the question of how they plan to do this when any unitary or oligarchic intelligence will be intellectually crushed by the hoi polloi's millions of lesser AIs.

1

scratchedocaralho t1_jbp9kbv wrote

very well put.

but i disagree with your conclusion. yes, due to our current economic system, ai is being developed by different groups with different intentions. there is no central authority deciding what algos are or aren't released to the public. but this is because governments still don't have legislation for it, and because corporations are still trying to find the profit angle in these current algos. once that happens, and considering that the profits could be massive, the pressure to curtail the wild west of ai development will increase.

and thus in no time you'll see campaigns in mass media asking for certain regulation, lobbying/bribes on a massive scale towards politicians that decide the regulations. "grassroot movements" will emerge to give legitimacy to specific demands.

after that, the multiple possibilites of ai will be reduced to what can keep the capitalist machine going.

the biggest legal battle will be over who has the legal rights to what is created by the ai.

1

Zestyclose-Ad-9420 t1_jbqcfwe wrote

You need to breathe and keep thinking. You haven't reached the conclusion yet; you just got excited and thought you had. But conclusions are illusory: history never stops turning.

AI can and likely will become an open-use consumer system. That doesn't mean the elite class cannot monopolise its use.

My favourite historical analogue is grain milling. The technology spread fast, and by the Middle Ages any peasant community in Europe with access to flowing water could build its own water mill to grind flour. However, across much of Europe it was illegal to mill your own grain. You had to take it to the landowner's mill, and he would then charge you for its use.

AI, of course, is not exactly a grain mill. But advanced AI will eventually be treated as a means of (intellectual) production, and elites will always mobilise to monopolise such means, by whatever means necessary.

Eventually, using an AI without going through a middleman will be illegal, and you will only be able to use it for sanctioned purposes. Criminals will use AI regardless, until eventually terrorists figure out how to use it to hurt people. Then the state will justify using violence to crack down on AI use, for example drone-striking places where illegal AIs or datacentres are hosted. All speculation, of course.

1

rogert2 t1_jd0yzlw wrote

A problem with this analysis is that the super-wealthy don't have to let the profit motive control things they don't want it to control.

Basic monopoly problem: a wealthy corporation can afford to sell its products at a loss in some markets for the purpose of driving the competition out of business. When you have enough money, you can afford to operate at a loss for a while, especially if doing so will guarantee higher or more stable returns later. That is exactly what is happening.

The billionaires who want to use AI to decapitate labor can easily afford to bypass profits from early AI products, because they also own other massively profitable businesses and happen to already possess 99.9% of all wealth that exists.

  • For one thing, it's not a donation: they are crowd-sourcing the development and QA testing of the product, which is a real benefit that has huge economic value.
  • Secondly: once the tech works, they can apply the lessons learned toward quickly ramping up a different AI that is more overtly hostile to the owners' enemies.
1

Rofel_Wodring OP t1_jd2a2yz wrote

>The billionaires who want to use AI to decapitate labor can easily afford to bypass profits from early AI products, because they also own other massively profitable business and happen to already possess 99.9% of all wealth that exists.

One reason I don't care much for talking about capitalism in terms of billionaires and wealthy overlords is that it masks how the actual locus of conflict isn't just them versus the world, but them and their lower-class stooges against the world. When we talk about interests like Microsoft and China and the US government 'using' AI, it overlooks how they can't actually enforce control without the consent of their underlings -- whether the underling is a human or an AGI.

I can discuss the mechanisms of how THAT works and its broader implications of class warfare, but that's communism and I don't want to trigger a screeching xenophobic freakout.

>Secondly: once the tech works, they can apply the lessons learned toward quickly ramping up a different AI that is more overtly hostile to the owners' enemies.

This is a very stupid strategy because, again, the gap between cutting-edge and entry-level isn't decades like it was in earlier parts of the Industrial Revolution/Age of Imperialism; it's 6-36 months. You can't establish a hegemony where small numbers of technology-fueled intelligences lord over larger numbers of less powerful beings, because their technological edge is minuscule and they're way outnumbered. What's more, if this is your endgame, you also can't ally with the other cutting-edge AGIs; in fact, they will be your rivals, along with billions of other minds who oppose what you can do and are mere months away from matching you in technology.

It's like Genghis Khan declaring war on the Americas after being transported forward in time to 1450 with 500 of his best troops. But at his own technology level, not Cortés's.

1

ROSS-NorCal t1_jbo2sqr wrote

Well, the world is drifting towards destruction just as it was prophesied. These are the last days.

In the last days perilous times will come: 2 For men will be lovers of themselves, lovers of money, boasters, proud, blasphemers, disobedient to parents, unthankful, unholy, 3 unloving, unforgiving, slanderers, without self-control, brutal, despisers of good, 4 traitors, headstrong, haughty, lovers of pleasure rather than lovers of God, 5 having a form of godliness but denying its power.

AI will not be the destroyer or the savior.

−8

Rofel_Wodring OP t1_jbo3n5z wrote

John, the bloodthirsty moron who wrote the Book of Revelation, was a sociopathic incel stoner whom even Diogenes would have found annoying.

And yet, so many people treat that loon's Biblical horror fanfiction as some kind of revelation.

2

ROSS-NorCal t1_jbo9203 wrote

Yeah... because it's not about him. He wrote what he saw and was told. His personal failings, if any, simply make him human.

You can judge him. The way you do that leads me to believe that you're a liberal, condemning people on one hand while preaching tolerance for other people to follow.

If he was an incel, a stoner, or even gay, would that make him less credible?

−2

Rofel_Wodring OP t1_jboakzc wrote

>Yeah... because it's not about him.

lol, nah, that book was TOTALLY about him, or more specifically, his dumbass incel prejudices.

I'm familiar with 4chan fanfiction, and that's what the Book of Revelation comes off as: a revenge fantasy from some maladjusted manchild seething over how the Romans pantsed him in front of that cute teenager.

2