
rogert2 t1_jd0yzlw wrote

A problem with this analysis is that the super-wealthy don't have to let the profit motive control things they don't want it to control.

Basic monopoly problem: a wealthy corporation can afford to sell its products at a loss in some markets for the purpose of driving the competition out of business. When you have enough money, you can afford to operate at a loss for a while, especially if doing so will guarantee higher or more stable returns later. That is exactly what is happening.
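A toy back-of-the-envelope sketch of that playbook (every figure here is invented purely for illustration):

```python
# Toy model of predatory pricing: absorb losses now, recoup them later
# with monopoly pricing. All numbers are invented for illustration.

yearly_loss = 50_000_000                 # selling below cost to undercut rivals
undercutting_years = 4                   # how long until the competition folds
monopoly_profit_per_year = 120_000_000   # pricing power once rivals exit
monopoly_years = 10

total_spent = yearly_loss * undercutting_years
total_recouped = monopoly_profit_per_year * monopoly_years

print(f"Burned driving rivals out: ${total_spent:,}")
print(f"Recouped as a monopoly:    ${total_recouped:,}")
print(f"Net result:                ${total_recouped - total_spent:,}")
```

The specific numbers don't matter; the point is that only an actor who can float the early losses gets to play this game at all.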

The billionaires who want to use AI to decapitate labor can easily afford to forgo profits from early AI products, because they also own other massively profitable businesses and already hold a staggering share of all existing wealth. And releasing those early products cheaply or for free costs them little:

  • First, it's not a donation: they are crowd-sourcing the development and QA testing of the product, which is a real benefit with huge economic value.
  • Second, once the tech works, they can apply the lessons learned to quickly ramp up a different AI that is more overtly hostile to the owners' enemies.
1

rogert2 t1_jd0kpfw wrote

I have had this same thought, and I think it would make a good premise for a fictional world. In my imagination, I call it "the Phoenix Project."

The problem, of course, is that humanity is not going to survive long enough for this goal to be relevant. The climate crisis will likely exterminate the majority of aerobic life by 2100. If anybody survives, it will be a handful of billionaires and dictators, and those dummies are frankly not capable of perpetuating a functional species.

All humans and all human descendants will be dead before 2200.

1

rogert2 t1_jb7pd81 wrote

I don't think that "doctors dismissing patients' concerns" is a source of failure to detect breast cancer via mammograms.

I assume women generally get mammograms because health experts recommend regular checks for all women. The reason radiologists fail to detect breast cancer in some x-rays is not that they aren't taking women seriously: these women weren't coming in with symptoms or complaints, they came in for a preventive screening. Radiologists sometimes miss breast cancer because each radiologist looks at thousands of essentially identical x-rays over a career, breast cancer is uncommon, and the cancer that does exist is hard to recognize visually in its early stages.

I'm not saying that people don't dismiss the complaints of women, whether in a healthcare or other context. But that's not what's going on here, because breast cancer checks are generally driven by prevention rather than symptoms.

1

rogert2 t1_jb39f47 wrote

This is a really good idea.

Human doctors have a worse detection rate than you'd want, but not for the reason you'd think: it's not that they're incompetent, it's that humans are really bad at recognition tasks when the thing they're looking for is rare.

To illustrate: if I gave you 5 x-rays and told you that exactly one of them definitely has cancer, you'd find it. But if I gave you 500 x-rays and made no promises about whether any of them have cancer, you'd be less reliable.

This is true whether or not the human is tired from a "long shift." It has to do with the way humans pay attention, and how our expectations shape what we observe. False negatives go down as the sample size shrinks or as the incidence rises. If 1% of the 500 x-rays have cancer, a human may spot only 3 or 4 of those 5. But if 50 of them have cancer, the human detection rate goes up.
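If you want to see the arithmetic, here's a rough Monte-Carlo sketch of that prevalence effect; the per-case hit rates are invented for illustration, not taken from any study:

```python
import random

# Toy model of the prevalence effect in visual search: readers catch a
# smaller fraction of targets when targets are rare. The hit rates
# below are made up for illustration, not real clinical figures.

def expected_catches(n_cancers, per_case_hit_rate, trials=10_000):
    """Average number of cancers a reader spots across many simulated stacks."""
    total = 0
    for _ in range(trials):
        total += sum(random.random() < per_case_hit_rate
                     for _ in range(n_cancers))
    return total / trials

# 500 films, 5 with cancer (1% incidence), assumed 70% hit rate when rare:
print(expected_catches(5, 0.70))    # ~3.5 -- the "3 or 4 of the 5" above

# Same 500 films, 50 with cancer (10% incidence), assumed 90% hit rate:
print(expected_catches(50, 0.90))   # ~45 -- detection improves with incidence
```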

AI won't have this problem.

(Source: an intro psych class I took, which actually used breast-cancer detection as the vehicle for studying human attention.)

10

rogert2 t1_j99amgr wrote

"The reason I was at the adult book store is that my car's worm brain drove me there on autopilot."

"Okay... but you spent $74 dollars there."

"Worm brain, honey. It was the worm brain."

6

rogert2 t1_j8l9hz1 wrote

Something vaguely similar happens in the 1976 novel *A World Out of Time*.

A wealthy guy with cancer pays for cryo. When they thaw him out centuries later, it's because they are looking for indentured servants to work as space-truckers: if he refuses, they'll kill him and try their luck with the next popsicle.

He agrees and is promptly subjected to some intense and uncomfortable training and conditioning for his new "job."

2

rogert2 t1_j8540xj wrote

My web browser holds onto my bookmarks, and even suggests frequently visited websites when I type into the URL bar. Do you really want to call that "learning"? Learning of the kind that's necessary to support interactions where trust and forgiveness are meaningful concepts?

It seems like you're trying to use the word "learning" to import a huge amount of psychological realism so you can argue that people have an obligation to treat every neural network exactly like we treat humans -- humans who are unimaginably more mentally sophisticated than a computer file that contains numeric weights.
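For context, here's roughly what that kind of "learning" amounts to under the hood: a toy gradient-descent step (all values invented), just a number being nudged and written back to a file:

```python
# Minimal sketch of machine "learning": nudge a number so an error
# shrinks, then save it. All values here are invented for illustration.

weight = 0.5                  # the model's entire "knowledge"
learning_rate = 0.1

x, target = 2.0, 3.0          # one training example
prediction = weight * x       # model's guess: 1.0
error = prediction - target   # how wrong it was: -2.0

# Gradient-descent update for squared error: w -= lr * error * x
weight -= learning_rate * error * x
print(weight)                 # 0.9 -- closer to the ideal 1.5

# The "mind" that did the learning is literally a number in a file.
with open("model_weight.txt", "w") as f:
    f.write(str(weight))
```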

2

rogert2 t1_j838jz0 wrote

This reminds me, I need to hire a calligrapher to write the following message into 100 Valentine's Day cards:

> Dear Sir or Madam,
>
> We’ve never met, but I need you to know that I’m in jail on a far-away military oil rig, I’ve found some gold that I can teach you to invest, and I need your private pictures to bribe the guards so I can escape, deliver the gold, and we can get married.

11

rogert2 t1_j80gv0f wrote

This question is not merely vague, but fatally confused.

I bet folks will engage with it, but I really doubt OP will get any satisfaction.

−2

rogert2 t1_j7ssv1l wrote

You're going to have to explain that a little more.

It seems that if AI is capable of keeping human soldiers in line, it's probably also capable of simply replacing human soldiers with armed versions of the Boston Dynamics robots.

2

rogert2 t1_j7r3jq5 wrote

Relevant: The wealthy are plotting to leave us behind

Accurate summary by a redditor:

> people who were asked by billionaires to go to a meeting and advise them on what to do to keep staff loyal in their bunkers when money becomes worthless

Supporting quotes from the source:

> I got invited to a super-deluxe private resort to deliver a keynote speech... it was by far the largest fee I had ever been offered for a talk > > I just sat there at a plain round table as my audience was brought to me: five super-wealthy guys... from the upper echelon of the hedge fund world > > Finally, the CEO of a brokerage house explained that he had nearly completed building his own underground bunker system and asked, “How do I maintain authority over my security force after the event?” > > This single question occupied us for the rest of the hour. They knew armed guards would be required to protect their compounds from the angry mobs. But how would they pay the guards once money was worthless? What would stop the guards from choosing their own leader? > > Taking their cue from Elon Musk..., Peter Thiel..., or Sam Altman and Ray Kurzweil..., they were preparing for... insulating themselves from a very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic, and resource depletion. For them, the future of technology is really about just one thing: escape.

8