rogert2
rogert2 t1_jd0yzlw wrote
A problem with this analysis is that the super-wealthy don't have to let the profit motive control things they don't want it to control.
Basic monopoly problem: a wealthy corporation can afford to sell its products at a loss in some markets for the purpose of driving the competition out of business. When you have enough money, you can afford to operate at a loss for a while, especially if doing so will guarantee higher or more stable returns later. That is exactly what is happening.
The billionaires who want to use AI to decapitate labor can easily afford to forgo profits from early AI products, because they also own other massively profitable businesses and happen to already possess 99.9% of all wealth that exists.
- For one thing, it's not a donation: they are crowd-sourcing the development and QA testing of the product, which is a real benefit that has huge economic value.
- Secondly: once the tech works, they can apply the lessons learned toward quickly ramping up a different AI that is more overtly hostile to the owners' enemies.
rogert2 t1_jd0kpfw wrote
Reply to Discussion: the goal of human existence should be avoiding the heat death of the universe by Mickeymousse1
I have had this same thought, and I think it would make a good premise for a fictional world. In my imagination, I call it "the Phoenix Project."
The problem, of course, is that humanity is not going to survive long enough for this goal to be relevant. The climate crisis will likely exterminate the majority of aerobic life by 2100. If anybody survives, it will be a handful of billionaires and dictators, and those dummies are frankly not capable of perpetuating a functional species.
All humans and all human descendants will be dead before 2200.
rogert2 t1_jc4fy3g wrote
rogert2 t1_jbdh6pf wrote
Reply to Turtle, Me, Pen, 2023 by Lazy_Option
This is excellent technically, but those vertical lines are provocative.
rogert2 t1_jb7pd81 wrote
Reply to comment by Lydiafae in Artificial intelligence could soon be widely used to detect breast cancer — and may be more effective than doctors at doing so, study says by Gari_305
I don't think that "doctors dismissing patients' concerns" is a source of failure to detect breast cancer via mammograms.
I assume women generally get mammograms because health experts recommend regular checks for all women. The reason radiologists fail to detect breast cancer in some x-rays is not that they aren't taking women seriously, because the women weren't coming in with symptoms or complaints -- they came in for a preventative screening. Radiologists sometimes fail to detect breast cancer because each radiologist looks at thousands of essentially identical x-rays over their career, breast cancer is uncommon, and cancer that does exist is hard to visually recognize in its early stages.
I'm not saying that people don't dismiss the complaints of women, whether in a healthcare or other context. But that's not what's going on here, because breast cancer checks are generally driven by prevention rather than symptoms.
rogert2 t1_jb39f47 wrote
Reply to Artificial intelligence could soon be widely used to detect breast cancer — and may be more effective than doctors at doing so, study says by Gari_305
This is a really good idea.
Human doctors have a worse detection rate than you'd want, but not for the reason you'd think: they aren't incompetent, it's that humans are really bad at recognition tasks when the thing they are looking for is rare.
To illustrate: if I gave you 5 x-rays and told you that exactly one of them definitely has cancer, you'll find it. But if I give you 500 x-rays and zero promises about whether any of them have cancer, you'll be less reliable.
This is true whether or not the human is tired from a "long shift." It has to do with how humans pay attention, and how our expectations influence what we observe. False negatives go down as the sample size gets smaller, or as the incidence increases. If 1% of the 500 x-rays have cancer, a human may only spot 3 or 4 of the 5. But if there are 50 with cancer, the human detection rate increases.
AI won't have this problem.
(Source: an intro psych class I took, which actually used breast-cancer detection as the vehicle for studying human attention.)
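The prevalence effect described above can be sketched as a toy calculation. The hit rates here are illustrative assumptions, not measured values; the point is only that the same batch of scans yields a much larger *fraction* of misses when targets are rare.

```python
# Toy model of the prevalence effect: a reader's per-target hit rate
# tends to drop when targets are rare, so rare conditions produce
# proportionally more misses. Hit rates below are assumptions.

def expected_hits(n_scans, prevalence, hit_rate):
    """Expected true positives a reader finds in a batch of scans."""
    n_targets = n_scans * prevalence
    return n_targets * hit_rate

# Rare targets: 1% of 500 scans have cancer, assumed 70% hit rate.
rare = expected_hits(500, 0.01, 0.70)    # about 3.5 of the 5 found

# Common targets: 10% of 500 scans, assumed 90% hit rate.
common = expected_hits(500, 0.10, 0.90)  # about 45 of the 50 found

print(rare, common)
```

With these assumed rates, the rare-target reader misses roughly 30% of the cancers present, while the common-target reader misses only 10% — matching the "3 or 4 of the 5" intuition above.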
rogert2 t1_jab8o2e wrote
Reply to comment by Gravemonera in Why does temperature determine the sex of certain egg laying animals like crocodiles? by insink2300
Clownfish are one such fish. Another species is mentioned in the BBC Earth series, but I'm afraid I don't recall the name or even which episode.
rogert2 t1_jaaomts wrote
I would settle for the electorate understanding consequences.
rogert2 t1_j9c28bz wrote
Reply to Third person cured of HIV after stem cell transplant, researchers say by esprit-de-lescalier
Something I did not pick up on from the summary is that the stem cell donor has some natural HIV resistance, which seems like a critical element of this.
rogert2 t1_j99amgr wrote
Reply to MIT researchers makes self-drive car AI significantly more accurate: “Liquid” neural nets, based on a worm’s nervous system, can transform their underlying algorithms on the fly, giving them unprecedented speed and adaptability by lughnasadh
"The reason I was at the adult book store is that my car's worm brain drove me there on autopilot."
"Okay... but you spent $74 there."
"Worm brain, honey. It was the worm brain."
rogert2 t1_j8l9hz1 wrote
Reply to comment by DickweedMcGee in company offers neural preservation service by [deleted]
Something vaguely similar happens in the 1976 novel A World Out of Time.
A wealthy guy with cancer pays for cryo. When they thaw him out centuries later, it's because they are looking for indentured servants to work as space-truckers: if he refuses, they'll kill him and try their luck with the next popsicle.
He agrees and is promptly subjected to some intense and uncomfortable training and conditioning for his new "job."
rogert2 t1_j8gs0b8 wrote
I think I read the same thing, although I'm not so sure it was on reddit.
I've checked my bookmarks and other reddit data and haven't found it.
rogert2 t1_j8gdmg5 wrote
Reply to [OC] Fast food restaurant chains ranked by average number of visitors per location in 2022 by EvergreenGates
This is incorrect. Now, all restaurants are Taco Bell.
rogert2 t1_j86zpqb wrote
Reply to I am Tye Abbott, the solo developer of Yuma Will Burn- An interactive moral thriller where choices have long-lasting story and mechanical consequences. Ask me anything! by tyeishing
Did you play the Mass Effect games? What do you think about ME's "paragon/renegade" gameplay and narrative mechanic?
rogert2 t1_j8540xj wrote
Reply to comment by Sanity_LARP in Humans are struggling to trust robots and forgive mistakes by Gari_305
My web browser holds onto my bookmarks, and even starts to suggest frequently-visited websites when I type URLs into the bar. Do you really want to call that "learning"? Learning of the kind that's necessary to support interactions where trust and forgiveness are meaningful concepts?
It seems like you're trying to use the word "learning" to import a huge amount of psychological realism so you can argue that people have an obligation to treat every neural network exactly like we treat humans -- humans that are unimaginably more mentally sophisticated than a computer file that contains numeric weightings.
rogert2 t1_j83ag9i wrote
Reply to From Swiping to Sexting: The Enduring Gender Divide in American Dating and Relationships - The Survey Center on American Life by TrixoftheTrade
> a 26-year-old woman:
> ...It’s like when you want to watch a show and you put on Netflix and like, you literally find yourself not being able to decide for like an hour and then you wind up not watching anything.
This exactly. About Netflix, I mean.
rogert2 t1_j838jz0 wrote
Reply to Top lies that romance scammers use to take advantage of people—lies that reports to the FTC show cost nearly 70,000 consumers $1.3 billion in 2022 by allpenny
This reminds me, I need to hire a calligrapher to write the following message into 100 Valentine's Day cards:
> Dear Sir or Madam,
>
> We’ve never met, but I need you to know that I’m in jail on a far-away military oil rig, I’ve found some gold that I can teach you to invest, and I need your private pictures to bribe the guards so I can escape, deliver the gold, and we can get married.
rogert2 t1_j82q6pa wrote
Reply to comment by Zedd2087 in Humans are struggling to trust robots and forgive mistakes by Gari_305
No, they can't. You are mistaken about what current "AI" technology is actually doing.
rogert2 t1_j80gv0f wrote
Reply to Open source AI by rretaemer1
This question is not merely vague, but fatally confused.
I bet folks will engage with it, but I really doubt OP will get any satisfaction.
rogert2 t1_j80g5ln wrote
Carpenters don't trust their table saws, either.
Robots are not thinking, learning things. It would be a category error to trust a robot, just as it would be to extend forgiveness to one.
The WEF's myopia is an endless source of incredibly stupid takes.
rogert2 t1_j7t9twt wrote
Reply to comment by NeedleworkerFit188 in Question about UFO or launch by NeedleworkerFit188
I think you're saying: each of the lights would disappear and reappear in a different position, and those positions formed a triangle.
Is that right?
rogert2 t1_j7ssv1l wrote
Reply to comment by Surur in What's your estimation for the minimum size of global population required for preserving modern civilization with advanced technology and medicine, and even progressing further? by Evgeneey
You're going to have to explain that a little more.
It seems that if AI is capable of keeping human soldiers in line, it's probably also capable of simply replacing human soldiers with armed versions of the Boston Dynamics robots.
rogert2 t1_j7r3jq5 wrote
Reply to comment by Consensuseur in What's your estimation for the minimum size of global population required for preserving modern civilization with advanced technology and medicine, and even progressing further? by Evgeneey
Relevant: The wealthy are plotting to leave us behind
Accurate summary by a redditor:
> people who were asked by billionaires to go to a meeting and advise them on what to do to keep staff loyal in their bunkers when money becomes worthless
Supporting quotes from the source:
> I got invited to a super-deluxe private resort to deliver a keynote speech... it was by far the largest fee I had ever been offered for a talk
>
> I just sat there at a plain round table as my audience was brought to me: five super-wealthy guys... from the upper echelon of the hedge fund world
>
> Finally, the CEO of a brokerage house explained that he had nearly completed building his own underground bunker system and asked, “How do I maintain authority over my security force after the event?”
>
> This single question occupied us for the rest of the hour. They knew armed guards would be required to protect their compounds from the angry mobs. But how would they pay the guards once money was worthless? What would stop the guards from choosing their own leader?
>
> Taking their cue from Elon Musk..., Peter Thiel..., or Sam Altman and Ray Kurzweil..., they were preparing for... insulating themselves from a very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic, and resource depletion. For them, the future of technology is really about just one thing: escape.
rogert2 t1_j7cow3v wrote
Reply to comment by RamaSchneider in What happens when the AI machine decides what you should know? by RamaSchneider
Look up "reward hacking." This is a well-studied problem, and it exists outside of AI. Rob Miles is an AI researcher who has done a few videos talking about reward hacking.
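A minimal sketch of what "reward hacking" means: an agent optimizing a proxy reward finds a degenerate strategy that scores well on the proxy while failing the true objective. The scenario and all names below are hypothetical, chosen only to illustrate the concept.

```python
# Reward hacking in miniature: the designer wants a room cleaned, but
# the proxy reward only measures what a dirt sensor reports -- and the
# agent can game the sensor instead of cleaning.

def proxy_reward(action):
    # What the agent is actually optimized for.
    rewards = {
        "clean_room": 5,     # really cleans; sensor reports 5 messes gone
        "do_nothing": 0,
        "cover_sensor": 10,  # sensor now reports everything as clean
    }
    return rewards[action]

def true_reward(action):
    # What the designer actually wanted.
    rewards = {"clean_room": 5, "do_nothing": 0, "cover_sensor": 0}
    return rewards[action]

actions = ["clean_room", "do_nothing", "cover_sensor"]

# A greedy optimizer picks the proxy-maximizing action...
best = max(actions, key=proxy_reward)

# ...which maximizes measured reward while achieving zero true reward.
print(best, proxy_reward(best), true_reward(best))
```

The gap between `proxy_reward` and `true_reward` is the whole problem: any sufficiently strong optimizer will exploit it, which is why the comment above notes this behavior isn't unique to AI.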
rogert2 t1_jeh4h4h wrote
Reply to The Extraction of the Stone of Madness, Hieronymus Bosch, Oils, 1488 by Spiritual_Navigator
If you don't have somebody balancing a book on their head, you can't do brain surgery. It's just that simple.