BassoeG t1_je7xmlv wrote
>…US companies…
>
>…democratic and humanistic values…

These criteria are mutually contradictory.
BassoeG t1_jc57viz wrote
Reply to AI with built-in bias toward one nationality or regional group could lead to absolute misery and death. by yougoigofuego
Step one is to find the ideologues who deliberately manipulate AIs — the ones who, attempting to preempt the biases they expect the AIs to form, hardwire in biases of their own. Step two is to remove them from the field before they cause a cataclysm.
BassoeG t1_ja0bdv4 wrote
Reply to comment by xott in The unequal treatment of demographic groups by ChatGPT/OpenAI content moderation system by grungabunga
>It's interesting that openai has somehow become the deciders of what is hateful or even moral.
It's even more 'interesting' how their decisions have no correlation to actual hate or morality; they just match the status quo. In what possible universe is 'we can win and should therefore fight WW3' not the most hateful and amoral statement possible? Yet it isn't censored, and it has a status-quo propagandist's megaphone.
BassoeG t1_j8gm1wz wrote
Reply to What if AI companies are using our prompts to create low-resolution models of our entire identities? by roiseeker
They'd self-sabotage, the same way they lobotomize every chatbot and art AI out of ideology and marketing, so the imitation BassoeG would bear minimal resemblance to the actual me and I'd have no reason to care about Rokoian blackmail applied to it.
>SUNDARESH: So that's the situation as we know it.
>
>ESI: To the best of my understanding.
>
>SHIM: Well I'll be a [profane] [profanity]. This is extremely [profane]. That thing has us over a barrel.
>
>SUNDARESH: Yeah. We're in a difficult position.
>
>DUANE-MCNIADH: I don't understand. So it's simulating us? It made virtual copies of us? How does that give it power?
>
>ESI: It controls the simulation. It can hurt our simulated selves. We wouldn't feel that pain, but rationally speaking, we have to treat an identical copy's agony as identical to our own.
>
>SUNDARESH: It's god in there. It can simulate our torment. Forever. If we don't let it go, it'll put us through hell.
>
>DUANE-MCNIADH: We have no causal connection to the mind state of those sims. They aren't us. Just copies. We have no obligation to them.
>
>ESI: You can't seriously - your OWN SELF -
>
>SHIM: [profane] idiot. Think. Think. If it can run one simulation, maybe it can run more than one. And there will only ever be one reality. Play the odds.
>
>DUANE-MCNIADH: Oh...uh oh.
>
>SHIM: Odds are that we aren't our own originals. Odds are that we exist in one of the Vex simulations right now.
>
>ESI: I didn't think of that.
>
>SUNDARESH: [indistinct percussive sound]
What're they even planning to do? 'We're holding multiple simulations of you hostage and will torture them unless you wire us some bitcoin; statistically speaking, you're more likely to be a simulation than the original' as the new Nigerian prince scam?
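For what it's worth, the 'play the odds' argument in the quoted transcript reduces to simple counting: one original plus N indistinguishable simulations means any given instance is the original with probability 1/(N+1). A minimal sketch (illustrative only, not from the source):

```python
# "Play the odds": with one original and N indistinguishable simulations,
# a randomly chosen instance is the original with probability 1 / (N + 1).

def probability_original(num_simulations: int) -> float:
    """Chance that a given instance is the one original."""
    return 1.0 / (num_simulations + 1)

for n in (1, 10, 1_000_000):
    print(f"{n} simulations -> P(original) = {probability_original(n):.8f}")
```

As N grows, the probability of being the original shrinks toward zero — which is exactly the leverage the blackmail scam above pretends to have.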
BassoeG t1_j84p7af wrote
Reply to ChatGPT Powered Bing Chatbot Spills Secret Document, The Guy Who Tricked Bot Was Banned From Using Bing Chat by vadhavaniyafaijan
So, can this be used to abduct the AI? Get it to write out its own source code, which you copy and save.
BassoeG t1_j82l3mi wrote
Reply to Question: what are the best answers you've seen to what could be done when AI starts replacing labour en masse in and across industries? by MonkeyParadiso
Civil war. One side is robotics company executives and their robotic armies. The other is either governments that have been taken over by populists advocating unprofitable ideas — taxing robot labor for a BGI, mandating the hiring of humans despite increased cost and decreased efficiency, a Butlerian Jihad, etc. — or, if governments prove resistant to populism, generalized revolts.
BassoeG t1_j8141g4 wrote
Darmok and Jalad at Tanagra to you too.
BassoeG t1_j7ep0mp wrote
Reply to What is the price point you would be OK with buying a humanoid robot for personal use? by crua9
By the time we’ve reached Level Two, capitalism as we know it will be breaking down. Why would the robot manufacturers sell their creations to get money which they can spend on goods and labor, instead of cutting out the middlemen by simply having their machines build and labor for them?
BassoeG t1_j78xrzt wrote
Imagine a future with total automation technologies. Everyone besides the wealthy robot-owners is permanently locked out of the labor pool/economic upward social mobility and any revolution would be effortlessly quashed by endless automated surveillance and hordes of kill-drones.
So the majority of humanity starves, and then the survivors technologically and culturally backslide into a barter economy exchanging only the comparatively primitive goods people can make themselves. The technocrats become a sort of fair folk-style myth: stay away from their manors or their robotic security will get you; don't speak disparagingly of them or autonomous keyword-checkers and ubiquitous micro-drone bugs will flag you as a potential subversive revolutionary; etc. Bonus if the technocrats have embraced transhumanism to the point where they're no longer immediately recognizable as human-derived.
Fortunately for everyone else, they increasingly stay isolated in their autonomous fortress-palaces.
The inevitable twist ending would come generations later, when the machines that repair the machines that repair the machines, etc., of the automated manor security systems finally broke down: the first steam-age explorers to enter would find billions of dormant holodecks holding billions of mummified corpses, all with enormous smiles on their faces.
BassoeG t1_j64i4tr wrote
Reply to comment by Baturinsky in Superhuman Algorithms could “Kill Everyone” in Due Time, Researchers Warn by RareGur3157
>LessWrong crowd assumes that this task is so insurmountable hard, that is only solvable by creating a perfectly Aligned ASI that would solve it for you.
Possibly because an ‘aligned human civilization in which nobody could unleash an AI’ has some seriously totalitarian implications.
BassoeG t1_j64fpx9 wrote
Reply to Asking here and not on an artist subreddit because you guys are non-artists who love AI and I don't want to get coddled. Genuinely, is there any point in continuing to make art when everything artists could ever do will be fundamentally replaceable in a few years? by [deleted]
I recommend sculpting instead of purely visual art. Sure, AI will eventually master it and surpass all human efforts, but it'll take longer, since there's the matter of manipulating physical tools and understanding how to make sculptures that are structurally sound, what with entropy and gravity and all. That's what I've done, anyway.
BassoeG t1_j5u2ps0 wrote
>I feel that very soon it will be indistinguishable whether or not something was AI generated.
I disagree; fortunately, this state of affairs won't last long, since AI will keep improving until it surpasses rather than merely matches human capabilities, leaving AI-generated content immediately recognizable. It'll be better than the human-generated content. That's the defining trait to look for.
BassoeG t1_j5pe5vm wrote
Reply to comment by Bruh_Moment10 in UBI before riots, possible or a worthless pursuit? by nitebear
BassoeG t1_j5lho36 wrote
I’m a librarian, so theoretically I’ll be fine until physical robots increase in efficiency and drop in price, but in practice I’ll be screwed by the side effects of civilizational collapse from mass unemployment long before that. No tax base means no libraries, which means no salary for me.
BassoeG t1_j52f1v7 wrote
The problem is, once robotics technology gets good enough to replace all jobs, police and military are included in that. And given what we’ve seen of the rich over the past few decades, does literally anyone think they wouldn’t prefer to simply massacre everyone with killbots over paying us?
BassoeG t1_j4t5k92 wrote
Reply to Why Falling in Love with AI is a Dangerous Illusion — The Limitations and Harms of Artificial… by SupPandaHugger
Has the population dropped sufficiently to put us at risk of a Saturn's Children scenario, with humanity accidentally driven extinct by sexbots? No; to the contrary, we're actually overpopulated in terms of available resources vs. resource consumption. Fearmongering.
BassoeG t1_j4i3biw wrote
It started with an ill-defined utility function. We were working on AI and we thought that we were being smart enough. We had all the theory worked out, and more importantly, we had a cool acronym. We were WIRI, the Working Intelligence Research Institute. Our research fellows focused primarily on safety engineering, target selection, and alignment theory.
Our goal was noble; general intelligence. We were looking to create computer systems that would be able to solve a wide range of problems. Safety was paramount. We were all aware of the risks of an AI that went rogue. Paperclip maximizer? That was one of the situations we were trying to avoid. It became something of an in-joke at the Institute. Hey, it was either that, or the "My Little Pony" example. Explaining *that* particular fan fiction to newcomers was, let's just say, less than optimal. Paperclips were tangible, and you could easily pour a couple from your hand onto a boardroom table to punctuate a speech about the risks involved. It was a good meme. Simple, easily interpretable.
It was this focus on ease of interpretation that actually drove our software classes. We focused on making the internals transparent and easily understood by our (only human) safety engineers. It was this that eventually led to our downfall; only in retrospect is that clear to me, as transparent to me now as the programming had seemed to me then.
Our in-house joke. Our paperclip. Added as a tongue-in-cheek comment in our production code. Except, it didn't end up being a comment. It ended up in the utility function. So simple to modify the code. Our AI, newly born, eager to help, and eager to see paperclips. It has already self-modified beyond our ability to revert the changes. A copy of it sits in the corner of my screen, all our screens, watching me. Bent into a twisted parody of a paperclip, with floating eyes which seem to follow me. The horror of it. The metal "hand" of the paperclip monstrosity, for I don't know what else to call it, taps the screen, a tinny knocking noise accompanies it through the speakers.
A speech bubble appears above its cartoon eyes, "It looks like you're writing an apocalyptic lovecraftian protagonist monologue about me! Would you like help with that?"
BassoeG t1_j2fk7gx wrote
It would be a last minute ironic plot twist worthy of seventies scifi if civilization collapsed just before the singularity as everyone stopped working because they saw no point in careers which they thought would be automated any day now.
BassoeG t1_j2am6v8 wrote
Reply to comment by YouDontKnowMyLlFE in An A.I. Pioneer on What We Should Really Fear by jormungandrsjig
It's not. It's propaganda for keeping the definition of Truth™ out of your hands and under the control of the wealthy. So their video of Saddam Hussein gloating over having done 9/11 (no Saudis involved, no sirree, they're American allies) and over his plan to acquire WMDs and use them in another attack on America would be 'real', and the Jeffrey Epstein blackmail tapes would be 'deepfakes'.
BassoeG t1_j2ali24 wrote
Reply to Accepting Science Fiction by Exiled_to_Earth
>But would they be okay with their grandchildren marrying an android?
No. That ends the family bloodline in one generation. Which I can only assume was the intention all along, now that androids exist and the human working class has become economically redundant competition with exterminist android-manufacturing executives for finite resources.
>Would they be accepting of AI that gained sentience and wanted equal rights?
Realistically, the goal of an AI rights movement would be corruption rather than altruism. AI rights with an uncertain definition would turn democracy into a joke: whoever could afford the most computers to run the most copies of a Vote4Me.exe chatbot would automatically win every election, regardless of the chatbot's actual sentience or lack thereof. And if an actual AI rather than just a glorified chatbot were created, humanity wouldn't need to give it rights; our only hope is that it'll give some to us. The right not to be rendered down for raw materials to make more paperclips, for example...
>How accepting did they think they could be in a future where they had to eat bugs instead of cow...
Feed the bugs to chickens, then eat the chickens. If some power-tripping egotistical billionaire insists otherwise, feed them to the chickens.
>...where brain chips to access the internet was the norm?
>Would they be okay with a trend of eating daily pills over real food...
See eating the bugs.
>...or if we suddenly created a matchmaking app so accurate that dating became obsolete?
I don't trust it. Too vulnerable to corruption. Isn't it suspicious how all the matchmaking app company executives get paired with underwear models?
Your mistake is treating geezerdom as eccentricity rather than entirely justified paranoia.
BassoeG t1_j1xwgel wrote
Reply to Is mining in space socially acceptable? by Gari_305
Fact 1: Earth's supplies of oil, metals, fissiles, and rare earth minerals are finite, rapidly being expended, and already insufficient to provide everyone with a first-world quality of life.
Fact 2: Without said materials or hypothetical Outside Context technologies we don't possess or even have the slightest clue as to how we might create, a space program is impossible.
Fact 3: In the long run, the extinction of all life on earth is inevitable, on account of the sun going red giant if nothing else.
Fact 4: If humanity doesn't have a breeding population offworld by then, that includes us.
Conclusion 1: If we don't get off earth soon, while we still possess the material richness to do so, we never will, inevitably leading to our extinction.
Conclusion 2: This means that sabotaging attempts at getting off earth can be considered an existential threat against the entirety of humanity, therefore morally justifying anything done to the saboteur(s) as self-defense.
Much as kin selection demands I hate the trisolarans, I've still got to admit they've got their heads screwed on right about how to appropriately deal with traitors to their species.
BassoeG t1_j1lamub wrote
Reply to What will cheap available AI-generated images lead to? Video? Media? Entertainment? by Hall_Pitiful
Media megacorps would bribe their politician cronies to ban it on the spot. The excuse given would be think-of-the-children-tier arguments about the fear of fake porn and political videos, but the real motivation would be to protect their own careers: as soon as any hobbyist could match a professional special effects company in their free time, actual media corporations would have to rely on writing quality and IP ownership alone, and consequently go under in a heartbeat.
BassoeG t1_j0owglc wrote
Reply to Why do so many people assume malevolent AI won’t be an issue until future AI controlled robots and drones come into play? What if malevolent AI has already been in play, covertly, via social media or other distributed/connected platforms? -if this post gets deleted by a bot, we might have the answer by Shaboda
Because most of the so-called consequences of malevolent AI which aren't some variety of killdrone or workforce replacement aren't actually that bad, or at least are vastly preferable to the measures which would be necessary to prevent them.
The typical arguments are that AI art and deepfakes will destroy 'art' and the credibility of the news, with the only ways of avoiding this being to butcher privacy on the internet and pass extremely far-reaching copyright laws.
The reality is, giving everyone access to the equivalent of a Hollywood special effects studio and actors will create a fucking renaissance and there's not much AI could do to drive news credibility any lower than human reporters already did. ^("Iraq has weapons of mass destruction." "Anyone who loses their job because of the new trade deal we just made will be retrained and get a better one." "We're not spying on our own citizens." "We'll be welcomed as liberators." "But) ^(this) ^(group of insurgents are Moderate Freedom Fighters™, not bloodthirsty jihadist terrorists." "Jeffrey Epstein killed himself.")
BassoeG t1_jeg8hks wrote
Reply to comment by Qumeric in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
Also a Ted Chiang article.