World_May_Wobble
World_May_Wobble t1_jdeal1y wrote
Reply to comment by DreaminDemon177 in How will you spend your time if/when AGI means you no longer have to work for a living (but you still have your basic needs met such as housing, food etc..)? by DreaminDemon177
I joke, but I do see this as the long suicide of our species. There's a Skinner box in each of our futures.
World_May_Wobble t1_jddxzqm wrote
Reply to How will you spend your time if/when AGI means you no longer have to work for a living (but you still have your basic needs met such as housing, food etc..)? by DreaminDemon177
11am - Wake up
12pm - AI-powered VR porn
1pm - AI-powered VR porn
2pm - AI-powered VR porn
3pm - AI-powered VR porn
4pm - DnD
5pm - DnD
6pm - DnD
7pm - DnD
8pm - Browse Reddit
9pm - Vidya
10pm - Vidya
11pm - Vidya
12am - Vidya
1am - Write poetry under the stars.
2am - AI-powered VR porn
3am - AI-powered VR porn
4am - Sleep
World_May_Wobble t1_j8zam87 wrote
Reply to What would be your response to someone with a very pessimistic view of AGI? by EchoXResonate
My response to him would be, "Yeah. Those are legitimate concerns, but the subservient, narrow AI may kill us before the rogue AGI does."
World_May_Wobble t1_j8s49z7 wrote
Reply to comment by megadonkeyx in Bingchat is a sign we are losing control early by Dawnof_thefaithful
Why though? It makes it harder to get work done, and helping us do things is ostensibly what it's there for.
World_May_Wobble t1_j6inou9 wrote
Reply to comment by reidlos1624 in Private UBI by SantoshiEspada
No, I think you're right about how the economic model works today. If population flatlines, consumption does too, and production follows, both because there's no growth in demand and because there are no new bodies to work.
World_May_Wobble t1_j6ihei4 wrote
Reply to comment by reidlos1624 in Private UBI by SantoshiEspada
I think something like half of annual GDP growth comes from population growth, and the rest from gains in productivity. That's not a feature of capitalism, because even the Soviet Union's output benefited from more bodies.
That said, yes. The flattening of population growth hurts economic growth, but that might be mitigated by removing labor as a bottleneck. Human consumption could stagnate, but there's no reason productivity has to, so the economy could continue to grow.
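To make the arithmetic concrete, here's a toy sketch (my own made-up numbers; "growth rates roughly add" is only an approximation, and the 50/50 split above is a rough recollection, not a sourced figure):

```python
# Toy model: GDP = population * output per worker,
# so growth rates roughly add: g_gdp ~ g_pop + g_productivity.
pop_growth = 0.01           # assumed 1% annual population growth
productivity_growth = 0.01  # assumed 1% annual productivity growth

gdp_growth = (1 + pop_growth) * (1 + productivity_growth) - 1
print(f"Growth with both: {gdp_growth:.2%}")  # ~2.01%

# Population flatlines, but automation lifts productivity instead:
gdp_growth_flat_pop = (1 + 0.00) * (1 + 0.03) - 1
print(f"Growth with flat population: {gdp_growth_flat_pop:.2%}")  # 3.00%
```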
World_May_Wobble t1_j6g9lwy wrote
Reply to comment by SantoshiEspada in Private UBI by SantoshiEspada
I have an opinion. I expressed it. My opinion is that none of this is as obvious as you made it out to be.
World_May_Wobble t1_j6fv9bh wrote
Reply to Private UBI by SantoshiEspada
>I think by now we all agree that a highly automated society can become a post-labor society and therefore should resort to a wealth distribution instrument such as universal basic income.
Uh. Don't put me down for that. I don't have expert knowledge in macroeconomics, so I don't have a clue what an automated economy looks like. Not even a guess. North and South could become East and West for all I know.
Anyone here who claims greater certainty than that is lying to you or to themselves.
World_May_Wobble t1_j64axax wrote
Reply to Asking here and not on an artist subreddit because you guys are non-artists who love AI and I don't want to get coddled. Genuinely, is there any point in continuing to make art when everything artists could ever do will be fundamentally replaceable in a few years? by [deleted]
A machine can cycle oxygen through an air sac. Is there any point to breathing?
You'll do art for yourself, because you enjoy it. There won't be a commercial incentive to do it.
I've enjoyed writing poetry for 20 years now. It's a dead art. My sonnets were never going to pay the rent. I understand what it's like to practice something that has no monetary value, and visual artists soon will too.
World_May_Wobble t1_j595e36 wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>I don't have to 'justify' anything, that's not what I'm trying to do. I'm raising questions, not peddling answers. I'm trying to be a philosopher about AI, not a priest.
I've seen you put forward firm, prescriptive opinions about how people should think and about what's signal and noise. It's clear that you have a lot of opinions you'd like people to share. The title of your OP and almost every sentence since then has been a statement about what you believe to be true. I have not seen you ask any questions, however. So how is this different from what a priest does?
You say you're not trying to persuade anyone, then follow that with a two paragraph tangent arguing that AI needs to be handled under the paradigm of psychology and not economics.
You told me you weren't doing a thing while doing that very thing. This is gaslighting.
World_May_Wobble t1_j58u3xz wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Those examples are tedious and unrealistic, but I think by design. They're cartoons meant to illustrate a point.
If you want a more realistic example of the alignment problem, I'd point to modern corporations. They are powerful, artificial, intelligent systems whose value function takes a single input, short-term profit, and discounts ALL of the other things we'd like intelligent systems to care about.
When I think about the alignment problem, I don't think about paperclips per se. I think about Facebook and Google creating toxic information bubbles online, leveraging outrage and misinformation to drive engagement. I think of WotC dismantling the legal framework that permits a vibrant ecosystem of competitors publishing DnD content. I think of Big Oil fighting to keep consumption high in spite of what it's doing to the climate. I think of banks relaxing lending standards so they could profit off the secondary mortgage market, crashing the economy.
That's what the alignment problem looks like to me, and I think we should ask what we can do to avoid analogous mismatches being baked into the AI-driven economy of tomorrow, or we could wind up with things misaligned in the same way and to the same degree as corporations, but orders of magnitude more powerful.
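As a toy illustration of "a value function that takes a single input" (entirely my own sketch, with made-up weights): anything outside the function's signature is invisible to whatever optimizes it.

```python
# Toy "corporate" objective: one input, so the optimizer
# literally cannot see anything else.
def corporate_value(short_term_profit: float) -> float:
    return short_term_profit

# What we'd want weighed never enters the objective above at all.
# Weights here are hypothetical; choosing them is the hard part.
def aligned_value(profit: float, user_wellbeing: float,
                  climate_damage: float) -> float:
    return profit + 2.0 * user_wellbeing - 5.0 * climate_damage
```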
World_May_Wobble t1_j58r1hr wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>... 'absolute truth' is a load of nonsense ...
Is that absolutely true, "bro"?
If we can put aside our mutual lack of respect for one another, I'm genuinely, intellectually curious. How do you expect people to be moved to your way of thinking without "Cartesian-style explanations"?
Do you envision that people will just feel the weakness of "Cartesian thinking"? If that's the case, shouldn't you at least be making more appeals to emotion? You categorically refuse to justify your beliefs, so what is the incentive for someone to entertain them?
Again, sincere question.
World_May_Wobble t1_j57qkzx wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Invested readers will note that he didn't provide any concrete explanations here either.
World_May_Wobble t1_j57prno wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
I see you're having some trouble with verbosity. Let me help you.
>... arrogance, conceit and self-importance?
These words mean the same thing.
World_May_Wobble t1_j57owtb wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
That was a very butthurt response.
>It's not my fault you don't understand what I mean; 'storyteller' is not a complex word.
I think it actually is, because there's no context given. How does a storytelling AI differ from what's being built now? What is a story in this context? How do you instantiate storytelling in code? It has nothing to do with reading comprehension; there are a lot of ambiguities you've left open in favor of rambling about Descartes.
World_May_Wobble t1_j57gz8q wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>So it would be better, I think, to build 'storyteller' minds that can build up their senses of ethics independently, from their own knowledge and insights, without needing to rely on some kind of human 'Ten Commandments' style of mumbo-jumbo.
Putting aside the fact that I don't think anyone knows what you mean by a "storyteller mind," this is not a solution to the alignment problem. This is a rejection of it. The entire problem is that we may not like the stories that AIs come up with.
World_May_Wobble t1_j57fn7f wrote
Reply to comment by Baturinsky in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
Don't worry. So is OP.
World_May_Wobble t1_j57fjpq wrote
Reply to The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
So. You don't say anything about AI. You don't offer any ideas about what a useful approach to alignment looks like. You don't even justify your animosity toward Descartes or illustrate his relationship to AI safety.
I can't tell you that you're wrong, because you've literally said nothing.
World_May_Wobble t1_j3kkm8w wrote
Reply to What will humanity do when everything is, well, eventually discovered by ASI? by Cool-Particular-4159
It's not obvious that there's only a finite number of things to learn.
>I'm pretty sure the only real 'purpose' of humans will be to just live out our lives and then die
That has been (and frankly continues to be) the human experience for the vast majority of people.
World_May_Wobble t1_j392jpw wrote
Reply to comment by Quanlib in The future: our POV (ft. AIart issue) by thebestmtgplayer
I misinterpreted you to mean that it felt soulless. But whether the consumer sees the soul in a piece has more to do with them than with the artist.
In any case, yes. Artists are bound to be negatively affected, and I'm surprised I haven't seen anyone with the receipts to show how their commissions fell off a cliff. I'm sure it's happening to a lot of people.
World_May_Wobble t1_j389ajx wrote
Reply to comment by Quanlib in The future: our POV (ft. AIart issue) by thebestmtgplayer
>There’s nothing AI can currently produce like the organic expression of actual human artists
If that were the case, you'd think it would be easier to tell what is and isn't AI-generated art, but people are routinely failing at this in both directions.
World_May_Wobble t1_j376cl3 wrote
Reply to comment by IcyyWizard in The future: our POV (ft. AIart issue) by thebestmtgplayer
>"there's no point for you to create any more".
If you were confronted by a robot that cycled oxygen through an air sac, would you give up breathing?
I've enjoyed writing poetry for decades, since before AI could do it, but it has been a dead art since before I was born. The market for it is small and the barrier to entry is low, so I was never going to make money from it, and practically no one would ever see it, but I still do it for me.
Visual art is going to the same place, just much more suddenly.
World_May_Wobble t1_j1wbu2i wrote
Reply to Driverless cars and electric cars being displayed as the pinnacle of future transportation engineering is just… wrong. Car-based infrastructure is inefficient, bad for the environment and we already have better technologies in other fields that could help more. An in depth analysis by mocha_sweetheart
1. Sure, but it doesn't need to be environmentally friendly to be the pinnacle of engineering. Nothing's less environmental than a Dyson sphere, and that'd be pretty impressive.
2. I don't want to have to walk in the rain or heat to a bus stop, hauling my stuff, wait for the bus, sit through a circuitous route, stopping at places I don't want to stop at, get off the bus, and walk through the rain or heat, again hauling my stuff, just to meet up at a friend's house for some board games. THEN DO IT AGAIN ON THE WAY HOME! That's not freedom. No. I did that until I was 30. Never again. Absolutely not.
3. Trains have pre-determined destinations. They don't go where you want to go. You know that. They cannot solve the problems that a driverless car can.
4. Yes, and?
5. So put the vehicles on a subscription service and run them like Uber.
6. We're not going to replan, tear down, and rebuild Houston and Chicago as part of some kind of top-down initiative. If you want more compressed cities, you can either take this to Mars or wait for additional growth to fill in the empty space.
World_May_Wobble t1_je73nie wrote
Reply to comment by naum547 in The Limits of ASI: Can We Achieve Fusion, FDVR, and Consciousness Uploading? by submarine-observer
You can set its intelligence arbitrarily high; the fact remains that it may still bump up against hard physical constraints already familiar to us.
It's naïve to assume that everything is possible.