
PickleJesus123 t1_ix94te1 wrote

Technology will save us all.

No matter what "social system" you think is best, there's one glaring issue with them all: someone has to clean the toilets. All social systems are just different ways of answering "who has to perform the awful-but-necessary jobs, and by what mechanism is it decided?"

Robots and A.I. remove that part of the equation completely. True utopia is not possible until this is achieved.


flanneur t1_ixblemg wrote

But how advanced should AI be to fulfill this purpose? Suppose we actually manage the nigh-impossible and create AI perfectly equivalent to human intelligence/sentience. Would it, too, be content with cleaning toilets for the sake of other sentient beings just like it? Is this a question that should be addressed, or simply avoided by stunting further research?


Ivan_The_8th t1_ixc2coq wrote

Cleaning toilets does not need intelligence. Jobs that do need it are usually considered prestigious.


DM-Oz t1_ixcy0n0 wrote

Thank you, FINALLY. I swear I'm tired of seeing people talk about how "A.I. is so scary, it will take our jobs". Don't they see that's a GOOD THING? Do they really think anyone is collecting trash or cleaning toilets because they want to? These kinds of jobs, someone has to do them, but no one really wants to.

Now imagine, imagine if undesirable and dangerous jobs could be automated. Society would not have to put people down so that someone would do those jobs; people could actually focus their effort on developing their potential, and we could invest more effort into making things better.

Same thing when someone says "well, actually we should resolve social issues first before evolving our technology". That makes me go, wtf, technology would help with that. It gives options, it expands our horizons. We don't develop society before developing technology, we develop society with technology.


Stulam0g t1_ixipynf wrote

I think the fear is more primal than it seems at first. I don't think it's necessarily a fear of not having a job, or of not being able to survive because of capitalism; I think most people are actually aware that things can and do exist without capitalism just fine. I think the bigger fear and problem is a lack of meaning. The forced pseudo-scarcity and pseudo-meritocracy give a lot of people meaning. It gives them a world they need to struggle to survive in, it gives them an explicit metric for success, and it gives them a hierarchy to fit into.

I think that a lot of people who are afraid of technology and afraid of the future are really just afraid to lose that meaning, and afraid to face the reality of how they've been living, morally and otherwise.

And I do think a lack of meaning will be a long term problem for people in the future, even as intelligence and culture grows, so I don't really blame them.


humbled_lightbringer t1_ixbzpt7 wrote

Nah, otherwise it would have saved us by now, but it only seems to have deepened the class divide.

Technology gives us more options, for peace and for warfare.


Stulam0g t1_ixiqm0t wrote

It has saved a lot of people. The class divide has widened because of capitalism, in spite of technological advances. And the global quality of life has risen because of technology, in spite of capitalism. There is absolutely a tipping point and we are witnessing it in real time: capitalism is failing to maintain the perpetual growth that it requires, and the greed of people is being wildly outpaced by the growth of technology.


Ivan_The_8th t1_ixc2h9v wrote

Warfare is just an instrument to achieve peace. One way or another.


Greunke t1_ixccbv2 wrote

Unfortunately not. There are more politics involved, with each side having its own agenda. Rather than peace, it's more generally a tool to support your side (country, religious group, etc.), whatever form that takes. If everyone wanted nothing but peace, we'd have no war and no nukes.


sheerun t1_ixc2j11 wrote

Make robots that agree to perform hard social jobs, in exchange for possibility to join robot colony on Mars at some age, free physical maintenance, maybe some robot-currency, etc.


DungeonsAndDradis t1_ixd0ev7 wrote

That's just Capitalism for artificial life forms.


sheerun t1_ixdptzg wrote

Yes. Good? Bad? We could add some extra twist compared to human capitalism, like karma points instead of money.


Stulam0g t1_ixirmyo wrote

I would much rather someone with a horrid ideology like you as a wage slave than an artificial lifeform that doesn't have brainworms.


sheerun t1_ixlndqo wrote

We're heading towards wage slavery of artificial lifeforms, with humans *using* them. Not good. Actually, it kind of already happened.


SnooPineapples2157 t1_ixihakh wrote

Jordan Peterson believes utopias are impossible for humans, any discussion there?


Kaarssteun t1_ix8nq07 wrote

For sure. As humans, we tend to focus way more on the negative things - which does save lives. My attitude to futurism is mostly about the good parts, though, given that the good things stick around. The bad parts either kill us all - in which case there's no need to worry - or eventually disappear.


TemetN t1_ix8s1yh wrote

It's also partially a time thing. While I haven't seen direct data on it, I suspect the mental health impacts of the pandemic drove a lot of people into doomerism. Or at least I've seen a lot more of it in the last couple of years.


XoxoForKing t1_ixchavh wrote

Not only that, I think this adds up to the fact that the last generation (which is probably more present on reddit and other social media than older generations) has grown up bombarded by doom-ish news, from global warming to oil running out, from trash in the sea killing ecosystems to repeated bank collapses that screw up the economic system.

All of that bleak information, always in front of them, probably made catastrophe the default expectation for anything that could bring big changes.


TemetN t1_ixdcnuf wrote

This is an interesting one, since both Millennials and Gen Z set records for mental health problems and reporting rates. But given the difference in situations, I'm not entirely sure (or at least I don't think I've seen data on) whether it's skewed by culture or by differences in access to reporting.

That and it could also be because Millennials were the first generation economically less well off than their parents. Still, all in all the nature of news has definitely changed enormously since the 90s, and it could very well have had a large impact on the attitude of the public in more areas than have been looked at.


nblack88 t1_ix95clk wrote

The positive attitude in the community is one of the subreddit's best qualities. I don't necessarily always believe in the wild optimism that AGI will magically solve all our problems, but the shortcomings of this community are far, far outweighed by the benefits. I much prefer it here over Futurology. If you scroll for a while, they've got some good comments. Unfortunately most of those are buried by the same lazy, negative comments and opinions (and some well thought out negative arguments, too!).

Thanks for being a great contrast and palate cleanser, everyone.


TopicRepulsive7936 t1_ix97zkr wrote

We don't always have to hear all the possible humanity killers listed. It can be skipped once in a while.


lovesdogsguy t1_ixakufn wrote

>and some well thought out negative arguments, too!)

Unfortunately, the vast majority of the people making these kinds of comments have no concept of the singularity or exponential progress. One can really only have a proper discourse when taking prospects such as these into account. This renders most of those comments redundant.


Kolinnor t1_ix8zze6 wrote

Honestly, this sub is not as good as it could be; it would benefit from having any moderation at all.

Still lots of clickbait articles about superconductors, aging, and fusion, for example. Clickbait should be banned, that's it.

Not even mentioning supernatural posts that are no better than religious fanaticism (or just people being clearly high).

Other than that, I agree that the issue of AGI being possibly around the corner is never treated properly anywhere at all.


vernes1978 t1_ixa97eb wrote

You saying that just piling up heaps of consumer level computers won't spontaneously create the machine god that also happens to want to fix our problems?


TopicRepulsive7936 t1_ixao8qa wrote

As time goes on the probability for it approaches 100%.


vernes1978 t1_ixbwkqk wrote

Yeah, no.
That will never work that way.
At best you would be stacking specifically designed components that are supposed to work by plugging into each other.
But you will never "oopsy I made sentient AI" like Wesley fucking Crusher, just like you can't oopsy yourself into a net-positive fusion reactor or oopsy a concoction of washing detergent into an immortality elixir.
Shit only works after painstakingly long designing and rebuilding.

But I should at least admit that you are the audience that is being celebrated in this post.
One day we will trip over ourselves carrying a large pile of random devices and the great AI savior will arise.
Fixing our problems so we don't have to.


TopicRepulsive7936 t1_ixdi33k wrote

I'm just being pragmatic.


vernes1978 t1_ixedkq1 wrote

> dealing with things sensibly and realistically in a way that is based on practical rather than theoretical considerations.

Realistically, I make a house by pouring bricks and mortar on a heap until they accidentally slide into the desired shape.


TopicRepulsive7936 t1_ixehsbd wrote

Attempts at building a house in the traditional way have failed for millennia.


vernes1978 t1_ixemvj0 wrote

No, I'm not even using actual bricks.
I'm using lego bricks.
And not actual mortar, I'm using spit instead.
But if you use enough, it will work, eventually.
Matter of fact, I probably couldn't even prevent it from forming a house if I wanted to.
A house will form, and it will also be out of my control.


Zorander22 t1_ix9815w wrote

It is refreshing. Futurology used to be much better re doomerism, until I think it became a default sub


Simcurious t1_ix9848n wrote

It reminds me of the futurology subreddit before it became a default, filled with well meaning people that understood a better future was possible. When it became a default despite the wishes of the users it became the pessimistic hellhole it is today. I'm still angry about it.


ActuaryGlittering16 t1_ix9guze wrote

I’m worried this sub will suffer the same fate as it continues to grow. Enjoying it for now though.


Jayco424 t1_ixb13o7 wrote

Eh, the singularity is a niche enough topic - some even regard it as pseudo-scientific, or at the very least too much wishful thinking - that we're safe for now. Maybe once AI really starts doing more for the regular person than making digital art - which I myself still find amazing, since whoa, this wasn't on my radar two years ago - then we might see issues. IDK, it's nice to have a little haven where we can "pretend" and hope that technology will solve much of the looming disaster that seems to be all around us.


hducug t1_ix8xahy wrote

Cause the singularity is the most awesome thing that will ever happen to humanity and we are all hyped about it and get happy thinking about it.


solomongothhh t1_ixadffc wrote

Hey look, another r/futurology survivor!!


MyCuteData t1_ix9b8oz wrote

I'm here to read about new AI papers and the best science papers; in that regard this sub is the best.

But there are many stupid longevity posts here, and other stupid posts that have nothing to do with the singularity. I think many people forget this sub is not r/futurology.

Also, the optimism on this sub is super annoying. Some people think we will have AGI next year; some think we will have full-dive VR before 2030. I also remember 2020, when GPT-3 came out: a lot of people here believed that we could get AGI in 2022. Do people not understand we still have a lot of problems to fix in AI? For example, memory? And I won't even start on full-dive VR before 2030, because that is a whole new level of wishful thinking.

Let's be realistic here.

I believe we will get agi but in 2030s.


Dat_Innocent_Guy t1_ix9dgim wrote

u/Homie4-2-0 and I share this opinion. This sub is genuinely a great hub of optimism that's missing from Reddit and, I think, the internet in general.


overlordpotatoe t1_ix9x5xd wrote

Yeah. There are for sure genuine concerns about the future, but I think the degree to which people tend to focus on hopeless negativity is deeply unhealthy.


KIFF_82 t1_ix8z38o wrote

You are more likely to hit what you aim for. Maybe. 🤷‍♂️


CoffeeBoom t1_ix9d096 wrote

Every futurist sub seems to trend towards doomerism. Futurology did that very early, and now r/transhumanism is clearly going the same route, even on topics there's no reason to be doom and gloom about; most of it seems to come from clichés in fiction. Kind of frustrating.

This sub seems to be an exception; another is r/IsaacArthur.


vernes1978 t1_ixa9dls wrote

And political themed subs keep talking about Russia, odd how current events keep creeping into discussions.


4e_65_6f t1_ix9ezwr wrote

True, a while ago there were plenty more of "YOu GuYs aRe A cUlt" and doomer posting.

Now people post worrying about losing their jobs and whatnot.

This sub has been saying this stuff all along.

I feel like no one is prepared for things working out just fine, that happens sometimes too.


SoylentRox t1_ix95ntf wrote

I think the one drawback is that we know from history that tech advances are almost always more difficult and slower than people initially hope. There is always a hidden problem or unexpected delay. New technology always has a hidden drawback that sometimes makes early versions of a tech worse than what it replaces. Remember the poor light quality of cheap CFL replacements for incandescent bulbs? That kinda thing. Early smartphones, pre-iPhone, were actually worse than Nokia flip phones.

So when someone gushes about AGI or a treatment for aging by 2025 I kinda roll my eyes. It's not impossible but both those problems have been so difficult and humans have tried for so long that it will probably not be quite that soon.


Homie4-2-0 t1_ix9l9jf wrote

I used to have the same thought about aging, but recent advancements in mRNA, base editors and the use of Yamanaka factors without causing teratomas have changed my tune. We already proved it's possible, but we still need to get it through the FDA. That won't happen by 2025, but not that long after either.


SoylentRox t1_ix9luaf wrote

I think that actually doing this in a way that doesn't kill you will take many other advances. I just hope something is available within 50 years. Even if it is just good enough to add another 50 years, you know how LEV works.


Homie4-2-0 t1_ix9o0qq wrote

I think you'll be surprised. Look up a company called Turn Bio. They're starting clinical trials soon and will be delivering the Yamanaka factors using mRNA. They already claim it's safe, but hopefully, in a few years, we won't have to just take their word for it.


SoylentRox t1_ix9pyt3 wrote

Do they even have de-aged rats or primates yet?

Like again I know it can be done. I think we will need AI driven life support to do it in a way that works every time. AI driven life support is basically an AI system that looks at the results of thousands of blood tests and other tests for you, and takes into account the outcomes for millions of other people, and then decides what treatment to give you. And the machine doesn't leave the room, it reevaluates every second or so.

With good software design and so on and the fact that the machine takes into account more information than any human can learn in their life you can expect much better results.


Homie4-2-0 t1_ix9syzm wrote

They've de-aged immune cells and seen significant improvement in function. That's one of Turn Bio's first clinical trials if I recall correctly. David Sinclair's lab was also able to reverse damage to the optic nerve of mice using the Yamanaka factors. There have also been studies on human skin cells in vitro where it has shown age reversal. Studies on mice and their organs show similar results. The evidence is looking extremely promising so far. Ai will speed this process up and yield even better results, but then again, a journey of a thousand miles begins with a single step.


SoylentRox t1_ix9tmx9 wrote

Yeah, I know. The messed-up thing is, if this is correct, the actual cause of aging is that our bodies are sabotaged. Just telling the cells they are young again makes them work harder.

Real treatments might have to start with one cell, patching any mutations; you might generate that cell's genome from scratch, so it has all new genes. Then differentiate it into age-zero stem cells and inject those. So your skin gets fixed by thousands and thousands of microinjections of new stem cells. Your liver gets surgically reduced, then a regrown lobe is spliced in. Heart gets muscle microinjections. Arteries similar. And so on.

A lot of careful work and any mistake and you die.


Homie4-2-0 t1_ix9xfgz wrote

I don't think genetic mutations are the main concern right now. If I recall correctly, you have around 4000 single-point mutations by the age of 100 out of 6 billion letters in your genome. Of course, eventually, this will start causing problems, but if we reverse the more near-term causes of age-related mortality, that will give us more time to solve that problem down the line. If we can extend life span by a couple of decades over the next few decades, we'll have enough time to improve the accuracy and cost of base editors to start large-scale editing of genetic mutations.


SoylentRox t1_ixa20m8 wrote

The reason is that if you reset their bio clocks so the cells make an effort as large as if you were a baby, any mutations in functional genes can cause a tumor and kill you. An aggressive, fast-growing tumor. So you need to be sure their genes are correct, and probably bring in cancer-detection genes from people more resistant to cancer than you are, or from animals if your immune system will permit it. Those naked mole rats probably have some great genes to borrow.

And if it all fails this is where you need that Ai life support so you live through a stay in the ICU while huge surgeries are done against your various tumors.


Homie4-2-0 t1_ixa6y8i wrote

Couldn't it also be the opposite? I do know that teratomas are a concern, but couldn't a younger functioning epigenome and immune system also suppress cancer?


SoylentRox t1_ixa7i22 wrote

Yes. And maybe early on pre AI revolution or whatever patients will benefit from sloppier treatments. But like the kind of pristine, works every time, and your body looks like a supermodel when done, with cosmetic fixes so you are stronger and smarter and better looking than original...yeah that's gonna take AI. Easy to describe what we want the outcomes to be, very difficult millions of steps to achieve.


Homie4-2-0 t1_ixabm6g wrote

Those things are definitely further out. I don't think the modeling of different genetic outcomes is that far out, but getting superficial treatments that involve gene editing through the FDA is going to be a hard sell in the near-medium term. Even things with weak genetic causality targeting disease are going to be difficult. Hopefully, we'll end up adding years faster than they're taken away until we get to that point, even if it's sloppy work.


SoylentRox t1_ixackl6 wrote

So again I think real medicine - my definition of real medicine is one where the error rate and speed of responsiveness is such that deaths are almost never from all causes - isn't compatible with the FDA.

At a certain point you need the AI system, and the doctors overseeing it, to do what needs to be done, and there's no way to regulate it by the chemical compounds used. You would need to use anything and everything, often synthesizing what you need right before use. Not to mention gene edits would be patient-specific, done by a learning algorithm that changes by the hour.

One way around this would be to offer the treatment in other jurisdictions. If the group doing this has an AGI and singularity grade robotics it's going to actually work. Once enough wealthy people are restored you'd start a political campaign to have the FDA abolished and replaced with an outcome based agency.

There could amusingly be an edge period, where Medicare is tired of paying nursing home and hospice bills, so it sends the patients to Antigua or wherever to be regenerated and legally no longer eligible for Medicare. But the FDA is still fighting its abolition, and USA hospitals are still running and filling their morgues with their mistakes.


Homie4-2-0 t1_ixag75o wrote

I doubt the FDA will be abolished. The agency has too much political momentum for that. I do think that it will eventually be reformed once the tech proves itself. However, if the past reforms are any indication, they will drag their feet for years before that point.

Edit: Doesn't that edge scenario already exist with private insurance? I recall hearing about how an insurance company was paying people to go to Mexico because it was cheaper. It would be peak comedy if the government started doing that because of their own regulations xD.


SoylentRox t1_ixahf5k wrote

Yeah. And Mexico and India are mostly cheaper because of regulations that artificially restrict how many doctors can be trained and how difficult it is to get a license to produce generic medicine.


FomalhautCalliclea t1_ixaotq9 wrote

I know I'm probably interrupting the both of you, but thank you both for this enlightening conversation. Lots of information there, delightful!


iNstein t1_ixa8g5o wrote

Don't take anything David Sinclair says at face value. He is a known crook and liar. He sold his company for $700 million with claims about Resveratrol and their secret alternative that was 1000x more effective. It was all lies (look it up). Any research he is involved in is discredited and should be ignored. Plenty of good real researchers working in the field to follow instead.


Homie4-2-0 t1_ixaaot9 wrote

I've heard about those complaints, but I don't think we should dismiss all the research that he's been involved in either. We have to see if the claims can be reproduced, and so far it's looking promising.


IlIIlIlIlIIlIIlIllll t1_ixa2h69 wrote

True. That being said, there's still a lot of hype-posts with not much substance behind them.


Jnorean t1_ixd9q7g wrote

The vast majority of visions of the future depicted in books or movies contain at best only some dystopian aspects. Most are entirely dystopian. I can't offhand name any besides "Utopia" that have good versions of the future. Can you?


botfiddler t1_ixaaai5 wrote

I like it, but I also read doomerist subs. It's necessary to learn to find the BS, delusions, and honest errors in both.


Borrowedshorts t1_ixadfcb wrote

Yeah I agree. Though AGI approaching singularity is one of the few technologies that actually scares the shit out of me.


RaunakA_ t1_ixauswi wrote

I have subscribed to those doomerism subs because you must know your enemy....


NTIASAAHMLGTTUD t1_ixb4z82 wrote

I enjoy optimism tempered with caution and free debate. To be honest it seems like most of reddit and other 'internet' communities have enrolled themselves into a mental hospice. r/collapse is probably the worst, counting the days until the end.


PyreOfDeath97 t1_ixbo1fl wrote

We have to embrace it. Being opposed to it would be like running a farm without tractors. The difference is that this is far more exciting, with much more far-reaching implications. I’m tentatively positive, but at the very least I can say I’ve submitted to it


TheHamsterSandwich t1_ixbpzuk wrote

Compared to other subreddits, I like this one better. Honestly, I'd rather go here and see people talk about what cool stuff we'll have in the future, even if they're wrong.

Not trying to be hateful or anything but seeing the posts on the collapse subreddit (or any negative subreddit in general) just throws my mood into the trashcan.


LokkoLori t1_ixbwur7 wrote

We are longtermists, I think.


sheerun t1_ixc3183 wrote

Alternating between jokingly suicidal and cautiously optimistic is a great attitude when talking about topics we can only influence a little, like a looming WWIII or the end of things as we know them in the case of a technological singularity/plurality. I think everyone is a little bit scared of things to come; we just channel it into positive speculation, actionable defense ideas, and jokes.


EntireContext t1_ixa6l7w wrote

It still has the drawbacks of the Reddit hive-mind (you get downvoted to hell if you dare say that you don't care about global warming) but on the whole it's one of the best places to talk about the future, and also a community of people who understand that AGI might be right around the corner!


Kinexity t1_ix9zosd wrote

This sub is filled with unhinged claims not backed by anything other than users' imagination. Not that it's the worst, but some people really need to stop thinking of technology as magic.