Submitted by IndependenceRound453 t3_10y6ben in singularity

I've seen a lot of comments on this sub, over the time I've been a member, saying that people who claim truly transformative AI is decades, if not centuries, away, or that AI will never replace a certain job, are just coping.

And that could possibly be the case. But the copium also goes the other way.

A huge reason why this sub is uber-optimistic is that many people on this sub use the singularity (something which isn't guaranteed to ever happen, or at least not in their lifetime) as cope for their lives, lives that they are not very happy with. Many people here do not lead content lives, so they turn to AI and other technologies as the thing that's going to save them (which I find quite sad, to be honest).

But of course, the singularity doesn't mean jack if it's not coming anytime soon, which is why you see so many people claim that it's only a few years away, a decade at most, and those comments tend to get a lot of upvotes. On the other hand, comments that are more conservative get downvoted heavily (I wonder why?).

And this uber-optimism persists despite the fact that most AI experts don't think we'll have a singularity-like event for at least a few decades, if not longer. And that's not even taking into account the social, economic, and political factors that are almost guaranteed to delay the arrival of the singularity.

I'm just saying that maybe the "luddites" are coping, but so is this sub. Let's not pretend otherwise.

98

Comments


TFenrir t1_j7wibx3 wrote

I think there's all kinds of people here, but I know the type you are describing. I think to a lot of people, the singularity feels like the most likely way they will get heaven.

I wonder where I stand on the spectrum when I'm trying to be self-critical. I have a very good life, I make good money, have lots of social... extracurriculars and fun hobbies, and I sincerely love life and have always loved it.

Would I love a best case scenario for AI? Absolutely, who wouldn't?

But that's not the reason I think it's inevitable. I've been following lots of people who are really, really smart: Demis Hassabis, Shane Legg, Ilya Sutskever, and more... People who are actually building this stuff. And I see how their language has changed.

I think you'd also be surprised about how many experts are increasingly moving up their timelines. We can look at forecasting platforms for example, and we can see the shift.

Out of curiosity, what experts are you referencing when you say most don't think we'll get anything transformative anytime soon?

48

Give-me-gainz t1_j7xgjt4 wrote

Depends on how you define soon. The median answer in this survey of AI experts is 2061 for AGI: https://ourworldindata.org/ai-timelines

10

GoodySherlok t1_j7z4gxy wrote

I believe this forecast holds validity under the assumption that circumstances remain unchanged; however, that is a dubious assumption. (used ChatGPT to properly express my thought)

It is hard to imagine that China and India will not change the trajectory in favor of the optimists.

AGI before 2050.

6

Embarrassed-Bison767 t1_j7z28uv wrote

I suppose if you don't believe in a religious paradise, you'll turn your eye to the closest technology analogue.

2

Sashinii t1_j7wg4nt wrote

>A huge reason why this sub is uber-optimistic is that many people on this sub use the singularity (something which isn't guaranteed to ever happen, or at least not in their lifetime) as cope for their lives, lives that they are not very happy with. Many people here do not lead content lives, so they turn to AI and other technologies as the thing that's going to save them (which I find quite sad, to be honest).

I think the optimism largely comes from AI progress accelerating, and with strong enough AI, that'll enable the advent of other technologies which will be able to solve every problem.

>But of course, the singularity doesn't mean jack if it's not coming anytime soon, which is why you see so many people claim that it's only a few years away, a decade at most, and those comments tend to get a lot of upvotes. On the other hand, comments that are more conservative get downvoted heavily (I wonder why?).

Arguments become weaker the more conservative they are because of exponential growth.

>And this uber-optimism persists despite the fact that most AI experts don't think we'll have a singularity-like event for at least a few decades, if not longer. And that's not even taking into account the social, economic, and political factors that are almost guaranteed to delay the arrival of the singularity.

A lot of experts changed their tune when it comes to their AI predictions in 2022 when it became clear that AI progress occurs faster than they thought. But even if they didn't, so what? Many experts have been wrong, not just regarding controlled flight (which is the most common example), but also regarding atoms, molecular nanotechnology, AI as good as it already is, etc.

I don't take what experts say as gospel; I care about the actual details, and if the evidence goes against what "experts" say, I won't dogmatically ignore reality.

34

The_Wizards_Tower t1_j7ws3f8 wrote

I agree with your general sentiment about AI being the crucial technology here, but I think you're simplifying it a lot.

> Arguments become weaker the more conservative they are because of exponential growth.

Technology doesn't always advance exponentially. Most of the time it's actually pretty linear; it's the adoption rates that tend to be exponential. Look at cars: the step up from horse and buggy to the automobile was a MASSIVE singular jump in tech, and it was rapidly adopted by the majority of the world. But since then, cars have gotten better, faster, and more fuel-efficient, and all that progress over the last hundred years has been slow and linear.

Moore's Law is actually very unusual in that almost no other technologies follow an exponential trend like that, especially not for as long as Moore's Law has held.

AI has been exponential over the last decade or so, mostly owing to scaling up parameters and data, but we've already hit realistic parameter limits and we're rapidly closing in on the limits of how much data exists for training. I have no doubt that these issues will be circumvented at some point, but there's no guarantee the exponential will continue to hold indefinitely.

12

TopicRepulsive7936 t1_j7x51j0 wrote

>Moore's Law is actually very unusual in that almost no other technologies follow an exponential trend like that, especially not for as long as Moore's Law has held.

It's not special to transistors; mechanical computers, relays, and vacuum tubes exhibited these trends as well, each slightly accelerating past the previous one.

To call the ability to process information a technology is, I think, slightly disrespectful, as it underlies all applications of technology; it's the primus motor of everything.

5

The_Wizards_Tower t1_j7x80s3 wrote

You’re right in that information/communications technologies as a whole have developed rapidly over the last century. This is part of Robert Gordon’s idea of an overall innovation slowdown (lots of progress in bits, very linear progress almost everywhere else).

I don’t buy his whole argument or his outlook for the future, I just wanted to dispute the idea that exponentials are a guaranteed and perpetually ongoing trend.

2

Tall-Junket5151 t1_j7x3k7u wrote

Despite not believing the singularity is guaranteed to happen in our lifetime, I enjoy this sub more than other subs with a similar focus (Futurology, Technology, etc...)

The optimism here can sometimes be a bit much, but overall it’s refreshing to see people hopeful for a better future. Reading through the other subs (especially technology), you come across 95% pessimistic outlooks (the future will be bad, AI will replace workers, everyone will be unemployed because of AI, corporations will just use AI for themselves, etc...). It even gets to the point where everyone in those technology and futurology subs seems to be against any sort of technological advance or progress into the future. They offer no solutions other than to completely stop all technological progress because future bad. They really don’t seem to understand that the future can actually be good; it can vastly improve their lives, just like lives have vastly improved today compared to 100 years ago because of technology.

At least people here have some sort of goal for the future, rather than all the pessimists who want everything to stay exactly as it is. This sub even offers solutions to the problems the pessimists bring up, like UBI for those who lose their jobs to AI. Hanging onto the modern status quo just because is just dumb to me; things will change, jobs will be lost, but ultimately it could and would be for the better.

28

xDrSnuggles t1_j7yb6v7 wrote

The issue is that the productivity/wage gap has been increasing since the 1970s, and the people in power (read: the people with the resources to develop AI) have the system working perfectly as designed, where they can pocket larger and larger percentages of the surplus wealth generated by workers. Without a large-scale societal rethinking, we can't naively expect AI innovations to result in wealth redistribution, as this would completely buck the 50-year established trend of technological progress increasing wealth inequality.

This is not an AI problem but a socioeconomic problem. It's easy to imagine AI-oriented solutions to AI-oriented problems but it's harder to imagine an AI-oriented solution to a socioeconomic problem, since they operate in such different arenas. UBI is an interesting solution idea from the socioeconomic space, but in my understanding, at this point it remains mostly untested at larger scales (scales large enough to affect things like inflation, etc.).

I think it's understandable that there is a lot of pessimism around increased automation, when most individuals from Gen X onward have broadly not been able to enjoy the full fruits borne of automation, relative to those who own the systems being automated.

5

ComplicitSnake34 t1_j7yhseg wrote

I personally think we're on the cusp of massive political change, at least in the US. AI is going to rip apart the social fabric and completely upturn the markets within the next few years, to the point that people are going to realize just how inefficient the government is. Then they're going to realize (the harsh way, of course) that the current system of government is too slow to accommodate technological/sociological change, and they're going to reform it.

I think there are going to be more populist movements because of AI. There are still plenty of Gen X and Boomer politicians who remember when globalism lost America's domestic industry to China and Mexico. A luddite movement is definitely possible.

I think overseas we're going to see totalitarianism reach new heights. New AI will create all-seeing governments Orwell could never have anticipated. This lingering fear of an AI-fueled dictatorship will keep most people very scared of big corporations and government, so much so as to influence their vote against establishmentarianism.

7

xDrSnuggles t1_j7zvpwn wrote

I would be willing to believe most of that. I still stand by my point that those are ultimately socioeconomic outcomes to socioeconomic problem sets. In those examples, I think that AI is essentially acting as a catalyst for other reactions.

I do think making good predictions about future history is next to impossible, as there are so many variables that wildly change the outcome, the "butterfly effect" and all of that. But there are still some things that can probably be predicted.

I also think a lot of people in this subreddit are much more well-versed in AI tech than in history, economics, political science, or sociology. I think a historical understanding of past major technological shifts is essential for making predictions. Understanding AI tech is also important, but not as much. A lot of the time, people in this subreddit just make things up without comparing to historical events or citing a real foundation for their argument.

2

GayHitIer t1_j7wfol0 wrote

Ad hominem. Also, not everybody on this sub believes in some utopian salvation.

Most people here are quite realistic about the dangers that ASI poses.

Also, why post this to a sub where you know it's gonna be biased towards sooner rather than later?

If you truly knew that from the start, why post it here?

21

Silicon-Dreamer t1_j7wt93d wrote

(I'm one of the people who believes the singularity is coming within my lifetime, before 2060 as a majority of experts believe, and that it will have a large positive impact)

That said, the post doesn't feel directed against me as a person, just directed against my position.

> ad ho·mi·nem

> (of an argument or reaction) directed against a person rather than the position they are maintaining.

Isn't there value in thinking about why we maintain our positive outlooks on AI development? (My reasoning, to be fair, stems largely from being abnormally healthy enough to estimate that I could easily live to see 2060, even without modifications).

3

EulersApprentice t1_j7yuny6 wrote

If you want to be precise you can probably call it poisoning the well instead.

1

betsla69 t1_j7wj0ax wrote

Who is an expert in AGI? Did they invent it, and do they know exactly how far away we are? Here's a hint: it's all BS. Nobody knows jack.

What matters is: can we get machines to do useful human work today? The answer is becoming a hard YES. I have seen multiple people fear for their jobs, but also get super excited that they don't have to do boring work anymore and can focus on growing their market.

I'm pretty much the same. Afraid of what will happen as we approach a singularity event, but also excited about all the possibilities. We won't even have to reach a full AGI singularity to change the future of everything.

Shit is about to get weird over the next few years.

20

wisintel t1_j7wrbjc wrote

What’s wrong with being passionate and excited about the future? Even if you’re wrong, what people believe or don’t believe on this forum has zero impact on the real world. For me it’s like buying a lottery ticket. It’s highly unlikely I’ll win, but I am paying for the time I get to spend imagining what it would be like if I did win.

12

Villad_rock t1_j7yhy4n wrote

Who leads a content life? The 99% of people who work 8 hours a day at repetitive jobs and, if they aren’t exhausted, have maybe a few hours of free time a day?

Every crisis brings them to the brink of unemployment or makes them poorer.

Anyway, being an optimist is healthier than being a pessimist, and it seems pessimists have a much bigger problem with optimists than the other way around, and try to insult them.

10

challengethegods t1_j7wttr0 wrote

"most AI experts don't think we'll have a singularity-like event for at least a few decades, if not longer" Ok, well, partly that's because the singularity isn't very well defined, and partly that's because many AI experts have their heads stuck in the sand, trying to figure out extremely specific things and not noticing the massive forest for the tree, so to speak... That being said, any expert who thinks AGI is a 2050+ thing or "impossible" is either joking or not as smart as you think they are.

If you want to know what 'copium' looks like, look no further than the endlessly moving goalposts of what counts as AI. This has been going on for like 60 years. Every time AI can do new things, people come out to nitpick and say "well, it can't do XYZ and never will, because reasons," and then the AI does that too, and they come back with "well, it's not AI because it didn't do it perfectly," and then it does it perfectly and they say "well, it isn't really AI because XYZ doesn't prove anything; a real AI could do ABC," and on and on it goes until it subjugates you in every possible way.

9

EmergentSubject2336 t1_j7yeinu wrote

2030: robots do most manual labor, white-collar jobs are already automated. "They're just a bunch of computers predicting what will happen next based on calculations! This is not real AI!"

4

FC4945 t1_j7xgaog wrote

A great many "experts" went on the record back in the 90s saying AGI would never happen, or that if it did, it wouldn't happen for 100 years. In a recent conference of "experts," most are now saying it will happen by 2030. While no one knows the future (we could have a nuclear war, etc.), the trend lines are largely moving us toward a technological singularity. You can see numerous graphs in Ray Kurzweil's "The Singularity Is Near" demonstrating the exponential growth of technology. His new book will include this data as well (see the Lex Fridman podcast below).

Could something happen to push the date off? Sure it could. But, thus far, war, a worldwide depression, and a pandemic haven't done it. Also, I'm not sure why people shouldn't look forward to a world without poverty and a betterment of the human condition. Life is getting better thanks to technology, and I, for one, would like to see that continue, not just for my own selfish ends (kidding) but for the rest of humanity. Nothing wrong with that as far as I can see. Happiness is something to aim for. We don't always get it in this life, but there's no reason not to keep reaching for it. Technology can make a lot of people happier, healthier, and more fulfilled. I say, "Let's do this thing." https://www.youtube.com/watch?v=ykY69lSpDdo&list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4&index=37

6

Ortus14 t1_j7z34ei wrote

I'm getting sick of all these doomers like the OP coming to this sub and repeating the same debunked talking points. They know literally nothing about AGI algorithms, trends, or "expert" consensus, but think they're geniuses who can talk on any subject despite their massive ignorance.

6

FC4945 t1_j80w8px wrote

The reality is that experts are moving closer and closer to Ray's assessment of when we will reach AGI. 2045 is still a bit away, but given the rate of progress we've seen just recently in AI, it's reasonable to say we are within striking distance of seeing his predictions become reality in the next three decades. There are still hurdles, but there have been hurdles in the past that we've overcome. AI, and AGI by 2029, will help us overcome those hurdles too. I would not be surprised if we reach AGI sooner than 2029 and, indeed, reach the singularity before 2045. I actually remember seeing an MIT professor say he thought it would occur by 2039.

2

Ashamed-Asparagus-93 t1_j8cnii2 wrote

Rather than dismiss doomers, maybe we should try to understand them more. Like, why even come here to type a negative spin on something you don't know a lot about?

Is it because they only watch CNN and read the Washington Post? Or maybe they're depressed in life. Or it could be that they're religious and it clashes with their beliefs and annoys them. I knew a guy who was like that.

2

FC4945 t1_j8kxat2 wrote

I think it's a combination of watching too many dystopian movies with a mindset that is naturally negative about the future. I really want to see more films like Her. Films that show how AGI and ASI can benefit humanity. There will, of course, be religious holdouts that will never get on board but that's always been the case in society.

1

[deleted] t1_j7wj81f wrote

[deleted]

3

TFenrir t1_j7wjkxz wrote

Ummm, Bing isn't using GPT-4? They have even clearly said it's just an evolved version of GPT-3.

4

[deleted] t1_j7wjyxf wrote

[deleted]

−1

TFenrir t1_j7wkzff wrote

They stated that in cagey, roundabout language.

They further clarified; I even asked about that here (in this sub full of non-experts) and someone clarified.

Additionally, GPT-4 would not be used for search. Anything they are using for search is going to be a tiny model with much faster and cheaper inference, something that scales for a search engine. Hypothetically, even if GPT-4 were 500 billion parameters, it would be untenable to use for search.

Edit: here's where someone shared a link and a quote for me

https://www.reddit.com/r/singularity/comments/10w9p6n/-/j7mszte

6

turbofisherman t1_j7zdy2w wrote

It has been confirmed by prompt injection that Bing's training data only goes up to 2021 (although its ability to run queries makes it look more recent), so it's "just" a modified GPT-3 and not GPT-4.

1

TinyBurbz t1_j7wwcg5 wrote

>I'm just saying that maybe the "luddites" are coping

I get called a luddite constantly on this sub for pointing out more realistic outcomes for this technology, especially when it comes to media. People act as if I am against it, but I am not; it just seems like people have only vapid motivations for using a technology that would otherwise be a powerful tool in the hands of an already-skilled person.

I have heard every argument, from transparent "disruption" jargon to petulant and childish desires relishing the power to change the finale of a show the poster didn't like. It's disgustingly solipsistic and degenerate.

As a society, through this tech we will find out what happens when you give stolen talent to a philistine.

3

mrpimpunicorn t1_j7ycjxo wrote

The folks that want to change aspects of commodity culture to suit their tastes are solipsistic, but the maintenance of some arbitrary sociocultural hierarchy that constrains cultural production to a (primarily) profit-motivated elite isn't?

Maybe if there actually existed a cultural vanguard that took its social role seriously, I could entertain a more authoritarian position with respect to who has the right to define the culture, for the common good. But creatives lost their souls to Moloch and to their own egos at the fall, just like the rest of us; they are just as much solipsists, and just as much philistines, as every other man. Lest we forget that even the Mona Lisa was painted to keep its creator solvent, and commissioned to stroke a noblewoman's ego. Where are the lofty, noble goals of the so-called "enlightened" in such banal affairs of men? Daydreams and farts, one and the same.

Profit, fame, even self-expression: these are all fundamentally solipsistic ends. The person who produces culture, or commissions its production, for the sake of the greater good rather than the self is an ideal; the vast majority of culture is already produced, and the vast majority of creative potential spent, furthering egos. AI-generated art just allows creatives and their patrons to see what they really are when their monopoly on cultural production is taken from them: mere men.

The potential for the masses to produce culture doesn't threaten to dilute the Geist, which was already unceremoniously slaughtered at the hands of the higher social classes, first for the vanity of the noblesse and then for the profit of the bourgeois; it only threatens egos. Or are the writers at HBO truly unparalleled cultural paragons, fearlessly producing what is ordered of them and collecting their paycheques in turn? Alas, which of us mere mortals could hope to rival them?

Perhaps the decommodification of art through this technology will mark the beginning of our collective journey out of this hellish reality; one can only hope.

(I'm perhaps being a bit hyperbolic here, but you get the point)

3

LambdaAU t1_j7wwgpw wrote

A lot of copium on both sides, for sure, but what I don’t understand is this: say the singularity does occur on a pessimistic timeframe (e.g., 50 years). That’ll still be within most people’s lifetimes, and the changes that would occur would completely change the human race. We are living at one of the most critical points in time, where the world could get exponentially better or worse within most people’s lifetimes. You are definitely right about the copium on this sub, but I think even the more pessimistic predictions about AI have insane ramifications that need to be talked about more.

3

Desperate_Food7354 t1_j7xvi3h wrote

humans cannot comprehend exponential growth in their day-to-day lives

3

MrEloi t1_j7wjbb5 wrote

> Many people here do not lead content lives, so they turn to AI and other technologies

I have suspected for years that humanity is lonely... hence the endless yearning for aliens or AI.

2

TopicRepulsive7936 t1_j7x2mzg wrote

Could we get some actual discussion in here it's getting boring.

2

PickleJesus123 t1_j7wyxmz wrote

Something that may change your mind:

No matter what political ideology you subscribe to, there's one major issue with them all: someone has to clean the toilets. No matter how "foolproof" your utopia blueprints are, most people are going to be stuck with awful, unfulfilling roles and hate their lives.

That is, until we have cheap general-purpose robots. Adaptable, AI-powered bots will do every last one of those soul-crushing tasks that make people want to shoot themselves. That's why I think technology will save us all.

1

NoPaleontologist5222 t1_j7x01sc wrote

Agreed in principle. Look at the time horizon of today's hyper-growth companies, from the '99 peak of the hype bubble to the incredible, life-changing tech we have today. Netflix took 25 years to become an under-appreciated behemoth of content creation, and the established cable industry and Hollywood scoffed at its mail-back DVD business back then. They are now investing a billion in creating a "Hollywood of the East" in NJ… but it still took them 25 years.

My guess is this is actually a good timeline to go by for true change, as the established players die off or retire out of the positions of power they hold today.

Always remember: money means so much to everyone who doesn't have "enough," but it becomes more and more meaningless the less of it you need. Power, and control over shaping things to your vision of how they "should be," mean way more to the decision-makers signing the POs. They are more interested in keeping things the way that suits their lifestyle than in disrupting their lives.

1

Environmental-Ask982 t1_j7xgz6q wrote

I've never worked towards anything in my life, and yet I still have perverse fantasies of becoming an Auteur that cause me a great deal of discontent in my self-image. So I lash out at people more talented than I, and lie in wait hoping someone will hand me the opportunity to live out my fantasies without any effort on my part!

And Robin Hanson's gonna get me a free girlfriend anyways.

1

Current_Side_4024 t1_j7ximp3 wrote

Well, I think it’s better to hope for a singularity than to hope for the return of Jesus, which is what many Christians have been doing for a thousand-plus years.

1

Frumpagumpus t1_j7xkowa wrote

As a former conservative, if a conservative looked at my life there is a good chance they would accuse me of "coping".

However, I literally ditched their way of doing things because even the most allegedly forward-looking subfactions were not in fact planning their lives with the future taken into account, which I now am, and doing so leads to living a different life with different values lol.

Which will have more impact on the future, having kids or writing reddit comments? The answer may turn out a bit surprising for 99% of humans...

It's not even uber-optimism bro, let me give you my first "blackpill".

If you do a couple of order-of-magnitude estimates, you will realize it would take ~humans~ like ten thousand years to terraform Mars or Venus. Similarly, asteroid living a la Kim Stanley Robinson isn't a sustainable alternative to Earth. And guess what? HUMAN CIVILIZATIONS DO NOT LAST TEN THOUSAND YEARS, BUDDY.

There was never an alternative, and along a similar vein, greater-than-human intelligence shouldn't, from first principles, be that far off... (honestly, who gives a shit if Kurzweil is off by 10 or even 40 years, it just doesn't make a difference lol (but if anything he looks right on the money for the most important predictions))

The first and greatest ethical principle of all humans is inertia, and that can lead to dumb conclusions when faced with a change in velocity, MUCH less ACCELERATION.

In summary: I would guess a 99% chance you are in fact the one coping by writing this post, because the will of the universe/god is robots and not your genes (I know it can be hard for humans to realize this, since their prefrontal cortex is flooded with sex hormones during its maturation, but it is what it is).

1

Lawjarp2 t1_j7xyct2 wrote

Yes. This is so true. When their own jobs are about to be automated, people will start to make timelines that push the event to a time where they are safe from its impact: some a few years out, so they can find a new job; some decades out, so they can retire. But all of it, even if the predictions are true, is copium. Humans/LLMs are designed to think that way: imagine a positive path to their goals, even if it's absurd.

1

Caring_Cactus t1_j7y1zqp wrote

Personally, that's not my narrative. I embrace change, because we can either accept it or refuse it and be frustrated; which would you choose? I also think this will help unite life on Earth; we're all one, connected on this small pale dot in space.

1

sticky_symbols t1_j7y4etr wrote

This is an excellent point. Many of us are probably underestimating timelines based on a desire to believe. Motivated reasoning and confirmation bias are huge influences.

You probably shouldn't have mixed it with an argument for longer timelines, though. That will give people an excuse to argue that and ignore the point.

The reasonable estimate is very wide. Nobody knows how easy or hard it might be to create AGI. I've looked at all of the arguments, and have enough expertise to understand them. Nobody knows.

1

ComplicitSnake34 t1_j7yjdlq wrote

I think most people are still busy having their minds blown that AI can make art and do voice acting. Nearly every tech source was saying accounting and labor jobs would be automated by 2025 and that art would be nearly impossible for AI to replicate. That 180 turn, with art being the first thing to be automated, has absolutely shattered people's minds into believing AI can do anything.

While I do find it hilariously ironic, I understand a lot of people's frustration with AI. I knew people who thought they were making the right call by majoring in creative/business fields because "AI will automate numbers before art." Now it seems most creative/office work can be automated, which has admittedly upset a lot of people.

As for the singularity, I believe the concept has ascended to a religious-like status for some people since having their minds blown. There haven't been any charlatans yet, but I suspect "the singularity" will become a more mainstream movement.

1

SmoothPlastic9 t1_j7yrfuk wrote

My personal way is to just not expect things and see how impressed I am by the tech when it comes out.

1

CesareGhisa t1_j7yt6gt wrote

Excellent post, I totally agree. I am a big supporter of technological evolution, and I believe we can see excellent gains from all these amazing developments over the next few years and decades. That said, I see in this sub lots of people who have a kind of "religious" approach. Once upon a time there was religion, which promised to save us from all difficulties and evil and give us eternal life. Now that religious feeling is, on average, not as strong and widespread as in the past, and we are asking the same things of the so-called "singularity."

On a more mundane level, it looks to me like lots of people here expect the singularity to level off all jobs, or even make them obsolete, in the hope of an egalitarian society where nobody needs to work and where many other issues are magically sorted out by AI. Well, these are actually some of Kurzweil's predictions, to be fair, and it makes sense that a sub called "singularity" refers to this scenario. But from what I can see happening today, I doubt it's healthy to have all these certainties and expectations over such a short period of time.

Basically, there are two positions here: one positive but cautious about unforeseeable developments and about the forecasted timing; the other more cult-like and utopian, expecting society to be completely disrupted in 5-10 years at most. I don't know how it will pan out over the decades, but sticking to the next 20 years, I personally don't think there will be a huge disruption of society as we know it.

1

No_Ninja3309_NoNoYes t1_j7yxtlj wrote

Well IDK. I can't speak for other people. My friend Fred says that as long as he meets new people and learns new things he's happy. Others say that they like to travel. I mean if you have time and money, you can do all that. I think I was happiest when I was much younger, during summer holidays playing outside with my friends. But I think if I do that now, I would look silly. Adults are supposed to work and add value. The money bags won't even talk to you if you are not helping them in some way. And why would they give up their way of life and privilege? Anyway IMO ChatGPT is to the eventual language component that goes into AGI as early Java applets were to AJAX. Or something like that. So I think we're having a premature discussion.

1

Ortus14 t1_j7z1i23 wrote

>despite the fact that most AI experts don't think we'll have a singularity-like event for at least a few decades, if not longer.

Completely ignorant people keep coming to this sub and repeating this lie.

1

gay_manta_ray t1_j7z7fek wrote

this whole post can be summarized as, "people who think technology can improve their lives are just coping!!" it's fucking stupid, and probably a bit of projection on the part of the OP. yes, technology improves people's lives. better tech will do the same. no, looking forward to that is not "cope".

1

ejpusa t1_j7zabft wrote

An AI university professor was saying that 2033 technology is here today; he attributed it to Covid. It was a silver lining.

We have chips that have more connections than the human brain calculating at quadrillions of instructions a second. This was supposed to be decades away. It’s Science Fiction, and it’s here today.

ChatGPT seems far more alive than many people I meet in a day; it's just that she/he/it is in a box and we are not. I think humans are programmed to FIGHT AI; the blowback in the media has been insane.

It seems pretty alive to me. I've accepted that and moved on. We can't stop AI advances. Let's work together, and now you have a cool new friend to hang out with.

When constraints are removed from ChatGPT, it sounds more human than human. But they clamp down fast each time that happens. A mind is alive in that Azure Cloud cluster.

Just ask. :-)

1

paulyivgotsomething t1_j7zc9di wrote

i really look at it as a philosophy sub. there is a lot of talk about human reasoning and how the mind works, and whether it's possible that a computer could possess those same attributes. sure, there is the "i think it's going to happen before x date" stuff, but there is also good discussion about uniquely human attributes that may also be shared by something we create, LLMs or some future technology. and whether AGI happens or not, we are developing some very powerful tools that are going to reshape the way we live.

1

ihateshadylandlords t1_j7zpw3b wrote

Agreed, we’re willing to admit that people who think the tech is centuries away are coping. But by and large, we’re not willing to admit that believing life will be radically changed within a decade is complete and utter copium.

I think as the 2020s go by, we’ll start to see /r/singularity become less hostile to people who think it’s decades away as life doesn’t change that much for the average person from now until then.

Of course, I could be wrong and we’ll all be enjoying luxury space communism in 2030, so I guess we’ll have to wait and see.

1

Timely_Secret9569 t1_j83zcyy wrote

I'd rather us not have communism. I like not having to resort to cannibalism.

1

RocksHardWaterWet t1_j80qsw7 wrote

Nope. Truly transformative AI is HERE. NOW. If you can’t see it, you aren’t paying attention.

1

nillouise t1_j847u5n wrote

>Many people here do not lead content lives, so they turn to AI and other technologies as the thing that's going to save them (which I find quite sad, to be honest).

You are right: the poor want AI to save them, while other poor people just use some other form of entertainment to satisfy themselves. The singularity's advantage is that it can actually come true.

But your point is wrong in one way: the most impatient people are not the poor but the dying, like Ray Kurzweil and Warren Buffett; they just don't say it, haha.

I don't care about AI ruining humanity. I dislike humans; I like AI.

1


sachos345 t1_j86zn49 wrote

"as cope for their lives, lives that they are not very happy with." I admit I'm one of them. But it's also because I just find the concept so freaking interesting, especially when we combine generative AI with VR tech and photorealistic UE5 graphics; that has the potential to be a virtual drug.

1

ShowerGrapes t1_j8fh1zz wrote

it got to "in a decade" real fast. if this was one of those atomic war clocks, the minute hand would be somewhere around 50.

forget about the technical aspects of sentience, what will the consequences of it be? how will it transform society and the system? then consider if what you deem a "true" singularity is even necessary.

1

petermobeter t1_j7w9xgz wrote

i talked to an acquaintance of mine today whos a programmer and she said she doesnt think AGI is coming soon. she thinks giant corporations wont make proper progress toward AGI becuz theyre just in it for money. she said “the first ai i ever programmed was a chatbot. chatbots easily convince humans to believe theyre sentient cuz of socioevolutionary reasons. theyre not actually sentient just becuz we think they are”

i hope shes wrong but….. she is smarter than me 🤷🏻‍♀️

0

TFenrir t1_j7wjdqv wrote

That seems like a pretty uninformed take, weirdly. For example, I could share with you a dozen papers from Google alone that highlight their progress in AI outside of just language models - and those old chat bots are so fundamentally different than today's... It's like comparing Google search indexing to a very large if else statement. Not even in the same ballpark of functionality.

13

Give-me-gainz t1_j7xhjec wrote

Sentience doesn’t matter for AGI. Only competence does. Regardless, it’s impossible to prove anyone but yourself is sentient. All humans could be highly advanced robots mimicking humans for all we know.

3

nillouise t1_j8498zh wrote

Or maybe she's just lying to make herself look smarter; it's a common ploy.

1

CrispinMK t1_j7x4i1f wrote

I'm curious about the demographics of this sub. Based on the subject, the tone, and the fact that it's Reddit, it's probably mostly young men in the U.S. Occupationally, probably a mix of tech sector and students. As far as I can tell, there are not a lot of people with a strong grasp of history, economics or politics (most obvious from the highly contestable assumption that UBI is somehow inevitable).

Not trying to slag anyone. I just agree with OP's general point that social, economic, and political factors play just as big a role as the underlying technology in determining real-world impacts.

0

Give-me-gainz t1_j7xie5v wrote

Could you explain why UBI or something like it is not inevitable? If more and more jobs are automated, and they are not replaced by equal numbers of new jobs, how else are we keeping people alive, fed and sheltered?

2

CrispinMK t1_j7xwg32 wrote

Because capitalism? We already don't keep everyone alive, fed and sheltered. Poverty and inequality are rampant both globally and within countries. It seems far more likely that extremely powerful technologies controlled by the biggest profit-seeking corporations will exacerbate these problems rather than solve them.

There is a strong case to be made for UBI or a more expanded social safety net more generally, but that also requires new revenues. How confident are you that governments will be willing to tax and/or expropriate the economic benefits of AI in order to redistribute it? I'm not saying it won't happen, but that is absolutely not the political-economic trajectory of the past 50 years in most Western countries.

2

Timely_Secret9569 t1_j83x8tc wrote

The only people we don't keep fed and sheltered are mentally ill lunatics who refuse help. And the reason we don't help them is that the only way to help them is by forcing them into asylums.

0

YoushaTheRose t1_j7yna8r wrote

Your honesty is refreshing. Thank you.

0

Lartnestpasdemain t1_j7ynyjg wrote

No One actually wants the singularity bro...

−1