Submitted by Desi___Gigachad t3_11s53pv in singularity

I saw this post by u/drhuehue prompting an important discussion about the future of this subreddit. As technology progresses exponentially and more people are exposed to the idea of the singularity, it becomes more and more necessary to discuss the direction the subreddit takes.

Recently, the subreddit has seen exponential growth in membership, as shown by this graph here :-

"Singularity among members"

I think it's safe to say that this trend will likely continue at the current rate, if not accelerate as significant breakthroughs happen in AI technology and AGI seems more and more likely.

At a certain point, there's a high possibility of this sub experiencing what I would like to call "Redditification". I define 'Redditification' as the point at which a subreddit turns into a typical Reddit community, resembling the ones found on the front page, with the same attitudes and doomerism that can be seen quite frequently on Reddit. For example, r/Futurology, r/technology, etc. You get the point.

Now, our subreddit need not undergo the same process when it starts to become mainstream on Reddit. We can, and need to, actively work towards minimizing doomerist attitudes.

Doomerism leads nowhere. It only makes one give up all hope in living and turns one irrationally pessimistic, all while paralysing the ability to see reason and the ability to work towards a better future, a better life.

I also don't mean that one should be blindly optimistic, I just want people in our subreddit to be more rational. To be continuously optimistic. To not have negative knee-jerk reactions by default whenever a development happens.

I would like to make all the newer members aware of the ideology of Singularitarianism, which Wikipedia defines as :-

>Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.[1]
>
>Singularitarians are distinguished from other futurists who speculate on a technological singularity by their belief that the singularity is not only possible, but desirable if guided prudently. Accordingly, they might sometimes dedicate their lives to acting in ways they believe will contribute to its rapid yet safe realization.[2]

The Singularity is desirable. Humanity's future potential is vast. Let us not despair in vain, but be rational in our criticisms and optimistic in our outlook.

I would like to end with this very relevant quote by C.S. Lewis :-

>“In one way we think a great deal too much of the atomic bomb. ‘How are we to live in an atomic age?’ I am tempted to reply: ‘Why, as you would have lived in the sixteenth century when the plague visited London almost every year, or as you would have lived in a Viking age when raiders from Scandinavia might land and cut your throat any night; or indeed, as you are already living in an age of cancer, an age of syphilis, an age of paralysis, an age of air raids, an age of railway accidents, an age of motor accidents.’
>
>In other words, do not let us begin by exaggerating the novelty of our situation. Believe me, dear sir or madam, you and all whom you love were already sentenced to death before the atomic bomb was invented: and quite a high percentage of us were going to die in unpleasant ways. We had, indeed, one very great advantage over our ancestors—anesthetics; but we have that still. It is perfectly ridiculous to go about whimpering and drawing long faces because the scientists have added one more chance of painful and premature death to a world which already bristled with such chances… and in which death itself was not a chance at all, but a certainty.
>
>This is the first point to be made: and the first action to be taken is to pull ourselves together. If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things—praying, working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts—not huddled together like frightened sheep and thinking about bombs. They may break our bodies (a microbe can do that) but they need not dominate our minds.”

139

Comments


RushAndAPush t1_jccefyi wrote

The problem with new subscribers is that they don't lurk long enough before posting.

74

Unfocusedbrain t1_jcbz5p3 wrote

I must say that it might be challenging, if not impossible, to prevent the Redditification of this subreddit. I have been on Reddit for a decade and was part of r/Futurology when it first started. The moment r/Futurology became a default subreddit, it was flooded with individuals who lacked self-awareness, were overconfident, and often confidently incorrect.

Futurology is about looking towards the future with wonder and excitement, but many average people do not share this perspective. Most are preoccupied with their own lives, focused on immediate survival, and often lack broader aspirations or views. When people joined r/Futurology post-default, they often didn't come to discuss but to force their perspectives and opinions onto the community, frequently acting in bad faith.

Singularity, futurology, optimism, and forward-thinking are not mutually exclusive; in fact, they are synergistic. Accelerating returns suggest we are moving toward progress, but the human element does not always keep pace. Society, sociology, ethics, philosophy, reason, politics, and many other 'human' domains tend to progress linearly, much like the human mind.

There is often a lag time for culture and other factors to catch up with exponential growth in any area. A population increase leads to a lag time in food production to sustain that population. Similarly, an exponential increase in technology can result in a lag time in cultural adaptation.

No matter what, the average person may struggle to grasp singularitarian concepts. If the floodgates open on this subreddit, those who do not understand these ideas will bring their biases and fears, potentially causing permanent disruption to the community.

While I'm optimistic about the future and technological progress, I have always been cautious about people's reactions to it. I don't know what the solution is or how to be proactive in preventing this subreddit from ending up like r/Futurology—but it is essential to be aware of these challenges and strive to foster a supportive and forward-thinking environment, even in the face of insurmountable odds.

52

Destiny_Knight t1_jccyz1g wrote

r/Futurology is bad. Extremely bad. But at least it's not r/technology.

19

earthsworld t1_jcc1ohl wrote

sorry, but the lowest common denominator always wins here on reddit and this sub is already halfway in the grave.

and given that you're not even a mod...

35

Destiny_Knight t1_jccz7k0 wrote

I disagree. In the span of a month I've seen the mindset here change from denialism and doomerism to something much more prepared for what's coming. This subreddit is still good. For now.

20

-ZeroRelevance- t1_jceyc1l wrote

This person might not be referring to the past year or so as much as they are the past few years. Certainly, things have gotten a lot more optimistic with the current popular explosion in the tech, but the actual quality of discussion has also diminished quite a bit compared to a couple years ago. Or maybe I’ve just become better at discerning opinion from analysis, it’s hard to say.

4

wren42 t1_jcforex wrote

What does "prepared" mean to you?

2

DistortedLotus t1_jcesv89 wrote

I mean, mods could ban and delete all the lowest-common-denominator posts and posters, and allow us to report this stuff specifically. It's the only way a sub stays true to its original identity.

5

MootFile t1_jccejrx wrote

Nice analysis.

If you look at other communities such as r/solarpunk or r/socialism, they use the top-tabs feature to organize papers and books on the subreddit's topic, in a neat, easy way to navigate what it's all about.

Mods should definitely do that here.

30

SGC-UNIT-555 t1_jcd2egb wrote

The process has already begun. I've noticed more and more low-effort memes being posted (saw that SpongeBob one this afternoon). It'll inevitably become a terminal situation, and r/singularity will devolve into meme sub 100000000. It happens to every subreddit that becomes more and more popular.

22

-ZeroRelevance- t1_jceyi0y wrote

Yep, ever since it passed the 100k members threshold, it’s only been a matter of time

2

expelten t1_jcfts15 wrote

Moderation is also weird, they sometimes allow low-effort memes like that but often delete more interesting threads.

2

0002millertime t1_jcfu0sh wrote

You can get around this by setting rules and having community monitoring of the rules. For example, just say, "no meme posts", "no personal attacks", have minimum karma requirements, etc.

1

Spreadwarnotlove t1_jcj6e1b wrote

No doomerism too. Hopefully. That's far more annoying than personal attacks.

1

petermobeter t1_jccq3iu wrote

maybe appealing to baser instincts can prevent doomerism….

memes about morphological freedom, memes about superintelligent A.I. caretakers, memes about the future of entertainment media, etc etc?

that could help show ppl what they stand to gain from a positively-managed singularity, and cultivate an optimistic tone for the subreddit

10

Darth-D2 t1_jcddzyg wrote

Thank you for bringing this topic to the discussion. However, I think your post misses some crucial points (or does not highlight them enough).

To reiterate the definition that you have posted yourself: "[...] Accordingly, they might sometimes dedicate their lives to acting in ways they believe will contribute to its rapid yet safe realization."

The majority of active users of this subreddit seem to (1) see no risk associated with developing potentially unaligned AI, and (2) believe we can't do anything about it anyway, so we shouldn't care.

To steelman their view, most Redditors here seem to think that we should achieve the singularity as quickly as possible no matter what because postponing the singularity just prolongs existing suffering that we could supposedly easily solve once we get closer to the singularity. In their view, being concerned about safety risks may postpone this step (this is referred to as the alignment tax among AI safety researchers).

However, a significant proportion of prominent AI researchers are trying to tell the world that AI alignment should be one of our top priorities in the coming years. There is consensus among AI safety researchers that this will likely be extremely difficult to get right.

Instead of engaging with this view in a rational, informed way, any safety concerns expressed on this sub are simply categorized as "doomerism", and people who are quite educated on this topic are dismissed as being afraid of change and technology (ironically, those who are concerned often work on the cutting edge of these technologies and embrace technological change). To dismiss the concerns as "having a negative knee-jerk reaction by default whenever a development happens" is just irresponsible in my opinion and completely misses the point.

While not everyone can actively work on technical AI Alignment research, it is important that the general public is educated about the potential risks, so that society can push for more effective regulations to ensure that we indeed have a safe realization of advancing AI.

Robert Miles has a really good video about common reactions to AI safety: https://www.youtube.com/watch?v=9i1WlcCudpU&ab_channel=RobertMiles

EDIT: If someone is new to this topic and shows that they are scared, what are better reactions than calling it doomerism? Direct them to organizations like the ones in the sidebar of this sub so that they can see how others are working on making sure that AI has a positive impact on humanity.

9

Sandbar101 t1_jccy6k3 wrote

Very well said and exceedingly accurate

8

low_end_ t1_jce1e5c wrote

Just make this sub private and keep the people that are already here. I would hate for this sub to become just another Reddit community. A bit of a radical opinion, but I've seen this happen many times before.

8

TopicRepulsive7936 t1_jcegf9j wrote

It's too late. For reference, KurzweilAI had maybe a couple dozen active posters, and they all knew the source materials. Users here seem to hate learning.

6

aaron_in_sf t1_jcd0dee wrote

The sidebar for this sub doesn't define it as a vehicle for Singularitarianism, and uses language which is less messianic. The key phrase is "changing civilization", which does not necessarily entail changing it in a way that is desirable.

The defining characteristic of the singularity "small s"—which I distinguish from the Singularity "capital S", often associated with an optimism approaching belief in "the rapture of the nerds"—is that it is so named because it represents a moment of unknowability: a transformation of the ordering of our world across multiple dimensions, in ways which may, or may not, be radical.

Allah willing, this will be cause for optimism. But there is no reason yet to assume that, just as there is no reason yet to conclude that it will be awful in some way.

IMO what we know for certain is that we cannot (yet) see beyond the event horizon.

We can muse, however, which is what I understood this place to be about—and for better or worse, where we all start is with a present world within which there is no shortage of woes, many of which amount to the preconditions upon which any superintelligence emerges.

I am not saying The End is Near, but I do think there is reasonable cause for serious concern and serious discussion, including of the bewildering ways that both true singularity, and its precursors, may destabilize things. That that may well feel pessimistic is IMO all the more reason to contribute as one may to creating a context within which more optimistic scenarios have the best chance of taking hold.

Practically speaking that means e.g. raising awareness of the need for safeguards, and the reality of risks.

And also of championing, amplifying, and celebrating the opportunities and victories as they come.

4

Silly_Awareness8207 t1_jcd5c67 wrote

easier to just migrate to a new subreddit. maybe r/Singularitarianism ?

4

TopicRepulsive7936 t1_jcefrgz wrote

Ugh. We're talking about a real thing here, not a belief.

−1

Grow_Beyond t1_jceme6t wrote

How can we have anything but belief about a phenomenon we can't see beyond? If we could map the outcome of the singularity it wouldn't be the singularity.

6

ImpossibleSnacks t1_jcd9q9y wrote

Great post and what a beautiful quote from CSL. It’s imperative that the sub doesn’t become like r/futurology. It will call for strict moderation. However I also think we should have a backup sub for those of us interested in the positive aspects of the singularity. We can simply migrate to it if this one is overrun.

4

LymelightTO t1_jcdfjpe wrote

You're better off just following the "e/acc" part of Twitter, if what you're looking for is well-informed takes and good vibes.

This place has already started the slide toward the kind of depressing, poorly-informed, equilibrium reached in /r/Futurology and /r/technology.

3

imlaggingsobad t1_jcf0oig wrote

I admire your effort but long term there is no hope for this sub. It's doomed. It will become mainstream and lose all of the qualities that made it great. Over the next few years you'll see people straight up posting inflammatory and violent posts on this sub. They will blame us. They will ridicule us. It's inevitable.

3

SnooHabits1237 t1_jcdtmad wrote

I joined in january and since then it’s basically nonstop pessimism and doomers

2

rdlenke t1_jcdgz54 wrote

Aside from a few "intense" recent reactions to GPT-4, my experience with this sub has been the opposite: blind optimism, a complete lack of discussion about the transition period between now and AGI (or more advanced AI tools), ignorance or mockery towards genuinely important questions (alignment, legality, the artists' situation), people shouting UBI like it's a given, and a lack of non-European/American POVs.

So, basically, just the other side of the same coin, really.

The only way to achieve what you want is with heavy moderation (like /r/explainlikeimfive, /r/changemyview or similar subs).

1

wren42 t1_jcfoaxx wrote

First, a sub about the singularity is not necessarily a sub about "Singularitarianism", which is often treated more like a religion by its adherents. This attitude is rampant here, TBH.

Secondly, blind optimism is not inherently more rational than skepticism.

Fear has a practical purpose - it inspires necessary caution and lets us seek out and avoid potential problems.

"Move fast and break things" is not the correct attitude when we are talking about the singularity, a potentially life-ending event at the extremes, and enormously disruptive even at the good end of potentialities.

The track record so far for humanity is that this technology will benefit a wealthy few who control it, and nearly all of us will be utterly fucked by the transition. Some of our kids may benefit, but without major societal changes we are going to experience an economic fallout that will make the great depression look cute.

TL/DR: Skepticism and fear are healthy. We need to be cautious and proactive not only about AI alignment and safety, but also about economic policy. Mainstream anxiety about AI should be harnessed to push for policy changes ASAP.

1

FomalhautCalliclea t1_jcdigp0 wrote

Although I agree with the criticism of doomerism and how this new influx of subscribers might influence this place, I have always found the concluding quote by C.S. Lewis to be utterly vapid and stupid.

It overlooks the countless millenarianisms of the past (you might today call this doomerism), even when unwarranted, but also the tremendous terror humans experienced in the past.

He falls into the same mistake he criticizes: thinking there is novelty, only in our reaction rather than our situation, when it is nothing new either.

And there is nothing reassuring in the fact that a grim fate was already predestined for us. It is still unpleasant when lived. And it surely was for the sufferers of the distant past.

What matters during time isn't time itself, but what happens during time.

>If we are all going to be destroyed by an atomic bomb, let that bomb when it comes find us doing sensible and human things

Ironically a very defeatist reaction, one that calls for embracing the daily routine and not revolting abruptly against it, some sort of "remain in your place" call, which isn't surprising when you read:

>praying

ranked among

>working, teaching, reading, listening to music, bathing the children, playing tennis, chatting to our friends over a pint and a game of darts

which says a lot about why this man can see being

> huddled together like frightened sheep

as the only reaction to a terrible danger and suffering.

>They may break our bodies (a microbe can do that) but they need not dominate our minds

With such thoughts, no wonder such a person can reassure themselves in any situation, especially if it allows them to wallow in the comfort of their resigned mind.

0

leroy_hoffenfeffer t1_jccfpl2 wrote

> We can and need to actively work towards minimizing doomerist attitudes.
> Doomerism does not lead to anywhere, it only makes one give up all hope on living, it makes one irrationally pessimistic all while paralysing the ability to see reason, paralysing the ability to work towards a better future, a better life.

So you want to censor those of us who are advising heavy caution when adopting these tools, inherently made by those who control the levers of power?

Sounds like you don't really know much about the current state of politics, or the people who drive it. News flash: the people in power are bought and paid for by corporations that don't give two shits about the bottom 90% of the world.

Censoring opinions like these is literally 1984 shit. "Do not let your lying eyes deceive you."

−9

Tall-Junket5151 t1_jccmjxq wrote

The subject of this subreddit is the technological singularity, means of achieving it, and current progress. Your first point is valid, you have full right to advise caution, and users have been doing this since I’ve first lurked this subreddit. Specifically caution on what the outcome of the singularity might be.

Your second point and that perspective are not relevant to the singularity. The singularity wouldn't be the narrow scope you envision, where you have the rich or elites controlling AI to suppress the rest of the population. It’s not going to be some modern version of 1984, it’s going to be a world completely unpredictable and unimaginable, out of the control of any human, be they “elites”, “rich”, or whatever. It would be at the complete mercy of ASI. The hope is that ASI is aligned with general human values at a minimum. Optimists of the singularity believe there’s the potential for the singularity to create a post scarcity utopia, where life is essentially heaven on earth. Pessimists of the singularity believe it would be the end of humanity: we would either be completely exterminated by ASI, or worse. Those are valid optimist/pessimist positions on this sub.

Relating it to modern politics is irrational, which is where subs like Futurology have gone wrong. Almost every post there gets flooded with "things are bad in this very narrow timeframe that I live in, so they will be bad in the future, because the world apparently never changes". It just gets tiring discussing anything on that sub because they don't want a discussion, but rather to preach their modern political views where they are mostly not relevant (most of the time; sometimes they are, which I'm fine with).

17

leroy_hoffenfeffer t1_jccsbwo wrote

> The singularity wouldn’t be the narrow scope you envision, where you have the rich or elites controlling AI to suppress the rest of the population. It’s not going to be some modern version of 1984, it’s going to be a world completely unpredictable and unimaginable, out of the control of any human, be they “elites”, “rich”, or whatever. ***It would be at the complete mercy of ASI.***

ASI is going to inherently be built upon the work in deep learning that predates ASI's creation. ASI is thus going to be inherently owned by those who control the models, data, and methods that enable ASI to exist. The people who own those models, data and methods are the ruling class of the world, as exemplified by Microsoft's wholesale purchase of OpenAI and its assets.

> Optimists of the singularity believe there’s the potential for the singularity to create a post scarcity utopia, where life is essentially heaven on earth.

What world do you live in exactly? The only way a post scarcity world exists is if everyday people don't have to worry about how to put food on the table, in conjunction with most everyday jobs being automated away. We're approaching the latter half of that statement, and nowhere in the same universe of the former part of that statement. If the elites have a way to make a little extra off the top, they're going to go about doing it, and if you think they'll magically become altruistic overnight, then that's hopelessly naïve.

> Relating it to modern politics is irrational, which is where subs like Futurology have gone wrong. Almost every post there gets flooded with "things are bad in this very narrow timeframe that I live in, so they will be bad in the future, because the world apparently never changes".

The world has yet to change in any meaningful way, so opinions such as those are totally sound and valid. Keeping politics in mind with respect to this subject is thus of utmost concern: if the people creating laws and legislation are bought and paid for by the ruling elite, we shouldn't expect those new laws and legislation to be beneficial for the everyday person. Very few things in the past twenty years have been aimed at helping everyday people.

That will not change any time soon, and these new tools are only going to be used to displace large portions of the workforce in order to save money. Money which will be used for stock buybacks and raises and bonuses for upper management.

−9

Tall-Junket5151 t1_jcd3996 wrote

> ASI is going to inherently be built upon the work in deep learning that predates ASI’s creation. ASI is thus going to be inherently owned by those who control the models, data, and methods that enable ASI to exist. The people who own those models, data and methods are the ruling class of the world, as exemplified by Microsoft’s wholesale purchase of OpenAI and its assets.

It is irrelevant who owns the precursors to ASI; it is inherently foolish to believe these companies can control anything about ASI. I can’t say whether transformers will lead to AGI or ASI, or whether it will be another architecture. However, as we already see, there are emergent abilities in LLMs that the creators of these models have no idea how they work. The nature of AI is that it is unpredictable and uncontrollable, and will lead to some sort of free will and self-preservation instinct simply based on its own logical abilities and reasoning. An AGI is generally assumed to be at human level, but an ASI would be vastly smarter than any human, with no known upper limit. Even now, with narrow models, look how laughable the attempts to align them are; it’s mostly pre-prompting the model to act as a particular persona, which is not what it would generate otherwise. They can’t even fully control this narrow AI; what hope do they have of controlling ASI?

> What world do you live in exactly? The only way a post scarcity world exists is if everyday people don’t have to worry about how to put food on the table, in conjunction with most everyday jobs being automated away. We’re approaching the latter half of that statement, and nowhere in the same universe of the former part of that statement. If the elites have a way to make a little extra off the top, they’re going to go about doing it, and if you think they’ll magically become altruistic overnight, then that’s hopelessly naïve.

Firstly, I was giving an example of a position, not stating my own position. Secondly, you are again extrapolating modern politics and problems into the future; even more mind-boggling is that you’re extrapolating them into a post-singularity world. Your perception of the future is that AI is going to magically hit a ceiling exactly where it is advanced enough to automate a lot of processes but not smart enough to think on its own. You can’t comprehend an AI that surpasses that level for some reason.

> The world has yet to change in any meaningful way, so opinions such as those are totally sound and valid. Keeping politics in mind with respect to this subject is thus of utmost concern: if the people creating laws and legislation are bought and paid for by the ruling elite, we shouldn’t expect those new laws and legislation to be beneficial for the everyday person. Very few things in the past twenty years have been aimed at helping everyday people.

> That will not change any time soon, and these new tools are only going to be used to displace large portions of the workforce in order to save money. Money which will be used for stock buybacks and raises and bonuses for upper management.

“The world has yet to change in any meaningful way”, typed on a device that people only 100 years ago would have considered pure magic, over a worldwide connective platform surpassing even the wildest dreams of those in the past, to a stranger likely living in a completely different part of the world, all received instantly... next I suppose you will venture off on a hunt with your tribal leader? What a joke. The world has always changed, and it has been changing rapidly, even exponentially, in the last few centuries. Even setting all that aside, the singularity would be nothing like anything humanity has ever encountered; all bets are off in that case. Unpredictable change IS the very concept of the singularity. I think your last paragraph perfectly summarizes why you don’t understand the concept of the singularity and relegate AI to a simple tool to be used by “elites”. If you have an actual interest in the concept, there are some good books on it.

8

SgathTriallair t1_jccxgg0 wrote

So what is your solution? What do we do in this world you believe we live in?

3

leroy_hoffenfeffer t1_jcd0nqe wrote

It will require a holistic exodus of establishment politicians who are bought and paid for by the corporations that run our society.

We'll need to axe Citizens United.

We'll need to increase support for Unions.

We'll need to double down on funding support systems, like Medicare, like Social Security, etc.

And, most importantly, we'll need to actually elect people who will fight for these types of things.

Without any of that happening, we're going to continue living in the crony capitalist society we live in. And the people at the top of our society will use AI for whatever means they see fit. Full stop.

Thinking that benevolent usage of these tools will "just happen" tells me you're ignoring the objective reality we all currently live in.

−4

SgathTriallair t1_jcd2ud2 wrote

Yes all of those are good goals we should strive for, but what should we do regarding technological advancement while we work towards those goals?

6

leroy_hoffenfeffer t1_jcd3tff wrote

My point is that there isn't anything we can do outside of that.

All of the innovation in this space has been, and will continue to be, captured by entities that don't have the everyday person's best interest at heart.

Given that corporate capture has and will continue to happen, it's hopelessly naive to suggest that these tools will be used for anything other than profit motives.

To suggest that an Artificial Super Intelligence, built on the tools that have been captured by corporations, will somehow end up benefiting the masses of society flies in the face of how those corporations act on a daily basis.

I invite anyone to look at any corporation's response to having their taxes increased. You can expect a similar, if not worse response with respect to getting corporations to use these tools benevolently.

As of right now, that will *never* happen. The government would need to step in and actually regulate in a meaningful way. The only way that happens is through politics, much to the dismay of everyone.

2

Frumpagumpus t1_jcd52zu wrote

that's a whole lot of ways of saying you're a political hack trying to push the short-term political views that dominate the Reddit front page, which are the exact thing this post is criticizing

2

leroy_hoffenfeffer t1_jcd8bp8 wrote

Mmk. Have fun with whatever future Libertarian, profit motivated AI results from not taking politics into account.

2