Comments

magvadis t1_j3jwezp wrote

This seems like a lot of complex words to say something you could say in a sentence that everyone already agrees with. Young people are agents of change because they do not adopt the privileges of the system. Working within a system inherently asks you to question it. This is a recurring basic element of any system.

85

lizzolz t1_j3kghz3 wrote

I agree. The headline pontificates way too much. The mark of good writing is to convey something complex in a relatively simple way, not to make grand verbose statements that mislead people.

42

EarthTrash t1_j3kowoa wrote

The latest generation is destroying civilization? Never heard that one before.

21

cannaeinvictus t1_j3kkhmy wrote

“Due to the introduction of vastly new minds” doesn’t make sense at all. “Vastly” and “new” don’t go together; it reads like the writer is forcing them to.

10

Gmroo OP t1_j3kz20v wrote

Yes, "vastly" was supposed to precede "different", which didn't fit. I changed one word and forgot to change the other one... mind fart.

2

Gmroo OP t1_j3l009u wrote

With all due respect, this is not the topic...did you even read it?

It's about the introduction of new mind architectures with new subjectivity and the consequences of that. It answers the question: "What happens to civilization when we can actually augment our minds and create all sorts of AIs?"

9

magvadis t1_j3opl3t wrote

New agents. New children. New ai...AI is still Intelligence.

1

Gmroo OP t1_j3plzf9 wrote

Categorically and drastically different than merely new humans. That's the whole point of the post.

1

magvadis t1_j3r9q6f wrote

I really don't see anything here but a label that still applies.

1

Ohgodgethelp t1_j3lxfdc wrote

No, that's not exactly what it says. Your framing makes it sound like a 1960s civil rights revolution. This is about increasing complexity in the way new generations' minds work, causing them to be incompatible with what came before.

3

ThePokemon_BandaiD t1_j3mvej4 wrote

its not about people at all... its about the future of artificial intelligences or transhuman minds

3

magvadis t1_j3opfok wrote

Given that one instance applies to any function that fits it, I don't see how something that works for civil rights in the '60s doesn't work for general systems in many situations.

1

Ohgodgethelp t1_j3oxg4y wrote

That makes no sense. You're stating 1) young people are agents of change 2) because they do not accept the previous generation had certain privileges.

Article states 1) young people result in a changing social system because 2) they absorb and process data in a completely different level of complexity.

The civil rights movement of the '60s could be seen as a result of this, but it's a big assumption. In any case, that does not mean the civil rights movement of the '60s is EQUIVALENT to this, any more than a dog having four legs means all quadrupeds are dogs.

The example in the article could also apply to older generations being unable to have any sort of discourse with younger generations due to more complex modes of thinking. You could say that, for example, grandpa going crazy because of facebook is a result of a younger generation processing information at such a level that they could literally hack grandpa's brain.

It's not too many complex words to say a simple phrase. I'm afraid you didn't grasp what was being said and you oversimplified it for your own understanding.

1

Prineak t1_j3kdly4 wrote

“There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable.

There is another theory which states that this has already happened.”

~ Douglas Adams

52

Excellent_Fig3662 t1_j3hqonp wrote

This really seems quite ridiculous and sensationalist to me. Major emotional emphasis is placed on the word “collapse.” The real problem is going to be what it has always been, lack of equality, justice, intelligence within the social order.

41

Gmroo OP t1_j3huj9n wrote

Did you actually read it and do you have an actual argument?

11

Excellent_Fig3662 t1_j3hxzob wrote

What you present as science fiction here already exists in reality through inequality. Well-nourished and cared-for minds are going to automatically have an advantage over materially deprived and disadvantaged minds. Your science fiction provides a narrative escape from reality, doesn't deal with reality at all, doesn't live in reality. Mass suffering is already taking place because of inequality and superstition.

5

Gmroo OP t1_j3i6imo wrote

I don't deny issues of inequality, but this is simply not the topic here. And you've brought no argument regarding the actual content of the post.

17

Excellent_Fig3662 t1_j3j9o6f wrote

Let me simplify. The biggest problem we face is not from sensationalized computers but human psychology sabotaging our species, more specifically, individuals in power using weapons of mass destruction.

1

WhollyHolyWholeHole t1_j3ju6mf wrote

Did you read it though?

9

Excellent_Fig3662 t1_j3jvtpi wrote

Just about to the end, it was really more than I could suffer. It’s very unserious, but the author is intelligent and has a good mind. He should try reading sociology.

1

Helios992 t1_j3ilxes wrote

I think it's about how social structure changes over time, such that not only does knowledge about it become history, but "experience" related to it becomes invalid as well.

2

Symboliboi t1_j3iagqz wrote

You don't think a "collapse" of everything down into a problem with equality, justice and intelligence is a bit... limited? It seems like you would inevitably miss something and likely recreate the problem you were trying to address, simply because you have acted as if you have found the absolute truth which you almost certainly have not. I would agree those three things are important and have always been problems, but it seems to me that we don't have the required understanding to truly fix these issues in an adequate way.

10

JackofAllTrades30009 t1_j3lrqxi wrote

If you have a more expansive model, then go ahead and propose it. Calling something “limited” on its face without even intimating at an improved model is vacuous.

5

kgbking t1_j3l3lzv wrote

>The real problem is going to be what it has always been, lack of equality, justice, intelligence within the social order

A lack of a sense of community too.

Rampant individualism goes hand in hand with inequality, injustice, false consciousness, etc.

5

k3170makan t1_j3icyr2 wrote

And what point did we ever achieve equality and justice?

1

VersaceEauFraiche t1_j3jrbsy wrote

It is non-falsifiable and intentionally ambiguous as to mean anything you want it to mean at any rhetorically advantageous juncture.

1

MidnightAnchor t1_j3lvuir wrote

Human beings are not built to serve justice. Equality, on the other hand... that would be really nice. But it requires justice.

1

MaxChaplin t1_j3lyp55 wrote

What form would justice and equality take in a world where minds span the entire spectrum from lizard to chimp to human to superhuman, or where people can create an army of clones of themselves? Social theories formulated in the 19th century are not ready for this.

1

Excellent_Fig3662 t1_j3mg0wg wrote

What dumb science fiction are you throwing around like reality? An army of clones? 😂dude you’ve watched way too many Disney movies. Our existential threat is the proliferation of weapons, specifically weapons of mass destruction, in the hands of immature and violent humans. There are very successful social systems in the world that eliminate human suffering and level out inequality; see Rutger Bregman, Utopia for Realists.

2

MaxChaplin t1_j3mnikf wrote

"Disregard dangers from technologies that do not yet exist" is a heuristic with a rather poor track record, when you consider the costs and benefits. In particular, anyone who followed it in 1930 would have told you that bombs strong enough to pose an existential risk to humanity are impossible. And indeed, at that time it wasn't obvious that they were possible.

You can't be confident that the technologies of the following century won't redefine the meaning of being a person, and a century is not much by historical timescales. Even if there's only a 5% chance, it's something worth preparing for.

(The army of clones comes from Robin Hanson's Age of Em, by the way)

3

Excellent_Fig3662 t1_j3mr0xv wrote

Sure, but you’re still engaged in science fiction as an escapism from the real world. The more important question you should be asking yourself is why your psychology is drawn to this? That’s all I have to say.

1

MaxChaplin t1_j3mwux3 wrote

Why? Because I'm a progressive. I want to stay on the pulse of social progress, which means not waiting for society to force me to adapt. New society-shaping technologies will almost certainly appear and will force us to reexamine our values. Those who refuse to do so are doomed to become conservatives.

Science fiction (and fiction in general) has always been a useful tool for social progress. The hypothetical scenarios allow readers to stress-test their beliefs and moral instincts, and to resolve internal contradictions that familiar real world scenarios couldn't.

2

Archelon_ischyros t1_j3kfgh4 wrote

What a fucking ridiculously unintelligible heading to this post.

8

lizzolz t1_j3kgw9m wrote

I wholeheartedly agree.

Please, just ELI5!

2

Gmroo OP t1_j3l256o wrote

Once we create new minds, they will be so different that all communication will break down and we won't be able to predict each other's behavior or states.

Like if you cry now, I can make the reasonable assumption you are sad or in pain. We take tremendously many assumptions like this one for granted because we're all humans and the diversity is quite low compared to a civilization that builds new types of minds.

So I argue this is a disaster waiting to happen.

2

lizzolz t1_j3l4b4a wrote

> So I argue this is a disaster waiting to happen.

It does sound dystopian. But can you elaborate on this?

2

Gmroo OP t1_j3l4zgi wrote

The post elaborates on it. I just tried to think through the logical consequences of what happens when you introduce basically alien minds into a civilization that caters 99% to one kind of mind. Dystopian or not, it is what it is.

3

LobsterVirtual100 t1_j3ms8yu wrote

What’s your thoughts on us returning to a visual language through the embrace of AI and these new minds?

I could see as AI normalizes, AI generated visuals becoming their own form of language and communication due to its basic form of being an image translated from words and concepts.

We already see this slightly with memes and I’d argue it proves the opposite of any collapse, due to the collective sharing and understanding of what most memes essence is.

1

Gmroo OP t1_j3mv47a wrote

I think we need to work on figuring out what sorts of universal languages may be created and may exist. For example, exchanging knowledge graphs.

I don't think current memes prove anything, in the sense that with the introduction of new minds we'll have a whole other world of minds on our hands.

So tendencies of current minds are not that relevant.

Just imagine entities whose behavior completely doesn't jibe with what you're used to from humans. We're used to inferring each other's states because we're so alike. That's how evolution optimized us. But there is no universal principle that this needs to be the case. At all. Hence a total collapse of intersubjectivity once we have a "free for all" of mind designs.
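To make the knowledge-graph idea concrete, here's a toy sketch. Two agents with different internal vocabularies exchange facts as (subject, relation, object) triples over a shared vocabulary; the terms and the mapping table are entirely made up for illustration:

```python
# Facts agent A wants to share, expressed in a shared graph vocabulary.
AGENT_A_FACTS = {
    ("water", "boils_at", "100C"),
    ("ice", "is", "solid_water"),
}

# Agent B thinks in different internal tokens; this (hypothetical)
# translation table maps shared-vocabulary terms into B's own terms.
B_VOCAB = {
    "water": "H2O",
    "boils_at": "phase_change_temp",
    "100C": "373K",
    "ice": "H2O_solid",
    "is": "category",
    "solid_water": "H2O_solid_phase",
}

def translate(triples, vocab):
    """Map each term of each triple into an agent's internal vocabulary,
    leaving unknown terms unchanged."""
    return {tuple(vocab.get(term, term) for term in triple) for triple in triples}

received = translate(AGENT_A_FACTS, B_VOCAB)
print(("H2O", "phase_change_temp", "373K") in received)  # True
```

The hard part the post points at is exactly what this sketch hand-waves away: for genuinely alien minds there may be no shared vocabulary to map through in the first place.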

1

Gmroo OP t1_j3kyy6m wrote

Sorry, I tried to put it in one sentence with just 270 characters. Once we augment minds and create new minds with AI, we will have catastrophic communication issues due to the diverging subjectivity of these minds.

2

RenlyTheLast t1_j3lnl3c wrote

“Catastrophic communication issues” and “diverging subjectivity of minds?”

So… communication issues because people think differently? Just say that, bud, this isn't English 203, lol.

1

Gmroo OP t1_j3ltsvo wrote

"Because people think differently" is not the case I make. Read the post.

2

RenlyTheLast t1_j3lu2s5 wrote

That’s what the post says bud. Stop trying to big head everything, lol. This isn’t “Fight Club”

2

Gmroo OP t1_j3lu7ea wrote

Read the post, not just the title. A mind does not equal a human or "people".

1

RenlyTheLast t1_j3lulbh wrote

I read the post, how about YOU read some of the comments you’re getting?

Stop 👏🏻trying 👏🏻to👏🏻make👏🏻yourself👏🏻sound👏🏻smarter👏🏻by👏🏻using👏🏻unnecessary👏🏻words👏🏻

“Computers maybe might think differently, so that probably might change society.” Wow, one sentence, one comma- REVOLUTIONARY!

0

MidnightAnchor t1_j3lx0y8 wrote

It's an important message, being considerate of the authors writings. Please be kind and reeeeewind

2

RenlyTheLast t1_j3m026h wrote

It’s a 7th grade “I just watched Vanilla Sky and now I understand” epiphany lol.

1

MidnightAnchor t1_j3lwt5b wrote

Imagine a God shows up and speaks with you.

They are omnipotent and genuinely pleasant.... but they burn your house down.

They burnt your house down to help you.

You just know that there is no way that burning your house has helped you, but both opinions are true.... The difference being that one of you exists outside of Time.

Your perspectives don't line up.

1

MaxChaplin t1_j3m24jv wrote

I don't see it as unintelligible at all. Must be a cultural difference between me and this sub's general users. I see people trying to relate it to social justice, which is probably the area they're more comfortable in, kinda like I often try to parse philosophical arguments in terms of systems and mathematical models.

As they say, when all you have is a hammer, a screw is an ugly nail with a helix that makes it needlessly difficult to hammer.

1

r0ndy t1_j3hbiup wrote

How soon does augmentation begin and how quickly would people accept it and grow it?

7

Gmroo OP t1_j3hcs00 wrote

Hard to say, but some argue phones and other external devices are already types of augmentations. I think large language models like ChatGPT are rapidly becoming ubiquitous and this year for many it'll become normal to have an A.I. assistant handy at all times. There is a gold rush underway.

So, in so many ways we're already accepting them. We just currently don't have the tech to connect them to the brain well. I think this can change surprisingly rapidly once we can put thousands of A.I. scientists to work.

Getting a model like GPT 3.5 there is not too difficult. Fine-tune on science papers, do reinforcement learning for math (it's not quite good at that, for similar reasons to why diffusion models like Midjourney or DALL-E 2 produce text-like gibberish), give it access to the web, and let it self-verify its output. That'd be a good start.
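The "let it self-verify" step could be as simple a control loop as this toy sketch. `ask_model` is a stub standing in for a real model call (e.g. an API request), and its canned answers are made up so the control flow can run on its own:

```python
from typing import Optional

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a large-language-model call."""
    canned = {
        "What is 12 * 12?": "144",
        "Check: is '144' a correct answer to 'What is 12 * 12?'? yes/no": "yes",
    }
    return canned.get(prompt, "no")

def self_verified_answer(question: str, max_retries: int = 3) -> Optional[str]:
    """Ask for an answer, then ask the model to verify its own output.

    Returns the answer only if the verification pass agrees; otherwise
    retries up to `max_retries` times and gives up with None.
    """
    for _ in range(max_retries):
        answer = ask_model(question)
        check = ask_model(
            f"Check: is '{answer}' a correct answer to '{question}'? yes/no"
        )
        if check.strip().lower().startswith("yes"):
            return answer
    return None

print(self_verified_answer("What is 12 * 12?"))  # prints 144
```

In a real system both calls would hit the same (or a second) model, and the verification prompt would cite retrieved web sources rather than just re-asking.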

5

r0ndy t1_j3he9hu wrote

Easy to say that, having Google or Siri in your pocket. All forms of AI. And yet, we work longer hours for less pay than in years past.

I think this could be used to replace an entire workforce at a big box retailer.

5

JeffryRelatedIssue t1_j3jg1q5 wrote

Look at what Co-op is doing in Sweden, for instance, REWE in Germany and Amazon in the UK.

You only need a couple of people to run multiple stores in a fairly large area.

2

JeffryRelatedIssue t1_j3jftki wrote

Even basic conversational AIs like GPT are very far from being ubiquitous, and neither will they be anytime soon. These are toys, stepping stones to broader implementations, and GPT in itself is 20 years away from being able to even tutor primary school students in science, let alone be a scientist. Even using a GAN for reinforced learning (which is by no means effective), it would take years of processing for a marginal capability in doing math or science. These toys haven't been developed for precise output operations.

In the specific case of GPT, it's just a semantic interpretation layer, an interface for a different AI whose role is to derive intended meaning out of a statement. The back and forth it does with people is just treating humans as an adversarial network. GPT will be the friendly face for the AI that will fire people for having predicted sub-optimal outputs in the next quarter.

Giving free rein to do result checks online is what made the first generation of racist conversational bots. The internet isn't a fact book either, and given how model scoring happens in a CNN, any AI would only validate with agreeable sources for the sake of fast evolutionary integrations.

AI assistants are already a thing. And I don't mean Amazon or Apple, whose feature sets aren't spectacular. I mean the virtual assistants already available for office workers in certain sectors that can fix my PC, remind me to do things, and reschedule meetings (according to their own method of determining priority) on their own to ensure I have enough time, given previous experience.

5

MidnightAnchor t1_j3lvyfe wrote

Imagine it started 100,000 years ago. How would you know?

1

r0ndy t1_j3p3o9i wrote

You would need to define it I'd think. That would give some direction to it

1

Responsible_Cloud137 t1_j3jgo6c wrote

Does this basically mean "messing with the Collective Unconscious?"

6

Gmroo OP t1_j3l04cz wrote

I suppose you could put it like that!

2

Dry_Turnover_6068 t1_j3i9sxf wrote

Someone needs to invent a new religion. The ones we have today are boring.

5

seeseabee t1_j3jo39t wrote

Technically I would say that we already have a new religion that popped up in the last few hundred years and is steadily growing stronger, especially in the last 50 years: Capitalism.

6

ValHova22 t1_j3jwcys wrote

Reminds me of the Chris Rock joke: "I been looking for Gold all my life and he's on the back of the dollar bill. In God We Trust."

3

afraidfoil t1_j3j7itj wrote

I wouldn’t call them boring, just more than a bit archaic.

5

lizzolz t1_j3kgomt wrote

> Someone needs to invent a new religion. The ones we have today are boring.

You've never read scripture, clearly.

2

agent_wolfe t1_j3kz813 wrote

All Bow Our Heads And Pray to the New God:

Manos!! The Hands of Fate!!!

2

Prineak t1_j3kdx83 wrote

Won’t happen until art gets a revival with AI and people finally understanding postmodernism through the postcontemporary.

1

True_Inevitable_2910 t1_j3kt7sz wrote

In English please.

2

Gmroo OP t1_j3l0koo wrote

Once we start augmenting our minds and creating AIs that can participate in society, the subjectivity of these minds will be so different that all of our systems and ways of being will collapse.

These minds won't be able to predict each other. And none of our systems are ready for any of this.

When you think it through, it's a catastrophe about to happen, because we've custom-tailored our world to ourselves, since we're the only dominating species.

It's easy to just shrug at this, because we're so used to things being the way they are.

3

shcorpio t1_j3ktq34 wrote

Fascinating. I believe we are at the very early stages of this collapse in intersubjectivity the author is predicting. Our current use of computers, the internet, and social media to expand the human mind's abilities is leading to nascent differences that can only spread further apart from here.

I just wanted to add OP, It's amazing to me how many people accused you of being verbose because they were too lazy to read your post yet had no trouble spouting off in the comments. Opinions are like assholes so the saying goes... Kind of argues your point.

2

agent_wolfe t1_j3kzjfc wrote

I know I speak the same language as Americans, but it’s like we’re speaking two different languages. It’s not just the regional dialects… there’s just this barrier that prevents clear communication. Of course it’s not all Americans…. Just most of them.

1

Gmroo OP t1_j3l1wgp wrote

The summary of the abstract, by ChatGPT:

The intersubjectivity collapse refers to the breakdown of social and cultural norms in a civilization due to the proliferation of minds of different types and subjectivities that cannot communicate or coexist.

This will lead to conflicts and power imbalances, and make it difficult or impossible to predict the actions of others.

It's likely to occur in any society that significantly modifies its own minds or develops artificial intelligence, due to the vast range of potential mind designs.

To mitigate this risk, it's necessary to anticipate it by developing strategies for managing diversity of minds and working on imagining how to cooperate in a civilization of very different types of minds.

2

misterdgwilliams t1_j3jf8wy wrote

Deglobalization has been happening for a while now, and for those of us who grew up in the late 20th century - during that short, optimistic period of hyperglobalization - it can certainly feel like an apocalyptic collapse of civilization. But for the most part, we are just disconnecting from a dream state. I'm actually interested to see what sticks to the sand when the tide goes back out; and whether we actually succeeded in changing the foundations of what it means to be human. Because if we didn't, we are still stuck at Step One in creating AGI: understanding and altering human psychology. And there are plenty more steps to go before we can claim to know anything about these AGI mindgames you speak of.

1

sixsmalldogs t1_j3jn08z wrote

A political example would be the MAGA movement in America.

1

encompassingchaos t1_j3l0fjm wrote

This might also happen with changes in brain functioning such as with neurodivergent minds creating changes in a neurotypical system, and not with just AGI and mechanical brain modification. It seems as though the use of hormone mimicking chemicals and the like are doing their own organic brain modifications.

1

Gmroo OP t1_j3l1dwc wrote

Yes, any changes that make minds diverge can lead to this issue.

2

Cornflake6irl t1_j3lag5h wrote

Objectivity cancels out subjectivity every time.

1

JackofAllTrades30009 t1_j3lriu1 wrote

I’m so tired of this “effective altruism”-adjacent “existential threat” nonsense. The structure and precise operations of the mind are so far removed from the realm of concrete understanding. To state that there will be an “explosion of new types of minds” is meaningless when we cannot for certain enumerate the ‘number of unique types of minds’ (in scare quotes because I think the concept of a typology of minds is inane) in our current world.

As such, we have no ground on which to call this an “existential threat”, seeing as it remains to be seen that ‘artificial intelligence’ (again in scare quotes due to inanity) can even be produced; we have much more closely looming existential threats currently facing our present mode of existence and I am frankly offended at the equation of this non-problem with those such threats.

Also, to break a little with the decorum of this subreddit (though let’s face it I wouldn’t call my response up to this point decorous), it is my opinion that posting one’s own substack in a sub like this is incredibly cringe.

1

MidnightAnchor t1_j3lvhg8 wrote

Last night I dreamt I was alive.

I woke..... still propping the world up.

1

Ok-Librarian4752 t1_j3nb8b5 wrote

Interesting theory and blog article. I am wondering about the epistemological applications of your theory.

This is all well and good for English speakers, but what about the innumerable languages out there, particularly mono- or bilingual speakers, and how do you think they'll play a part in shaping their speakers? Do you envision a significant difference between Portuguese, German, and Mandarin speakers (e.g.), as they have different linguistic understandings, cultural value sets, and levels of communication/comprehension?

Additionally, to what extent do you think that this ‘augmentation’ will occur across global populations. Presumably the west and more developed countries will adopt and change far sooner than lower developed countries. How will that affect what you’re supposing?

1

Gmroo OP t1_j3nvkwi wrote

It's difficult to quantify, but the core point is that despite these cultural and linguistic differences we're relatively the same. It's when really different types of minds and entities are introduced that huge deviations from the norm become... the norm.

This augmentation is underway already in a soft way, through the phones and technology we use every day. I work in AI myself and I expect things to rapidly accelerate from here on out.

Although I think there will be lots of worldwide access to information, something that is getting better every day, and poverty levels are improving too, the people worst off will likely remain worst off.

The details are very hard to predict, though. I personally am sure this intersubjectivity collapse must happen, both because possible mind design space is large, as we see in the animal kingdom, and because we're just not equipped as a society to deal with it.

I even speculate that some of these new communication barriers can't be overcome, for the same reason that I can't check inside your head and body to figure out your internal states.

2

engphilosopher t1_j4vzrp6 wrote

The intersubjectivity collapse is a breakdown of social and cultural norms in civilization due to the proliferation of minds of varying levels of complexity or sophistication.

This can lead to unpredictability among agents, as the introduction of vastly new minds can disrupt the unspoken rules that hold civilization together based on the subjectivity of the minds that have created it. This collapse can be a merger of subjectivities, a breakdown of trust, or a dissolution of a shared reality.

In traumatic status subordination, intersubjective parity – the counterfactual presupposition of being treated as an equal human being – is so violently betrayed that a collapse of the intersubjective structure of identity can result. This can manifest in impaired intersubjectivity, which displays underlying problems of deficient relational benevolence, misattributing agency, and a failure of the imagination.

1

BernardJOrtcutt t1_j5fn0dt wrote

Please keep in mind our first commenting rule:

> Read the Post Before You Reply

> Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.

This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the subreddit rules will result in a ban.


This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.

1