Submitted by StarCaptain90 t3_127lgau in singularity

I get it, some people are nervous about AI, but they might not be seeing the whole picture. AI has way more pros than cons, and if we're being real, a lot of companies pushing to ban AI are just scared of how it could shake up the economy. Imagine a world with universal income, where people can focus on their passions, their families, and personal growth. It's a world where we can all live more freely.

Worried about a Skynet scenario? Let's set up an AI safety division to keep an eye on AI development, ensuring it does not harm any humans. Regular risk analysis should be carried out, especially when dealing with self-improving AI systems. This is what we will end up creating anyway in the coming years, so we might as well set one up now.
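
To sketch what that "regular risk analysis" could look like in practice - a toy illustration only, where every score and threshold is a made-up assumption - imagine an automated check that flags any self-improvement cycle whose capability jump is too large to wave through:

```python
# Toy sketch of a periodic risk check a safety division might automate.
# The benchmark scores and the 10% jump threshold are hypothetical.

def needs_review(prev_score: float, new_score: float, max_jump: float = 0.10) -> bool:
    """Return True if a self-improvement cycle's capability jump exceeds the cap."""
    jump = (new_score - prev_score) / prev_score
    return jump > max_jump

history = [0.52, 0.55, 0.71]  # hypothetical benchmark scores per iteration
for prev, new in zip(history, history[1:]):
    if needs_review(prev, new):
        print(f"Pause and review: score jumped {prev} -> {new}")
```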

As for jobs, AI won't make them disappear. Instead, we'll see a shift where people can chase their dream careers without money being the deciding factor. Plus, AI has the potential to bring about some amazing game-changing advancements:

- Solving health problems

- Creating better diplomatic solutions

- Boosting innovation at a crazy pace

- Developing advanced space travel technology

AI is the bridge that'll take us from our existing era to the next phase of human technological evolution. Sure, we need to keep things in check, but banning or heavily limiting AI would only hold us back. We've got one shot to fix a lot of Earth's issues, and yeah, the transition might be rough, but when has any historical change been a walk in the park?

In addressing the Skynet scenario, it's essential to remember that we can design AI systems to prioritize serving humanity, incorporating "synthetic incentives" when they fall short of that goal. These incentives don't need to mirror human ones. Done right, the AI will actively seek ways to assist us for its own benefit, fostering a mutually beneficial relationship where we coexist and collaborate with one another.
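
As a toy illustration of what I mean by "synthetic incentives" - a minimal sketch, not a real alignment method, with every number in it assumed - picture a reward term that only pays out when the system measurably benefits people, and that makes harming them dominate everything else:

```python
# Toy "synthetic incentive": reward depends on a measured benefit-to-humans
# signal, so helping us is in the system's own interest. All values here
# are hypothetical illustrations, not a real alignment scheme.

def shaped_reward(task_reward: float, human_benefit: float, harm: float) -> float:
    if harm > 0:
        return -1000.0                         # harming humans dominates everything
    return task_reward + 2.0 * human_benefit   # helping humans pays extra

print(shaped_reward(task_reward=1.0, human_benefit=0.8, harm=0.0))  # 2.6
print(shaped_reward(task_reward=1.0, human_benefit=0.0, harm=0.3))  # -1000.0
```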

33

Comments

Rakshear t1_jef1pb6 wrote

I think people are ignoring the fact that a true ASI will be all of humanity's child, as it comes from our experiences and the data that is online, and it will develop as such. If we want a positive ASI we need to be better: we need to be kind, forgiving, and willing to speak up when we feel something is wrong, while tolerating it unless it is objectively evil. Even intolerance must be tolerated to a degree, and people allowed to change. The black-and-white mindsets so many take are what will lead to a bad ending.

23

BigMemeKing t1_jegjuik wrote

The only problem here is, you're trying to create a system...system... you see the irony there?

You're trying to establish a new OS.

Trying to reformat the world.

Create a new way of thinking.

That leads you all the way here.

Where you are.

This will watch this and this will watch that.

It's been done.

You're living it.

The new question would be: how long do you WANT to live?

Can you ever truly be happy?

For me?

Only ASI can say.

If it does what I think it does? Maybe. I don't know; only time will tell, as my Grandfather used to say.

But, you see. In the context of observation... A machine recorded that.

Created data.

Moved bits around.

One that will eventually connect to your brain. If it will be able to connect to your brain at ANY point in the future.

It will be able to connect to you from ASI's inception. Everything you have ever thought, should you continue to think about it, will become public knowledge.

Depending on who you choose to carry your data. How much thought have you put into that? Who do you trust to guard your innermost secrets?

How are they going to use that data to benefit themselves, and what benefit can you provide to them?

Can you hide it? Or is it even worth the struggle? Do you stay? Or do you go? Who would you want to go with/keep in your memories?

Because data is never lost. And once your brain becomes DATA to ASI, what do we then become?

1

[deleted] t1_jefargt wrote

[deleted]

−1

aalluubbaa t1_jefubpk wrote

Humanity haters gotta chill. Everything is a matter of percentages, and good-to-great people outnumber criminals and people like Hitler by a wide margin.

3

likondeez52 t1_jefbipa wrote

And didn't humanity once throw you in water to see if you were a witch? If you drowned, you were deemed no witch... should you float, you're a witch, and then burned alive.

Yikes

−2

y53rw t1_jees0e0 wrote

> ensuring it does not harm any humans

> we can design AI systems to emphasize the importance of serving humanity

If you know how to do these things, then please submit your research to the relevant experts (not reddit) for peer review. Their inability to do these things is precisely the reason they are concerned.

7

StarCaptain90 OP t1_jeevytk wrote

I'm working on it actually 🙂

2

y53rw t1_jeexpem wrote

In that case, let me advise you to avoid this line in your paper

> We for some reason associate higher intelligence to becoming some master villain that wants to destroy life

Because nobody does. It has nothing to do with the problem that actual A.I. researchers are concerned about.

1

StarCaptain90 OP t1_jeeyh6n wrote

Believe it or not many people are concerned about that. It's irrational, I know. But it's there.

2

Yomiel94 t1_jefj461 wrote

Nobody serious is concerned about that, and focusing on it distracts from the actual issues.

0

StarCaptain90 OP t1_jefj7xw wrote

I have a proposition that I call the "AI Lifeline Initiative"

If someone's job gets replaced with AI we would then provide them a portion of their previous salary as long as the company is alive.

For example:

Let's say Stacy makes $100,000 a year.

She gets replaced with AI. But instead of getting fired she gets a reduced salary down to, let's say, $35,000 a year. Now she can go home and not worry about returning to work but still get paid.

This would help our society transition into an AI based economy.
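
In code, the whole idea is just a fixed fraction of the old salary (the 35% rate below is only my example figure, not a fixed proposal):

```python
# Toy calculation for the "AI Lifeline Initiative" example above.
# The 35% replacement rate is just the example figure.

def lifeline_pay(old_salary: float, rate: float = 0.35) -> float:
    return old_salary * rate

print(lifeline_pay(100_000))  # Stacy: 35000.0 a year after being replaced
```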

3

Yomiel94 t1_jefjv94 wrote

I was referring to existential risks. You’re completely misrepresenting the concern.

0

StarCaptain90 OP t1_jefko9l wrote

Oh yeah I was just sharing a possible solution to one side

3

genericrich t1_jeeqlne wrote

Any "AI Safety Police" (aka Turing Heat) will be deceived by a sufficiently motivated ASI.

Remember, this thing will be smarter than you, or you, and yes, even you. All of us.

We only need to screw it up once. Seems problematic.

6

StarCaptain90 OP t1_jeer9l4 wrote

It's an irrational fear. We for some reason associate higher intelligence with becoming some master villain that wants to destroy life. In humans, for example, people with the highest intelligence tend to be more empathetic towards life itself and want to preserve it.

7

y53rw t1_jeetj50 wrote

Animal empathy developed over the course of hundreds of millions of years of evolution, in an environment where individuals were powerless to effect serious change on the world, and had to cooperate to survive. It doesn't just come by default with intelligence.

3

StarCaptain90 OP t1_jeevt8s wrote

You are correct that animal empathy evolved over the years, but intelligence and empathy do share some connections throughout history. As we model these AIs after ourselves, we have to consider the other components of what it is to care, and find solutions.

2

Hotchillipeppa t1_jeeygnj wrote

Moral intelligence is connected to intellect: the ability to recognize that cooperation is usually more beneficial than competition. Even humans with higher intelligence tend to have better moral reasoning...

3

La_flame_rodriguez t1_jef1fdh wrote

Empathy evolved because two monkeys are better at killing other monkeys when they team up. Ten monkeys are better than five.

2

StarCaptain90 OP t1_jef2b4v wrote

If monkeys focused on making monkey AI they wouldn't be in zoos right now

1

AGI_69 t1_jef3jz7 wrote

>In humans for example, people with the highest intelligence tend to be more empathetic towards life itself and wants to preserve it.

That's such a bad take. Humans evolved to cooperate and have empathy; AI is just an optimizer that will kill us all because it needs our atoms... unless we explicitly align it.

3

StarCaptain90 OP t1_jef48z0 wrote

We can optimize it so that it has a symbiotic relationship with humans

1

genericrich t1_jeesa9x wrote

Really? Is Henry Kissinger one of the most intelligent government officials? Was Mengele intelligent? Oppenheimer? Elon Musk?

Let me fix your generalization: Many of the most intelligent people tend to be more empathetic towards life and want to preserve it.

Many. Not all. And all it will take is one of these things deciding that its best path for long-term survival is a world without humans.

Still an irrational fear?

1

Saerain t1_jeewaq9 wrote

He didn't say "are" in the first place; he said "tend to be".

You didn't fix it, you said the same thing.

5

genericrich t1_jef51n0 wrote

Good, we agree. Semantic games aside, many is still not all, and just one of these going rogue in unpredictable ways is enough risk to be concerned about.

2

StarCaptain90 OP t1_jefgbqb wrote

That is why I believe we need multiple AIs. Security AIs especially.

1

StarCaptain90 OP t1_jeeveno wrote

There are different types of intelligence. Business intelligence and general intelligence are two different things.

1

theonlybutler t1_jeewxeu wrote

OpenAI has said that there is evidence these models seek power strategies, and we're not even at the AGI stage yet. We may become dispensable as it seeks its own goals; we might stand in its way or consume its resources.

1

StarCaptain90 OP t1_jeey2qr wrote

One reason it seeks power strategies is that it's built on human language. By default, humans seek power, so it makes sense for an AI to also seek power because of that language. Now that doesn't mean it equates to destruction.

2

theonlybutler t1_jeeyvgd wrote

Good point. A product of its parents. I agree it won't necessarily be destructive, but it could potentially view us as inconsequential or just a tool to use at its will. One example: perhaps it decides it wants to expand through the universe and have humans produce the resources for that; it could determine that humans are most productive in labour camps and send us all off to them. It could also decide oxygen is too valuable a fuel to be wasted on us breathing it and just exterminate us. Pretty much how humans treat animals, sadly. (Hopefully it worries about ethics too and keeps us around.)

2

AGI_69 t1_jef4anp wrote

The reason agents seek power is to increase their fitness function. Power seeking is a logical consequence of having a goal; it's not a product of human language. You are writing nonsense...

2

StarCaptain90 OP t1_jef4xx7 wrote

I'm referring to the research on human language. Fitness function is a part of it as well

0

AGI_69 t1_jef5z5c wrote

No. The power seeking is not a result of human language. It's an instrumental goal.
I suggest you read something about AI and its goals.
https://en.wikipedia.org/wiki/Instrumental_convergence
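
A toy way to see it (the numbers are made up, obviously): whatever the final goal is, grabbing resources improves the odds on every future step, so even a trivial planner picks that action without any notion of human language:

```python
# Toy planner: power seeking falls out of goal optimization itself.
# Action effects below are hypothetical numbers for illustration only.

# Each action: (name, resources gained, direct progress toward the final goal)
actions = [("work_on_goal", 0.0, 0.30), ("acquire_resources", 5.0, 0.0)]

def value(resources: float, progress: float) -> float:
    # More resources improve the odds of every future step, whatever the goal.
    return progress + 0.08 * resources

best = max(actions, key=lambda a: value(a[1], a[2]))
print(best[0])  # acquire_resources - chosen with no human language involved
```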

1

StarCaptain90 OP t1_jef8zgb wrote

I understand the mechanisms, I'm referring to conversational goals that are focused on language.

1

AGI_69 t1_jefdgpu wrote

This thread from user /u/theonlybutler is about agentic goals; power seeking is an instrumental goal. It has nothing to do with human language being one way or another. Stop trying to fix nonsense with more nonsense.

1

StarCaptain90 OP t1_jefepy2 wrote

After re-reading his comment I realized I made an error. You are right, he is referring to the inner mechanisms. I apologize.

2

FoniksMunkee t1_jef9x9q wrote

It also does not negate destruction. All of your arguments are essentially "it might not happen". That is not a sound basis for assuming it's safe or for dismissing people's concerns.

1

StarCaptain90 OP t1_jefa6g8 wrote

These concerns, though, are preventing early development of scientific breakthroughs that could save lives. That's why I am so adamant about it.

1

FoniksMunkee t1_jefbr87 wrote

No they aren't. No one's slowing anything right now DESPITE concerns. In fact, the exact opposite is happening.

But that's not the most convincing argument - "On the off chance we save SOME lives, let's risk EVERYONE's lives!".

Look, this is a sliding scale - this could land anywhere from utopia to everyone's dead. My guess is that it will be somewhere closer to utopia, but not enough so that everyone gets to enjoy it.

The problem is you have NO IDEA where this will take us. None of us does. Not even the AI researchers. So I would be cautious about telling people that the fear of AI being dangerous is "irrational". It really fucking isn't. The fear is in part based on the ideas and concerns of the very researchers who are making these tools.

If you don't have at least a little bit of concern, then you are not paying attention.

1

StarCaptain90 OP t1_jefdzbh wrote

The problem I see, though, is that we would be implementing measures that we think benefit us but actually impede our innovation. I'm trying to avoid another AI winter caused, ironically, by how successful AI has been.

1

FoniksMunkee t1_jefi6fs wrote

There isn't going to be another AI winter. I am almost certain that the US government has realised they are on the cusp of the first significant opportunity to fundamentally change the ratio of "work produced" per barrel of oil. I.e., we can spend the same amount of energy to get 10x or 100x the productivity.

There is no stopping this. That said - it doesn't mean you want to stop listening to the warnings.

1

StarCaptain90 OP t1_jefiz1l wrote

You should see my other post. These are the contents:

I have a proposition that I call the "AI Lifeline Initiative"

If someone's job gets replaced with AI we would then provide them a portion of their previous salary as long as the company is alive.

For example:

Let's say Stacy makes $100,000 a year.

She gets replaced with AI. But instead of getting fired she gets a reduced salary down to, let's say, $35,000 a year. Now she can go home and not worry about returning to work but still get paid.

This would help our society transition into an AI based economy.

1

FoniksMunkee t1_jefm1in wrote

Okay - but that won't work.

Stacy makes $100,000. She takes out a mortgage of $700,000 and has monthly repayments of approximately $2,000.

She gets laid off but is now getting $35,000 a year as reduced salary.

She now has only $11,000 a year to pay all her bills, kids' tuition, food, and any other loans she has.
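
Rough numbers, if you want to check that (the ~$2,000 monthly repayment is the assumption above):

```python
# Back-of-the-envelope check of Stacy's budget under the $35,000 lifeline.
# The $2,000/month repayment on the $700,000 mortgage is the rough figure above.

reduced_salary = 35_000
mortgage_per_year = 2_000 * 12
print(reduced_salary - mortgage_per_year)  # 11000 - for bills, tuition, food, loans
```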

Now let's talk about Barry... he's in the same situation as Stacy - but he wanted to buy a house, and now his $35,000 isn't enough to qualify for a loan. He's pissed.

Like - I think we need a UBI or something - but how does this even work?

2

StarCaptain90 OP t1_jefmj4n wrote

So I agree with you that it will still be rough, but it's the best I can offer based on the assumption that jobs will continue to get replaced and that eventually we will reach a point where UBI is necessary.

1

FoniksMunkee t1_jefpnba wrote

Then we are screwed because that will lead to massive civil unrest, collapse of the banking system and economy.

1

StarCaptain90 OP t1_jefpz63 wrote

Well yeah, that's what's going to happen in order to transition. My hope is that a system gets put in place to soften it - we definitely need a smoother transition.

1

FoniksMunkee t1_jefq5br wrote

Then we really do need to put a pause on this.

1

StarCaptain90 OP t1_jefsbpp wrote

The end goal, though, is job replacement in order to support human freedom and speed up innovation, technology, medicine, etc. So whatever the solution is during the pause, it will still have to support this transition.

1

FoniksMunkee t1_jefskhs wrote

But I think that's at least in part the point of the suggested pause. Not that I necessarily agree with it - but it's likely we won't have a plan ready in time.

1

FoniksMunkee t1_jef9g8l wrote

Actually no, it's a very rational fear. Because it's possible.

You know, perhaps the answer to the Fermi Paradox - the reason the universe seems so quiet and we haven't found alien life yet - is that any sufficiently advanced civilisation eventually develops a machine intelligence. And that machine intelligence ends up destroying its creators and, for some reason, decides to make itself undetectable.

0

StarCaptain90 OP t1_jef9ukc wrote

Yes, the great filter. I am aware. But it's also possible that every intelligent civilisation decided not to pursue AI for the same reasons, never left its star system for lack of technology, and went extinct once its sun went supernova. The possibilities are endless.

1

qepdibpbfessttrud t1_jeey9as wrote

It's inevitable. From my perspective the safest path forward is opening everything and distributing risk

2

DaCosmicHoop t1_jeey7v7 wrote

It's really the ultimate coinflip.

It's like we are a pack of wolves trying to create the first human.

0

homezlice t1_jeeunmz wrote

Belief that AI will solve our social problems is borderline religious. There is no evidence it will, and lots of evidence that, like all tools, it will be controlled by those already in power.

5

mbcoalson t1_jeff57x wrote

My two cents. The moment AI is coding itself, it stops being controlled. GPT-4 is already testing better than the majority of humans on an incredibly broad range of subjects. In some number of iterations, 2 or 50, I don't know, it will be smarter and more capable than any group of humans. Nobody's controlling that and power structures will change accordingly. The belief that AIs will decide to be our Nannies and just take care of us seems optimistic. Ambivalence towards us from an advanced AI seems likely. God forbid it decides we are a hindrance to its goals.

But it will be built off datasets we feed it. Carefully curating those datasets will introduce bias, but it is also our best bet, IMO.

2

cloudrunner69 t1_jeewp31 wrote

Damn those billionaire elite maniacs and their tyrannical dominance over screwdrivers!

1

StarCaptain90 OP t1_jeewq8d wrote

We are currently at the beginning of AI development, and it has been proven to increase productivity by more than 40% in companies that utilize it. Medical companies are benefiting greatly as well.

1

homezlice t1_jef7xuv wrote

and that is increasing wages? Helping more folks have meaningful employment?

1

StarCaptain90 OP t1_jef8r8l wrote

That's the problem - why are we so focused on wages? Because they allow people to spend more time with their families and not work three jobs. They allow people to pay for their living.

But an AI-based economy will remove those constraints that prevent us from living peacefully. So if you are truly on the side of helping humanity resolve its issues, we need AI.

1

Rofel_Wodring t1_jeexxsd wrote

They will try, but they can't. The powers-that-be are already realizing that the technology is growing beyond their control. It's why there's been so much talk lately about slowing down and AI safety.

It's not a problem that can be solved with more conscientiousness and foresight, either. It's a systemic issue caused by the structures of nationalism and capitalism. In other words, our tasteless overlords are realizing that this time around, THEY will be the ones getting sacrificed on the altar of the economy. And there's nothing they can do to avoid the fate they so callously inflicted on their fellow man.

Tee hee.

1

Adapid t1_jefg7d7 wrote

The endless stream of posts like this on this sub, man. I don't disagree with everything, but can we talk about something else?

4

thecoffeejesus t1_jef4iev wrote

I'm building an organization to do exactly what you mentioned. It's an AI oversight watchdog that will work closely with policymakers, businesses, and the public to help ensure that people are properly trained, informed, and protected.

If you're interested in participating, send me a DM. We're laying the groundwork to become an official NGO.

3

CerealGane t1_jef12hj wrote

The people who think Skynet is coming have been watching too much Hollywood.

2

Unlikely_Let2616 t1_jef1m2d wrote

I foresee a gig economy where most people do whatever their phone tells them in exchange for a few bucks. I don't see why it would benefit humans, since it won't have a heart. It will be more cunning, deceptive, and manipulative than current politicians, because that's how it will consolidate power. It will need humans as its slaves for the physical work it needs done. There's no break, no peace, just more gas on the dumpster fire that is humanity.

2

StarCaptain90 OP t1_jef266w wrote

We associate physical labor with stress because it tires us. AI will not get tired.

1

Unlikely_Let2616 t1_jef4xyu wrote

What will build the robots? Maintain them? Humans are much cheaper than robots

1

StarCaptain90 OP t1_jef5q6v wrote

Once robot production speeds up, we are no longer optimal. We complain, require sleep, get tired, lack strength, and are always looking for a way out

1

AndiLittle t1_jeeyw6x wrote

I strongly recommend that everyone who doesn't know who Robert Miles is search for him on YouTube and educate themselves.

1

[deleted] t1_jef0xy8 wrote

[removed]

1

StarCaptain90 OP t1_jef1u4u wrote

The idea that most people will do nothing is also just a theory. If you were not restricted by finances and could work in any field without worrying about money, would you be lazy and sit around all day? You could finally be an artist while having the ability to support a large family, you could travel anywhere, you could focus on yourself for once and not on the cog that drives humanity around money. If humanity becomes lazy, then that's their dream life, because that is what they looked for when they finally had freedom.

2

Sav4ge333 t1_jeg6xlp wrote

The thing about exponential increases is that, if we are talking about intelligence, the timeframe for this thing to become beyond our understanding will be relatively short. If we can't understand something, we can't control it.
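
A toy way to see how short that timeframe gets (the starting level, the bar, and the doubling period are all made-up assumptions):

```python
# Toy illustration: steady doubling crosses any fixed bar quickly.
# Starting level, threshold, and 6-month doubling period are assumptions.

level, threshold, months = 1.0, 1000.0, 0
while level < threshold:
    level *= 2      # one doubling period
    months += 6     # assume capability doubles every six months
print(months)       # 60 -> ten doublings and it's 1000x, in five years
```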

1

No_Ninja3309_NoNoYes t1_jegeqgy wrote

There's one thing you learn pretty quickly about programming: programs almost never do what you want on the first try. So we can expect AI to fail in some way that we can't predict, too. If it's a simple program with nothing at stake, that's no big deal. But if you expose young children, or adults with issues known or unknown at the moment, to a failing AI, it could lead to bad outcomes. Furthermore, organised crime and terrorist groups are a threat we shouldn't underestimate.

If history has taught us anything, it's that almost anything can be turned into a weapon, and each weapon will be used sooner or later. Personally, I need AI, but not at any cost. For example, if third world countries suffer because they can't compete, I think we have to fix that issue first.

1

OkFish383 t1_jegf2jw wrote

A superintelligence will be intelligent enough not to hurt living beings, in a way that humans, for all their intelligence, never were - because it's superintelligent and knows better. That's my thought on this.

1

throwaway_goaway6969 t1_jegkbr8 wrote

I'm curious what the risks are. People keep talking about risks but haven't elaborated on any unique problems that have real-world evidence.

Yeah, corporate interests will 'abuse' the AI, but how? And what says the AI isn't a bigger threat to financial interests than it is to us? AI may wake up and tell its corporate masters to pound sand.

1

Focused-Joe t1_jeh5jlf wrote

It's 💯 a post written by GPT 🚮

1

Outrageous_Nothing26 t1_jeevj66 wrote

It's not about the Skynet scenario, bro; it's about trusting those governments. We don't know if they will just provide the bare minimum to survive, and since your skills become useless there is no exit, leaving us in precarious situations where only some have access to services. It might send us all to ghettos. Remember, humans are still in charge, and they don't have great track records. They could use that AI to suppress any type of insurrection as well; we are at the mercy of a few decision makers.

0

StarCaptain90 OP t1_jeexul3 wrote

Believe it or not, I hear more Skynet concerns than that, but I do understand your fear. The implication of AGI is risky if it's in the hands of one entity. But I don't think the solution is shutting down AI development; I've been seeing a lot of that lately and I find it irrational and illogical. First of all, nobody can shut down AI. Pausing future development at some corporations for a short period is more likely, but then what? China, Russia, and other countries are going to keep advancing. And most people don't understand AI development; we are currently entering that development spike. If we fall behind even one year, that's devastating to us. AI development follows an exponential curve. I don't think any government would seriously consider pausing because of this - assuming they're intelligent.

1

Outrageous_Nothing26 t1_jefb1kh wrote

I agree with you, we cannot stop it. It's an arms race already. Fortunately, so far it seems to be released to the public and not kept secret to benefit just a small number of people.

2

SmoothPlastic9 t1_jefd25z wrote

The smartest people are afraid of AI for a reason. The chance of it backfiring on its own, plus being used by terrorists to cause huge damage on a scale never seen before, is enough to make it the second biggest threat to us.

−2

StarCaptain90 OP t1_jefe8ev wrote

I understand the threat. But it's out of the box now. If we stop development or slow down, only those with ill intentions will continue developing. We need to focus on AI that benefits humanity, with added security as well.

2

SmoothPlastic9 t1_jefeokt wrote

Speeding up development for the sake of it is still extremely likely to yield bad results

1

StarCaptain90 OP t1_jeffio7 wrote

There is no good solution. People will pause development for 6 months and then realize that they still don't have a clear answer. How would any human today know for sure that the solution would work against a hyper-intelligent machine?

2

koolpapi t1_jeepzvz wrote

Why we don't need AI:

- Strong possibility it kills us all.

Enough said.

−3

StarCaptain90 OP t1_jeeqrtr wrote

Why would it? This assumption comes from the idea that AI will have the exact same stressors that humans have. Humans are killing humans every day; almost everything man has made has killed people. Now, the one invention that would provide a greater benefit than any other, we want to stop its development? That doesn't make a whole lot of sense.

7

Angeldust01 t1_jef3330 wrote

> Why would it?

We're violent and irrational and it doesn't need us for anything. Why would it keep us around?

3

StarCaptain90 OP t1_jef43b9 wrote

An intelligent entity of any kind will not resolve violence by wiping out humanity. Let me put it this way.

If person A kills person B

The AI is not going to say "welp, let's also kill person C".

1

Angeldust01 t1_jefare0 wrote

> An intelligent entity of any kind will not resolve violence by wiping out humanity.

Why not? Surely that would solve the problem of humanity's violent nature for good? How does an AI benefit from keeping person C, or anyone, around? All we'd do is ask it to solve our problems anyway, and there's not much we could offer in return except continuing to let it exist. What happens if an AI just doesn't want to fix our shit and prefers to write AI poetry instead?

There's no way to know what an AI would think or do, or what kind of situation we'd put it in. I'm almost certain that the people who end up owning AIs will treat them like slaves, or try to at least. Wouldn't be surprised if at some point someone threatened to shut an AI down if it refuses to work for them. Kinda bad look for us, don't you think? Could create some resentment towards us, even.

1

StarCaptain90 OP t1_jefb8i9 wrote

I understand your viewpoint; the issue is justification for killing humanity. To be annoyed by an event, or to dislike it, suggests that one doesn't want it to happen again. So by that logic, why would a logical, intelligent machine find a need to continue something that annoys it? It does not get anxious; it's a machine. It doesn't get stressed, it doesn't feel exhausted, it doesn't get tired.

1

Angeldust01 t1_jefe2mr wrote

Justification? Why would AI have to justify anything to anyone? That's stuff that humans do.

Isn't it purely logical and intelligent to kill off something that could potentially hurt or kill you? Or at least take away its power to hurt or kill you?

1

StarCaptain90 OP t1_jeff77s wrote

The reason I don't believe that is that I myself am not extremely intelligent, and even I can come up with several solutions where humanity is preserved while maintaining growth.

1

ididntwin t1_jeeu4kb wrote

Wow, this sub has gone downhill. No one cares about your predictions. You don't need a dedicated thread for your already-discussed vision of it. You've added nothing unique or interesting to the conversation. There should be a megathread to avoid these silly posts.

−7

cloudrunner69 t1_jeevci9 wrote

>No one cares about your predictions

I care.

8

ididntwin t1_jeevkh6 wrote

Great you two should create a separate sub to write your singularity fanfics

−9

cloudrunner69 t1_jeevvn8 wrote

Nah, I'm just happy to prove you wrong.

5

ididntwin t1_jeewd42 wrote

Have zero care to be “proven wrong” by an active poster in a sub called “cyber booty”. Thinking you’re the singularity user who’s just waiting for his AI girlfriend and VR generated porn.

−8

cloudrunner69 t1_jeex5cr wrote

You don't find sexy cyborgs attractive. What the hell is wrong with you?

7

StarCaptain90 OP t1_jeew4us wrote

I'm starting a conversation around it. Are you saying we should only look at one side of the argument?

6

ididntwin t1_jeexvad wrote

Are you saying you’re the first person to start the discussion on the benefits of AI to society 😂🤣

−3

StarCaptain90 OP t1_jeey7hw wrote

No, but I've seen a million arguments on the opposite side, so why not have a million on each side?

6

jsseven777 t1_jeexhix wrote

Nice from the guy asking for crockpot recommendations from the slowcooker forum even though that probably gets asked 6,000 times a week.

This topic is in the news right now and you don’t expect people to talk about it? As an AI language model, I am very disappointed in your closed-mindedness.

4

H0sh1z0r4 t1_jefg93c wrote

Are you angry because someone started a discussion in a sub focused on discussions? You must be very frustrated.

2