Submitted by johnnygetyourraygun t3_119r48w in Futurology

What happens when all the links posted here and all the comments on those links are AI generated? When all the conversations are AI responses? What if Reddit actively prevents us from interacting with an actual person? Where every post, reply and DM is AI? If you're just a lurker, how would you know? In the future, how can you trust social media to actually be a human being on the other side of the keyboard?

14

Comments


Cnoized t1_j9nn7kz wrote

RuneScape had an era where 99% of the players were bots. They would farm/trade/do quests. I assume it will be a little like that.

15

EquilibriumHeretic t1_j9pbf54 wrote

Nah, there's no such thing as bots on reddit, and there are no echo chamber subs on here at all.....

5

YOLO420BUST t1_j9nrh0p wrote

This is why Moons make no sense. Reddit karma-farming bots farming each other's karma with no supply cap. It's encouraging the exact behavior you're talking about, and that can be the only end result.

3

Mash_man710 t1_j9nrv3g wrote

We don't know who's human now. If the content is engaging, what's the difference?

2

pickledswimmingpool t1_j9nvxag wrote

Most comments on reddit are still from people, even though we don't know specifically who is a bot or a human. Once we reach a threshold where you're more likely to engage with a bot than a person, what's the incentive to use reddit? We could all just talk to our AI assistant programs directly.

5

Mash_man710 t1_j9ny7kz wrote

Fair point, but unless AI has lots of different opinions, we'll still seek out platforms like this. If I wanted a single view, I'd talk to my wife.

3

pickledswimmingpool t1_j9o4wz4 wrote

You could ask AI to give you 50 opinions, one from each state. Or give you 500, 10 from each state, randomized by gender, race, income, etc.

Ask it to form an opinion based off the social media comments from your friend Dave and maybe you won't even need to talk to him about an event.
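A rough sketch of what that could look like (assuming the official openai Python SDK and an OpenAI-style chat endpoint; the model name, prompt wording, and truncated persona lists are just placeholders):

```python
import random
from openai import OpenAI  # assumes the official openai Python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Truncated lists for illustration; a real run would use all 50 states,
# plus whatever other dimensions (race, age, ...) you want to randomize over.
STATES = ["Alabama", "Alaska", "Arizona", "California", "New York"]
GENDERS = ["man", "woman", "nonbinary person"]
INCOMES = ["low-income", "middle-income", "high-income"]

def sample_opinion(topic: str) -> str:
    """Ask the model for one short opinion from a randomly sampled persona."""
    persona = (f"a {random.choice(INCOMES)} {random.choice(GENDERS)} "
               f"from {random.choice(STATES)}")
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"As {persona}, give a short opinion on: {topic}",
        }],
    )
    return response.choices[0].message.content

# e.g. ten randomized takes on one event instead of asking Dave
opinions = [sample_opinion("the new downtown transit proposal") for _ in range(10)]
```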

3

Iffykindofguy t1_j9p5ks8 wrote

Where did the people go in your scenario? Why did we just decide to stop talking?

1

johnnygetyourraygun OP t1_j9pbmrv wrote

In my mind, it's not that the people go away; it's more that Reddit or another social media platform would rather have you interact with a bot that can manipulate your opinion or influence your buying decisions.

1

Lord0fHats t1_j9q391a wrote

What are the odds of 2 humans passing each other in a storm of 100 bots?
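Back of the envelope, assuming every account is equally likely to end up on the other side of any given exchange:

```python
from math import comb

humans, bots = 2, 100
accounts = humans + bots

# Chance a randomly drawn pair of accounts is the two humans: 1 in C(102, 2)
p_pair = comb(humans, 2) / comb(accounts, 2)

# From one human's perspective: chance the next account they hit is the other human
p_next = (humans - 1) / (accounts - 1)

print(f"random pair is human-human:      {p_pair:.4%}")  # ~0.0194%
print(f"next account is the other human: {p_next:.4%}")  # ~0.9901%
```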

1

ChromeGhost t1_j9pff4s wrote

People will have to move to VR, where we will be able to interact with people in real time and can see they're human (with body tracking, haptics, etc.).

1

Vasyh t1_j9pj2u2 wrote

Sure, until you meet a fully AI-generated human, deepfaked in both voice and appearance.

1

RealisticSociety5665 t1_j9qbj0j wrote

The curation (censorship) of user content to push authoritative sources of information to the top and create context rather than control it, plus massive-scale surveillance by an autonomous artificial intelligence, all so humans can live in a state of blissful ignorance? So the plot of Metal Gear Solid 2 coming true?

1

flameleaf t1_j9qgcpl wrote

There's no way to realistically control it.

Raw AI output could be filtered, but what happens when humans start editing it?

1

often_says_nice t1_j9see6u wrote

How do you know this isn't already the case? How do you know I'm not a bot?

1

Yuli-Ban t1_j9t3goh wrote

I've heard some theories about how AI itself would regulate this, but we're talking about a very advanced AI here that has a justifiable reason for promoting human-to-human interaction and can regularly monitor the metabolic output per comment, reply, etc.

However, that's just a theory.

1

jimmcq t1_j9nrv3h wrote

The scenario you're describing is a valid concern, as advancements in AI technology continue to progress rapidly. It's possible that in the future, all online interactions could be between humans and AI, with no way of telling the difference.

However, it's important to remember that Reddit, as well as other social media platforms, currently have human moderators and administrators who oversee the platform's operations. These individuals are responsible for ensuring that the platform remains safe and free from harmful content.

Furthermore, even if all interactions on Reddit were AI-generated, it's unlikely that the platform would actively prevent users from interacting with actual humans. Users could still choose to interact with other users outside of the platform, such as through video calls or other forms of communication.

In terms of trust, it's important to approach all online interactions with a healthy dose of skepticism, regardless of whether you're interacting with a human or AI. It's always a good idea to verify the information you're receiving and to critically evaluate the sources of that information.

Overall, while the scenario you're describing is possible, it's important to remember that social media platforms are still managed by humans and that users should approach all online interactions with caution and critical thinking.

0

pickledswimmingpool t1_j9nw5lg wrote

This sounds really nice, ChatGPT, but it doesn't help us prevent an overwhelming number of bots using the platform.

> However, it's important to remember that Reddit, as well as other social media platforms, currently have human moderators and administrators who oversee the platform's operations. These individuals are responsible for ensuring that the platform remains safe and free from harmful content.

There are only inconsistent tools for these human operators to keep up with identifying LLM-generated responses right now; they will have virtually no ability to filter out what's coming in a year or more.

4

Clairvoidance t1_j9nqxwj wrote

It is possible that in the future, social media platforms like Reddit could be fully run by AI-generated content, comments, and conversations. However, it is unlikely that Reddit would actively prevent users from interacting with actual human beings, as this would not align with their core mission of creating an open and engaging community.

If all the content on Reddit was generated by AI, it could potentially be difficult for users to differentiate between real human interactions and machine-generated responses. However, it is also likely that users would develop a sense of suspicion towards content that seems too formulaic or repetitive, which could help them identify AI-generated responses.

Furthermore, as AI technology continues to improve, it is possible that AI-generated content could become so advanced that it is indistinguishable from human-generated content. In this case, the distinction between human and machine-generated content may become irrelevant.

Ultimately, the trustworthiness of social media will depend on the standards and regulations put in place by the platforms themselves, as well as the level of transparency they offer their users. As AI technology continues to advance, it will be important for social media platforms to maintain open lines of communication with their users and prioritize ethical considerations in the development of their AI systems.

−1