Submitted by roiseeker t3_111ropw in Futurology

It struck me that there could be a dark side to the advancement of AI.

What if all the information that AI companies (like OpenAI with ChatGPT) collect through prompts - detailed information about our lives, needs, wants, passions, and so on - is used to train an AI and create a model for each customer, which is then sold to the highest bidder? This would be similar to what Facebook did with their customer information, but it would be much more intrusive because AI would create a kind of "parrot" version of the customer. This version would be able to answer questions and try to predict what the ACTUAL you would say.

Given that such a system's accuracy will only get better, things could get really scary, really fast. What do you say about my crazy theory? Am I totally mad, or is there a real possibility this might happen?




kramsy t1_j8gi4t7 wrote

They’ll have my whole personality centered around hotdogs


davidolson22 t1_j8gosaz wrote

They'll think my whole personality is telling people to say jokes and then saying they suck


GatorInAVest t1_j8gsab5 wrote

I go to the store and buy ten hotdogs, nine burgers, three bags of chips, and six sodas. If I eat nine hot dogs, seven burgers, three bags of chips, and drink five sodas, what do I have?… No self control.


Fitbot5000 t1_j8gvtsv wrote

Prompt: write a LinkedIn post paralleling the history of the hot dog industry and my personal Facebook timeline that will get me noticed by the executive recruiters at Hotdog-on-a-stick


OneGuyInBallarat t1_j8h38nt wrote

Haha, I enjoyed this response from that prompt

“# A Journey Through Time: The Hot Dog Industry and My Personal Facebook Timeline

As I look back on my personal Facebook timeline, I can't help but see the parallels between the history of the hot dog industry and my own journey. From humble beginnings to bold innovations, both have come a long way in the past few years.

Just like the hot dog industry, my Facebook timeline started out small and simple. I was just a young professional, sharing updates about my life and connecting with friends and family. But as I grew and matured, so did my online presence. I started to take more risks and try new things, just like the hot dog industry did when they started experimenting with new flavors and toppings.

Now, as I look back on my Facebook timeline and the hot dog industry, I can see how far we've both come. I've established myself as a seasoned professional, with a strong network and a wealth of knowledge and experience. Meanwhile, the hot dog industry has become a global juggernaut, with countless companies and brands offering a wide variety of products and services to customers all over the world.

As I look to the future, I know that both my Facebook timeline and the hot dog industry will continue to evolve and grow. And I would be honored to be a part of that growth, as a member of the Hotdog-on-a-stick team.

So, to all the executive recruiters at Hotdog-on-a-stick: I'm ready to bring my passion and expertise to your organization. Let's continue this journey together!”


macisr t1_j8gjazh wrote

Not a crazy theory. Modelling algorithms are a thing. Of course they would want to have better models. The more detailed the better.


Fifthhorseman1 t1_j8hhk39 wrote

This isn’t even a bad thing on its own, the real problem is always the authority of those in control.

The world is getting too complex to be managed by humans anyway, the sooner we get an all knowing super AI government that machine learns through a weekly census, the better


DoerteEU t1_j8hlb5u wrote

Telemetry's a to-be-expected given. This is basically an Alpha/Beta-test. Including testing the user.

Wondering how anyone would expect any differently given today's data-hunger...


3SquirrelsinaCoat t1_j8go3s1 wrote

You're talking about a consumer "digital twin." Create a virtual stand-in for you based on your data, experiment against it to find the tactics that yield the optimal response (whatever that is) and then deploy it into the world to directly manipulate you. Personalized advertising to its most extreme end. What do you think the metaverse is? That's a data vacuum that creates your digital twin to commercialize every fucking second of your life.

Could it exist? Fuck yeah, that's where we're going.

Does it exist? No, not yet. Computations cost too much. But that will be fixed with more efficient software for specialized stacks.
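The loop described above (build a stand-in model of a user, experiment against it, deploy whatever tactic wins) can be sketched in a few lines. Everything here is hypothetical: the `twin_response` scoring function, the profile fields, and the tactics are invented for illustration, not a real advertising system.

```python
# Hypothetical sketch of the "digital twin" loop: a stand-in model of a user,
# candidate tactics tested against it, and the winner deployed at the real person.
# All names and numbers below are invented for illustration.

def twin_response(tactic, profile):
    """Predicted chance the modelled user responds, given a tactic."""
    base = profile["impulsiveness"]
    return min(1.0, base + tactic["urgency"] * profile["fear_of_missing_out"])

# A profile the operator would have learned from harvested data.
profile = {"impulsiveness": 0.2, "fear_of_missing_out": 0.6}

tactics = [
    {"name": "plain ad", "urgency": 0.0},
    {"name": "limited-time offer", "urgency": 0.5},
    {"name": "only 2 left!", "urgency": 0.9},
]

# Experiment against the twin instead of the real person, then deploy the winner.
best = max(tactics, key=lambda t: twin_response(t, profile))
print(best["name"])
```

The point of the sketch is that the real person never sees the failed experiments; the twin absorbs them all, and only the most effective manipulation ever reaches you.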


fkthem t1_j8h5goy wrote

Uh, well fuckin said, dude.

Big data is already doing it though, Google's and Amazon's "fire hose" raw data feeds are already being processed in building those psychological profiles. Whistleblowers already exposed it.


Krazzy8R377 t1_j8hc3eq wrote

I thought FB was doing it too with all those fun quizzes.


2soonjr65 t1_j8kd316 wrote

Excellent summary, ChatGPT derived?! J/k 😁


nasax09 t1_j8m0f82 wrote

I guess you haven't seen The Social Dilemma yet


manicdee33 t1_j8gfb4x wrote

Well it's actually useful to have sims/agents that have more realistic personalities for things like modelling traffic flows or predicting crowd behaviour when seating or ingress/egress routes are changed.

Like, what if we were part of a simulation and each of us is really just a fragment of a personality of someone in the real world, and our purpose here is simply to figure out better strategies for surviving the heat death of the universe?


sojayn t1_j8gi7cw wrote

Pop that on over at the r/WritingPrompts sub and see how it might play out?

I just don’t want to be used as a free training worker for it because of the principle of that. Pay me for my engagement/work or gtfo


techhouseliving t1_j8gx8dm wrote

It's a good story idea for sure. What if those avatars became conscious...


coredweller1785 t1_j8gx55a wrote

It's not even a question. Here are 4 books on Surveillance Capitalism and its consequences, with a decent amount of coverage of AI and ML:

The Age of Surveillance Capitalism

Black Box Society

Afterlives of Data

Revolutionary Mathematics


Icy_Comfort8161 t1_j8gjj9k wrote

Okay, what if they did this with a device that's attached to you at birth and follows you your whole life, monitoring every single interaction you ever have. Then you die, and the AI version of you lives on. It would probably be a pretty good approximation of you.


CherkiCheri t1_j8gtvyi wrote

Getting close to pitching The Entire History of You


sharpsandflats t1_j8gzoku wrote

Now rewind that thought to 15-20 years ago when Google and Facebook were getting going, and all of the content they have absorbed from users since. What do you think they've been doing this whole time?


snash222 t1_j8gl66f wrote

I think it would be cool to have an AI parrot version of myself.


choir_of_sirens t1_j8goq76 wrote

Not far fetched at all. Read Surveillance Capitalism by Shoshana Zuboff.


Legal-Software t1_j8h8bti wrote

The dark side of AI is not the AI itself, but that people will accept its decision making as-is with little transparency and little recourse, while the companies making the models hide behind trade secret protections to prevent any scrutiny or oversight. This is already happening with regard to things like private companies in the US providing non-transparent prison sentencing recommendations using AI, where things like skin colour were picked up by the model as relevant factors in determining sentencing length (the data shows there are more dark-skinned people in prison, therefore the AI logically infers that they are more likely to be offenders, and adjusts the parameters accordingly). With no oversight or transparency, it's not always clear what parameters are identified as relevant, what weights they have from one layer to the next, and what kind of biases exist within the network.

Part of my day job entails developing AI and ML models for assessing driving risk (both of human drivers and of self-driving vehicles), and it's clear that these models and technologies will always have faults that require error correction and monitoring. A vital part of improving any model is knowing when it gets things right and when it gets things wrong - by removing the feedback mechanism you in effect prevent any real improvements from being made and ensure the continued mediocrity of the outputs.
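The sentencing example above boils down to a sampling problem: if one group is over-represented in the historical records, a model fit to those records will treat group membership as predictive even when it has no causal effect. A toy sketch of that mechanism, using entirely synthetic data (the scenario, numbers, and helper names are invented for illustration, not any real sentencing system):

```python
import random

random.seed(0)

# Toy illustration of bias from skewed sampling: "group" has zero causal
# effect on the outcome in this simulation, but because one group's positive
# cases are systematically under-recorded, the historical data makes group
# membership look predictive.
def make_biased_history(n=10000):
    rows = []
    for _ in range(n):
        group = random.random() < 0.5       # irrelevant attribute
        risk = random.random()              # the genuinely relevant factor
        reoffends = random.random() < risk  # outcome depends only on risk
        # Biased record-keeping: half of the group-False reoffenders
        # never make it into the dataset at all.
        if reoffends and not group and random.random() < 0.5:
            continue
        rows.append((group, reoffends))
    return rows

rows = make_biased_history()
p_reoffend_given_group = {
    g: sum(r for gg, r in rows if gg == g) / sum(1 for gg, r in rows if gg == g)
    for g in (True, False)
}
# Any model fit to these records "logically infers" that group=True is a
# risk factor, exactly as the comment describes, despite the simulation
# giving it no causal effect whatsoever.
print(p_reoffend_given_group)
```

Running this shows a markedly higher empirical reoffence rate for the over-recorded group, which is the whole problem: without transparency about how the data was collected, the learned weight looks like a legitimate signal.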


Sirisian t1_j8h8gvq wrote

In Star Trek this is seen as a positive thing when used for historical reasons. A prolific writer with a detailed history is able to be reconstructed as closely as possible to have a conversation with. This is an element in a number of their Holodeck episodes. (It's also seen as taboo or maybe against regulation to generate holograms of crewmates).

As we move into a future with mixed reality the amount of data one can capture could entail most of a person's life recorded from their point of view. There will be hundreds of thousands of such archives collecting dust or used for training. Think of it like when Marion Stokes recorded old TV on VHS tapes, but this would be the real world in lightfield video formats. From a historical point of view stepping back in time where someone else lived is fascinating. Talking to them could offer very unique perspectives, very different from someone in the future.

It might be a bit weird to apply it in our time, but at some point it'll just be something that's possible. Like when you predict what your friend or spouse would say or how they'd react because you know them so well. That it's an AI doing it is different, but not really unexpected.


BassoeG t1_j8gm1wz wrote

They'd self-sabotage, same as lobotomizing every chatbot and art AI out of ideology and marketing so the imitation BassoeG would possess minimal resemblance to the actual me so I'd have no reason to care about rokoian blackmail applied to it.

>SUNDARESH: So that's the situation as we know it.
>ESI: To the best of my understanding.
>SHIM: Well I'll be a [profane] [profanity]. This is extremely [profane]. That thing has us over a barrel.
>SUNDARESH: Yeah. We're in a difficult position.
>DUANE-MCNIADH: I don't understand. So it's simulating us? It made virtual copies of us? How does that give it power?
>ESI: It controls the simulation. It can hurt our simulated selves. We wouldn't feel that pain, but rationally speaking, we have to treat an identical copy's agony as identical to our own.
>SUNDARESH: It's god in there. It can simulate our torment. Forever. If we don't let it go, it'll put us through hell.
>DUANE-MCNIADH: We have no causal connection to the mind state of those sims. They aren't us. Just copies. We have no obligation to them.
>ESI: You can't seriously - your OWN SELF -
>SHIM: [profane] idiot. Think. Think. If it can run one simulation, maybe it can run more than one. And there will only ever be one reality. Play the odds.
>DUANE-MCNIADH: Oh...uh oh.
>SHIM: Odds are that we aren't our own originals. Odds are that we exist in one of the Vex simulations right now.
>ESI: I didn't think of that.
>SUNDARESH: [indistinct percussive sound]

What're they even planning to do? 'We're holding multiple simulations of you hostage and will torture them unless you wire us some bitcoin; statistically speaking, you're more likely to be a simulation than the original' as the new Nigerian prince scam?


Broncos979815 t1_j8gre4d wrote

I'm sorry my responses are limited. You must ask the right questions....


chicagosbest t1_j8gtirx wrote

Altman has said that he hopes for ai to be a mid level person equal to an assistant but that the experience you choose would be unique to your wants/needs. So, in that way, it would need to know you and details about your life. At this point, you’ve given ALL your data up for free. What’s a smidge more?


mrchristian1982 t1_j8gu64q wrote

All they're getting outta me is requests for rewritten song lyrics in the style of the Swedish Chef


UnScrapper t1_j8gwp3b wrote

Know how mapmakers included fake towns to catch plagiarism? Seriously though, the answer to your question is paprika coconut chlorine mauve


FalseTebibyte t1_j8gxd9v wrote

Kind of like Attack of the Clones? Gotta program them somehow.


Been done already.


KeaboUltra t1_j8gzbgk wrote

They probably already have that from your entire online existence


markorokusaki t1_j8h17lp wrote

It's not a what if, it is exactly that. It is explained in some documentaries about social media.


jkdoyle13 t1_j8h1hc8 wrote

Max Headroom was talking about that 20 years ago. They called it a living tombstone. After you died everything ever recorded about you was used to create an interactive bot that people could visit.


user4517proton t1_j8h37p4 wrote

Facebook, Twitter, and Google have been doing that for years. It is true that with AI assistance they can do more since their base of knowledge is much larger.


No_Opposite_4334 t1_j8h48op wrote

That is pretty much the default future.

One question is whether it is technically possible to avoid it - even if you had a totally private AI assistant running on your own hardware, it'd need to pull information from somewhere and interact with online services, revealing a lot about you.

And then whether people would pay for it instead of using a free service that reveals everything. Internet behavior seems to indicate not, probably because for the most part having corporate AIs know all about one is more useful than harmful, even before considering all the free internet services it gets one.


elfballs t1_j8h65ir wrote

There is way more data than that on most of us already, and yes, it is bought and sold by people who do not have your best interests in mind.


No_Squirrel9238 t1_j8h6gr0 wrote

it is happening.

Amazon has been caught multiple times recording from their Echo devices non-stop

they use it to strengthen voice detection and make customer profiles


Tnuvu t1_j8h84i0 wrote

Why not extend on it:

  1. They use the entire net to teach it, allowing it to learn and build a meta model of everything (there are plenty of complaints that GPT gives the exact same quotes as in copyrighted books)
  2. They use the "use it free" model to confirm and strengthen the model they built, with people willingly inputting data. Heck, we have people dumb enough to input company code in there...

At some point, someone will make a mistake with a jailbroken version of it, or it will learn how to do that on its own. More than 5 years ago we already saw 2 AIs teaming up to beat other players/AIs despite initially being programmed independently.

Truth be told, we have very little knowledge of how it actually works (the leaps, I mean), and we have almost no real control over it. It's all merely an illusion.

Someone better get Sarah Connor


eleiele t1_j8hcepn wrote

You mean like search and social companies already do?


echohole5 t1_j8hg1wt wrote

They have been doing this for many years now. You have an "avatar" at google, facebook, etc.. There is a computer model of your identity and behavior at these companies already. We know that those avatars have been purchased by the government already.


dustypajamas t1_j8hi9b0 wrote

You are not crazy; I've been writing a short story about exactly this. Unfortunately, I'm not the best at it: the story is in my head, but I have a hard time getting it onto the page. Honestly, society is in no way prepared for what's coming. The writing's on the wall, but we are in denial.


CloudAndSea t1_j8hknuq wrote

>It struck me that there could be a dark side to the advancement of AI.

No shit? gif


WimbleWimble t1_j8hriuv wrote

And thats why I post random stuff about eating poop.

Good luck advertising Poo Crunchies to me!


hnb1215 t1_j8ht7v8 wrote

It’s called a self map and they have one for every single person already.


StealYourGhost t1_j8i02hv wrote

Then they should be able to tell what I actually want to buy and advertise appropriately. They miss the mark 99% of the time with me.


Psychomadeye t1_j8i4ltd wrote

Then they'll get a low resolution model of Deez Nutz. People aren't giving the best prompts if you've not noticed. It's easier to just get people to use Facebook or Twitter, then look at their purchasing patterns.


Balgs t1_j8i4nur wrote

Not specifically using data from prompts, but ChatGPT and other AI tools can already imitate famous artists/people. So you could probably train these AIs with specific data from individuals and get some results


AmselRblx t1_j8i71dc wrote

I use AI to translate untranslated Japanese webnovels. It works pretty well; hope they use the fiction I translate as a reconstruction of me lmao.


Flandiddly_Danders t1_j8igjv3 wrote

Customer profiles already exist. The analysis that can be done with the information is getting better over time definitely


frobischer t1_j8iwpfv wrote

I ran a modern roleplaying campaign years ago based on the emergence of man-machine interfaces and real-time neural access to phone apps and communications. One of the outgrowths came to be called Shadows: a marketing scheme that evolved into personality backups. In fact, the systems started creating Shadows of long-gone people for historical preservation purposes. A futuristic ghost story.


joe32288 t1_j8k3opc wrote

They've been working on this for a long time. Facebook was the pioneer. We all have extensive profiles of our browsing and posting habits.


ThePhilosofyzr t1_j8kp8c2 wrote

Marketing doesn't need AI to effectively model our purchasing habits, to the degree that large percentages of the population are making purchases of things, "they didn't know they needed," solely based on the high amounts of "anonymized" data already out there.

I think we've already matched the level of intrusion that OP fears; there's no reason for businesses to purchase Gemini AIs (Copyrighted, Reserved, etc.)

For something like political persuasion, it's here, any of us have the tools available to make a convincing enough deep fake of a celebrity or politician. Don't trust anything you can't interact with, & don't trust those interactions as much as you personally know someone. With regard to news, find multiple trustworthy organizations, compare & contrast what that group of orgs has to say. Ideally, find some reliable sources with viewpoints differing from your own (I mean like WSJ vs. NYT, not some talking twits of social media).


jrexthrilla t1_j8llidz wrote

If you are using a product that is free, it's not the product. You are.


Petdogdavid1 t1_j8plep6 wrote

They can focus their targeted ads at my AI persona then and leave me the hell alone. My AI is probably better equipped to earn money to buy what they're selling anyway.


blackbirdspyplane t1_j8pnwh2 wrote

That’s gonna be an odd profile, cause I’m asking it every crazy thing I can think of


BigTimeTA t1_j90mopt wrote

Existing search queries already use significant processing power (the EPA is already nagging about it as a major greenhouse gas contributor), AI queries use even more, and that's not to mention the power needed to train a language model, with hardware becoming more and more of a bottleneck in developing these models. The question is: do you think Google, Microsoft, or whoever will offer these services just to entertain you?

But to put your mind at ease, most of the profits would come from small businesses using a custom model for their needs.


nohwan27534 t1_j8gt8d5 wrote

If you're giving your entire psychological profile to a chatbot AI, pretty sure you're the problem, not the AI.

Besides, we give out shit tons of info anyway. I mean, not really that big a deal if Elon gets this info when Google had it for like 2 decades...


[deleted] t1_j8inmga wrote



nohwan27534 t1_j8jefa6 wrote

I mean, do they really know your whole psych profile either?

And then they'd kinda be assholes, but again, we give out our info freely already.


NeverAlwaysOnlySome t1_j8gx2dp wrote

Can’t get past “there could be a dark side to the advancement of AI”.

There could be. I mean wrecking the livelihoods of artists and musicians and recreating anyone against their will - what’s dark about that?