Comments


TangerineDream82 t1_itxbb3d wrote

I think that's called targeted marketing

339

CorruptCashew t1_itylxc3 wrote

"If you look like the person in the ad, you can imagine it being you using the product".

Same as with a strong, independent woman in a Hollywood movie. Same principle.

35

MrChoovie t1_ityoovh wrote

Are you saying men over 55 look like teenage girls?

75

Penis_Bees t1_itz81ka wrote

They're probably more likely to click that ad. Some metadata gets stored and used to direct more of those ads to them.

17

CardboardJ t1_itziqeg wrote

This is the sad downside to having an impartial algorithm that's ranked only on success rate.

6

TangerineDream82 t1_itzo8ga wrote

What should the algorithm be ranked on instead?

9

CardboardJ t1_iu0r53k wrote

It's currently ranked against what makes Facebook the most money with a secondary ranking of what advertisers want to use to target a demographic. The algorithm is fine. The marketing creeps opting to use teen girls to advertise to 55 year old men should probably change and Facebook should probably have better moral standards as well.

It's like seeing parents that sold their teen daughter to a pimp and the pimp selling the girl to an old dirty man, then asking how we can change human reproduction to prevent this. It's the society that has accepted "Sex Sells" as a moral business practice that should be changed, but anyone that attempts to do so gets labeled a puritan and dismissed.

0

chute_amine t1_iu0cuel wrote

Ranking by that is fine, but there should be equity constraints in place. Ads are one thing, but think about applications for jobs or credit cards. Amazon and Apple already had huge issues there because they chose to ignore sensitive traits instead of actively enforcing equity.

−6

RonPMexico t1_iu0yuoe wrote

Are you saying that algorithms should include factors like race and age when determining credit scores and filtering job applications?

3

myspicename t1_iu0znoz wrote

Are you saying that if algorithms lead to a pattern of ad distribution that, say, doesn't advertise properties in mostly white areas to non-white people, that's ok?

1

RonPMexico t1_iu1037h wrote

I'm saying these algorithms are designed to self-optimize. You give them a goal and they try to achieve it most efficiently. If the goal is to sell property and the most efficient way to sell the property favors ads targeted at white people, then it's fine.

3

myspicename t1_iu10dsx wrote

It's fine to have discriminatory advertising patterns for housing if it's in an algorithm? What if an algo designs an ad campaign that's providing misleading and false claims but it increases conversion rates and hit rates?

Do you think illegal acts are ok if they are done by an algo?

−2

RonPMexico t1_iu10v69 wrote

Who said anything about illegal? Is it illegal to put up for-sale signs in a white neighborhood? Is it illegal to claim the earth is 6,000 years old from a pulpit?

If something is illegal it is illegal but that's not really useful or meaningful to this discussion.

1

myspicename t1_iu1ok0e wrote

So you think it's ok to exclusively advertise properties to white people?

−1

Tall-Log-1955 t1_iu1u7p6 wrote

I think that if the reason is that non-white people did not engage with the ad (because they are not interested in it), then yes it is okay

If the reason is that some property developer wants to keep out non-whites then it is not okay.

1

myspicename t1_iu1zh5f wrote

If the algo doesn't advertise to non-white people, how would we know the problem is engagement? I'm trying to lead y'all through a line of logic that ends with the idea that outsourcing racist activities to an algo isn't not racist.

1

Tall-Log-1955 t1_iu26rpu wrote

These algorithms don't have that problem because they show ads to everyone in small amounts. Then whichever demographic/group engages at the highest rate, they show to more people like that.

You can read how they work here:

https://en.m.wikipedia.org/wiki/Multi-armed_bandit
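The explore/exploit loop described above can be sketched in a few lines. This is a toy epsilon-greedy bandit simulation with made-up group names and click rates, not how Facebook's actual system works:

```python
import random

def epsilon_greedy_ads(groups, true_ctr, rounds=5000, eps=0.1, seed=0):
    """Serve an ad by exploring a little and mostly exploiting the group
    with the best observed click-through rate so far."""
    rng = random.Random(seed)
    shown = {g: 0 for g in groups}
    clicks = {g: 0 for g in groups}
    for _ in range(rounds):
        if rng.random() < eps:
            g = rng.choice(groups)  # explore: pick a random group
        else:                       # exploit: best observed CTR (unseen groups first)
            g = max(groups, key=lambda x: clicks[x] / shown[x] if shown[x] else 1.0)
        shown[g] += 1
        if rng.random() < true_ctr[g]:  # simulate whether this impression is clicked
            clicks[g] += 1
    return shown

# Group "A" clicks ten times more often, so it ends up seeing far more impressions.
impressions = epsilon_greedy_ads(["A", "B"], {"A": 0.20, "B": 0.02})
```

The key point is that nothing here mentions race, age, or gender; whatever group signal correlates with clicks gets amplified by the loop itself.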

1

myspicename t1_iu26zou wrote

Ok so generalizing from a small sample size and then using race or race proxy demographics. Do y'all seriously not see the issue?

1

Tall-Log-1955 t1_iu2eaz5 wrote

If race is actually predictive of interest in a product, I don't think it's that bad. Is it sexist to less often show tampon ads to men? Is it ageist to less often show ads for toys to senior citizens?

If people of a given race are genuinely not interested in a product, I don't think it harms them to show them the ads less often.

1

myspicename t1_iu2egdv wrote

I think for many consumer products, like hair care or tampons, this is true. It becomes insidious when it's real estate, education, accommodations, etc if unchecked.

1

RonPMexico t1_iu1xhnz wrote

The only way the algorithm would exclusively advertise to whites would be if that was an explicit direction given to the system. If you program the model to advertise at the highest price point, and the ads were sent to high-income earners in the school district who have searched for realtors (and any number of other relevant variables), and the results were mostly whites, I'd have absolutely no problem with it.

1

myspicename t1_iu1z7ms wrote

So you are ok with a system being racist so long as it doesn't explicitly call it out. There was a reason that advertising directed at only one race for a property was made illegal.

1

RonPMexico t1_iu1zro2 wrote

The thing is the model isn't racist. I am explicitly saying including race in these systems should be prohibited.

1

myspicename t1_iu20n0k wrote

This is like when politicians carve up districts based on other factors to proxy race. The model is definitionally racist if it continues to fuel racial segregation.

1

RonPMexico t1_iu21nq4 wrote

Politicians are optimizing for political affiliation and use race as a proxy for that. I am saying that is the exact opposite of what ought to be allowed.

1

myspicename t1_iu24bff wrote

Racism is ok if you find a proxy. Got it.

1

RonPMexico t1_iu25qoq wrote

Have you considered the opposite case? Using the real estate example: you have x number of variables, including salary, school district, visiting real estate websites, and so on. Each one of those variables is given a weight by the system. We don't know what those weights are; the system operates in a "black box" to determine the appropriate values. You look at the results and decide Native Americans are underrepresented. Now you have to add Native American as a variable, and in order to get the results you want, you have to decide how much that should impact the final results. So who decides to favor Native Americans, and by how much? Would that not be illegal under the Fair Housing Act?

1

myspicename t1_iu27918 wrote

If companies ever backchecked their algos for mistakes or systematic bias, I might not be against it.

1

RonPMexico t1_iu27fim wrote

I don't know what that sentence means

1

myspicename t1_iu27s7l wrote

Is the concept of machine learning making a racist assumption and enforcing racism alien to you? It's pretty widely discussed.

1

RonPMexico t1_iu28ega wrote

I know. That's what we are discussing. You take the view that if an algorithm returns results that are not directly proportional to racial demographics, the system is racist. I'm saying that is ridiculous.

What doesn't convey meaning is:

If companies ever backchecked their algos for mistakes or systematic bias, I might not be against it.

0

myspicename t1_iu28ibd wrote

Did I say directly proportional? Stop strawmanning my argument.

1

RonPMexico t1_iu28p2s wrote

How far from proportional would be okay and after that it's racist?

0

myspicename t1_iu296ce wrote

Clearly there's no strict line. Just like a white passing black person crossing the color line in Jim Crow, racist systems aren't absolute.

I'd say if there's a vastly disproportionate discrepancy it's worth checking. And I'd say if it's around things like housing, or education (rather than say, hair care items) it's more salient.

1

RonPMexico t1_iu29jmi wrote

How about this? We remove race from the equation entirely. Surely that would lead to the best outcome no?

0

myspicename t1_iu2a7pt wrote

Absolutely not, and I think it's fairly obvious it wouldn't. This was tried for education and housing, and because of historical inequity and the cultural in-group bias of systems built for a majority, it doesn't work.

Even workplaces or academic institutions that just have policies that appeal to white majorities can enforce that. It's trivial, but even not having, say, vegetarian or halal items can be a blocker, and it's "race blind" to be fine not having them.

1

RonPMexico t1_iu2axl8 wrote

So you are saying they can't be race neutral and you can't define when it's racist. Who gets to decide where to draw these arbitrary lines? How would they work with optimized systems? What is fair enough?

−1

myspicename t1_iu2b900 wrote

This is why we have laws around this. Let me guess, you think markets correct all inequities?

1

RonPMexico t1_iu2cfec wrote

I'm saying when you artificially favor one race over another in an otherwise race neutral algorithm to give your desired results it's a bad thing. You believe race should factor into everything. And you have the temerity to claim the moral high ground. Racism is bad and you ought to be ashamed of your views.

0

chute_amine t1_iu11wxz wrote

It’s complicated, but yes. We don’t use the sensitive traits in training as a normal feature - we use them to correct bias in the model along that dimension. It can be done in training or after training, but it is a necessary check in any human-influencing AI model.
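For example, one common post-training check of this kind compares selection rates across groups. This is a sketch of a demographic-parity audit using the "80% rule" with invented data; it is just one possible criterion, not necessarily what any given company uses:

```python
def selection_rates(predictions, groups):
    """Positive-prediction rate per demographic group (demographic parity check)."""
    totals, positives = {}, {}
    for pred, g in zip(predictions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + (1 if pred else 0)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Min/max ratio of selection rates; the '80% rule' flags values below 0.8."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs: group "a" gets approved 3x as often as group "b".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = selection_rates(preds, groups)   # {"a": 0.75, "b": 0.25}
ratio = disparate_impact_ratio(rates)    # 0.25 / 0.75, well under 0.8 -> flagged
```

Note the sensitive trait is only used to audit the outputs, not as a training feature; correction (e.g. per-group thresholds) would be a separate step.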

0

RonPMexico t1_iu127ye wrote

It sounds like you are reducing the efficiency of the model in the name of equality.

2

chute_amine t1_iu13vjw wrote

Exactly, but what is more important? Revenue or fairness? It’s about finding the right balance. Each project/model has its own level of compromise.

1

RonPMexico t1_iu144rm wrote

I would say in the long term, efficiency will benefit everyone more than handicapping systems to provide desired outcomes.

2

chute_amine t1_iu16o6j wrote

Fair enough. But academia, the big names in tech, the USA, the EU, and I disagree.

2

RonPMexico t1_iu17ibt wrote

Do you mean big tech as in Facebook or Google ad services? In academia, are there engineers and data scientists who prefer nice data over accurate data?

1

AnotherTakenUser t1_itzfyld wrote

Men over 55 generally don't care what you're peddling so the algorithm has to get creative

−3

thta7btce t1_iu4b3h7 wrote

A bit dark, but that's funny as hell. Why would grown-ass men click on underage girls, Christ.

1

NoUntakenNames t1_ity8zof wrote

>Ads with teenage girls are mostly directed at men over 55

bad omen

134

cannondave t1_ityw8aj wrote

How do we sell things to men? Let's try using beautiful faces of women. Wow, that worked beyond expectations. It's not a bad omen, it's maximizing profits by hijacking basic human instincts like the urge for beauty, youth, and reproduction.

−18

WeirdAndGilly t1_itz03sk wrote

Teenage girls aren't women, they're children.

25

Dark-SideOfTheHorse t1_itz1qyz wrote

18 and 19 year olds are adults

11

WeirdAndGilly t1_itz505v wrote

And yet the article mentions teenage girls but not young women. And it certainly doesn't say anything about only 18 and up.

16

cannondave t1_iu1l6vr wrote

Not if you're a true christian person, then they are too young. Unless you're a pastor, then they are too old.

1

cannondave t1_iu1lcwq wrote

Yes, just like the Teen section on every porn site is child porn. That's why it's the 2nd or 3rd most searched category, depending on the demographic of the user.

1

Elanapoeia t1_itywrew wrote

Found the libertarian

16

sfzombie13 t1_itz9srv wrote

you act like op was wrong. it ain't wrong. if it didn't work, you wouldn't see it. they don't waste much money on advertising, they spend it effectively.

−6

wtgreen t1_itzet4f wrote

Think of all those old, white men being tricked into clicking ads with teenage girls. No place is safe for them!

6

sfzombie13 t1_itzfc5l wrote

well, i consider myself an old white man, even though i'm only 52, and all i can think of when i see college cheerleaders and any girls that age is how they look like babies to me.

1

Elanapoeia t1_itzhgdx wrote

More that their wording is weird and claims teenage girls are the epitome of sexual beauty, which is, y'know, very libertarian.

3

VitaminPb t1_itxen4n wrote

I mean this is the whole point. You show ads that are more likely to get clicked to the people more likely to respond to them. You figure that out from statistical analysis of past behaviors. It isn’t like there is a deep dark conspiracy driving this.

102

Elestriel t1_ityiauw wrote

Yeah but in today's Social Justice world, it's racist to identify with someone who looks more like you.

Or something. I dunno. I can't keep up any more. I just moved to a country where I'm the vast racial minority. /shrug

−27

[deleted] t1_itz5jgr wrote

Prepare yourself to be racially butt fucked by the majority.

−1

ninthpower t1_ityddgp wrote

Nearly all ad placement is driven by machine learning algorithms, so although I'm not saying Facebook wouldn't do this anyways, I would bet these statements are true because 55-yr-old men actually do click on ads with teenage girls and white people actually don't click on ads with black people in them. And the machine learning process simply picked up on this trend to maximize click-through rates :(

93

ChrisFromIT t1_ityham5 wrote

>Nearly all ad placement is driven by machine learning algorithms

Yes and no. Which ad is displayed to a user is determined through A/B testing. As an advertiser, you can select which group you would like to target, and you can have multiple different ads targeting different groups.

With the A/B testing, you will typically find which ad has the best click-through rate.

The machine learning comes in when serving the ad, by matching people to the target group selected by the advertiser. It will try to pick people from the target group who best fit the profile and are most likely to click through. It doesn't generate a new ad campaign for the advertiser to increase click-through rates.
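That "find which ad has the best click-through" step amounts to something like the following. This is a toy sketch with invented variant names and numbers, not any platform's actual API:

```python
def best_variant(results):
    """Pick the ad variant with the highest observed click-through rate.

    results: dict mapping variant name -> (impressions, clicks)
    """
    ctr = {name: clicks / imps for name, (imps, clicks) in results.items()}
    return max(ctr, key=ctr.get), ctr

# Hypothetical A/B test results after equal traffic to both variants.
winner, ctr = best_variant({
    "variant_a": (10_000, 450),  # 4.5% CTR
    "variant_b": (10_000, 210),  # 2.1% CTR
})
```

In practice a real system would also check that the difference is statistically significant before declaring a winner, rather than comparing raw rates.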

7

Frosti11icus t1_ityq495 wrote

This used to be true. Facebook has leaned heavily into marketing automation to allow ai to do constant multivariate testing and optimization without the advertiser necessarily even knowing what creatives and copy are being served.

21

ChrisFromIT t1_iu0xp49 wrote

Nope, Facebook still does A/B testing. It doesn't have marketing automation to allow AI to do A/B testing and optimization via changing the ad copy without the advertiser's input.

Now, it does use AI during the A/B testing to help figure out which ad copy will likely perform best and to push that copy over the others. But it does not create new ad copies.

−1

Frosti11icus t1_iu0ycl7 wrote

Ya, but the advertiser can literally give the AI thousands of inputs of copy and creative to choose from, and the AI will optimize ads for the best performing. And even further, you can use third-party APIs to generate ad copy for Facebook ads, so an advertiser could absolutely create a campaign and have very little idea what ad content is being served.

2

ChrisFromIT t1_iu1179t wrote

>Ya but the advertiser can literally give the AI thousands of inputs of copy and creative to choose from and the AI will optimize ads for the best performing.

Yes and no, it won't be able to optimize it completely. The larger the audience, the worse the click-through rate per impression; the more targeted the audience, the better the AI will perform.

>And even further, you can use 3rd party API's to generate ad copy for facebook ads, so an advertiser could absolutely create a campaign and have really very little idea what ad content is being served.

While true, this doesn't support what you were claiming before, which was that Facebook's AI generates the ad copy.

0

Frosti11icus t1_iu13z54 wrote

I never implied that the AI generates the ad copy. I said that the advertiser can run ads without necessarily knowing what copy is being served.

2

ChrisFromIT t1_iu15qbz wrote

>This used to be true. Facebook has leaned heavily into marketing automation to allow ai to do constant multivariate testing and optimization without the advertiser necessarily even knowing what creatives and copy are being served.

That is what you said.

This was in response to my comment that Facebook's AI does not generate any ad copy.

By saying "this used to be true" in response to my comment, and given the rest of your comment, you very much are saying that Facebook's AI is generating ad copies and displaying them.

And the advertiser knows that Facebook will serve one of the ad copies it has been given for the ad campaign. The only way Facebook's AI would serve a copy the advertiser doesn't know about is if it generated its own ad copy.

1

Frosti11icus t1_iu1llun wrote

>without the advertiser necessarily even knowing what creatives and copy are being served.

Bro, c'mon, reading comprehension. I didn't imply anything. This is verbatim what I said.

If I'm an advertiser and I give Facebook 1,000 pieces of copy to choose from, and I don't explicitly check every single one of the thousands or millions of ads served, then I would have no idea which of the ad copy Facebook is serving. Further, if I used a third-party AI to generate my ad copy, which isn't that unusual, then I might not have any idea at all what ad copy is being served.

0

ChrisFromIT t1_iu1plgv wrote

>I don't explicitly check every single of the thousands or millions of ads served, then I would have no idea which of the ad copy facebook is serving.

Two things here. First, if you are an advertiser on Facebook, you know explicitly that Facebook can only serve the ad copies you give it for the ad campaign. You can give it a million different ad copies, and you know 100% that any ad served for your campaign is one of those million.

Second, Facebook tells you how many impressions are given for each ad copy and the click-through rate, so as an advertiser you can see which ads are being served.

>Bro, c'mon, reading comprehension.

I explained why I read it the way I did. It is your fault for not making it clear.

1

Frosti11icus t1_iu1r360 wrote

>Two things here. First, if you are ad advertiser on Facebook, you know explicitly that Facebook can only serve the ad copies you give it for the ad campaign. You can give it a million different ad copies and you know 100% that any ads served for your ad campaign is one of those million different ad copies.
>
>Second, Facebook tells you how many impressions are given for each ad copy and the click thru rate, so as an advertisers you can see what ads are being served.

Ok, first thing: yes, advertisers do give Facebook the inputs on ad copy, but if you are doing a large amount of programmatic advertising and cranking out ads left and right, you're not spending a lot of time dissecting the nuances of your ad copy.

Second, yes, Facebook gives you data to track your ads, but if you are running hundreds or thousands of ads, you aren't, as a single person, going to be analyzing the performance of any specific ad (yet another thing that can easily be handed over to AI, btw), let alone each iteration of a given ad. It wouldn't even be possible for a human to do that if you have enough variations on your ad copy and creatives.

0

ChrisFromIT t1_iu1tyir wrote

You are essentially trying to argue that it is impossible for an advertiser to know what ads are in their ad campaign. I'm just arguing that there are tools there for them to know.

And frankly if they don't know what ads are in their ad campaign, that is not an advertiser you would want to use.

1

WeirdAndGilly t1_itz0aes wrote

That doesn't seem to be what the article is saying. The advertisers aren't making the choice, the Facebook algorithm is.

5

ChrisFromIT t1_iu0yc0u wrote

That is what the article says. Essentially, the study authors used A/B testing, and the AI algorithm Facebook uses determined which users to serve the ads to. But again, that's based on the choices the advertiser sets out.

0

sfzombie13 t1_itza0uz wrote

it is, after years of tweaking to find out what works. surely you've heard about all of those tests facebook ran on users, right? you think they haven't been testing and revising this for years? it's also a well known fact that folks prefer to be in the company of those more like themselves, so the targeted ads make a lot of sense, and money.

−2

WeirdAndGilly t1_itzroj3 wrote

Uh huh.

Have you read the article yet? Because it looks like you're reacting entirely to the headline and yet seem to think you can add things to the discussion that weren't covered in the article.

3

sfzombie13 t1_iu0c7jx wrote

the only thing i added was what i heard from a person who worked on the thing they're talking about, and i not only think i can add them, i think it is irresponsible not to add them. had the reporting been more responsible i would not be adding them, just commenting on them, but that is out of my control. and he didn't tell me anything that wasn't common knowledge, or at least available on the internet, so it's not really something i'm making up and can't back up. it's just not a debate class or peer reviewed paper so forgive me for not adding references.

1

funatics t1_iu461dt wrote

These machine learning algorithms can sometimes be weird for me: if I'm searching for a product or talking about it, then it would appear on my newsfeed. Pretty creepy if you ask me.

1

chrisdh79 OP t1_itx3u90 wrote

From the article: Facebook’s promise to advertisers is that its system is smart, effective, and easy to use. You upload your ads, fill out a few details, and Facebook’s algorithm does its magic, wading through millions of people to find the perfect audience.

The inner workings of that algorithm are opaque, even to people who work at Meta, Facebook’s parent company. But outside research sometimes offers a glimpse. A new study published Tuesday in the Association for Computing Machinery’s Digital Library journal finds that Facebook uses image recognition software to classify the race, gender, and age of the people pictured in advertisements, and that determination plays a huge role in who sees the ads. Researchers found that more ads with young women get shown to men over 55; that women see more ads with children; and that Black people see more ads with Black people in them.

26

Malamutewhisperer t1_itz83me wrote

"Researchers found that more ads with young women get shown to men over 55; that women see more ads with children; and that Black people see more ads with Black people in them." This is from the article, which got turned into this for the headline:

"Ads with teenage girls are mostly directed at men over 55, while white people see fewer ads with Black faces, a new study finds." They wanted the emotional reaction to drive clicks from the headline, and it worked.

12

Stibley_Kleeblunch t1_ity1cna wrote

I'm seeing a bunch of ill-conceived takeaways based on the assumption that Facebook is an authoritative source on demography in here, such as "old men are gross."

This information says nothing about your middle-aged neighbor. It does, however, say a lot about what Facebook THINKS about older men. Nobody turns 55, then suddenly thinks, "hey, I like young women now!" Do some older men like younger women? Sure. Do most of them? Perhaps, I have no idea.

While we'd all like to think that their advertising is based on good data and that their conclusions are sound (despite the company having fallen completely out of public trust these last few years), there's always a risk that the algorithm actually creates a stereotype, rather than just operating on what is given. The more a certain type of ad is shown to a demographic, the more likely a positive feedback loop is to emerge. If all you see is young white women, you don't even get the chance to contradict the algorithm's assumptions about you. It becomes a more binary decision at that point: "responds to ads" or "doesn't respond to ads." And if you're being targeted as a 55-year-old male, then what else is being left behind? The man could be gay, or have a fetish for old women, or just really like trees.

Point is, it's not reasonable to make assumptions on whole demographics of people based on what Facebook tells you they like. Besides, I thought we already learned that Facebook has a penchant for misinformation -- what happened to that?

not an old man yet, but not particularly fond of seeing entire age, race, or gender groups being maligned in here

15

sfzombie13 t1_itz9nl0 wrote

the only problem with your theory is that nobody THOUGHT about anything, they tried all sorts of things over the years of tweaking the algorithm, and this is what it ended up like based on the results of that tweaking. they didn't start out with many assumptions, based on what i've heard from a former employee. they're pretty efficient at advertising and they got that way with lots of practice, stuff you don't know about and most likely wouldn't believe.

6

Stibley_Kleeblunch t1_itzqwp7 wrote

That's where things get really interesting and all the fun questions start to pop up. Even if human influence of the system was minimal and occurred back in the system's infancy, just how impactful is that influence today? What were those inputs? If we don't understand what's going on under the hood, can we really trust that the system is still doing a good job, or is it possible that its current success is perceived based on reputation gained from past successes? And at what point does such a system transition from identifying patterns to pattern creation?

Is it possible for a neural network to lose its mind somewhere along the way? Google Flu worked fantastically, right up until it didn't, and nobody understood what went wrong.

Then the moral questions -- should our values impact how these things work? And, if so, to what degree? This article essentially implies that the system has re-discovered phrenology, which we decided long ago was a flawed theory that was unpalatable in no small part due to its roots in racism. If AI comes up with the same theory, does that make it an acceptable theory? We're still very early in our exploration of our relationship with such systems, and there's potential danger in how we interact with them, with respect to both how we teach them and how we learn from them.

Really, though, my issue right now is with how some people are interpreting this information. "Oh great, they're advertising to pedos and racists" is certainly not the right takeaway here, yet that exact sentiment seems to be what some people are taking away, based on some of the comments that popped up in here last night. I don't believe that "the system has been training for a long time, so we should trust it" is an especially useful conclusion either.

−1

sfzombie13 t1_itzrv9g wrote

you're reading way too much into this. what it says is that it targets ads. if the ads didn't work, they would use others. that's what the metrics are for. yes, humans introduce bias, and it could affect the actions to some extent and introduce the bias and become self fulfilling, but only to a limited degree. enjoy the day.

edit: one of the assumptions they started with was that sex sells. to men. the tweaking part is now that young women sell things to old white men. other groups may have different influences.

2

tkenben t1_itz5s6e wrote

Good point. The danger of a feedback loop, basically. Sort of like echo chambers, but with advertising.

5

Stibley_Kleeblunch t1_itzsh2v wrote

Fortunately for us, the most capable systems are focused on relatively harmless things like advertising... For now. But there are major advances every day in fields like medtech and finance. We really should be very careful with how we interact with these systems, and with how much trust we're willing to place in black-box systems.

0

SerialStateLineXer t1_iu04v59 wrote

>The more a certain type of ad is shown to a demographic, the more likely a positive feedback loop is to emerge. If all you see is young white women, you don't even get the chance to contradict the algorithm's assumptions about you.

I'm sure that you and I know more about this issue than people who work on these algorithms full time, but "show people a variety of ads" and "account for the base rate when calculating click-through rates" seem like super basic and obvious things to account for.

If you have a Facebook account, open it up now. Does it look like Facebook is only serving you one specific kind of ad?

4

Stibley_Kleeblunch t1_iu0boo6 wrote

There's plenty I don't know about the topic, and I'll happily admit it. I've never had a Facebook account, and never plan to. But at some point, doesn't "showing people a variety of ads" end up being at odds with "selling highly-targeted ad space?" I wonder if the business model accounts for that by, for instance, not charging the client to display their ads to a group that is unlikely to respond well to them.

0

chickanpoo t1_ityhy6d wrote

I’m not a Facebook fan, but it pays the bills. There’s about a zero percent chance that this is intentional or even specifically spelled out within Facebook’s algo team in this context. Part of the algorithm recognizes the correlation because it’s backed by actual billions of clicks driving how people interact with ads. This makes it sound like someone is punching that in manually, and that’s not happening.

People in Ads is something I’ve personally been testing and have driven results of for most of my career. The aesthetics of a person drive ad responses - that’s been marketing forever.

8

bottomfeeder_ t1_ityykv4 wrote

There's a 100% chance that the team deciding to use facial recognition for mapping ads to customers knew that it would end up targeting based on age, race, and gender. Even without explicitly programming it - this is the natural consequence of training a model with target labels (clicks) that we know are dependent on age, race, and gender.

2

dryj t1_itzdn2d wrote

If you accidentally make a racist robot you're still obligated to fix it.

−5

bloxed t1_iu0bvg5 wrote

It's not racist though.

3

dryj t1_iu0wyu0 wrote

Isn't it? Should we be targeting ads by race?

1

NephelimWings t1_ityme1g wrote

Clickbaity headlines. It can be expected that all ethnic groups will tend to have biased commercials. It will probably get more common and more distinct in the future.

6

DGCbest t1_iu3lbd2 wrote

that is absolutely true, no wonder people will have their own biases depending on their ethnic groups

1

Erasmus_Tycho t1_ityepbj wrote

I'm glad I stopped using Instagram and Facebook, as well as Twitter.

3

tkenben t1_itz6ezx wrote

There's hope for the future. My niece and nephew, ages 14 and 12 respectively, refuse to participate in social media, with no prompting from their parents, who are relatively laid-back and lenient.

1

substandardpoodle t1_iu098fi wrote

Omg - they’re going to start using AI to create an amalgam of your friends’ faces to make you trust their advertising, aren’t they?

2


budlystuff t1_ityewmj wrote

Facebook is a war machine

1

MTL_t3k t1_itzlov4 wrote

A post about facebook immediately following a post about 'gamers'.

The sorry state of clickbait publishing.

1

Reddituser45005 t1_iu15u6b wrote

I’m less concerned with bias in targeted ads than bias in targeted news. If your average redneck engages more with stories that portray immigrants in a negative light, does the algorithm feed him more negative stories, memes, and disparaging comments about immigrants while avoiding anything that might portray immigrants in a positive light? We know it happens, but the lack of public data, research, and transparency, at a time when social media is being called out for amplifying social division, shows a need to better understand the algorithms social media uses for demographic and individual targeting.

1

i_have_thick_loads t1_iu1i0u1 wrote

So Facebook ads don't target black users with ads featuring fewer white faces, or was that not important to highlight?

1

ilovebuses t1_ityp1td wrote

Only another 8 years to go

0

BruntLIVEz t1_itytx0x wrote

Low number of comments confirm shame.

−2