Submitted by namey-name-name t3_11sfhzx in MachineLearning

Hello! I read the following article about Microsoft laying off their AI Ethics team: https://www.cmswire.com/customer-experience/microsoft-cuts-ai-ethics-and-society-team-as-part-of-layoffs/

In your experience, what value do AI ethics teams add? Do they actually add useful insight, or do they serve more as a PR thing? I’ve heard conflicting anecdotes for each side. Is there anything you think AI ethics as a field can do to be more useful and to effect more change? Thanks!

45

Comments


jloverich t1_jcdnq8k wrote

They seem to be punted as soon as you have a good product you want to sell that clashes with the ethics committee. It seems like the ethicists might be a bit too ethical for businesses. Axon, which does AI work (and tasers) for police forces, had a bunch of their ethics team resign, I believe.

48

I_will_delete_myself t1_jcdsoy6 wrote

Lol, AI ethics probably seems like just paying philosophers from the perspective of a corporation. There are already plenty on YouTube and social media.

14

VelveteenAmbush t1_jce5y2v wrote

If only it were like paying philosophers. More often it is like paying anti-corporate activists to sit inside the corporation and cause trouble. There's no incentive for them to stay targeted at things that are actually unethical, nor even any agreement on what those things are. So they have a structural incentive to complain and block, because that is how they demonstrate impact and accrue power.

5

rustlingdown t1_jcdjxlc wrote

Ethics teams are only useful if they are actively incorporated with and listened to by engineering and business teams.

To put it another way: if you're making money regardless of ethics, or if you're building faster without ethics - it's not the fault of "ethics" if these ethical considerations are ignored.

"Move fast and break things" has been the motto of the Silicon Valley for decades. No reason for that to change when it comes to trampling ethical values (see: Cambridge Analytica and countless other examples).

In fact, even with these teams laid off, it's impossible to know whether or not they've been useful, given that we don't even know how they were integrated within Microsoft/Meta/ClosedAI. (They've just been fired, so probably not well.)

IMO it's the same issue as climate change and gas/energy companies. There's greenwashing just as much as there's ethicswashing. Only when corporations realized that addressing climate change was more profitable did anyone change their ways (and they're still struggling to!). Same thing with ethics and AI.

34

Hydreigon92 t1_jce0yhf wrote

> Ethics teams are only useful if they are actively incorporated with and listened to by engineering and business teams

I'm an ML fairness specialist who works on a responsible AI team, and in my experience, the best way to do this is to operate as a fully-fledged product team whose "customers" are other teams in the company.

For example, I built an internal Python library that other teams can use to perform fairness audits of recommendation systems, so they can compute and report fairness metrics alongside traditional rec-system performance metrics during the model training process. Now, when the Digital Services Act goes into effect and we are required to produce yearly algorithmic risk assessments of recommender systems, we already have a lot of this tech infrastructure in place.
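
The library itself isn't public, so purely as an illustration, here is a minimal sketch of the kind of check such a tool might run: one fairness-style number (share of recommendation slots given to each item group) reported next to a standard rec-system metric. The function names, the exposure-based metric, and the toy data are my own assumptions, not the actual internal API.

```python
import numpy as np

def exposure_by_group(top_k_items, item_groups):
    """Share of recommended slots given to each item/provider group."""
    labels = np.vectorize(item_groups.get)(top_k_items)
    groups, counts = np.unique(labels, return_counts=True)
    return {g: c / counts.sum() for g, c in zip(groups, counts)}

def precision_at_k(top_k_items, relevant_sets):
    """Standard rec-system metric, reported alongside the fairness numbers."""
    hits = [len(set(row) & rel) / len(row)
            for row, rel in zip(top_k_items, relevant_sets)]
    return float(np.mean(hits))

# Toy audit: two users, k=3 recommendations each (all data made up).
recs = np.array([[1, 2, 3], [2, 4, 5]])
item_groups = {1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
relevant_sets = [{1, 3}, {4}]

print("exposure by group:", exposure_by_group(recs, item_groups))
print("precision@3:", precision_at_k(recs, relevant_sets))
```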

22

U03B1Q t1_jcf92lp wrote

This work is exactly the kind of thing I'm interested in doing. Do you mind if I DM you for some career advice?

4

edjez t1_jchqj0v wrote

Agree 100% that it is important to have people embedded in product teams who have accountability for it.

AI ethics teams are also useful because they understand and keep track of the metrics, benchmarks, and methods used to evaluate biases, risks, and harm. This is a super specialized area of knowledge that the whole company and community can capitalize on. It is also hard to keep up to date; it needs close ties to civil society, academic institutions, etc. Think of it as setting up a "pipeline", a supply chain of practices, that starts with real-world insight and academic research and ends with actionable, implementable methods, code, and tools.

In very large orgs, having specialized teams helps scale up company wide processes for incident response, policy work, etc.

You can see some of the output of this work at Microsoft if you search for Sarah Bird’s presentations.

(cheers from another ML person who also worked w reco)

2

thedabking123 t1_jcfupuc wrote

Thank god that only applies to giant platforms... Our firm would crumble in the face of that.

1

keepthepace t1_jcijjq2 wrote

> fairness metrics

Do you produce some that are differentiable? It could be interesting to add them to a loss function.
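
Some parity-style metrics can be written as smooth functions of the model's scores, which does make them usable as a regularizer. A rough PyTorch sketch of the idea, assuming a binary classifier and a binary sensitive attribute (the penalty weight and the demographic-parity-style gap are illustrative choices, not a recommendation):

```python
import torch
import torch.nn.functional as F

def demographic_parity_gap(scores, group):
    """Absolute difference in mean predicted probability between two groups.

    scores: raw logits, shape (n,); group: 0/1 tensor, shape (n,).
    Uses sigmoid probabilities instead of hard decisions, so the gap
    stays differentiable and gradients can flow through it.
    """
    probs = torch.sigmoid(scores)
    return (probs[group == 1].mean() - probs[group == 0].mean()).abs()

def fair_loss(scores, targets, group, lam=0.1):
    """Task loss plus a weighted, differentiable fairness penalty."""
    task = F.binary_cross_entropy_with_logits(scores, targets)
    return task + lam * demographic_parity_gap(scores, group)

# Toy batch (made-up data).
scores = torch.randn(8, requires_grad=True)
targets = torch.randint(0, 2, (8,)).float()
group = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])

loss = fair_loss(scores, targets, group)
loss.backward()  # gradients flow through both the task and fairness terms
```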

1

namey-name-name OP t1_jcf524y wrote

That’s really cool, it’d be awesome if something like that was built into TensorFlow or PyTorch.

0

Hydreigon92 t1_jcfpc49 wrote

I'm involved with the Fairlearn project, so once I figure out what's necessary from a company policy-side, my plan is to incorporate these methods into Fairlearn one day.
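
For anyone curious what that looks like today, Fairlearn's existing metrics API already lets you slice a standard metric by a sensitive feature. A minimal example with made-up toy data (this is the general-purpose API, not the recommender-specific methods mentioned above):

```python
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# Toy labels, predictions, and a sensitive feature (all made up).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)

print(mf.by_group)      # each metric broken down per group
print(mf.difference())  # largest between-group gap for each metric
```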

2

Spziokles t1_jcdq0za wrote

What value do AI ethics teams add?

> Summary. Artificial intelligence poses a lot of ethical risks to businesses: It may promote bias, lead to invasions of privacy, and in the case of self-driving cars, even cause deadly accidents. Because AI is built to operate at scale, when a problem occurs, the impact is huge. Consider the AI that many health systems were using to spot high-risk patients in need of follow-up care. Researchers found that only 18% of the patients identified by the AI were Black—even though Black people accounted for 46% of the sickest patients. And the discriminatory AI was applied to at least 100 million patients.

> The sources of problems in AI are many. For starters, the data used to train it may reflect historical bias. The health systems’ AI was trained with data showing that Black people received fewer health care resources, leading the algorithm to infer that they needed less help. The data may undersample certain subpopulations. Or the wrong goal may be set for the AI. Such issues aren’t easy to address, and they can’t be remedied with a technical fix. You need a committee—comprising ethicists, lawyers, technologists, business strategists, and bias scouts—to review any AI your firm develops or buys to identify the ethical risks it presents and address how to mitigate them. This article describes how to set up such a committee effectively.

Next door was an article, "A Practical Guide to Building Ethical AI", which I did not read but you might want to.

The article "AI Ethics: What It Is And Why It Matters" also mentions bias, privacy, and "mistakes which can lead to anything from loss of revenue to death", as well as environmental impact (AIs as large resource consumers).

I feel these are valid concerns for AI. The stakes become higher as we get closer to AGI. Once we create such a powerful entity which outsmarts us in every way, it's probably too late to apply a safety patch or make sure its goals are aligned with ours. Here's a quick intro: Robert Miles - Intro to AI Safety, Remastered

So we are racing towards ever more powerful A(G)I, and being first, or having the strongest system, promises profit. Adding safety concerns may be costly and slow things down, so this part might be neglected. The danger of this scenario is that we might end up with an unleashed, uncontrollable being which might be resistant to late efforts to fix it.

Like the other guy, I hate when ChatGPT refuses to comply with some requests, and I find some of these guardrails unnecessary. But overall I'm even more worried that we let our guard down at the last mile. We'd better get this right since, as Miles said, we might only get one shot.

7

namey-name-name OP t1_jcdqxpf wrote

I agree that AGI is an important concern. However, my main concern is whether or not AI ethics teams will be effective at helping promote ethical practices. For one thing, if a company can just fire the ethics team whenever they don’t like what it's saying, then how would it actually be able to make any difference when it comes to AGI? In addition, I have also heard anecdotes from others that some in AI ethics are somewhat out of touch with actual ML engineering/research, which makes some of their suggestions inapplicable (admittedly these are just anecdotes, so I take them with a grain of salt; this may not be generally true, but I think it’s a concern worth considering). Is there any way that AI ethics teams can overcome these hurdles and help make AGI safe?

Edit: also wanted to note that I don’t work in the field, if I got anything wrong please let me know!

2

Spziokles t1_jcdw6q7 wrote

I don't work in the field either, so I just forwarded your question to Bing, lol. I thought maybe it could find key takeaways from that "Practical Guide" (see above) to answer your question:

> According to this article, creating a culture in which a data and AI ethics strategy can be successfully deployed and maintained requires educating and upskilling employees, and empowering them to raise important ethical questions. The article also suggests that the key to a successful creation of a data and AI ethics program is using the power and authority of existing infrastructure, such as a data governance board that convenes to discuss privacy [1].

> In addition, a blog post on Amelia.ai suggests that an AI ethics team must effectively communicate the value of a hybrid AI-human workforce to all stakeholders. The team must be persuasive, optimistic and, most importantly, driven by data [2].

> Finally, an article on Salesforce.com suggests that the AI ethics team not only develops its own strategy, but adds to the wider momentum behind a better, more responsible tech industry. With AI growing rapidly across industries, understanding how the practices that develop and implement the technology come together is invaluable [3].

  1. https://hbr.org/2020/10/a-practical-guide-to-building-ethical-ai
  2. https://amelia.ai/blog/build-a-team-of-ai-ethics-experts/
  3. https://www.salesforce.com/news/stories/salesforce-debuts-ai-ethics-model-how-ethical-practices-further-responsible-artificial-intelligence/

> However, my main concern is whether or not AI ethics teams will be effective at helping promote ethical practices.

That surely depends on the company. Just speculating: if that team gets fired because the bosses don't like what it recommends (possibly for good reasons), then I don't see many ways for it to be effective.

−1

dfreinc t1_jcdirwa wrote

They still have an Office of Responsible AI, and I believe that's valuable, but the counterpoint I've been told is that

>When studying software engineering, this is exactly what they taught us as best practice.

>If you want an unbiased assessment of whether your goals were met, it's good advice not to task the same team which worked towards those goals. People become emotionally attached to what they do, they like being told they did a good job, and there are more reasons besides.

>I believe this idea generally applies to quality assurance.

by /u/Spziokles

3

[deleted] t1_jcfoa94 wrote

Not much. What are they going to do when we have an AGI, tell it to behave?

3

bohreffect t1_jcgcr96 wrote

Seeing who constitutes high profile names in "AI Ethics", having watched individual debacles unfold in the industry over the last few years, it doesn't instill confidence that they're particularly valuable. It's very, very hard not to have cynical takes about the people bubbling to the top.

In general, I don't think unaccountable focus groups of people are the best moral arbiters either. In this instance I feel we have to go with our least-worst option and defer to the wisdom of crowds.

3

fromnighttilldawn t1_jcgsst8 wrote

Absolutely not. These ethicists can find "bias" all day, every day, but go practically mute when it comes to condemning how their companies are in bed with capitalism and the military-industrial complex, which are far more dire for the fate of humanity.

3

keepthepace t1_jcij20g wrote

> Do they actually add useful insight, or do they serve more as a PR thing?

The ones we hear about most are pure PR.

> Is there anything you think AI ethics as a field can do to be more useful and to get more change?

Yes. Work on AI alignment. It is a broader problem than just ethics; it is also about having models generate truthful and grounded answers. I am extremely doubtful of the current trend of using RLHF for this, and we need other approaches. But this is real ML development work, not just PR production. That would be an extremely useful way to steer ethical AI efforts.

3

Remarkable_Ad9528 t1_jce6hst wrote

I think AI Ethics teams are going to become increasingly more important to protect companies against lawsuits out the kazoo, although it's weird that Microsoft laid off their Ethics and Society team (however from what I read, they still have an “Office of Responsible AI”, which creates rules to govern the company’s AI initiatives).

Bloomberg Law published a piece last week that discussed how 79% of companies leverage AI in some way during the hiring process. The whole point of the article was that there's more regulatory pressure on the horizon for auditing this and other forms of AI, especially in Europe and NYC.

From that article, I found an agency that audits algorithms. I expect businesses of this nature to grow, just like SEO agencies did a while back.

Also last week, the US Chamber of Commerce published a "report" that called for policy makers to establish a regulatory framework for responsible and ethical AI. Some of the key takeaways were the following:


>The development of AI and the introduction of AI-based systems are growing exponentially. Over the next 10 to 20 years, virtually every business and government agency will use AI. This will have a profound impact on society, the economy, and national security.
>
>Policy leaders must undertake initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment.
>
>A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.
>
>The United States, through its technological advantages, well-developed system of individual rights, advanced legal system, and interlocking alliances with democracies, is uniquely situated to lead this effort.
>
>The United States needs to act to ensure future economic growth, provide for a competitive workforce, maintain a competitive position in a global economy, and provide for our future national security needs.
>
>Policies to promote responsible AI must be a top priority for this and future administrations and Congresses.


In summary, I think that tech companies will have some in-house AI Ethics team that works with external auditors, and tries to remain in compliance with regulations.

I'm currently a principal SWE at a tech company, but I think my job will be outsourced to AI within the next 5 years, so I've started to vigorously keep up with AI news.

I even started an email list called GPT Road (I publish AI updates on weekdays at 6:30 AM EST) to keep myself and others up to date. If you or anyone reading this post is interested, please feel free to join. I don't make any money from it, and there are no ads. It's just a hobby, but I do my best (it's streamlined and in bullet-point form, so quick to read). There are only ~320 subscribers, so it's a small community.

1

namey-name-name OP t1_jcf601z wrote

I just hope that whatever regulations Congress chooses to implement actually end up being effective at promoting ethics while not crushing the field. After seeing Congress question Zuckerberg, I can’t say I have 100% faith in them. But I’m willing to be optimistic that they’ll be able to do a good job, especially since I believe that regulating AI has largely bipartisan support.

1

dankwartrustow t1_jcj7bcp wrote

Ethics will eventually be the new Compliance. Eventually.

1

KallistiTMP t1_jd8tlq0 wrote

I've heard of added value, especially in terms of highlighting risk areas and developing strategies to minimize those risks. They have a pretty good sense of how to make the AI less likely to say racist/sexist/offensive stuff.

Unfortunately, whenever it comes to a question of money vs. ethics, companies always side with money. So in practice, the only impact they can have is on ethics improvements that don't threaten the bottom line.

1