Hydreigon92 t1_jce0yhf wrote

> Ethics teams are only useful if they are actively incorporated with and listened to by engineering and business teams

I'm an ML fairness specialist who works on a responsible AI team, and in my experience, the best way to do this is to operate as a fully-fledged product team whose "customers" are other teams in the company.

For example, I built an internal Python library that other teams can use to perform fairness audits of recommendation systems, so they can compute and report fairness metrics alongside traditional rec. system performance metrics during the model training process. Now that the Digital Services Act is going into effect, and we are required to produce yearly algorithmic risk assessments of recommender systems, we already have a lot of this tech infrastructure in place.
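The internal library itself isn't public, but a minimal sketch of the kind of metric such an audit library might compute is exposure parity: what share of top-k recommendation slots goes to items from each group. All function and variable names here are hypothetical, not the actual internal API.

```python
from collections import Counter

def exposure_by_group(recommendations, item_groups, k=10):
    """Share of top-k recommendation slots given to items from each group.

    recommendations: dict mapping user_id -> ranked list of item_ids
    item_groups: dict mapping item_id -> group label (e.g. content provider category)
    """
    counts = Counter()
    for ranked_items in recommendations.values():
        for item in ranked_items[:k]:
            counts[item_groups[item]] += 1
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def disparity_ratio(exposure):
    """Min/max ratio of group exposure shares; 1.0 means perfectly equal exposure."""
    shares = list(exposure.values())
    return min(shares) / max(shares)

# Toy example: two users, top-2 recommendations audited
recs = {"u1": ["a", "b", "c"], "u2": ["b", "c", "d"]}
groups = {"a": "G1", "b": "G1", "c": "G2", "d": "G2"}
exp = exposure_by_group(recs, groups, k=2)
print(exp)                  # {'G1': 0.75, 'G2': 0.25}
print(disparity_ratio(exp)) # ~0.33 -- G2 is under-exposed relative to G1
```

A metric like this can be logged next to ranking-quality metrics (NDCG, recall@k) in the training loop, which is what makes it reusable for periodic risk assessments.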


Hydreigon92 t1_j2zi0s7 wrote

One of my areas of interest is the intersection of machine learning and social work:


Hydreigon92 t1_it8qycv wrote

Stanford has a Computational Policy Lab that focuses on using ML and data to measure the impact of policy changes. Carnegie Mellon has a joint PhD program in Machine Learning and Public Policy. There's also a research conference, ACM EAAMO, about combining algorithmic theory, economics, and public policy.

My personal research interest is in combining social work with machine learning to design better social work interventions, and there has been work in this space on using NLP to intervene in gang shootings and on optimizing homelessness services for youth in LA.