maxToTheJ t1_j9rwaum wrote
Reply to comment by Additional-Escape498 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I worry about a lot of bad AI/ML built by interns making decisions that have huge impact, like in the justice system, real estate, etc.
Appropriate_Ant_4629 t1_j9slydb wrote
> I worry about a lot of bad AI/ML built by interns making decisions that have huge impact, like in the justice system, real estate, etc.
I worry more about those same AIs made by the senior architects, principal engineers, and technology executives than by the interns. It's those older and richer people whose values are more likely to be archaic and racist.
I think the most dangerous ML models in the near term will be made by highly skilled and competent people whose goals aren't aligned with the bulk of society.
Ones that unfairly send certain people to jail, ones that reinforce unfair lending practices, ones that will target the wrong people even more aggressively than humans target the wrong people today.
maxToTheJ t1_j9u6gj5 wrote
> Ones that unfairly send certain people to jail, ones that reinforce unfair lending practices, ones that will target the wrong people even more aggressively than humans target the wrong people today.
Those examples are what I was alluding to, maybe with a little too much hyperbole, by saying “interns”. The most senior or best people are absolutely not building those models. Those models are being built by contractors who are subcontracting that work out, which means they're being built by people who are not getting paid well, i.e. not senior or experienced folks.
Those jobs aren't exciting and aren't being rewarded financially by the market. I understand that I am not personally helping the situation, but I am not going to take a huge pay cut to work on those problems, especially when that pay cut would be at my expense for the benefit of contractors who have been historically scummy.
Appropriate_Ant_4629 t1_j9ubt3u wrote
> I understand that I am not personally helping the situation, but I am not going to take a huge pay cut to work on those problems, especially when that pay cut would be at my expense
I think you have this backwards.
Investment Banking and the Defense Industry are two of the richest industries in the world.
> Those models are being built by contractors who are subcontracting that work out, which means they're being built by people who are not getting paid well, i.e. not senior or experienced folks.
The subcontractors for that autonomous F-16 fighter from the news last month are not underpaid, nor are the Palantir guys building the software used to decide who autonomous drones hit, nor are the people building the ML models that guide the real-estate investment corporations that bought a quarter of all homes this year.
It's the guys trying to do charitable work using ML (counting endangered species in national parks, etc.) who are far more likely to be the underpaid interns.
maxToTheJ t1_j9vpq25 wrote
> The subcontractors for that autonomous F-16 fighter from the news last month are not underpaid, nor are the Palantir guys building the software used to decide who autonomous drones hit, nor are the people building the ML models that guide the real-estate investment corporations that bought a quarter of all homes this year.
You are conflating the profits of the corporations with the wages of the workers. You are also conflating "investment banking" with "retail banking": the person building lending models isn't getting the same TC as someone at Two Sigma.
None of the places you list (retail banking, defense) are the highest-comp employers. They may be massively profitable, but that doesn't necessarily translate to wages.
Top-Perspective2560 t1_j9umv6r wrote
My research is in AI/ML for healthcare. One thing people forget is that everyone is concerned about AI/ML, and no one is happy to completely delegate decision making to an ML model. Even where we have models capable of making accurate predictions, there are so many barriers to trust, e.g. the black-box problem and a general lack of explainability, which relegate these models to decision support at best and being completely ignored at worst. I actually think that's a good thing to an extent: the barriers to trust are for the most part absolutely valid and rational.
However, the idea that these models are just going to be running amok is a bit unrealistic, I think. People are generally very cautious of AI/ML, especially laymen.
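To make the "decision support" framing concrete, here's a minimal, hypothetical sketch (not from the original comment, and the feature names and data are made up): instead of returning only a risk score, the model also surfaces per-feature contributions to its logit so a clinician can sanity-check the prediction before acting on it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for clinical data; feature names are hypothetical.
rng = np.random.default_rng(0)
features = ["age", "blood_pressure", "bmi", "prior_admissions"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def decision_support(x_row):
    """Return the predicted risk plus per-feature contributions to the logit."""
    z = scaler.transform(x_row.reshape(1, -1))[0]
    contributions = model.coef_[0] * z          # additive terms of the linear logit
    risk = model.predict_proba(z.reshape(1, -1))[0, 1]
    return risk, dict(zip(features, contributions))

risk, why = decision_support(X[0])
print(f"predicted risk: {risk:.2f}")
for name, contrib in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>17s}: {contrib:+.2f}")
```

A transparent linear model is used here so the attributions are exact; for black-box models you'd need post-hoc tooling (e.g. permutation importance or SHAP-style attributions), which is exactly where the trust barriers the comment describes start to bite.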