Submitted by namey-name-name t3_11sfhzx in MachineLearning
Remarkable_Ad9528 t1_jce6hst wrote
I think AI Ethics teams are going to become increasingly more important to protect companies against lawsuits out the kazoo, although it's weird that Microsoft laid off their Ethics and Society team (however from what I read, they still have an “Office of Responsible AI”, which creates rules to govern the company’s AI initiatives).
Bloomberg Law published a piece last week discussing how 79% of companies leverage AI in some way during the hiring process. The whole point of the article was that there's more regulatory pressure on the horizon for auditing this and other forms of AI, especially in Europe and NYC.
From that article, I found an agency that audits algorithms. I expect businesses of this nature to grow, just like SEO agencies did a while back.
Also last week, the US Chamber of Commerce published a "report" that called for policy makers to establish a regulatory framework for responsible and ethical AI. Some of the key takeaways were the following:
>The development of AI and the introduction of AI-based systems are growing exponentially. Over the next 10 to 20 years, virtually every business and government agency will use AI. This will have a profound impact on society, the economy, and national security.
>
>Policy leaders must undertake initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment.
>
>A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.
>
>The United States, through its technological advantages, well-developed system of individual rights, advanced legal system, and interlocking alliances with democracies, is uniquely situated to lead this effort.
>
>The United States needs to act to ensure future economic growth, provide for a competitive workforce, maintain a competitive position in a global economy, and provide for our future national security needs.
>
>Policies to promote responsible AI must be a top priority for this and future administrations and Congresses.
In summary, I think tech companies will have some in-house AI Ethics team that works with external auditors and tries to remain in compliance with regulations.
I'm currently a principal SWE at a tech company, but I think my job will be outsourced to AI within the next 5 years, so I've started to vigorously keep up with AI news.
I even started an email list called GPT Road (publishes AI updates weekdays at 6:30 AM EST) to keep myself and others up to date. If you or anyone reading this post is interested, please feel free to join. I don't make any money from it, and there are no ads. It's just a hobby, but I do my best (it's streamlined and in bullet-point form, so quick to read). There are only ~320 subscribers, so it's a small community.
namey-name-name OP t1_jcf601z wrote
I just hope that whatever regulations Congress chooses to implement actually end up being effective at promoting ethics while not crushing the field. After seeing Congress question Zuckerberg, I can’t say I have 100% faith in them. But I’m willing to be optimistic that they’ll be able to do a good job, especially since I believe that regulating AI has largely bipartisan support.