cmustewart t1_j24px5g wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Somewhat fair, as the article was fairly bland, but I've got serious concerns that the current regimes will become much more locked into place, backed by the power of scaled superhuman AI capabilities in surveillance, behavior prediction, and information control.
cmustewart t1_j24bxuf wrote
Reply to comment by glass_superman in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
I feel like either you or I missed the point of the article, and I'm not sure which. I didn't get any sense of "what if AI takes over." My reading is that the author thinks "AI" systems should have some sort of consequentialism built in, or at least considered in their goal-setting parameters.
The bit that resonates with me is that highly intelligent systems are likely to cause negative unintended consequences if we don't build this in up front, even when deployed by those with the noblest intentions.
cmustewart t1_j249qd2 wrote
Reply to comment by [deleted] in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Who doesn't understand this already? Given the incredible depth of human ignorance, I'd imagine a fair amount of the corporate tech hierarchy hasn't given it a single thought. My intuition is that the vast majority of humans have a view of AI driven by cultural depictions rather than by experience or education.
cmustewart t1_j24yd84 wrote
Reply to comment by oramirite in How the concept: Banality of evil developed by Hanna Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
By "intuition" I just mean my take on it, based on what I know and believe, as opposed to my having access to some sort of truth.
I disagree that it's not hard to do the research and understand the ethical risks. I come from a software background, which lays some of the groundwork for research and understanding. Someone from a non-tech background with a layperson's knowledge might face a significant struggle understanding all the foundational elements underlying AI and its ethical issues.
Someone whose life is mostly consumed by work and family could easily never give these issues much, or any, thought, because they seem irrelevant to their life. In my mind, this is a serious problem. AI is changing, and will continue to change, the lives of nearly everyone in ways they are unable to see or comprehend.