AndreasRaaskov OP t1_j25buis wrote
Reply to comment by bildramer in How the concept: Banality of evil developed by Hannah Arendt can be applied to AI Ethics in order to understand the unintentional behaviour of machines that are intelligent but not conscious. by AndreasRaaskov
Honestly, this was my main motivation for writing this article. As an engineer, I wanted to know what philosophers thought about AI ethics, but every time I looked, I only found people arguing that superintelligence or artificial general intelligence (AGI) will kill us all.
As someone with an engineering mindset, I am not really that interested in an AGI that may or may not exist one day, unless you know a way to build one. What really interests me is building an understanding of how the artificial narrow intelligence (ANI) that does exist is currently hurting people.
To be even more specific, I wrote about how the Instagram recommendation system may purposefully make teen girls depressed, and I wanted to expand on that theory.
https://medium.com/@andreasrmadsen/instagram-influence-and-depression-bc155287a7b7
I do understand that talking about how some people may be hurt by ANI today is disappointing if you expected another WE ARE ALL GOING TO DIE from AGI article. Yet I find the first problem far more pressing, and I really wish more people in philosophy would focus on applying their knowledge to the philosophical problems other fields are struggling with now, instead of only looking at problems far in the future that may never exist.
robothistorian t1_j26dpqc wrote
>As an engineer, I wanted to know what philosophers thought about AI ethics, but every time I looked, I only found people arguing that superintelligence or artificial general intelligence (AGI) will kill us all.
I'm afraid that in that case you are either not looking hard enough or looking in the wrong places.
I would recommend you begin by looking into the domain of "technology/computer and ethics". There, for example, you will find a plethora of work collected under titles such as Value Sensitive Design, Machine Ethics, etc.
That being said, it may also be helpful to clarify some elements of your article, which are a bit disturbing.
First, you invoke the Shoah and then focus on Arendt's work in that regard. But, with specific reference to your own situation, the more relevant reference would have been the Nazis' Aktion T4 program. As is well known, the rationale underlying that mass murder system (and it was a "system") was grounded, specifically, in eugenics and, more abstractly, in the notion of an "idealized human". The Shoah, on the other hand, was grounded in a racial principle according to which any people considered "non-Aryan" were a valid target of racial cleansing. It is important to be conceptually clear about these two distinct operative concepts: the T4 program was one of mass murder; the Shoah was an act of genocide. One may not immediately appreciate the difference, but let me assure you, the difference matters in both legal and ethico-political terms. This is a controversial perspective within what is considered "Holocaust Studies", but it is, in my opinion, a distinction to be aware of.
Second, the notion of "evil" that you impute to AI is rather imprecise, because it is likely based on an imaginary and speculative notion of AI. Perhaps a more productive way to approach this problem would be to look through the lens of what Gernot Böhme calls "invasive technification". A great deal of work is being done on the ethical issues surrounding this progressive technification, given the problems arising from it as an emergent and evolving process. The Robodebt scandal is a classic example. As Prof. van den Hengel (quoted in an article on Robodebt) points out:
>Automation of some administrative social security functions is a very good idea, and inevitable. The problem with Robodebt was the policy, not the technology. The technology did what it was asked very effectively. The problem is that it was asked to do something daft.
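For what it's worth, the daftness is easy to see in miniature. Below is a hypothetical sketch of the widely reported income-averaging logic at the heart of Robodebt; the function names, the taper rate, and the figures are all my own illustrative assumptions, not the actual system's code:

```python
# Hypothetical sketch of Robodebt-style income averaging. Everything
# here (names, taper rate, figures) is an illustrative assumption,
# not the actual system's code.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income: float) -> float:
    # Does exactly what it was asked: assumes income was earned evenly
    # across the year, which is false for most casual workers.
    return annual_income / FORTNIGHTS_PER_YEAR

def alleged_debt(annual_income: float,
                 declared_fortnightly: list[float],
                 benefit_taper: float = 0.5) -> float:
    # Compare the averaged figure against each actual declaration and
    # raise a "debt" for every fortnight where the declaration was lower.
    avg = averaged_fortnightly_income(annual_income)
    debt = 0.0
    for declared in declared_fortnightly:
        shortfall = max(0.0, avg - declared)
        debt += shortfall * benefit_taper
    return debt

# An honest casual worker who earned all of their $13,000 in 13 working
# fortnights and nothing in the other 13: the averaging still
# manufactures a debt.
declarations = [1000.0] * 13 + [0.0] * 13
print(alleged_debt(13_000.0, declarations))  # 3250.0
```

The arithmetic runs flawlessly; the policy it encodes, that annual income was earned evenly across all 26 fortnights, is what manufactures a debt for an honest casual worker.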
The point Prof. van den Hengel makes is, generally speaking, also true of most other computerized systems, including the "AI systems" driving military and combat systems.
Thus, I'd argue that the ethico-moral concern needs to be directed at the designers of the systems and the users of the systems, and only secondarily at the technologies involved. Some, of course, disagree with this. They contend that we should be looking to design (and here they slip into a kind of speculative and futuristic mode) "artificial moral machines", that is to say, machines intrinsically capable of engaging in moral behaviour. I have serious reservations about this, but that is irrelevant in this context.
In conclusion, I would like to say that while I am empathetic to your personal situation, the article you have shared, though appreciated, is not really on the mark. This kind of discussion requires a more nuanced and carefully thought-out approach, and an awareness of the work that has been done, and is being done, in the field.
AndreasRaaskov OP t1_j28acyk wrote
Thank you for the extra sources; I will check them out and hopefully include them in future work.
In the meantime, I hope you have some understanding that the article was written by a master's student and is freely available, so do not expect the same quality and nuance as a book or a paper written by a professor with editorial support and hidden behind a paywall.
I hope one day to get better
robothistorian t1_j28b3m5 wrote
>do not expect the same quality and nuance as a book or a paper written by a professor with editorial support and hidden behind a paywall.
If you are going to put something out in public with your name on it (in other words, publish it) and want it to be taken seriously, then it is necessary to ensure that it is carefully thought through and argued persuasively. This accounts for the "nuance and quality". References are important but, in a relatively informal (non-academic) setting, not mandatory.
Further, professors (and other less senior academics) usually only get editorial support after their work has been accepted for publication, which also means it has been through a number of rounds of peer review.
>I hope one day to get better
I am sure if you put in the effort, you will.
Fmatosqg t1_j29622s wrote
Thanks for putting in the effort and starting such conversations. The internet is a tough place, and there is value in your output even before you have the experience to write a professional-level article.
Indigo_Sunset t1_j25hlk9 wrote
If the goal is to morals-gate ANI, then the process is limited to the rule-construction methodology of the instruction writers; that is where the banality of evil, and the culpability, lives in such a system. It is furthered by the apathy of iteration, in which a narrowly optimized AI obscures its instruction sets into greyscale behind a black box, enabling a loss of complete understanding while the builders deny culpability ("wasn't me") by pointing at a blackish box they built themselves.
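To make that concrete, here is a purely illustrative sketch of how an engagement objective can contain no harmful instruction anywhere and still reward harmful outcomes. Every name, weight, and signal here is invented; this is not any real platform's code:

```python
# Purely illustrative sketch: no single instruction here encodes harm,
# yet the objective never mentions user wellbeing either. All names,
# weights, and signals are invented, not any real platform's code.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    predicted_watch_time: float  # model estimate: expected seconds viewed
    predicted_reshares: float    # model estimate: expected reshares

def engagement_score(post: Post) -> float:
    # The only value the system "knows": keep the user engaged. Content
    # that distresses a user but holds their attention scores exactly
    # as well as content that helps them.
    return 0.7 * post.predicted_watch_time + 0.3 * post.predicted_reshares

def rank_feed(candidates: list[Post]) -> list[Post]:
    # Each step is a banal, defensible engineering decision; the
    # aggregate effect is what gets debated above.
    return sorted(candidates, key=engagement_score, reverse=True)

feed = rank_feed([
    Post(1, predicted_watch_time=40.0, predicted_reshares=0.1),
    Post(2, predicted_watch_time=90.0, predicted_reshares=0.4),
])
print([p.post_id for p in feed])  # [2, 1]
```

No line in that sketch intends harm; the harm, if any, lives in what the objective silently omits.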
In the case of Facebook, the obviousness of the effect has no bearing; it has virtually no consequence without a culpability the current justice system is capable of addressing. Whether due to a lack of applicable laws, the adver$arial nature of the system, or the expectation of "free market" corrections by "rational people", the end product is highly representative of a banality that has no impetus to change.