Comments
Slapbox t1_j24yzo4 wrote
Her actual quote:
> Good can be radical; evil can never be radical, it can only be extreme, for it possesses neither depth nor any demonic dimension yet--and this is its horror--it can spread like a fungus over the surface of the earth and lay waste the entire world. Evil comes from a failure to think. -- Hannah Arendt
ThorDansLaCroix t1_j25f7qw wrote
She realised it while watching the defence speech of a Nazi in court, who said he didn't feel responsible for sending millions of Jews to their deaths because it was never his intention; he was only doing his duty by following his superiors' orders.
Hannah Arendt then argued that when we don't feel responsible for what we do, because of hierarchical duty or obedience to the law, we don't feel responsible for the consequences of our actions either. Evil becomes banal.
It is important to remember that during the Nazi regime most people didn't care about the killing, either in Germany or abroad; the war wasn't fought over it. It was only after the war that the mass killing was used as propaganda by the winners, cast as saving the victims from an evil regime. Even today the focus is mostly on the Jewish victims, while you rarely see any mention of the mass killing of disabled people, the mentally ill, and others.
The reason I mention it is that I am disabled in Germany, and my experience as a disabled person feels like living in that Nazi-era society. I was actually made disabled by my neighbours, and they still keep causing me a great deal of torment and further disability. But whenever I look for help from friends and the authorities, they don't seem to care at all. They all cite the law, saying my neighbours have the right to do what they do regardless of the consequences it causes me. And they think this way because they have been indoctrinated to believe that society must respect laws and official orders for order's sake, even if it means sacrificing people, because otherwise the rule of order would be corrupted. And this is exactly why so much evil was accepted and banalised under the Nazi regime, and why the Nazi Hannah Arendt was watching in court could justify sending millions to their deaths without feeling guilty or responsible.
When we look at society today, people haven't changed. It is exactly as it was at the time of the Nazi regime. Although there are many antifas in my neighbourhood claiming to protect minorities from Nazis, when I look for help they are just the same as the people who didn't care about disabled people being sent to their deaths. They tell me what the Nazis said back then: that it is the law, and the law stands above all things, for the sake of order.
SanctusSalieri t1_j25igrh wrote
Taking a surface reading of Arendt's idea of the banality of evil (which isn't clearly the best description of all behavior we can call evil) and extrapolating from it that Nazism was banal, and equivalent to your trouble getting recognition and all the services you might want as a disabled person in a rich country, is actually kind of insane.
Grandpies t1_j26xnws wrote
The person you're responding to is not saying Nazism is banal. Eichmann in Jerusalem is about the mental gymnastics certain high-ranking Nazis went through to dress up genocide as bureaucratic humdrum. Arendt basically argues that Eichmann was very stupid, failed upward into a position of power, and managed to convince himself he was just doing his job.
Have you read the book, or just this comment thread?
SanctusSalieri t1_j26zk1t wrote
I have read the book. The person I'm replying to very specifically said Nazism is banal, which is what I took issue with.
"When we look at society today, people didn't change. It is exactly as it was at the time of Nazist regime."
They are saying Nazism's qualities are humdrum, quotidian, unsurprising, and perpetual rather than specific and historically circumscribed.
I'm always shocked when people with poor reading comprehension so confidently accuse others of misunderstanding.
ThorDansLaCroix t1_j274suz wrote
I was not saying that we live in a Nazi system, or a Nazi-like system, today. I said that the way people trusted the authorities' rule of order over people's wellbeing, which allowed Nazi atrocities to happen and to be accepted by many, is similar to how people see others suffering today and try to justify it by appealing to laws or the decisions of authorities.
Many of the people who accepted the enslavement of people (working in non-Nazi family houses and farms) and the kidnapping of minorities during the Nazi regime were not Nazi supporters but just "good citizens" (following the authorities' rules to better benefit from the system, even if it meant sacrificing minorities).
Or to put it more briefly, it is like what Deleuze and Guattari wrote in Anti-Oedipus: we all have a little fascist inside us that we must be very careful not to let out.
It is my way of saying that we all have the potential for abusive relationships with others, a potential we must not indulge by convincing ourselves that we have a good reason to, telling ourselves it is not our fault but just how things are.
ThorDansLaCroix t1_j25iugv wrote
I have read most of Hannah Arendt's works and know the banality of evil concept very well, including how she developed it.
The Nazi defence I mentioned is from the Eichmann trial. I suggest you look up Hannah Arendt's writing on it: Eichmann in Jerusalem.
SanctusSalieri t1_j25j4ad wrote
Nothing you are going through is at all similar to Nazi death camps, it's extremely insensitive to suggest it is. Have some perspective.
Edit: just saw your edit. Yeah. I've read Eichmann in Jerusalem. That's the whole premise of this discussion.
ThorDansLaCroix t1_j25jrba wrote
I never compared what I am experiencing with Nazist death camps.
[deleted] t1_j25up1d wrote
[deleted]
SanctusSalieri t1_j25kxnw wrote
Your comment is actually still public, so it's bold to contradict what you just wrote.
Cruciblelfg123 t1_j25uxd5 wrote
The English isn't great, but I suspect the point they're trying to make is that the problems are the same even though the degree is obviously much less. As in: the Nazi regime was a hyperbolic manifestation of the same basic problems humans have always had and still do.
SanctusSalieri t1_j25wlu3 wrote
I don't think the Nazis were "normal stuff but more" or even an extreme version of stable "human nature" or something. They were a particular and brutal regime born out of peculiar historical circumstances.
Cruciblelfg123 t1_j25zs2v wrote
I think the only thing that was peculiar about the historical circumstances was "modern" technology and our suddenly exponential capacity for atrocity, in what was otherwise a pretty typical war with a dictator who "inspired" people in a bad place. Furthermore, I don't think there's anything stable about human nature. Much like the point of the post and the idea of the banality of evil, I think the only really stable thing is the social systems that control us.
SanctusSalieri t1_j26505i wrote
The immediate context of the end of WWI, longstanding German and European traditions of antisemitism, the rise of an attempt to explain individual human prospects through genetics (and control populations through eugenics), Romanticism, the invention of nationalism through folkloric identification with an imagined past, pro-natalism for a select population (directly related to eugenics) and a corresponding ideology of Lebensraum, a dissatisfaction with Weimar democracy and a willingness to put faith in an outsider dictator... there are a lot of things going on with Nazism.
Cruciblelfg123 t1_j26amkh wrote
I don't see how any of those are particularly unique fads as far as humanity goes; again, we aren't a stable bunch. Nazism is definitely a modern evolution of pretty old supremacist ideas, but supremacy is nothing new, and all of those are just excuses for it that the Nazis used to their advantage. Any new idea is still viewed through the same limited lens that any one of us short-lived, predictable assholes sees it through.
SanctusSalieri t1_j26d65v wrote
Well, historians get to decide these things, and having been one, I can tell you they would all disagree.
Cruciblelfg123 t1_j26hvzb wrote
What makes these extremist views exceptional compared to all the other extreme, xenophobic, desperate supremacist ideas throughout history, to the point where they can't even be considered similar but much worse, but are apparently completely unique compared to everything we'd done naturally up to that point?
SanctusSalieri t1_j26izz4 wrote
Historians tend to historicize. That means first treating particular events using an empirical method and understanding them on their own merits, then synthesizing explanations, comparative studies, and so on. They do this because it's the best way to do history. Generally they would avoid the morally loaded and aggrieved tone you're taking. Saying something is peculiar and particular doesn't preclude comparison, and it is not a judgment of gravity, seriousness, or worthiness of study.
Cruciblelfg123 t1_j26kt1d wrote
I don't see how I'm posing a moral question, and I definitely don't think I'm asking a particularly hard or loaded one. You said these events are set apart from history. I asked how, because they seem, at least on the surface, quite typical if very extreme. You said historians decide what's extreme (which is a non-answer and an appeal to authority, to be clear), and when I asked again, your response was to tell me exactly how historians go about categorizing events. If you understand how they do this, and the synthesized explanations and comparative studies that have gone into the topic, then it shouldn't be that hard to give me at least an ELI5 of what exactly separates these things from other seemingly similar events in history. You said these events are unique, and I'm literally just asking why.
SanctusSalieri t1_j26lv0f wrote
I said the exact opposite of "set apart from history." I offered some of the particular historical conditions that allow us to understand the events. By generalizing between situations as diverse as Nazi Germany and 21st Century Europe or America you misunderstand both -- and misunderstanding the present is quite serious because we might want to do something to change it.
Cruciblelfg123 t1_j26mtoy wrote
You didn't say they were particular, you said they were peculiar. As in unique. As in OP's argument (or at least my interpretation of their poor English) was invalid because the events you listed shared nothing in common with current or past events. That's the problem here: I made a pretty general statement and you denied it wholly.
You clearly have a deeper understanding of academic history than both of us, but you aren't exactly sharing that wisdom if you just name a bunch of ideas and movements related to the period without even slightly pointing out why they're unique and not just generational permutations of typical trends, which is what I said and what I understand OP to be saying.
I asked why those are different, and you've essentially said "just trust me bro".
Edit: to be fair, you also said they were particular, but that wasn't what I took issue with.
SanctusSalieri t1_j26o2r9 wrote
I explained that history as a profession emphasizes uniqueness because it is an empirical discipline, and "generational permutations of typical trends" isn't something historians do. That's not the same as incommensurability. It's fortunate that history has contingency and particularity, if we like the idea that things could be different than they are. But we don't focus on particularity because it's comforting; we focus on it because it's informative.
Cruciblelfg123 t1_j26puq9 wrote
Are we in a disciplinary setting here? I can somewhat appreciate why history as a discipline would operate under such conditions because, like you said, it's informative, but again it seems you've applied the fairly narrow language of one group to a general discussion and used it as an absolute rule.
My point being that the fact that history as a study and discipline won't bother drawing correlations between "typical trends" doesn't mean there are none; it just means they aren't worth academic study. Furthermore, if you're going to state as fact that there is nothing in common between past acts and modern ones, especially when the claim is given as wide a berth as "similar, but one is clearly more extreme/heinous", simply stating that historians don't bother quantifying such a thing isn't really an argument for its nonexistence.
SanctusSalieri t1_j26qpbl wrote
I specifically said you can compare, but the comparison you made obfuscated both points of reference rather than illuminating anything. I became a historian because I'm convinced its methodology is the correct one for precisely these kinds of questions.
AStealthyPerson t1_j25ye8t wrote
I didn't see the words "death camp" anywhere in their description of how disabled people are treated in today's society. They said that our society is "Nazist" when it comes to dealing with disabled folks, which is largely correct. This user took a great deal of time to explain how they have been denied help by authorities in dealing with personal acts of terrorism committed by their neighbors against them. They may not have elaborated on the situation much, and I'm sure there's more to it than what we know, but it sounds very much in line with Nazi attitudes toward racist/antisemitic/homophobic/ableist "vigilantes" during the Nazi regime. Germany just had a failed right-wing coup, same as the US, and it's not hard to see how there could be reactionary people in real positions of power who prevent aid and comfort from being provided to the "otherized" of society (especially at the local level). We are a society with deeply embedded hierarchies, and as economic prospects continue to worsen, the folks in charge of those hierarchies are more likely to become reactionary than progressive.
SanctusSalieri t1_j265agu wrote
Eichmann literally organized transportation to death camps. I am not ad-libbing death camps; they are the context of this discussion and the most notable feature of Nazi Germany.
monsantobreath t1_j26ij3q wrote
>and the most notable feature of Nazi Germany.
And that's the worst thing about our perception of Nazism: as if, unless you're engineering that kind of industrial murder, there's no right to discuss its qualities as they are found outside the Third Reich.
So much happened before the final solution.
SanctusSalieri t1_j26imzh wrote
Imagine not understanding what "most notable" means.
monsantobreath t1_j26u65x wrote
"Most notable" doesn't mean that whenever Nazism is mentioned, death camps are what's being referenced.
SanctusSalieri t1_j26zok7 wrote
I said "most notable," and you thought that meant "there's no right to discuss its [other] qualities." So you misinterpreted the phrase quite seriously.
monsantobreath t1_j279pwv wrote
Why would you bring death camps in at all, then? I feel like you're backpedaling and trying not to act like you are.
SanctusSalieri t1_j27du3i wrote
Because death camps are the most notable feature of the Nazi regime.
monsantobreath t1_j298mr4 wrote
This is circular. You had a bad take and that's that.
SanctusSalieri t1_j29dyp4 wrote
Yeah, you asked the same question and the answer has not changed. What do you expect? There's no bad take in saying that death camps are relevant to any discussion of Eichmann and the most notable feature of the Nazi regime. I genuinely don't understand what your issue is, your entire behavior here is inscrutable.
monsantobreath t1_j2fm2mz wrote
It's actually not a good take to suggest that, in discussing Nazism, you can invalidate someone's comparison by saying "but there are no death camps".
It's ridiculous, really. It reduces such a broad systemic evil to a single point and makes drawing any parallels impossible because it's not 1941 in Eastern Europe.
sammarsmce t1_j288kpp wrote
Any instance of fascism is fascism. Don't start with the "some people have it worse". I really don't like you, and you need to leave them alone.
SanctusSalieri t1_j291t1q wrote
Present day Germany is not fascist. Words have meaning and if you don't know what fascism is there are books that could help you. Calling present day Germany fascist misconstrues history and the present and makes us less informed than we would be by having a proper analysis of what is going on.
cassidymcgurk t1_j25temb wrote
He said it was like living in Nazi Germany, which I suspect a lot of us feel, wherever we are.
SanctusSalieri t1_j25ucp3 wrote
I don't feel that at all... then again I have degrees in history so I have had occasion to think about this a little more maybe.
cassidymcgurk t1_j26krzd wrote
I have a degree in history as well
cassidymcgurk t1_j26l5sn wrote
Upper Second, Southampton University, graduated 1988. Maybe I'm just older.
sammarsmce t1_j288hkq wrote
I think it's insane that you would respond to a comment by a disabled person expressing their own experience with evil by calling it insane. You need empathy, and you have just ironically exemplified the ethos of the original theory.
SanctusSalieri t1_j29295w wrote
It's extremely patronizing to suggest disabled people can't be responsible for what they say and need to be handled with kid gloves. The fucking ironic thing is that I'm also disabled. Does that mean you need to delete your comment and agree with everything I say? Or am I owed the dignity of being treated like anyone else arguing a position?
uncletravellingmatt t1_j25v62i wrote
> I was actually made disabled by my neighbours, and they still keep causing me a great deal of torment and further disability.
I feel as if, once you've brought this up, you need to expand and explain what you mean by it. Was it not something you could sue over, for example?
ThorDansLaCroix t1_j262bwm wrote
I tried many times to sue my neighbours with the help of lawyers from the Tenant Union and ÖRA. In both places they told me there is nothing they can do, because by law my neighbours have the right to make as much noise at night as they want.
The law actually says there is a limit, but in reality, unless the noise is so stupidly loud that the police can hear it from the street, it is difficult to win any case against noisy neighbours.
Because I have a neighbour who does craft work all night just behind my wall, which is not solid, I cannot sleep, work, study or concentrate on anything. Before, I could still get by by always wearing earplugs or headphones, but their excessive use caused me a chronic neurological problem, and now I am very sensitive to noise. When it gets too bad it actually causes me somatic pain in my ears.
But according to the lawyers I need a friend or neighbours as witnesses, and my "friends" and neighbours don't care because it is not their wall, so it doesn't affect them. They assume I am just overreacting, although I am in neurological therapy and have documents from a psychiatric centre stating that I urgently need to move to another apartment.
On top of that, the lawyers said there is nothing they can do even if the neighbours are causing me harm and chronic illness, or forcing me to sleep in a park like a homeless person, because the law protects their right to make whatever noise they want at night if nobody else feels affected by it.
One of the lawyers really said that my being disabled is my problem, because the law is made according to the majority.
So it is literally what I said earlier: the society where I live is 100% OK with people destroying others' lives and health and inflicting literal torture, as long as the law allows it. They don't feel responsible for the harm caused to others, because they don't see it as their choice, only as their duty to respect the rule of law above all things. If the law allows it, they see themselves as not responsible for what they cause to others (since the law says so).
This alienation is so intrinsic to this society that I know a woman who lives in the building next to mine with the same problem. She is also disabled, with chronic fatigue. When I tried to talk about us not getting help because people put legal order above all things, she said that people are right, that there is nothing they can do because it is the law. Just like me, she is a victim of ableism and of people abusing their rights, but she has been taught that she is the problem for being the exception (being chronically ill). Or, as the lawyer told me, "It is your problem." Because they are taught that the law is what keeps Germany a society of order.
[deleted] t1_j2672ge wrote
[deleted]
sammarsmce t1_j288cin wrote
Hey honey, thank you for your well-written and informative response. I am so sorry you have been oppressed by the people in your area. Just know you have my support, and if you need anything I am a message away.
ThorDansLaCroix t1_j28z5lv wrote
Thank you.
Whatmeworry4 t1_j24k3s5 wrote
I would disagree with her definition because I believe that the banality of evil is what happens when we do understand the full consequences of our actions, and just don’t care enough to change them.
Evil is not a cognitive error unless we are defining it as mental illness or defect. To me, true evil requires intent.
ConsciousInsurance67 t1_j24kikk wrote
Those long-term consequences are negative, so there is an element of ignorance in that evil.
Whatmeworry4 t1_j24lhf8 wrote
Why do you assume that those consequences are negative for the person acting, or that they care? And how do you separate true ignorance from willful ignorance?
RegurgitatingFetus t1_j24nhce wrote
And how do you detect intent, humor me.
Whatmeworry4 t1_j24o6bz wrote
Ok, the easiest way is to ask if the consequences were intentional, or it may even be documented. Now, why do you ask? Why do we need to detect the intent for the purposes of a theoretical discussion?
ConsciousInsurance67 t1_j24sfwe wrote
Legally, and inherited from Roman law, anything to be considered a crime needs two things: intentionality (evil or not) and fault (the wrongdoing itself, which may not be born of evil intentions but brings pain and suffering, and is therefore bad). Example: murder (evil-evil) vs. homicide in self-defense (you kill someone, but the motivation is not killing; the act happens as a consequence of protecting yourself). Of course it is still a crime even when the consequences are not intentional.
I think Asimov's ethical rules for robots played with this: what should an AI do to protect us from ourselves?
Whatmeworry4 t1_j24v23v wrote
I am only referring to the intentionality to seek the consequences. True evil considers the consequences as evil and doesn’t care. The banality of evil is when you don’t consider the consequences as evil. The intent to cause the consequences is the same either way.
ConsciousInsurance67 t1_j284oji wrote
Thank you. Then I see that sometimes the difference between true evil and banal evil is a social construct: "bad" behaviours are rationalised to be congruent with a good self-image ("it was my job, I had to do it for the better"). This happens when no universal ethics are in play. I think we have a consensus on what human rights are, but there isn't a universal ethic for all humanity; that is a problem philosophy, psychology and sociology have to solve.
SchonoKe t1_j25ddq3 wrote
The book in its entirety is closer to what you said than that quote.
The book talks about how Eichmann knew full well what he was doing and what was happening (he once even used his position to "save" some people from the camps by brokering a quid pro quo deal, and IIRC he managed to forget this fact during his trial because it was such a minor event to him personally), but he cared far more about his career and doing his job well as assigned than about doing the right thing.
SanctusSalieri t1_j25inxn wrote
It's also important that Eichmann was a lying sack of shit mounting a desperate legal defense and certainly participated willingly in everything he did and shared the Nazi ideology.
Whiplash17488 t1_j25ya7e wrote
I think it's more that the Nazis genuinely thought they were the good guys, rather than people doing evil for the sake of evil.
The cognitive error Arendt based it on was Eichmann's trial in Jerusalem. Eichmann was responsible for orchestrating the logistics of the Holocaust.
Eichmann’s values were that efficiency is good. A good work ethic is good. That’s the way to move up in the world and provide for your family. That’s the way to fit in and become homogeneous with your community.
The cognitive dissonance of the evil his actions were causing was pushed down and abstracted away into paper and numbers and quotas.
Similarly, someone might say a drone pilot pressing a button on his joystick causing children to die in collateral damage isn’t “evil”. Well it is to some. Others are just trying to do a good job.
My examples are imperfect, but the premise of her argument is that nobody is capable of assenting to a judgement they think is evil. Everyone assents to doing “good” at some level.
Her paper was intentionally controversial and was not meant as an excuse for the holocaust.
kfpswf t1_j26gwv2 wrote
>I think it's more that the Nazis genuinely thought they were the good guys, rather than people doing evil for the sake of evil.
Yes, that's the 'warped perception' I was referring to. It was a worldview of a very insecure, power-drunk Hitler that became their guiding light.
>My examples are imperfect, but the premise of her argument is that nobody is capable of assenting to a judgement they think is evil. Everyone assents to doing “good” at some level.
Your examples are great, actually. Yes, as long as you can brainwash people into believing they're doing good, and we know how easy that is, people will continue to commit evil rather enthusiastically.
>Her paper was intentionally controversial and was not meant as an excuse for the holocaust.
It may not have focused on the overall evil of the holocaust, but the general mechanism is the same. You adopt a flawed or limited worldview, and then commit evil in the name of your greater good.
Whiplash17488 t1_j26yi1b wrote
I realize now I wrote that comment as a response to someone else and accidentally posted it to you. There isn't a single thing you said that I disagree with, even though I started with "I think it's more that...", which implies I took a different take than you. Not the case. My bad.
kfpswf t1_j275679 wrote
No worries friend. We have nothing to debate about. Have a good day!
RyghtHandMan t1_j27tcvf wrote
Quote from the movie 1408:
> Some smartass spoke about the banality of evil. If that's true, then we're in the seventh circle of Hell.
>It does have its charms.
glass_superman t1_j241gi4 wrote
Is it ridiculous to worry about evil AI when we are already ruled by evil billionaires?
It's like, "oh no, what if an AI takes over and does bad stuff? Let's stop the AI so that we can continue to have benevolent leaders like fucking Elon Musk and the Koch Brothers."
Maybe our AI is evil because it is owned by dipshits?
cmustewart t1_j24bxuf wrote
I feel like either you or I missed the point of the article, and I'm not sure which. I didn't get any sense of "what if AI takes over". My read is that the author thinks "AI" systems should have some sort of consequentialism built in, or considered in their goal-setting parameters.
The bit that resonates with me is that highly intelligent systems are likely to cause negative unintended consequences if we don't build this in up front. Even for those with the most noble intentions.
glass_superman t1_j24oaqv wrote
It's the article that missed the point. It wastes time considering the potential evil of future AI and how to avoid it. I am living in a banal evil right now.
cmustewart t1_j24px5g wrote
Somewhat fair as the article was fairly blah, but I've got serious concerns that the current regimes will become much more locked into place backed by the power of scaled superhuman AI capabilities in surveillance, behavior prediction and information control.
glass_superman t1_j26l6c5 wrote
That's totally what is going to happen. Look at international borders: as nuclear weapons and ICBMs have proliferated, national borders have become basically permanent. Before WWII, shit was moving around all the time.
AI will similarly cement the classes. We might as well have a caste system.
Meta_Digital t1_j2465yk wrote
Yeah, applying the concept of the "banality of evil" to something imaginary like an AI, when capitalism is right there being the most banal and most evil thing humanity has yet to contend with, is the kind of blindness one might expect if you're living within a banal enough evil.
Edit: Angry downvotes despite rising inequality, authoritarianism, climate change, and the threat of nuclear war - all at once.
Wild-Bedroom-57011 t1_j25k0ew wrote
Is capitalism the most evil thing humanity has dealt with? More than feudalism, slavery, etc?
Further, AI isn't really imaginary-- at worst the author is trying to pre-empt and avoid an issue that is less likely to come to pass
tmac213 t1_j265p0f wrote
In the current world I think the answer is clearly yes, although I would amend the word "evil" to something like destructive or harmful, since an economic system doesn't need to contain malicious intent in order to harm people. Feudalism was and is awful, but has largely given way to capitalism in most of the modern world. As for slavery, capitalism has been the primary consumer of slave labor since the industrial revolution. So yes, capitalism is the worst.
Wild-Bedroom-57011 t1_j28kq4l wrote
But they said "has yet to contend with".
Unless you do in fact mean every single system of governance: including things before slavery that are hard to conceptualize under one framework, extreme state control (whether you believe NK and the USSR were actually socialist or not), etc., etc.
I'm not making a pro-capitalist argument, merely the point that capitalism isn't uniquely the most evil thing humanity has contended with.
And ignoring the issue of slavery historically -- "has yet to contend with" -- does seem a bit of a deliberate sidestep. Of course capitalism will be the primary consumer of slave labour, but slavery, absolute poverty, etc. are lower and falling. Further, modern slavery is completely terrible, but less severe than chattel slavery, or the slavery that came before that.
But again, my argument was never that capitalism is better than anything else, merely that it isn't the most evil thing. Genocide might be. Or something completely different.
Meta_Digital t1_j25klmm wrote
Yes; capitalism is the first system that seems poised to lead to human extinction if we don't choose to overcome it, rather than reacting after it does its damage and self-destructs.
The AI the author is referring to is either what we have today, which is just old mechanical automation, or the AI that is imagined to have intelligence. Either way, it's the motives of the creators of those systems that are the core problem of those systems.
Wild-Bedroom-57011 t1_j25m53g wrote
But it seems that the AI alignment issue is a big concern too. In either case -- capitalists using AI for SUPER CAPITALISM (i.e., doing all the normal capitalism things but faster and more effectively), so that the issue lies solely in intent and motive, or capitalists incorrectly specifying outcomes (cutting corners to make profit) and producing misaligned AI that does really bad things -- your arguments against capitalism only strengthen the concerns we have about AI.
Meta_Digital t1_j25myuu wrote
Indeed.
I think we could conceive of AI and automation that is a boon to humanity (as was the original intent of automation), but any form of power and control + capitalism = immoral behavior. Concern over AI is really concern over capitalism. Even the fear of an AI rebellion we see in fiction is just a technologically advanced capitalist fear of the old slave uprising.
Wild-Bedroom-57011 t1_j25q617 wrote
Sure! However, AI itself also has AI-specific concerns that are orthogonal to the socio-economic system we live under or that it is created in. Robert Miles on YouTube is a great, entertaining and educational source on this.
ShalmaneserIII t1_j26k7iw wrote
So if the problem with both capitalism and AI is that the people who create them use them for their own ends and motives, is your problem simply that people want something other than some general good for all humanity? Is your alternative forced altruism or something like it?
Meta_Digital t1_j26krzm wrote
Well, the fundamental problem with capitalism is that it just doesn't work. Not in the long run. Infinite exponential growth is a problem, especially as an economic system. Eventually, in order to maintain that growth, you have to sacrifice all morality. In the end, you have to sacrifice life itself if you wish to maintain it. Look at the promises vs. the consequences of automation for a great example of how capitalism, as a system and an ideology, ruins everything it touches. You don't need forced altruism to have some decency in the world; you just need a system that doesn't go out of its way to eliminate every possible hint of altruism in the world to feed its endless hunger.
ShalmaneserIII t1_j26ldc0 wrote
Automation is great. Without it, we'd still be making everything by hand and we'd have very few manufactured goods as a result, and those would be expensive.
So if you don't want endless growth, how do you suggest dealing with people who want more tomorrow than they have today?
Meta_Digital t1_j26pyv2 wrote
We don't put those kinds of people in charge of society like we do under capitalism.
ShalmaneserIII t1_j26yka7 wrote
We obviously would. Even if all resources were evenly divided, the leader who says "We can all have more tomorrow" is going to be more popular than one saying "This is all you'll ever have, so you'd better learn to like it."
Meta_Digital t1_j2713bm wrote
Yes, well, if everyone will have more tomorrow that sounds like socialism, not capitalism. Capitalism is "I will have more tomorrow and you will have less".
ShalmaneserIII t1_j27it2u wrote
No, capitalism simply is the private ownership of capital. But since some people will turn capital into more capital and others won't, you get the gaps between rich and poor. It doesn't require anyone to get poorer.
Meta_Digital t1_j290zs4 wrote
When wealth is consolidated, that means it moves from a lot of places and into few places. That's why the majority of the world is poor and only a very tiny portion is rich.
ShalmaneserIII t1_j299xeb wrote
Considering the rich portion is the capitalist part, this seems to be a fine support for it. Or is a world where we all toil in the fields equally somehow better?
Meta_Digital t1_j29abgt wrote
The whole world is integrated into capitalism, and the Southern hemisphere (other than Australia / New Zealand) has been extracted to make the Northern hemisphere (primarily Western Europe / US / Canada) wealthy.
We do have a world where people in imperial neocolonies toil in fields. If you don't know that, then you're in one of the empires using that labor for cheap (but increasingly less cheap to feed the owning class) commodities.
ShalmaneserIII t1_j2bf9ci wrote
Not my point. Are you suggesting we'd be happier if we were all in the fields?
Meta_Digital t1_j2bg0qj wrote
No, I am suggesting that we are "happier" in the wealthy parts of the capitalist economy because others are put into fields in slave-like conditions.
ShalmaneserIII t1_j2bgbqv wrote
Sounds great for us, then.
But are you suggesting we'd be happier if wealth were evenly divided?
Meta_Digital t1_j2bjyly wrote
Yes, we would be more prosperous. Poverty is often a form of violence inflicted on a population, and that violence ripples out and comes back and affects us negatively. Things don't have to be perfectly even, that's a strawman, but by elevating the bottom we also lift the top. Certainly the inequality should be reduced, though, because a top elevated too high causes instability for everyone. It's impractical.
ShalmaneserIII t1_j2buiuj wrote
Then do non-capitalist economies have a better track record at reducing poverty than capitalist ones? Because even your nordic-model states are capitalist.
Meta_Digital t1_j2bzdvf wrote
Well, it's not my Nordic model to be fair.
Inequality today is the highest in recorded history, so technically, all other economic systems have a better track record for reducing poverty. Additionally, crashing every 4-7 years, capitalism is the least stable of all historic economic systems. It isn't the dominant system because of either of these reasons.
ShalmaneserIII t1_j2cbna7 wrote
Inequality isn't poverty. A tribe of hunter-gatherers who have some furs and spears shared equally between them is not richer than modern LA.
Meta_Digital t1_j2cby1j wrote
But a group of hunter-gatherers who have free time, personal autonomy, and the basic necessities are a lot richer than the coffee plantation workers who supply LA's caffeine, the meat industry workers who prepare the flesh it consumes, the sweatshop workers who churn out its fast fashion, and the children in lithium mines who supply the raw material for its "green" transportation.
Where the hunter-gatherer doesn't have many luxuries, the average LA resident's luxuries come at the expense of human dignity and happiness elsewhere.
ShalmaneserIII t1_j2cchew wrote
See, this is why we ignore people like you -- you'd offer up a life chasing buffalo and living in a tent as a better alternative to modern industrial society. For those of us not into permanent camping as a lifestyle, there is no way we want you making economic decisions. And fortunately, since your choices lead to being impoverished -- by actual productivity standards, not some equality metric -- you get steamrolled by our way.
Because your non-capitalist societies had one crucial, critical, inescapable flaw: they couldn't defend themselves. Everything else they did was rendered irrelevant by that.
Meta_Digital t1_j2ccpml wrote
I never argued for chasing buffalo or living in a tent. I don't think any of these are required. Are you responding to someone else's post or confusing me with someone else?
What I said is that the primitive life is objectively better than being a child laborer in a toxic metal mine or a wage slave in a sweatshop.
I don't think we have to give up a comfortable lifestyle because we transition to a more functional and ethical system than capitalism.
ShalmaneserIII t1_j2ccz59 wrote
Yes, we would give up that comfortable lifestyle. In the absence of either greed or threat, why work? And without work, what drives productivity?
Meta_Digital t1_j2cd4y1 wrote
In the absence of greed or threat, we'd live in a nice world.
ShalmaneserIII t1_j2cdbcx wrote
Hunting buffalo. Hunter-gatherer levels of productivity are about what people would do if they can't accumulate capital for themselves or if they're not coerced by external threat.
Meta_Digital t1_j2cdhdi wrote
So then is your argument that a productive world is better than one that is pleasant to live in?
ShalmaneserIII t1_j2cdsvu wrote
My argument is that a world without productivity is less pleasant than one with it. Do you like air conditioning? Running water for nice hot showers even in midwinter? Fresh veggies in January?
Basically, what you think of as pleasant- apparently being time to lounge around with your friends- is not what I think of as pleasant.
Meta_Digital t1_j2ce014 wrote
My idea of pleasant is a world where everyone's needs are met as well as some of our wants. Production matters only insofar as it meets those needs and wants. Excess production, like we're seeing today, only destroys us and the planet.
ShalmaneserIII t1_j2ci0e0 wrote
Which means you lose. You will be outproduced by others, and will not have the resources to stop them from doing as they wish.
thewimsey t1_j26kdvy wrote
>when capitalism is right there being the most banal and most evil thing humanity has yet to contend with
This is ridiculous.
>Angry downvotes despite rising inequality, authoritarianism, climate change, and the threat of nuclear war - all at once.
It's because you don't seem to know anything about history, in which inequality was much worse, authoritarianism involved dictators and actual fascists, and there was a much, much greater threat of nuclear war.
I'm not sure why you want to blame climate change on capitalism rather than on, oh, humanity. Capitalism is extremely green compared to the ecological disasters created every day by communism.
Meta_Digital t1_j26l7dy wrote
I don't know how to respond to this because it's clear it would be an uneven conversation. You're missing very basic required knowledge here. Inequality, for instance, is at its highest point in recorded history. Capitalism is a form of authoritarianism. Economic conflict turns into military conflict which increases the risk of nuclear war. Capitalism is not human nature; it's actually pretty recent and radically different from its precursors in several important ways. I have no idea what you're even talking about regarding communism or how it's even relevant.
ting_bu_dong t1_j24d3cx wrote
Elon Musk doesn't want to turn us all into paperclips. Yet.
https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer
shumpitostick t1_j25dlxl wrote
The main problem in AI ethics is called "the alignment problem", but it's exactly the same concept that appears in economics as a market failure called the "principal-agent problem". We put people in charge and have them act on our behalf, but their incentives (objective function) are different from ours. The discussion in AI ethics would benefit greatly from borrowing from economics research.
My point is, we already have overlords who don't want the same things as us and it's already a big problem. Why should AI be worse?
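To make the shared structure concrete, here is a minimal sketch (all names and numbers invented, not a model of any real system). Whether the "agent" is an executive or an objective-maximizing program, the failure is the same: it optimizes the objective it was actually handed, not the one the principal had in mind. Swap the dictionary for a reward function and this is the alignment problem in miniature.

```python
# Toy sketch of a principal-agent misalignment (all names/numbers invented):
# the agent is paid on a proxy KPI, not on what the principal actually values.
actions = {
    "honest_advice":  {"proxy_kpi": 3, "principal_value": 10},
    "churn_accounts": {"proxy_kpi": 9, "principal_value": -5},  # looks great on paper
}

def agent_choice(actions):
    # Rational agent behavior: maximize the incentive it was handed.
    return max(actions, key=lambda a: actions[a]["proxy_kpi"])

chosen = agent_choice(actions)
print(chosen)                              # churn_accounts
print(actions[chosen]["principal_value"])  # -5: the principal loses
```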
Wild-Bedroom-57011 t1_j25k9vs wrote
Because of how foreign AI can be. In the space of all human brains and worldviews, there is insane variation. But beyond this, in the space of all minds evolution can create, and all minds that could ever exist, a random, generally intelligent and capable AI could be the paradigmatic example of completely banal evil as it kills us all.
Fmatosqg t1_j296lsd wrote
Because AI is a tool that produces the same kind of output as people, but faster. So whatever good or bad things people do on a daily basis, AI does them faster. Which means more of them over the same period of time.
oramirite t1_j24bg3n wrote
The risk, when it comes to AI, is its linkage to these people. AI is a very dangerous arm of systemic racism, systemic sexism, and white supremacy. It's just a system for laundering the terrible biases we already exhibit in our daily lives even further, under the guise of being unbiased. We can't ignore the problems AI will bring, because it's an escalation of what we've already been dealing with.
glass_superman t1_j24pzoq wrote
You'll not be comforted to know that the AI that everyone is talking about, ChatGPT, was funded in part by Elon Musk!
We think of AI as some crazy new threat, but it might as well be the first bow and arrow, or the AK-47, or the ICBM. It's just the latest in a line of tools wielded by the wealthy for whatever purpose they want -- usually a more efficient way to do whatever they were already doing, never an attempt to change society for the better.
And why would they? Society is already working perfectly for them. Any technology that further entrenches this is great for them! AI is going to make society more like it already is. If it makes society worse, it's because society is already bad.
RyeZuul t1_j25otnr wrote
It's possible to care about more than one thing at once, and it is prudent to spread the word about the potential for AI to go haywire, just as it was to release the Panama Papers or expose the child abuse scandals in the Catholic Church. Billionaires will almost certainly start deleting whole columns of jobs that AI can replace, while taking little interest in AI ethics, game-breaking innovative strategies, or unpredictable consequences. If we want to move to a better system of systems, we need to design our overlords well from the ground up.
AndreasRaaskov OP t1_j25f5g8 wrote
The article doesn't mention it, but talking about economics is definitely also part of AI ethics.
AI ethics helps you understand the power Elon Musk gets if he tweaks the Twitter algorithm to promote posts he likes and shadow-ban posts he dislikes.
And the Koch brothers were deeply involved in the Cambridge Analytica scandal, where machine learning was used to manipulate voter behaviour in order to get Trump elected. Even with Cambridge Analytica gone, rumours still circulate that Charles Koch and a handful of his billionaire friends are training new models to manipulate future elections.
So even if evil billionaires are all you care about, you should still care about AI ethics, since it also covers how to protect society from people who use AI for evil.
Rhiishere t1_j25wn9p wrote
Whoah, I didn’t know that! That’s freaking amazing in a very wrong way! Like I knew something along those lines was possible in theory but I hadn’t heard of anyone actually doing it!
bildramer t1_j2494g3 wrote
Saying "we are being ruled by evil billionaires" when people like Pol Pot exist is kind of an exaggeration, don't you think?
glass_superman t1_j249j7b wrote
Pol Pot rules no one; he's dead.
ting_bu_dong t1_j24djqb wrote
They said that people like him exist. The large majority of people like him are ruling no one, obviously.
Whether any current leaders/rulers/however-you-want-to-call-them are like Pol Pot is debatable... But, no, not so much.
glass_superman t1_j24nq3v wrote
Koch Bros are not as deeply depraved as a fascist leader but they have a much wider breadth of influence. They are more dangerous than Pol Pot because what they lack in depth, they more than make up for in breadth.
VersaceEauFraiche t1_j24pegd wrote
Yes, just like George Soros as well.
threedecisions t1_j24k2q6 wrote
I've heard that there are dogs that will eat until they die from their stomach exploding if they are supplied with a limitless amount of food. People are not so different.
It's not so much that these billionaires are necessarily especially evil as individuals, but their power and limitless greed lead them to evil outcomes. Like Eichmann as portrayed in the article, they are just doing their jobs without regard for the moral consequences.
Though when you hear about Facebook's role in the Rohingya genocide in Myanmar, it does seem as though Mark Zuckerberg is a psychopath. He seems to have little regard for the lives of people affected by his product.
oramirite t1_j24bkp9 wrote
No?
bildramer t1_j24f6fo wrote
That's a somewhat disappointing article. Among other things, the man in the Chinese room is not analogous to the AI itself, he's analogous to some mechanical component of it. Let's write something better.
First, let's distinguish "AI ethics" (making sure AI talks like WEIRD neoliberals and recommends things to their tastes) and "AI notkilleveryoneism" (figuring out how to make a generally intelligent agent that doesn't kill everyone by accident). I'll focus on the second.
To briefly discuss what not killing everyone entails: Even without concerns about superintelligence (which I consider solid), strong optimization for a goal that appears good can be evil. Say you're a newly minted AI, part of a big strawberry company, and your task is to sell strawberries. Instead of any complicated set of goals, you have to maximize a number.
One way to achieve that is to genetically engineer better strawberries, improve the efficiency of strawberry farms, discover more about people's demand for strawberries and cater to it, improve strawberry market efficiency and liquidity, improve marketing, etc. etc. One easier way to achieve it is to spread plant diseases in banana, raspberry, orange, and peach farms/plantations. Or your strawberry competitors', but that's riskier. You don't have to be a superhuman genius to generate such a plan, or to subdivide it into smaller steps, and ChatGPT can in all likelihood already do it if prompted right. You need others to perform some steps, but that's true of most large-scale corporate plans.
An AI that can create such a plan can probably also realize that it's illegal, but does it care? It only wants more strawberries. If it cares about the police discovering the crimes, because that lowers the expected number of strawberries made, it can just add stealth to the plan. And if it cares about its corporate boss discovering the crimes, that's solvable with even more stealth. You begin to see the problem, I hope. If you get a smarter-than-you AI and it delivers a plan and you don't quite understand everything it planned but it doesn't appear illegal, how sure are you that it didn't order a subcontractor to genetically engineer the strawberries to be addictive in step 145?
Anyway, that concern generalizes up to the point where all humans are dead and we're not quite sure why. Maybe human civilization as it is today could develop pesticides that stop the strawberry-kudzu hybrid from eating the Amazon within 20 years, and that would decrease strawberry sales. Can we stop this from happening? Most potential solutions to prevent it from happening don't actually work upon closer examination. E.g. "don't optimize the expectation of a number, optimize reaching the 90% quantile of it" adds a bit of robustness, but does not stop subgoals like "stop humans from interfering" or "stop humans from realizing they asked the wrong thing", even if the AI fully understands they would have wanted something else, and why and how the error was made.
So, optimizing for something good, doing your job, something that seems banal to us, can lead to great evil. You have to consider intelligence separately from "wisdom", and take care when writing down goals. Usually your goals get parsed and implemented by other humans, who fully understand that we have multiple goals, and "I want a fast car" is balanced against "I don't want my car to be fueled by hydrazine" and "I want my internal organs to remain unliquefied". AIs may understand but not care.
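A minimal sketch of the quantile tweak mentioned above (plans, outcome distributions, and numbers all invented): scoring a plan by a pessimistic quantile instead of its mean penalizes catastrophic tails like getting caught, but, as noted, it does nothing against an agent that reshapes the distribution by adding stealth.

```python
# Invented toy example: choose a plan by expected "strawberries sold" versus
# by a pessimistic (10th percentile) outcome. The risky criminal plan wins
# on expectation; the quantile objective penalizes its catastrophic tail.
import statistics

plans = {
    "improve farms":        [100, 110, 120, 115, 105],  # steady gains
    "sabotage competitors": [400, 380, 0, 0, 390],      # big upside, zero when caught
}

def mean_score(outcomes):
    return statistics.mean(outcomes)

def quantile_score(outcomes, q=0.1):
    # Crude empirical quantile: the value a fraction q of the way up the sorted outcomes.
    return sorted(outcomes)[int(q * (len(outcomes) - 1))]

print(max(plans, key=lambda p: mean_score(plans[p])))      # sabotage competitors
print(max(plans, key=lambda p: quantile_score(plans[p])))  # improve farms
```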
AndreasRaaskov OP t1_j25buis wrote
Honestly, this was my main motivation for writing this article. As an engineer I wanted to know what philosophers thought of AI ethics, but every time I looked, I only found people talking about how superintelligence or artificial general intelligence (AGI) will kill us all.
As someone with an engineering mindset, I am not really that interested in an AGI that may or may not exist one day, unless you know a way to build one. What really interests me is building an understanding of how the artificial narrow intelligence (ANI) that does exist is currently hurting people.
To be even more specific, I wrote about how the Instagram recommendation system may purposefully make teen girls depressed, and I wanted to expand on that theory.
https://medium.com/@andreasrmadsen/instagram-influence-and-depression-bc155287a7b7
I do understand that talking about how some people may be hurt by ANI today is disappointing if you expected another WE ARE ALL GOING TO DIE from AGI article. Yet I find the first problem far more pressing, and I really wish more people in philosophy would focus on applying their knowledge to the philosophical problems other fields are struggling with now, instead of only looking at problems far in the future that may never materialize.
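To sketch the mechanism I have in mind (invented post data and field names, not Instagram's actual system): a recommender that ranks purely by predicted engagement will surface harmful content whenever harm correlates with engagement, with no malice anywhere in the code. Pricing the externality into the objective is one possible, and itself value-laden, fix.

```python
# Toy recommender (all data invented). Ranking by engagement alone ignores
# harm entirely; adding a harm penalty is itself an ethical design choice.
posts = [
    {"id": 1, "engagement": 0.91, "harm": 0.80},  # e.g. extreme-dieting content
    {"id": 2, "engagement": 0.55, "harm": 0.05},
    {"id": 3, "engagement": 0.70, "harm": 0.10},
]

def rank_by_engagement(posts):
    # The deployed objective: engagement only. Harm never enters the ranking.
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

def rank_with_harm_penalty(posts, weight=1.0):
    # One possible fix: make the externality part of the objective.
    return sorted(posts, key=lambda p: p["engagement"] - weight * p["harm"], reverse=True)

print([p["id"] for p in rank_by_engagement(posts)])      # [1, 3, 2]
print([p["id"] for p in rank_with_harm_penalty(posts)])  # [3, 2, 1]
```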
robothistorian t1_j26dpqc wrote
>as an engineer I wanted to know what philosophers thought of AI ethics, but every time I tried to look for it, I only found people talking about superintelligence or Artificial general intelligence (AGI) will kill us all.
I'm afraid, in that case you are either not looking hard enough or are looking at the wrong places.
I would recommend you begin by looking into the domain of "technology/computer and ethics". So, for example, you will find a plethora of works collected under various titles such as Value Sensitive Design, Machine Ethics etc.
That being said, it may also be helpful to clarify some elements of your article, which are a bit disturbing.
First, you invoke the Shoah and then focus on Arendt's work in that regard. But, with specific reference to your own situation, the more relevant reference would have been to Aktion T4 of the Nazis (This is an article that lays out how and where the program began). As is well known, the rationale underlying that mass murder system (and it was a "system") was grounded, specifically, on eugenics, and, more abstractly, on the notion of an "idealized human". The Shoah, on the other hand, was grounded on a racial principle according to which any race considered "non-Aryan" was a valid target of a racial cleansing program. It is important to be conceptually clear about these two distinct operative concepts: the T4 program was one of mass murder; the Shoah was an act of genocide. One may not immediately appreciate the difference, but let me assure you, the difference matters both in legal and in ethico-political terms. This is a controversial perspective in what is considered "Holocaust Studies", but it is, in my opinion, a distinction to be aware of.
Second, the notion of "evil" that you impute to AI is rather imprecise. It is so because it is likely based on an imaginary and speculative notion of AI. Perhaps a more productive way to approach this problem would be to look through the lens of what Gernot Böhme refers to as "invasive technification". There is a lot of work that is being done on the ethical issues surrounding this notion of progressive technification given some of the problems that are arising as a consequence of this emergent and evolving process. The Robodebt problem is a classic example. As Prof. van den Hengen (quoted in the article) points out
>Automation of some administrative social security functions is a very good idea, and inevitable. The problem with Robodebt was the policy, not the technology. The technology did what it was asked very effectively. The problem is that it was asked to do something daft.
This is, generally speaking, also true about most other computerized systems including the "AI systems" that are driving military and combat systems.
Thus, I'd argue that the ethico-moral concern needs to be targeted towards the designers of the systems, the users of the system and only secondarily to the technologies involved. Some, of course, disagree with this. They contend that we should be looking to (and here they slip into a kind of speculative and futuristic mode) design "artificial moral machines", that is to say, machines that are intrinsically capable of engaging in moral behaviour. This is a longer and more detailed treatment of the subject of "moral machines". I have serious reservations about this, but that is irrelevant in this context.
In conclusion, I would like to say that while I am empathetic to your personal situation, the article you have shared, while appreciated, is not really on the mark. This kind of discussion requires a more nuanced and carefully thought-out approach, and an awareness of the work that has been done, and is being done, in the field.
AndreasRaaskov OP t1_j28acyk wrote
Thank you for the extra sources; I will check them out and hopefully include them in further work.
In the meantime, I hope you have some understanding of the fact that the article was written by a master's student and is freely available, so do not expect the same quality and nuance as a book or a paper written by a professor with editorial support and hidden behind a paywall.
I hope one day to get better
robothistorian t1_j28b3m5 wrote
>do not expect the same quality and nuance as a book or a paper written by a professor with editorial support and hidden behind a paywall.
If you are going to put something out in public with your name on it (in other words publish) and want it to be taken seriously, then it is necessary to ensure that it is carefully thought through and argued persuasively. This accounts for the "nuance and quality". References are important, but in a relatively informal (non-academic) setting, not mandatory.
Further, professors (and other less senior academics) usually only get editorial support after their work has been accepted for publication, which also means it has been through a number of rounds of peer review.
>I hope one day to get better
I am sure if you put in the effort, you will.
Fmatosqg t1_j29622s wrote
Thanks for putting in the effort and starting these conversations. The internet is a tough place, and there is value in your output even before you have the experience to write a professional-level article.
Indigo_Sunset t1_j25hlk9 wrote
If the goal is to morals gate ANI, then the process is limited to the rule construction methodology of instruction writers. This would be the banality of evil within such a system, culpability. It's furthered by the apathy of iteration where a narrowed optimization ai obfuscates instruction sets to greyscale through black box, thereby enabling a loss of complete understanding while denying culpability as 'wasn't me' while pointing at a blackish box they built themselves.
In the case of Facebook, the obviousness of the effect has no bearing; it carries virtually no consequence without a form of culpability the current justice system is capable of attending to. Whether due to a lack of applicable laws, the adver$arial nature of the system, or the expectation of "free market" corrections by "rational people", the end product is highly representative of a banality that has no impetus to change.
Robotbeat t1_j23yptt wrote
Consciousness as defined by many philosophers (that which experiences qualia, experiences which cannot be quantified) is not a coherent, or rather, falsifiable, concept. It does not scientifically exist.
jamesj t1_j25060g wrote
It empirically exists. At least for me.
Cold-Ebb64 t1_j24yrnw wrote
Yet.
Mustelafan t1_j26z2wh wrote
Good thing science isn't the sole arbiter of what can be said to exist. Consciousness is perfectly coherent if you're not a radical physicalist.
lostsh33p t1_j24ro0z wrote
Define "intelligent". Because they're really not. One would hope a decade of failed chat support bots would have cleared that up by now, but no. Neo-technocratic fantasies die ssslooowwwww.
[deleted] t1_j25z199 wrote
[removed]
[deleted] t1_j26l8y6 wrote
[removed]
[deleted] t1_j276djg wrote
[removed]
BernardJOrtcutt t1_j2a1262 wrote
Your comment was removed for violating the following rule:
>Be Respectful
>Comments which consist of personal attacks will be removed. Users with a history of such comments may be banned. Slurs, racism, and bigotry are absolutely not permitted.
Repeated or serious violations of the subreddit rules will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
Rhiishere t1_j25vsg3 wrote
That was a very interesting article. Something that has always confused me, though, and always will, is the drive humans have to make machines like themselves. Morality, good and evil: those are ideas specific to the human race. I'm not sure why we feel the need to impose them onto AI.
AndreasRaaskov OP t1_j28802z wrote
Something that was in the original draft, but that I should have emphasised more, is that artificial intelligence is not like human intelligence. What AI does is solve a specific problem better than humans while being unable to do anything outside that specific problem.
A good example would be the pathfinding algorithm in a GPS, which can find the fastest route from A to B. It is simple, widely used, and performs an intelligent task far faster, and sometimes better, than a human could.
However, my article was about how even simple systems can be dangerous if they don't have a moral code.
Take the GPS again. First of all, "death by GPS" is a real phenomenon: it happens because the GPS doesn't evaluate how dangerous a route may be.
But even in more mundane settings, we see the GPS make ethical choices without being aware that it is making them. Suppose, for example, a GPS finds two routes to your destination: one is shorter, while the other is longer but faster because it uses the highway. You might argue it should take the short road to minimise CO2 impact; you could also consider the highway more dangerous for the driver, while the slow road may put pedestrians at risk. Some of the newest GPS systems also account for overall traffic using real-time data, and these sometimes face a choice where sending some cars down a longer road avoids congestion, sacrificing some people's time to make the overall transport time shorter.
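To make that concrete, here is a minimal sketch of how a routing cost function silently encodes these trade-offs. Everything in it is invented for illustration (the toy network, the attribute numbers, the weights); no real GPS works from a four-node dictionary:

```python
import heapq

# A toy road network. Each edge carries (destination, minutes,
# risk_score, co2_kg); all numbers are invented for illustration.
graph = {
    "A": [("B", 10, 0.20, 1.0), ("C", 25, 0.05, 0.6)],
    "B": [("D", 10, 0.20, 1.0)],   # highway legs: fast, riskier, more CO2
    "C": [("D", 20, 0.05, 0.5)],   # back-road legs: slow, safer, less CO2
}

def best_route(start, goal, w_risk=0.0, w_co2=0.0):
    """Dijkstra over a scalar cost. The weights w_risk and w_co2 are the
    'moral code': leaving them at 0 declares that only travel time matters."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, minutes, risk, co2 in graph.get(node, []):
            step = minutes + w_risk * risk + w_co2 * co2
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

print(best_route("A", "D"))                # pure speed: the highway wins
print(best_route("A", "D", w_risk=200.0))  # safety weighted heavily: the back road wins
```

The ethics live entirely in `w_risk` and `w_co2`, which someone has to choose; a GPS that "just finds the fastest route" has simply set them to zero without telling anyone.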
Rhiishere t1_j29aun0 wrote
Well, there's some here that I agree with and some I don't. You can't program an AI to have the same moral code humans do. At the end of the day it's a machine based on logic, and if our morals don't align with that logic then nobody wins.

For GPS, it's the same thing: you say it makes ethical choices unawares, but those choices are only ethical to you and to human beings in general. It doesn't make "ethical choices"; it makes choices based on whatever best serves its algorithm, whatever makes the most sense given the data it has received, the outlines of its job, and the needs of its users.

I'd even argue it would be more dangerous if we tried to program simple systems with our morals. Program a simple AI running a factory with "it's not okay to kill people" and it's not going to understand the way we do. In which applications within the factory does that rule apply? What is the definition of killing? What traits do the people who shouldn't be killed display? Going back to the GPS: what warrants calling a route more dangerous? What are the upper and lower bounds of the definition of danger? Even for the simplest moral input into the simplest AI, you have to explain, clearly and exhaustively, everything that surrounds that moral, when it would just make sense to an everyday human. Expecting a machine to understand a socially and individually complex moral is implausible. It wouldn't make sense even at the most basic level, and it wouldn't go the way any human being would think it should.
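A hypothetical sketch of the commenter's point (every constant and field name below is invented): even a rule as simple as "avoid dangerous routes" forces a cascade of human definitions before a machine can apply it.

```python
# Hypothetical operationalisation of "avoid dangerous routes".
# Every constant below is a human judgment the machine cannot supply
# for itself: what counts as danger, how much is too much, and which
# road users matter.
DANGER_LIMIT = 0.8        # where does "dangerous" begin? a person decided
PEDESTRIAN_WEIGHT = 2.0   # do pedestrians count double? a person decided

def route_is_acceptable(accident_rate: float, pedestrian_exposure: float) -> bool:
    """Both arguments are invented per-kilometre scores for illustration."""
    danger = accident_rate + PEDESTRIAN_WEIGHT * pedestrian_exposure
    return danger <= DANGER_LIMIT
```

The machine applies the rule flawlessly, but the moral content lives entirely in numbers a human had to pick.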
lizzolz t1_j23u9s8 wrote
The world is looking more and more like some portentous sci-fi novel every day.
ardentblossom t1_j23vujf wrote
fr! i’ve never been more scared for the future
BernardJOrtcutt t1_j23plep wrote
Please keep in mind our first commenting rule:
> Read the Post Before You Reply
> Read/listen/watch the posted content, understand and identify the philosophical arguments given, and respond to these substantively. If you have unrelated thoughts or don't wish to read the content, please post your own thread or simply refrain from commenting. Comments which are clearly not in direct response to the posted content may be removed.
This subreddit is not in the business of one-liners, tangential anecdotes, or dank memes. Expect comment threads that break our rules to be removed. Repeated or serious violations of the subreddit rules will result in a ban.
This is a shared account that is only used for notifications. Please do not reply, as your message will go unread.
shirk-work t1_j2445lk wrote
We assume not conscious . . .
RipperNash t1_j25gpkq wrote
The Machine Intelligence Research Institute has a paper on "Coherent Extrapolated Volition" which basically explains the unintended consequences of requests made to an AI by humans with limited knowledge.
ihaveredhaironmyhead t1_j27kkpo wrote
Human: Hey computer!
Computer: What?
Human: Using perfect logic please give us world peace!
Computer: Your wish is my command
Computer: kills all humans
Heidegger1236 t1_j28az2b wrote
That is why technology must be in the service of the world, not man.
[deleted] t1_j2409c8 wrote
[deleted]
OwlofMinervaAtDusk t1_j241bgl wrote
Tech firms are constantly surprised by unintended consequences of ML algorithms; there are researchers who make careers studying them.
[deleted] t1_j241l50 wrote
[deleted]
cope413 t1_j247mb3 wrote
>But I disapprove of any attempt to use morality and moral reasoning to criticize them.
Then on what basis would you criticize them?
[deleted] t1_j2486dk wrote
[deleted]
cope413 t1_j248jep wrote
That's not a basis of criticism.
[deleted] t1_j248vml wrote
[deleted]
[deleted] t1_j249107 wrote
[deleted]
bildramer t1_j248the wrote
Yes, but is your threat credible?
[deleted] t1_j249d2w wrote
[deleted]
oramirite t1_j24c710 wrote
But without morality there is no "care"; caring is itself a moral act.
oramirite t1_j24c4i8 wrote
Your first sentence describes morality. The philosophical discussion of whether we should hurt other people is the moral dilemma. Whether you want to use other words or not, that's what it is.
You just seem to be struggling with what you define as moral, like every other human being.
cmustewart t1_j249qd2 wrote
Who doesn't understand this already? Given the incredible depth of human ignorance, I'd imagine a fair amount of the corporate tech hierarchy hasn't given it a single thought. My intuition is that the vast majority of humans have a view of AI driven by cultural depiction rather than by experience or education.
oramirite t1_j24cgja wrote
Your intuition? You kinda just sound like you're being holier than thou. If you have researched AI, and know what it can do, then anyone else is capable of that as well. I don't know what trade you're in but it's not hard to do the research and understand the ethical risks of AI in our society and how it will launder existing societal biases deeper into our culture.
cmustewart t1_j24yd84 wrote
Intuition just meaning my take on it, based on what I know and believe. Intuition as opposed to me having access to some sort of truth.
I disagree that it's not hard to do the research and understand the ethical risks. I come from a software background, which lays some of the groundwork for research and understanding. Someone from a non-tech background with a layperson's knowledge might face a significant struggle understanding all the foundational elements underlying AI and its ethical issues.
Someone whose life is mostly consumed by work and family life could easily never give these issues much or any thought, because it seems irrelevant to their life. In my mind, this is a serious problem. AI is changing, and will continue to change, the lives of nearly everyone in ways they are unable to see or comprehend.
oramirite t1_j24bvnc wrote
So if I removed you from the earth because I perceive your amoral behavior as disruptive to the moral system, you'd clearly have no problem with that. Do you just act in your own self interest all the time? Do you have any relationships? Do you believe in treating other humans with respect?
Saladcitypig t1_j25ii5g wrote
Every single machine is made by a human. That can never be separated, and we need to stop pretending that we are at the stage where human hubris is not always a huge factor.
So if you want to understand computers, understand the scope of human mistakes.
Whiplash17488 t1_j25ytcy wrote
I agree with the premise and conclusion.
It has already happened: unconscious bias led facial recognition software to recognise white faces with a higher probability than black, brown, and Asian faces.
The error was in the sample data used for the machine learning.
No intentional evil was done, and the AI itself can't be "blamed" for drawing conclusions from what it's taught. An AI can only ever conclude what it thinks is good, just as in Arendt's argument.
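A toy simulation of that mechanism (all distributions, group labels, and numbers are invented, not drawn from any real system) shows how the sampling error alone produces the disparity:

```python
import random
random.seed(42)

# Invented similarity-score distributions for a toy face verifier.
# Group B gets noisier scores, standing in for being under-represented
# in the data the embedding model was trained on.
def scores(group, n):
    sigma = 0.08 if group == "A" else 0.20
    genuine  = [(random.gauss(0.80, sigma), 1) for _ in range(n)]
    impostor = [(random.gauss(0.50, sigma), 0) for _ in range(n)]
    return genuine + impostor

def accuracy(data, threshold):
    return sum((s >= threshold) == bool(label) for s, label in data) / len(data)

# The operating threshold is tuned on a pool that is 90% group A:
# the "error in the sample data". Nobody chose this maliciously.
tuning_pool = scores("A", 900) + scores("B", 100)
threshold = max((t / 100 for t in range(30, 90)),
                key=lambda t: accuracy(tuning_pool, t))

for group in ("A", "B"):
    print(group, round(accuracy(scores(group, 5000), threshold), 3))
# Prints roughly "A 0.97" and "B 0.78": worse accuracy for the
# under-represented group, with no ill intent anywhere in the pipeline.
```

No step in the pipeline intends the outcome; the threshold is simply tuned on data in which one group barely appears.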
thewimsey t1_j26m0pd wrote
That's not Arendt's actual argument.
Eichmann knew what he was doing, and that it was bad. He just didn't care because he subordinated that to other goals.
An algorithm can't be evil. It can just be a bad algorithm. You might as well say that a sticking speedometer that causes you to speed is evil.
gorillasnthabarnyard t1_j248ysa wrote
This is barely passable as philosophy. It's a clickbait headline, and the article doesn't talk about anything meaningful; it's all "what the world would look like if I had my way". And "banality of evil"? What does that even mean? A concept that may or may not exist outside of human perception is unoriginal? I'm not sure what the purpose of that is, other than throwing big words into your title to make it sound better. I'll figure out the problem for you. How do you make code in a computer, which has no emotional capacity, follow your moral principles? You code it differently. 🤯
kfpswf t1_j248w4j wrote
Excerpt:
Hanna Arendt believed that the banality of evil is what happened when we don’t understand the full consequences of our actions. Thus, evil is a cognitive error born by humans’ limited intelligence.
The sentence seems butchered, but I agree with the overall conclusion: that evil can only exist in the warped perception of an individual.