PinguinGirl03 t1_j1lqmlz wrote

Did you actually read your AI-generated answer? It incorrectly applied nearly all of those fallacies.

edit: scrap "nearly"; I think it got them all wrong.

13

AndromedaAnimated t1_j1lrbhb wrote

The "appeal to fear" is the one that is correct, in my opinion. Which other fallacies would you see as applied correctly here?

Regardless, it’s sweet and fascinating that an AI writes such a list so eloquently.

4

PinguinGirl03 t1_j1lw7nc wrote

No, appeal to fear is not displayed here.

Appeal to fear would have to take the form:

> Either P or Q is true.

> Q is frightening.

> Therefore, P is true.

There is no appeal to fear here, just a listing of possible negative effects.

Looking closer, I don't think ANY of those examples actually matches the fallacy it is listed under.
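To make the point concrete, here is a minimal sketch in LaTeX contrasting the schema above with the valid argument form it resembles (disjunctive syllogism; the name is standard logic terminology, not something used in this thread):

```latex
\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}

% Disjunctive syllogism (valid): the second premise rules Q out,
% so P is the only remaining option.
\[
  P \lor Q, \qquad \lnot Q \;\;\vdash\;\; P
\]

% Appeal to fear (invalid): "Q is frightening" stands in for
% "not Q", but being frightening does not make Q false, so the
% step to P is unsupported.
\[
  P \lor Q, \qquad \text{``$Q$ is frightening''} \;\;\nvdash\;\; P
\]

\end{document}
```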

4

AndromedaAnimated t1_j1m0h13 wrote

Thank you for explaining your view on that!

I have understood it to be the fallacy as follows:

Q being propaganda and fake news, and P being the use of AI. This despite the fact that fake news and propaganda can be produced entirely without AI (and already happen all the time), and that AI can also be used to distinguish fake news from real news, so it does not necessarily lead to an increase in propaganda/fake news.

Or: IF you accept AI as good, THEN you will fall victim to fake news and propaganda.

Of course, IF we assume a causal relationship between AI and fake news/propaganda, THEN it would no longer be a fallacy.
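Her closing point can be restated formally: once the causal conditional is granted as a premise, the inference is an instance of modus ponens and is structurally valid; the dispute then shifts to whether that premise is true. A sketch, with an illustrative instantiation of the propositions:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Modus ponens (valid): with the conditional granted as a
% premise, the conclusion follows as a matter of form.
\[
  \text{AI use} \rightarrow \text{more fake news}, \qquad
  \text{AI use} \;\;\vdash\;\; \text{more fake news}
\]

% Whether the conditional premise is actually true is an
% empirical question, not a question of logical form.

\end{document}
```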

1

PinguinGirl03 t1_j1mpnnp wrote

Logical fallacies can only describe flaws in the structure of an argument, not whether the axioms themselves hold true.
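A standard textbook-style illustration of that split: the argument below is structurally valid even though its first premise is false, which is exactly why the form of an argument and the truth of its premises are judged separately.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Valid but unsound: the form (universal instantiation plus
% modus ponens) is fine; the first premise is simply false.
\[
  \forall x\,\bigl(\mathrm{Cat}(x) \rightarrow \mathrm{Reptile}(x)\bigr),
  \qquad \mathrm{Cat}(\mathrm{tom})
  \;\;\vdash\;\; \mathrm{Reptile}(\mathrm{tom})
\]

% A fallacy is the mirror image: the premises may all be true,
% yet the form fails to license the conclusion.

\end{document}
```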

3

AndromedaAnimated t1_j1ncsoj wrote

But is it still a fallacy if there is an actual causal relationship? As in, a "causal relationship" in the sense that there is temporal precedence and covariation, and other factors cannot explain it.

Wouldn't that mean that one claim could correctly be inferred from the other? That would then no longer be an error in reasoning (structure), or would it still be?

Isn't that what you meant by "listing consequences"?

Sorry for asking you again, but it is a field I have only partial experience with (I'm the "empirical science" type… the only fallacies that interested me previously were artefacts in statistical analysis), and your explanations are short, understandable, and to the point; they help me understand it. Thank you!

1

LoneRedWolf24 t1_j1lzdob wrote

Yeah, this reply hurt to read. It's not the wrong use of AI, but it's poor execution lol.

2