XiphosAletheria t1_jecb5sw wrote

In addition to what other people have posted, you could also get deja vu simply because you have in fact done something like what you are currently doing before, even if you don't remember precisely when. I mean, if you walk down a street and see a church, and get a feeling of deja vu, well, you almost certainly have walked down streets and seen churches before. The brain makes the connection but fails to pull up a specific example, so you get that feeling.


XiphosAletheria t1_jecad29 wrote

Outside of some tricks with language ("this statement is a lie"), you can't have actual paradoxes. By definition, a real paradox is impossible. However, you can get things that seem paradoxical. Normally, these situations involve reasoning that seems sound, based on common assumptions, but that leads to a result contradicting what we know to be true.

So, we know that a racer can overtake another racer who has a head start on him. Zeno's reasoning seems solid, but indicates that overtaking should be impossible. The common assumption that turns out to be wrong is the idea that space is infinitely divisible. We're in a simulation, and you can't have half a pixel, so you always have to move across the screen in discrete units.
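For what it's worth, you can also see why the overtaking happens just by adding up Zeno's stages: the time for each catch-up stage shrinks geometrically, so the infinitely many stages sum to a finite time. A quick sketch with made-up numbers (runner at 10 m/s, a 100 m head start, quarry at 1 m/s):

```python
# Zeno's stages: at each stage the pursuer runs to where the
# quarry *was* at the start of that stage. Each stage's time is
# 1/10 of the previous one, so the total converges.
pursuer_speed = 10.0   # m/s (assumed for illustration)
quarry_speed = 1.0     # m/s
gap = 100.0            # initial head start in metres

total_time = 0.0
for _ in range(60):               # 60 stages is plenty for convergence
    t = gap / pursuer_speed       # time to close the current gap
    total_time += t
    gap = quarry_speed * t        # quarry moves a little in that time

# Closed form: each stage takes 1/10 the time of the last,
# so total = 10 / (1 - 1/10) = 100/9 seconds.
print(total_time)                 # ~11.111 s, finite despite infinite stages
```

So the infinite regress of stages all fits inside about 11.1 seconds of ordinary running, which is exactly when the overtaking occurs.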


XiphosAletheria t1_jdjbz03 wrote

>Just pointing out a fallacy is not enough, you also have to be able to show how that fallacy discredits the argument as it is used.

By definition the fallacy discredits the argument it was a part of. It does not, however, disprove the conclusion. If you say "the sky is blue because Joe Biden says so", I can point out that this is an argument from authority, and doing so immediately discredits your argument. The sky remains blue, however.


XiphosAletheria t1_jcc6wsn wrote

But only if you're still hung up on trying to decide whether or not things are true in the first place. Something being useful doesn't have to be a truth statement. I might find something very useful that you find utterly useless. That is, utility is subjective in a way that truth isn't, which is the best reason for thinking in terms of utility rather than truth. Most things people believe, they believe because those beliefs are useful to them in some way. And recognizing that makes it easier to accept people holding beliefs you personally disagree with. Mostly, fighting over whether something is true or not is pointless, especially because when it comes to things like controversial political beliefs, most are rooted in subjective values anyway.


XiphosAletheria t1_jc5pd56 wrote

> subjective awareness can never be described in logical, rational terms, or

Well, given that all we have is our subjective awareness of things, and that we came up with logic and reason, that is sort of self-evidently false. What subjective awareness can't be described in is scientific terms, which is different. Science requires empirical observation, but we can't observe someone else's subjective awareness directly. It also requires repeatability, but subjective awareness is too malleable for that.


XiphosAletheria t1_jbow333 wrote

I find arguing over free will to be a little pointless, because determinists tend to be people who simply don't understand the concept of emergent properties. It's why things like "the ability to do otherwise" that you mention don't really matter to them - if the universe is at base random and chaotic rather than deterministic, that's still not something you have control over. Basically, free will is not a property of the universe at a basic level - it just emerges in certain complex systems. It's like "life" or "consciousness" in that respect. But emergent properties are difficult to explain, and a lot of people would rather disbelieve in them than admit to the reality of something they don't understand. Hell, I've seen people argue that consciousness and even life are illusions rather than face having to admit that the world contains things science can't easily explain.


XiphosAletheria t1_jbg8ifr wrote

I am not sure that Truth with a big T even makes sense. Unless you are religious maybe, but even then the truth about the existence of God, although very important, would still be a truth about a specific thing, and so a "little" truth in that sense.


XiphosAletheria t1_jbflb1p wrote

>Do you mean, the inability to separate (uncertain, you admit) truth from (likewise uncertain) myth is meaningful, as borne out through a society's technological advancement? This seems like a nod to choosing a truth-position mostly based on its utility...

Sure, yes. That's largely why we care about the truth, after all. We believe operating on truth will result in better outcomes than operating on lies (in general - I'm sure you could come up with specific tortured examples in which that isn't the case). But generally, we care whether X is true because what we should do to get the outcomes we want changes depending on whether it is or not. If "easy access to guns leads to more homicides" is true, then banning guns will lower the homicide rate (useful). If it is false, then doing so won't impact the homicide rate while driving up resentment among those affected (the opposite of useful). So knowing whether the statement is true lets us pick the better policy.

And your comment, like OP's, seems to imply a false dichotomy between "certain" and "uncertain". But we have degrees of certainty, and saying something is "true" has only ever meant that we have a high degree of certainty about it, and that is still a meaningful statement.

Basically, just because you can be wrong about what is true doesn't mean that truth should be dismissed as unimportant.


XiphosAletheria t1_jbf9mus wrote

I agree with the premises but not the conclusion. It is true that we tend to define "true" as those beliefs we hold that are both useful and cohesive with our other beliefs. It is further true that those criteria aren't definitive - we can never be certain about truth, and spend roughly a third of our lives in dreams that are pure illusion. Nevertheless, the inability to be certain of the truth doesn't mean that separating truth from fiction has no benefit or isn't meaningful. There is a reason a society based on the scientific method ends up much more advanced than one based purely in myth.


XiphosAletheria t1_j9qinie wrote

I think of morality as being a complex system emerging from the interplay between the demands of individual self-interest and societal self-interest.

The parts of morality that emerge from individual self-interest are mostly fixed and not very controversial, based on common human desires - I would prefer not to be robbed, raped, or killed, and enough other people share those preferences that we can make moral rules against them and generally enforce them.

The parts of morality that arise from societal self-interest are more highly variable, since what is good for a given society is very context dependent, and more controversial, since what is good for one part of society may be bad for another. In Aztec culture, human sacrifice was morally permissible, and even required, because it was a way of putting an end to tribal conflicts (the leader of the losing tribe would be executed, but in a way viewed as bringing him great honor, minimizing the chances of relatives seeking vengeance). In the American South, slavery used to be morally acceptable (because their plantation-based economy really benefited from it) whereas it was morally reprehensible in the North (because their industrialized economy required workers with levels of skill and education incompatible with slavery). Even within modern America, you see vast differences in moral views over guns, falling out along geographic lines (in rural areas gun ownership is fine, because guns are useful tools; whereas in urban areas gun ownership is suspect, because there's not much use for them except as weapons used against other people).


XiphosAletheria t1_j9n2j56 wrote

I think my main issue here is that I don't think "generalizable" is the same as "useful". I think learning to articulate your moral assumptions, and then to interrogate them and resolve any contradictions as they arise, is useful, and really the whole point of philosophy.

Beyond that, I think a lot of the factors people come up with are in fact generalizable, at least for them. That is, once people have resolved the trolley problem to their own satisfaction, the factors they have identified as morally relevant will remain relevant across a range of issues. The trolley problem doesn't reveal much that is generalizable for people as a group, but because morality is inherently subjective, we wouldn't really expect it to.


XiphosAletheria t1_j9m3q8e wrote

I think the point of the thought experiment is to help people discover what their intuitions are, what the reasoning is behind them, and where that leads to contradictions. What's important about the trolley problem isn't that people say you should flip the lever. It's that when asked "why?" the answer is almost always "because it is better to save five lives than one". But then when it comes to pushing the fat man or cutting someone up for organs, they say you shouldn't do it, even though the math is the same. At which point people have to work to resolve the contradiction. There's a bunch of ways to do it, but hashing out which one you prefer is absolutely worthwhile and teaches you about yourself.


XiphosAletheria t1_j9lzdxm wrote

I think the response there is that the apparent lack of generalizability means only that you have failed to analyze the situation correctly. What the trolley problem teaches us is that those running a closed system should run it so as to minimize the loss of life within it. That is, if I am entering a transit system, and a trolley-problemish situation arises in it, I should rationally want the people running the system to flip levers and push buttons such that fewer people die, because I am statistically more likely to be one of the five than the one.

Whereas we shouldn't want people using others as means to an end in an open scenario. Again, because the odds that someone might want an organ from me at any given moment are much higher than the odds of my needing one myself.

In both cases, the trolley problem shows that our moral impulses are rooted in rational self-interest, rather than, say, simple utilitarianism.


XiphosAletheria t1_j9id7c3 wrote

> No way would anyone reasonably agree to be enslaved, sacrificed, or raped...whereas the death penalty (given certain evidence) may be morally excused.

You see the contradiction there, right? No way would anyone reasonably agree to be executed. For that matter, if we hadn't been raised in a society where involuntary taxation was the norm, I doubt many reasonable people would agree to it. That is, just because I wouldn't reasonably agree to have X happen to me doesn't mean society might not morally do X to me anyway under certain circumstances.

And I don't see the point of your argument anyway. Let's say there is some set of moral norms that we all agree to be true. That doesn't help us. What we need is a guide for when we have moral disagreements between reasonable people. At best, you'll end up stating something glaringly obvious (since we all apparently agree with it anyway). At worst, and this seems far more likely, you'll have people using your idea as a way to simply dismiss anyone who disagrees with them as both unreasonable and immoral, which is the opposite of the mindset any thoughtful person, and especially a philosopher, ought to have.


XiphosAletheria t1_j9gqepu wrote

> There will be agreement on morals claims the same way there is agreement on objective reality.

Except there won't be. Just off-hand you can find reasonable people who disagree about the morality of, say, the death penalty, abortion, eating meat, etc. And that's within one culture. If you look at other cultures, you'll see reasonable people disagreeing about things we agree on here - such as slavery, human sacrifice, marital rape, etc.

And anyway, "objective" is not the same as "subjective, but a bunch of people agree with me".


XiphosAletheria t1_j9fo6gm wrote

I mean, this is just silly. "Morality is what 'reasonable people' would agree to" might work if you want to say morality is subjective, because of course reasonable people across different times and places have had very different views of what is moral. But to define morality as that while claiming it is objective falls flat.


XiphosAletheria t1_j6x8i39 wrote

>I really don't believe that at all.

You don't believe that if people are repeatedly lied to, they eventually begin to mistrust those who lied to them? Because that's a basic psychological truth, really.


XiphosAletheria t1_j6uervp wrote

Or perhaps they are questioning the authorities that have a track record of openly lying to them, as when, early in the pandemic, the experts all said masks weren't very useful against Covid, not because they didn't know better but because supplies of decent masks were limited and they didn't want a run on them.

And you see a lot of lies like that from governments, the establishment, etc. Often their core supporters don't even experience the lies as lies, because the lies aren't meant to fool them. As when then-primary-candidate Obama promised blue collar workers he'd tear up NAFTA, while sending messages to the Canadian government assuring them this was just a campaign lie. When it came out, it didn't kill Obama's political fortunes, because his core supporters all knew he was lying anyway, to get those low-information idiots onboard.

But the penalty for lying repeatedly to people you don't respect is that they eventually start to just assume everything you are saying is a lie. And it seems like a lot of Americans have reached that point with the government and its associated experts.


XiphosAletheria t1_j6udfh7 wrote

I think the point people are making is that the process as it currently exists often lacks repeatability, in the sense that many published studies don't actually have anyone trying to repeat the results. Like, sure, you have grasped how the scientific process is supposed to work in theory, but no one is naive enough to think science is like that in the real world.


XiphosAletheria t1_j6ny64n wrote

I think it depends on what you think the role of philosophy is.

If you think it aims at finding truth, then the article makes a good point. You don't really study Becher and phlogiston theory in chemistry or Lamarck's view of evolution in biology, except as historical curiosities. If philosophy, like the hard sciences, aims at truth, then most of the old "great" philosophers shouldn't really be taught anymore, because they got almost everything wrong.

Now, if you think philosophy is more about learning how to think consistently about a variety of ultimately subjective topics, then of course the "great" philosophers are worth studying for the reasons you outlined, much as older literature is worth studying because it is the beginning of a very long and ongoing cultural conversation.

The issue, I think, is that most of the ancient philosophers, especially back before the hard sciences split off from natural philosophy, explicitly claim philosophy is the first type of thing rather than the second. And even today you'll get some philosophers who'll prattle nonsense about objective moral facts and whatnot. Philosophy is sort of an odd humanity in that way.


XiphosAletheria t1_j60hce5 wrote

Too much to comment on here, but this stood out to me

>Tomasello (2016) characterises altruism with the moral formula, "you > me"; i.e., "I place your interests above my own".

That is true of pure altruism, but I am not sure there are many people over the age of six who believe in that as a moral precept. Rather, much more common is a belief in reciprocal altruism, which is much more like enlightened self-interest: a willingness to help out without any immediate payoff, in the expectation of help later on.