peer-reviewed-myopia t1_itj4r2m wrote

Wow. That was one of the most laughable articles I've ever read. I'm fluctuating between disgust and awe. Feels strange. Intentions of the author aside, this article is pure art — an absolute satirical masterpiece.

>According to the meta-charity GiveWell, the most effective charities can save a child’s life for between 3,000 and 5,000 US dollars. One way of understanding this figure is that whenever you consider spending that amount of money, one of the things you would be choosing not to spend it on is saving a child’s life. Take the median of the GiveWell figures: $4,000. I propose that prices for all goods and services should be listed in the universal alternative currency of percentage of a Child’s Life Not Saved (%CLNS), as well as their regular prices in Euros, dollars, or whatever
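
The article's proposed conversion is just a ratio, which makes its arbitrariness easy to see. A minimal sketch (my own, using the $4,000 median figure the author picks):

```python
# Sketch of the article's proposed %CLNS conversion. The $4,000 constant
# is the median of the GiveWell range quoted above; nothing else here is
# from the article.
COST_TO_SAVE_A_LIFE_USD = 4_000

def price_in_clns(price_usd: float) -> float:
    """Convert a dollar price to 'percentage of a Child's Life Not Saved'."""
    return price_usd / COST_TO_SAVE_A_LIFE_USD * 100

print(f"$200 -> {price_in_clns(200):.0f}% CLNS")      # a $200 purchase = 5% CLNS
print(f"$4,000 -> {price_in_clns(4_000):.0f}% CLNS")  # = 100% CLNS, one whole child
```

Note that the entire scheme hinges on that one point estimate, which is exactly what GiveWell itself warns against below.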

Funny, because the same GiveWell opposes an “explicit expected-value” (EEV) approach to giving and believes it to be intuitively problematic. Their conclusion states: "Any approach to decision-making that relies only on rough estimates of expected value – and does not incorporate preferences for better-grounded estimates over shakier estimates – is flawed."

Some of their points:

> - There seems to be nothing in EEV that penalizes relative ignorance or relatively poorly grounded estimates, or rewards investigation and the forming of particularly well grounded estimates. If I can literally save a child I see drowning by ruining a $1000 suit, but in the same moment I make a wild guess that this $1000 could save 2 lives if put toward medical research, EEV seems to indicate that I should opt for the latter.
> - Because of this, a world in which people acted based on EEV would seem to be problematic in various ways.
>   - In such a world, it seems that nearly all altruists would put nearly all of their resources toward helping people they knew little about, rather than helping themselves, their families and their communities. I believe that the world would be worse off if people behaved in this way, or at least if they took it to an extreme. (There are always more people you know little about than people you know well, and EEV estimates of how much good you can do for people you don’t know seem likely to have higher variance than EEV estimates of how much good you can do for people you do know. Therefore, it seems likely that the highest-EEV action directed at people you don’t know will have higher EEV than the highest-EEV action directed at people you do know.)
>   - In such a world, when people decided that a particular endeavor/action had outstandingly high EEV, there would (too often) be no justification for costly skeptical inquiry of this endeavor/action. For example, say that people were trying to manipulate the weather; that someone hypothesized that they had no power for such manipulation; and that the EEV of trying to manipulate the weather was much higher than the EEV of other things that could be done with the same resources. It would be difficult to justify a costly investigation of the “trying to manipulate the weather is a waste of time” hypothesis in this framework. Yet it seems that when people are valuing one action far above others, based on thin information, this is the time when skeptical inquiry is needed most. And more generally, it seems that challenging and investigating our most firmly held, “high-estimated-probability” beliefs – even when doing so has been costly – has been quite beneficial to society.
> - Related: giving based on EEV seems to create bad incentives. EEV doesn’t seem to allow rewarding charities for transparency or penalizing them for opacity: it simply recommends giving to the charity with the highest estimated expected value, regardless of how well-grounded the estimate is. Therefore, in a world in which most donors used EEV to give, charities would have every incentive to announce that they were focusing on the highest expected-value programs, without disclosing any details of their operations that might show they were achieving less value than theoretical estimates said they ought to be.
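
GiveWell's drowning-child objection can be made concrete with a toy calculation (my own sketch, not GiveWell's actual Bayesian model): pure EEV ranks options by point estimate alone, so a wild guess of "2 lives saved" beats a near-certain 1 life saved, while any rule that discounts shakier estimates can reverse that ranking.

```python
# Toy illustration (my sketch, not GiveWell's model) of why pure EEV
# favors the shakier option in their drowning-child example.
# Each option: (estimated lives saved, rough confidence in that estimate, 0..1)
certain_rescue = (1.0, 0.99)  # ruin the $1000 suit, save the child you see
wild_guess     = (2.0, 0.10)  # wild guess: $1000 to research saves 2 lives

def eev(option):
    """Pure 'explicit expected value': the point estimate, nothing else."""
    return option[0]

def discounted(option, skeptical_prior=0.0):
    """One crude way to penalize shaky estimates: shrink the estimate
    toward a skeptical prior in proportion to lack of confidence."""
    estimate, confidence = option
    return confidence * estimate + (1 - confidence) * skeptical_prior

print(eev(wild_guess) > eev(certain_rescue))              # True: EEV picks the guess
print(discounted(wild_guess) > discounted(certain_rescue))  # False: shrinkage reverses it
```

The shrinkage rule here is deliberately crude; GiveWell's point is only that *some* preference for better-grounded estimates must enter the decision, which EEV by definition lacks.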

For their full reasoning and objections, here's the article: *Why we can’t take expected value estimates literally (even when they’re unbiased)*.

Back to the posted article...

>The justification for this would be to fix a gap in the way the price system functions. Normally we make our consumption decisions entirely in terms of a consideration of how much we want something and how much we can afford, a matter of prudence only. As economists have analysed, such exercises in constrained maximisation are all we need do to enjoy a flourishing economy since by responding to prices we automatically take into account the social cost to others of resources being used for what we want rather than for something else (so long as some wise and non-self-interested government steps in to correct for externalities).

Ah, the rational choice theory of economics. The theory that assumes people always act rationally, are free of emotional influence, culturally homogeneous, identical in values, and locked in a state of conscious logical processing unaffected by unconscious impulses or natural biases. It's presented here as a fact underlying consumption, rather than as a gross simplification used to build economic models of questionable real-world value.

>Lots of people have nice-sounding ideas about what we should do or care about to make the world better. Unfortunately many of their proposals display a lack of quantitative thinking, which makes their proposals very hard to evaluate.

Yeah, this idea doesn't sound nice at all, and it also displays a lack of quantitative thinking. I will say, though, that it is very, very easy to evaluate.
