#
**owlthatissuperb**
OP
t1_iyxpix4 wrote

Reply to comment by **ward8620** in **Causal Explanations Considered Harmful: On the logical fallacy of causal projection** by **owlthatissuperb**

Thanks!

Yeah I agree with you. Typically, when you get into academic research, the domain experts fully appreciate how complicated the situation is, and know how to properly interpret causal claims.

> Econometrics, the economic sub-discipline of statistics, is almost chiefly concerned with understanding when we can say that statistical estimates can be interpreted as causality

IMO (and this is controversial), you can never infer causality from looking passively at data--data alone can't discern between causation and correlation. It can only lend support to a working theory (i.e. if you already have a proposed causal mechanism).

The only way to infer causality is to reach into a system and modify it. If you can turn the "cause" knob and consistently observe the effect, you can infer causality. But passively peering in and seeing "when A changes, B tends to change too" doesn't get you there (even if e.g. there's a time delay).
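Here's a toy illustration of the knob-turning point (all variable names and coefficients are invented for the sketch): a hidden confounder makes A and B correlate in passive observation, but setting A by intervention reveals that A has no effect on B at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A hidden confounder drives both A and B, so they correlate in
# passive observation even though A has no causal effect on B.
confounder = rng.normal(size=n)
A_obs = confounder + rng.normal(size=n)
B_obs = confounder + rng.normal(size=n)
observed_corr = np.corrcoef(A_obs, B_obs)[0, 1]      # clearly nonzero

# "Turning the knob": set A by intervention, severing its link to
# the confounder. B's mechanism is unchanged, and it stops tracking A.
A_do = rng.normal(size=n)                            # intervened-on A
B_do = confounder + rng.normal(size=n)
interventional_corr = np.corrcoef(A_do, B_do)[0, 1]  # ~0
```

The observational correlation is strong while the interventional one vanishes, which is exactly the gap between "A and B move together" and "A causes B".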

But I do think others would disagree with me.

#
**ward8620**
t1_iyxsjny wrote

I completely agree that you can’t infer causality by “passively” looking at data, in the sense that it sounds like what you’re describing is naively looking at a scatter plot or running a regression of Y on X.

The key insight of causal econometrics is exactly the point you’re making, that in order to understand causality we have to somehow approximate the environment that is present in a lab, where we can randomly assign individuals to treatment and control groups, ensuring that people in both groups are on average the same and thus the only difference in expectation between these groups is the treatment of interest. Of course, we can’t do this with observational data, so we look for natural experiments, or environments where random distribution of treatment may occur among some population by chance.

There are a lot of specific methods, but the essence of them all is that, as long as there is some feature that is as-good-as randomly distributed between people, and that feature is correlated with the treatment we care about, we can use variations in that random factor to estimate the causal effect of changing treatment for those individuals who shift their behavior because of the random variable. An early example in economics is using variation in military participation driven by the Vietnam draft lottery to estimate the causal effect of military participation on lifetime earnings. So in that way, economists really do try to estimate causality by looking for situations in which we might think the “cause” knob is being turned due to historical or institutional quirks.
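The draft-lottery logic can be sketched with simulated data (the numbers and coefficients here are invented for illustration): an as-good-as-random instrument Z nudges treatment D, a hidden confounder U biases the naive comparison, and the Wald/IV ratio recovers the true effect anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_effect = 2.0

u = rng.normal(size=n)              # unobserved confounder
z = rng.integers(0, 2, size=n)      # "lottery draw": as-good-as-random instrument
# Treatment depends on both the instrument and the confounder
d = (0.5 * z + 0.8 * u + rng.normal(size=n) > 0.5).astype(float)
y = true_effect * d + 1.5 * u + rng.normal(size=n)

naive = np.cov(d, y)[0, 1] / np.var(d)        # biased upward by u
iv = np.cov(z, y)[0, 1] / np.cov(z, d)[0, 1]  # Wald/IV estimate
```

The naive regression-style estimate absorbs the confounder's influence, while the IV ratio uses only the variation in treatment that the lottery induced, so it lands near the true effect.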

I’m just skimming the surface, but if you’re interested you should check out Mostly Harmless Econometrics by Angrist and Pischke (the former of whom won the Nobel Prize last year for these findings) or Causal Inference: The Mixtape. Our capability of being very confident about causality in data is definitely limited to when we can find these “natural experiments,” but researchers have been able to find quite a lot and it really forms the basis of modern empirical economics research.

#
**DarkSkyKnight**
t1_iyz6pft wrote

Put more simply, certain mathematical assumptions required for causality cannot be justified from the data alone; they have to be argued for.

#
**My3rstAccount**
t1_iyy2hk9 wrote

Oh my god, people are experiments, and it's in our money and religions.

#
**YoungXanto**
t1_iz07hoe wrote

>you can never infer causality from looking passively at data

In this view, causal inference is restricted to a single observation. Extrapolating the result to any other similar experimental set-up (even an identical one) is just that: extrapolation. To quote Hume,

>I say, then, that, even after we have experience of the operation of cause and effect, our conclusions from that experience are not founded on any reasoning, or any process of the understanding

There is an epistemological limit to the concept of causation. In statistical inference, which rests on probability theory, a good professor will routinely use this limit to smack undergrads upside the head, whether the topic is regression or p-values.

We assume distributions for the underlying samples, and invoke the central limit theorem, in order to do statistics that support causal inference. We can attempt to control for type 1 error via our set-up, but even when our assumptions are not violated we can never claim a result with 100% certainty.

Carefully controlled experimentation is better than using some observational data set, but it suffers two drawbacks: it is expensive to obtain, and its uses beyond the experiment are quite limited, necessarily requiring extrapolation. So I argue pragmatically that we should use latent data and the statistical tools at our disposal to understand causation (to the extent it actually exists) with the appropriate limiting caveats.
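The type 1 error point can be shown with a quick simulation (sample sizes and trial counts are arbitrary): even when the null hypothesis is true and no assumptions are violated, roughly 5% of tests at the 0.05 level will still "find" an effect.

```python
import numpy as np

rng = np.random.default_rng(42)
trials, n = 10_000, 500

# Both groups are drawn from the SAME distribution: the null is true,
# so every "significant" result below is a type 1 error.
a = rng.normal(size=(trials, n))
b = rng.normal(size=(trials, n))

# CLT-based two-sample z statistic; known unit variance in each group
z = (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(2 / n)
false_positive_rate = np.mean(np.abs(z) > 1.96)  # ~0.05 by construction
```

Controlling the error rate is the best we can do; no single significant result ever carries certainty.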

#
**owlthatissuperb**
OP
t1_iz0gzc7 wrote

I agree with you. If we can start with a reasonable hypothesis, looking back at historical data is a valuable way to gather evidence for that hypothesis.

#
**passingconcierge**
t1_iyzm86f wrote

> The only way to infer causality is to reach into a system and modify it.

This seems, to me, to be an unfoundedly strong claim about inference, one that entails that *causality* must always be empirical. Which, essentially, reduces econometrics, as it exists, to entirely *correlative* knowledge, *because it is composed entirely of historical data*.

What if there is no "cause knob" but, also, the set of data, C, at time 0 always results in the specific set of data, E, at time x>0, but in a random set of data, Rn, at any time x where n<>x? There is nothing to modify, since modifying C changes the set, and so there is no transition C->E. Which, essentially, means you have frustrated, prevented, blocked - *essentially interrupted* - the causal connection between C and E. This might not seem clearly expressed, but it does require that causality be considered holistically: you have to take all the nodes and arcs of the graph into account.

You might say that this is simply a description of correlation and always was, and your claim might seem convincing. But how do you *exclude* causality? Even at a vanishingly small probability, the statement "C causes E" is a fact, and a legitimate claim to make, even if you must qualify it by saying *but only once in a billion*. You might say *one in a billion means it will never happen*. Which is not a great claim. The probability of winning the lottery is, say, one in a billion - *or one in tens of billions* - yet there has been more than one lottery winner since lotteries began. The point being that, just because something has a low probability of happening does not *forbid* it from happening.

> "when A changes, B tends to change too" doesn't get you there (even if e.g. there's a time delay).

So, the idea here is not proven by your claims. You can infer causality by passively looking at data. Econometrics does it all the time. The deeper problem is that we live in a Universe that is deeply causal. Which suggests that starting from an assumption that there is "no causality involved" is a flawed premise. A flawed premise that is easily rejected, because the data was *created by a person, not a random process*, and, therefore, you need good reason to reject the notion that the data "has" causality locked into it.

The idea of causality as being purely mechanistic, which is what it seems you are supposing here, is not the only way you can reason about causality.

#
**owlthatissuperb**
OP
t1_iz0gk1h wrote

Yeah I mostly agree with you. Here's the distinction I'll make:

If you have a starting hypothesis (e.g. an increase in the money supply will cause inflation), you can very much go back and look at historical data to find support for your hypothesis.

But if you have a completely unlabeled dataset (just a bunch of variables labeled x, y, z, ...), and can see how those variables change over time, there's no way to look at the data and say with any confidence that "x has a causal impact on z".

#
**passingconcierge**
t1_iz22jo3 wrote

> If you have a starting hypothesis (e.g. an increase in the money supply will cause inflation), you can very much go back and look at historical data to find support for your hypothesis.

You can express "increase in money supply" and "inflation" as "just a bunch of variable labels". So the two scenarios you sketch are identical in every sense apart from the first having named variables and the second having anonymous variables. Which gives the appearance that you are attributing causality on the basis of some pre-existing theory about "money supply" and "inflation". Which runs the risk of creating a circular definition. In essence, you are ignoring the insights of Hume and the response of Kant regarding the insights of Hume.

I am happy to agree that if we have two columns of numbers

```
1 1
2 4
3 9
: :
99 9,801
```

we could agree that the *relationship* between the first column and the second is that the second is the square of the first. That establishes that there is a mathematical relationship but that mathematical relationship does not guarantee any kind of causality. Although, if you take the position of Tegmark - the Mathematical Universe Hypothesis - the existence of a mathematical relationship guarantees reality but not necessarily causality. Which leaves you in the same situation: data sets, labelled or not, do not reveal causality. For that you need a theory of knowledge that gives warrant to the knowledge that x=9 therefore y=81 is a causal relationship and simply labelling the numbers with "money supply equals nine therefore inflation equals eighty one" does not establish that.

Which largely points to there being no "causal knobs" inside data sets. There may be something about a data set that has some kind of "establishes causality" about it, but it is not simply doing mathematical manipulations or matching variable labelling. There is something rhetorical going on that you really are not making clear.

#
**owlthatissuperb**
OP
t1_iz2pjrl wrote

When I'm talking about labeled vs unlabeled, what I really mean is that we have some intuition for how the labeled dataset *might* behave. E.g. "an increase in money supply causes an increase in inflation" is a better causal hypothesis than "an increase in the president's body temperature causes an increase in inflation". We can make that judgement having *never* seen data, based on our understanding of the system.

Having made that hypothesis, we can look back to see if the data support it. The combination of a reasonable causal mechanism, plus correlated data, is typically seen as evidence of causation.

If you don't have any intuition for how the system works, you don't have the same benefit. All you can see are the correlations.

E.g. in your x->x^2 example, if all you had were a list of Xs and Ys, you couldn't tell if the operation was y=x^2 or x=sqrt(y). Without any knowledge of what the Xs and Ys refer to, you're stuck.
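To make that concrete (a minimal numeric sketch): both "directions" fit the same data exactly, so nothing in the numbers themselves prefers one over the other.

```python
import numpy as np

x = np.arange(1.0, 100.0)
y = x ** 2

# Read the data "forwards" (y determined by x) or "backwards"
# (x determined by y): both descriptions fit with zero residual.
forward_error = np.max(np.abs(y - x ** 2))
backward_error = np.max(np.abs(x - np.sqrt(y)))
```

Since both residuals are zero, the data cannot arbitrate between the two readings; only outside knowledge of what x and y are could do that.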

#
**passingconcierge**
t1_iz480o8 wrote

> When I'm talking about labeled vs unlabeled, what I really mean is that we have some intuition for how the labeled dataset might behave. E.g. "an increase in money supply causes an increase in inflation" is a better causal hypothesis than "an increase in the president's body temperature causes an increase in inflation". We can make that judgement having never seen data, based on our understanding of the system.

What you have here is a circular argument. You are arguing that we can label variables with labels that are theory driven, and so we can infer causality between those labels. You have already theorised causality without the data. So the data is not the source of explanation; it is merely a means to, rhetorically, assert that causality is the explanation. You have a causal explanation in mind, you label the data informed by that causal explanation, you carry out a mathematical operation on the labelled numbers, and so, *because you have labelled them*, you infer a causal explanation.

So you are correct: you can make a *judgement* without seeing the data. The data adds nothing to your understanding of the system *because* you have started from a theory, a model, and conducted your activities with the causal relationship in mind. The data does not "contain causal knobs".

> Having made that hypothesis, we can look back to see if the data support it. The combination of a reasonable causal mechanism, plus correlated data, is typically seen as evidence of causation.

I would argue that what you are doing here is establishing rules for a rhetoric. Let us assume that we both accept mathematics is a kind of unbiased source of knowledge. This is a broad and possibly unwarranted assumption that would need refining, but accept it, broadly, for now.

You have a set of data which you *recognise* as x and y values. You have no theoretical labels to attach to them. But you list them, and you are lazy. So you use a spreadsheet to tell you that the y column can be derived from the x column by

```
f(x) = x^2 with R^2 = 1
```

So you are happy. The coefficient of determination ( R^2 ) tells you that the data "100% supports" the y=x^2 hypothesis. You are happy until someone comes along and says, have you considered

```
f(x) = x * x
f(x) = sqrt(g(x))^2, g(x) = x * x
f(x) = (x * x * x) / x
f(x) = (x * x * x * x) / (x * x)
f(x) = (x^n) / (x^(n-2)) for all n > 2
```

You object that this is all just messing about with variations on squaring things. I agree. But I point out that all I am doing is *showing* that there is more than one way to express a *relationship of x to y* but, generally, avoiding the use of y as a label.

So when you have f(x) = sqrt(g(x))^2, g(x) = x * x, it is an awful circumlocution, but it demonstrates that you can have a whole range of things "happening" to avoid using y. Which raises an interesting point about your notions of labelling data.

For a moment, pretend x can be relabelled "money supply" and y can be relabelled "inflation". We have the data set, as before, {(1,1),(2,4),(3,9), ..., ( n,n^2 )}, and we are supposing that the relationship is f(x) = sqrt(g(x))^2, g(x) = x * x, or that it is f(x) = x * x. First things first:

```
f(x) is clearly to be relabelled as inflation.
g(x) is also inflation (see your point^1 below)
sqrt(g(x)) is money supply
```

Your point is that labelling clarifies *causality*. Now, in mathematics it is permissible to rearrange a formula. But you are inferring *causality*, and the only symbol common to all of the *formulations* is the equals symbol. Which you might be holding in place of "causes". Which does correspond to your notion of Directed Acyclic Graphs, but then places a huge constraint onto what you can actually say with labels.

So, because we have two formulations that you definitely agree on - the ones in the footnote - you can, rhetorically, say that we cannot tell if the causal case is

```
y=x^2 .................... y is caused by x^2
x=sqrt(y) ................ x is caused by sqrt(y)
```

which is then translated into

```
inflation is caused by squaring the money supply
the money supply is caused by square rooting inflation
```

What this highlights is that you now actually need, back in the labels, some meaningful understanding of what "squaring the money supply" is and what "square rooting inflation" is. Because, to be causally coherent, these cannot just be vacuous utterances. This example is incredibly simple.

Just imagine what would happen if your chosen econometric methodology dictates the use of linear regression. You then have a philosophical need to explain x and y in terms of a lot of mathematical structuring around squares, roots, differences, and so on.

Which might boil down to me saying, "I do not think that the equals sign is a synonym for causality". But it might also be saying that "data adds nothing to causal explanation in economics".

Quite literally, you have shown two possible formulae for a simple relationship. Which suggests, at best, a 1 in 2 chance (50% probability, p=0.5) that you randomly select the "correct" relationship - where, here, "correct" requires that the relationship expresses something causal. This becomes worse when you realise that it is possible to express x^2 in an infinite variety of ways (rendering p=0, effectively). This means that you are never really talking about causation.

Which leaves you in the position that econometrics is a good source of rhetorical support for causation but only really provides evidence of correlation: that there is, indeed, a pattern in the data. That pattern in the data does not, in any way, vouchsafe your theoretical causal explanation with certainty. Even if you label it.

^1 E.g. in your x->x^2 example, if all you had were a list of Xs and Ys, you couldn't tell if the operation was y=x^2 or x=sqrt(y). Without any knowledge of what the Xs and Ys refer to, you're stuck.

#
**bildramer**
t1_iyzm2m7 wrote

Obviously you can infer causation from raw "passive" data. What else could our brains possibly be doing when they learn? We don't affect most things.

One way to imagine how it's possible is to contrast the DAGs A->C, A->D, B->C, B->D, C->E, C->F, D->E, D->F and the one with arrows flipped. Then think about conditional dependence: P(C|D,A,B) = P(C|A,B) vs. P(C|D,E,F) != P(C|E,F). Knowing everything about effects can increase mutual information between C and D; knowing everything about causes can't. That's how you can distinguish between this DAG and the backwards one using only correlations. No need to intervene anywhere.
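For anyone who wants to play with this, here is one way to check the asymmetry numerically: a linear-Gaussian toy version of the DAG, with arbitrary coefficients chosen so the two conditional dependencies don't accidentally cancel.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Linear-Gaussian version of the DAG A->C, A->D, B->C, B->D,
# C->E, C->F, D->E, D->F (coefficients are illustrative)
A = rng.normal(size=n)
B = rng.normal(size=n)
C = A + B + rng.normal(size=n)
D = A - B + rng.normal(size=n)
E = C + 2 * D + rng.normal(size=n)
F = C + D + rng.normal(size=n)

def partial_corr(x, y, conditioners):
    """Correlation of x and y after regressing out the conditioning set."""
    Z = np.column_stack(conditioners)
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

given_parents = partial_corr(C, D, [A, B])   # ~0: C and D independent given causes
given_children = partial_corr(C, D, [E, F])  # nonzero: conditioning on colliders
```

Conditioning on the common causes (A, B) screens C off from D, while conditioning on the common effects (E, F) induces a dependence; in the arrow-flipped DAG the pattern reverses, so the observational distribution alone tells the two apart.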

#
**owlthatissuperb**
OP
t1_iz0fsy9 wrote

I haven't worked through your technical example yet, but I plan to. Thanks for that!

> What else could our brains possibly be doing when they learn?

I don't think this argument says much--our brains use fuzzy heuristics all the time, and people were really bad at understanding causality (see things like rain dances and voodoo) before experimental science came along (which manipulates the world to see how it reacts).
