By causal direction (X causes Y) I mean that X appears before Y in the graph induced by the structural causal model, i.e. X is an ancestor of Y. The graph induced by a structural causal model has an edge from X to Y if there is a structural equation Y = F(U_i, X, …), i.e. X appears as an argument of the equation determining Y.
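To make this concrete, here is a minimal Python sketch (my own illustration, not code from the paper) that represents the induced graph of a hypothetical SCM by the observed parents appearing in each structural equation, and checks the ancestor relation:

```python
# Hypothetical SCM; exogenous U_i arguments are omitted since they add no
# edges between observed variables.
parents = {
    "X": [],          # X = F_X(U_1)
    "Y": ["X"],       # Y = F_Y(U_2, X)
    "W": ["X", "Y"],  # W = F_W(U_3, X, Y)
}

def is_ancestor(a, b):
    """True iff there is a directed path a -> ... -> b in the induced graph."""
    frontier, seen = [b], set()
    while frontier:
        node = frontier.pop()
        for p in parents[node]:
            if p == a:
                return True
            if p not in seen:
                seen.add(p)
                frontier.append(p)
    return False

print(is_ancestor("X", "W"))  # True: X -> W directly and via Y
print(is_ancestor("W", "X"))  # False: the graph is acyclic
```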
It refers to Example 9.3 and the associated lemmas.
This example is a general rephrasing of Example 1 of Finite Factored Sets in terms of random variables and structural causal models.
Example 9.3
We observe two nonconstant, real-valued random variables X and Y. We assume that they are part of an unobserved true underlying causal model (U, V, F, ℙ), where V = (Vi)i∈I, {1, 2} ⊆ I, X = V1 and Y = V2. We only observe the distribution of V. Let Z ≔ Y − X (mirroring the XOR construction of Example 1). Suppose that X ⊥⊥ Z. Furthermore, we assume that ℙ is in 'general position', meaning that this independence is not due to the specific choice of ℙ. Formally, X ⊥⊥_P Z for all causal models (U, V, F, P) s.t. P ∼ ℙ.
It says that if a structural causal model models the independence of X and Z structurally (i.e. it is not a coincidence of the precise numerical values of ℙ), then this structural causal model has X causing Y, i.e. in the graph induced by the causal model through its equations, X is always an ancestor of Y.
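A quick numerical sanity check of this claim (my own sketch, assuming Z ≔ Y − X as above; correlation is of course only a proxy for independence):

```python
# If the true model is X -> Y with additive independent noise, then
# Z = Y - X equals the noise term, so X and Z are independent; in the
# reversed model, X and Z are generically dependent.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u1, u2 = rng.normal(size=n), rng.normal(size=n)

# True model: X := U1, Y := X + U2 (X is an ancestor of Y).
x, y = u1, u1 + u2
print(np.corrcoef(x, y - x)[0, 1])    # ~0: consistent with X independent of Z

# Reversed model: Y := U1, X := Y + U2 (Y is an ancestor of X).
y2, x2 = u1, u1 + u2
print(np.corrcoef(x2, y2 - x2)[0, 1]) # ~-0.71: X independent of Z fails
```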
The linked paper introduces the key concepts of factored space models / finite factored sets and structural independence in a fully general setting, using families of random elements. The key contribution is a general definition of the history object, together with a theorem that the history fully characterizes the semantic implications of the assumption that a family of random elements is independent. This is analogous to how d-separation precisely characterizes which nodal variables are independent given which other nodal variables, in any probability distribution that fulfills the Markov property on the graph.
Abstract: Structural independence is the (conditional) independence that arises from the structure rather than the precise numerical values of a distribution. We develop this concept and relate it to d-separation and structural causal models.
Formally, let U=(Ui)i∈I be an independent family of random elements on a probability space (Ω,A,ℙ). Let X, Y, and Z be arbitrary σ(U)-measurable random elements. We characterize all independences X⊥⊥Y∣Z implied by the independence of U and call these independences structural. Formally, these are the independences which hold in all probability measures P that render U independent and are absolutely continuous with respect to ℙ, i.e., for all such P, it must hold that X ⊥⊥_P Y ∣ Z.
We introduce the history history(X∣Z):Ω→P(I), a combinatorial object that measures the dependence of X on Ui for each i∈I given Z. The independence of X and Y given Z is implied by the independence of U if and only if history(X∣Z)∩history(Y∣Z)=∅ almost surely with respect to ℙ.
Finally, we apply this d-separation-like criterion in structural causal models to discover a causal direction in a toy setting.
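To see the criterion at work in the smallest possible setting, here is a toy Python sketch (my own illustration, not code from the paper) of the unconditional case on a finite product space. It computes history(X) simply as the set of factor indices X actually depends on, ignoring conditioning and the almost-sure subtleties of the paper's definition, and shows why an independence that holds only under the uniform measure is not structural:

```python
from itertools import product

SIZES = (2, 2)  # two independent binary factors U_0, U_1

def history(f):
    """Indices i such that changing coordinate i (all else fixed) changes f."""
    deps = set()
    for omega in product(*(range(s) for s in SIZES)):
        for i, size in enumerate(SIZES):
            for v in range(size):
                changed = list(omega)
                changed[i] = v
                if f(tuple(changed)) != f(omega):
                    deps.add(i)
    return deps

X = lambda w: w[0]         # X = U_0
Y = lambda w: w[1]         # Y = U_1
Z = lambda w: w[0] ^ w[1]  # Z = U_0 XOR U_1

print(history(X), history(Y), history(Z))  # {0} {1} {0, 1}
# Disjoint histories certify structural independence:
print(history(X) & history(Y))  # set(): X ⊥⊥ Y holds for every product measure
print(history(X) & history(Z))  # {0}:   X ⊥⊥ Z is NOT structural ...

# ... even though X ⊥⊥ Z holds under the uniform measure. Reweighting the
# factors (keeping them independent, with a measure equivalent to uniform)
# breaks it, so it was a numerical coincidence, not structural.
def independent(p0, p1, f, g):
    """Check f ⊥⊥ g under the product measure with factor weights p0, p1."""
    joint, pf, pg = {}, {}, {}
    for w in product(range(2), repeat=2):
        p = p0[w[0]] * p1[w[1]]
        joint[(f(w), g(w))] = joint.get((f(w), g(w)), 0) + p
        pf[f(w)] = pf.get(f(w), 0) + p
        pg[g(w)] = pg.get(g(w), 0) + p
    return all(abs(joint.get((a, b), 0) - pf[a] * pg[b]) < 1e-12
               for a in pf for b in pg)

print(independent([0.5, 0.5], [0.5, 0.5], X, Z))  # True under uniform ℙ
print(independent([0.3, 0.7], [0.4, 0.6], X, Z))  # False under a biased P ∼ ℙ
```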