diff --git a/-NFLT4oBgHgl3EQfCi71/content/tmp_files/2301.11976v1.pdf.txt b/-NFLT4oBgHgl3EQfCi71/content/tmp_files/2301.11976v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..df1ed70b4ff5ace8609a89bc945200f01ed76f40 --- /dev/null +++ b/-NFLT4oBgHgl3EQfCi71/content/tmp_files/2301.11976v1.pdf.txt @@ -0,0 +1,778 @@ +arXiv:2301.11976v1 [stat.ME] 27 Jan 2023 +A. Philip Dawid* and Stephen Senn +Personalised Decision-Making without +Counterfactuals +Keywords: decision theory, counterfactual, potential response, intention to treat +This article is a response to recent proposals by Pearl and others for a new approach to personalised +treatment decisions, in contrast to the traditional one based on statistical decision theory. We argue that +this approach is dangerously misguided and should not be used in practice. +1 Introduction +In recent works [1–4], Judea Pearl and collaborators have set out an approach to personalised treatment +that is radically different from that based on traditional statistical decision theory. It is based on the +conception that we should care, not only about the outcome that actually materialises, but also about +the (necessarily unobserved, counterfactual) outcome that, it is supposed, would have occurred under the +treatment that was not applied. A similar conception forms the basis of other recent work [5–7]. +We consider this approach to be dangerously misguided, and believe that real harm will ensue if it +is applied in practice. We argue our case from a number of different viewpoints, and explain why this +approach should not be regarded as a viable alternative to standard statistical decision theory. +1.1 Basic set-up +The context is that of a “target” patient suffering from a disease, for which a treatment is available. The +treatment is far from perfect, so that not all treated patients recover, while some untreated patients may +recover anyway. There is information available on recovery rates for treated and untreated patients; both +these rates may depend on measured individual patient characteristics. The basic problem is to decide, on +the basis of the target patient’s own characteristics, whether or not to treat him. A variation is how to +prioritise patients for treatment when there are limited doses available. +We introduce notation as follows: +Treatment decision Binary decision variable X, coded 1 for treat, 0 for don’t treat +Response Binary stochastic variable Y , coded 1 for recovery, 0 for no recovery +Individual background characteristics Stochastic variable L, potentially multivariate, unaffected by +the treatment decision +We suppose that there are available substantial data on (L, X, Y ), from either experimental or uncon- +founded observational studies on patients we can regard as similar to the target1, from which we can +estimate, essentially perfectly2, the distribution of Y , conditional on L, under either treatment interven- +1 See § 7 for further discussion of this point. +2 In a Bayesian setting it is straightforward to relax this condition, using predictive distributions based on finite +samples. However the main issues are most clearly expressed in the case of essentially known probabilities. +*Corresponding Author: A. Philip Dawid: University of Cambridge: apd@statslab.cam.ac.uk +Stephen Senn: Statistical Consultant, Edinburgh: stephen@senns.uk + +2 +A. Philip Dawid and Stephen Senn, Personalised Decision-Making +tion. 
That is, we know the probability Pr(Y = 1 | L = l, X ← x), for any value l of L and x = 0 or 1. (Here X ← x denotes an external intervention to set X to x.)

1.2 Outline

In § 2 we recall the straightforward decision-theoretic analysis of this problem. Then in § 3 we briefly outline the approach proposed by Pearl et al., followed by some critical comments in § 4. Section 5 describes this approach in more detail, following a logical path that relates it to other problems, in particular the use of general covariate information to strengthen conclusions, and the specific case of an "intention to treat" covariate, whose properties can be identified by combining experimental and observational data. In § 6 we give critical consideration to some examples from Mueller and Pearl [2]. Section 7 notes some important assumptions that are implicitly made in the analysis, and points out that they are unlikely to hold in practice. Section 8 summarises our analysis and conclusions.

2 Decision-theoretic approach

We first describe the standard decision-theoretic (DT) approach to treatment selection.

2.1 Single patient decision problem

Consider first the case of the single target patient. We have to decide whether to offer this patient treatment, or not.

Having access only to the target patient's value L = l, our objective is to choose the treatment that will maximise the probability of recovery. We should thus treat this patient if p := Pr(Y = 1 | L = l, X ← 1) > q := Pr(Y = 1 | L = l, X ← 0). That is, we should treat just when CATE(l) > 0, where CATE(l) = p − q is the "conditional average treatment effect".

If the outcome Y is not necessarily binary, for example a survival time, we need to associate a utility U(y) with the outcome y, and treat the patient just when E{U(Y) | L = l, X ← 1} > E{U(Y) | L = l, X ← 0}. Applied to the binary case this reduces to the prescription above (so long as U(1) > U(0)).

In [8], a companion paper to this one which treats utilities explicitly, the above is termed the interventionist utility and approach, and contrasted with the counterfactual utility and approach implicit in [2] and explicit in [6, 7]. Here we restrict to the binary case and do not use utilities.

Faced with a large collection G of patients to treat, and unlimited supplies of the treatment, managing each patient (each with their own value l of L) according to the above rule will maximise the number of recoveries. That is, any other (deterministic or randomised) decision rule that uses only the information on L would lead to fewer recoveries.

Example 1. Consider a case where

Pr(Y = 1 | L = l, X ← 1) = 0.49
Pr(Y = 1 | L = l, X ← 0) = 0.21.

The conditional average treatment effect is CATE(l) = 0.49 − 0.21 = 0.28. Since CATE(l) > 0, the optimal action is to treat this patient. If we have a large collection of similar patients, with the same value L = l, they should all be treated—in which case the overall proportion of recovered patients will be 49%. This is the best outcome that can be achieved by any treatment strategy.

2.1.1 Missing information

It may happen that, while we have full information on (L, X, Y) for the study individuals, the value l of L for the target patient is not available. In that case we can not condition on L = l, and we have no option but to base the management of the patient on the unconditional probabilities Pr(Y = 1 | X ← x) (x = 1, 0).
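For concreteness, the single-patient rule of § 2.1, applied to the figures of Example 1, amounts to no more than the following sketch (Python; the probabilities are treated as known, and the function names are ours, purely for illustration). The same function applies unchanged in the missing-information case just described, with the unconditional probabilities in place of the conditional ones.

```python
# Illustrative sketch of the decision-theoretic rule of Section 2.1.
# The interventional recovery probabilities are assumed known, as in
# Example 1; the function names are ours and purely expository.

def cate(p_treat: float, p_control: float) -> float:
    """CATE(l) = Pr(Y=1 | L=l, X<-1) - Pr(Y=1 | L=l, X<-0)."""
    return p_treat - p_control

def dt_decision(p_treat: float, p_control: float) -> str:
    """Treat exactly when CATE(l) > 0."""
    return "treat" if cate(p_treat, p_control) > 0 else "do not treat"

# Example 1: Pr(Y=1 | L=l, X<-1) = 0.49, Pr(Y=1 | L=l, X<-0) = 0.21
print(cate(0.49, 0.21))         # 0.28 (up to floating-point rounding)
print(dt_decision(0.49, 0.21))  # treat
```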
+Nothing is gained by, for example, trying to impute the unknown value of L. If this is not obvious (as it +should be), suppose we tried to do so. The recovery probabilities, conditional on a hypothesised value l for +L, are Pr(Y = 1 | L = l, X ← x) (x = 1, 0). But as we do not know l, we need to take the expectation +of Pr(Y = 1 | L, X ← x) over the distribution of L, when setting X ← x (which is the known marginal +distribution of L, unaffected by the intervention). And this is just the unconditional recovery probability +Pr(Y = 1 | X ← x). 3 +2.2 Unit selection +Again consider a large collection G of patients i = 1, . . . , N, with individual recovery probabilities pi = +Pr(Yi = 1 | Li = li, Xi ← 1), qi = Pr(Yi = 1 | Li = li, Xi ← 0). If we treat just those in a subset S, the +expected number of recoveries will be � +i∈S pi + � +i∈G\S qi = � +i∈G qi + � +i∈S CATEi. Consequently, to +maximise this expected number, we should choose S, subject to any constraints, to maximise � +i∈S CATEi. +If we have limited treatments available, we should thus prioritise individuals in decreasing order of their +CATE (while of course not treating any one for whom CATE < 0.) Again, any other policy (subject to the +same constraints) will have a smaller number of recoveries. +2.3 Potential outcomes? +The “potential outcome” approach to causal inference [9] conceives of the existence, even prior to treatment +choice, of the pair of variables Y = (Y (1), Y (0)), where Y (x) denotes the value that, it is supposed, Y will +take if intervention X ← x is applied. The pretreatment variables (Y (1), Y (0), L) are supposed to have a +joint distribution, unaffected by treatment. With this interpretation, we have +Pr(Y = y, L = l | X ← x) = Pr(Y (x) = y, L = l), +(1) +and p = E{Y (1) | L = l}, q = E{Y (0) | L = l}, +In this approach, inference is ideally desired for the “individual treatment effect”, ITE := Y (1) − Y (0), +which can take values +1 (treatment benefits the patient), −1 (treatment harms the patient) or 0 (treatment +has no effect). Then CATE = E(ITE | L = l). However, typically ITE is unobservable, since it is impossible +simultaneously both to treat and not to treat the same patient. In particular, no information can be gained +about the dependence between Y (1) and Y (0), nor about the distribution (marginal, or conditional on L) +of ITE. All that can be inferred is the (conditional) expectation, as above, of ITE, depending as this does +only on the individual distributions of Y (1) and Y (0), which can be identified from experimental data. +In certain very special and atypical cases, essentially those where we have a fully deterministic and +completely understood mechanistic system, it may be that the background knowledge L is detailed enough +to support perfect prediction of the eventual response, under either intervention. Then we will know, in +advance of treatment choice, both potential outcome variables. In this case p = Y (1), q = Y (0), and +CATE = ITE. Clearly we should treat as many of those who will (we know for sure) benefit from the +treatment as we can. +3 With finite data, on taking account of known structure in the interventional distributions of (L, Y ) it may be possible +to improve the estimation of Pr(Y = 1 | X ← x). But this still remains what we need to focus on. + +4 +A. 
Philip Dawid and Stephen Senn, Personalised Decision-Making +However, in typical cases perfect prediction is impossible, and then it is arguable whether the potential +responses even have any meaningful existence. In any case, there is nothing to be gained by trying to impute +potential responses: as in § 2.1.1 above (taking L = Y), we should again simply focus on CATE = p − q. +In summary, consideration of potential responses (even if regarded as meaningful) does not add any +value to the decision-theoretic approach. +3 The approach of Mueller and Pearl [2] +In contrast to the above decision-theoretic approach, Mueller amd Pearl [2] (henceforth MP) opt to take +potential outcomes seriously, and focus attention on ITE = Y (1)−Y (0). They argue that we should ideally +aim to treat those patients having ITE = 1, for whom the treatment made a difference: they would not +have recovered without it. It would be wasteful to treat a patient with ITE = 0, for whome the treatment +made no difference, and positively harmful to treat a patient with ITE = −1, who would have recovered if +untreated, but not if treated. +However, this ideal is unattainable, as we will not know a patient’s ITE before treatment. Concern is +therefore transferred to the “probability of benefit”, PB = Pr(Y (1) = 1, Y (0) = 0) = Pr(ITE = 1), and +the “probability of harm”, PH = Pr(Y (1) = 0, Y (0) = 1) = Pr(ITE = −1), which are now regarded as the +criteria by which to assess any treatment strategy. +But not only can we not know a patient’s ITE before the treatment decision is made, we can not even +know it later, when the outcome Y is observed. For if we treat the patient we will observe Y = Y (1), +but can not then observe the counterfactual outcome Y (0) relevant when we don’t treat; similarly, for an +untreated patient we can observe Y (0), but not Y (1). So ITE is always unobservable. This means that, +even with extensive data on other patients, it will not be possible fully to identify PB and PH. Such data +can, however, be used to set interval bounds on these quantities. MP [2] further show how combining +experimental and observational data can narrow these bounds. In certain very special cases the bounds +narrow to a single point, leading to full identification of PB and PH. +4 Comments on the approach +Our comments on the MP programme are arranged along several dimensions. +4.1 Philosophy +Potential responses such as Y (0) and Y (1), first introduced by Neyman [10], have been considered as fun- +damental to the conduct of causal inference ever since reintroduced by Rubin [9]. However this conception +was challenged by Dawid [11, 12], who pointed out that, so far from being fundamental, they are entirely +unnecessary, and that a fully satisfactory theory can be based on standard decision-theoretic elements. +Indeed, there are serious philosophical objections to regarding potential responses as having real existence. +Only if we take a fully Laplacean view of the universe, in which the future of the universe is entirely +determined by its present state and the laws of Physics, does this make any sense at all—and even then, it +is difficult to incorporate the whims of an unconstrained external agent who decides whether or not to give +treatment, or to account for the effect of external conditions arising after treatment. Even under Laplacean +determinism, our ignorance of the information needed to predict the future means that we are unable to +make use of it. 
Whether or not we believe in a deep-down deterministic universe, our predictions of the +future can only be based on the limited information we do have at our disposal, and must necessarily be + +A. Philip Dawid and Stephen Senn, Personalised Decision-Making +5 +probabilistic.4 Imagining what we could know or do, if only we had more information than we actually do +have, is just pointless. +4.2 Applicability +Another important dimension of criticism is that the strong conditions needed for application of the MP +theory will almost never obtain in practice. See § 7 below for details. +4.3 Helpfulness +The output of an MP analysis will, at very best, be point estimates of the probabilities of benefit and of +harm—in most cases, we won’t even get these, but can only bound these quantities within an interval. But +even when we have these quantities, it is far from clear how they help to inform treatment decisions. +4.4 Ethics +Our final criticism is the simplest, but most incisive. The treatment decisions made using the DT approach +are guaranteed to be better than those made by any other decision rule, in the sense that they will maximise +the number of recoveries in the population. So whenever the MP approach leads to different decisions, it +will produce a decrease in the number of recoveries. We find it hard to construe this as ethical. +5 Analysis +Here we provide a deconstruction of the analysis of MP [2]—which should not, however, be taken as +agreement with their arguments and interpretations. There are a number of crucial assumptions required, +but to avoid cluttering the argument we leave these implicit, postponing specification and discussion of +them to § 7. +We develop the story-line in a number of stages. +In § 5.1 we consider the case where we have access to experimental data on treatment X and response +Y , and show how this can be used to bound the probabilities of benefit and of harm. We also discuss the +special circumstances in which these interval bounds shrink to a point. +In § 5.2 we further suppose that we can measure additional covariate information L on individuals. If +we have these values in the experimental data, this additional information can lead to a narrowing of the +bounds for PB and PH for the target case, even when L for that case is unobserved. +Section 5.3 introduces a particular, potentially useful, covariate, “intention to treat”, X∗—the treat- +ment that a patient (or their doctor) would like to choose, if unconstrained. This may well be informative +about their state of health, and thus their outcome. In some experiments it may be possible to obtain +information about X∗, and this can then be used as L in § 5.2. However it will often not be possible to +observe X∗ in the experiment. Section 7.2 considers how this problem can be overcome by the incorpora- +tion of observational data, if we can assume that, in such data, the desired treatment was the one actually +applied, so that X∗ = X becomes observable. The combination of experimental and observational data +allows us to identify the distribution of X∗ (together with the other variables), and so once again allows +us to apply the theory of § 5.2 to obtain improved bounds for PB and PH, which are detailed in § 5.5. +4 See Dawid [13] for an approach to understanding non-extreme probabilities based on imperfect information about a +deterministic world. + +6 +A. 
5.1 Simplest case

We start by presenting the basis of the approach in the simplest case, where the data are experimental, and there is no additional covariate information. We thus have access to the interventional response probabilities Pr(Y = y | X ← x) = Pr(Y(x) = y), (x, y = 0, 1). What can be inferred, from these, about the probabilities of benefit and of harm?

As described by Dawid and Musio [14], it is helpful to express the interventional probabilities in terms of parameters τ and ρ, where

τ := Pr(Y = 1 | X ← 1) − Pr(Y = 1 | X ← 0)    (2)
ρ := Pr(Y = 1 | X ← 1) − Pr(Y = 0 | X ← 0).   (3)

Then τ is the average treatment effect, ATE, of X on Y, while ρ = Pr(Y = 1 | X ← 1) + Pr(Y = 1 | X ← 0) − 1 is a measure of how common the outcome is.

The transition matrix (Pr(Y = y | X ← x)) from X to Y is

P = P(τ, ρ) := [ (1 + τ + ρ)/2   (1 − τ − ρ)/2 ]
               [ (1 − τ + ρ)/2   (1 + τ − ρ)/2 ],    (4)

where the row and column labels are implicitly 1 and 0 in that order. The necessary and sufficient condition for all the transition probabilities to be non-negative is

|τ| + |ρ| ≤ 1.    (5)

We have equality in (5) only in the degenerate case that one of the entries of P is 0.

We can express the joint distribution for Y = (Y(1), Y(0)) as in Table 1.

Table 1. Joint probability distribution of (Y(1), Y(0))

                 Y(0) = 1          Y(0) = 0
Y(1) = 1     (1 + ρ − ξ)/2      (ξ + τ)/2         (1 + τ + ρ)/2
Y(1) = 0     (ξ − τ)/2          (1 − ρ − ξ)/2     (1 − τ − ρ)/2
             (1 − τ + ρ)/2      (1 + τ − ρ)/2     1

The margins are determined by (1) (with L absent) and (4); but the internal entries are indeterminate, having one degree of freedom crystallised in the unspecified "slack variable" ξ, which is not identified by the experimental data. The only constraint on ξ is the logical one that all internal entries of Table 1 be non-negative. This holds if and only if

|τ| ≤ ξ ≤ 1 − |ρ|.    (6)

This interval information is all that can be concluded about the joint distribution for Y when we have data on the behaviour of Y under intervention on X, and no additional information.

Remark 1. The interval (6) shrinks to a point, so that the joint distribution of Y is fully determined by the experimental data, if and only if we have equality in (5), i.e., just when P is degenerate, so that, for some x, y = 0, 1, Pr(Y = y | X ← x) = 0. That is to say, for at least one of the interventions, the resulting outcome Y can be predicted with certainty—a most unusual state of affairs. In this case Pr(Y(x) = y) = 0, so that both joint events (Y(x) = y, Y(x̄) = 0) and (Y(x) = y, Y(x̄) = 1) (where x̄ = 1 − x) have probability 0.

5.1.1 Benefit and harm

The probability of benefit PB is the upper right entry of Table 1, PB = Pr(Y(1) = 1, Y(0) = 0) = (ξ + τ)/2, which by (6) is bounded between PB− := (|τ| + τ)/2 = max{τ, 0} and PB+ := (1 − |ρ| + τ)/2 = min{Pr(Y = 1 | X ← 1), Pr(Y = 0 | X ← 0)}. The probability of harm is the lower left entry of Table 1, PH = Pr(Y(1) = 0, Y(0) = 1) = (ξ − τ)/2 = PB − τ.

For the case of Example 1, we have τ = 0.28, ρ = −0.3. Without any further information, we can only infer 0.28 ≤ PB ≤ 0.49, and correspondingly 0 ≤ PH ≤ 0.21.
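These bounds involve nothing beyond the two interventional recovery probabilities, so they are simple to compute. The following sketch (Python, purely illustrative; the function name is ours) reproduces the Example 1 figures.

```python
# Bounds on the probabilities of benefit (PB) and harm (PH) obtainable
# from experimental data alone (Section 5.1.1). Illustrative sketch only.

def pb_ph_bounds(p1: float, p0: float):
    """p1 = Pr(Y=1 | X<-1), p0 = Pr(Y=1 | X<-0).
    Returns ((PB-, PB+), (PH-, PH+))."""
    tau = p1 - p0                 # ATE
    pb_lo = max(tau, 0.0)         # PB- = max{tau, 0}
    pb_hi = min(p1, 1.0 - p0)     # PB+ = min{Pr(Y=1|X<-1), Pr(Y=0|X<-0)}
    # PH = PB - tau, so the PH interval is the PB interval shifted down by tau
    return (pb_lo, pb_hi), (pb_lo - tau, pb_hi - tau)

# Example 1:
print(pb_ph_bounds(0.49, 0.21))
# approximately ((0.28, 0.49), (0.0, 0.21))
```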
5.2 Covariate information

Now suppose that, again with experimental data, we can obtain additional information on some pre-treatment covariate L (for simplicity assumed discrete), unaffected by intervention (so Pr(L = l | X ← x) = Pr(L = l), assumed known and positive). We thus have access to the conditional interventional probabilities Pr(Y = y | L = l, X ← x).

Let τ(l), ρ(l) be defined as in (2) and (3), but with probabilities further conditioned on L = l. If, for the target case, we observe L = l, then we simply apply the above analysis, conditional on L = l. In particular, the joint distribution for Y, given L = l, will be as in Table 1, with ρ, τ, ξ replaced, respectively, by ρ(l), τ(l), ξ(l), where ξ(l) is subject only to

|τ(l)| ≤ ξ(l) ≤ 1 − |ρ(l)|.    (7)

Finally, suppose that, while having access, from the experimental data, to the probabilities Pr(Y = y | L = l, X ← x), we do not observe L for the target patient. In this case (and unlike the situation for decision theory) the additional background knowledge can make a difference. In Table 1 we now have ξ = Σ_l ξ(l) × Pr(L = l), and we get the new interval bound

L := Σ_l |τ(l)| × Pr(L = l) ≤ ξ ≤ 1 − Σ_l |ρ(l)| × Pr(L = l) =: U.    (8)

Since τ = Σ_l τ(l) × Pr(L = l) and ρ = Σ_l ρ(l) × Pr(L = l), this interval will be strictly contained in that of (6) so long as not all the (τ(l)), or not all the (ρ(l)), have the same sign.

The probability of benefit is now bounded below by Σ_l PB−(l) Pr(L = l) and above by Σ_l PB+(l) Pr(L = l), where PB−(l) and PB+(l) can be computed as in § 5.1.1 with τ and ρ replaced by τ(l) and ρ(l), respectively.

Remark 2. Applying Remark 1, and noting |τ(l)| ≤ 1 − |ρ(l)| for all l, we see that the interval (8) will reduce to a point, yielding full identification of the joint distribution of Y, if and only if |τ(l)| = 1 − |ρ(l)| for all l, so that, for each l, at least one of Pr(Y = y | L = l, X ← x), for x, y = 0, 1, is zero. In this case, both Pr(Y(x) = y, Y(x̄) = 0 | L = l) and Pr(Y(x) = y, Y(x̄) = 1 | L = l) will be 0. Knowing the value of L will then always allow us to predict at least one of the interventional outcomes with certainty. However, the relevant x and y may vary with l, in which case such certainty will not be possible in the absence of knowledge of L.

5.2.1 Observational data

Consider now the case that our data are observational, rather than experimental. Suppose we can observe a "sufficient covariate": a covariate L such that, conditional on L, we can assume there is no residual confounding. That is to say, the observational probability Pr(Y = y | L = l, X = x) can be equated with the interventional probability Pr(Y = y | L = l, X ← x). To ensure meaningful conditioning, we further need the positivity condition: in the observational setting,

Pr(L = l, X = x) > 0 for all l, and x = 0 or 1.    (9)

We can then proceed exactly as in § 5.2 above.

5.3 Intention to treat

Allocation of treatment to patients can be usefully decomposed into two steps:

Intention The patient, or their doctor, decides on which treatment they would ideally want. This decision will typically be related to their health status and other background information that could be predictive of recovery, so that we cannot regard those who desire, and those who reject, active treatment as comparing like with like. This is the genesis of confounding.
+We introduce a binary stochastic “intention to treat” (ITT) variable X∗ to denote the treatment +desired. +Application A treatment X is imposed on the patient. +It is important to distinguish X and X∗.5 The ITT variable X∗ exists prior to application of treatment, +and can thus be regarded as independent of it: +Pr(X∗ = x∗ | X ← x) = Pr(X∗ = x∗). +(10) +This expresses the covariate nature of X∗. +We assume that, in an observational setting, the desired treatment is the one that is actually admin- +istered (there being no reason to do otherwise). Thus the received treatment X will be the same as the +desired treatment X∗. In particular, since we observe X, we can infer the value of X∗. +In an experiment, however, the treatment X will be imposed (e.g., by randomization), in a way that +will typically take no account of X∗. Even though we can still conceive of the ITT variable X∗ as existing, +it may or—more usually—may not be possible to observe it. When X∗ is observable, it can be used, just +like any other covariate, to improve decision-making, as in § 2 (when X∗ is observed for the target patient), +or, in the approach of MP, to narrow the bounds on PB and PH, as in § 5.2. +5.3.1 ITT as a sufficient covariate +In an observational setting, where X∗ = X is observed, it is natural to assume “distributional consistency” +[12]: the distribution of Y given intended treatment X∗ = x—and so, also, given received treatment +X = x—is the same as that of Y , given X∗ = x, under an imposed intervention X ← x that happens to +coincide with the treatment that would have been chosen anyway: +Pr(Y = y | X∗ = x, X = x) = Pr(Y = y | X∗ = x, X ← x). +(11) +For x∗ ̸= x, the event (X∗ = x∗, X = x) does not occur in the observational regime, so we can interpret +Pr(Y = y | X∗ = x∗, X = x) however we want, in particular as +Pr(Y = y | X∗ = x∗, X = x) = Pr(Y = y | X∗ = x∗, X ← x), +(12) +and then (11) implies that (12) holds for all x, x∗. +Properties (10) and (12) imply that X∗, which is observed in the observational setting, behaves as a +sufficient covariate. +5.4 Combination of data +It would be nice if, with observational data, we could profit from the fact that X∗ is a sufficient covariate, +as in § 5.2.1. However, this is not straightforward, since the positivity condition (9) fails: for x∗ ̸= x, even +5 We should further distinguish between imposed treatment and received treatment, as in Dawid [12]. Here we notate +both as X, hoping this will cause no confusion. We write X ← x when X refers to the imposed treatment, and X = x +when X refers to the received treatment. + +A. Philip Dawid and Stephen Senn, Personalised Decision-Making +9 +though we may assume Pr(Y = y | X∗ = x∗, X ← x) = Pr(Y = y | X∗ = x∗, X = x), we have no +data to estimate the latter term. Again, when our data are experimental but we can not directly observe +X∗, we can not identify Pr(Y = y | X∗ = x∗, X ← x). However, it turns out that we can do so if we +can also obtain observational data: the combination of both types of data allows us, after all, to identify +Pr(Y = y | X∗ = x∗, X ← x), even for x ̸= x∗. This we show in the following theorem. +Theorem 1. Suppose we can identify the joint distribution of X and Y in the observational context, where +0 < Pr(X = 1) < 1, and can also identify the distribution of Y under either intervention X ← x (x = 0, 1). +Then, under conditions (10) and (11), all the probabilities Pr(Y = y | X∗ = x∗, X ← x) (x, x∗ = 0, 1) are +identified. 
Specifically,

Pr(Y = y | X∗ = 1, X ← 1) = Pr(Y = y | X = 1)    (13)
Pr(Y = y | X∗ = 0, X ← 0) = Pr(Y = y | X = 0)    (14)
Pr(Y = y | X∗ = 1, X ← 0) = {Pr(Y = y | X ← 0) − Pr(Y = y, X = 0)} / Pr(X = 1)    (15)
Pr(Y = y | X∗ = 0, X ← 1) = {Pr(Y = y | X ← 1) − Pr(Y = y, X = 1)} / Pr(X = 0).   (16)

Proof. (13) and (14) follow from (11).

To identify Pr(Y = y | X∗ = 1, X ← 0), we argue as follows. We have

Pr(Y = y | X ← 0) = Pr(Y = y | X∗ = 0, X ← 0) × Pr(X∗ = 0 | X ← 0)
                      + Pr(Y = y | X∗ = 1, X ← 0) × Pr(X∗ = 1 | X ← 0)
                  = Pr(Y = y | X = 0) × Pr(X = 0)
                      + Pr(Y = y | X∗ = 1, X ← 0) × Pr(X = 1),    (17)

on using (10) and (11), and the fact that X∗ = X in the observational setting. Since all the other terms in (17) are identifiable in either the observational or the experimental context, and Pr(X = 1) ≠ 0, we can solve for Pr(Y = y | X∗ = 1, X ← 0), obtaining (15). Then (16) follows similarly.

The above proof relies on X (and so X∗) being binary, but Y need not be. Versions of this argument have appeared in [14–17].

Corollary 1. The joint distribution of (X∗, Y) under the intervention X ← x is then identified.

Proof. Follows since, by (10), Pr(X∗ = x∗ | X ← x) = Pr(X = x∗) is identified in the observational context.

Remark 3. Since Pr(Y = y | X∗ = 1, X ← 0) ≥ 0, etc., we deduce from (15) and (16) the consistency constraint Pr(Y = y | X ← x) ≥ Pr(Y = y, X = x), all x, y. When this fails, and that failure can not be ascribed to sampling variation or bias, that is evidence of violation of the conditions of § 7 below, that have, implicitly, been used to justify the above argument.

Theorem 1 and Corollary 1 express just what the combination of observational and experimental data is doing for us: it allows us to identify distributions involving the ITT variable X∗.

5.5 Benefit and harm

Taking now X∗ as our sufficient covariate L, we can apply the formulae of (13)–(16) to compute the quantities τ(x∗), ρ(x∗) required for the analysis of § 5.2. Noting that X∗ = X in the observational regime, so that Pr(X = x) = Pr(X∗ = x), we obtain

Pr(X∗ = 1) × τ(1) = Pr(Y = 1) − Pr(Y = 1 | X ← 0)
Pr(X∗ = 0) × τ(0) = Pr(Y = 1 | X ← 1) − Pr(Y = 1)
Pr(X∗ = 1) × ρ(1) = K − Pr(Y = 0 | X ← 0)
Pr(X∗ = 0) × ρ(0) = Pr(Y = 1 | X ← 1) − K

where K = Pr(Y = 1, X = 1) + Pr(Y = 0, X = 0). Then from (8) we bound ξ within (L, U), where

L     = |Pr(Y = 1) − Pr(Y = 1 | X ← 0)| + |Pr(Y = 1) − Pr(Y = 1 | X ← 1)|
1 − U = |Pr(Y = 0 | X ← 0) − K| + |Pr(Y = 1 | X ← 1) − K|.

Then PB lies in ((L + τ)/2, (U + τ)/2), and PH = PB − τ lies in ((L − τ)/2, (U − τ)/2). Although expressed differently, these results agree with those of MP [2].

By Remark 2, the joint distribution of Y, and in particular PB, PH, will be point identified just when, for both x∗ = 0 and x∗ = 1, there exist x, y such that Pr(Y = y | X∗ = x∗, X ← x) = 0. In non-trivial cases we will have Pr(Y = y | X = x) ≠ 0, in which case, by (13) and (14), this would need to happen with x ≠ x∗. For that, by (15) and (16), we require

Pr(Y = y | X ← 0) = Pr(Y = y, X = 0)    (18)

for either y = 1 or y = 0; as well as

Pr(Y = y | X ← 1) = Pr(Y = y, X = 1)    (19)

for either y = 1 or y = 0.

6 Examples

MP [2, Table 1] consider two cases, in both of which the interventional probabilities of recovery are as in our Example 1, having Pr(Y = 1 | X ← 1) = 0.49, Pr(Y = 1 | X ← 0) = 0.21, and so ATE = 0.28. However, they have different observational data.
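Since these formulae involve only the two interventional recovery probabilities and the four observational cell probabilities, they are straightforward to mechanise. The sketch below (Python, purely illustrative; all names are ours) computes the resulting PB and PH intervals, and can be checked against each of the examples that follow.

```python
# Bounds on PB and PH from combined experimental and observational data,
# following the formulae of Section 5.5. Illustrative sketch only.

def combined_bounds(p1, p0, joint):
    """p1 = Pr(Y=1 | X<-1), p0 = Pr(Y=1 | X<-0) (experimental);
    joint[(y, x)] = observational Pr(Y=y, X=x).
    Returns ((PB-, PB+), (PH-, PH+))."""
    tau = p1 - p0                                   # ATE
    py1 = joint[(1, 1)] + joint[(1, 0)]             # observational Pr(Y=1)
    K = joint[(1, 1)] + joint[(0, 0)]
    L = abs(py1 - p0) + abs(p1 - py1)               # lower bound on xi
    U = 1.0 - (abs(K - (1.0 - p0)) + abs(p1 - K))   # upper bound on xi
    return ((L + tau) / 2, (U + tau) / 2), ((L - tau) / 2, (U - tau) / 2)

# Example 2 (females), Table 2:
females = {(1, 1): 0.19, (0, 1): 0.51, (1, 0): 0.21, (0, 0): 0.09}
print(combined_bounds(0.49, 0.21, females))
# approximately ((0.28, 0.28), (0.0, 0.0)): point identification
```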
We now analyse these in detail.

Example 2. This example relates to females, for whom the observational joint probabilities are as in Table 2.

Table 2. Joint observational distribution of (X, Y) for females

           Y = 1   Y = 0
X = 1       0.19    0.51    0.70
X = 0       0.21    0.09    0.30
            0.40    0.60    1

Applying the formulae of § 5.5 we find:

0.7 × τ(1) = 0.19
0.3 × τ(0) = 0.09
0.7 × ρ(1) = −0.51
0.3 × ρ(0) = 0.21

It follows that PB−(1) = τ(1) = 19/70. Also, PB+(1) = Pr(Y = 1 | X∗ = 1, X ← 1) = Pr(Y = 1 | X = 1) = 19/70. Hence, given X∗ = 1, we have exact identification: PB(1) = 19/70. This occurs because Pr(Y = 1, X = 0) = 0.21 = Pr(Y = 1 | X ← 0), implying the deterministic property Pr(Y = 1 | X∗ = 1, X ← 0) = 0: a female who desires treatment will never recover if untreated. Consequently such a female should be treated.

Also, PB−(0) = τ(0) = 0.3, while PB+(0) = Pr(Y = 0 | X∗ = 0, X ← 0) = Pr(Y = 0 | X = 0) = 0.3. Given X∗ = 0, we again have exact identification: PB(0) = 0.3. This occurs because Pr(Y = 0, X = 1) = 0.51 = Pr(Y = 0 | X ← 1), so that Pr(Y = 0 | X∗ = 0, X ← 1) = 0: a female who does not desire treatment will always recover if treated. Again, such a female should be treated.

Finally we obtain exact identification marginally: PB = 0.28. Correspondingly, PH = PB − τ = 0. As there is no possibility of harm, any female should be treated.

All the above conclusions agree with the DT prescription, based on the experimental data alone: since ATE > 0, a female should be treated.

Example 3. For males, the observational joint probabilities are as in Table 3.

Table 3. Joint observational distribution of (X, Y) for males

           Y = 1   Y = 0
X = 1       0.49    0.21    0.70
X = 0       0.21    0.09    0.30
            0.70    0.30    1

Proceeding similarly to Example 2, we find that PB and PH are again identified exactly: PB = 0.49, PH = 0.21. Indeed, Pr(Y = 1, X = 1) = 0.49 = Pr(Y = 1 | X ← 1), implying the deterministic property Pr(Y = 1 | X∗ = 0, X ← 1) = 0: a male who does not desire treatment will never recover if treated. Consequently such a male should not be treated. Also, Pr(Y = 1, X = 0) = 0.21 = Pr(Y = 1 | X ← 0), so that Pr(Y = 1 | X∗ = 1, X ← 0) = 0: a male who desires treatment will never recover if untreated, so that such a male should be treated.

However, if we do not observe X∗ for the target male patient, the above does not tell us how to proceed. We might try to balance PB (= 0.49) and PH (= 0.21) somehow: for example, treat just when PB > λPH for some chosen value of λ. In the light of the clinical maxim primum non nocere, a value λ = 3 might be chosen—in which case the target male would not be treated.6

6 This argument parallels one in MP [1], having different numbers, and λ = 2.

By contrast, in the absence of knowledge of X∗ for the target male, the DT approach would take no account of the observational data, again focusing simply on ATE = 0.28—and so decide to treat. In a large population of similar cases, this would lead to an overall recovery rate of 49%, the maximum possible; whereas the above strategy based on balancing PB and PH would only have a 21% recovery rate. It is difficult to see how this could be regarded as ethical.

Example 4. Consider another case. Again, the interventional probabilities are Pr(Y = 1 | X ← 1) = 0.49, Pr(Y = 1 | X ← 0) = 0.21, with τ = 0.28. Now the observational joint probabilities are as in Table 4.

Table 4. Another joint observational distribution of (X, Y)

           Y = 1   Y = 0
X = 1       0.2     0.5     0.7
X = 0       0.1     0.2     0.3
            0.3     0.7     1
We compute

0.7 × τ(1) = 0.09
0.3 × τ(0) = 0.19
0.7 × ρ(1) = −0.39
0.3 × ρ(0) = 0.09.

Using (8) we find 0.28 ≤ ξ ≤ 0.52, whence PB = (ξ + τ)/2 is bounded between 0.28 and 0.40—and so PH = PB − τ lies between 0 and 0.12. So, even with the aid of the additional observational data, we have not been able to identify these probabilities exactly. And even if we were to resolve the ambiguity somehow, for example by taking the midpoints of these intervals as suggested by Li and Pearl [3], we would be no better off than we were in Example 3, where trying to balance PB against PH could lead to a decision opposite to the recommendation of the simple DT analysis, so leading to fewer recoveries.

7 Assumptions and critical comments

Here we identify and discuss some of the assumptions underlying the foregoing analyses.

7.1 Representative data

A fundamental assumption underlying both the decision-theoretic analysis of § 2 and the alternative approach of § 3 is that the data available for estimating the interventional probabilities Pr(Y = y, L = l | X ← x) are on individuals who can be regarded as "similar to" ("exchangeable with") the target case, so that these estimated probabilities are applicable to the target.7 In reality this is highly implausible. For example, a clinical trial will have entry criteria and processes that make its subjects quite untypical of the population from which they are drawn, or indeed of the individuals recruited into another such trial. In any case, despite the name, entry criteria govern who does not get into a trial: they cannot guarantee that those who enter are representative even of a target individual meeting the same criteria. A clinical trial gains its value, not from representativeness, but from the internal randomisation that ensures that a comparison between its treated and untreated groups is indeed a comparison of like with like, and that valid probability statements can be made about likely differences, so enforcing internal validity. Because of unrepresentativeness it would not be appropriate to regard Pr(Y = y, L = l | X ← x), estimated from the data, as being directly relevant to the target case—the problem of external validity. (One cheating way round this is to focus on a hypothetical target individual who can be regarded as exchangeable with those in the study.) Nevertheless, it may still be reasonable to regard the estimated ATE or CATE as applying to the target—if not in its exact numerical value, at least in its sign, which is what is required, for DT application, to solve the single patient treatment problem; or in its ordering of the CATEi, as required to solve the DT unit selection problem.

To underline how unreasonable the representativeness assumption is, it should be noted that even when clinical trials with similar protocols are compared this assumption is not made. A striking example of its failure for nearly identical protocols is given by the TARGET study [18], in which osteoarthritis patients in some centres were randomised to receive either lumiracoxib or naproxen, and patients in other centres either lumiracoxib or ibuprofen.
The degree of comparability in design of the two sub-studies thus defined was +7 For application to the MP arguments of § 3, the representativeness assumption should apparently be extended to +the (typically unidentifiable) bivariate distribution, along with the other variables, of the pair of potential responses +(Y (1), Y (0)). For the interval-valued inferences made, however, this is not crucial, since these allow for arbitrary de- +pendence in this bivariate distribution. + +A. Philip Dawid and Stephen Senn, Personalised Decision-Making +13 +greater than one would typically expect between two randomised controlled trials (RCTs), and a fortiori +than between an RCT and an observational study, such as MP consider. Nevertheless, very important +differences at baseline were seen between the two sub-studies, even though within-sub-study treatment +arms were comparable. Furthermore, it was possible to demonstrate differences at outcome between the +two studies using lumiracoxib data only, a striking illustration of a study effect. It is generally accepted by +sponsors and regulators that as soon as concurrent control is abandoned the greatest of care must be taken +in drawing inferences. Modern work on using data on historical controls to try and improve the efficiency +of clinical trials takes such study-to-study variation as a given that must be allowed for [19]. +7.2 Combination of data +An essential requirement for the application of Theorem 1 is that the observational and experimental +datasets comprise similar individuals, so that the same probabilities for X, X∗, Y apply to both groups. +This is even more implausible than the representativeness of either group. In particular, the assumption of +a common distribution for the desired treatment X∗, in both datasets and in the target patient, is vital but +highly questionable. Even if we were to accept the arguments of MP [2] based on combining observational +and experimental data, without this property they are simply irrelevant. +7.3 What do clinical trialists do in practice? +The key to using RCTs is to identify reasonable assumptions, and use theory to transfer results from trial +to practice. A striking example is given by bioequivalence studies. The subjects are usually young healthy +volunteers, frequently male. However, the results will be used to decide on appropriate treatments for +elderly frail patients, some female. There is no pretence of representativeness. Instead, tight control and +sensible scales of analysis are used. The purpose of such studies is to compare two formulations in terms of +bioavailability, and this is typically done using a cross-over trial in which each subject is their own control, +the order of administration being randomised. On separate days, concentration of the test and reference +pharmaceuticals are measured, and the ratio of the areas under the two concentration time curves (AUCs) +are calculated for each subject, then analysed over all subjects, typically after log-transformation. What is +relevant for treating an individual patient is their own AUC: too low and efficacy may be disappointing, +too high and the drug may not be tolerated. However, no inference is made from a bioequivalence study +in terms of AUCs alone, since they would be quite different in healthy volunteers and patients. 
Instead, +the idea is that the ratio between test and reference ought to be the same in volunteers and patients, and +this ratio can be used to make predictions as to how the test drug will behave in clinical practice. An +interesting example of such a study is reported by Shumaker and Metzler [20]. They used a more elaborate +design in which test and reference drugs were given in a double cross-over, thus permitting them to analyse +the formulation-by-subject interaction. They were able to demonstrate that there was no evidence of an +individual bioequivalence effect: although you could estimate the individual relative bioavailability, using +the average over all subjects would be superior than any such naïve estimate. This raises a further issue +with MP, who assume that individual causal effects are stable over time. Moreover, typical causal analysis +assumes an infinite sample size, but no infinities are available for individual subjects, and estimating +individual causal effects requires close attention to components of variance. Bioequivalence studies are an +extreme example, but the general idea of transferring results using a suitable scale for analysis, and back- +transforming to a scale suitable for decision analysis, is commonplace: see [21] for a general discussion and +[22] in the specific context of vaccine efficacy. Of course, as the COVID-19 pandemic has reminded us, there +are no guarantees. Things that work at one time may not do so at another. It behoves all those proposing +solutions to be cautious and humble. + +14 +A. Philip Dawid and Stephen Senn, Personalised Decision-Making +8 Summary +We have given careful accounts of the DT and MP approaches to individualised treatment choice. The DT +approach is simple in the extreme, and selects the treatment strategy that maximises the number of recov- +eries. In contrast, the MP approach fixates on philosophically questionable and unknowable counterfactual +concepts, and when its recommendations differ from those of DT will lead to fewer recoveries. This has +been illustrated in a number of examples. +One feature of the MP approach is the combination of experimental and observational data. When +some very strong, and practically implausible, conditions are satisfied, this permits identification of the +distribution of a special covariate, the intention to treat (ITT). As with any other covariate whose distri- +bution is known, this can then feed back to tighten the MP inferences. But it would be better to observe +this—or any other—covariate in the target patient, which would then lead to better results from the DT +point of view. In particular we have shown that, in just those very special cases that use of ITT leads to +point identification of the MP probabilities of benefit and of harm, knowledge of the target patient’s ITT +value allows perfect prediction of the outcome under at least one of the treatment interventions, and so to +a trivial solution to the decision problem. +The DT approach has a long history of fruitful application to an enormous variety of fields, from clinical +trials to rocket science. Attempts to replace it with another approach, based on counterfactuals, are totally +unnecessary and dangerously misguided. This approach should not be used in practice. +Acknowledgments +We have benefited greatly from discussions with Mats Stensrud and Aaron Sarvet. +Conflict of interest: Prof. 
Philip Dawid is a member of the Editorial Board in the Journal of Causal +Inference but was not involved in the review process of this article. +References +[1] +Scott Mueller and Judea Pearl. +Which patients are in greater need: A counterfactual analysis with reflections on +COVID-19. Blog post, April 2020. +[2] +Scott Mueller and Judea Pearl. Personalized decision making – a conceptual introduction. Technical Report 513, +Department of Computer Science, UCLA, 2022. +[3] +Ang Li and Judea Pearl. Unit selection based on counterfactual logic. In Proceedings of the Twenty-Eighth Inter- +national Joint Conference on Artificial Intelligence, IJCAI-19, pages 1793–1799. International Joint Conferences on +Artificial Intelligence Organization, 7 2019. DOI:10.24963/ijcai.2019/248. +[4] +Ang Li and Judea Pearl. Unit selection: Case study and comparison with A/B test heuristic. Preprint, UCLA, 2022. +[5] +Kosuke Imai, Zhichao Jiang, D. James Greiner, Ryan Halen, and Sooahn Shin. Experimental evaluation of algorithm- +assisted human decision-making: Application to pretrial public safety assessment (with Discussion). Journal of the +Royal Statistical Society, Series A, 2022. To appear. +[6] +Jonathan G. Richens, Rory Beard, and Daniel H. Thompson. Counterfactual harm. In Advances in Neural Information +Processing Systems 35 (NeurIPS 2022), 2022. +https://arxiv.org/abs/2204.12993. +[7] +Eli Ben-Michael, Kosuke Imai, and Zhichao Jiang. Policy learning with asymmetric utilities, 2022. +[8] +Aaron L. Sarvet and Mats J. Stensrud. Perspectives on harm in personalized medicine. Submitted to the American +Journal of Epidemiology, 2022. +[9] +Donald B. Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of +Educational Psychology, 66:688–701, 1974. + +A. Philip Dawid and Stephen Senn, Personalised Decision-Making +15 +[10] Jerzy Neyman. On the application of probability theory to agricultural experiments. Essay on principles (in Polish). +Roczniki Nauk Rolniczych, X:1–51, 1923. English translation of Section 9 (D. M. Dabrowska and T. P. Speed): Sta- +tistical Science 9 (1990), 465–480. +[11] A. Philip Dawid. Causal inference without counterfactuals (with Discussion). Journal of the American Statistical +Association, 95:407–448, 2000. +[12] A. Philip Dawid. Decision-theoretic foundations for statistical causality. Journal of Causal Inference, 9:39–77, 2021. +DOI:10.1515/jci-2020-0008. +[13] A. Philip Dawid. Probability, causality and the empirical world: A Bayes/de Finetti/Popper/Borel synthesis. Statistical +Science, 19:44–57, 2004. +[14] A. Philip Dawid and Monica Musio. What can group level data tell us about individual causality? In A. Carriquiry, +J. Tanur, and W. Eddy, editors, Statistics in the Public Interest: In Memory of Stephen E. Fienberg, pages 235–256. +Springer International Publishing, 2022. +DOI: 10.1007/978-3-030-75460-0_13. +[15] James M. Robins, Tyler J. Vanderweele, and Thomas S. Richardson. Comment on “Causal effects in the presence of +non compliance: A latent variable interpretation” by Antonio Forcina. Metron, LXIV:288–298, 2007. +[16] Sara G. Geneletti and A. Philip Dawid. Defining and identifying the effect of treatment on the treated. In Phyllis M. +Illari, Federica Russo, and Jon Williamson, editors, Causality in the Sciences, pages 728–749. Oxford University Press, +2011. +[17] Mats J. Stensrud and Aaron L. Sarvet. Optimal regimes for algorithm-assisted human decision-making. arXiv preprint +arXiv:2203.03020, 2022. +[18] Stephen Senn. 
Lessons from TGN1412 and TARGET: Implications for observational studies and meta-analysis. Pharmaceutical Statistics, 7:294–301, 2008.
[19] Heinz Schmidli, Sandro Gsteiger, Satrajit Roychoudhury, Anthony O'Hagan, David Spiegelhalter, and Beat Neuenschwander. Robust meta-analytic-predictive priors in clinical trials with historical control information. Biometrics, 70:1023–1032, 2014.
[20] Robert C. Shumaker and Carol M. Metzler. The phenytoin trial is a case study of "individual bioequivalence". Drug Information Journal, 32:1063–1072, 1998.
[21] Jacobus Lubsen and Jan G. Tijssen. Large trials with simple protocols: Indications and contraindications. Controlled Clinical Trials, 10:151–160, 1989.
[22] Stephen Senn. The design and analysis of vaccine trials for COVID-19 for the purpose of estimating efficacy. Pharmaceutical Statistics, 21:790–807, 2022.
Nothing is gained by, for example, trying to impute the unknown value of L. If this is not obvious (as it should be), suppose we tried to do so. The recovery probabilities, conditional on a hypothesised value l for L, are Pr(Y = 1 | L = l, X ← x) (x = 1, 0). But as we do not know l, we need to take the expectation of Pr(Y = 1 | L, X ← x) over the distribution of L when setting X ← x (which is the known marginal distribution of L, unaffected by the intervention). And this is just the unconditional recovery probability Pr(Y = 1 | X ← x).3

3 With finite data, on taking account of known structure in the interventional distributions of (L, Y) it may be possible to improve the estimation of Pr(Y = 1 | X ← x). But this still remains what we need to focus on.
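To make the point concrete, here is a minimal numerical sketch (the covariate distribution and conditional probabilities are assumed purely for illustration, not taken from the paper's examples): averaging the L-conditional interventional recovery probabilities over the known marginal distribution of L returns exactly the unconditional probability Pr(Y = 1 | X ← x) that we would have used anyway.

# Minimal sketch (hypothetical numbers): with L unobserved for the target
# patient, imputing L and then averaging simply gives back the unconditional
# interventional probability Pr(Y = 1 | X <- x), so nothing is gained.

# Assumed marginal distribution of a discrete covariate L
pr_L = {0: 0.3, 1: 0.7}

# Assumed conditional interventional recovery probabilities Pr(Y=1 | L=l, X <- x)
pr_Y1 = {(0, 1): 0.60, (0, 0): 0.30,   # L = 0
         (1, 1): 0.45, (1, 0): 0.15}   # L = 1

def unconditional(x):
    """Pr(Y = 1 | X <- x), obtained by averaging over the marginal of L."""
    return sum(pr_L[l] * pr_Y1[(l, x)] for l in pr_L)

for x in (1, 0):
    print(f"Pr(Y=1 | X <- {x}) = {unconditional(x):.3f}")
# The treatment decision for a patient with unknown L compares these two
# numbers, exactly as if we had never tried to impute L.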
2.2 Unit selection

Again consider a large collection G of patients i = 1, . . . , N, with individual recovery probabilities

p_i = Pr(Y_i = 1 | L_i = l_i, X_i ← 1), q_i = Pr(Y_i = 1 | L_i = l_i, X_i ← 0).

If we treat just those in a subset S, the expected number of recoveries will be

Σ_{i∈S} p_i + Σ_{i∈G∖S} q_i = Σ_{i∈G} q_i + Σ_{i∈S} CATE_i.

Consequently, to maximise this expected number, we should choose S, subject to any constraints, to maximise Σ_{i∈S} CATE_i. If we have limited treatments available, we should thus prioritise individuals in decreasing order of their CATE (while of course not treating any one for whom CATE < 0). Again, any other policy (subject to the same constraints) will have a smaller number of recoveries.
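The prioritisation rule just described is a simple greedy selection; the sketch below illustrates it with made-up values of p_i and q_i and an assumed limit on the number of available doses.

# Sketch of the unit-selection rule of Section 2.2 (illustrative numbers only):
# with a limited number of doses, treat patients in decreasing order of
# CATE_i = p_i - q_i, and never treat anyone whose CATE is negative.

# Assumed individual recovery probabilities under treatment (p) and control (q)
patients = {          # id: (p_i, q_i)
    "a": (0.49, 0.21),
    "b": (0.35, 0.40),
    "c": (0.80, 0.50),
    "d": (0.55, 0.54),
}
doses = 2  # limited supply

cate = {i: p - q for i, (p, q) in patients.items()}
# Rank by CATE, keep only those expected to benefit, and respect the budget
to_treat = [i for i, _ in sorted(cate.items(), key=lambda kv: kv[1], reverse=True)
            if cate[i] > 0][:doses]

# Expected recoveries = sum of q_i over everyone, plus CATE_i for those treated
expected_recoveries = sum(q for _, q in patients.values()) + sum(cate[i] for i in to_treat)
print("treat:", to_treat)                                   # ['c', 'a']
print("expected recoveries:", round(expected_recoveries, 2))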
2.3 Potential outcomes?

The "potential outcome" approach to causal inference [9] conceives of the existence, even prior to treatment choice, of the pair of variables Y = (Y(1), Y(0)), where Y(x) denotes the value that, it is supposed, Y will take if intervention X ← x is applied. The pretreatment variables (Y(1), Y(0), L) are supposed to have a joint distribution, unaffected by treatment. With this interpretation, we have

Pr(Y = y, L = l | X ← x) = Pr(Y(x) = y, L = l), (1)

and p = E{Y(1) | L = l}, q = E{Y(0) | L = l}.

In this approach, inference is ideally desired for the "individual treatment effect", ITE := Y(1) − Y(0), which can take values +1 (treatment benefits the patient), −1 (treatment harms the patient) or 0 (treatment has no effect). Then CATE = E(ITE | L = l). However, typically ITE is unobservable, since it is impossible simultaneously both to treat and not to treat the same patient. In particular, no information can be gained about the dependence between Y(1) and Y(0), nor about the distribution (marginal, or conditional on L) of ITE. All that can be inferred is the (conditional) expectation, as above, of ITE, depending as this does only on the individual distributions of Y(1) and Y(0), which can be identified from experimental data.

In certain very special and atypical cases, essentially those where we have a fully deterministic and completely understood mechanistic system, it may be that the background knowledge L is detailed enough to support perfect prediction of the eventual response, under either intervention. Then we will know, in advance of treatment choice, both potential outcome variables. In this case p = Y(1), q = Y(0), and CATE = ITE. Clearly we should treat as many of those who will (we know for sure) benefit from the treatment as we can.
However, in typical cases perfect prediction is impossible, and then it is arguable whether the potential responses even have any meaningful existence. In any case, there is nothing to be gained by trying to impute potential responses: as in § 2.1.1 above (taking L = Y), we should again simply focus on CATE = p − q. In summary, consideration of potential responses (even if regarded as meaningful) does not add any value to the decision-theoretic approach.

3 The approach of Mueller and Pearl [2]

In contrast to the above decision-theoretic approach, Mueller and Pearl [2] (henceforth MP) opt to take potential outcomes seriously, and focus attention on ITE = Y(1) − Y(0). They argue that we should ideally aim to treat those patients having ITE = 1, for whom the treatment made a difference: they would not have recovered without it. It would be wasteful to treat a patient with ITE = 0, for whom the treatment made no difference, and positively harmful to treat a patient with ITE = −1, who would have recovered if untreated, but not if treated. However, this ideal is unattainable, as we will not know a patient's ITE before treatment. Concern is therefore transferred to the "probability of benefit",

PB = Pr(Y(1) = 1, Y(0) = 0) = Pr(ITE = 1),

and the "probability of harm",

PH = Pr(Y(1) = 0, Y(0) = 1) = Pr(ITE = −1),

which are now regarded as the criteria by which to assess any treatment strategy. But not only can we not know a patient's ITE before the treatment decision is made, we can not even know it later, when the outcome Y is observed.
For if we treat the patient we will observe Y = Y(1), but can not then observe the counterfactual outcome Y(0) relevant when we don't treat; similarly, for an untreated patient we can observe Y(0), but not Y(1). So ITE is always unobservable. This means that, even with extensive data on other patients, it will not be possible fully to identify PB and PH. Such data can, however, be used to set interval bounds on these quantities. MP [2] further show how combining experimental and observational data can narrow these bounds. In certain very special cases the bounds narrow to a single point, leading to full identification of PB and PH.

4 Comments on the approach

Our comments on the MP programme are arranged along several dimensions.

4.1 Philosophy

Potential responses such as Y(0) and Y(1), first introduced by Neyman [10], have been considered as fundamental to the conduct of causal inference ever since reintroduced by Rubin [9]. However this conception was challenged by Dawid [11, 12], who pointed out that, so far from being fundamental, they are entirely unnecessary, and that a fully satisfactory theory can be based on standard decision-theoretic elements. Indeed, there are serious philosophical objections to regarding potential responses as having real existence. Only if we take a fully Laplacean view of the universe, in which the future of the universe is entirely determined by its present state and the laws of Physics, does this make any sense at all—and even then, it is difficult to incorporate the whims of an unconstrained external agent who decides whether or not to give treatment, or to account for the effect of external conditions arising after treatment.
Even under Laplacean determinism, our ignorance of the information needed to predict the future means that we are unable to make use of it. Whether or not we believe in a deep-down deterministic universe, our predictions of the future can only be based on the limited information we do have at our disposal, and must necessarily be probabilistic.4 Imagining what we could know or do, if only we had more information than we actually do have, is just pointless.

4 See Dawid [13] for an approach to understanding non-extreme probabilities based on imperfect information about a deterministic world.

4.2 Applicability

Another important dimension of criticism is that the strong conditions needed for application of the MP theory will almost never obtain in practice. See § 7 below for details.

4.3 Helpfulness

The output of an MP analysis will, at very best, be point estimates of the probabilities of benefit and of harm—in most cases, we won't even get these, but can only bound these quantities within an interval. But even when we have these quantities, it is far from clear how they help to inform treatment decisions.

4.4 Ethics

Our final criticism is the simplest, but most incisive. The treatment decisions made using the DT approach are guaranteed to be better than those made by any other decision rule, in the sense that they will maximise the number of recoveries in the population. So whenever the MP approach leads to different decisions, it will produce a decrease in the number of recoveries. We find it hard to construe this as ethical.
5 Analysis

Here we provide a deconstruction of the analysis of MP [2]—which should not, however, be taken as agreement with their arguments and interpretations. There are a number of crucial assumptions required, but to avoid cluttering the argument we leave these implicit, postponing specification and discussion of them to § 7. We develop the story-line in a number of stages.

In § 5.1 we consider the case where we have access to experimental data on treatment X and response Y, and show how this can be used to bound the probabilities of benefit and of harm. We also discuss the special circumstances in which these interval bounds shrink to a point. In § 5.2 we further suppose that we can measure additional covariate information L on individuals. If we have these values in the experimental data, this additional information can lead to a narrowing of the bounds for PB and PH for the target case, even when L for that case is unobserved.

Section 5.3 introduces a particular, potentially useful, covariate, "intention to treat", X∗—the treatment that a patient (or their doctor) would like to choose, if unconstrained. This may well be informative about their state of health, and thus their outcome. In some experiments it may be possible to obtain information about X∗, and this can then be used as L in § 5.2. However it will often not be possible to observe X∗ in the experiment. Section 5.4 considers how this problem can be overcome by the incorporation of observational data, if we can assume that, in such data, the desired treatment was the one actually applied, so that X∗ = X becomes observable. The combination of experimental and observational data allows us to identify the distribution of X∗ (together with the other variables), and so once again allows us to apply the theory of § 5.2 to obtain improved bounds for PB and PH, which are detailed in § 5.5.
5.1 Simplest case

We start by presenting the basis of the approach in the simplest case, where the data are experimental, and there is no additional covariate information. We thus have access to the interventional response probabilities Pr(Y = y | X ← x) = Pr(Y(x) = y) (x, y = 0, 1). What can be inferred, from these, about the probabilities of benefit and of harm?

As described by Dawid and Musio [14], it is helpful to express the interventional probabilities in terms of parameters τ and ρ, where

τ := Pr(Y = 1 | X ← 1) − Pr(Y = 1 | X ← 0) (2)
ρ := Pr(Y = 1 | X ← 1) − Pr(Y = 0 | X ← 0). (3)

Then τ is the average treatment effect, ATE, of X on Y, while ρ = Pr(Y = 1 | X ← 1) + Pr(Y = 1 | X ← 0) − 1 is a measure of how common the outcome is. The transition matrix (Pr(Y = y | X ← x)) from X to Y is

P = P(τ, ρ) := [ (1 + τ + ρ)/2   (1 − τ − ρ)/2
                 (1 − τ + ρ)/2   (1 + τ − ρ)/2 ],  (4)

where the row and column labels are implicitly 1 and 0 in that order.
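The reparameterisation (2)–(4) is easy to check numerically. The following sketch, using the interventional probabilities of Example 1, rebuilds the transition matrix P(τ, ρ) and confirms that its rows reproduce the two interventional distributions.

# Sketch: the (tau, rho) parameterisation of the interventional distributions,
# checked on the probabilities of Example 1 (0.49 and 0.21).

p1 = 0.49  # Pr(Y = 1 | X <- 1)
p0 = 0.21  # Pr(Y = 1 | X <- 0)

tau = p1 - p0            # ATE, equation (2)
rho = p1 - (1 - p0)      # equation (3)

# Transition matrix (4): rows indexed by x = 1, 0; columns by y = 1, 0
P = [[(1 + tau + rho) / 2, (1 - tau - rho) / 2],   # x = 1
     [(1 - tau + rho) / 2, (1 + tau - rho) / 2]]   # x = 0

print(f"tau = {tau:.2f}, rho = {rho:.2f}")
print("row for X <- 1:", [round(v, 2) for v in P[0]])  # [0.49, 0.51]
print("row for X <- 0:", [round(v, 2) for v in P[1]])  # [0.21, 0.79]
assert abs(P[0][0] - p1) < 1e-9 and abs(P[1][0] - p0) < 1e-9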
The necessary and sufficient condition for all the transition probabilities to be non-negative is

|τ| + |ρ| ≤ 1. (5)

We have equality in (5) only in the degenerate case that one of the entries of P is 0. We can express the joint distribution for Y = (Y(1), Y(0)) as in Table 1.

                 Y(0) = 1          Y(0) = 0
Y(1) = 1     (1 + ρ − ξ)/2       (ξ + τ)/2        (1 + τ + ρ)/2
Y(1) = 0        (ξ − τ)/2       (1 − ρ − ξ)/2     (1 − τ − ρ)/2
             (1 − τ + ρ)/2      (1 + τ − ρ)/2           1

Table 1. Joint probability distribution of (Y(1), Y(0)), by (1) (with L absent) and (4).

The margins are determined, but the internal entries are indeterminate, having one degree of freedom crystallised in the unspecified "slack variable" ξ, which is not identified by the experimental data. The only constraint on ξ is the logical one that all internal entries of Table 1 be non-negative. This holds if and only if

|τ| ≤ ξ ≤ 1 − |ρ|. (6)

This interval information is all that can be concluded about the joint distribution for Y when we have data on the behaviour of Y under intervention on X, and no additional information.

Remark 1. The interval (6) shrinks to a point, so that the joint distribution of Y is fully determined by the experimental data, if and only if we have equality in (5), i.e., just when P is degenerate, so that, for some x, y = 0, 1, Pr(Y = y | X ← x) = 0. That is to say, for at least one of the interventions, the resulting outcome Y can be predicted with certainty—a most unusual state of affairs.
In this case Pr(Y(x) = y) = 0, so that both joint events (Y(x) = y, Y(x′) = 0) and (Y(x) = y, Y(x′) = 1) (where x′ = 1 − x) have probability 0.

5.1.1 Benefit and harm

The probability of benefit PB is the upper right entry of Table 1, PB = Pr(Y(1) = 1, Y(0) = 0) = (ξ + τ)/2, which by (6) is bounded between PB− := (|τ| + τ)/2 = max{τ, 0} and PB+ := (1 − |ρ| + τ)/2 = min{Pr(Y = 1 | X ← 1), Pr(Y = 0 | X ← 0)}. The probability of harm is the lower left entry of Table 1, PH = Pr(Y(1) = 0, Y(0) = 1) = (ξ − τ)/2 = PB − τ.

For the case of Example 1, we have τ = 0.28, ρ = −0.3. Without any further information, we can only infer 0.28 ≤ PB ≤ 0.49, and correspondingly 0 ≤ PH ≤ 0.21.
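These bounds are simple functions of the two interventional probabilities. The short sketch below makes the computation explicit, reproducing the interval just quoted for Example 1.

# Sketch: interval bounds on the probabilities of benefit (PB) and harm (PH)
# from experimental data alone, as in Section 5.1.1, using Example 1's numbers.

def pb_ph_bounds(p1, p0):
    """p1 = Pr(Y=1 | X <- 1), p0 = Pr(Y=1 | X <- 0); returns the PB and PH intervals."""
    tau = p1 - p0
    pb_lo = max(tau, 0.0)          # (|tau| + tau)/2
    pb_hi = min(p1, 1.0 - p0)      # (1 - |rho| + tau)/2
    # PH = PB - tau, so its bounds are just shifted by tau
    return (pb_lo, pb_hi), (pb_lo - tau, pb_hi - tau)

(pb_lo, pb_hi), (ph_lo, ph_hi) = pb_ph_bounds(0.49, 0.21)
print(f"{pb_lo:.2f} <= PB <= {pb_hi:.2f}")   # 0.28 <= PB <= 0.49
print(f"{ph_lo:.2f} <= PH <= {ph_hi:.2f}")   # 0.00 <= PH <= 0.21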
5.2 Covariate information

Now suppose that, again with experimental data, we can obtain additional information on some pretreatment covariate L (for simplicity assumed discrete), unaffected by intervention (so Pr(L = l | X ← x) = Pr(L = l), assumed known and positive). We thus have access to the conditional interventional probabilities Pr(Y = y | L = l, X ← x). Let τ(l), ρ(l) be defined as in (2) and (3), but with probabilities further conditioned on L = l.

If, for the target case, we observe L = l, then we simply apply the above analysis, conditional on L = l. In particular, the joint distribution for Y, given L = l, will be as in Table 1, with ρ, τ, ξ replaced, respectively, by ρ(l), τ(l), ξ(l), where ξ(l) is subject only to

|τ(l)| ≤ ξ(l) ≤ 1 − |ρ(l)|. (7)

Finally, suppose that, while having access, from the experimental data, to the probabilities Pr(Y = y | L = l, X ← x), we do not observe L for the target patient. In this case (and unlike the situation for decision theory) the additional background knowledge can make a difference. In Table 1 we now have ξ = Σ_l ξ(l) × Pr(L = l), and we get the new interval bound

L := Σ_l |τ(l)| × Pr(L = l) ≤ ξ ≤ 1 − Σ_l |ρ(l)| × Pr(L = l) =: U. (8)

Since τ = Σ_l τ(l) × Pr(L = l) and ρ = Σ_l ρ(l) × Pr(L = l), this interval will be strictly contained in that of (6) so long as not all the τ(l), or not all the ρ(l), have the same sign. The probability of benefit is now bounded below by Σ_l PB−(l) Pr(L = l) and above by Σ_l PB+(l) Pr(L = l), where PB−(l) and PB+(l) can be computed as in § 5.1.1 with τ and ρ replaced by τ(l) and ρ(l), respectively.

Remark 2. Applying Remark 1, and noting that |τ(l)| ≤ 1 − |ρ(l)| for all l, we see that the interval (8) will reduce to a point, yielding full identification of the joint distribution of Y, if and only if |τ(l)| = 1 − |ρ(l)| for all l, so that, for each l, at least one of Pr(Y = y | L = l, X ← x), for x, y = 0, 1, is zero. In this case, both Pr(Y(x) = y, Y(x′) = 0 | L = l) and Pr(Y(x) = y, Y(x′) = 1 | L = l) will be 0. Knowing the value of L will then always allow us to predict at least one of the interventional outcomes with certainty. However, the relevant x and y may vary with l, in which case such certainty will not be possible in the absence of knowledge of L.
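The narrowing provided by (8) is easily illustrated numerically. In the sketch below the stratum-specific probabilities are assumed purely for illustration, and the τ(l) are chosen to have opposite signs so that the lower bound genuinely improves.

# Sketch (hypothetical strata): how covariate information narrows the bound (8)
# on the slack xi, and hence on PB, even when L is not observed for the target.

pr_L = {0: 0.5, 1: 0.5}
# Assumed Pr(Y = 1 | L = l, X <- x), keyed by (l, x)
pr_Y1 = {(0, 1): 0.6, (0, 0): 0.2,
         (1, 1): 0.3, (1, 0): 0.5}

tau_l = {l: pr_Y1[(l, 1)] - pr_Y1[(l, 0)] for l in pr_L}          # tau(l)
rho_l = {l: pr_Y1[(l, 1)] - (1 - pr_Y1[(l, 0)]) for l in pr_L}    # rho(l)

tau = sum(pr_L[l] * tau_l[l] for l in pr_L)
rho = sum(pr_L[l] * rho_l[l] for l in pr_L)

# Without the covariate: |tau| <= xi <= 1 - |rho|, as in (6)
lo_plain, hi_plain = abs(tau), 1 - abs(rho)
# With the covariate: the bound (8)
lo_cov = sum(pr_L[l] * abs(tau_l[l]) for l in pr_L)
hi_cov = 1 - sum(pr_L[l] * abs(rho_l[l]) for l in pr_L)

print(f"xi bounds without L: [{lo_plain:.2f}, {hi_plain:.2f}]")   # [0.10, 0.80]
print(f"xi bounds with    L: [{lo_cov:.2f}, {hi_cov:.2f}]")       # [0.30, 0.80]
# PB = (xi + tau)/2, so the PB interval narrows accordingly:
print(f"PB without L: [{(lo_plain + tau)/2:.2f}, {(hi_plain + tau)/2:.2f}]")  # [0.10, 0.45]
print(f"PB with    L: [{(lo_cov + tau)/2:.2f}, {(hi_cov + tau)/2:.2f}]")      # [0.20, 0.45]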
5.2.1 Observational data

Consider now the case that our data are observational, rather than experimental. Suppose we can observe a "sufficient covariate": a covariate L such that, conditional on L, we can assume there is no residual confounding. That is to say, the observational probability Pr(Y = y | L = l, X = x) can be equated with the interventional probability Pr(Y = y | L = l, X ← x). To ensure meaningful conditioning, we further need the positivity condition: in the observational setting,

Pr(L = l, X = x) > 0, all l, and x = 0 or 1. (9)

We can then proceed exactly as in § 5.2 above.

5.3 Intention to treat

Allocation of treatment to patients can be usefully decomposed into two steps:

Intention The patient, or their doctor, decides on which treatment they would ideally want. This decision will typically be related to their health status and other background information that could be predictive of recovery, so that we cannot regard those who desire, and those who reject, active treatment as comparing like with like. This is the genesis of confounding. We introduce a binary stochastic "intention to treat" (ITT) variable X∗ to denote the treatment desired.
Application A treatment X is imposed on the patient.

It is important to distinguish X and X∗.5 The ITT variable X∗ exists prior to application of treatment, and can thus be regarded as independent of it:

Pr(X∗ = x∗ | X ← x) = Pr(X∗ = x∗). (10)

This expresses the covariate nature of X∗.

5 We should further distinguish between imposed treatment and received treatment, as in Dawid [12]. Here we notate both as X, hoping this will cause no confusion. We write X ← x when X refers to the imposed treatment, and X = x when X refers to the received treatment.

We assume that, in an observational setting, the desired treatment is the one that is actually administered (there being no reason to do otherwise). Thus the received treatment X will be the same as the desired treatment X∗. In particular, since we observe X, we can infer the value of X∗. In an experiment, however, the treatment X will be imposed (e.g., by randomization), in a way that will typically take no account of X∗. Even though we can still conceive of the ITT variable X∗ as existing, it may or—more usually—may not be possible to observe it. When X∗ is observable, it can be used, just like any other covariate, to improve decision-making, as in § 2 (when X∗ is observed for the target patient), or, in the approach of MP, to narrow the bounds on PB and PH, as in § 5.2.
5.3.1 ITT as a sufficient covariate

In an observational setting, where X∗ = X is observed, it is natural to assume "distributional consistency" [12]: the distribution of Y given intended treatment X∗ = x—and so, also, given received treatment X = x—is the same as that of Y, given X∗ = x, under an imposed intervention X ← x that happens to coincide with the treatment that would have been chosen anyway:

Pr(Y = y | X∗ = x, X = x) = Pr(Y = y | X∗ = x, X ← x). (11)

For x∗ ≠ x, the event (X∗ = x∗, X = x) does not occur in the observational regime, so we can interpret Pr(Y = y | X∗ = x∗, X = x) however we want, in particular as

Pr(Y = y | X∗ = x∗, X = x) = Pr(Y = y | X∗ = x∗, X ← x), (12)

and then (11) implies that (12) holds for all x, x∗. Properties (10) and (12) imply that X∗, which is observed in the observational setting, behaves as a sufficient covariate.

5.4 Combination of data

It would be nice if, with observational data, we could profit from the fact that X∗ is a sufficient covariate, as in § 5.2.1. However, this is not straightforward, since the positivity condition (9) fails: for x∗ ≠ x, even though we may assume Pr(Y = y | X∗ = x∗, X ← x) = Pr(Y = y | X∗ = x∗, X = x), we have no data to estimate the latter term. Again, when our data are experimental but we can not directly observe X∗, we can not identify Pr(Y = y | X∗ = x∗, X ← x).
However, it turns out that we can do so if we can also obtain observational data: the combination of both types of data allows us, after all, to identify Pr(Y = y | X∗ = x∗, X ← x), even for x ≠ x∗. This we show in the following theorem.

Theorem 1. Suppose we can identify the joint distribution of X and Y in the observational context, where 0 < Pr(X = 1) < 1, and can also identify the distribution of Y under either intervention X ← x (x = 0, 1). Then, under conditions (10) and (11), all the probabilities Pr(Y = y | X∗ = x∗, X ← x) (x, x∗ = 0, 1) are identified. Specifically,

Pr(Y = y | X∗ = 1, X ← 1) = Pr(Y = y | X = 1) (13)
Pr(Y = y | X∗ = 0, X ← 0) = Pr(Y = y | X = 0) (14)
Pr(Y = y | X∗ = 1, X ← 0) = {Pr(Y = y | X ← 0) − Pr(Y = y, X = 0)} / Pr(X = 1) (15)
Pr(Y = y | X∗ = 0, X ← 1) = {Pr(Y = y | X ← 1) − Pr(Y = y, X = 1)} / Pr(X = 0). (16)

Proof. Equations (13) and (14) follow from (11). To identify Pr(Y = y | X∗ = 1, X ← 0), we argue as follows. We have

Pr(Y = y | X ← 0) = Pr(Y = y | X∗ = 0, X ← 0) × Pr(X∗ = 0 | X ← 0) + Pr(Y = y | X∗ = 1, X ← 0) × Pr(X∗ = 1 | X ← 0)
                  = Pr(Y = y | X = 0) × Pr(X = 0) + Pr(Y = y | X∗ = 1, X ← 0) × Pr(X = 1), (17)

on using (10) and (11), and the fact that X∗ = X in the observational setting. Since all the other terms in (17) are identifiable in either the observational or the experimental context, and Pr(X = 1) ≠ 0, we can solve for Pr(Y = y | X∗ = 1, X ← 0), obtaining (15). Then (16) follows similarly.

The above proof relies on X (and so X∗) being binary, but Y need not be. Versions of this argument have appeared in [14–17].
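The identification formulae (13)–(16) are straightforward to apply. The sketch below does so for an assumed, mutually consistent pair of data sources (an observational joint distribution of (X, Y) and the two experimental interventional probabilities); none of the numbers are taken from the paper.

# Sketch of Theorem 1 (equations (13)-(16)): combining an observational joint
# distribution of (X, Y) with experimental interventional distributions
# identifies Pr(Y = 1 | X* = x*, X <- x).  All input numbers are assumed for
# illustration only.

# Observational data: joint probabilities Pr(Y = 1, X = x) and Pr(X = x)
pr_Y1_X1, pr_Y1_X0 = 0.35, 0.10
pr_X1 = 0.5
pr_X0 = 1 - pr_X1

# Experimental data: Pr(Y = 1 | X <- x)
do1, do0 = 0.60, 0.30

ident = {
    (1, 1): pr_Y1_X1 / pr_X1,            # (13): Pr(Y=1 | X*=1, X<-1)
    (0, 0): pr_Y1_X0 / pr_X0,            # (14): Pr(Y=1 | X*=0, X<-0)
    (1, 0): (do0 - pr_Y1_X0) / pr_X1,    # (15): Pr(Y=1 | X*=1, X<-0)
    (0, 1): (do1 - pr_Y1_X1) / pr_X0,    # (16): Pr(Y=1 | X*=0, X<-1)
}

for (xstar, x), v in ident.items():
    assert 0.0 <= v <= 1.0, "inputs violate the consistency constraints"
    print(f"Pr(Y=1 | X*={xstar}, X <- {x}) = {v:.2f}")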
Corollary 1. The joint distribution of (X∗, Y) under the intervention X ← x is then identified.

Proof. Follows since, by (10), Pr(X∗ = x∗ | X ← x) = Pr(X = x∗) is identified in the observational context.

Remark 3. Since Pr(Y = y | X∗ = 1, X ← 0) ≥ 0, etc., we deduce from (15) and (16) the consistency constraint

Pr(Y = y | X ← x) ≥ Pr(Y = y, X = x), all x, y.

When this fails, and that failure can not be ascribed to sampling variation or bias, that is evidence of violation of the conditions of § 7 below, that have, implicitly, been used to justify the above argument.

Theorem 1 and Corollary 1 express just what the combination of observational and experimental data is doing for us: it allows us to identify distributions involving the ITT variable X∗.

5.5 Benefit and harm

Taking now X∗ as our sufficient covariate L, we can apply the formulae of (13)–(16) to compute the quantities τ(x∗), ρ(x∗) required for the analysis of § 5.2. Noting that X∗ = X in the observational regime, so that Pr(X = x) = Pr(X∗ = x), we obtain

Pr(X∗ = 1) × τ(1) = Pr(Y = 1) − Pr(Y = 1 | X ← 0)
Pr(X∗ = 0) × τ(0) = Pr(Y = 1 | X ← 1) − Pr(Y = 1)
Pr(X∗ = 1) × ρ(1) = K − Pr(Y = 0 | X ← 0)
Pr(X∗ = 0) × ρ(0) = Pr(Y = 1 | X ← 1) − K

where K = Pr(Y = 1, X = 1) + Pr(Y = 0, X = 0).
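These identities translate directly into a few lines of code. The short sketch below (ours, reusing the p_obs and p_exp dictionaries of the § 5.4 sketch, and intended only to spell out the algebra) recovers τ(x∗), ρ(x∗) and K from the observational joint distribution and the experimental arms.

# Sketch of the displayed identities for tau(x*), rho(x*) and K,
# with p_obs and p_exp as in the previous sketch.
pX1 = p_obs[1][1] + p_obs[1][0]    # Pr(X = 1) = Pr(X* = 1)
pX0 = 1.0 - pX1                    # Pr(X = 0) = Pr(X* = 0)
pY1 = p_obs[1][1] + p_obs[0][1]    # observational Pr(Y = 1)
K   = p_obs[1][1] + p_obs[0][0]    # Pr(Y = 1, X = 1) + Pr(Y = 0, X = 0)

tau1 = (pY1 - p_exp[0][1]) / pX1   # from Pr(X* = 1) tau(1) = Pr(Y = 1) - Pr(Y = 1 | X <- 0)
tau0 = (p_exp[1][1] - pY1) / pX0   # from Pr(X* = 0) tau(0) = Pr(Y = 1 | X <- 1) - Pr(Y = 1)
rho1 = (K - p_exp[0][0]) / pX1     # from Pr(X* = 1) rho(1) = K - Pr(Y = 0 | X <- 0)
rho0 = (p_exp[1][1] - K) / pX0     # from Pr(X* = 0) rho(0) = Pr(Y = 1 | X <- 1) - K
print(tau1, tau0, rho1, rho0)      # tau(1), tau(0), rho(1), rho(0)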
Then from (8) we bound ξ within (L, U), where

L = |Pr(Y = 1) − Pr(Y = 1 | X ← 0)| + |Pr(Y = 1) − Pr(Y = 1 | X ← 1)|
1 − U = |Pr(Y = 0 | X ← 0) − K| + |Pr(Y = 1 | X ← 1) − K|.

Then PB lies in ((L + τ)/2, (U + τ)/2), and PH = PB − τ lies in ((L − τ)/2, (U − τ)/2). Although expressed differently, these results agree with those of MP [2].

By Remark 2, the joint distribution of Y, and in particular PB, PH, will be point identified just when, for both x∗ = 0 and x∗ = 1, there exist x, y such that Pr(Y = y | X∗ = x∗, X ← x) = 0. In non-trivial cases we will have Pr(Y = y | X = x) ≠ 0, in which case, by (13) and (14), this would need to happen with x ≠ x∗. For that, by (15) and (16), we require

Pr(Y = y | X ← 0) = Pr(Y = y, X = 0)    (18)

for either y = 1 or y = 0; as well as

Pr(Y = y | X ← 1) = Pr(Y = y, X = 1)    (19)

for either y = 1 or y = 0.
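For readers who prefer to see the bound as code, the sketch below (ours, using the same assumed p_obs / p_exp dictionary format as the earlier sketches) packages the interval for ξ and the induced intervals for PB and PH into a single function, here called pb_ph_bounds.

# Sketch of the interval (L, U) for xi and the induced intervals for PB and PH.
def pb_ph_bounds(p_obs, p_exp):
    pY1 = p_obs[1][1] + p_obs[0][1]                    # observational Pr(Y = 1)
    K   = p_obs[1][1] + p_obs[0][0]                    # Pr(Y = 1, X = 1) + Pr(Y = 0, X = 0)
    tau = p_exp[1][1] - p_exp[0][1]                    # ATE
    lo  = abs(pY1 - p_exp[0][1]) + abs(pY1 - p_exp[1][1])        # L
    up  = 1.0 - (abs(p_exp[0][0] - K) + abs(p_exp[1][1] - K))    # U, from 1 - U = ...
    pb  = ((lo + tau) / 2, (up + tau) / 2)             # interval for PB
    ph  = ((lo - tau) / 2, (up - tau) / 2)             # interval for PH = PB - tau
    return pb, ph

Applied to the figures of Example 4 below, this returns the intervals (0.28, 0.40) for PB and (0, 0.12) for PH quoted there.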
6 Examples

MP [2, Table 1] consider two cases, in both of which the interventional probabilities of recovery are as in our Example 1, having Pr(Y = 1 | X ← 1) = 0.49, Pr(Y = 1 | X ← 0) = 0.21, and so ATE = 0.28. However, they have different observational data. We now analyse these in detail.

Example 2. This example relates to females, for whom the observational joint probabilities are as in Table 2.

            Y = 1    Y = 0
  X = 1     0.19     0.51     0.70
  X = 0     0.21     0.09     0.30
            0.40     0.60     1

Table 2. Joint observational distribution of (X, Y) for females

Applying the formulae of § 5.5 we find:

0.7 × τ(1) = 0.19
0.3 × τ(0) = 0.09
0.7 × ρ(1) = −0.51
0.3 × ρ(0) = 0.21

It follows that PB−(1) = τ(1) = 19/70. Also, PB+(1) = Pr(Y = 1 | X∗ = 1, X ← 1) = Pr(Y = 1 | X = 1) = 19/70. Hence, given X∗ = 1, we have exact identification: PB(1) = 19/70.
This occurs because Pr(Y = 1, X = 0) = 0.21 = Pr(Y = 1 | X ← 0), implying the deterministic property Pr(Y = 1 | X∗ = 1, X ← 0) = 0: a female who desires treatment will never recover if untreated. Consequently such a female should be treated.

Also, PB−(0) = τ(0) = 0.3, while PB+(0) = Pr(Y = 0 | X∗ = 0, X ← 0) = Pr(Y = 0 | X = 0) = 0.3. Given X∗ = 0, we again have exact identification: PB(0) = 0.3. This occurs because Pr(Y = 0, X = 1) = 0.51 = Pr(Y = 0 | X ← 1), so that Pr(Y = 0 | X∗ = 0, X ← 1) = 0: a female who does not desire treatment will always recover if treated. Again, such a female should be treated.

Finally we obtain exact identification marginally: PB = 0.28. Correspondingly, PH = PB − τ = 0. As there is no possibility of harm, any female should be treated. All the above conclusions agree with the DT prescription, based on the experimental data alone: since ATE > 0, a female should be treated.
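As a purely numerical check on Example 2, the lines below (ours, reusing identify_itt_probs and the female p_obs / p_exp dictionaries from the § 5.4 sketch) verify the two deterministic properties and recover PB = 0.28 directly.

# Check of Example 2 (females), reusing identify_itt_probs and the
# p_obs / p_exp dictionaries defined in the Section 5.4 sketch.
q = identify_itt_probs(p_obs, p_exp)
assert abs(q[(1, 0, 1)]) < 1e-9    # Pr(Y=1 | X*=1, X<-0) = 0: never recovers if untreated
assert abs(q[(0, 1, 0)]) < 1e-9    # Pr(Y=0 | X*=0, X<-1) = 0: always recovers if treated
pX1 = p_obs[1][1] + p_obs[1][0]    # Pr(X* = 1) = 0.70
# Given those zeros, benefit (recovery if treated but not if untreated) occurs with
# probability q[(1,1,1)] in the X*=1 stratum and 1 - q[(0,0,1)] in the X*=0 stratum:
PB = pX1 * q[(1, 1, 1)] + (1 - pX1) * (1 - q[(0, 0, 1)])
print(round(PB, 2))                # 0.28, so PH = PB - ATE = 0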
Example 3. For males, the observational joint probabilities are as in Table 3.

            Y = 1    Y = 0
  X = 1     0.49     0.21     0.70
  X = 0     0.21     0.09     0.30
            0.70     0.30     1

Table 3. Joint observational distribution of (X, Y) for males

Proceeding similarly to Example 2, we find that PB and PH are again identified exactly: PB = 0.49, PH = 0.21. Indeed, Pr(Y = 1, X = 1) = 0.49 = Pr(Y = 1 | X ← 1), implying the deterministic property Pr(Y = 1 | X∗ = 0, X ← 1) = 0: a male who does not desire treatment will never recover if treated. Consequently such a male should not be treated. Also, Pr(Y = 1, X = 0) = 0.21 = Pr(Y = 1 | X ← 0), so that Pr(Y = 1 | X∗ = 1, X ← 0) = 0: a male who desires treatment will never recover if untreated, so that such a male should be treated.

However, if we do not observe X∗ for the target male patient, the above does not tell us how to proceed. We might try to balance PH (= 0.21) and PB (= 0.49) somehow: for example, treat just when PB > λPH for some chosen value of λ.
In the light of the clinical maxim primum non nocere, a value λ = 3 might be chosen—in which case the target male would not be treated.6 By contrast, in the absence of knowledge of X∗ for the target male, the DT approach would take no account of the observational data, again focusing simply on ATE = 0.28—and so decide to treat. In a large population of similar cases, this would lead to an overall recovery rate of 49%, the maximum possible; whereas the above strategy based on balancing PB and PH would only have a 21% recovery rate. It is difficult to see how this could be regarded as ethical.

6 This argument parallels one in MP [1], having different numbers, and λ = 2.

Example 4. Consider another case. Again, the interventional probabilities are Pr(Y = 1 | X ← 1) = 0.49, Pr(Y = 1 | X ← 0) = 0.21, with τ = 0.28. Now the observational joint probabilities are as in Table 4.

            Y = 1    Y = 0
  X = 1     0.2      0.5      0.7
  X = 0     0.1      0.2      0.3
            0.3      0.7      1

Table 4. Another joint observational distribution of (X, Y)
We compute

0.7 × τ(1) = 0.09
0.3 × τ(0) = 0.19
0.7 × ρ(1) = −0.39
0.3 × ρ(0) = 0.09.

Using (8) we find 0.28 ≤ ξ ≤ 0.52, whence PB = (ξ + τ)/2 is bounded between 0.28 and 0.40—and so PH = PB − τ lies between 0 and 0.12. So, even with the aid of the additional observational data, we have not been able to identify these probabilities exactly.
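These figures can be reproduced with the sketch function pb_ph_bounds given in § 5.5; the few Python lines below (ours, and purely illustrative) encode Table 4 and the experimental recovery rates in the same dictionary format assumed there.

# Reproducing the Example 4 intervals with the sketch from Section 5.5.
p_obs = {1: {1: 0.2, 0: 0.5}, 0: {1: 0.1, 0: 0.2}}      # Table 4: Pr(Y = y, X = x)
p_exp = {1: {1: 0.49, 0: 0.51}, 0: {1: 0.21, 0: 0.79}}  # Pr(Y = y | X <- x)
pb_interval, ph_interval = pb_ph_bounds(p_obs, p_exp)
print(pb_interval, ph_interval)   # approximately (0.28, 0.40) and (0.0, 0.12)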
And even if we were to resolve the ambiguity somehow, for example by taking the midpoints of these intervals as suggested by Li and Pearl [3], we would be no better off than we were in Example 3, where trying to balance PB against PH could lead to a decision opposite to the recommendation of the simple DT analysis, so leading to fewer recoveries.

7 Assumptions and critical comments

Here we identify and discuss some of the assumptions underlying the foregoing analyses.

7.1 Representative data

A fundamental assumption underlying both the decision-theoretic analysis of § 2 and the alternative approach of § 3 is that the data available for estimating the interventional probabilities Pr(Y = y, L = l | X ← x) are on individuals who can be regarded as "similar to" ("exchangeable with") the target case, so that these estimated probabilities are applicable to the target.7 In reality this is highly implausible. For example, a clinical trial will have entry criteria and processes that make its subjects quite untypical of the population from which they are drawn, or indeed of the individuals recruited into another such trial. In any case, despite the name, entry criteria govern who does not get into a trial: they cannot guarantee that those who enter are representative even of a target individual meeting the same criteria. A clinical trial gains its value, not from representativeness, but from the internal randomisation that ensures that a comparison between its treated and untreated groups is indeed a comparison of like with like, and that valid probability statements can be made about likely differences, so enforcing internal validity. Because of unrepresentativeness it would not be appropriate to regard Pr(Y = y, L = l | X ← x), estimated from the data, as being directly relevant to the target case—the problem of external validity. (One cheating way round this is to focus on a hypothetical target individual who can be regarded as exchangeable with those in the study.)
Nevertheless, it may still be reasonable to regard the estimated ATE or CATE as applying to the target—if not in its exact numerical value, at least in its sign, which is what is required, for DT application, to solve the single patient treatment problem; or in its ordering of the CATE_i, as required to solve the DT unit selection problem.

To underline how unreasonable the representativeness assumption is, it should be noted that even when clinical trials with similar protocols are compared this assumption is not made. A striking example of its failure for nearly identical protocols is given by the TARGET study [18], in which osteoarthritis patients in some centres were randomised to receive either lumiracoxib or naproxen, and patients in other centres either lumiracoxib or ibuprofen. The degree of comparability in design of the two sub-studies thus defined was greater than one would typically expect between two randomised controlled trials (RCTs), and a fortiori than between an RCT and an observational study, such as MP consider. Nevertheless, very important differences at baseline were seen between the two sub-studies, even though within-sub-study treatment arms were comparable. Furthermore, it was possible to demonstrate differences at outcome between the two studies using lumiracoxib data only, a striking illustration of a study effect. It is generally accepted by sponsors and regulators that as soon as concurrent control is abandoned the greatest of care must be taken in drawing inferences.

7 For application to the MP arguments of § 3, the representativeness assumption should apparently be extended to the (typically unidentifiable) bivariate distribution, along with the other variables, of the pair of potential responses (Y(1), Y(0)). For the interval-valued inferences made, however, this is not crucial, since these allow for arbitrary dependence in this bivariate distribution.
Modern work on using data on historical controls to try and improve the efficiency of clinical trials takes such study-to-study variation as a given that must be allowed for [19].

7.2 Combination of data

An essential requirement for the application of Theorem 1 is that the observational and experimental datasets comprise similar individuals, so that the same probabilities for X, X∗, Y apply to both groups. This is even more implausible than the representativeness of either group. In particular, the assumption of a common distribution for the desired treatment X∗, in both datasets and in the target patient, is vital but highly questionable. Even if we were to accept the arguments of MP [2] based on combining observational and experimental data, without this property they are simply irrelevant.

7.3 What do clinical trialists do in practice?

The key to using RCTs is to identify reasonable assumptions, and use theory to transfer results from trial to practice. A striking example is given by bioequivalence studies. The subjects are usually young healthy volunteers, frequently male. However, the results will be used to decide on appropriate treatments for elderly frail patients, some female. There is no pretence of representativeness. Instead, tight control and sensible scales of analysis are used. The purpose of such studies is to compare two formulations in terms of bioavailability, and this is typically done using a cross-over trial in which each subject is their own control, the order of administration being randomised.
On separate days, concentrations of the test and reference pharmaceuticals are measured, and the ratio of the areas under the two concentration-time curves (AUCs) is calculated for each subject, then analysed over all subjects, typically after log-transformation. What is relevant for treating an individual patient is their own AUC: too low and efficacy may be disappointing, too high and the drug may not be tolerated. However, no inference is made from a bioequivalence study in terms of AUCs alone, since they would be quite different in healthy volunteers and patients. Instead, the idea is that the ratio between test and reference ought to be the same in volunteers and patients, and this ratio can be used to make predictions as to how the test drug will behave in clinical practice.

An interesting example of such a study is reported by Shumaker and Metzler [20]. They used a more elaborate design in which test and reference drugs were given in a double cross-over, thus permitting them to analyse the formulation-by-subject interaction. They were able to demonstrate that there was no evidence of an individual bioequivalence effect: although you could estimate the individual relative bioavailability, using the average over all subjects would be superior to any such naïve estimate. This raises a further issue with MP, who assume that individual causal effects are stable over time. Moreover, typical causal analysis assumes an infinite sample size, but no infinities are available for individual subjects, and estimating individual causal effects requires close attention to components of variance.

Bioequivalence studies are an extreme example, but the general idea of transferring results using a suitable scale for analysis, and back-transforming to a scale suitable for decision analysis, is commonplace: see [21] for a general discussion and [22] in the specific context of vaccine efficacy. Of course, as the COVID-19 pandemic has reminded us, there are no guarantees.
Things that work at one time may not do so at another. It behoves all those proposing solutions to be cautious and humble.

8 Summary

We have given careful accounts of the DT and MP approaches to individualised treatment choice. The DT approach is simple in the extreme, and selects the treatment strategy that maximises the number of recoveries. In contrast, the MP approach fixates on philosophically questionable and unknowable counterfactual concepts, and when its recommendations differ from those of DT will lead to fewer recoveries. This has been illustrated in a number of examples.

One feature of the MP approach is the combination of experimental and observational data. When some very strong, and practically implausible, conditions are satisfied, this permits identification of the distribution of a special covariate, the intention to treat (ITT). As with any other covariate whose distribution is known, this can then feed back to tighten the MP inferences. But it would be better to observe this—or any other—covariate in the target patient, which would then lead to better results from the DT point of view. In particular we have shown that, in just those very special cases in which use of the ITT leads to point identification of the MP probabilities of benefit and of harm, knowledge of the target patient's ITT value allows perfect prediction of the outcome under at least one of the treatment interventions, and so leads to a trivial solution to the decision problem.

The DT approach has a long history of fruitful application to an enormous variety of fields, from clinical trials to rocket science.
Attempts to replace it with another approach, based on counterfactuals, are totally unnecessary and dangerously misguided. This approach should not be used in practice.

Acknowledgments

We have benefited greatly from discussions with Mats Stensrud and Aaron Sarvet.

Conflict of interest: Prof. Philip Dawid is a member of the Editorial Board of the Journal of Causal Inference but was not involved in the review process of this article.

References

[1] Scott Mueller and Judea Pearl. Which patients are in greater need: A counterfactual analysis with reflections on COVID-19. Blog post, April 2020.
[2] Scott Mueller and Judea Pearl. Personalized decision making – a conceptual introduction. Technical Report 513, Department of Computer Science, UCLA, 2022.
[3] Ang Li and Judea Pearl. Unit selection based on counterfactual logic. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pages 1793–1799. International Joint Conferences on Artificial Intelligence Organization, July 2019. DOI:10.24963/ijcai.2019/248.
[4] Ang Li and Judea Pearl. Unit selection: Case study and comparison with A/B test heuristic. Preprint, UCLA, 2022.
[5] Kosuke Imai, Zhichao Jiang, D. James Greiner, Ryan Halen, and Sooahn Shin. Experimental evaluation of algorithm-assisted human decision-making: Application to pretrial public safety assessment (with Discussion). Journal of the Royal Statistical Society, Series A, 2022. To appear.
[6] Jonathan G. Richens, Rory Beard, and Daniel H. Thompson. Counterfactual harm. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), 2022. https://arxiv.org/abs/2204.12993.
[7] Eli Ben-Michael, Kosuke Imai, and Zhichao Jiang. Policy learning with asymmetric utilities, 2022.
[8] Aaron L. Sarvet and Mats J. Stensrud. Perspectives on harm in personalized medicine. Submitted to the American Journal of Epidemiology, 2022.
[9] Donald B. Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66:688–701, 1974.
[10] Jerzy Neyman. On the application of probability theory to agricultural experiments. Essay on principles (in Polish). Roczniki Nauk Rolniczych, X:1–51, 1923. English translation of Section 9 (D. M. Dabrowska and T. P. Speed): Statistical Science 9 (1990), 465–480.
[11] A. Philip Dawid. Causal inference without counterfactuals (with Discussion). Journal of the American Statistical Association, 95:407–448, 2000.
[12] A. Philip Dawid. Decision-theoretic foundations for statistical causality. Journal of Causal Inference, 9:39–77, 2021. DOI:10.1515/jci-2020-0008.
[13] A. Philip Dawid. Probability, causality and the empirical world: A Bayes/de Finetti/Popper/Borel synthesis. Statistical Science, 19:44–57, 2004.
[14] A. Philip Dawid and Monica Musio. What can group level data tell us about individual causality? In A. Carriquiry, J. Tanur, and W. Eddy, editors, Statistics in the Public Interest: In Memory of Stephen E. Fienberg, pages 235–256. Springer International Publishing, 2022. DOI:10.1007/978-3-030-75460-0_13.
[15] James M. Robins, Tyler J. VanderWeele, and Thomas S. Richardson. Comment on "Causal effects in the presence of non compliance: A latent variable interpretation" by Antonio Forcina. Metron, LXIV:288–298, 2007.
[16] Sara G. Geneletti and A. Philip Dawid. Defining and identifying the effect of treatment on the treated. In Phyllis M. Illari, Federica Russo, and Jon Williamson, editors, Causality in the Sciences, pages 728–749. Oxford University Press, 2011.
[17] Mats J. Stensrud and Aaron L. Sarvet. Optimal regimes for algorithm-assisted human decision-making. arXiv preprint arXiv:2203.03020, 2022.
[18] Stephen Senn. Lessons from TGN1412 and TARGET: Implications for observational studies and meta-analysis. Pharmaceutical Statistics, 7:294–301, 2008.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' Phar- maceutical Statistics, 7:294–301, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' [19] Heinz Schmidli, Sandro Gsteiger, Satrajit Roychoudhury, Anthony O’Hagan, David Spiegelhalter, and Beat Neuen- schwander.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' Robust meta-analytic-predictive priors in clinical trials with historical control information.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' Biometrics, 70:1023–1032, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' [20] Robert C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' Shumaker and Carol M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' Metzler.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' The phenytoin trial is a case study of “individual bioequivalence”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' Drug Information Journal, 32:1063–1072, 1998.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' [21] Jacobus Lubsen and Jan G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' Tijssen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' Large trials with simple protocols: Indications and contraindications.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' Controlled Clinical Trials, 10:151–160, 1989.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' [22] Stephen Senn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' The design and analysis of vaccine trials for COVID-19 for the purpose of estimating efficacy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} +page_content=' Pharma- ceutical Statistics, 21:790–807, 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-NFLT4oBgHgl3EQfCi71/content/2301.11976v1.pdf'} diff --git a/.gitattributes b/.gitattributes index 600886d2cb22aced49b1ee2a8e7d2c57c59af4c2..d11829397db19b6ccb42cd18d7f6e5e8ffa701b6 100644 --- a/.gitattributes +++ b/.gitattributes @@ -4058,3 +4058,72 @@ stFKT4oBgHgl3EQf1y5_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -tex NtE3T4oBgHgl3EQfwwva/content/2301.04706v1.pdf filter=lfs diff=lfs merge=lfs -text a9E4T4oBgHgl3EQfoQ0b/content/2301.05182v1.pdf filter=lfs diff=lfs merge=lfs -text 9tFJT4oBgHgl3EQfoyxM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +R9E4T4oBgHgl3EQfLAxE/content/2301.04934v1.pdf filter=lfs diff=lfs merge=lfs -text +stE_T4oBgHgl3EQf8xx6/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +ldAyT4oBgHgl3EQfk_iC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +k9AzT4oBgHgl3EQfNfvd/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +RdA0T4oBgHgl3EQfDv9U/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +xdE0T4oBgHgl3EQf-gIr/content/2301.02814v1.pdf filter=lfs diff=lfs merge=lfs -text +1tE4T4oBgHgl3EQfaQwc/content/2301.05062v1.pdf filter=lfs diff=lfs merge=lfs -text +R9E4T4oBgHgl3EQfLAxE/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +O9AyT4oBgHgl3EQftflG/content/2301.00595v1.pdf filter=lfs diff=lfs merge=lfs -text +bNAyT4oBgHgl3EQfivjL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +7NE2T4oBgHgl3EQfPQZw/content/2301.03757v1.pdf filter=lfs diff=lfs merge=lfs -text +rdE1T4oBgHgl3EQf2wXf/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +WNE2T4oBgHgl3EQfDgYF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +C9E4T4oBgHgl3EQfFwy_/content/2301.04889v1.pdf filter=lfs diff=lfs merge=lfs -text +RdE3T4oBgHgl3EQfygvM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +7tAyT4oBgHgl3EQfpvj9/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +W9AzT4oBgHgl3EQfmf0I/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +ANAyT4oBgHgl3EQfq_kk/content/2301.00551v1.pdf filter=lfs diff=lfs merge=lfs -text +HtAyT4oBgHgl3EQfrvl-/content/2301.00566v1.pdf filter=lfs diff=lfs merge=lfs -text +EdAyT4oBgHgl3EQfSfdI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +1tE4T4oBgHgl3EQfaQwc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +K9E4T4oBgHgl3EQfiQ1n/content/2301.05132v1.pdf filter=lfs diff=lfs merge=lfs -text +bdFST4oBgHgl3EQfCjh5/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +rdAzT4oBgHgl3EQfcfxL/content/2301.01403v1.pdf filter=lfs diff=lfs merge=lfs -text +mtFPT4oBgHgl3EQf4zVh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +HtAyT4oBgHgl3EQfrvl-/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +iNE1T4oBgHgl3EQfzgXI/content/2301.03446v1.pdf filter=lfs diff=lfs merge=lfs -text +7NE2T4oBgHgl3EQfPQZw/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +5dE4T4oBgHgl3EQfBQvo/content/2301.04851v1.pdf filter=lfs diff=lfs merge=lfs -text +59FKT4oBgHgl3EQfTC3B/content/2301.11778v1.pdf filter=lfs diff=lfs merge=lfs -text +NNE2T4oBgHgl3EQfBgZ1/content/2301.03603v1.pdf filter=lfs diff=lfs merge=lfs -text +iNE1T4oBgHgl3EQfzgXI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +stE4T4oBgHgl3EQfVwwe/content/2301.05026v1.pdf filter=lfs diff=lfs merge=lfs -text +K9E4T4oBgHgl3EQfiQ1n/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +ytFAT4oBgHgl3EQfBBwz/content/2301.08401v1.pdf filter=lfs diff=lfs merge=lfs -text 
+rdAzT4oBgHgl3EQfcfxL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +FtAzT4oBgHgl3EQfHPtI/content/2301.01041v1.pdf filter=lfs diff=lfs merge=lfs -text +ANE3T4oBgHgl3EQfsQvt/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +_NFLT4oBgHgl3EQfDi7Y/content/2301.11980v1.pdf filter=lfs diff=lfs merge=lfs -text +XtFRT4oBgHgl3EQf-zgt/content/2301.13692v1.pdf filter=lfs diff=lfs merge=lfs -text +9tFST4oBgHgl3EQfbTje/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +EdAyT4oBgHgl3EQfSfdI/content/2301.00086v1.pdf filter=lfs diff=lfs merge=lfs -text +_NFLT4oBgHgl3EQfDi7Y/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +5tE5T4oBgHgl3EQfPQ4_/content/2301.05503v1.pdf filter=lfs diff=lfs merge=lfs -text +b9AzT4oBgHgl3EQfnf2W/content/2301.01582v1.pdf filter=lfs diff=lfs merge=lfs -text +MNAzT4oBgHgl3EQfV_xV/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +8dE4T4oBgHgl3EQfdQww/content/2301.05089v1.pdf filter=lfs diff=lfs merge=lfs -text +NNE2T4oBgHgl3EQfBgZ1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +ctE3T4oBgHgl3EQfeAqc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +h9A0T4oBgHgl3EQfIP-u/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +ANAyT4oBgHgl3EQfq_kk/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +qdE1T4oBgHgl3EQfPgMa/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +qdE1T4oBgHgl3EQfPgMa/content/2301.03027v1.pdf filter=lfs diff=lfs merge=lfs -text +f9E0T4oBgHgl3EQfpAES/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +C9E4T4oBgHgl3EQfFwy_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +PdFRT4oBgHgl3EQf5zjP/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +UNAzT4oBgHgl3EQfJvtM/content/2301.01084v1.pdf filter=lfs diff=lfs merge=lfs -text +O9AyT4oBgHgl3EQftflG/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +FtAzT4oBgHgl3EQfHPtI/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +b9AzT4oBgHgl3EQfnf2W/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +stE4T4oBgHgl3EQfVwwe/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +INA0T4oBgHgl3EQfB__n/content/2301.01985v1.pdf filter=lfs diff=lfs merge=lfs -text +9tAzT4oBgHgl3EQf-_7r/content/2301.01943v1.pdf filter=lfs diff=lfs merge=lfs -text +h9A0T4oBgHgl3EQfIP-u/content/2301.02073v1.pdf filter=lfs diff=lfs merge=lfs -text +5dE4T4oBgHgl3EQfBQvo/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +wdFRT4oBgHgl3EQfgjd8/content/2301.13580v1.pdf filter=lfs diff=lfs merge=lfs -text +jtFST4oBgHgl3EQfHTjm/content/2301.13725v1.pdf filter=lfs diff=lfs merge=lfs -text +UNAzT4oBgHgl3EQfJvtM/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +yNFST4oBgHgl3EQfTDh5/content/2301.13768v1.pdf filter=lfs diff=lfs merge=lfs -text diff --git a/19AyT4oBgHgl3EQf1fl-/content/tmp_files/2301.00736v1.pdf.txt b/19AyT4oBgHgl3EQf1fl-/content/tmp_files/2301.00736v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..3705b295e0e49a7dfce04d53dde7f7e06ece65a7 --- /dev/null +++ b/19AyT4oBgHgl3EQf1fl-/content/tmp_files/2301.00736v1.pdf.txt @@ -0,0 +1,5336 @@ +Mixed moving average field guided learning for spatio-temporal +data +Imma Valentina Curato∗ , Orkun Furat† and Bennet Str¨oh ‡ +January 3, 2023 +Abstract +Influenced mixed moving average fields are a versatile modeling class for spatio-temporal data. +However, their predictive distribution is not generally accessible. 
Under this modeling assumption, +we define a novel theory-guided machine learning approach that employs a generalized Bayesian +algorithm to make predictions. We employ a Lipschitz predictor, for example, a linear model or +a feed-forward neural network, and determine a randomized estimator by minimizing a novel PAC +Bayesian bound for data serially correlated along a spatial and temporal dimension. Performing causal +future predictions is a highlight of our methodology as its potential application to data with short +and long-range dependence. We conclude by showing the performance of the learning methodology +in an example with linear predictors and simulated spatio-temporal data from an STOU process. +MSC 2020: primary 60E07, 60E15, 60G25, 60G60; secondary 62C10. +Keywords: stationary models, weak dependence, oracle inequalities, randomized estimators, causal predic- +tions. +1 +Introduction +Modeling spatio-temporal data representing measurements from a continuous physical system introduces +various methodological challenges. These include finding models that can account for the serial correlation +typically observed along their spatial and temporal dimensions and simultaneously have good prediction +performance. Statistical models used nowadays to analyze spatio-temporal data are Gaussian processes +[5], [23], [53], and [63]; spatio-temporal kriging [19], and [46]; space-time autoregressive moving average +models [30]; point processes [31], and hierarchical models [19]. An important common denominator of +statistical modeling is that they enable predictions once the variogram (covariance) structure or the data +distribution (up to a set of parameters) is carefully chosen in relation to the studied phenomenon and +practitioners’ experience. +This paper aims to define a novel theory-guided (or physics-informed) machine learning methodology +for spatio-temporal data. By this name, go all hybrid procedures that use a stochastic (or deterministic) +∗Ulm +University, +Institute +of +Mathematical +Finance, +Helmholtzstrae +18, +89069 +Ulm, +Germany. +E-mail: +imma.curato@uni-ulm.de. +†Ulm University, Institute of Stochastics, Helmholtzstrae 18, 89069 Ulm, Germany. E-mail: orkun.furat@uni-ulm.de. +‡Imperial College, Department of Mathematics, South Kensington Campus, SW7 2AZ London, United Kingdom. E- +mail: b.stroh@imperial.ac.uk. +1 +arXiv:2301.00736v1 [stat.ML] 2 Jan 2023 + +model in synergy with a specific data science one. Such methodologies have started to gain prominence +in several scientific disciplines such as earth science, quantum chemistry, bio-medical science, climate +science, and hydrology modeling as, for example, described in [41], [51], [50], and [54]. As in the statistical +models cited above, we model the spatial-temporal covariance structure of the observed data. However, +we perform predictions using a generalized Bayesian algorithm. Let us start by introducing the stochastic +model involved in our methodology. +We assume throughout to observe data ( ˜Zt(x))(t,x)∈T×L on a regular lattice L ⊂ Rd for d ≥ 1 across +times T = {1, . . . , N} such that the decomposition +˜Zt(x) = µt(x) + Zt(x) +(1) +holds and no measurement errors are present in the observations. Here, µt(x) is a deterministic function, +and Zt(x) are considered realizations from a zero mean stationary (influenced) mixed moving average field +(in brief, MMAF). 
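To make the decomposition (1) operational, one first removes an estimate of the deterministic component µt(x) and then works with the residual field, which is modelled as the MMAF Zt(x). The short Python sketch below is purely illustrative and assumes, for simplicity, that µt(x) = µ(x) does not depend on time and is estimated by a pixel-wise temporal average; any other deterministic trend estimate could be used instead.

import numpy as np

def detrend_frames(frames_obs):
    # frames_obs: observed frames Z~_t(x), array of shape (N, n1, n2).
    # Returns a trend estimate and the zero-mean residual field, cf. (1).
    mu_hat = frames_obs.mean(axis=0, keepdims=True)   # pixel-wise temporal mean
    z_hat = frames_obs - mu_hat                       # treated as realizations of the MMAF Z
    return mu_hat, z_hat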
When the spatial dimension d = 2, a category of data that falls in our assumptions is +the one of frame images through time (also known as video data or multidimensional raster data). Early +applications of MMAFs in image modeling can be found in [39]. +An MMAF is defined as +Zt(x) = +� +H +� +At(x) +f(A, x − ξ, t − s) Λ(dA, dξ, ds), (t, x) ∈ R × Rd, +(2) +where f is a deterministic function called kernel, H is denoting a non-empty topological space, Λ a L´evy +basis and At(x) is a so-called ambit set [7], we refer the reader to Section 2 for more details on the +definition (2). Examples of such models can be found in [7], [11] [47], and [48]. +A significant feature of MMAFs is that they provide a direct way to specify a model for an observed +physical phenomenon based on a probabilistic understanding of the latter as exemplified by choice of +the L´evy basis and the kernel function appearing in (2). Choosing an opportune distribution Λ allows +us to work with Gaussian and non-Gaussian distributed models. Moreover, the random parameter A in +the kernel function allows us to model short and long-range temporal and spatial dependence. A further +highlight of these models is that their autocovariance functions can be exponential or power decaying by +choosing an exponential kernel function f, an opportune distribution of the random parameter A, and +assuming that the L´evy basis Λ has finite second-order moments. Therefore, such types of autocovariance +functions can be obtained without the need for further assumptions on the distribution of the random +field. MMAFs with such properties are, for example, the spatio-temporal Ornstein-Uhlenbeck process (in +brief, STOU) [47] and its mixed version called the MSTOU process [48]. We also know that the entire +class of MMAFs is θ-lex weakly dependent. This notion of dependence has first been introduced in [22], +and it is more general than α∞,v-mixing for random fields as defined in [24] for v ∈ N∪{∞} and α-mixing +[14] in the particular case of stochastic processes. +Although the MMAFs are versatile models, only few results in the literature are to be found con- +cerning their predictive distribution. To our knowledge, the only explicit result concerns Gaussian STOU +processes defined on cone-shaped ambit sets, see [47, Theorem 13]. We then learn a predictor h ∈ H, for +H the class of the Lipschitz functions, by determining a randomized estimator ˆρ (i.e., a regular condi- +tional probability measure) on H using generalized Bayesian learning, see [33] for a review. We call our +methodology mixed moving average field guided learning. This procedure is applicable, for example, when +2 + +using linear models and feed-forward neural networks. The learning task on which we focus is making a +one-step-ahead prediction of the field Z in a given spatial position x∗. +The methodology starts by computing the θ-lex coefficients of the underlying MMAF. If the analyzed +field has finite second-order moments, then such coefficients can be obtained following the calculations in +Section 2.5. We use this information to select a set of input features from (Zt(x))(t,x)∈T×L and determine +a training set S. We then prove a PAC Bayesian bound for the sampled data S. We ultimately deter- +mine a randomized estimator by minimization of the PAC Bayesian bound. The acronym PAC stands +for Probably Approximately Correct and may be traced back to [66]. 
A PAC inequality states that with +an arbitrarily high probability (hence ”probably”), the performance (as provided by a loss function) of +a learning algorithm is upper-bounded by a term decaying to an optimal value as more data is collected +(hence ”approximately correct”). PAC-Bayesian bounds have proven over the past two decades to suc- +cessfully address various learning problems such as classification, sequential or batch learning, and deep +learning [28]. Indeed, they are a powerful probabilistic tool to derive theoretical generalization guarantees. +To the best of our knowledge, the PAC Bayesian bounds determined in Section 3.2 are the first results +in the literature obtained for data serially correlated along a spatial and temporal dimension. +It is important to emphasize that using a randomized estimator over a classical supervised learning +methodology has the same advantages as a Bayesian approach, i.e., it allows a deeper understanding of +the uncertainty of each possible h ∈ H. Moreover, we can enable the analysis of aggregate or ensemble +predictors ˆh = ˆρ[h]. Despite these similarities, generalized Bayesian learning substantially differs from +the classical Bayesian learning approach. In the latter, we specify a prior distribution, a statistical model +connecting output-input pairs called likelihood function, and determine the unique posterior distribution +by Bayes’ theorem. When using generalized Bayes to determine a randomized estimator, no assumptions +on the likelihood function are required but just a loss function and a so-called reference distribution. These +ingredients, together with a PAC Bayesian bound, are employed to determine a randomized predictor, +which is unique just under a specific set of assumptions, see Section 3.2. +1.1 +Outline and Contributions +Between the data-science models used to tackle predictions for spatio-temporal data, we find deep learn- +ing, see [4], [54], [61] and [62] for a review, and video frame prediction algorithms as in [45] and [68]. Deep +learning techniques are increasingly popular because they successfully extract spatio-temporal features +and learn the inner law of an observed spatio-temporal system. However, these models lack interpretabil- +ity, i.e., it is not possible to disentangle the causal relationship between variables in different spatial-time +points, and typically no proofs of their generalization (predictive) abilities are available. On the other +hand, [45] and [68] are methodologies retaining a causal interpretation, see discussion below, but do not +have proven generalization (predictive) performance. +Given a model class H, mixed moving average field guided learning selects a predictor h ∈ H that +has the best generalization performance for the analyzed prediction task. Moreover, the MMAF modeling +framework has a causal interpretation when using cone-shaped ambit sets. To explain this point, we +borrow the concept of lightcone from special relativity that describes the possible paths that the light +can make in space-time leading to a point (t, x) and the ones that lie in its future. In the context of our +paper, we use their geometry to identify the points in space-time having causal relationships. Let c > 0, +for a point (t, x), and by using the Euclidean norm to assess the distance between different space-time +3 + +points, we define a lightcone as the set +Alight +t +(x) = +� +(s, ξ) ∈ R × Rd : ∥x − ξ∥ ≤ c|t − s| +� +. 
+(3) +The set Alight with respect to the point (t, x) can be split into two disjoint sets, namely, At(x) and +At(x)+. The set At(x) is called past lightcone, and its definition corresponds to the one of a cone-shaped +ambit set +At(x) := +� +(s, ξ) ∈ R × Rd : s ≤ t and ∥x − ξ∥ ≤ c|t − s| +� +. +(4) +The set +At(x)+ = {(s, ξ) ∈ R × Rd : s > t and ∥x − ξ∥ ≤ c|t − s|}, +(5) +is called instead the future lightcone. By using an influenced MMAF on a cone-shaped ambit set as the +underlying model, we implicitly assume that the following sets +l−(t, x) = {Zs(ξ) : (s, ξ) ∈ At(x) \ (t, x)} and l+(t, x) = {Zs(ξ) : (s, ξ) ∈ At(x)+} +(6) +are respectively describing the values of the field that have a direct influence on Zt(x) and the future field +values influenced by Zt(x). We can then uncover the causal relationships described above by estimating +the constant c from observed data, called throughout the speed of information propagation in the physical +system under analysis. A similar approach to the modeling of causal relationships can be found in [45], +[58], and [68]. In [45], and [58], the sets (6) are considered and employed to discover coherent structures, see +[37] for a formal definition, in spatio-temporal physical systems and to perform video frame prediction, +respectively. Also, in [68], predictions are performed by embedding spatio-temporal information on a +Minkowski space-time. Hence, the concept of lightcones enters into play in the definition of their algorithm. +In machine learning, we typically have two equivalent approaches towards causality: structural causal +models, which rely on the use of directed acyclical graphs (DAG) [49], and Rubin causal models, which +rely upon the potential outcomes framework [57]. The concept of causality employed in this paper can be +inscribed into the latter. In fact, by using MMAFs on cone-shaped ambit sets, the set l+(t, x) describes +the possible future outcomes that can be observed starting from the spatial position (t, x). +The paper is structured as follows. In Section 2, we introduce the MMAF framework and define +STOU and MSTOU processes. In Section 3, we introduce the notations that allow us to bridge the +MMAF framework (that by definition is continuous in time and space) with a data science one (that +by definition is discrete in time and space). Important theoretical preliminaries and the input-features +extraction method can be found in Section 3.1. We then prove PAC Bayesian bounds (also of oracle type) +in Section 3.2 for Lipschitz predictors, among which we discuss the shape of the bound for linear models +and feed-forward neural networks. We then focus on linear predictors and show in Section 3.3 how to +select the best one to be used in a given prediction task. We give in Section 4 an explicit procedure to +perform one-step ahead casual future predictions in such a framework. In conclusion, we apply our theory- +guided machine learning methodology to simulated data from an STOU process driven by a Gaussian +and a NIG-distributed L´evy basis. Appendix A contains further details on the weak dependence measures +employed in the paper, and a review of the estimation methodologies for STOU and MSTOU processes. +Appendix B contains detailed proofs of the results presented in the paper. +4 + +2 +Mixed moving average fields +2.1 +Notations +Throughout the paper, we indicate with N the set of positive integers and R+ the set of non-negative +real numbers. 
As usual, we write Lp(Ω) for the space of (equivalence classes of) measurable functions +f : Ω → R with finite Lp-norm ∥f∥p. When Ω = Rn, ∥x∥1 and ∥x∥ denotes the L1-norm and the Euclidean +norm, respectively, and we define ∥x∥∞ = maxj=1,...,n |x(j)|, where x(j) represents the component j of +the vector x. +To ease the notations in the following sections, unless it is important to keep track of both time and +space components separately, we often indicate the index set R × Rd with R1+d. A ⊂ B denotes a not +necessarily proper subset A of a set B, |B| denotes the cardinality of B and dist(A, B) = infi∈A,j∈B∥i − +j∥∞ indicates the distance of two sets A, B ⊂ R1+d. Let n, k ≥ 1, and F : Rn → Rk, we define ∥F∥∞ = +supt∈Rd∥F(t)∥. Let Γ = {i1, . . . , iu} ⊂ R1+d for u ∈ N, we define the random vector ZΓ = (Zi1, . . . , Ziu). +In general, we use bold notations when referring to random elements. +In the following Lipschitz continuous is understood to mean globally Lipschitz. For u, n ∈ N, G∗ +u +is the class of bounded functions from Ru to R and Gu is the class of bounded, Lipschitz continuous +functions from Ru to R with respect to the distance ∥ · ∥1. Moreover, we call L(Ω) the set of all Lipschitz +functions h on Ω with respect to the distance ∥ · ∥1 and define the Lipschitz constant as +Lip(h) = sup +x̸=y +|h(x) − h(y)| +∥x − y∥1 +. +(7) +Hereafter, we often use the lexicographic order on R1+d. Let the pedex t and s be indicating +a temporal and spatial coordinate. For distinct elements y = (y1,t, y1,s . . . , yd,s) ∈ R1+d and z = +(z1,t, z1,s . . . , zd,s) ∈ R1+d we say y 0. The definition of the set +V r +t is also used when referring to the lexicographic order on Z1+d. +2.2 +Definition and properties +Let S = H × R × Rd, where H ⊂ Rq for q ≥ 1, and the Borel σ-algebra of S be denoted by B(S) and let +Bb(S) contain all its Lebesgue bounded sets. +Definition 2.1. A family of R-valued random variables Λ = {Λ(B) : B ∈ Bb(S)} is called a L´evy basis +on (S, Bb(S)) if it is an independently scattered and infinitely divisible random measure. This means that: +(i) For a sequence of pairwise disjoint elements of Bb(S), say {Bi, i ∈ N}: +– Λ(� +n∈N Bn) = � +n∈N Λ(Bn) almost surely when � +n∈N Bn ∈ Bb(S) +– and Λ(Bi) and Λ(Bn) are independent for i ̸= j. +(ii) Let B ∈ Bb(S). Then, the random variable Λ(B) is infinitely divisible, i.e. for any n ∈ N, there +exists a law µn such that the law µΛ(B) can be expressed as µΛ(B) = µ∗n +n , the n-fold convolution of +µn with itself. +5 + +For more details on infinitely divisible distributions, we refer the reader to [59]. In the following, we will +restrict ourselves to L´evy bases which are homogeneous in space and time and factorizable, i.e., L´evy +bases with characteristic function +E +� +eiuΛ(B)� += eΦ(u)Π(B) +(8) +for all u ∈ R and B ∈ Bb(S), where Π = π ×λ1+d is the product measure of the probability measure π on +H and the Lebesgue measure λ1+d on R×Rd. Note that when using a L´evy basis defined on S = R×Rd, +Π = λ1+d. Furthermore, +Φ(u) = iγ u − 1 +2σ2u2 + +� +R +� +eiux − 1 − iux1[0,1](|x|) +� +ν(dx) +(9) +is the cumulant transform of an ID distribution with characteristic triplet (γ, σ2, ν), where γ ∈ R, σ2 ≥ 0 +and ν is a L´evy-measure on R, i.e. +ν({0}) = 0 +and +� +R +� +1 ∧ x2� +ν(dx) < ∞. +The quadruplet (γ, σ2, ν, π) determines the distribution of the L´evy basis entirely, and therefore it +is called its characteristic quadruplet. 
An important random variable associated with the L´evy basis, is +the so-called L´evy seed, which we define as the random variable Λ′ having as cumulant transform (9), +that is +E +� +eiuΛ′� += eΦ(u). +(10) +By selecting different L´evy seeds, it is easy to compute the distribution of Λ(B) for B ∈ Bb(S) when +S = R × Rd. In the following two examples, we compute the L´evy bases used in generating the data sets +in Section 4.1. +Example 2.2 (Gaussian L´evy basis). Let Λ′ ∼ N(γ, σ2), then its characteristic function is equal to +exp(iγu − 1 +2σ2u2). Because of (8), we have, in turn, that the characteristic function of Λ(B) is equal to +exp(iγuλ1+d(B) − 1 +2σ2λ1+d(B)u2). In conclusion, Λ(B) ∼ N(γλ1+d(B), σ2λ1+d(B)) for any B ∈ Bb(S). +Example 2.3 (Normal Inverse Gaussian L´evy basis). Let K1 denote the modified Bessel function of the +third order and index 1. Then, for x ∈ R, the NIG distribution is defined as +f(x : α, β, µ, δ) = αδ(π2(δ2 + (x − µ)2))− 1 +2 exp(δ +� +α2 − β2 + β(x − µ))K1(α +� +δ2 + (x − µ)2), +where α, β, µ and δ are parameters such that µ ∈ R, δ > 0 and 0 ≤ |β| < α. Let Λ′ ∼ NIG(α, β, µ, δ), +then by (8) we have that Λ(B) ∼ NIG(α, β, µλ1+d(B), δλ1+d(B)) for all B ∈ Bb(S). +We now follow [7], and [10] to formally define ambit sets. +Definition 2.4. A family of ambit sets (At(x))(t,x)∈R×Rd ⊂ R × Rd satisfies the following properties: +� +� +� +� +� +� +� +� +� +At(x) = A0(0) + (t, x), (Translation invariant) +As(x) ⊂ At(x), for s < t +At(x) ∩ (t, ∞) × Rd = ∅. (Non-anticipative). +(11) +6 + +We consider throughout At(x) to be defined as in (4). We further assume that the random fields +in the paper are defined on a given complete probability space (Ω, F, P), equipped with the filtration of +influence (in the sense of Definition 3.8 in [22]) F = (F(t,x))(t,x)∈R×Rd generated by Λ and the family of +ambit sets (At(x))(t,x)∈R×Rd ⊂ R × Rd, i.e., each F(t,x) is the σ-algebra generated by the set of random +variables {Λ(B) : B ∈ Bb(H × At(x))}. We call our fields adapted to the filtration of influence F if +Zt(x) is measurable with respect to the σ-algebra F for each (t, x) ∈ R × Rd. Moreover, we work with +stationary random fields. Moreover, in the following, we use the term stationary instead of spatio-temporal +stationary. +Definition 2.5 (Spatio-temporal stationarity). We say that Zt(x) is spatio-temporal stationary if for ev- +ery n ∈ N, τ ∈ R, u ∈ Rd, t1, . . . , tn ∈ R and x1, . . . , xn ∈ Rd, the joint distribution of (Zt1(x1), . . . , Ztn(xn)) +is the same as that of (Zt1+τ(x1 + u), . . . , Ztn+τ(xn + u)). +Definition 2.6 (MMAF). Let Λ = {Λ(B), B ∈ Bb(S)} a L´evy basis, f : H × R × Rd → R a B(S)- +measurable function and At(x) an ambit set. Then, the stochastic integral (2) is adapted to the filtration F, +stationary, and its distribution is infinitely divisible. We call the R-valued random field Z an (influenced) +mixed moving average field and f its kernel function. +Remark 2.7. On a technical level, we assume all stochastic integrals in this paper to be well defined in +the sense of Rajput and Rosinski [52]. For more details, including sufficient conditions on the existence +of the integral as well as the explicit representation of the characteristic triplet of the MMAF’s infinitely +divisible distribution (which can be directly determined from the characteristic quadruplet of Λ), we refer +to [22, Section 3.1]. In the latter, there can also be found a multivariate definition of a L´evy basis and +an MMAF. 
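As an illustration of Examples 2.2 and 2.3, note that the marginal law of Λ(B) is fully determined by the Lebesgue measure of B. The following Python sketch (illustrative only) samples from these marginals; leb_B stands for λ1+d(B), and the NIG draw uses the standard inverse-Gaussian variance-mean mixture representation rather than the density given above.

import numpy as np

rng = np.random.default_rng(0)

def gaussian_levy_basis(leb_B, gamma, sigma, size=1):
    # Example 2.2: Lambda(B) ~ N(gamma * Leb(B), sigma^2 * Leb(B))
    return rng.normal(gamma * leb_B, sigma * np.sqrt(leb_B), size)

def nig_levy_basis(leb_B, alpha, beta, mu, delta, size=1):
    # Example 2.3: Lambda(B) ~ NIG(alpha, beta, mu * Leb(B), delta * Leb(B)),
    # drawn as a normal variance-mean mixture with inverse-Gaussian mixing variable W.
    mu_B, delta_B = mu * leb_B, delta * leb_B
    g = np.sqrt(alpha**2 - beta**2)
    w = rng.wald(delta_B / g, delta_B**2, size)       # W ~ IG with mean delta_B/g, shape delta_B^2
    return mu_B + beta * w + np.sqrt(w) * rng.standard_normal(size)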
+2.3 +Autocovariance Structure +Moment conditions for MMAFs are typically expressed in function of the characteristic quadruplet of its +driving L´evy basis and the kernel function f. +Proposition 2.8. Let Zt(x) be an R-valued MMAF driven by a L´evy basis with characteristic quadruplet +(γ, σ2, ν, π) with kernel function f : H × R × Rd → R and defined on an ambit set At(x) ⊂ R × Rd. +(i) If +� +|x|>1 |x|ν(dx) < ∞ and f ∈ L1(H × R × Rd, π × λ1+d) ∩ L2(H × R × Rd, π × λ1+d) the first +moment of Zt(x) is given by +E[Zt] = E(Λ′) +� +H +� +At(x) +f(A, −s, −ξ)ds dξ π(dA), +where E(Λ′) = γ + +� +|x|≥1 x ν(dx). +7 + +(ii) If +� +R x2 ν(dx) < ∞ and f ∈ L2(H × R × Rd, π × λ1+d), then Zt(x) ∈ L2(Ω) and +V ar(Zt(x)) = V ar(Λ′) +� +H +� +R×Rd f(A, −s, −ξ)2ds dξ π(dA), +Cov(Z0(0), Zt(x)) = V ar(Λ′) +� +H +� +A0(0)∩At(x) +f(A, −s, −ξ)f(A, t − s, x − ξ) ds dξ π(dA) +and +Corr(Z0(0), Zt(x)) = +� +H +� +A0(0)∩At(x) f(A, −s, −ξ)f(A, t − s, x − ξ) ds dξ π(dA) +� +H +� +R×Rd f(A, −s, −ξ)2ds dξ π(dA) +, +(12) +where V ar(Λ′) = σ2 + +� +Rd xx′ν(dx). +(iii) Consider σ2 = 0 and +� +|x|≤1 |x| ν(dx) < ∞. If +� +|x|>1 |x|ν(dx) < ∞ and f ∈ L1(H ×R×Rd, π×λ1+d), +the first moment of Zt(x) is given by +E[Zt(x)] = +� +H +� +At(x) +f(A, −s, −ξ) +� +γ0 + +� +R +xν(dx) +� +ds dξ π(dA), +where +γ0 := γ − +� +|x|≤1 +x ν(dx). +(13) +Proof. Immediate from [59, Section 25] and [22, Theorem 3.3]. +From Proposition 2.8, we can evince that the autocovariance function of an MMAF depends on the +variance of the L´evy seed Λ′, the kernel function f and the distribution π of the random parameter A. +Important examples of MMAFs are the spatio-temporal Ornstein-Uhlenbeck field (STOU) and the +mixed spatio-temporal Ornstein-Uhlenbeck field (MSTOU), whose properties have been thoroughly ana- +lyzed in [47] and [48], respectively. In Definition 2.9 and 2.11, we give the formal definitions of such fields +and the explicit expression of their autocovariance functions. +Definition 2.9. Let Λ = {Λ(B), B ∈ Bb(S)} be a L´evy basis, f : R×Rd → R a B(S)-measurable function +defined as f(s, ξ) = exp(−As) for A > 0, and At(x) be defined as in (4). Then, the STOU defined as +Zt(x) := +� +At(x) +exp(−A(t − s))Λ(ds, dξ) +(14) +is adapted to the filtration F, stationary, Markovian and its distribution is ID. Moreover, let u ∈ Rd, +τ ∈ R, and E[Zt(x)2] ≤ ∞. Then, +Cov(Zt(x), Zt+τ(x + u)) = V ar(Λ′) exp(−Aτ) +� +At(x)∩At+τ (x+u) +exp(−2A(t − s))ds dξ, and +(15) +Corr(Zt(x), Zt+τ(x + u)) = +exp(−Aτ) +� +At(x)∩At+τ (x+u) exp(−2A(t − s)) ds dξ +� +At(x) exp(−2A(t − s)) ds dξ +. +(16) +8 + +Example 2.10. Let d = 1 +ρT (τ) := Corr(Zt(x), Zt+τ(x)) = exp(−A|τ|), +(17) +ρS(u) := Corr(Zt(x), Zt(x + u)) = exp +� +− A|u| +c +� +, +(18) +ρST (τ, u) := Corr(Zt(x), Zt+τ(x + u)) = min +� +exp(−A|τ|), exp +� +− A|u| +c +�� +. +(19) +An STOU exhibits exponential temporal autocorrelation (just like the temporal Ornstein-Uhlenbeck +process) and has a spatial autocorrelation structure determined by the shape of the ambit set. In addition, +this class of fields admits non-separable autocovariances, which are desirable in practice. +An MSTOU process is defined by mixing the parameter A in the definition of an STOU process; +that is, we assume that A is a random variable with support in H = (0, ∞). This modification allows the +determination of random fields with power-decaying autocovariance functions. +Definition 2.11. 
Let Λ = {Λ(B), B ∈ Bb(S)} be a L´evy basis with characteristic quadruplet (γ, σ2, ν, π), +f : (0, ∞) × R × Rd → R a B(S)-measurable function defined as f(A, s, ξ) = exp(−As), and At(x) be +defined in (4). Moreover, let l(A) be the density of π with respect to the Lebesgue measure such that +� ∞ +0 +1 +Ad+1 l(A)dA ≤ ∞. +Then, the MSTOU defined as +Zt(x) := +� ∞ +0 +� +At(x) +exp(−A(t − s))Λ(dA, ds, dξ) +(20) +is adapted to the filtration F, stationary and its distribution is ID. Moreover, let u ∈ Rd and τ ∈ R, and +E[Zt(x)2] ≤ ∞ +Cov(Zt(x), Zt+τ(x + u)) = V ar(Λ′) exp(−Aτ) +� ∞ +0 +� +At(x)∩At+τ (x+u) +exp(−2A(t − s))ds dξ l(A)dA, +(21) +Corr(Zt(x), Zt+τ(x + u)) = +exp(−Aτ) +� ∞ +0 +� +At(x)∩At+τ (x+u) exp(−2A(t − s)) ds dξ f(A)dA +� ∞ +0 +� +At(x) exp(−2A(t − s)) ds dξ l(A)dA +. +(22) +Example 2.12. Let l(A) = +βα +Γ(α)Aα−1 exp(−βA), the Gamma density with shape and rate parameters, +α > d + 1 and β > 0. For d = 1, u ∈ R and τ ∈ R +V ar(Zt(x)) = +V ar(Λ′)cβ2 +2(α − 2)(α − 1) +(23) +Cov(Zt(x), Zt+τ(x + u)) = +V ar(Λ′)cβα +2(β + max{|τ|, |u|/c})α−2(α − 2)(α − 1), +(24) +ρST (τ, u) := Corr(Zt(x), Zt+τ(x + u)) = +� +β +β + max{|τ|, |u|/c} +�α−2 +. +(25) +9 + +2.4 +Isotropy, short and long range dependence +Definition 2.13 (Isotropy). Let t ∈ R and x ∈ Rd. A spatio-temporal random field (Zt(x))(t,x)∈R×Rd is +called isotropic if its spatial covariance: +Cov(Zt(x), Zt(x + u)) = C(|u|), +for some positive definite function C. +STOU and MSTOU processes defined on cone-shaped ambit sets are isotropic random fields. +Moreover, we consider the following definitions of temporal and spatial short and long-range depen- +dence in the paper, as given in [48]. +Definition 2.14 (Short and long range dependence). A spatio-temporal random field (Zt(x))(t,x)∈R×Rd +is said to have temporal short-range dependence if +� ∞ +0 +Cov(Zt(x), Zt+τ(x)) dτ < ∞, +and long-range temporal dependence if the integral above is infinite. Similarly, an isotropic random field +has short-range spatial dependence if +� ∞ +0 +C(r) dr < ∞, +where Cov(Zt(x), Zt(x + u)) = C(|u|) and r = |u|. It is said to have long-range spatial dependence if +the integral is infinite. +If (Zt(x))(t,x)∈R×Rd is, for example, an STOU process then Z admits temporal and spatial short- +range dependence. When Z is an MSTOU process, short and long-range dependence models can be +obtained by carefully modeling the distribution of the random parameter A. +Example 2.15. Let us consider the model discussed in Example 2.12. Set u = 0, then Z has temporal +short-range dependence for α > 3, because +� ∞ +0 +Cov(Zt(x), Zt+τ(x)) dτ = +cβαV ar(Λ′) +2(α − 2)(α − 1) +� ∞ +0 +(β + τ)−(α−2) dτ += +cβ3V ar(Λ′) +2(α − 1)(α − 2)(α − 3). +This integral is infinite for 2 < α ≤ 3, and the process has long-range temporal dependence. We obtain +spatial short- or long-range dependence for the same choice of parameters. In fact, for r = |u| and τ = 0, +and α > 3 +� ∞ +0 +C(r) dr = +cβαV ar(Λ′) +2(α − 2)(α − 1) +� ∞ +0 +(β + r/c)−(α−2) dτ += +cβ3V ar(Λ′) +2(α − 1)(α − 2)(α − 3), +converges, whereas the integral above diverges for 2 < α ≤ 3. +10 + +2.5 +Weak dependence coefficients +MMAFs are θ-lex weakly dependent random fields. For v ∈ N ∪ {∞}}, the latter is a dependence notion +more general than α∞,v-mixing, see Lemma A.5. +Definition 2.16. Let Z = (Zt)t∈R1+d be an Rn-valued random field. Then, Z is called θ-lex-weakly +dependent if +θlex(r) = sup +u,v∈N +θu,v(r) −→ +r→∞ 0, +where +θu,v(r) = sup +�|Cov(F(ZΓ), G(ZΓ′))| +∥F∥∞vLip(G) +� +and F ∈ G∗ +u, G ∈ Gv; Γ = {ti1, . . . 
, tiu} ⊂ R1+d, Γ′ = {tj1, . . . , tjv} ⊂ R1+d such that |Γ| = u, |Γ′| = v +and Γ ⊂ V r +Γ′ = �v +l=1 V r +tjl for tjl ∈ Γ′. We call (θlex(r))r∈R+ the θ-lex-coefficients. +In the MMAF modeling framework, we can show general formulas for the computation of upper +bounds of the θ-lex coefficients. The latter are given as a function of the characteristic quadruplet of the +driving L´evy basis Λ and the kernel function f in (2). +Proposition 2.17. Let Λ be an R-valued L´evy basis with characteristic quadruplet (γ, σ2, ν, π), f : +H × R1+d → R a B(H × R1+d)-measurable function and Zt(x) be defined as in (2). +(i) If +� +|x|>1 x2ν(dx) < ∞, γ+ +� +|x|>1 xν(dx) = 0 and f ∈ L2(H ×R1+d, π×λ1+d), then Z is θ-lex-weakly +dependent and +θlex(r) ≤ 2 +� � +H +� ρ(r) +−∞ +V ar(Λ′) +� +∥ξ∥≤cs +f(A, −s, −ξ)2dsdξπ(dA) +� 1 +2 . +(ii) If +� +|x|>1∥x∥2ν(dx) < ∞ and f ∈ L2(H × R1+d, π × λ1+d) ∩ L1(H × R1+d, π × λ1+d), then Z is +θ-lex-weakly dependent and +θlex(r) ≤ 2 +� � +H +� ρ(r) +−∞ +V ar(Λ′) +� +∥ξ∥≤cs +f(A, −s)2 dsdξπ(dA) ++ +���� +� +S +� ρ(r) +−∞ +E(Λ′) +� +∥ξ∥≤cs +f(A, −s) dsdξπ(dA) +���� +2� 1 +2 +. +(iii) If +� +R |x| ν(dx) < ∞, σ2 = 0 and f ∈ L1(H × R1+d, π × λ1+d) with γ0 defined in (13), then Z is +θ-lex-weakly dependent and +θlex(r) ≤ 2 +� � +H +� ρ(r) +−∞ +� +∥ξ∥≤cs +|f(A, −s)γ0| ds dξπ(dA) ++ +� +H +� ρ(r) +−∞ +� +∥ξ∥≤cs +� +R +|f(A, −s)x| ν(dx) dsπ(dA) +� +. +11 + +The results above hold for all r > 0 with +ρ(r) = +−r min(1/c, 1) +� +(d + 1)(c2 + 1) +, +(26) +V ar(Λ′) = σ2 + +� +R x2 ν(dx) and E(Λ′) = γ + +� +|x|≥1 xν(dx). +Given the results of Proposition 2.17, it is then possible to compute upper bounds for the θ-lex- +coefficients of an MSTOU process. +Corollary 2.18. Let Z be an MSTOU process as in Definition 2.11 and (γ, σ2, ν, π) be the characteristic +quadruplet of its driving L´evy basis. Moreover, let the mean reversion parameter A be Gamma(α, β) +distributed with density l(A) = +βα +Γ(α)Aα−1 exp(−βA) where α > d + 1 and β > 0. +(i) If +� +|x|>1 x2 ν(dx) < ∞ and γ + +� +|x|>1 xν(dx) = 0, then Z is θ-lex-weakly dependent. Let c ∈ [0, 1], +then for +d = 1, +θlex(h) ≤ 2 +�cV ar(Λ′)βα +2Γ(α) +� +Γ(α − 2) +(2ψ(h) + β)α−2 + 2ψ(h)Γ(α − 1) +(2ψ(h) + β)α−1 +�� 1 +2 +, +and for +d ≥ 2, +θlex(h) ≤ 2 +� +Vd(c)d!V ar(Λ′)βα +2d+1 +d +� +k=0 +(2ψ(h))k +k!(2ψ(h) + β)α−d−1+k +Γ(α − d − 1 + k) +Γ(α) +� 1 +2 +. +Let c > 1, then for +d ∈ N, θlex(h) ≤ 2 +� +Vd(c)d!V ar(Λ′)βα +2d+1 +d +� +k=0 +� +2ψ(h) +c +�k +k! +� +2ψ(h) +c ++ β +�α−d−1+k +Γ(α − d − 1 + k) +Γ(α) +� 1 +2 +. +The above implies that, in general, θlex(h) = O(h +(d+1)−α +2 +). +(ii) If +� +R |x| ν(dx) < ∞, Σ = 0 and γ0 as defined in (13), then Zt(x) is θ-lex-weakly dependent. Let +c ∈ (0, 1], then for +d ∈ N, +θlex(h) ≤ 2Vd(c)d!βαγabs +d +� +k=0 +ψ(h)k +k!(ψ(h) + β)α−d−1+k +Γ(α − d − 1 + k) +Γ(α) +, +whereas for c > 1 and +d ∈ N, +θlex(h) ≤ 2Vd(c)d!βαγabs +d +� +k=0 +� +ψ(h) +c +�k +k! +� +ψ(h) +c ++ β +�α−d−1+k +Γ(α − d − 1 + k) +Γ(α) +, +where γabs = |γ0| + +� +R |x|ν(dx), and Vd(c) denotes the volume of the d-dimensional ball with radius +c. the above implies that, in general, θlex(h) = O(h(d+1)−α). +Proof. Proof of this corollary can be obtained by modifying the proof of [22, Section 3.7] in line with the +calculations performed in Proposition 2.17. +12 + +For d = 1, 2, when the MMAF has a kernel function with no spatial component, we can compute a +more explicit bound for the θ-lex coefficients. +Proposition 2.19. 
Let Λ be an R-valued L´evy basis with characteristic triplet (γ, σ2, ν, π) and f : H × +R → R a B(H × R)-measurable function not depending on the spatial dimension, i.e. +Zt(x) = +� +H +� +At(x) +f(A, t − s)Λ(dA, ds, dξ), +(t, x) ∈ R1+d. +(27) +(i) For d = 1, if that +� +|x|>1 x2ν(dx) < ∞ and γ + +� +|x|>1 xν(dx) = 0, then Z is θ-lex weakly dependent +and +θlex(r)≤2 +� +2V ar(Λ′) +� ∞ +0 +� +A0(0)∩A0(r min(2,c)) +f(A, −s)2dsdξπ(dA) +�1/2 +=2 +� +2Cov(Z0(0), Z0(r min(2, c))), +where V ar(Λ′) = σ2 + +� +R x2 ν(dx). +(ii) For d = 2, if +� +|x|>1 x2ν(dx) < ∞ and γ + +� +|x|>1 xν(dx) = 0, then then Z is θ-lex weakly dependent +and +θlex(r) ≤ 2 +� +2Cov +� +Z0(0, 0), Z0 +� +r min +� +1, c +√ +2 +� +, r min +� +1, c +√ +2 +��� ++ 2Cov +� +Z0(0, 0), Z0 +� +r min +� +1, c +√ +2 +� +, −r min +� +1, c +√ +2 +��� �1/2 +. +Assumption 2.20. In general, we indicate the bounds of the θ-lex coefficients determined in Proposition +2.17, Corollary 2.18 or Proposition 2.19 using the sequence (˜θlex(r)))r∈R+ where +θlex(r) ≤ 2˜θlex(r). +Example 2.21. Let d = 1 and Z be an STOU as in Definition 2.9. If +� +|x|>1 x2 ν(dx) ≤ ∞, γ + +� +|x|>1 x ν(dx) = 0, then Z is θ-lex weakly dependent with +˜θlex(r) = +� � +A0(0)∩(A0(ψ)∪A0(−ψ)) +exp(2As)ds dξ +� 1 +2 += +� +V ar(Λ′) +� +A0(0)∩(A0(ψ)∪A0(−ψ)) +exp(2As) ds dξ +� 1 +2 +≤ +� +2V ar(Λ′) +� +A0(0)∩A0(ψ) +exp(2As) ds dξ +� 1 +2 += +� +2V ar(Λ′) +� − ψ +2c +−∞ +� −cs +ψ+cs +exp(2As) ds dξ +� 1 +2 = +� c +A2 V ar(Λ′) exp +�−Aψ +c +�� 1 +2 +13 + += +� c +A2 V ar(Λ′) exp +� +− A min(2, c) +c +� +�� +� +2λ +r +�� 1 +2 += +� +2Cov(Z0(0), Z0(r min(2, c))) := ¯α exp(−λr), +where λ > 0 and ¯α > 0. Because the temporal and spatial autocovariance functions of an STOU are +exponential, see (15), the model admits spatial and temporal short-range dependence. +Example 2.22. Let d = 1 and Z be an MSTOU as in Definition 2.11. Moreover, let us define the density +l(A) as in Example 2.12. If +� +|x|>1 x2 ν(dx) ≤ ∞, γ + +� +|x|>1 x ν(dx) = 0, then Z is θ-lex weakly dependent +with +˜θlex(r) ≤ +� c +A2 V ar(Λ′) +� ∞ +0 +exp +�−Aψ +c +� +π(dA) +� 1 +2 += +� +V ar(Λ′)cβα +(β + ψ/c)α−2(α − 2)(α − 1) +� 1 +2 += +� V ar(Λ′)cβα +(α − 2)(α − 1) +� +β + r min(2, c) +c +�−(α−2)� 1 +2 += +� +2Cov(Z0(0), Z0(r min(2, c))) := ¯αr−λ, +where λ = α−2 +2 +and ¯α > 0. As already addressed in Example 2.15, for 2 < α ≤ 3, that is 0 < λ ≤ 1 +2, +the model admits temporal and spatial long-range dependence whereas for α > 3, that is λ > 1 +2 the model +admits temporal and spatial short range dependence. +Statistical inference for STOU and MSTOU processes is reviewed in Appendix A. Such method- +ologies can be applied to the entire class of influenced mixed moving average fields (as long as certain +moment conditions are fulfilled). We remind the reader that the parameter c is involved in the definition of +the ambit set At(x) and therefore of the lightcone (3) which we assume modeling the causal relationships +between spatial-time points. Other parameters of interest appear in the kernel function or represent the +variance of the L´evy seed Λ′. We can then determine an estimate of the decay rate of the θ-lex coefficients, +which is given, for example, by the parameter λ in Examples 2.21 and 2.22. +In the MMAF framework, we can also find time series models. The latter are θ-weakly dependent, +a notion of dependence satisfied by causal stochastic processes and defined as follows. +Definition 2.23. Let Z = (Zt)t∈R be an Rn-valued stochastic process. 
Then, Z is called θ-weakly +dependent if +θ(k) = sup +u∈N +θu(k) −→ +k→∞ 0, +where +θu(k) = sup +�|Cov(F(Zi1, . . . , Zi1), G(Zj1))| +∥F∥∞Lip(G) +� +. +and F ∈ Gu, G ∈ G1; i1 ≤ i2 ≤ . . . ≤ iu ≤ iu + k ≤ j1. We call (θ(K))k∈R+ the θ-coefficients. +Example 2.24 (Time series case). The supOU process studied in [6] and [11] is an example of a causal +mixed moving average process. Let the kernel function f(A, s) = e−As1[0,∞)(s), A ∈ R+, s ∈ R and Λ a +14 + +L´evy basis on R+ × R with generating quadruple (γ, σ2, ν, π) such that +� +|x|>1 +log(|x|) ν(dx) < ∞, and +� +R+ +1 +Aπ(dA) < ∞, +(28) +then the process +Zt = +� +R+ +� t +−∞ +e−A(t−s) Λ(dA, ds) +(29) +is well defined for each t ∈ R and strictly stationary and called a supOU process where A represents a +random mean reversion parameter. +If E(Λ′) = 0 and +� +|x|>1 |x|2ν(dx) < ∞, the supOU process is θ-weakly dependent with coefficients +θZ(r) ≤ +� � +R+ +� r +−∞ +e−2Asσ2 ds π(dA) +� 1 +2 = +� +V ar(Λ′) +� +R+ +e−2Ar +2A +π(dA) +� 1 +2 +(30) += Cov(Z0, Z2r) +1 +2 , +by using Theorem 3.11 [11] and where V ar(Λ′) = σ2 + +� +R x2ν(dx). +If E(Λ′) = µ and +� +|x|>1 |x|2ν(dx) < ∞, the supOU process is θ-weakly dependent with coefficients +θZ(r) ≤ +� +Cov(Z0, Z2r) + +4µ2 +V ar(Λ′)2 Cov(Z0, Zr)2� 1 +2 . +(31) +If +� +R |x|ν(dx) < ∞, σ2 = 0, γ0 = γ − +� +|x|≤1 x ν(dx) > 0 and ν(R−) = 0, then the supOU process admits +θ-coefficients +θZ(r) ≤ µ +� +R+ +e−Ar +A +π(dA), +(32) +and when in addition +� +|x|>1 |x|2ν(dx) < ∞ +θZ(r) ≤ +2µ +V ar(Λ′)Cov(Z0, Zr). +(33) +Note that the necessary and sufficient condition +� +R+ 1 +A π(dA) for the supOU process to exist is +satisfied by many continuous and discrete distributions π, see [64, Section 2.4] for more details. For +example, a probability measure π being absolutely continuous with density π′ = xhl(x) and regularly +varying at zero from the right with h > 0, i.e., l is slowly varying at zero, satisfies the above condition. If +moreover, l(x) is continuous in (0. + ∞) and limx→0+ l(x) > 0 exists, it holds that +Cov(Z0, Zr) ∼ C +rh , with a constant C > 0 and r ∈ R+ +where for h ∈ (0, 1) the supOU process exhibits long memory and for h > 1 short memory. In this set-up, +concrete examples where the covariances are calculated explicitly can be found in [8] and [20]. +Another interesting example of mixed moving average processes is given by the class of trawl pro- +cesses. A distinctive feature of the class of trawl processes is that it allows one to model its correlation +structure independently from its marginal distribution, see [9] for further details on their definition. In the +case of trawl processes, we also have available in the literature likelihood-based methods for estimating +15 + +their parameters; see [13] for further details. In general, the generalized method of moments is employed +to estimate the parameters of an MMA process, see [20]. +3 +Mixed moving average field guided learning +3.1 +Pre-processing N frames: input-features extraction method +Our methodology is designed to work for spatio-temporal data when the spatial dimension d ≥ 1. With- +out loss of generality, we describe the procedure for d = 2, i.e., for frame images data through time +( ˜Zt(x))(t,x)∈T×L following the decomposition (1). We then represent the regular spatial lattice L as a +frame made of a finite amount of pixels, i.e., squared-cells representing each of them a unique spatial +position x ∈ R2, see Figure 1. +Figure 1: Space-time grid with origin in (t0, x0). 
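In the sketches that follow we use an illustrative indexing convention matching Figure 1: frame number i and pixel indices (j, k) of the data array correspond to the space-time point (t0 + i ht, x0 + (j hs, k hs)).

def grid_point(i, j, k, t0=0.0, x0=(0.0, 0.0), h_t=1.0, h_s=1.0):
    # Map a frame index and pixel indices to the space-time coordinates of Figure 1.
    return t0 + i * h_t, (x0[0] + j * h_s, x0[1] + k * h_s)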
+In several applications, such as satellite imagery, a pixel refers to a spatial cell of several square +meters. In the paper, a pixel refers to the spatial point x ∈ R2 corresponding to the center of the squared +cell. We then use the name pixel and spatial position throughout interchangeably. Moreover, we call +(t0, x0) the origin of the space-time grid, see Figure 1, and ht and hs the time and space discretization +step in the observed data set. Mixed moving average guided learning has the target to determine a one-time +ahead prediction of the field Z at a given pixel position x∗, not belonging to the frame boundary. +Definition 3.1. Let (X, Y )⊤ an input-output vector, H a model class, and a predictor function h ∈ H. +We define the loss function L as +L(h(X), Y ) = |Y − h(X)|. +(34) +For ϵ > 0 be an accuracy level (specified before learning), we define the truncated absolute loss as +Lϵ(h(X), Y )) = L(h(X), Y )) ∧ ϵ. +(35) +16 + +Frame +Pixel +Origin +TimeMoreover, we define the generalization error (out-of-sample risk) +Rϵ(h) = E[Lϵ(h(X), Y )], +(36) +and the empirical error (in-sample risk) +rϵ(h) = 1 +m +m +� +i=1 +Lϵ(h(Xi), Yi). +(37) +Remark 3.2 (About the accuracy level ϵ). The boundedness of the loss function Lϵ plays a key role in +the results of the paper as it allows us to prove Proposition 3.10, which is one of the main technical tools +used to obtain PAC Bayesian bounds in Section 3.2. Different choices of the parameter ϵ will result in +different randomized predictors. Therefore ϵ is an hyper-parameter. +The topic discussed in the remark below holds for a general loss function. However, given the use +of the truncated absolute loss function in the paper, we focus on this specific example. +Remark 3.3 (Generalization gap). The function Lϵ is used to measure the discrepancy between a pre- +dicted output h(X) and the true output Y . Using the in-sample risk, we measure the performance of +a given predictor h just over an observed sample. Its theoretical counterpart represented by the out-of- +sample risk gives us the performance of a given predictor h depending on the unknown distribution of the +data P. The latter represents the measure we want to evaluate when choosing a predictor h. However, +we do not know the distribution of the data, so typically, we select a predictor h by solely evaluating its +in-sample risk. We then need a guarantee that a selected predictor will perform well when used on a set of +out-of-sample observations, i.e., not belonging to the observed data. We can also rephrase the problem as +finding a predictor h for which the difference between the out-of-sample and in-sample risk Rϵ(h) − rϵ(h) +is as small as possible. We call the latter generalization gap. The PAC framework, of which in the next +section we introduce a Bayesian version, aims to find a bound of the generalization gap that holds with +high probability P, see for example [60] and[67]. Such probability inequalities are also called generalization +bounds and give a theoretical guarantee on the performance of a predictor on any unseen data. +We now prove three preliminary results needed to define the input-features extraction method at +the end of this section. In conclusion, we use these results to define a training data set, which can preserve +the dependence structure of the data and reduce as much as possible the number of past and neighboring +space-time points used in learning. 
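Before stating these preliminaries, the following Python sketch makes Definition 3.1 concrete; it is only illustrative and assumes that the predictor h is an ordinary callable and that the m examples are stored row-wise in arrays X and Y.

import numpy as np

def truncated_abs_loss(y_pred, y_true, eps):
    # L_eps(h(X), Y) = min(|Y - h(X)|, eps), cf. (34)-(35)
    return np.minimum(np.abs(y_true - y_pred), eps)

def empirical_risk(h, X, Y, eps):
    # In-sample risk r_eps(h) of (37); the out-of-sample risk (36) replaces this
    # sample average by an expectation under the unknown data distribution.
    preds = np.array([h(x) for x in X])
    return float(truncated_abs_loss(preds, Y, eps).mean())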
+Preliminary 1: Let us consider a stationary random field (Zt(x))(t,x)∈Z×L, and select a pixel x∗ +in L not belonging to the boundary of the frame. We define +Xi = L− +p (t0 + ia, x∗), +and +Y i = Zt0+ia(x∗), for i ∈ Z, +where +L− +p (t, x) = (Zi1(ξ1), . . . , Zia(p,c)(ξa(p,c)))⊤, for +{(is, ξs) : ∥x − ξs∥ ≤ c (t − is) and 0 < t − s ≤ p} := I(t, x), c > 0 and a(p, c) := |I(t, x)|. The +parameters a > 0 and p > 0 are multiple of ht such that a = atht, p = ptht, at, pt ∈ N and +at ≥ pt + 1. Moreover, we assume throughout to store in the vectors Xi values of the fields Z in +17 + +Figure 2: Time and spatial indices in the definition of the random vector (Xi, Y i)⊤ for c = 1, pt = 2, at = +3, ht = hs = 1. In particular, the green and red pixels represent the time-spatial indices identifying the +elements of Z belonging to Xi and Y i, respectively. This sampling scheme cannot be performed starting +by a pixel x∗ at the boundary of the frame. +lexicographic order. Then, (Xi, Y i)⊤ are identically distributed for all i ∈ Z. We call ((Xi, Y i)⊤)i∈Z +a cone-shaped sampling process. An example of such a sampling scheme can be found in Figure 2. +Preliminary 2: Let us analyze the dependence structure of the stochastic processes (L(h(Xi), Y i))i∈Z +and (Lϵ(h(Xi), Y i))i∈Z for h a Lipschitz function belonging to the model class H := {h ∈ L(Ra(p,c))}. +Proposition 3.4. Let ((Xi, Y i)⊤)i∈Z be a cone-shaped sampling process from a stationary and +θ-lex weakly dependent random field (Zt(x))(t,x)∈R×Rd. Then for all h ∈ H, (L(h(Xi), Yi))i∈Z is a +θ-weakly dependent process. Moreover, for a, p > 0, k ∈ N, and r = ka − p > 0, it has coefficients +θ(k) ≤ ˜d(Lip(h)a(p, c) + 1) +�2 +˜d +E[|Zt(x) − Z(r) +t (x)|] + θlex(r) +� +, +(38) +where ˜d > 0 is a constant independent of r, and Z(r) +t (x) := Zt(x) ∧ r. +Remark 3.5 (Locally Lipschitz predictor). Let the predictor h be a locally Lipschitz function such +that h(0) = 0 and +|h(x) − h(y)| ≤ ˜c ∥x − y∥1(1 + ∥x∥1 + ∥y∥1) for x, y ∈ Ra(p,c), +for ˜c > 0. Moreover, let Z be a stationary and θ-weakly dependent random field such that |Z| ≤ C +a.s. An easy generalization of Proposition 3.4, leads to show that (L(h(Xi), Yi))i∈Z is θ-weakly +dependent with coefficients +θ(k) ≤ ˜d(˜c(1 + 2C)a(p, c) + 1) +�2 +˜d +E[|Zt(x) − Z(r) +t +(x)|] + θlex(r) +� +, +(39) +where r = ka − p > 0 for a, p > 0 and k ∈ N, ˜d > 0 is a constant independent of r, and +Z(r) +t +(x) := Zt(x) ∧ r. +MMAFs have a particular definition that allows us to obtain an explicit bound for the coefficients +18 + +Timeθlex(r), as proven in Propositions 2.17 and 2.19, and a more refined bound than (38) for the θ- +coefficients of the cone-shaped sampling process. We consider the following assumptions. +Assumption 3.6. Let T = {t0, t1, . . . , tN, N ∈ N} and L ⊂ R2, then the data set (Zt(x))(t,x)∈T×L +is drawn from (Zt(x))(t,x)∈Z×L. Moreover, the latter can be derived from a regular sampling of the +MMAF Z = (Zt(x))(t,x)∈R×R2, where Z ∈ L2(Ω). +Proposition 3.7. Let Assumption 2.20 and 3.6 hold. Then (L(h(Xi), Yi))i∈Z is θ-weakly dependent +for all h ∈ H with coefficients +θ(k) ≤ 2(Lip(h)a(p, c) + 1)�θlex(r), +(40) +where r = ka − p > 0 for a, p > 0 and k ∈ N. Moreover, for linear predictors, i.e., hβ(X) = +β0 + βT +1 X, for β := (β0, β1)⊤ ∈ B and B = Ra(p,c)+1, we have that (L(hβ(Xi), Yi))i∈Z is θ-weakly +dependent for all β ∈ B with coefficients +θ(k) ≤ 2(∥β1∥1 + 1)�θlex(r). 
+(41) +Lemma A.3 straightforwardly imply that the process Lϵ := (Lϵ(h(X)i, Y i)))i∈Z is θ-weakly depen- +dent with known θ-coefficients under the assumptions of Proposition 3.4, Remark 3.5 and Proposi- +tion 3.7. +Preliminary 3: θ-weakly dependent processes have an essential role in the paper because of the +property of the process Lϵ. We analyze now an exponential bound for such processes. +Proposition 3.8. Let (Xi)i∈Z be a Rn valued stationary θ-weakly dependent process, and f : Rn → +[a, b], for a, b ∈ R such that (f(Xi))i∈Z is itself θ-weakly dependent. Let l = ⌊ m +k ⌋, for m, k ∈ N, +such that l ≥ 2 and 0 < s < +3l +|b−a|, then +E +� +exp +� s +m +m +� +i=1 +(f(Xi) − E[f(Xi)]) +�� +≤ exp +� +s2V ar(f(X)) +2l +� +1 − s|b−a| +3l +� +� ++ s +l exp(l − 1 + s|b − a|)θ(k), (42) +E +� +exp +� s +m +m +� +i=1 +(E[f(Xi)] − f(Xi)) +�� +≤ exp +� +s2V ar(f(X)) +2l +� +1 − s|b−a| +3l +� +� ++ s +l exp(l − 1 + s|b − a|)θ(k). (43) +Remark 3.9. The assumptions of Proposition 3.8 hold, for example, if f is a Lipschitz function +or satisfies the assumptions of [21, Section 7]. Moreover, the assumption of Proposition 3.8 hold +for the function Lϵ(h(·), ·) applied to the process ((Xi, Y i)⊤)i∈Z under the assumptions analyzed in +the Preliminary 2. +We can now define a training data set S = {(X1, Y1)⊤, (X2, Y2)⊤, . . . , (Xm, Ym)⊤} as a draw from +the cone-shaped sampling process defined in the Preliminary 1, namely, +Xi = L− +p (t0 + ia, x∗), +and +Yi = Zt0+ia(x∗) for i = 1, . . . , m := +�N +at +� +, +(44) +where +19 + +L− +p (t, x) = (Zi1(ξ1), . . . , Zia(p,c)(ξa(p,c)))⊤, where (is, ξs) ∈ I(t, x) for s = 1, . . . , a(p, c). +(45) +We assume that the parameters a, p, and k follow the constraints in Table 1. Moreover, each parameter +has a precise interpretation, also summarized here. We note that each element of L− +p (t, x) belongs to +l−(t, x), where l−(t, x) is defined in (6) and identifies the set of all points in R × Rd that could possibly +influence the realization Zt(x). +Parameters +Constraints +Interpretation +a := atht +pt + 1 ≤ at ≤ +� +N +2 +� +translation vector +p := ptht +1 ≤ pt < +� +N +2 +� +− 1 +past time horizon +k +k ∈ +� +1, . . . , +� +N +at +�� +index of the θ-coefficient in Prop. 3.8. +m := N +at +2 < m < N +number of examples in S +c +c > 0 +speed of information propagation +λ +λ > 0 +decay rate of the θ-lex coefficients of Z +Table 1: Parameters involved in pre-processing N observed frames. +For each observed data set, we know the value of the constants N, ht, and hs. The remaining +parameters appearing in Table 1 have to be opportunely chosen. We explain in this section how to select +the parameter at and k by assuming the parameters pt, c, and λ are known. The selection of the parameter +pt is detailed in Section 3.3 for linear predictors, and c and λ are typically estimated from (Zt(x))(t,x)∈T×L, +see exemplary estimation methods in Sections A.2 and A.3. +The parameter at and k are selected to offset the lack of independence of the process Lϵ which is +θ-weakly dependent, as proven in the Preliminary 2. In Proposition 3.10, and Corollary 3.11 and 3.13, +we show an important building block in the proof of the PAC Bayesian bounds obtained in the next +section, which make use of an opportune selection of the parameters at and k. Such a result undergoes +minor changes when working in the class of linear predictors, feed-forward neural networks, or Lipschitz +functions. 
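To illustrate the pre-processing step, the sketch below constructs the training set S of (44)-(45) for d = 2. It is only a sketch: it assumes that the detrended frames are stored in an array of shape (N, n1, n2) indexed as in Figure 1, that the target pixel x* lies far enough from the frame boundary for the cone to remain inside the frame, and that at, pt and c have already been fixed in accordance with Table 1 (e.g. via (46) or (49)).

import numpy as np

def cone_offsets(p_t, c, h_t, h_s):
    # Pixel offsets belonging to the past light-cone I(t, x) of (45): for each
    # time lag 1, ..., p_t (in frames) keep all xi with ||x - xi|| <= c (t - s).
    offsets = []
    for lag in range(1, p_t + 1):
        reach = c * lag * h_t
        r_pix = int(np.floor(reach / h_s))
        for d1 in range(-r_pix, r_pix + 1):
            for d2 in range(-r_pix, r_pix + 1):
                if np.hypot(d1 * h_s, d2 * h_s) <= reach:
                    offsets.append((lag, d1, d2))
    return offsets                                    # |offsets| = a(p, c)

def training_set(frames, pixel, a_t, p_t, c, h_t, h_s):
    # Cone-shaped sampling (44): X_i stacks Z over I(t_0 + i a, x*) in a fixed
    # (lexicographic-style) order and Y_i = Z_{t_0 + i a}(x*).
    j1, j2 = pixel
    offsets = cone_offsets(p_t, c, h_t, h_s)
    X, Y = [], []
    for t in range(a_t, frames.shape[0], a_t):        # i = 1, 2, ..., roughly N / a_t
        X.append([frames[t - lag, j1 + d1, j2 + d2] for lag, d1, d2 in offsets])
        Y.append(frames[t, j1, j2])
    return np.asarray(X), np.asarray(Y)

The pair (X, Y) returned here feeds the empirical risk sketched after Definition 3.1; the spacing a_t between consecutive examples is precisely what offsets the serial dependence quantified by the θ-coefficients above.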
As exemplary, we show this result for linear predictors and discuss how to modify the proof +for more general model classes. We use the notation ∆(h) = Rϵ(h) − rϵ(h) and ∆′(h) = rϵ(h) − Rϵ(h) +when referring to a Lipschitz predictor; ∆(β) = Rϵ(β) − rϵ(β) and ∆′(β) = rϵ(β) − Rϵ(β), where we +call Rϵ(β) and rϵ(β) the generalization and empirical error, in the linear framework; and ∆(hnet,w) = +Rϵ(hnet,w)−rϵ(hnet,w) and ∆′(hnet,w) = rϵ(hnet,w)−Rϵ(hnet,w), where we call Rϵ(hnet,w) and rϵ(hnet,w) +the generalization and empirical error, in the case we use a feed-forward neural network. +Proposition 3.10. Let Assumption 2.20 and 3.6 hold, the parameter pt be known and r = ht(kat − pt). +Moreover, let us assume that H = {hβ(X) = β0 + βT +1 X, for β := (β0, β1)⊤ ∈ B} where B ∈ R(a(p,c)+1). +(i) Let 0 < ϵ < 3, l ≥ 9, β ∈ B and Z be a field with exponential decaying θ-lex coefficients, i.e., +�θlex(r) = ¯α exp(−λr) +for ¯α > 0, λ > 0. If +atk ≥ (λp − 1) + +� +(λp − 1)2 + 8λhtN +2λht +, +(46) +20 + +then +E +� +exp +�√ +l ∆(β) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2(∥β1∥1 + 1)¯α, +(47) +E +� +exp +�√ +l ∆′(β) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2(∥β1∥1 + 1)¯α. +(48) +(ii) Let 0 < ϵ < 3, l ≥ 9, β ∈ B and Z be a field with power decaying θ-lex coefficients +�θlex(r) = ¯αr−λ +for ¯α > 0. If +2N − atk +atk log(ak − p) ≤ λ, +(49) +then the inequalities (47) and (48) hold. +By substituting the bound (41) with (40) in the proof of Proposition 3.10 we obtain the following +result. +Corollary 3.11. Let Assumption 2.20 and 3.6 hold, the parameter pt be known and r = ht(kat − pt). +Moreover, let us assume that H = {h(X) ∈ L(Ra(p,c))}. Let 0 < ϵ < 3, l ≥ 9, h ∈ H and Z be a field +with exponential decaying θ-lex coefficients, i.e., +�θlex(r) = ¯α exp(−λr) +for ¯α > 0, λ > 0, or with power decaying θ-lex coefficients +�θlex(r) = ¯αr−λ +for ¯α > 0. If condition (46) or (49) hold, then for h ∈ H +E +� +exp +�√ +l ∆(h) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2(Lip(h)a(p, c) + 1)¯α, +(50) +E +� +exp +�√ +l ∆′(h) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2(Lip(h)a(p, c) + 1)¯α. +(51) +Corollary 3.11 applies to feed-forward neural network predictors as long as we can compute the Lip- +schitz function of the neural network. There exist numerical methods to compute the Lipschitz constants +of (deep) neural networks, see [29] and [40], which enable the computation of the inequalities (50) and +(51) for a given network. We give now an explicit version of the inequality (50) and (51) in the case a +1-layer neural network, which we define below. +Definition 3.12. Let σ : R → R an activation function, we define a 1-layer neural network as +hnet,w(X) = α0 + +K +� +l=1 +αlσ(βT +l X + γl), +where w = (α0, α1, . . . , αK, β1, . . . , βK, γ1, . . . , γK) ∈ R2K+1+Ka(p,c) := B′ for K ≥ 1. +21 + +Many frequently used activation functions are 1-Lipschitz continuous functions, meaning that their +Lipschitz constant equals one. Such property is, for example, satisfied by the activation functions ReLU, +Leaky ReLU, SoftPlus, Tanh, Sigmoid, ArcTan, or Softsign. We can then prove the following result. +Corollary 3.13. Let Assumption 2.20 and 3.6 hold, the parameter pt be known and r = ht(kat − pt). +Moreover, let us assume that H = {hnet,w(X) ∈ B′}, where the networks are defined following Definition +3.12 with a 1-Lipschitz activation function σ. 
Let 0 < ϵ < 3, l ≥ 9, w ∈ B′ and Z be a field with +exponential decaying θ-lex coefficients, i.e., +�θlex(r) = ¯α exp(−λr) +for ¯α > 0, λ > 0, or with power decaying θ-lex coefficients +�θlex(r) = ¯αr−λ +for ¯α > 0. If condition (46) or (49) hold, then +E +� +exp +�√ +l ∆(hnet,w) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2( +� +l +|αl|∥βl∥1 + 1)¯α, +(52) +E +� +exp +�√ +l ∆′(hnet,w) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2( +� +l +|αl|∥βl∥1 + 1)¯α. +(53) +Remark 3.14. Similarly, as in Corollary 3.13, using the bounds (38) or (39), Proposition 3.10 can be +generalized for Lipschitz and locally Lipschitz predictors, respectively, under the assumption that the data +are generated by a stationary θ-lex weakly dependent random field. +We then pre-process the data to obtain the largest possible training data set S starting by N +observed frames. +Assumption 3.15. We select k∗ being the minimum positive constants that satisfy (46) or (49) and a∗ +t +as the minimum constant satisfying (46) or (49) such that a∗ +t ≥ pt + 1. Therefore, +m = +� N +a∗ +t +� +and l = +� m +k∗ +� +. +Remark 3.16. Let us assume to work with linear predictors, for C > 1, if we choose +atk ≥ (λp − 1 − log(1/C)) + +� +(λp − 1 − log(1/C))2 + 8λhtN +2λht +, +(54) +or +2N − atk(1 + log(1/C)) +atk log(ak − p) +≤ λ, +(55) +then +E +� +exp +�√ +l ∆(β) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2 +C (∥β1∥1 + 1)¯α, +(56) +E +� +exp +�√ +l ∆′(β) +�� +≤ exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2 +C (∥β1∥1 + 1)¯α. +(57) +22 + +For a fixed N, this means that we can obtain tighter bounds than in (47) and (48). However, using such +results instead of Proposition 3.10 must be evaluated on a case-by-case basis. Typically, we obtain smaller +training data sets if we have to satisfy the inequalities (54) and (55). Without loss of generality, therefore, +we consider throughout just the type of bounds analyzed in Proposition 3.10, which results in Assumption +3.15. +Remark 3.17. The input-features extraction method discussed in this section, Proposition 3.10, and +Corollary 3.11 and 3.13 straightforwardly apply to a θ-weakly dependent time series models Z. In such +case, the parameter c = 0, (Xi, Y i)⊤ +i∈Z is a flat cone-shaped sampling scheme, and straightforwardly it +is a θ-weakly dependent process with coefficients satisfying the bound (38). +3.2 +PAC Bayesian bounds for MMAF generated data +We start with a brief introduction to generalized Bayesian learning. +Firstly, let π be a reference distribution on the space (H, T ), where T indicates a σ-algebra on the +space H. The reference distribution gives a structure on the space H, which we can interpret as our belief +that certain models will perform better than others. The choice of π, therefore, is an indirect way to make +the size of H come into play; see [16, Section 3] for a detailed discussion on the latter point. Therefore, +π belongs to M1 ++(H), which denotes the set of all probability measures on the measurable set (H, T ). +Secondly, let S be a training data set with m examples and values in X × Y. Moreover, let us call P +the distribution of the random vector S. We then aim to determine a posterior distribution, also called a +randomized estimator, which is the regular conditional probability measure +ˆρ : (X × Y)m × T → [0, 1], +satisfying the following properties: +• for any A ∈ T , the map S → ˆρ(S, A) : (X × Y)m × T → [0, 1] is measurable; +• for any S ∈ (X × Y)m, the map A → ˆρ(S, A) : T → [0, 1] is a probability measure in M1 ++(B). +From now on, we indicate with π[·], ˆρ[·] the expectations w.r.t. 
the reference and posterior distribu- +tions (where the latter is a conditional expectation w.r.t. S), and simply with E[·] the expectation w.r.t. +the probability distribution P. Moreover, we call ˆρ[Rϵ(h)] and ˆρ[rϵ(h)] the average generalization error +and the average empirical error, respectively. +To evaluate the generalization performance of a randomized predictor ˆρ and then have a criterion +to select such distribution, we determine a PAC bound. The latter is called in this framework a PAC +Bayesian bound and is used to give with high probability a bound on the (average) generalization gap +defined as ˆρ[Rϵ(h)] − ˆρ[rϵ(h)]. We then choose a predictor having the best generalization performance in +the model class H, see also discussion in Remark 3.3, by minimizing the PAC Bayesian bound. +As exemplary, we first show a PAC Bayesian bound for linear predictors and then discuss how to +modify the bounds to obtain probabilistic inequalities valid for more general model classes. In general, +for a given measurable space (H, T ) and for any (ρ, π) ∈ M1 ++(H)2, we indicate with +KL(ρ||π) = +� +ρ +� +log dρ +dπ +� +if ρ << π ++∞ +otherwise +23 + +the Kullback-Leibler divergence. When the model class is given in dependence of a set of parameters, as +in the case of linear predictors, T indicates the σ-algebra on the parameter space B. We then obtain the +following result. +Proposition 3.18 (PAC Bayesian bound). Let 0 < ϵ < 3, Assumption 3.6 hold and let the underlying +MMAF Z be a random field with exponential or power decaying θ-lex coefficients. Moreover, let l be +selected following Assumption 3.15 and π be a distribution on B such that π[∥β1∥1] ≤ ∞, then for any ˆρ +such that ˆρ << π, and δ ∈ (0, 1) +P +� +ˆρ[Rϵ(β)] ≤ ˆρ[rϵ(β)] + +� +KL(ˆρ||π) + log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 1 +√ +l ++ 1 +√ +l +log +� +π +� +1 + 2(∥β1∥1 + 1)¯α +��� +≥ 1 − δ +(58) +and +P +� +ˆρ[rϵ(β)] ≤ ˆρ[Rϵ(β)] + +� +KL(ˆρ||π) + log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 1 +√ +l ++ 1 +√ +l +log +� +π +� +1 + 2(∥β1∥1 + 1)¯α +��� +≥ 1 − δ. +(59) +Example 3.19. The bound in (58) holds for all randomized estimators ˆρ absolutely continuous w.r.t. a +reference distribution π given a training data set S. Let us assume that the card(B) = M for M ∈ N. We +choose throughout as reference distribution the uniform distribution π on B and the class of randomized +predictors ˆρ = δ ˆ +β, the Dirac mass concentrated on the empirical risk minimizer, i.e., ˆβ := arg infβ rϵ(β). +For a given S, we have that +KL(δβ||π) = +� +β∈B +log +�δ ˆβ{β} +π{β} +� +δ ˆβ{β} = log +1 +π{ˆβ} += log(M). +(60) +It is crucial to notice that the bigger the cardinality of the space B is, the more the term log(M) and the +bound increase. +Therefore, for a given δ ∈ (0, 1) +P +� +Rϵ(ˆβ) ≤ rϵ(ˆβ) + +� +log +�M +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 1 +√ +l ++ 1 +√ +l +log +� +π[1 + 2(∥β1∥1 + 1)¯α] +�� +≥ 1 − δ. +Let us assume to work with an accuracy level ϵ = 1, l = 10000, M = 100, ¯α = 4, δ = 0.05, and +B := {β : ∥β∥1 ≤ 1}. Then, the generalization gap, as defined in Remark 3.3, is less than 0.12 with +probability P of at least 95%. +Remark 3.20. Differently from the i.i.d. setting, the parameter l is not tuned in the bounds obtained in +Proposition 3.18; see [17] and [65] for a discussion on this topic. The choice of the parameter l in the +bounds (58) and (59) is a consequence of Assumption 3.15. The value of l is chosen to offset the lack of +independence in the observed N frames. 
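To make the magnitude of the bound in Example 3.19 concrete, the short sketch below evaluates the right-hand side of (58) for the constants of the example, using KL(δβ̂‖π) = log(M) from (60); the only additional step is bounding π[1 + 2(∥β1∥1 + 1)ᾱ] by its value at ∥β1∥1 = 1, which is valid since B = {β : ∥β∥1 ≤ 1}.

```python
import numpy as np

# Evaluation of the PAC Bayesian bound (58) in the setting of Example 3.19:
# uniform reference pi on a finite set B with card(B) = M, Dirac posterior at the ERM.
eps, l, M, alpha_bar, delta = 1.0, 10_000, 100, 4.0, 0.05

kl_term  = np.log(M)                                # KL(Dirac at ERM || uniform pi), eq. (60)
exp_term = 3 * eps**2 / (2 * (3 - eps))             # 3*eps^2 / (2*(3 - eps))
log_pi   = np.log(1 + 2 * (1.0 + 1.0) * alpha_bar)  # upper bound since ||beta_1||_1 <= 1

gap_bound = (kl_term + np.log(1 / delta) + exp_term + log_pi) / np.sqrt(l)
print(f"generalization gap <= {gap_bound:.3f} with probability >= {1 - delta:.0%}")
# prints roughly 0.112, consistent with the value 0.12 reported in Example 3.19
```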
+We now determine an oracle inequality for the randomized estimator minimizing the average em- +pirical error, also known as Gibbs posterior distribution or Gibbs estimator, first in the case of a linear +predictor and then in the general case of a Lipschitz one. +24 + +Proposition 3.21 (Oracle type PAC Bayesian bound). Let 0 < ϵ < 3, Assumption 3.6 hold and let +the underlying MMAF Z be a random field with exponential or power decaying θ-lex coefficients. Let l +be selected following Assumption 3.15, π be a distribution on B such that π[∥β1∥1] ≤ ∞, and ¯ρ be a +regular conditional distribution that is absolutely continuous w.r.t. π and has Radon-Nikodym derivative +d¯ρ +dπ = +exp(− +√ +lrϵ(β)) +π[exp(− +√ +lrϵ(β))]. Then, for all δ ∈ (0, 1) +P +� +¯ρ[Rϵ(β)] ≤ inf +ˆρ +� +ˆρ[Rϵ(β)]+ +� +KL(ˆρ||π)+log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 2 +√ +l ++ 2 +√ +l +log +� +π[1+2(∥β1∥1+1)¯α] +��� +≥ 1−δ. (61) +By employing Corollary 3.11 instead of Proposition 3.10 in the proof of the oracle inequality, we +obtain the general result below. +Corollary 3.22 (Oracle type PAC Bayesian bound for a general Lipschitz predictor). Let 0 < ϵ < 3, +Assumption 3.6 hold and let the underlying MMAF Z be a random field with exponential or power +decaying θ-lex coefficients. Let l be selected following Assumption 3.15, π be a distribution on H such that +π[Lip(h)] ≤ ∞, and ¯ρ be a regular conditional distribution that is absolutely continuous w.r.t. π and has +Radon-Nikodym derivative d¯ρ +dπ = +exp(− +√ +lrϵ(h)) +π[exp(− +√ +lrϵ(h))]. Then, for all δ ∈ (0, 1) +P +� +¯ρ[Rϵ(h)] ≤ inf +ˆρ +� +ˆρ[Rϵ(h)]+ +� +KL(ˆρ||π)+log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 2 +√ +l ++ 2 +√ +l +log +� +π[1+2(Lip(h)a(p, c)+1)¯α] +��� +≥ 1−δ. +(62) +Remark 3.23. The PAC Bayesian bound (62) also applies to Lipschitz neural network predictors. +Better generalization performances are observed, i.e., tighter bounds for the generalization gap, when +Lip(hnet,w) ≤ 1. In the case of a 1-layer neural network, for which we have an explicit bound of the Lip- +schitz constant, we can then obtain an oracle inequality by employing Corollary 3.13 instead of Corollary +3.11 in the proof of the inequality (62). Finally, using the same argument as in Remark 3.14, we can +obtain an oracle inequality also in the case of locally Lipschitz predictors. +The rate at which the bounds (61) and (62) converge to zero as m → ∞ is called rate of convergence. +It allows us to give a measure on how fast the average generalization error of ¯ρ converges to the average +best theoretical risk inf ˆρ ˆρ[Rϵ(h)], which is obtained by the so-called oracle estimator. The rate depends +on the choice made for l = +� +m +k +� +in the pre-processing step. For k = 1, we obtain the fastest possible rate +of O(m− 1 +2 ). +Example 3.24 (Rate of convergence for models with spatial and temporal long range dependence). Let +us consider a data set with N = 10000 frames drawn from a regular sampling of an MSTOU as defined +in Example 2.22 with discretization steps ht = hs = 1. Further, let the parameters at = 1000, k = 1, and +pt = 10 in the pre-processing step of our methodology. From the calculations in Example 2.15, we have +that the underlying model admits temporal and spatial long-range dependence when its θ-lex coefficients +have power decay rate 0 < λ ≤ 1 +2. Because of the relationship (49, a Gibbs estimator applies as long as the +MSTOU admits θ-lex coefficients with λ ≥ 2, 76. 
Pre-processing the data to include a larger number of observations in an example (X, Y), i.e., letting pt be a larger parameter, changes the range of applicability of the estimator only minimally. If, for example, we choose pt = 999 (the largest possible value given at = 1000), we obtain that λ ≥ 2.09.

To apply the Gibbs estimator to MSTOU processes with long range dependence, we can choose k > 1 in the pre-processing step. Note that with this choice, however, the rate of convergence of the PAC Bayesian bounds (61) and (62) becomes slower than O(m^{−1/2}). If we can also decide how to sample our data along the temporal dimension, we can opt to work with a data set where the temporal discretization step ht > 1. In so doing, the rate of convergence of the Gibbs estimators can become O(m^{−1/2}) also in the long-range dependence framework. Note that ht enters condition (49) through the term log(ak − p). Therefore, a careful selection of the parameters at, k, and pt has to be made when 0 < λ ≤ 1, so that the sampling frequency is not too low and the rate of convergence of the PAC Bayesian bound remains the desired one.

In the literature, results on the rate of convergence of PAC Bayesian bounds are available only for time series models. We review such results in the remark below to highlight the novelty of the results obtained in this section.

Remark 3.25 (Rate of convergence in PAC Bayesian bounds for heavy tailed data). In [1], the authors determine an oracle inequality for time series with a rate of O(m^{−1/2}) under the assumption that ((Xi, Yi)⊤)i∈Z is a stationary and α-mixing process [14] with coefficients (αj)j∈Z such that Σ_{j∈Z} αj < ∞. Such a bound employs the chi-squared divergence and holds for unbounded losses; it is important to highlight that the randomized estimator obtained by minimizing the PAC Bayesian bound is not a Gibbs estimator in this framework. An explicit bound for linear predictors can be found in [1, Corollary 2]. This result holds under the assumption that π[∥β∥^6] < ∞.

In [3], the authors prove oracle inequalities for a Gibbs estimator for a Lipschitz predictor and data generated by a stationary and bounded θ∞-weakly dependent process (an extension of the concept of φ-mixing discussed in [55]) or by a causal Bernoulli shift process. Models that satisfy the dependence notion just cited are causal Bernoulli shifts with bounded innovations, uniform φ-mixing sequences, and dynamical systems; see [3] for more examples. The oracle inequality is there obtained for an absolute loss function and has a rate of O(m^{−1/2}). An extension of this work to Lipschitz loss functions under φ-mixing [38] can be found in [2]. There, the authors show that an oracle inequality for a Gibbs estimator with the optimal rate O(m^{−1}) can be achieved. Note that this rate is considered optimal in the literature when using, for example, a squared loss function and independent and identically distributed data [15].

Interesting results in the literature on PAC Bayesian bounds for heavy-tailed data can also be found in [32] and [36]. However, these works assume that the underlying observations are independent.

We conclude by giving an oracle inequality for the aggregate Gibbs estimator in the general case of a Lipschitz predictor. This result is obtained by using the convexity of the absolute loss function.

Corollary 3.26 (PAC Bound for the average Gibbs predictor).
Let 0 < ϵ < 3, Assumption 3.6 hold and +let Z be a random field with exponential or power decaying θ-lex coefficients such that |Y − h(X)| < ϵ +a.s. for each h ∈ L(Ra(p,c)). Let l be selected following Assumption 3.15. Moreover, let π be a distribution +on H such that π[Lip(h)] ≤ ∞, and ¯ρ a Gibbs posterior distribution. Then, let ¯h = ¯ρ[h], for all δ ∈ (0, 1) +P +� +Rϵ(¯h) ≤ inf +ˆρ +� +ˆρ[Rϵ(h)]+ +� +KL(ˆρ||π)+log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 2 +√ +l ++ 2 +√ +l +log +� +π[1+2(Lip(h)a(p, c)+1)¯α] +��� +≥ 1−δ. +3.3 +Modeling selection for the best randomized Gibbs estimator for linear +predictors +We discuss in this section the selection of the parameter pt for H = {hβ(X) : β ∈ R(a(p,c)+1)}. For different +pt, the predictor, we aim to define changes because the cardinality of the input-features vectors Xi changes. +26 + +Therefore, choosing this parameter means performing modeling selection. To ease the notations, we will +refer to pt simply using the symbol p in the section and its related proofs. +Let the parameter space +B = +⌊ N +2 ⌋−1 +� +p=1 +Bp +where the Bp are assumed to be disjoint sets, such that for any β ∈ B, there is only one p such that +β ∈ Bp. Our modeling selection procedure is designed to select the best predictor in the class below. +Definition 3.27. For all p = 1, . . . , +� +N +2 +� +− 1, and reference distributions πp on M1 ++(Bp) such that +πp[∥β1∥1] ≤ ∞, we define the class of Gibbs estimators as the regular conditional probability measures +which are absolutely continuous w.r.t. πp and have Radon Nikodym derivative +d¯ρp +dπp += +exp(− +√ +lrϵ(β)) +πp[exp(− +√ +lrϵ(β)] +. +(63) +The proposition below uses Lemma B.6 and B.7 that can be found in Appendix B. +Proposition 3.28 (Model selection). Let 0 < ϵ < 3, and the assumptions of Proposition 3.10 and 3.18 +hold. Moreover, let +p∗ = arg inf +p +� +¯ρp +� +rϵ(β) + log(2∥β1∥1 + 3) +√ +l +� ++ 1 +√ +l +log +��N +2 +��� +. +(64) +Then for all δ ∈ (0, 1), +P +� +¯ρp∗[Rϵ(β)] ≤ inf +p +� +¯ρp +� +rϵ(β) + log(2∥β1∥1 + 3) +√ +l +� ++ 1 +√ +l +log +��N +2 +��� ++ +3ϵ2 +2(3 − ϵ) +√ +l∗ + +1 +√ +l∗ log ¯α +δ +� +≥ 1 − δ, +(65) +where l∗ = +� +⌊ N +a∗ ⌋ +k∗ +� +, and a∗, k∗ are constants depending on p∗. +Lastly, for a bounded parameter space B, we can obtain the following oracle inequalities. +Proposition 3.29 (Oracle type PAC Bayesian bound for the best Gibbs estimator). Let the assumptions +of Proposition 3.21 hold, and p∗ be defined as in (64). Moreover, let supB ∥β∥ = +1 +C for C > 0. For all +δ ∈ (0, 1) +P +� +¯ρp∗[Rϵ(β)] ≤ inf +p +� +inf +ˆρ∈M1 ++(Bp) ˆρ +� +Rϵ(β) +� ++ 2 +√ +l +2 + 3C +C ++ 2 +√ +l +KL(ˆρ||πp) + 1 +√ +l +log +��N +2 +��� ++ +3ϵ2 +(3 − ϵ) +√ +l∗ + +2 +√ +l∗ log ¯α +δ +� +≥ 1 − δ +(66) +Corollary 3.30 (PAC bound for the average best Gibbs estimator). Let the assumptions of Corollary +3.26 hold, and p∗ be defined as in Proposition 3.28. Moreover, let supB ∥β∥ = 1 +C for C > 0 and ¯β = ¯ρ[hβ]. +27 + +For all δ ∈ (0, 1), +P +� +Rϵ(¯β) ≤ inf +p +� +inf +ˆρ∈M1 ++(Bp) ˆρ +� +Rϵ(β) +� ++ 2 +√ +l +2 + 3C +C ++ 2 +√ +l +KL(ˆρ||πp) + 1 +√ +l +log +��N +2 +��� ++ +3ϵ2 +(3 − ϵ) +√ +l∗ + +2 +√ +l∗ log ¯α +δ +� +≥ 1 − δ +(67) +Corollary 3.31 (Oracle inequality for a single draw out of the best Gibbs estimator). Let the assumptions +of Proposition 3.21 hold and p∗ be defined as in (64). Moreover, let supB ∥β∥ = +1 +C for C > 0 and +β∗ = arg infB R(β). For all δ ∈ (0, 1) and ˆβ ∼ ¯ρp∗, +P¯ρp∗ +� +Rϵ[ˆβ] ≤ Rϵ(β∗) + 1 +C E[|Z|] + +3ϵ2 +(3 − ϵ) +√ +l ++ 2 +√ +l +log +��N +2 +�� +1 + 2C + 1 +C +� ¯α +δ +�� +≥ 1 − δ. 
(68)

Remark 3.32. The modeling selection procedure discussed here selects the best randomized estimator in the family of Gibbs posterior distributions in dependence on the parameter p. In the case of non-linear models, such as the 1-layer neural network of Definition 3.12, a modeling selection procedure also has to select the layer's dimension, i.e., the parameter K. Moreover, for L-layer neural networks, the parameter L enters the modeling selection procedure. In future research we aim to analyze such a selection strategy and determine a feasible procedure for applying the generalized Bayesian algorithms defined in the paper to a feed-forward neural network.

4 Predicting spatio-temporal data with MMAF guided learning and linear predictors

Let us start by giving in Table 2 an overview of all the parameters appearing in the learning methodology.

Parameters   Type                 Interpretation
ϵ            Hyperparameter       Accuracy level in the definition of the loss functions Rϵ(β) and rϵ(β)
N            Observed parameter   Number of image frames through time composing our data set
ht           Observed parameter   Discretization step along the temporal dimension
hs           Observed parameter   Discretization step along the spatial dimension
λ            Unknown parameter    Decay rate of the θ-lex coefficients of the underlying model
c            Unknown parameter    Speed of information propagation
ᾱ            Unknown parameter    Determines how tight θ̃lex(r) is as a bound of θlex(r)
pt           Unknown parameter    Length of the past information we include in each Xi
k and at     Derived parameters   Chosen according to Assumption 3.15

Table 2: Overview of parameters appearing in MMAF guided learning.

If, for example, the underlying MMAF is an STOU or an MSTOU process, i.e., we are assuming that the data admit exponential or power-decaying autocorrelation functions, estimation methodologies for their parameters can be found in [47] and [48]; see the reviews in Sections A.2 and A.3. We highlight that such a modeling set-up applies to data with short and possibly long-range spatial and temporal dependence. In general, we follow the steps below to make one-step-ahead predictions.

(i) Having observed N frames, we first use the entire data set to estimate the parameters λ and c.

(ii) We fix a pixel position x∗ and select pt (and, as a consequence, a training data set S following Assumption 3.15) using the rule (64).

(iii) We then draw β from the Gibbs estimator determined for the pixel x∗ and the training data set S defined in (ii) to obtain a prediction at time t = N + 1: we can use single draws or average over a set of different draws to this aim. The methodology employs novel input features given by L−p(N + 1, x∗). Therefore, we can make predictions at a future time point t = N + 1 as long as the set I(N + 1, x∗) has cardinality a(p, c).

Figure 3: Data set with spatial dimension d = 1. The last two frames are indicated with the blue stars, whereas the violet circles represent the points in space and time where it is possible to provide predictions with MMAF guided learning for pt = c = 1. The prediction in the time-spatial position (5, 3) lies in the intersection (between red lines) of the future light cones of the points (6, 2), (5, 2) and (4, 2) (represented with green lines), which belong to L−p(5, 3).

The procedure above can be implemented for any pixel where it is possible to determine a cone-shaped training data set S, i.e., for the pixels that do not belong to the frame boundary.
Moreover, the prediction we obtain at (N + 1, x∗) lies in the intersection of the future light cones of each input contained in L−p(N + 1, x∗), see Figure 3. This means that MMAF guided learning enables us to make predictions at space-time points that are plausible given the set of inputs we observe.

Similarly to the kriging literature, we need an inference step before being capable of delivering spatio-temporal predictions. In that literature, it is often assumed that the estimated parameters used in the calculation of kriging weights and kriging variances are the true ones; see [18, Chapter 3] and [19, Chapter 6] for a discussion of the range of applicability of such estimates. We implicitly make the same assumption when substituting the values of the parameters λ and c in the pre-processing step. It remains, however, an interesting open problem to understand how the bias of these estimates, and of the estimate of the parameter ᾱ, affects the validity of Proposition 3.10 and of the PAC Bayesian bound (58). The main difficulty of such an analysis lies in disentangling the effect of the bias of the constant c introduced in the pre-processing step, which changes the length of the input-features vector X. An optimal selection strategy for the hyperparameter ϵ based on the bound (58) is connected to the latter issue.

4.1 An example with simulated data

We simulate four data sets from a zero-mean STOU process by employing the diamond grid algorithm introduced in [47] for a spatial dimension d = 1. The data sets (Zt(x))(t,x)∈T×L are drawn from the temporal-spatial interval [0, 1000] × [0, 10], with T = 0, . . . , 20,000 and L = 0, . . . , 200, corresponding to a time and spatial discretization step ht = hs = 0.05. We choose as distribution for the Lévy seed Λ′ a normal distribution with mean µ = 0 and standard deviation σ = 0.5, and a NIG(α, β, µ, δ) distribution with α = 5, β = 0, δ = 0.2 and µ = 0. We use the latter distribution to test the behavior of MMAF guided learning on heavy-tailed data. We also generate data with different mean-reverting parameters A and use different seeds in generating the Lévy basis distribution; see the summary of the data sets' characteristics in Table 3. The constant c = 1 across all generated data sets.

Name     Mean reverting parameter   Lévy seed   Random generator seed
GAU1     A = 1                      Gaussian    1
GAU10    A = 4                      Gaussian    10
NIG1     A = 1                      NIG         1
NIG10    A = 4                      NIG         10

Table 3: Overview of the simulated data sets with c = 1 and spatial dimension d = 1.

We start our procedure with step (i), i.e., estimating for each data set the parameters λ and c. We use the estimators (77) and (78) presented in Section A.2. The true parameter λ equals 1/2 or 2, depending on the chosen mean reverting parameter A. The results obtained for each data set are given in Table 4.

Name     A∗       c∗       λ∗
GAU1     1.014    1.0003   0.5068
GAU10    4.0145   1.0011   2.005
NIG1     1.0024   0.9992   0.5016
NIG10    3.9682   1.0000   1.9841

Table 4: Estimates of the parameters A, c and λ.

We then move to step (ii): for each pixel position x∗ not belonging to the boundary of the frame (a total of 199 pixels), we select the parameter pt using the criterion (64) and a value of the hyperparameter ϵ = 2.99. For each pixel, we use a multivariate standard normal distribution as reference distribution and use importance sampling (with a Gaussian proposal) to estimate the integral in (64).
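A minimal sketch of this importance-sampling step for a single candidate pt is given below. It assumes that Lϵ is the absolute loss censored at the accuracy level ϵ (the exact form of Lϵ is fixed earlier in the paper), that the reference distribution on (β0, β1) is a standard normal, and that the proposal is an isotropic Gaussian; the function names are illustrative and not taken from the paper.

```python
import numpy as np

def censored_abs_loss(pred, y, eps):
    """Absolute loss censored at the accuracy level eps (assumed form of L_eps)."""
    return np.minimum(np.abs(pred - y), eps)

def gibbs_criterion(X, Y, eps, l, n_frames, n_draws=100_000, proposal_scale=1.0, rng=None):
    """Self-normalized importance-sampling estimate of the inner term of (64):
    rho_bar_p[ r_eps(beta) + log(2*||beta_1||_1 + 3)/sqrt(l) ] + log(floor(N/2))/sqrt(l),
    where rho_bar_p has density proportional to exp(-sqrt(l)*r_eps(beta)) w.r.t. a
    standard normal reference, and betas are drawn from a Gaussian proposal."""
    rng = np.random.default_rng() if rng is None else rng
    d = X.shape[1] + 1                                   # intercept plus one weight per cone element
    betas = proposal_scale * rng.standard_normal((n_draws, d))
    preds = betas[:, 0][:, None] + betas[:, 1:] @ X.T    # h_beta(X_i) = beta_0 + beta_1^T X_i
    r_eps = censored_abs_loss(preds, Y[None, :], eps).mean(axis=1)
    # importance weights: (reference density / proposal density) times the Gibbs factor
    log_ref  = -0.5 * np.sum(betas ** 2, axis=1)
    log_prop = -0.5 * np.sum((betas / proposal_scale) ** 2, axis=1) - d * np.log(proposal_scale)
    log_w = log_ref - log_prop - np.sqrt(l) * r_eps
    w = np.exp(log_w - log_w.max())                      # stabilize before normalizing
    w /= w.sum()
    penal = np.log(2 * np.abs(betas[:, 1:]).sum(axis=1) + 3) / np.sqrt(l)
    return np.sum(w * (r_eps + penal)) + np.log(n_frames // 2) / np.sqrt(l)
```

Selecting pt then amounts to evaluating this criterion for every admissible pt (each of which induces its own training set via Table 1) and keeping the minimizer, as prescribed by (64).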
The modeling selection criterion (64) outputs pt = 1 for each pixel. We then extract from the frames a training data set S following Assumption 3.15, which we use at step (iii) of our forecasting procedure. An acceptance-rejection algorithm with a Gaussian proposal is used to draw β from the best randomized Gibbs estimator. We report in Figure 4 the results obtained on each data set and an analysis of the average MSE across pixels in Table 5. MMAF guided learning attains a low MSE for most of the pixels analyzed.

Name     MSE
GAU1     0.07042
GAU10    0.37028
NIG1     0.12365
NIG10    0.07102

Table 5: Average MSE across pixels.

Figure 4: One-step-ahead average prediction over 1,000,000 draws for each pixel position in the data sets. Panels (a) GAU1, (b) GAU10, (c) NIG1, (d) NIG10 each plot the prediction set against the test set over the pixel positions.

5 Conclusions

We define a novel theory-guided machine learning methodology for spatio-temporal data. The methodology starts by modeling the data using an influenced mixed moving average field. Such a step requires the specification of the kernel function, including the presence of random parameters, and that the Lévy seed underlying the definition of the model has finite second order moments. To enable one-step-ahead predictions, we use the underlying model to extract a training data set from the observed data, which preserves the dependence structure of the latter and reduces as much as possible the number of past and neighboring space-time points used in learning. In the model class of Lipschitz functions, which includes linear functions but also neural networks, we show novel PAC Bayesian bounds for MMAF generated data. Their minimization leads to the determination of a randomized predictor for our prediction task. Finally, in the case of linear models, we test the performance of our learning methodology on four data sets simulated from an STOU process and obtain favorable average performance in all analyzed scenarios.

Funding

The first author was supported by the research grant GZ:CU 512/1-1 of the German Research Foundation (DFG).

References

[1] P. Alquier and B. Guedj. Simpler PAC-Bayesian bounds for hostile data. Mach. Learn., 107(5):887–902, 2018.

[2] P. Alquier, X. Li, and O. Wintenberger. Prediction of time series by statistical learning: general losses and fast rates. Depend. Model., 1:65–93, 2013.

[3] P. Alquier and O. Wintenberger. Model selection for weakly dependent time series forecasting. Bernoulli, 18(3):883–913, 2012.

[4] F. Amato, F. Guignard, S. Robert, and M. Kanevski. A novel framework for spatio-temporal prediction of environmental data using deep learning. Sci. Rep., 10:22243, 2020.

[5] Z. Bai, P. X.-K. Song, and T. E. Raghunathan. Joint composite estimating functions in spatiotemporal models. J. R. Stat. Soc. Ser. B. Stat. Methodol., 74:799–824, 2012.

[6] O. E. Barndorff-Nielsen. Superposition of Ornstein-Uhlenbeck type processes. Theory Probab. Appl., 45:175–194, 2021.

[7] O. E. Barndorff-Nielsen, F. E. Benth, and A. E. D. Veraart. Ambit Stochastics. Springer, Cham, 2018.

[8] O. E. Barndorff-Nielsen and N. N. Leonenko.
Spectral properties of superpositions of 6Ornstein- +Uhlenbeck type processes, methodology and computing. Methodol. Comput. Appl. Probab., 7:335– +352, 2005. +32 + +[9] O. E. Barndorff-Nielsen, A. Lunde, N. Shepard, and A. E. D. Veraart. Integer-valued trawl processes: +a class of stationary infinitely divisible processes. Scand. J. Stat., 3:693–724, 2014. +[10] O. E. Barndorff-Nielsen and J. Schmiegel. L´evy-based spatial-temporal modelling, with applications +to turbulence. Russian Math. Surveys, 59:65–90, 2004. +[11] O. E. Barndorff-Nielsen and R. Stelzer. Multivariate supOU processes. Ann. Appl. Probab., 21:140– +182, 2011. +[12] L. B´egin, P. Germain, F. Laviolette, and J.-F. Roy. +PAC-Bayesian bounds based on the r´enyi +divergence. In Proceedings of the 19th annual International Conference on Artificial Intelligence and +Statistics, pages 435–444, 2016. +[13] M. Bennedsen, A. Lunde, N. Shephard, and A. E. D. Veraart. +Inference and forecasting for +continuous-time integer-valued trawl processes. arXiv:2107.03674v2, 2021. +[14] R. C. Bradley. Introduction to strong mixing conditions, Volume 1. Kendrick Press, Utah, 2007. +[15] F. Bunea, A. B. Tsybakov, and M. H. Wegkamp. Aggregation for Gaussian regression. Ann. Statist., +35:1674–1697, 2007. +[16] O. Catoni. Statistical learning theory and stochastic optimization. Lecture notes in Mathematics, +Springer, Berlin, 2004. +[17] O. Catoni. PAC-Bayesian supervised classification: the thermodynamics of statistical learning. In- +stitute of Mathematical Statistics Lecture Notes – Monograph Series, 56. Institute of Mathematical +Statistics, Beachwood, OH, 2007. +[18] N. Cressie. Statistics for Spatial Data. Wiley, New York, 1993. +[19] N. Cressie and C. K. Wikle. Statistics for spatio-temporal data. John Wiley & Sons, Inc., Hoboken, +New Jersey, 2011. +[20] I. V. Curato and R. Stelzer. Weak dependence and GMM estimation for supOU and mixed moving +average processes. Electron. J. Stat., 13 (1):310–360, 2019. +[21] I. V. Curato and R. Stelzer. Erratum: Weak dependence and GMM estimation for supOU and mixed +moving average processes. arXiv:1807.05801, 2022. +[22] I. V. Curato, R. Stelzer, and B. Str¨oh. Central limit theorems for stationary random fields under +weak dependence with application to ambit and mixed moving average fields. Ann. Appl. Probab., +32:1814–1861, 2022. +[23] A. D. Davis, C. Kl¨uppelberg, and C. Steinkohl. Statistical inference for max-stable processes in +space and time. J. R. Stat. Soc. Ser. B. Stat. Methodol., 75:791–819, 2013. +[24] J. Dedecker. A central limit theorem for stationary random fields. Probab. Theory Related Fields, +110:397–426, 1998. +[25] J. Dedecker and P. Doukhan. A new covariance inequality and applications. Stochastic Process. +Appl., 106, 2003. +33 + +[26] J. Dedecker, P. Doukhan, G. Lang, L. R. J Rafael, and S. Louhichi. Weak dependence: with examples +and applications. Springer Berlin, 2007. +[27] M. D. Donsker and S. S. Varadhan. Asymptotic evaluation of certain markov process expectations +for large time. Commun. Pure Appl. Math., 28:389–461, 1976. +[28] G. K. Dziugaite and M. D. Roy. Computing nonvacuous generalization bounds for deep (stochastic) +neural networks with many more parameters than training data. In Proceedings of the Conference +on Uncertainty in Artificial Intelligence, 2017. +[29] M. Fazlyab, A. Robey, H. Hassani, M. Morari, and G. J. Pappas. Efficient and accurate estimation +of lipschitz constants for deep neural networks. 
In Proceedings of the 33rd Conference on Neural +Information Processing Systems, page 11427–11438, 2019. +[30] C. A. Glasbey and D. J. Allcroft. A spatiotemporal auto-regressive moving average model for solar +radiation. Journal of the Royal Statistical Society. Series C, 57:343–355, 2008. +[31] J. A. Gonz`alez, F. J. Rodr`ıguez-Cort`es, O. Cronie, and J. Mateu. Spatio-temporal point process +statistics: a review. Spatial Statistics, 18:505–544, 2016. +[32] P. D. Gr¨unwald and N. A. Mehta. Fast rates for general unbounded loss functions:from erm to +generalized bayes. J. Mach. Learn. Res., 21:1–80, 2020. +[33] B. Guedj. A primer on PAC-Bayesian learning. arXiv:1901.05353v3, 2019. +[34] H. Hang and I. Steinwart. A Bernstein-type inquality for some mixing processes and dynamical +systems with an application to learning. Ann. Stat., 45 (2):708–743, 2017. +[35] W. Hoeffding. Probability inequalities for sums of bounded random variables. J. Americ. Statist. +Assoc., 58:13–30, 1963. +[36] M. Holland. PAC-Bayes under potentially heavy tails. In Advances in Neural Information Processing +Systems, volume 32, pages 2715–2124, 2019. +[37] P. Holmes, J. L. Lumley, G. Berkooz, and C. W. Rowley. Turbulence, Coherent Structures, Dynamical +Systems and Symmetry. Cambridge University Press, Cambridge, 2012. +[38] I. A. Ibragimov. Some limit theorems for stationary processes. Theory Probab. Appl., 7:349–382, +1962. +[39] K. Y. J`onsd`ottir, A. Rønn-Nielsen, K. Mouridsen, and E. B. V. Jensen. L´evy based modelling in +brain imaging. Scandinavian Journal of Statistics, 40:511–529, 2013. +[40] Scaman. K. and V. Virmaux. Lipschitz regularity of deep neural networks: analysis and efficient esti- +mation. In Proceedings of the Conference on Neural Information Processing System, page 3839–3848, +2018. +[41] A. Karpatne, G. Atluri, J. H. Faghmous, M. Steinbach, A. Banerjee, A. Ganguly, S. Shekhar, N. Sam- +atova, and V. Kumar. Theory-guided data science: A new paradigm for scientific discovery from data. +IEEE Trans. Knowl. Data Eng., 29:2318–2331, 2017. +34 + +[42] S. Kullback. Information Theory and Statistics. John Wiley & Sons, 1959. +[43] S. Lahiri, Y. Lee, and N. Cressie. On asymptotic distribution and asymptotic efficiency of least +squares estimators of spatial variogram parameters. J. Statist. Plann. Inference, 103:65–85, 2002. +[44] D. S. Modha and E. Masry. +Minimum complexity regression estimation with weakly dependent +observations. IEEE T. Inform. Theory, 42:2133–2145, 1996. +[45] G. D. Montanez and C. S. Shalizi. The LICORS cabinet: nonparametric light cone methods for +spatio-temporal modeling. arXiv:1506.02686v2, 2020. +[46] J.-M. Montero, G. Fern`andez-Avil`es, and J. Mateu. Spatial and Spatio-Temporal Geostatistical Mod- +eling and Kriging. Wiley, 2015. +[47] M. Nguyen and A. E. D. Veraart. Spatio-temporal Ornstein–Uhlenbeck processes: theory, simulation +and statistical inference. Scand. J. Stat., 44:46–80, 2017. +[48] M. Nguyen and A. E. D. Veraart. Bridging between short-range and long-range dependence with +mixed spatio-temporal Ornstein-Uhlenbeck processes. Stochastics, 90:1023–1052, 2018. +[49] J. Pearl, M. Glymour, and N. P. Jewell. Causal inference in statistics a primer. Wiley, 2016. +[50] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Physics informed deep learning (part I): Data-driven +solutions of nonlinear partial differential equations. arXiv: 1711.10561, 2017. +[51] M. Raissi, P. Perdikaris, and G. E. Karniadakis. 
Physics informed deep learning (part II): Data- +driven discovery of nonlinear partial differential equations. arXiv: 1711.10566, 2017. +[52] B. S. Rajput and J. Rosi´nski. +Spectral representations of infinitely divisible processes. +Probab. +Theory Rel., 82:451–487, 1989. +[53] C. E. Rasmussen and C. K. I. Williams. Gaussian processes for Machine Learning. The MIT Press +Cambridge, 2006. +[54] M. Reichstein, G. Camps-Valls, B. Stevens, M. Jung, J. Denzler, N. Carvalhais, and Prabhat. Deep +learning and process understanding for data-driven earth system science. Nature, 556:195–204, 2019. +[55] E. Rio. Sur le th´eor´eme de Berry–Esseen pour les suites faiblement d´ependantes. Theory Probab. +Appl., 104:255–282, 1996. +[56] M. Rosenblatt. A central limit theorem and a strong mixing condition. Proc. Natl. Acad. Sci. USA, +42:43–47, 1956. +[57] D. B. Rubin. Causal inference using potential outcomes: design, modeling, decisions. J. Am. Stat. +Assoc., 100:322–331, 2005. +[58] A. Rupe, N. Kumar, V. Epifanov, K. Kashinath, O. Pavlzk, F. Schlimbach, M. Patwary, S. Maidanov, +V. Lee, Prabhat, and J.P. Crutchfield. +DisCo: physics-based unsupervised discovery of coherent +structures in spatiotemporal systems. In 2019 IEEE/ACM Workshop on Machine Learning in High +Performance Computing Environments, pages 75–87, 2019. +35 + +[59] K. Sato. +L´evy Processes and Infinitely Divisible Distributions. +Cambridge Studies in Advanced +Mathematics 68. Cambridge Univ. Press, Cambridge, 2013. +[60] J. Shawe-Taylor, P. L. Bartlett, C. Williamson, R, and M. Anthony. Structural risk minimization +over data-dependent hierarchies. IEEE T. Inform. Theory, 44 (5):1926–1940, 1998. +[61] X. Shi, Z. Gao, L. Lausen, H. Wang, and D.-Y. Yeung. Deep learning for precipitation nowcasting: +A benchmark and a new model. +In Advances in Neural Information Processing Systems, page +5617–5627, 2017. +[62] X. Shi and D. Y. Yeung. +Machine learning for spatiotemporal sequence forecasting: A survey. +arXiv:1808.06865, 2018. +[63] M. L. Stein. Space-time covariance functions. J. Am. Stat. Assoc., 100 (469):310–322, 2005. +[64] R. Stelzer, T. Tossdorf, and M. Wittilinger. Moment based estimation of supOU processes and a +related stochastic volatility model. Stat. Risk Model, 32:1–24, 2015. +[65] N. Thiemann, C. Igel, O. Wintenberger, and Y. Seldin. A strongly quasiconvex PAC-Bayesian bound. +J. Mach. Learn. Res., pages 1–26, 2017. +[66] L. G. Valiant. A theory of the learnable. Commun. of the ACM, 27 (11):1134–1142, 1984. +[67] V. N. Vapnik. The nature of Statistical Learning Theory. Springer, Berlin, 2000. +[68] A. Vlontyos, H.B. Rocha, and D. Rueckert. Causal future prediction in a Minkowski space-time. In +International Conference on Learning Representations, 2021. +A +Appendix +A.1 +Weak dependence notions for casual processes and (influenced) MMAF +In this section, we discuss more in details the asymptotic dependence notions called θ-weak dependence +and θ-lex weak dependence. The latter notion has been introduced in [22, Definition 2.1] as an extension to +the random field case of the notion of θ-weak dependence satisfied by causal stochastic processes [25]. This +notion of dependence is presented in Definition 2.23. However, the notion of θ-lex weak dependence given +in Definition 2.16, slightly differs from the one given in [22, Definition 2.1] and represents an extension +to the random field case of the θ-weak dependence notion defined in [26, Remark 2.1]. 
Note that the +definition of θ-weak dependence in [25] and [26, Remark 2.1] differ for the cardinality of the marginal +distributions on which the function G is computed, namely, G ∈ G1 in the former and G ∈ Gν for ν ∈ N +in the latter. +Remark A.1 (Mixingale-type representation of θ-weak dependence). Let L1 = {g : R → R, bounded and +Lipschitz continuous with Lip(g) ≤ 1}, a stochastic process (Xt)t∈Z, and M = σ{Xt : t < j1, t ∈ Z}, +then it is showed in [26, Proposition 2.3] that +θ(r) = sup +g∈L1 +∥E[g(Xj1)|M] − E[g(Xj1)]∥1. +(69) +An alternative proof of this result can be also found in [25, Lemma 1]. +36 + +Let us now analyze the relationship between θ-weak dependence, α-mixing, and φ-mixing. Most +of the PAC Bayesian literature for stationary and heavy tailed data employs the following two mixing +conditions, see Remark 3.25, namely α-mixing and φ-mixing. The results in the Lemma below give us a +proof that the θ-weak dependence is more general than α-mixing and φ-mixing and therefore describes +the dependence structure of a bigger class of models. +Let M and V be two sub-sigma algebras of F. First of all, the strong mixing coefficient [56] is +defined as +α(M, V) = sup{|P(M)P(V ) − P(M ∩ V )|, M ∈ M, V ∈ V}. +A stochastic process X is said to be α-mixing if +α(r) = α(σ{Xs, s ≤ 0}, σ{Xs, s ≥ r}) +converges to zero as r → ∞. The φ-mixing coefficient has been introduced in [38] and defined as +φ(M, V) = sup{|P(V |M) − P(V )|, M ∈ M, V ∈ V, P(M) > 0}. +A stochastic process X is said to be φ-mixing if +φ(r) = φ(σ{Xs, s ≤ 0}, σ{Xs, s ≥ r}) +converges to zero as r → ∞. +Lemma A.2. Let (Xt)t∈Z be a stationary real-valued stochastic process such that E[|X0|q] ≤ ∞ for +some q > 1. Then, +(a) θ(r) ≤ 2 +2q−1 +q α(r) +q−1 +q ∥X0∥q ≤ 2 +q−1 +q φ(r) +q−1 +q ∥X0∥q, and +(b) θ-weak dependence is a more general dependence notion than α-mixing and φ-mixing. +Proof. The proof of the first inequality at point (a) is proven in [22, Proposition 2.5] using the represen- +tation of the θ-coefficients (69). The proof of the second inequality follows from a classical result in [14, +Proposition 3.11]. In [22, Proposition 2.7], it is defined a stochastic process which is θ-weak dependent +but neither α-mixing or φ-mixing. +As seen in Definition 2.16 by using the lexicographic order in R1+d, an opportune extension of +θ-weak dependence valid for random fields can be defined. +The definition of θ-lex coefficients for G ∈ G1 is given in [22, Definition 2.1]. The latter can be +represented as θv +lex(r) := supu∈N{θu,v(r)} for v = 1. Therefore, an alternative way to define the θ-lex +coefficients in Definition 2.16 is obviously +θlex(r) = sup +v∈N +θv +lex(r), v ∈ N for all r ∈ R+. +(70) +The following Lemma has important applications in the following sections. +Lemma A.3. Let Z be a θ-lex weakly dependent random field, then ZM +i += Zi ∨ (−M) ∧ M is θ-lex +weakly dependent. +37 + +Proof. Let u, v ∈ N, M > 0, F ∈ G∗ +u, G ∈ Gv, and i1, i2, . . . , iu ∈ V r +Γ′ where Γ′ = {j1, . . . , jv}. Let +F M(Zi1, . . . , Ziu) = F(ZM +i1 , . . . , ZM +iu ), and GM(Zj1, . . . , Zjv) = G(ZM +j1 , . . . , ZM +jv ). We have that F M is +a bounded function on (Rn)u and GM is a bounded and Lipschitz function on (Rn)v (with the same +Lipschitz coefficients as the function G): let (Z1, . . . , Zv) and ( ˜Z1, . . . , ˜Zv), +|GM(Z1, . . . , Zv) − GM( ˜Z1, . . . , ˜Zv)| ≤ Lip(G) +v +� +i=1 +|ZM +i +− ˜Z +M +i | +≤ Lip(G) +v +� +i=1 +|Zi − ˜Zi|. +Then, it holds that +|Cov(F(ZM +i1 , . . . , ZM +iu ), G(ZM +j1 , . . . 
, ZM +jv ))| ≤ ∥F∥∞vLip(G)θlex(r), +where θlex(r) are the θ-coefficients of the field Z, and so the field ZM +t (x) is θ-lex weakly dependent. +Note that the above result also holds for X a θ-weakly dependent process. Therefore, the truncated +XM +t += Xt ∨ (−M) ∧ M is a θ-weakly dependent process. +The notion of θ-lex weak dependence also admits a mixingale-type representation. +Remark A.4. Let L1 = {g : R → R, bounded and Lipschitz continuous with Lip(g) ≤ 1} and Γ′ = +{j1, . . . , jv} for {j1, . . . jv} ∈ Z1+d. For a random field (Zt)t∈Z1+d, by readily applying [22, Lemma 5.1] +on the σ-algebra M = σ{Zt : t ∈ V r +Γ′ ⊂ Z1+d}, the following result can be easily proved: +θlex(r) = sup +g∈L1 +sup +v∈N +∥E[g(Zj1, . . . , Zjv)|M] − E[g(Zj1, . . . , Zjv)]∥1. +(71) +We now use the representation of the θ-lex coefficients (71) to understand its relationships to α∞,v- +mixing and φ∞,v-mixing for v ∈ N∪{∞}. These notions are defined in [24] and are strong mixing notions +used in the study of stationary random fields. +In general, for u, v ∈ N ∪ {∞}, given coefficients +αu,v(r) = sup{α(σ(ZΓ), σ(ZΓ′)), Γ, Γ′ ∈ R1+d, |Γ| ≤ u, |Γ′| ≤ v, dist(Γ, Γ′) ≥ r}, +and +φu,v(r) = sup{φ(σ(ZΓ), σ(ZΓ′)), Γ, Γ′ ∈ R1+d, |Γ| ≤ u, |Γ′| ≤ v, dist(Γ, Γ′) ≥ r}. +a random field Z is said to be αu,v-mixing or φu,v-mixing if the coefficients (αu,v(r))r∈R+ or (φu,v(r))r∈R+ +converge to zero as r → ∞. We then have the following result. +Lemma A.5. Let (Zt)t∈Z1+d be a stationary real-valued random field such that E[|Z0|q] ≤ ∞ for some +q > 1. Then, for v ∈ N ∪ {∞}, +(a) θlex(r) ≤ 2 +2q−1 +q α∞,v(r) +q−1 +q ∥Z0∥q ≤ 2 +q−1 +q φ∞,v(r) +q−1 +q ∥Z0∥q, and +(b) it holds that θ-lex weak dependence is more general than α∞,v-mixing and α-mixing in the special +case of stochastic processes. Moreover, θ-lex weak dependence is more general than φ∞,v-mixing. +38 + +Proof. From the proof of [22, Proposition 2.5], we have that +θ1 +lex(r) ≤ 2 +2q−1 +q α∞,1(r) +q−1 +q ∥Z0∥q. +Because of (70) and [14, Proposition 3.11], we have that +θlex(r) ≤ 2 +2q−1 +q α∞,1(r) +q−1 +q ∥Z0∥q ≤ 2 +q−1 +q φ∞,1(r) +q−1 +q ∥Z0∥q. +Equally, +θlex(r) ≤ 2 +2q−1 +q α∞,v(r) +q−1 +q ∥Z0∥q ≤ 2 +q−1 +q φ∞,v(r) +q−1 +q ∥Z0∥q. +The proof of the point (b) follows directly by [22, Proposition 2.7]. In fact θlex(r) = θ1,∞(r) following +the notations of [26, Definition 2.3] and the process used in the proof of the Proposition is θ-lex weakly +dependent but neither α∞,v, α or φ∞,v-mixing. +A.2 +Inference on STOU processes +Let us start by explaining the available estimation methodologies for the parameter vector θ0 = {A, c, V ar(L′)} +under the STOU modeling assumption when the spatial dimension d = 1. Throughout, we refer to the +notations and calculations in Example 2.21. +We have two ways of estimating the parameter vector θ0 in such a scenario. The first one is presented +in [47]. Here, the parameters A and c are first estimated using normalized spatial and temporal variograms +defined as +γS(u) := E((Zt(x) − Zt(x − u))2) +V ar(Zt(x)) += 2(1 − ρS(u)) = 2 +� +1 − exp +� +− Au +c +�� +, +(72) +and +γT (τ) := E((Zt(x) − Zt−τ(x))2) +V ar(Zt(x)) += 2(1 − ρT (τ)) = 2(1 − exp(−Aτ)), +(73) +where ρS and ρT are defined in Example 2.10. Note that normalized variograms are used to separate the +estimation of the parameters A and c from the parameter V ar(Λ′). Let N(u) be the set containing all the +pairs of indices at mutual spatial distance u for u > 0 and the same observation time. 
Let N(τ) be the +set containing all the pairs of indices where the observation times are at a distance τ > 0 and have the +same spatial position. |N(u)| and |N(τ)| give the number of the obtained pairs, respectively. Moreover, +let ˆk2 be the empirical variance which is defined as +ˆk2 = +1 +(D − 1) +D +� +i=1 +Z2 +ti(xi), +(74) +where D denotes the sample size. The empirical normalized spatial and temporal variograms are then +39 + +defined as follows: +ˆγS(u) = +1 +|N(u)| +� +i,j∈N(u) +(Zti(xi) − Ztj(xj))2 +ˆk2 +(75) +ˆγT (τ) = +1 +|N(τ)| +� +i,j∈N(τ) +(Zti(xi) − Ztj(xj))2 +ˆk2 +. +(76) +By matching the empirical and the theoretical forms of the normalized variograms, we can estimate A +and c by employing the estimators +A∗ = −τ −1 log +� +1 − ˆγT (τ) +2 +� +, and c∗ = − +A∗u +log +� +1 − ˆγS(u) +2 +�. +(77) +Alternatively, we can use a least square methodology to estimate the parameters A and c, i.e. (75) and +(76) are computed at several lags, and a least-squares estimation is used to fit the computed values to the +theoretical curves. The authors in [47] use the methodology discussed in [43] to achieve the last target. We +refer the reader also to [18, Chapter 2] for further discussions and examples of possible variogram model +fitting. The parameter V ar(Λ′) can be estimated by matching the second-order cumulant of the STOU +with its empirical counterpart. The consistency of this estimation procedure is proven in [47, Theorem +12]. +A second possible methodology for estimating the vector θ0 employs a generalized method of moment +estimator (GMM), as in [48]. It is essential to notice that by using such an estimator, we cannot separate +the parameter V ar(Λ′) from the estimation of the parameters A and c. Instead, all moment conditions +must be combined into one optimization criterion, and all the estimations must be found simultaneously. +Consistency and asymptotic normality of the GMM estimator are discussed in [48] and [22], respectively. +It is important to notice that the parameter λ can be inferred by knowing the parameter A and c +alone (this also holds for d ≥ 2). We can then plug in the estimations (77) (or the ones obtained by least +squares or GMM) and obtain the estimator +λ∗ = A∗ min(2, c∗) +2c∗ +, +(78) +of the decay rate of the θ-lex coefficients of an STOU with spatial dimension d = 1. This estimator is +consistent because of [47, Theorem 12] and the continuous mapping theorem. Furthermore, by using the +estimation of the parameter V ar(Λ′), we can also obtain a consistent estimator for ¯α. +For d ≥ 2, a least square methodology is still applicable for estimating the variogram’s parameters. +The estimator used in [47] is a normalized version of the least-square estimator for spatial variogram’s +parameters discussed in [43], which also applies for d > 1. This method, paired with a method of moments +(matching the second order cumulant of the field Z with its empirical counterparts), allows estimating +the parameter V ar(Λ′). The GMM methodology discussed in [48] also continues to apply for d ≥ 2. +However, when the spatial dimension is increasing, the shape of the normalized variograms and the field’s +moments become more complex, and higher computational effort is required to navigate through the high +dimensional surface of the optimization criterion behinds least-squares or GMM estimators. 
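For concreteness, a minimal single-lag sketch of this estimation step for d = 1 is given below. It assumes the field is observed on a regular grid and stored as a NumPy array Z[t, x] with (approximately) zero mean, as in the STOU example of Section 4.1; the function names are illustrative and not taken from [47].

```python
import numpy as np

def normalized_variograms(Z, u_cells, tau_steps):
    """Empirical normalized spatial and temporal variograms (75)-(76) for a field
    observed on a regular grid, Z[t, x]; lags are given in grid cells / time steps."""
    k2 = (Z ** 2).sum() / (Z.size - 1)                   # empirical variance (74), zero-mean field
    gamma_S = ((Z[:, u_cells:] - Z[:, :-u_cells]) ** 2).mean() / k2   # same time, spatial lag u
    gamma_T = ((Z[tau_steps:, :] - Z[:-tau_steps, :]) ** 2).mean() / k2  # same pixel, time lag tau
    return gamma_S, gamma_T

def stou_moment_estimates(Z, h_t, h_s, u_cells=1, tau_steps=1):
    """Moment-matching estimators (77)-(78) of A, c and of the decay rate lambda of
    the theta-lex coefficients, for an STOU field with spatial dimension d = 1."""
    gamma_S, gamma_T = normalized_variograms(Z, u_cells, tau_steps)
    tau, u = tau_steps * h_t, u_cells * h_s
    A_hat = -np.log(1 - gamma_T / 2) / tau               # from gamma_T(tau) = 2(1 - exp(-A tau))
    c_hat = -A_hat * u / np.log(1 - gamma_S / 2)         # from gamma_S(u) = 2(1 - exp(-A u / c))
    lam_hat = A_hat * min(2.0, c_hat) / (2.0 * c_hat)    # plug-in estimator (78)
    return A_hat, c_hat, lam_hat
```

In practice, the empirical variograms would be computed at several lags and fitted to the theoretical curves by least squares, as discussed above; the single-lag matching shown here corresponds directly to the estimators (77) and (78).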
+40 + +A.3 +Inference for MSTOU processes +When estimating the parameter vector θ1 = {α, β, c, V ar(L′)} under an MSTOU modeling assumption– +see, for example, the shape of the coefficients in Example 2.22– it is evident that the shape of the +autocorrelation function, and therefore of the normalized temporal and spatial variograms, become more +complex for increasing d. As already addressed in the previous sections, when estimating the parameters +(α, β, c) alone, we can use the least-squares type estimator discussed in [43]. Moreover, by pairing the +latter with a method of moments or using a GMM estimator, we can estimate the vector θ1. +B +Appendix +B.1 +Proofs of Section 2.5 +In [22, Proposition 3.11], it is given a general methodology to show that an MMAF Z is θ-lex weakly +dependent. Given that the definition of θ-lex-weakly dependence used in the paper slightly differs from +the one given in [22], the proof of Proposition 2.17 and 2.19 differ from the one of [22, Proposition 3.11]. +Before giving a detailed account of these proofs, let us state some notations used in both. Let r > 0, +{(tj1, xj1), . . . (tjv, xjv)} = Γ′ ⊂ R1+d and {(ti1, xi1), . . . , (tiu, xiu)} = Γ ⊂ V r +Γ′ such that |Γ| = u and +|Γ′| = v for (u, v) ∈ N × N. We call the truncated (influenced) MMAF the vector +Z(ψ) +Γ′ = +� +Z(ψ) +tj1 (xj1), . . . , Z(ψ) +tjv (xjv) +�⊤ +, +where ψ := ψ(r) > 0. In particular, for all a ∈ {1, . . . , u} and a b ∈ {1, . . . , v} , ψ has to be chosen such +that it exists a set Bψ +tjb (xjb) with the following properties. +• |Bψ +tjb (xjb)| → ∞ as r → ∞ for all b, and +• Iia = H × Atia (xia) and Ijb = H × Bψ +tjb (xjb) are disjoint sets or intersect on a set H × O, where +O ∈ R1+d and dim(O) < d + 1, for all a and b +Let us now assume that it is possible to construct the sets Bψ +tjb (xjb). Then, since π×λ1+d(H×O) = 0 +and by the definition of a L´evy basis, it follows that +Ztia (xia) = +� +H +� +Ati(xi) +f(A, ti − s, xi − ξ)Λ(dA, ds, dξ) and +Z(ψ) +tjb (xjb) = +� +H +� +Bψ +tjb (xjb) +f(A, tj − s, xj − ξ)Λ(dA, ds, dξ), +and ZΓ and Z(ψ) +Γ′ +are independent. Hence, for F ∈ G∗ +u and G ∈ Gv, F(ZΓ) and G(Z(ψ) +Γ′ ) are also +41 + +independent. Now +|Cov(F(ZΓ), G(ZΓ′))| +≤ |Cov(F(ZΓ), G(Z(ψ) +Γ′ ))| + |Cov(F(ZΓ), G(ZΓ′) − G(Z(ψ) +Γ′ ))| += |E[(G(ZΓ′) − G(Z(ψ) +Γ′ ))F(ZΓ)] − E[G(ZΓ′) − G(Z(ψ) +Γ′ )]E[F(ZΓ)]| +≤ 2∥F∥∞E[|G(ZΓ′) − G(Z(ψ) +Γ′ )|] ≤ 2Lip(G)∥F∥∞ +v +� +l=1 +E[|Ztjl (xjl) − Z(ψ) +tjl (xjl)|] = += 2Lip(G)∥F∥∞vE[|Ztj1 (xj1) − Z(ψ) +tj1 (xj1)|], +(79) +because an (influenced) MMAF is a stationary random field. To show that a field satisfy Definition 2.16, is +then enough to prove that E[|Ztj1 (xj1)−Z(ψ) +tj1 (xj1)|] in the above inequality converges to zero as r → ∞. +The proofs of Proposition 2.17 and 2.19 differ in the definition of the sequence ψ and the sets Bψ +tjb (xjb). +Proof of Proposition 2.17. In this proof, we assume that Bψ +tjb (xjb) = At(x)\V ψ +(t,x), where ψ = ψ(r) := +r +√ +(d+1)(c2+1). +(i) Using the translation invariance of At(x) and V (ψ) +(t,x) we obtain +E[|Ztj1 (xj1) − Z(ψ) +tj1 (xj1)|] ≤ +�� +H +� +A0(0)∩V ψ +0 +V ar(Λ′)f(A, −s, −ξ)2dξdsπ(dA) +� 1 +2 += +�� +H +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +V ar(Λ′) +� +∥ξ∥≤cs +f(A, −s, −ξ)2 dξdsπ(dA) +� 1 +2 +, +where we have used Proposition 2.8-(ii) to bound the L1-distance from above. Overall, we obtain +θlex(r) ≤ 2 +�� +H +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +V ar(Λ′) +� +∥ξ∥≤cs +f(A, −s, −ξ)2dξdsπ(dA) +� 1 +2 +, +which converges to zero as r tends to infinity by applying the dominated convergence theorem. 
+(ii) By applying Proposition 2.8-(i) and (ii), we obtain +E[|Ztj1 (xj1) − Z(ψ) +tj1 (xj1)|] +≤ +� � +H +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +V ar(Λ′) +� +∥ξ∥≤cs +f(A, −s, −ξ)2dξdsπ(dA) ++ +�� +H +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +E(Λ′) +� +∥ξ∥≤cs +f(A, −s, −ξ)dξdsπ(dA) +�2 � 1 +2 +. +Finally, we proceed similarly to proof (i) and obtain the desired bound. +42 + +(iii) We apply now Proposition 2.8-(iii). Then, +E[|Ztj1 (xj1) − Z(ψ) +tj1 (xj1)|] +≤ +� � +S +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +� +∥ξ∥≤cs +|f(A, −s, −ξ)γ0|dξdsπ(dA) ++ +� +H +� +−r min(1/c,1) +√ +(d+1)(c2+1) +−∞ +� +∥ξ∥≤cs +� +R +|f(A, −s, −ξ)y|ν(dy)dξdsπ(dA) +� +. +The bound for the θ-lex-coefficients is obtained following the proof line in (i). +Proof of Proposition 2.19. +(i) W.l.o.g., let us determine the truncated set when (tjb, xjb) = (0, 0). We +use, to this end, two auxiliary ambit sets translated by a value ψ > 0 along the spatial axis, namely, +the cones A0(ψ) and A0(−ψ), as illustrated in Figure 5-(a),(c) for c ≤ 1 and in Figure 5-(b),(d) +for c > 1. Then, we set the truncated integration set to Bψ +0 (0) = A0(0)\(A0(ψ) ∪ A0(−ψ)). Since +(tia, xia) ∈ V r +(0,0), it is sufficient to choose ψ such that the integration set of Z(ψ) +0 +(0) is a subset of +(V r +(0,0))c. To this end, the three intersecting points ( −ψ +2c , −ψ +2 ), ( −ψ +c , 0) and ( −ψ +2c , ψ +2 ) have to be inside +the set (V r +(0,0))c, as illustrated in Figure 5-(e) for c ≤ 1 and in Figure 5-(f) for c > 1. Clearly, this +leads to the conditions ψ ≤ rc, ψ ≤ 2r and ψ ≤ 2rc, which are satisfied for ψ = r min(2, c). Hence, +by using Proposition 2.8-(ii), we have that +θlex(r) ≤ 2V ar(Λ′)1/2 +�� ∞ +0 +� +A0(0)∩(A0(ψ)∪A0(−ψ)) +f(A, −s)2dsdξπ(dλ) +�1/2 += 2V ar(Λ′)1/2 +� � ∞ +0 +� +A0(0)∩A0(ψ) +f(A, −s)2dsdξπ(dλ) + +� ∞ +0 +� +A0(0)∩A0(−ψ) +f(A, −s)2dsdξπ(dλ) +− +� ∞ +0 +� +A0(0)∩A0(ψ)∩A0(−ψ) +f(A, −s)2dsdξπ(dλ) +�1/2 +≤ 2 +� +2Cov(Z0(0), Z0(r min(2, c))). +which converges to zero as r → ∞ for the dominated convergence theorem. +(ii) In this proof, we indicate the spatial components and write xia = (yia, zia) ∈ R × R and xjb = +(yjb, zjb) ∈ R×R for a ∈ {1, . . . , u} and b ∈ {1, . . . , v}. W.l.o.g., let us then determine the truncated +set when (tjb, yjb, zjb) = (0, 0, 0). To this end, we use four additional ambit sets that are translated +by a value ψ > 0 along both spatial axis, namely, the cones A0(ψ, ψ), A0(ψ, −ψ), A0(−ψ, ψ) and +A0(−ψ, −ψ), as illustrated in Figure 7-(c) for c ≤ 1 and in Figure 7-(d) for c > 1). Then, we set the +truncated integration set to Bψ +0 (0) = A0(0, 0)\(A0(ψ, ψ) ∪ A0(ψ, −ψ) ∪ A0(−ψ, ψ) ∪ A0(−ψ, −ψ)). +Since (tia, yia, zia) ∈ V r +(0,0,0), it is sufficient to choose ψ such that the integration set of Z(ψ) +0 +(0, 0) +is a subset of (V r +(0,0,0))c, i.e. +sup +b∈Bψ +0 (0) +∥b∥∞ ≤ r. +(80) +43 + +(a) Integration set A0(0) for c = 1/ +√ +2 together with the +complement of V r +(0,0) for r = 3. +(b) Integration set A0(0) for c = 2 +√ +2 together with the +complement of V r +(0,0) for r = 3. +(c) Integration set A0(0) together with A0(ψ) and A0(−ψ) +for ψ = r min(2, c). +(d) Integration set A0(0) together with A0(ψ) and A0(−ψ) +for ψ = r min(2, c). +(e) Integration set of Z(ψ) +0 +(0) together with the complement +of V r +(0,0). +(f) Integration set of Z(ψ) +0 +(0) together with the complement +of V r +(0,0). +Figure 5: Integration set and truncated integration set of an MMAF Zt(x), exemplary for (t, x) = (0, 0). 
In the following, we prove that the choice $\psi = r \min(1, c/\sqrt{2})$ is sufficient for (80) to hold. We investigate cross-sections of the truncated integration set $B^{\psi}_0(0)$ along the time axis. For a fixed time point $t$, we call this cross-section $B^t$ and, similarly, we denote the cross-section of an ambit set by $A^t$. Note that the cross-sections of our ambit sets along the time axis are circles with radius $|ct|$ (see also Figure 6).

$t \in \big[ -\frac{\psi}{\sqrt{2}c}, 0 \big]$: As the distance between the center of the circle $A^t_0(0, 0)$ and each of the centers of the circles $A^t_0(\psi, \psi)$, $A^t_0(-\psi, \psi)$, $A^t_0(\psi, -\psi)$, $A^t_0(-\psi, -\psi)$ is $\sqrt{2}\psi$, the set $A^t_0(0, 0)$ (which is a circle with radius $|ct|$) is disjoint from each of the additional ambit sets' cross-sections at $t$, and hence $B^t = A^t_0(0, 0)$ (see Figure 6-(a)). Clearly, we obtain
\[ \sup_{t \in [-\psi/(\sqrt{2}c),\, 0]}\, \sup_{b \in B^t} \|b\| = \sup_{t \in [-\psi/(\sqrt{2}c),\, 0]} \max(c|t|, |t|) = \max\Big( \frac{\psi}{\sqrt{2}}, \frac{\psi}{\sqrt{2}c} \Big). \qquad (81) \]

$t \in \big[ -\frac{\psi}{c}, -\frac{\psi}{\sqrt{2}c} \big]$: For such $t$ the set $A^t_0(0, 0)$ intersects with every additional ambit set's cross-section (see Figure 6-(b)). However, as the additional ambit sets' cross-sections do not intersect with each other, the point $p_1(t) = (t, c|t|, 0) \in B^t$ on the boundary of $A^t_0(0, 0)$ (see the red point in Figure 6-(b)) is not excluded from $B^t$ by any additional ambit set. Note that symmetry makes it sufficient to look at $p_1(t)$. Hence, we obtain
\[ \sup_{t \in [-\psi/c,\, -\psi/(\sqrt{2}c)]}\, \sup_{b \in B^t} \|b\| = \sup_{t \in [-\psi/c,\, -\psi/(\sqrt{2}c)]} \|p_1(t)\| = \max\Big( \psi, \frac{\psi}{c} \Big). \qquad (82) \]

$t \in \big[ -\frac{\sqrt{2}\psi}{c}, -\frac{\psi}{c} \big]$: For such $t$ the set $A^t_0(0, 0)$ intersects with every additional ambit set's cross-section, and these intersections additionally restrict $A^t_0(0, 0)$ (see Figure 6-(c)). Straightforward calculations show that the point where the boundaries of $A^t_0(-\psi, \psi)$ and $A^t_0(\psi, \psi)$ as well as the set $A^t_0(0, 0)$ intersect, say $p_2(t)$, is given by $(t, 0, \psi - \sqrt{(ct)^2 - \psi^2})$ (see the red point in Figure 6-(c)). Note that symmetry makes it sufficient to look at $p_2(t)$. We obtain
\[ \sup_{t \in [-\sqrt{2}\psi/c,\, -\psi/c]}\, \sup_{b \in B^t} \|b\| = \sup_{t \in [-\sqrt{2}\psi/c,\, -\psi/c]} \|p_2(t)\| = \max\Big( \psi, \frac{\sqrt{2}\psi}{c} \Big). \qquad (83) \]

$t \le -\frac{\sqrt{2}\psi}{c}$: In the following, we show that for such $t$ the set $A^t_0(0, 0)$ is entirely included in the union of the additional ambit sets' cross-sections. Clearly, this is true if the upper point where the boundaries of $A^t_0(-\psi, \psi)$ and $A^t_0(\psi, \psi)$ intersect, say $p_3(t)$, lies outside of $A^t_0(0, 0)$ (see the red point in Figure 6-(d)). Note that symmetry makes it sufficient to look at $p_3(t)$. As straightforward calculations show that $p_3(t) = (t, 0, \psi + \sqrt{(ct)^2 - \psi^2})$, this is true if $\psi + \sqrt{(ct)^2 - \psi^2} \ge c|t|$, or equivalently $(\psi + \sqrt{(ct)^2 - \psi^2})^2 \ge (ct)^2$. Moreover, we have
\[ \psi^2 + 2\psi\sqrt{(ct)^2 - \psi^2} + (ct)^2 - \psi^2 \ge (ct)^2 \iff \psi \ge 0. \qquad (84) \]

In view of condition (80), we combine (81), (82) and (83) and set $\psi = r \min(1, c/\sqrt{2})$, which also satisfies (84).
In addition to the cross-sectional views from Figure 6, we give a full three-dimensional view of the set $B^{\psi}_0(0)$ for $c \le 1$ in Figure 7-(e) and for $c > 1$ in Figure 7-(f), which highlight the points on the boundary of $B^{\psi}_0(0)$ with maximal $\infty$-norm for $\psi = r \min(1, c/\sqrt{2})$.
[Figure 6: Cross-sections of the auxiliary ambit sets and $A^t_0(0, 0)$ at different time points for $\psi = 3$. Panels (a)-(d) show the cross-sections for $t = -1$, $t = -5/4$, $t = -7/4$ and $t = -2.2$, respectively, with $c = 2$.]
Therefore, because of Proposition 2.8-(ii), we can conclude that
\[ \theta_{lex}(r) \le 2\, Var(\Lambda')^{1/2} \Big( \int_0^\infty \int_{A_0(0,0) \cap (A_0(\psi,\psi) \cup A_0(-\psi,\psi) \cup A_0(\psi,-\psi) \cup A_0(-\psi,-\psi))} f(A, -s)^2\, ds\, d\xi\, \pi(d\lambda) \Big)^{1/2} \]
\[ = 2\, Var(\Lambda')^{1/2} \Big( \int_0^\infty \int_{A_0(0,0) \cap A_0(\psi,\psi)} f(A, -s)^2\, ds\, d\xi\, \pi(d\lambda) + \int_0^\infty \int_{A_0(0,0) \cap A_0(-\psi,\psi)} f(A, -s)^2\, ds\, d\xi\, \pi(d\lambda) + \int_0^\infty \int_{A_0(0,0) \cap A_0(\psi,-\psi)} f(A, -s)^2\, ds\, d\xi\, \pi(d\lambda) + \int_0^\infty \int_{A_0(0,0) \cap A_0(-\psi,-\psi)} f(A, -s)^2\, ds\, d\xi\, \pi(d\lambda) \]
\[ \quad - \int_0^\infty \int_{A_0(0) \cap A_0(\psi,-\psi) \cap A_0(-\psi,-\psi)} f(A, -s)^2\, ds\, d\xi\, \pi(d\lambda) - \int_0^\infty \int_{A_0(0) \cap A_0(-\psi,\psi) \cap (A_0(\psi,-\psi) \cup A_0(-\psi,-\psi))} f(A, -s)^2\, ds\, d\xi\, \pi(d\lambda) - \int_0^\infty \int_{A_0(0) \cap A_0(\psi,\psi) \cap (A_0(-\psi,\psi) \cup A_0(\psi,-\psi) \cup A_0(-\psi,-\psi))} f(A, -s)^2\, ds\, d\xi\, \pi(d\lambda) \Big)^{1/2} \]
\[ \le 2 \sqrt{2\, Cov\big( Z_0(0, 0), Z_0(\psi, \psi) \big) + 2\, Cov\big( Z_0(0, 0), Z_0(\psi, -\psi) \big)}, \]
which converges to zero as $r \to \infty$ by the dominated convergence theorem.

B.2 Proofs of Section 3

Proof of Proposition 3.4. We drop the bold notation indicating random fields and stochastic processes in the following. Let $h \in \mathcal{H}$; we call $L_i = L(h(X_i), Y_i)$ for $i \in \mathbb{Z}$, $Z^{(M)}_t(x) := Z_t(x) \wedge M$ for $M > 1$, and $L^{(M)} = (L(h(X^{(M)}_i), Y^{(M)}_i))_{i \in \mathbb{Z}}$, where
\[ X^{(M)}_i = L^{-(M)}_p(t_0 + ia, x^*), \quad \text{and} \quad Y^{(M)}_i = Z^{(M)}_{t_0 + ia}(x^*), \quad \text{for } i \in \mathbb{Z}, \]
where
\[ L^{-(M)}_p(t, x) = \{ Z^{(M)}_s(\xi) : (s, \xi) \in \mathbb{Z} \times L,\ \|x - \xi\| \le c(t - s) \ \text{and} \ t - s \le p \}. \]
For $u \in \mathbb{N}$, $i_1 \le i_2 \le \ldots \le i_u \le i_u + k = j$ with $k \in \mathbb{N}$, let us consider the marginal of the field
\[ \big( (X_{i_1}, Y_{i_1}), \ldots, (X_{i_u}, Y_{i_u}), (X_j, Y_j) \big), \qquad (85) \]
and let us define
\[ \Gamma = \{ (t_i, x_i) \in \mathbb{Z}^{1+d} : Z_{t_i}(x_i) \in L^-_p(t_0 + i_s a, x^*) \ \text{or} \ (t_i, x_i) = (t_0 + i_s a, x^*) \ \text{for } s = 1, \ldots, u \}, \]
and
\[ \Gamma' = \{ (t_i, x_i) \in \mathbb{Z}^{1+d} : Z_{t_i}(x_i) \in L^-_p(t_0 + ja, x^*) \ \text{or} \ (t_i, x_i) = (t_0 + ja, x^*) \}. \]
Then $r = dist(\Gamma, \Gamma')$. In particular $\Gamma \subset V^r_{\Gamma'}$, and $r = (j - i_u)a - p$. For $F \in \mathcal{G}^*_u$ and $G \in \mathcal{G}_1$, we then have
\[ |Cov(F(L_{i_1}, \ldots, L_{i_u}), G(L_j))| \qquad (86) \]
\[ \le |Cov(F(L_{i_1}, \ldots, L_{i_u}), G(L_j) - G(L^{(M)}_j))| \qquad (87) \]
\[ + |Cov(F(L_{i_1}, \ldots, L_{i_u}), G(L^{(M)}_j))|. \qquad (88) \]
The summand (87) is less than or equal to
\[ 2\|F\|_\infty Lip(G)\, E[|L_j - L^{(M)}_j|] \le 2\|F\|_\infty Lip(G)\big( E[|Y_j - Y^{(M)}_j|] + E[|h(X_j) - h(X^{(M)}_j)|] \big) \le 2\|F\|_\infty Lip(G)\big( Lip(h)\, a(p, c) + 1 \big)\, E[|Z_{t_1}(x_1) - Z^{(M)}_{t_1}(x_1)|] \]
by stationarity of the field $Z$, and because $L$ and $h$ are Lipschitz functions. Moreover, the function $G(L^{(M)}_j)$ belongs to $\mathcal{G}_{a(p,c)+1}$.
[Figure 7: Integration set and truncated integration set of an MMAF $Z_t(y, z)$ for $d = 2$. Panels (a)-(b): integration set $A_0(0, 0)$ for $c = 1/\sqrt{2}$ and $c = 2$, respectively, together with the complement of $V^r_{(0,0,0)}$ for $r = 3$. Panels (c)-(d): $A_0(0, 0)$ together with $A_0(\psi, \psi)$, $A_0(-\psi, \psi)$, $A_0(\psi, -\psi)$ and $A_0(-\psi, -\psi)$ for $\psi = r \min(1, c/\sqrt{2})$. Panels (e)-(f): $B$ is the integration set of $Z^{(\psi)}_0(0, 0)$, shown together with $A_0(-\psi, -\psi)$ and the complement of $V^r_{(0,0,0)}$ for $r = 3$, for $c = 1/\sqrt{2}$ and $c = 2$, respectively.]
Let $(X, Y), (X', Y') \in \mathbb{R}^{a(p,c)+1}$, then
\[ |G(L(h(X^{(M)}), Y^{(M)})) - G(L(h(X'^{(M)}), Y'^{(M)}))| \le Lip(G)\, |L(h(X^{(M)}), Y^{(M)}) - L(h(X'^{(M)}), Y'^{(M)})| \le Lip(G)\big( |h(X^{(M)}) - h(X'^{(M)})| + |Y^{(M)} - Y'^{(M)}| \big) \le Lip(G)(Lip(h) + 1)\big( \|X - X'\|_1 + |Y - Y'| \big), \]
and $Lip(G(L^{(M)}_j)) \le Lip(G)(Lip(h) + 1)$.
Because $Z$ is a $\theta$-lex weakly dependent random field, (88) is less than or equal to
\[ \tilde{d}\, \|F\|_\infty\, Lip(G)(Lip(h) + 1)\, \theta_{lex}(r). \]
We now choose $M = r$ and obtain that (86) is less than or equal to
\[ \|F\|_\infty\, Lip(G)\, \tilde{d}\, \big( Lip(h)\, a(p, c) + 1 \big) \Big( \frac{2}{\tilde{d}}\, E[|Z_{t_1}(x_1) - Z^{(r)}_{t_1}(x_1)|] + \theta_{lex}(r) \Big). \]
The quantity above converges to zero as $r \to \infty$. Therefore, $(L_i)_{i \in \mathbb{Z}}$ is a $\theta$-weakly dependent process.

Proof of Proposition 3.7. We drop the bold notation indicating random fields and stochastic processes in the following. We call $L_i = L(h_\beta(X_i), Y_i)$ for $i \in \mathbb{Z}$, as defined in Proposition 2.17, and $\beta \in B$. Moreover, we will use the notation $Z^{(\psi)}_t(x)$ to indicate a truncation of the field $Z$, and $L^{(\psi)} = (L(h_\beta(X^{(\psi)}_i), Y^{(\psi)}_i))_{i \in \mathbb{Z}}$, where
\[ X^{(\psi)}_i = L^{-(\psi)}_p(t_0 + ia, x^*), \quad \text{and} \quad Y^{(\psi)}_i = Z^{(\psi)}_{t_0 + ia}(x^*), \quad \text{for } i \in \mathbb{Z}, \]
where
\[ L^{-(\psi)}_p(t, x) = \{ Z^{(\psi)}_s(\xi) : (s, \xi) \in \mathbb{Z} \times L,\ \|x - \xi\| \le c(t - s) \ \text{and} \ t - s \le p \}. \]
For $u \in \mathbb{N}$, $i_1 \le i_2 \le \ldots \le i_u \le i_u + k = j$ with $k \in \mathbb{N}$, let us consider the marginal of the field
\[ \big( (X_{i_1}, Y_{i_1}), \ldots, (X_{i_u}, Y_{i_u}), (X_j, Y_j) \big), \qquad (89) \]
and let us define
\[ \Gamma = \{ (t_i, x_i) \in \mathbb{Z}^{1+d} : Z_{t_i}(x_i) \in L^-_p(t_0 + i_s a, x^*) \ \text{or} \ (t_i, x_i) = (t_0 + i_s a, x^*) \ \text{for } s = 1, \ldots, u \}, \]
and
\[ \Gamma' = \{ (t_i, x_i) \in \mathbb{Z}^{1+d} : Z_{t_i}(x_i) \in L^-_p(t_0 + ja, x^*) \ \text{or} \ (t_i, x_i) = (t_0 + ja, x^*) \}. \]
Then $r = dist(\Gamma, \Gamma')$. In particular $\Gamma \subset V^r_{\Gamma'}$, and $r = (j - i_u)a - p$. For $F \in \mathcal{G}^*_u$ and $G \in \mathcal{G}_1$, we then have
\[ |Cov(F(L_{i_1}, \ldots, L_{i_u}), G(L_j))| \le |Cov(F(L_{i_1}, \ldots, L_{i_u}), G(L_j) - G(L^{(\psi)}_j))| \qquad (90) \]
\[ + |Cov(F(L_{i_1}, \ldots, L_{i_u}), G(L^{(\psi)}_j))|. \qquad (91) \]
The summand (91) is equal to zero because $\Gamma \subset V^r_{\Gamma'}$; see the proof of Proposition 2.19 for more details about this part of the proof.
We can then bound (90) from above by
\[ 2\|F\|_\infty Lip(G)\, E[|L_j - L^{(\psi)}_j|] \le 2\|F\|_\infty Lip(G)\big( E[|Y_j - Y^{(\psi)}_j|] + E[|h(X_j) - h(X^{(\psi)}_j)|] \big) \qquad (92) \]
\[ \le 2\|F\|_\infty Lip(G)\big( Lip(h)\, a(p, c) + 1 \big)\, E[|Z_{t_1}(x_1) - Z^{(\psi)}_{t_1}(x_1)|], \qquad (93) \]
where (92) holds because $L$ is a function with Lipschitz constant equal to one, and (93) holds given that $h \in L(\mathbb{R}^{a(p,c)})$.
In the present case, we work with linear functions, so that
\[ E[|h_\beta(X_j) - h_\beta(X^{(\psi)}_j)|] = E\Big[ \Big| \sum_{l=1}^{a(p,c)} \beta_{1,l}\big( Z_{t_l}(x_l) - Z^{(\psi)}_{t_l}(x_l) \big) \Big| \Big]. \qquad (94) \]
By stationarity of the field $Z$, we have that (94) is less than or equal to $\|\beta_1\|_1\, E[|Z_{t_1}(x_1) - Z^{(\psi)}_{t_1}(x_1)|]$. Overall, we have that (90) is less than or equal to
\[ 2\|F\|_\infty Lip(G)\big( Lip(h)\, a(p, c) + 1 \big)\, E[|Z_{t_1}(x_1) - Z^{(\psi)}_{t_1}(x_1)|] \quad \text{for } h \text{ a Lipschitz function}, \]
or it is less than or equal to
\[ 2\|F\|_\infty Lip(G)\big( \|\beta_1\|_1 + 1 \big)\, E[|Z_{t_1}(x_1) - Z^{(\psi)}_{t_1}(x_1)|] \quad \text{for } h_\beta \text{ a linear function}. \]
Because of the properties of the truncated field $Z^{(\psi)}_{t_1}(x_1)$, the above bounds converge to zero as $r \to \infty$. Therefore, $L$ is a $\theta$-weakly dependent process.

The proof of Proposition 3.8 is based on a blocking technique used in the papers [34] and [44]. Such results rely on several lemmas. To ease the complete understanding of the proof of Proposition 3.8, we prove these lemmas below, given that they undergo several modifications in our framework. Let us start by partitioning the set $\{1, 2, \ldots, m\}$ into $k$ blocks. Each block will contain $l = \lfloor \frac{m}{k} \rfloor$ terms. Let $h = m - kl < k$ denote the remainder when we divide $m$ by $k$. We now construct $k$ blocks such that the number of elements in the $j$-th block is defined by
\[ \bar{l}_j = \begin{cases} l + 1 & \text{if } j = 1, 2, \ldots, h, \\ l & \text{if } j = h + 1, \ldots, k. \end{cases} \]
Let $(U_i)_{i \in \mathbb{Z}}$ be a stationary process, and $V_m = \sum_{i=1}^{m} U_i$; for $j = 1, \ldots, k$ we define the $j$-th block as
\[ V_{j,m} = U_j + U_{j+k} + \ldots + U_{j + (\bar{l}_j - 1)k} = \sum_{i=1}^{\bar{l}_j} U_{j + (i-1)k}, \]
such that
\[ V_m = \sum_{j=1}^{k} V_{j,m} = \sum_{j=1}^{k} \sum_{i=1}^{\bar{l}_j} U_{j + (i-1)k}. \]
For $j = 1, 2, \ldots, k$, let us define $p_j = \frac{\bar{l}_j}{m}$. It follows that $\sum_{j=1}^{k} p_j = \frac{1}{m} \sum_{j=1}^{k} \bar{l}_j = 1$.

Lemma B.1. For all $s \in \mathbb{R}$,
\[ E\Big[ \exp\Big( \frac{s V_m}{m} \Big) \Big] \le \sum_{j=1}^{k} p_j\, E\Big[ \exp\Big( \frac{s V_{j,m}}{\bar{l}_j} \Big) \Big]. \]
The proof of the result above is due to Hoeffding [35].

Lemma B.2. Let the assumptions of Proposition 3.8 hold and define the process $(U_i)_{i \in \mathbb{Z}}$ such that $U_i := f(X_i) - E[f(X_i)]$. For all $j = 1, 2, \ldots, k$, $l \ge 2$ and $0 < s < \frac{3l}{|b - a|}$,
\[ M_{\bar{l}_j}(s) = \Big| E\Big[ \prod_{i=1}^{\bar{l}_j} \exp\Big( \frac{s\, U_{j+(i-1)k}}{\bar{l}_j} \Big) \Big] - \prod_{i=1}^{\bar{l}_j} E\Big[ \exp\Big( \frac{s\, U_{j+(i-1)k}}{\bar{l}_j} \Big) \Big] \Big| \le \exp\big( l - 1 + s|b - a| \big)\, \theta(k)\, \frac{s}{l}. \qquad (95) \]
The same result holds when defining the process $(U_i)_{i \in \mathbb{Z}}$ via $U_i = E[f(X_i)] - f(X_i)$.

Proof. Let us first discuss the case $U_i = f(X_i) - E[f(X_i)]$: the process $U$ has mean zero and $|U_i| \le |b - a|$. Let us define $\mathcal{F}_j = \sigma(U_i, i \le j)$.
+M¯lj := +�����E +� ¯lj +� +i=1 +exp +�sU j+(i−1)k +¯lj +�� +− +¯lj +� +i=1 +E +� +exp +�sU j+(i−1)k +¯lj +������� += +�����E +� ¯lj−1 +� +i=1 +exp +�sU j+(i−1)k +¯lj +� +E +� +exp +�sU j+(¯lj−1)k +¯lj +����Fj+(¯lj−2)k +�� +− +¯lj +� +i=1 +E +� +exp +�sU j+(i−1)k +¯lj +������� +≤ +�����E +� ¯lj−1 +� +i=1 +exp +�sU j+(i−1)k +¯lj +�� +E +� +exp +�sU j+(¯lj−1)k +¯lj +����Fj+(¯lj−2)k +� +− E +� +exp +�sU j+(¯lj−1)k +¯lj +��������� ++ +�����E +� +exp +�sU j+(¯lj−1)k +¯lj +������� +�����E +� ¯lj−1 +� +i=1 +exp +�sU j+(i−1)k +¯lj +�� +− +¯lj−1 +� +i=1 +E +� +exp +�sU j+(i−1)k +¯lj +������� +≤ +����� +¯lj−1 +� +i=1 +exp +�sU j+(i−1)k +¯lj +������ +∞ +E +����E +� +exp +�sU j+(¯lj−1)k +¯lj +����Fj+(¯lj−2)k +� +− E +� +exp +�sU j+(¯lj−1)k +¯lj +����� +� ++ +����� exp +�sU j+(¯lj−1)k +¯lj +������ +∞ +M¯lj−1 +The above is then less than or equal to +exp +�s(¯lj − 1)|b − a| +¯lj +� +exp +�−s|a| +¯lj +� +E +������E +� +exp +� +sf(Xj+(¯lj−1)k) +¯lj +������Fj+(¯lj−2)k +� +(96) +− E +� +exp +� +sf(Xj+(¯lj−1)k) +¯lj +������� +� ++ exp +�s|b − a| +¯lj +� +M¯lj−1. +Note that the function g(x) = +exp +� +sx +¯lj +� +exp +� +s|b| +¯lj +� +s +¯lj +1A(x) is in L1 for each s, where A = {x : |x| ≤ |b − a|}. We +51 + +then use the mixingales-type representation of the θ-coefficients of f(X), and obtain that +M¯lj ≤ exp +�s¯lj|b − a| +¯lj +� +θ(k) s +¯lj ++ exp +�s|b − a| +¯lj +� +M¯lj−1. +Let now, u = exp +� +s|b−a| +¯lj +� +, we have that +M¯lj ≤ θ(k)u +¯lj s +¯lj ++ uM¯lj−1 +≤ (¯lj − 2)θ(k)u +¯lj s +¯lj ++ u +¯lj−2���E +� +exp +�sU j +¯lj +� +exp +�sU j+k +¯lj +�� +− E +� +exp +�sU j +¯lj +�� +E +� +exp +�sU j+k +¯lj +����� +≤ (¯lj − 2)θ(k)u +¯lj s +¯lj ++ u +¯lj−1���E +� +exp +�sU j+k +¯lj +� +|Fj +� +− E +� +exp +�sUj+k +¯lj +����� +≤ (¯lj − 1)θ(k)u +¯lj s +¯lj +≤ exp +� +¯lj − 2 + s|b − a| +� +θ(k) s +¯lj +. +By assumption ¯lj ≥ 2. Moreover, we apply the following estimation in the last inequality: for all x > 0, +we have that log(x) ≤ x−1. In conclusion, for all j = 1, . . . , k (and remembering that ¯lj = l or ¯lj = l +1) +M¯lj ≤ exp(l − 1 + s |b − a|)θ(k)s +l . +Similar calculations apply when U i = E[f(Xi)] − f(Xi). +Remark B.3. Note that by showing Lemma B.2 for U i = E[f(Xi)] − f(Xi) there is a slight change in +the proof at point (96). However, in the end, the result (95) equal holds. +Lemma B.4. Let the Assumptions of Proposition 3.8 hold and define the process (U i)i∈Z such that +U i = f(Xi) − E[f(Xi)]. For all j = 1, 2, . . . , k, l ≥ 2 and 0 < s < +3l +|b−a| +E +� +exp +�sV j,m +¯lj +�� +≤ exp +� +s2E[U 2 +1] +2l +� +1 − s|b−a| +3l +� +� ++ s +l exp(l − 1 + s|b − a|)θ(k) +The same result holds when defining the process (U i)i∈Z for U i = E[f(Xi)] − f(Xi). +Proof. +E +� +exp +�sV j,m +¯lj +�� += E +� +exp +� +¯lj +� +i=1 +sU j+(i−1) k +¯lj +�� +≤ +¯lj +� +i=1 +E +� +exp +�sU j+(i−1) k +¯lj +�� ++ +���E +� ¯lj +� +i=1 +exp +�sU j+(i−1) k +¯lj +�� +− +¯lj +� +i=1 +E +� +exp +�sU j+(i−1) k +¯lj +����� += E +� +exp +�sU j+(i−1) k +¯lj +��¯lj ++ M¯lj +(97) +We have that E[U j+(i−1) k] = 0 by definition of the process U, and +U j+(i−1) k +¯lj +satisfies the Bernstein +52 + +moment condition (Remark A1 [44]) with K1 = |b−a| +3¯lj . Hence, for ¯lj ≥ 2 and 0 < s < +3¯lj +|b−a| +E +� +exp +�sU j+(i−1) k +¯lj +�� +≤ exp +� +s2E[(U j+(i−1)k/¯lj)2] +2 +� +1 − s|b−a| +3¯lj +� +� +. +(98) +Because ¯lj ≥ l, we can conclude that the inequality above holds for l ≥ 2. Moreover, by stationarity of +the process U and since for all j = 1, 2, . . . 
, $k$, we can bound (97) uniformly with respect to the index $j$ by using Lemma B.2 and noticing that
\[ 0 < s < \frac{3l}{|b - a|} \le \frac{3\bar{l}_j}{|b - a|}, \quad \text{and then} \quad \Big( 1 - \frac{s|b - a|}{3\bar{l}_j} \Big) \ge \Big( 1 - \frac{s|b - a|}{3l} \Big). \]
The same proof applies when defining $U_i = E[f(X_i)] - f(X_i)$.

Proof of Proposition 3.8. By combining Lemmas B.1, B.2 and B.4, the bound for the Laplace transform of the process $U := f(X) - E[f(X)]$ for $0 < s < \frac{3l}{|b - a|}$ is given by
\[ E\Big[ \exp\Big( s\, \frac{1}{m} \sum_{i=1}^{m} f(X_i) - E[f(X_i)] \Big) \Big] = E\Big[ \exp\Big( s\, \frac{1}{m} \sum_{i=1}^{m} U_i \Big) \Big] = E\Big[ \exp\Big( \frac{s}{m} \sum_{j=1}^{k} V_{j,m} \Big) \Big] = E\Big[ \exp\Big( \frac{s}{m} \sum_{j=1}^{k} \sum_{i=1}^{\bar{l}_j} U_{j+(i-1)k} \Big) \Big] \]
\[ \le \sum_{j=1}^{k} p_j\, E\Big[ \exp\Big( \frac{s V_{j,m}}{\bar{l}_j} \Big) \Big] \qquad (99) \]
\[ \le \exp\Big( \frac{s^2\, Var(f(X_1))}{2l\big( 1 - \frac{s|b-a|}{3l} \big)} \Big) + \frac{s}{l} \exp\big( l - 1 + s|b - a| \big)\, \theta(k), \qquad (100) \]
where $V_{j,m} = \sum_{i=1}^{\bar{l}_j} U_{j+(i-1)k}$. The inequalities (99) and (100) hold because of Lemma B.1 and Lemma B.4, respectively. We have then proved the inequality (42). The same proof applies for showing the bound (43) by defining $U_i = E[f(X_i)] - f(X_i)$.

Proof of Proposition 3.10. Let us choose $f(s) = \frac{s}{\epsilon}$ for $0 < s < \epsilon$, which satisfies the assumptions of Proposition 3.8 and has support in $[0, 1]$. Note that $\Delta(\beta) = \frac{\epsilon}{m} \big( \sum_{i=1}^{m} E[f(L^\epsilon_i)] - f(L^\epsilon_i) \big)$. We have in this case that $U_i = E[f(L^\epsilon_i)] - f(L^\epsilon_i)$, so that $E[U_i] = 0$ and $|U_i| \le 1$. Equally, $\Delta'(\beta) = r_\epsilon(\beta) - R_\epsilon(\beta) = \frac{\epsilon}{m} \big( \sum_{i=1}^{m} f(L^\epsilon_i) - E[f(L^\epsilon_i)] \big)$, and again, by defining $U'_i = f(L^\epsilon_i) - E[f(L^\epsilon_i)]$, we have $E[U'_i] = 0$ and $|U'_i| \le 1$. Note that the process $(f(L^\epsilon_i))_{i \in \mathbb{Z}}$ has the same $\theta$-weak coefficients as the process $(L^\epsilon_i)_{i \in \mathbb{Z}}$ because $f$ is a Lipschitz function. We follow below the notations of Table 1.
(i) By Proposition 3.8, applied for $0 < \epsilon\sqrt{l} < 3\sqrt{l}$, and the bound (41),
\[ E\big[ \exp\big( \sqrt{l}\, \Delta(\beta) \big) \big] = E\Big[ \exp\Big( \epsilon\sqrt{l}\, \frac{1}{m} \sum_{i=1}^{m} U_i \Big) \Big] \le \exp\Big( \frac{3\epsilon^2}{2(3 - \epsilon)} \Big) + 2\bar{\alpha}\big( \|\beta_1\|_1 + 1 \big) \exp\big( l - 1 + 3\sqrt{l} - \lambda r \big), \]
where the last inequality holds because of the particular shape of the chosen function $f$. Let us determine conditions on the parameters $a_t$ and $k$ such that $\exp(l - 1 + 3\sqrt{l} - \lambda r) = \exp(l - 1 + 3\sqrt{l} - \lambda(ka - p)) \le 1$. Given that $l \ge 9$, the inequality above holds if
\[ \frac{2N}{a_t k} - 1 - \lambda a_t h_t k + \lambda p \le 0. \]
The above is equivalent to
\[ \lambda h_t a_t^2 k^2 - (\lambda p - 1) a_t k - 2N \ge 0, \]
which holds if
\[ a_t k \ge \frac{(\lambda p - 1) + \sqrt{(\lambda p - 1)^2 + 8\lambda h_t N}}{2\lambda h_t}. \]
(ii) As in (i), we have that
\[ E\big[ \exp\big( \sqrt{l}\, \Delta(\beta) \big) \big] = E\Big[ \exp\Big( \epsilon\sqrt{l}\, \frac{1}{m} \sum_{i=1}^{m} U_i \Big) \Big] \le \exp\Big( \frac{3\epsilon^2}{2(3 - \epsilon)} \Big) + 2\bar{\alpha}\big( \|\beta_1\|_1 + 1 \big) \exp\big( l - 1 + 3\sqrt{l} \big)\, r^{-\lambda} \le \exp\Big( \frac{3\epsilon^2}{2(3 - \epsilon)} \Big) + 2\bar{\alpha}\big( \|\beta_1\|_1 + 1 \big) \exp\big( l - 1 + 3\sqrt{l} - \lambda \log(ak - p) \big). \]
We have that $\exp(l - 1 + 3\sqrt{l} - \lambda \log(ak - p)) \le 1$ holds if
\[ \frac{2N}{a_t k} - 1 - \lambda \log(ak - p) \le 0, \]
given that $l \ge 9$. The latter inequality holds if and only if the parameters $a_t$ and $k$ are chosen such that
\[ \frac{2N - a_t k}{a_t k \log(ak - p)} \le \lambda. \]

Proof of Corollary 3.13. We drop the bold notation indicating random fields and stochastic processes in this proof. When working with the family of predictors $h_{net,w}$ for $w \in B'$, the proof of Proposition 3.4 changes in the estimation of the bound (92). In particular, for a given $w \in B'$ we have that
\[ E[|h(X_j) - h(X^{(\psi)}_j)|] = E\Big[ \Big| \sum_{l=1}^{K} \alpha_l\, \sigma(\beta_l^\top X_j + \gamma_l) - \sum_{l=1}^{K} \alpha_l\, \sigma(\beta_l^\top X^{(\psi)}_j + \gamma_l) \Big| \Big] \le \sum_{l} |\alpha_l|\, \|\beta_l\|_1\, E[|Z_{t_1}(x_1) - Z^{\psi}_{t_1}(x_1)|] \]
by the stationarity of the field $Z$. The proof of the Corollary then proceeds identically to the proof of Proposition 3.10, by modifying the bound (40) with the above calculations.
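To make the quantity appearing in the bound of Corollary 3.13 concrete, the following minimal sketch builds a one-hidden-layer predictor of the form $h_{net,w}(x) = \sum_{l=1}^{K} \alpha_l \sigma(\beta_l^\top x + \gamma_l)$ with a 1-Lipschitz activation and evaluates the factor $\sum_l |\alpha_l|\, \|\beta_l\|_1$ that replaces $Lip(h)$ in the estimate above; the weights and dimensions are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

K, n_features = 8, 5                      # hypothetical network width and input dimension
alpha = rng.normal(size=K)                # outer weights alpha_l
beta = rng.normal(size=(K, n_features))   # inner weight vectors beta_l
gamma = rng.normal(size=K)                # biases gamma_l

def h_net(x, sigma=np.tanh):
    """One-hidden-layer predictor h_net,w(x) = sum_l alpha_l * sigma(beta_l^T x + gamma_l)."""
    return float(np.sum(alpha * sigma(beta @ x + gamma)))

# Factor sum_l |alpha_l| * ||beta_l||_1 entering the bound of Corollary 3.13
# (it plays the role of Lip(h) when sigma is 1-Lipschitz, e.g. tanh).
lipschitz_factor = float(np.sum(np.abs(alpha) * np.sum(np.abs(beta), axis=1)))

x = rng.normal(size=n_features)           # a stand-in for the past-lightcone features X_j
print(h_net(x), lipschitz_factor)
```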
+54 + +We remind the reader that the proof of Proposition 3.18 and 3.21 make use of the below Lemma +that we recall just for completeness. +Lemma B.5 (Legendre transform of the Kullback-Leibler divergence function). For any π ∈ M1 ++(B), +for any measurable function h : B → R such that π[exp(h)] ≤ ∞, we have that +π[exp(h)] = exp +� +sup +ρ∈M1 ++(B) +ρ[h] − KL(ρ||π) +� +, +with the convention ∞ − ∞ = −∞. Moreover, as soon as h is upper bounded on the support of π, the +supremum with respect to ρ in the right-hand side is reached for the Gibbs distribution with Radon- +Nikodym derivative w.r.t. π equal to +exp(h) +π[exp(h)]. +The proof of Lemma (B.5) has been known since the work of Kullback [42] in the case of a finite +space B, whereas the general case has been proved by Donsker and Varadhan [27]. Given this result, we +are now ready to prove our PAC Bayesian bounds. +Proof of Proposition 3.18. We follow a standard proof scheme developed in [12]. +√ +l (ˆρ[Rϵ(β)] − ˆρ[rϵ(β)]) = ˆρ[ +√ +l (Rϵ(β) − rϵ(β))] +≤ KL(ˆρ||π) + log(π[exp( +√ +l (Rϵ(β) − rϵ(β)))]) +(P-a.s. by Lemma B.5). +We have that π[exp( +√ +l (Rϵ(β) − rϵ(β)))] := AS is a random variable on S. By Markov’s inequality, for +δ ∈ (0, 1) +P +� +AS ≤ E[AS] +δ +� +≥ 1 − δ. +This in turn implies that with probability at least 1 − δ over S +ˆρ[Rϵ(β)] − ˆρ[rϵ(β)] ≤ KL(ˆρ||π) + log 1 +δ +√ +l ++ 1 +√ +l +log +� +π +� +E +� +exp +�√ +l∆(β) +���� +(101) +≤ KL(ˆρ||π) + log 1 +δ +√ +l ++ 1 +√ +l +log +� +π +� +exp +� +3ϵ2 +2(3 − ϵ) +� ++ 2(∥β1∥1 + 1)¯α +�� +, +(102) +where (101) holds by swapping the expectation over S and over π using Fubini’s Theorem, and (102) is +obtained by using Proposition 3.10. Similarly, the second inequality can easily be obtained. +Proof of Proposition 3.21. Let h = − +√ +lrϵ(β) and d¯ρ +dπ = +exp(h) +π[exp(h)]. By Lemma B.5, we have that +¯ρ = arg inf +ˆρ +� +K(ˆρ||π) − ˆρ[h] +� += arg inf +ˆρ +�K(ˆρ||π) +√ +l ++ ˆρ[rϵ(β)] +� +, +and by using (58), for all δ ∈ (0, 1) +P +� +¯ρ[Rϵ(β)] ≤ inf +ˆρ +� +ˆρ[rϵ(β)]+ +� +KL(ˆρ||π)+log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 1 +√ +l ++ 1 +√ +l +log +� +π[1+2(∥β1∥1+1)¯α] +��� +≥ 1−δ +(103) +55 + +A union bound gives that the inequality (103) and the one in (59) hold with probability at least 1 − 2δ. +By now substituting to ˆρ[rϵ(β)] in (103) its bound obtained in inequality (59), we obtain that +P +� +¯ρ[Rϵ(β)] ≤ inf +ˆρ +� +ˆρ[Rϵ(β)]+ +� +KL(ˆρ||π)+log +�1 +δ +� ++ +3ϵ2 +2(3 − ϵ) +� 1 +√ +l ++ 1 +√ +l +log +� +π[1+2(∥β1∥1+1)¯α] +��� +≥ 1−2δ. +We conclude by replacing δ with δ +2. +The modeling selection procedure of the paper discussed in Section 3.3 can be proven by using the +following two Lemmas +Lemma B.6. Let 0 < ϵ < 3, and the assumptions of Proposition 3.10 hold. Moreover, let fp be a regular +conditional probability measure such that fp << ¯ρp and ¯ρp << fp for each possible training data set S. +Then, +E +� +sup +fp +exp +� +fp +�√ +l∆(β) − log(1 + 2(∥β1∥1 + 1)¯α) − +3ϵ2 +2(3 − ϵ) − log dfp +d¯ρp +��� +(104) +≤ E +� +sup +fp +fp +� +exp +�√ +l∆(β) − log(1 + 2(∥β1∥1 + 1)¯α) − +3ϵ2 +2(3 − ϵ) − log dfp +d¯ρp +��� +≤ 1 +Proof. By Proposition 3.10, it holds that +E[exp( +√ +l∆(β))] ≤ exp +� +3ϵ2 +2(3 − ϵ) + log(1 + 2(∥β1∥1 + 1)¯α) +� +. +which is equivalent to +E +� +exp +�√ +l∆(β) − log(1 + 2(∥β1∥1 + 1)¯α) − +3ϵ2 +2(3 − ϵ) +�� +≤ 1. +(105) +By using the tower property, for all p = 1, . . . , ⌊ N +2 ⌋ +E +� +exp +�√ +l∆(β) − log(1 + 2(∥β1∥1 + 1)¯α) − +3ϵ2 +2(3 − ϵ) +�� += E +� +¯ρp +� +exp +�√ +l∆(β) − log(1 + 2(∥β1∥1 + 1)¯α) − +3ϵ2 +2(3 − ϵ) +��� +. 
+(106) +Moreover, for any non-negative and measurable function b on Bp, it holds that +¯ρp[b(β)] = +� +b(β) ¯ρp(dβ) ≥ +� +dfp +d¯ +ρp (β)>0 +b(β) ¯ρp(dβ) = +� +dfp +d¯ +ρp (β)>0 +b(β) d¯ρp +dfp +fp(dβ) += fp +� +b(β) exp +� +− log dfp +d¯ρp +�� +. +(107) +From (105), (106) and (107) we obtain that +E +� +fp +� +exp +�√ +l∆(β) − log(1 + 2(∥β1∥1 + 1)¯α) − +3ϵ2 +2(3 − ϵ) − log dfp +d¯ρp +��� +≤ 1. +56 + +By using now the Jensen inequality, we have that +E +� +exp +� +fp +�√ +l∆(β) − log(1 + 2(∥β1∥1 + 1)¯α) − +3ϵ2 +2(3 − ϵ) − log dfp +d¯ρp +��� +≤ E +� +fp +� +exp +�√ +l∆(β) − log(1 + 2(∥β1∥1 + 1)¯α) − +3ϵ2 +2(3 − ϵ) − log dfp +d¯ρp +��� +≤ 1. +(108) +For a given p and by taking the supremum on every possible fp distribution, we conclude. +Lemma B.7. Let the assumptions of Lemma B.6 hold. Moreover, let g be a probability distribution on +the grid 1, . . . , ⌊ N +2 ⌋, such that g = � +p wpδp, where wp = +1 +⌊ N +2 ⌋. Then, we have that +E +� � +p +wp sup +fp +exp +� +fp +�√ +l∆(β) − log(1 + 2(∥β1∥1 + 1)¯α) − +3ϵ2 +2(3 − ϵ) − log dfp +d¯ρp +��� +≤ 1, +(109) +and +E +� � +p +wp sup +fp +fp +� +exp +�√ +l∆(β) − log(1 + 2(∥β1∥1 + 1)¯α) − +3ϵ2 +2(3 − ϵ) − log dfp +d¯ρp +��� +≤ 1. +(110) +Proof. Let hp = supfp fp +� +exp +�√ +l∆(β))−log(1+2(∥β1∥1 +1)¯α)− +3ϵ2 +2(3−ϵ) −log dfp +d¯ρp +�� +, and h = � +p hp1Bp. +By definition of the probability measure g, and applying Lemma B.6, +E[g[h]] = E +� � +p +wp h +� +≤ 1. +By substituting the explicit expression of h, we obtain that the inequality (110) holds and consequently +also (109) again by Lemma B.6. +Proof of Proposition 3.28. By inequality (109) and the Chernoff bound, we obtain that for all p = +1, . . . , +� +N +2 +� +and fp, and δ ∈ (0, 1), with probability greater than 1 − δ that +fp[Rϵ(β)] ≤ fp +� +rϵ(β)+ log(2∥β1∥1 + 3) +√ +l +� ++ 1 +√ +l +KL(fp||¯ρp)+ 1 +√ +l +log +� 1 +wp +� ++ 1 +√ +l +� +log ¯α+ +3ϵ2 +2(3 − ϵ) +log 1 +δ +� +. +For all Gibbs randomized estimator ¯ρp, with probability greater than 1 − δ, it holds that +¯ρp[Rϵ(β)] ≤ ¯ρp +� +rϵ(β) + log(2∥β1∥1 + 3) +√ +l +� ++ 1 +√ +l +log 1 +wp ++ 1 +√ +l +� +log ¯α + +3ϵ2 +2(3 − ϵ) + log 1 +δ +� +, +(111) +being the KL-divergence equal to zero in this case. By the definition of the parameter p∗ +inf +p +� +¯ρp +� +rϵ(β) + log(2∥β1∥1 + 3) +√ +l +� ++ 1 +√ +l +log +��N +2 +��� += ¯ρp∗ +� +rϵ(β) + log(2∥β1∥1 + 3) +√ +l∗ +� ++ +1 +√ +l∗ log +��N +2 +�� +. +Then, by using the above equality in (111) computed for ¯ρp∗ we conclude. +57 + +Proof of Proposition 3.29. If the inequality (65) holds, then because KL(¯ρp||πp) ≥ 0 for all possible +observed training sets +P +� +¯ρp∗[Rϵ(β)] ≤ inf +p +� +¯ρp +� +rϵ(β) +� ++ 1 +√ +l +2 + 3C +C ++ 1 +√ +l +KL(¯ρp||πp) + 1 +√ +l +log +��N +2 +��� ++ +3ϵ2 +2(3 − ϵ) +√ +l∗ + +1 +√ +l∗ log ¯α +δ +� +≥ 1 − 2δ, +(112) +by choosing a δ ∈ (0, 1) and using a union bound. Because of Lemma B.5, the above is equivalent to +P +� +¯ρp∗[Rϵ(β)] ≤ inf +p +� +inf +ˆρ∈M1 ++(Bp) ˆρ +� +rϵ(β) +� ++ 1 +√ +l +2 + 3C +C ++ 1 +√ +l +KL(ˆρ||πp) + 1 +√ +l +log +��N +2 +��� ++ +3ϵ2 +2(3 − ϵ) +√ +l∗ + +1 +√ +l∗ log ¯α +δ +� +≥ 1 − 2δ. +(113) +Following the line of proof in Proposition 3.18 applied to a reference distribution πp defined on M1 ++(Bp) +and such that πp[∥β1∥1] ≤ 1, it is straightforward to prove that for all ˆρ ∈ M1 ++(Bp) and δ ∈ (0, 1). 
\[ P\Big( \hat{\rho}[r_\epsilon(\beta)] \le \hat{\rho}[R_\epsilon(\beta)] + \Big( KL(\hat{\rho}\|\pi_p) + \log\Big( \frac{1}{\delta} \Big) + \frac{3\epsilon^2}{2(3 - \epsilon)}\, l^{1 - 2\alpha} \Big) \frac{1}{\sqrt{l}} + \frac{1}{\sqrt{l}} \log\big( \pi_p\big[ 1 + 2(\|\beta_1\|_1 + 1)\bar{\alpha} \big] \big) \Big) \ge 1 - \delta. \qquad (114) \]
By using a union bound, and replacing $\hat{\rho}[r_\epsilon(\beta)]$ in (113) with the inequality appearing in (114),
\[ P\Big( \bar{\rho}_{p^*}[R_\epsilon(\beta)] \le \inf_p \Big\{ \inf_{\hat{\rho} \in M^1_+(B_p)} \hat{\rho}\big[ R_\epsilon(\beta) \big] + \frac{2}{\sqrt{l}}\, \frac{2 + 3C}{C} + \frac{2}{\sqrt{l}}\, KL(\hat{\rho}\|\pi_p) + \frac{1}{\sqrt{l}} \log\Big( \Big\lfloor \frac{N}{2} \Big\rfloor \Big) \Big\} + \frac{3\epsilon^2}{(3 - \epsilon)\sqrt{l^*}} + \frac{2}{\sqrt{l^*}} \log \frac{\bar{\alpha}}{\delta} \Big) \ge 1 - 3\delta. \qquad (115) \]
By choosing $\delta$ equal to $\frac{\delta}{3}$ we conclude.

Proof of Corollary 3.31. First of all, let us remark that $P_{\bar{\rho}_{p^*}}$ is a well-defined probability measure, as the Gibbs estimator is defined conditionally on the observations in the training set $S$. By using inequality (110) and the Chernoff bound, we can show that
\[ P_{\bar{\rho}_{p^*}}\Big( R_\epsilon(\hat{\beta}) \le r_\epsilon(\beta) + \frac{1}{\sqrt{l}} \Big( \frac{3\epsilon^2}{2(3 - \epsilon)} + \log\big( 1 + 2(\|\beta_1\|_1 + 1)\bar{\alpha} \big) + \log \frac{1}{w_p} + \log \frac{1}{\delta} \Big) \Big) \ge 1 - \delta. \qquad (116) \]
It is straightforward to see that Lemmas B.6 and B.7 also hold if we substitute the expression $\Delta'(\beta)$ for $\Delta(\beta)$. Therefore, with probability $1 - \delta$ it holds that
\[ P_{\bar{\rho}_{p^*}}\Big( r_\epsilon(\hat{\beta}) \le R_\epsilon(\beta) + \frac{1}{\sqrt{l}} \Big( \frac{3\epsilon^2}{2(3 - \epsilon)} + \log\big( 1 + 2(\|\beta_1\|_1 + 1)\bar{\alpha} \big) + \log \frac{1}{w_p} + \log \frac{1}{\delta} \Big) \Big) \ge 1 - \delta. \qquad (117) \]
Moreover, for a $\beta \in B$,
\[ R_\epsilon(\beta) - R_\epsilon(\bar{\beta}) \le E\Big[ \Big| Y - \beta_0 - \sum_{i=1}^{a(c^*, p^*)} \beta_{1,i} X_i \Big| - \Big| Y - \bar{\beta}_0 - \sum_{i=1}^{a(c^*, p^*)} \bar{\beta}_{1,i} X_i \Big| \Big] \le E\Big[ \Big| (\bar{\beta}_0 - \beta_0) - \sum_{i=1}^{a(c^*, p^*)} (\bar{\beta}_{1,i} - \beta_{1,i}) X_i \Big| \Big] \]
\[ \le \|\bar{\beta} - \beta\|\, E[|Z|] \qquad (118) \]
\[ \le \frac{1}{C}\, E[|Z|]. \qquad (119) \]
Note that inequality (118) holds because the model underlying the data is stationary, whereas (119) holds because of the assumptions made on $B$.
Let us now plug the estimates (117) and (118) into (116); by using a union bound we obtain that
\[ P_{\bar{\rho}_{p^*}}\Big( R_\epsilon(\hat{\beta}) \le \frac{1}{C}\, E[|Z|] + R_\epsilon(\bar{\beta}) + \frac{2}{\sqrt{l}} \Big( \frac{3\epsilon^2}{2(3 - \epsilon)} + \log\big( 1 + 2(\|\beta_1\|_1 + 1)\bar{\alpha} \big) + \log \frac{1}{w_p} + \log \frac{1}{\delta} \Big) \Big) \ge 1 - 3\delta. \]
By choosing $\delta$ equal to $\frac{\delta}{3}$ we obtain the thesis.
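As a practical illustration of the Gibbs randomized estimator that these bounds certify (Lemma B.5 and Proposition 3.21), the following minimal sketch approximates the Gibbs distribution $\bar{\rho}(d\beta) \propto \exp(-\sqrt{l}\, r_\epsilon(\beta))\, \pi(d\beta)$ over a finite sample of linear predictors drawn from the reference distribution $\pi$, using the $\epsilon$-truncated absolute loss; the data, the prior sample, the block length $l$ and all numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training pairs: X_i collects past-lightcone values, Y_i the one-step-ahead response.
m, n_features = 200, 4
X = rng.normal(size=(m, n_features))
Y = X @ np.array([0.5, -0.2, 0.1, 0.0]) + 0.1 * rng.normal(size=m)

eps, l = 1.0, 25                     # truncation level of the loss and block length (assumed)
prior_sample = rng.normal(scale=0.5, size=(500, n_features + 1))  # finite sample from the reference pi

def r_eps(beta):
    """Empirical epsilon-truncated absolute loss of the linear predictor beta_0 + beta_1^T x."""
    resid = np.abs(Y - beta[0] - X @ beta[1:])
    return float(np.mean(np.minimum(resid, eps)))

log_w = np.array([-np.sqrt(l) * r_eps(b) for b in prior_sample])
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()             # Gibbs weights proportional to exp(-sqrt(l) * r_eps(beta))

beta_aggregated = weights @ prior_sample   # aggregated (mean) randomized predictor rho[beta]
print(beta_aggregated)
```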
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Performing causal future predictions is a highlight of our methodology as its potential application to data with short and long-range dependence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' We conclude by showing the performance of the learning methodology in an example with linear predictors and simulated spatio-temporal data from an STOU process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' MSC 2020: primary 60E07, 60E15, 60G25, 60G60;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' secondary 62C10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Keywords: stationary models, weak dependence, oracle inequalities, randomized estimators, causal predic- tions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' 1 Introduction Modeling spatio-temporal data representing measurements from a continuous physical system introduces various methodological challenges.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' These include finding models that can account for the serial correlation typically observed along their spatial and temporal dimensions and simultaneously have good prediction performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Statistical models used nowadays to analyze spatio-temporal data are Gaussian processes [5], [23], [53], and [63];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' spatio-temporal kriging [19], and [46];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' space-time autoregressive moving average models [30];' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' point processes [31], and hierarchical models [19].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' An important common denominator of statistical modeling is that they enable predictions once the variogram (covariance) structure or the data distribution (up to a set of parameters) is carefully chosen in relation to the studied phenomenon and practitioners’ experience.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' This paper aims to define a novel theory-guided (or physics-informed) machine learning methodology for spatio-temporal data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' By this name, go all hybrid procedures that use a stochastic (or deterministic) ∗Ulm University, Institute of Mathematical Finance, Helmholtzstrae 18, 89069 Ulm, Germany.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' E-mail: imma.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='curato@uni-ulm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='de.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' †Ulm University, Institute of Stochastics, Helmholtzstrae 18, 89069 Ulm, Germany.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' E-mail: orkun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='furat@uni-ulm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='de.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' ‡Imperial College, Department of Mathematics, South Kensington Campus, SW7 2AZ London, United Kingdom.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' E- mail: b.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='stroh@imperial.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='ac.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='uk.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='00736v1 [stat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='ML] 2 Jan 2023 model in synergy with a specific data science one.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Such methodologies have started to gain prominence in several scientific disciplines such as earth science, quantum chemistry, bio-medical science, climate science, and hydrology modeling as, for example, described in [41], [51], [50], and [54].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' As in the statistical models cited above, we model the spatial-temporal covariance structure of the observed data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' However, we perform predictions using a generalized Bayesian algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Let us start by introducing the stochastic model involved in our methodology.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' We assume throughout to observe data ( ˜Zt(x))(t,x)∈T×L on a regular lattice L ⊂ Rd for d ≥ 1 across times T = {1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' , N} such that the decomposition ˜Zt(x) = µt(x) + Zt(x) (1) holds and no measurement errors are present in the observations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Here, µt(x) is a deterministic function, and Zt(x) are considered realizations from a zero mean stationary (influenced) mixed moving average field (in brief, MMAF).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' When the spatial dimension d = 2, a category of data that falls in our assumptions is the one of frame images through time (also known as video data or multidimensional raster data).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Early applications of MMAFs in image modeling can be found in [39].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' An MMAF is defined as Zt(x) = � H � At(x) f(A, x − ξ, t − s) Λ(dA, dξ, ds), (t, x) ∈ R × Rd, (2) where f is a deterministic function called kernel, H is denoting a non-empty topological space, Λ a L´evy basis and At(x) is a so-called ambit set [7], we refer the reader to Section 2 for more details on the definition (2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Examples of such models can be found in [7], [11] [47], and [48].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' A significant feature of MMAFs is that they provide a direct way to specify a model for an observed physical phenomenon based on a probabilistic understanding of the latter as exemplified by choice of the L´evy basis and the kernel function appearing in (2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Choosing an opportune distribution Λ allows us to work with Gaussian and non-Gaussian distributed models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Moreover, the random parameter A in the kernel function allows us to model short and long-range temporal and spatial dependence.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' A further highlight of these models is that their autocovariance functions can be exponential or power decaying by choosing an exponential kernel function f, an opportune distribution of the random parameter A, and assuming that the L´evy basis Λ has finite second-order moments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Therefore, such types of autocovariance functions can be obtained without the need for further assumptions on the distribution of the random field.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' MMAFs with such properties are, for example, the spatio-temporal Ornstein-Uhlenbeck process (in brief, STOU) [47] and its mixed version called the MSTOU process [48].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' We also know that the entire class of MMAFs is θ-lex weakly dependent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' This notion of dependence has first been introduced in [22], and it is more general than α∞,v-mixing for random fields as defined in [24] for v ∈ N∪{∞} and α-mixing [14] in the particular case of stochastic processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Although the MMAFs are versatile models, only few results in the literature are to be found con- cerning their predictive distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' To our knowledge, the only explicit result concerns Gaussian STOU processes defined on cone-shaped ambit sets, see [47, Theorem 13].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' We then learn a predictor h ∈ H, for H the class of the Lipschitz functions, by determining a randomized estimator ˆρ (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=', a regular condi- tional probability measure) on H using generalized Bayesian learning, see [33] for a review.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' We call our methodology mixed moving average field guided learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' This procedure is applicable, for example, when 2 using linear models and feed-forward neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' The learning task on which we focus is making a one-step-ahead prediction of the field Z in a given spatial position x∗.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' The methodology starts by computing the θ-lex coefficients of the underlying MMAF.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' If the analyzed field has finite second-order moments, then such coefficients can be obtained following the calculations in Section 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' We use this information to select a set of input features from (Zt(x))(t,x)∈T×L and determine a training set S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' We then prove a PAC Bayesian bound for the sampled data S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' We ultimately deter- mine a randomized estimator by minimization of the PAC Bayesian bound.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' The acronym PAC stands for Probably Approximately Correct and may be traced back to [66].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' A PAC inequality states that with an arbitrarily high probability (hence ”probably”), the performance (as provided by a loss function) of a learning algorithm is upper-bounded by a term decaying to an optimal value as more data is collected (hence ”approximately correct”).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' PAC-Bayesian bounds have proven over the past two decades to suc- cessfully address various learning problems such as classification, sequential or batch learning, and deep learning [28].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Indeed, they are a powerful probabilistic tool to derive theoretical generalization guarantees.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' To the best of our knowledge, the PAC Bayesian bounds determined in Section 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='2 are the first results in the literature obtained for data serially correlated along a spatial and temporal dimension.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' It is important to emphasize that using a randomized estimator over a classical supervised learning methodology has the same advantages as a Bayesian approach, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='e.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=', it allows a deeper understanding of the uncertainty of each possible h ∈ H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Moreover, we can enable the analysis of aggregate or ensemble predictors ˆh = ˆρ[h].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Despite these similarities, generalized Bayesian learning substantially differs from the classical Bayesian learning approach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' In the latter, we specify a prior distribution, a statistical model connecting output-input pairs called likelihood function, and determine the unique posterior distribution by Bayes’ theorem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' When using generalized Bayes to determine a randomized estimator, no assumptions on the likelihood function are required but just a loss function and a so-called reference distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' These ingredients, together with a PAC Bayesian bound, are employed to determine a randomized predictor, which is unique just under a specific set of assumptions, see Section 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='1 Outline and Contributions Between the data-science models used to tackle predictions for spatio-temporal data, we find deep learn- ing, see [4], [54], [61] and [62] for a review, and video frame prediction algorithms as in [45] and [68].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Deep learning techniques are increasingly popular because they successfully extract spatio-temporal features and learn the inner law of an observed spatio-temporal system.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' However, these models lack interpretabil- ity, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=', it is not possible to disentangle the causal relationship between variables in different spatial-time points, and typically no proofs of their generalization (predictive) abilities are available.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' On the other hand, [45] and [68] are methodologies retaining a causal interpretation, see discussion below, but do not have proven generalization (predictive) performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Given a model class H, mixed moving average field guided learning selects a predictor h ∈ H that has the best generalization performance for the analyzed prediction task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Moreover, the MMAF modeling framework has a causal interpretation when using cone-shaped ambit sets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' To explain this point, we borrow the concept of lightcone from special relativity that describes the possible paths that the light can make in space-time leading to a point (t, x) and the ones that lie in its future.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' In the context of our paper, we use their geometry to identify the points in space-time having causal relationships.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Let c > 0, for a point (t, x), and by using the Euclidean norm to assess the distance between different space-time 3 points, we define a lightcone as the set Alight t (x) = � (s, ξ) ∈ R × Rd : ∥x − ξ∥ ≤ c|t − s| � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' (3) The set Alight with respect to the point (t, x) can be split into two disjoint sets, namely, At(x) and At(x)+.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' The set At(x) is called past lightcone, and its definition corresponds to the one of a cone-shaped ambit set At(x) := � (s, ξ) ∈ R × Rd : s ≤ t and ∥x − ξ∥ ≤ c|t − s| � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' (4) The set At(x)+ = {(s, ξ) ∈ R × Rd : s > t and ∥x − ξ∥ ≤ c|t − s|}, (5) is called instead the future lightcone.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' By using an influenced MMAF on a cone-shaped ambit set as the underlying model, we implicitly assume that the following sets l−(t, x) = {Zs(ξ) : (s, ξ) ∈ At(x) \\ (t, x)} and l+(t, x) = {Zs(ξ) : (s, ξ) ∈ At(x)+} (6) are respectively describing the values of the field that have a direct influence on Zt(x) and the future field values influenced by Zt(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' We can then uncover the causal relationships described above by estimating the constant c from observed data, called throughout the speed of information propagation in the physical system under analysis.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' A similar approach to the modeling of causal relationships can be found in [45], [58], and [68].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' In [45], and [58], the sets (6) are considered and employed to discover coherent structures, see [37] for a formal definition, in spatio-temporal physical systems and to perform video frame prediction, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Also, in [68], predictions are performed by embedding spatio-temporal information on a Minkowski space-time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Hence, the concept of lightcones enters into play in the definition of their algorithm.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' In machine learning, we typically have two equivalent approaches towards causality: structural causal models, which rely on the use of directed acyclical graphs (DAG) [49], and Rubin causal models, which rely upon the potential outcomes framework [57].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' The concept of causality employed in this paper can be inscribed into the latter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' In fact, by using MMAFs on cone-shaped ambit sets, the set l+(t, x) describes the possible future outcomes that can be observed starting from the spatial position (t, x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' The paper is structured as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' In Section 2, we introduce the MMAF framework and define STOU and MSTOU processes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' In Section 3, we introduce the notations that allow us to bridge the MMAF framework (that by definition is continuous in time and space) with a data science one (that by definition is discrete in time and space).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' Important theoretical preliminaries and the input-features extraction method can be found in Section 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content=' We then prove PAC Bayesian bounds (also of oracle type) in Section 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19AyT4oBgHgl3EQf1fl-/content/2301.00736v1.pdf'} +page_content='2 for Lipschitz predictors, among which we discuss the shape of the bound for linear models and feed-forward neural networks.' 
We then focus on linear predictors and show in Section 3.3 how to select the best one to be used in a given prediction task. We give in Section 4 an explicit procedure to perform one-step-ahead causal future predictions in such a framework. In conclusion, we apply our theory-guided machine learning methodology to simulated data from an STOU process driven by a Gaussian and a NIG-distributed Lévy basis. Appendix A contains further details on the weak dependence measures employed in the paper and a review of the estimation methodologies for STOU and MSTOU processes. Appendix B contains detailed proofs of the results presented in the paper.

2 Mixed moving average fields

2.1 Notations

Throughout the paper, we denote by N the set of positive integers and by R+ the set of non-negative real numbers. As usual, we write L^p(Ω) for the space of (equivalence classes of) measurable functions f : Ω → R with finite L^p-norm ∥f∥_p. When Ω = R^n, ∥x∥_1 and ∥x∥ denote the L^1-norm and the Euclidean norm, respectively, and we define ∥x∥_∞ = max_{j=1,...,n} |x(j)|, where x(j) represents the j-th component of the vector x. To ease the notation in the following sections, unless it is important to keep track of the time and space components separately, we often indicate the index set R × R^d with R^{1+d}. A ⊂ B denotes a not necessarily proper subset A of a set B, |B| denotes the cardinality of B, and dist(A, B) = inf_{i∈A, j∈B} ∥i − j∥_∞ indicates the distance between two sets A, B ⊂ R^{1+d}. For n, k ≥ 1 and F : R^n → R^k, we define ∥F∥_∞ = sup_{t∈R^n} ∥F(t)∥.
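As a concrete illustration of these conventions (an informal sketch only; the names below are not taken from the paper), the following Python snippet computes ∥x∥_1, ∥x∥, and ∥x∥_∞ for a vector, and dist(A, B) for two finite sets of space-time points in R^{1+d}.

```python
import numpy as np
from itertools import product

def norms(x):
    """Return the triple (||x||_1, ||x||, ||x||_inf) for a vector x in R^n."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x)), np.linalg.norm(x), np.max(np.abs(x))

def set_dist(A, B):
    """dist(A, B) = inf over (i, j) in A x B of ||i - j||_inf, evaluated here
    for finite sets A, B of space-time points in R^{1+d}."""
    return min(np.max(np.abs(np.asarray(i) - np.asarray(j))) for i, j in product(A, B))

# Example with d = 2, i.e. points (time, space_1, space_2) in R^{1+2}.
A = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.5)]
B = [(2.0, 2.0, 2.0)]
print(norms([3.0, -4.0]))  # L1-norm 7, Euclidean norm 5, sup-norm 4
print(set_dist(A, B))      # 1.5, attained by the second point of A
```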
Let Γ = {i_1, ..., i_u} ⊂ R^{1+d} for u ∈ N; we define the random vector Z_Γ = (Z_{i_1}, ..., Z_{i_u}). In general, we use bold notation when referring to random elements. In the following, Lipschitz continuous is understood to mean globally Lipschitz. For u, n ∈ N, G*_u is the class of bounded functions from R^u to R, and G_u is the class of bounded, Lipschitz continuous functions from R^u to R with respect to the distance ∥·∥_1. Moreover, we call L(Ω) the set of all Lipschitz functions h on Ω with respect to the distance ∥·∥_1, and define the Lipschitz constant as

Lip(h) = sup_{x ≠ y} |h(x) − h(y)| / ∥x − y∥_1.    (7)

Hereafter, we often use the lexicographic order on R^{1+d}. Let the subscripts t and s indicate a temporal and a spatial coordinate, respectively. For distinct elements y = (y_{1,t}, y_{1,s}, ..., y_{d,s}) ∈ R^{1+d} and z = (z_{1,t}, z_{1,s}, ..., z_{d,s}) ∈ R^{1+d}, we say y