Estimating a relative risk or risk difference with a binary exposure

Introduction

Let Y be a binary response, A a binary exposure, and V a vector of covariates.

DAG for the statistical model with the dashed edge representing a potential interaction between exposure A and covariates V.

In a common setting, the main interest lies in quantifying the treatment effect, ν, of A on Y while adjusting for the covariates V. A standard approach is to use a Generalized Linear Model (GLM):

$$g\{ E(Y\mid A,V) \} = A\nu^tW + \underset{\mathrm{nuisance}}{\mu^tZ}$$

with link function g, and W = w(V), Z = z(V) known vector functions of V.

The canonical link (logit) leads to nice computational properties (logistic regression) and parameters with an odds-ratio interpretation. However, odds ratios are not collapsible, even under randomization. For example,

E(Y ∣ X) = E[E(Y ∣ X, Z) ∣ X] = E[expit (μ + αX + βZ) ∣ X] ≠ expit [μ + αX + βE(Z ∣ X)],

i.e., marginalizing over Z takes us outside the class of logistic regression models. This non-collapsibility makes odds ratios difficult to interpret and to compare across studies.
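
As a small numerical illustration (hypothetical data, separate from the example that follows), we can compare the conditional and marginal odds ratios in a randomized setting:

set.seed(1)
## Hypothetical data: A randomized, conditional odds ratio of A equal to exp(1)
n <- 1e5
z <- rnorm(n)
a <- rbinom(n, 1, 0.5)                           # randomized exposure
y <- rbinom(n, 1, plogis(-1 + a + 2*z))          # logistic model given (a, z)
exp(coef(glm(y ~ a + z, family=binomial))["a"])  # conditional OR, close to exp(1)
exp(coef(glm(y ~ a, family=binomial))["a"])      # marginal OR, attenuated towards 1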

Relative risks (and risk differences) are collapsible and generally considered easier to interpret than odds ratios. Richardson et al. (JASA, 2017) proposed a regression model for a binary exposure which solves the computational problems and the need for parameter constraints associated with using, for example, binomial regression with a log link (or identity link for the risk difference) to obtain such parameter estimates. In the following we consider the relative risk as the target parameter

$$ \mathrm{RR}(v) = \frac{P(Y=1\mid A=1, V=v)}{P(Y=1\mid A=0, V=v)}. $$

Let pa(V) = P(Y = 1 ∣ A = a, V), a ∈ {0, 1}. The idea is then to posit a linear model for θ(v) = log (RR(v)), i.e., log (RR(v)) = αTv,

and a nuisance model for the odds-product $$ \phi(v) = \log\left(\frac{p_{0}(v)p_{1}(v)}{(1-p_{0}(v))(1-p_{1}(v))}\right) $$

noting that these two parameters are variation independent, as illustrated by the L’Abbé plots below.

  ## Curves of constant odds product in the (p0, p1) plane:
  ## given the odds product 'op' and p0, solve op = odds(p1)*odds(p0) for p1
  p0 <- seq(0,1,length.out=100)
  p1 <- function(p0,op) 1/(1+(op*(1-p0)/p0)^-1)
  plot(0, type="n", xlim=c(0,1), ylim=c(0,1),
       xlab="P(Y=1|A=0)", ylab="P(Y=1|A=1)", main="Constant odds product")
  for (op in exp(seq(-6,6,by=.25))) lines(p0,p1(p0,op), col="lightblue")

  ## Curves of constant relative risk: p1 = RR * p0
  p0 <- seq(0,1,length.out=100)
  p1 <- function(p0,rr) rr*p0
  plot(0, type="n", xlim=c(0,1), ylim=c(0,1),
       xlab="P(Y=1|A=0)", ylab="P(Y=1|A=1)", main="Constant relative risk")
  for (rr in exp(seq(-3,3,by=.25))) lines(p0,p1(p0,rr), col="lightblue")

Similarly, a model can be constructed for the risk-difference on the following scale

θ(v) = arctanh (RD(v)).
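
The analogous L’Abbé plot for the risk difference can be drawn in the same way (a sketch following the pattern above; the grid of θ values on the arctanh scale is an arbitrary choice):

  ## Curves of constant risk difference: p1 = p0 + RD, with RD = tanh(theta)
  p0 <- seq(0,1,length.out=100)
  p1 <- function(p0,rd) ifelse(p0+rd >= 0 & p0+rd <= 1, p0+rd, NA)
  plot(0, type="n", xlim=c(0,1), ylim=c(0,1),
       xlab="P(Y=1|A=0)", ylab="P(Y=1|A=1)", main="Constant risk difference")
  for (theta in seq(-3,3,by=.25)) lines(p0,p1(p0,tanh(theta)), col="lightblue")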

Simulation

First the targeted package is loaded

library(targeted)

This automatically imports lava (CRAN) which we can use to simulate from the Relative-Risk Odds-Product (RR-OP) model.

m <- lvm(a ~ x,
         lp.target ~ 1,
         lp.nuisance ~ x+z)
m <- binomial.rr(m, response="y", exposure="a", target.model="lp.target", nuisance.model="lp.nuisance")

The lvm call first defines the linear predictor for the exposure to be of the form

LPA := μA + αX

and the linear predictors for the /target parameter/ (relative risk) and the /nuisance parameter/ (odds product) to be of the form

LPRR := μRR,

LPOP := μOP + βxX + βzZ.

The covariates are by default assumed to be independent and standard normal X, Z ∼ 𝒩(0, 1), but their distribution can easily be altered using the lava::distribution method.
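
For example, to instead let X be uniformly distributed on (−1, 1), a sketch (assuming lava's uniform.lvm constructor with range arguments a and b) could look like:

## Sketch: a copy of the model where x is uniform(-1, 1) instead of N(0, 1)
## (assumes lava's uniform.lvm with range arguments a and b)
m_unif <- m
distribution(m_unif, ~x) <- uniform.lvm(a=-1, b=1)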

The binomial.rr function

args(binomial.rr)
#> function (x, response, exposure, target.model, nuisance.model, 
#>     exposure.model = binomial.lvm(), ...) 
#> NULL

then defines the link functions, i.e.,

logit (E[A ∣ X, Z]) = μA + αX,

log (E[Y ∣ X, Z, A = 1]/E[Y ∣ X, Z, A = 0]) = μRR,

log {p1(X, Z)p0(X, Z)/[(1 − p1(X, Z))(1 − p0(X, Z))]} = μOP + βxX + βzZ

with pa(X, Z) = E(Y ∣ A = a, X, Z).

The risk-difference model with the RD parameter modeled on the arctanh scale can be defined similarly using the binomial.rd method

args(binomial.rd)
#> function (x, response, exposure, target.model, nuisance.model, 
#>     exposure.model = binomial.lvm(), ...) 
#> NULL

We can inspect the parameter names of the model

coef(m)
#>              m1              m2              m3              p1              p2 
#>             "a"     "lp.target"   "lp.nuisance"           "a~x" "lp.nuisance~x" 
#>              p3              p4 
#> "lp.nuisance~z"          "a~~a"

Here the intercepts of the model are simply given the same name as the variables, such that μA becomes a, and the other regression coefficients are labeled using the “~”-formula notation, e.g., α becomes a~x.

Intercepts are by default set to zero and regression parameters set to one in the simulation. Hence to simulate from the model with (μA, μRR, μOP, α, βx, βz)T = (−1, 1, −1, 2, 1, 1)T, we define the parameter vector p given by

p <- c('a'=-1, 'lp.target'=1, 'lp.nuisance'=-1, 'a~x'=2)

and then simulate from the model using the sim method

d <- sim(m, 1e4, p=p, seed=1)

head(d)
#>   a          x lp.target lp.nuisance          z y
#> 1 0 -0.6264538         1  -2.4307854 -0.8043316 0
#> 2 0  0.1836433         1  -1.8728823 -1.0565257 0
#> 3 0 -0.8356286         1  -2.8710244 -1.0353958 0
#> 4 1  1.5952808         1  -0.5902796 -1.1855604 1
#> 5 0  0.3295078         1  -1.1709317 -0.5004395 1
#> 6 0 -0.8204684         1  -2.3454571 -0.5249887 0

Notice that in this simulated data the target parameter μRR has been set to lp.target = 1.
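
Because the exposure depends on X, a crude (unadjusted) comparison of the outcome frequencies will in general not recover the true relative risk exp(1); a quick check (sketch):

## Crude (unadjusted) relative risk estimate; confounded through x since a ~ x
with(d, mean(y[a==1]) / mean(y[a==0]))
## true conditional relative risk
exp(1)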

Estimation

MLE

We start by fitting the model using the maximum likelihood estimator.

args(riskreg_mle)
#> function (y, a, x1, x2 = x1, weights = rep(1, length(y)), std.err = TRUE, 
#>     type = "rr", start = NULL, control = list(), ...) 
#> NULL

The riskreg_mle function takes vectors/matrices as input arguments with the response y, exposure a, target parameter design matrix x1 (i.e., the matrix W at the start of this text), and the nuisance model design matrix x2 (odds product).

We first consider the case of a correctly specified model. The relative risk is constant (no interactions with the exposure), hence x1 is simply a vector of ones, whereas the design matrix for the log odds-product depends on both X and Z

x1 <- model.matrix(~1, d)
x2 <- model.matrix(~x+z, d)

fit1 <- with(d, riskreg_mle(y, a, x1, x2, type="rr"))
fit1
#>                          Estimate Std.Err    2.5%   97.5%    P-value
#> (Intercept)                0.9512 0.03319  0.8862  1.0163 1.204e-180
#> odds-product:(Intercept)  -1.0610 0.05199 -1.1629 -0.9591  1.377e-92
#> odds-product:x             1.0330 0.05944  0.9165  1.1495  1.230e-67
#> odds-product:z             1.0421 0.05285  0.9386  1.1457  1.523e-86

The parameters are presented in the same order as the columns of x1 and x2, hence the target parameter estimate is in the first row

estimate(fit1, keep=1)
#>             Estimate Std.Err   2.5% 97.5%    P-value
#> (Intercept)   0.9512 0.03336 0.8858 1.017 7.159e-179

DRE

We next fit the model using a double robust estimator (DRE), which introduces a model for the exposure, P(A = 1 ∣ V) (propensity model). The double robustness stems from the fact that this estimator remains consistent in the union model where either the odds-product model or the propensity model is correctly specified. When both models are correctly specified the estimator is also efficient.

with(d, riskreg_fit(y, a, target=x1, nuisance=x2, propensity=x2, type="rr"))
#>             Estimate Std.Err   2.5% 97.5%    P-value
#> (Intercept)   0.9372  0.0339 0.8708 1.004 3.004e-168

The usual /formula/-syntax can be applied using the riskreg function. Here we illustrate the double robustness by using a wrong propensity model but a correct nuisance parameter (odds-product) model:

  riskreg(y~a, nuisance=~x+z, propensity=~z, data=d, type="rr")
#>             Estimate Std.Err   2.5% 97.5%    P-value
#> (Intercept)   0.9511 0.03333 0.8857 1.016 4.547e-179

Or vice versa, a correct propensity model but a wrong nuisance model:

  riskreg(y~a, nuisance=~z, propensity=~x+z, data=d, type="rr")
#>             Estimate Std.Err   2.5% 97.5%    P-value
#> (Intercept)   0.9404 0.03727 0.8673 1.013 1.736e-140

whereas the MLE in this case yields a biased estimate of the relative risk:

  fit2 <- with(d, riskreg_mle(y, a, x1=model.matrix(~1,d), x2=model.matrix(~z, d)))
  estimate(fit2, keep=1)
#>             Estimate Std.Err  2.5% 97.5% P-value
#> (Intercept)    1.243 0.02778 1.189 1.298       0

Interactions

The more general model where log RR(V) = αTV for a subset V of the covariates can be estimated using the target argument:

fit <- riskreg(y~a, target=~x, nuisance=~x+z, data=d)
fit
#>             Estimate Std.Err     2.5%   97.5%    P-value
#> (Intercept)  0.95267 0.03365  0.88673 1.01862 2.361e-176
#> x           -0.01078 0.03804 -0.08534 0.06378  7.769e-01

As expected, we do not see any evidence of an effect of X on the relative risk; the 95% confidence limits clearly overlap zero.

Note that when the propensity argument is omitted, as above, the same design matrix is used for both the odds-product model and the propensity model.
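
The same fit should therefore be obtained by stating the propensity model explicitly (a sketch using the same covariates as the nuisance model):

## Equivalent call with the propensity model given explicitly
riskreg(y~a, target=~x, nuisance=~x+z, propensity=~x+z, data=d)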

Risk-difference

The syntax for fitting the risk-difference model is similar. To illustrate this we simulate some new data from this model

m2 <- binomial.rd(m, response="y", exposure="a", target.model="lp.target", nuisance.model="lp.nuisance")
d2 <- sim(m2, 1e4, p=p)

And we can then fit the DRE with the syntax

riskreg(y~a, nuisance=~x+z, data=d2, type="rd")
#>             Estimate Std.Err   2.5% 97.5% P-value
#> (Intercept)    1.015 0.02069 0.9749 1.056       0
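
The corresponding MLE can be obtained with riskreg_mle as in the relative-risk example, assuming its type argument accepts "rd" analogously to riskreg (a sketch):

## MLE for the risk-difference model (assumes type="rd" is supported by riskreg_mle)
fit_rd <- with(d2, riskreg_mle(y, a, x1=model.matrix(~1, d2),
                               x2=model.matrix(~x+z, d2), type="rd"))
estimate(fit_rd, keep=1)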

Influence-function

The DRE is a regular and asymptotically linear (RAL) estimator, hence $$\sqrt{n}(\widehat{\alpha}_{\mathrm{DRE}} - \alpha) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n} \phi_{\mathrm{eff}}(Z_{i}) + o_{p}(1)$$ where Zi = (Yi, Ai, Vi), i = 1, …, n, are the i.i.d. observations and ϕeff is the influence function.

The influence function can be extracted using the IC method

head(IC(fit))
#>   (Intercept)          x
#> 1   0.6226459 -0.4585424
#> 2   1.2319960  0.7925974
#> 3   0.3941325 -0.5798067
#> 4  -0.8854890  2.9621436
#> 5  -6.9949142 -5.2133643
#> 6   0.5853933 -0.7571452
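
As a sanity check, the standard errors reported above can be reconstructed from the empirical second moment of the influence function; the sketch below assumes the lava convention that IC returns the influence function scaled such that the estimator's variance is approximated by crossprod(IC(fit))/n^2:

## Standard errors recovered from the influence function
## (assumes vcov is approximated by crossprod(IC(fit))/n^2)
ic <- IC(fit)
sqrt(diag(crossprod(ic))) / nrow(ic)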

SessionInfo

sessionInfo()
#> R version 4.4.1 (2024-06-14)
#> Platform: x86_64-pc-linux-gnu
#> Running under: Ubuntu 24.04.1 LTS
#> 
#> Matrix products: default
#> BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
#> LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so;  LAPACK version 3.12.0
#> 
#> locale:
#>  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
#>  [3] LC_TIME=en_US.UTF-8        LC_COLLATE=C              
#>  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
#>  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
#>  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
#> 
#> time zone: Etc/UTC
#> tzcode source: system (glibc)
#> 
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#> [1] targeted_0.6 lava_1.8.0   knitr_1.48  
#> 
#> loaded via a namespace (and not attached):
#>  [1] Matrix_1.7-0           future.apply_1.11.2    jsonlite_1.8.8        
#>  [4] highr_0.11             futile.logger_1.4.3    compiler_4.4.1        
#>  [7] Rcpp_1.0.13            parallel_4.4.1         jquerylib_0.1.4       
#> [10] globals_0.16.3         splines_4.4.1          yaml_2.3.10           
#> [13] fastmap_1.2.0          lattice_0.22-6         R6_2.5.1              
#> [16] future_1.34.0          nloptr_2.1.1           maketools_1.3.0       
#> [19] bslib_0.8.0            rlang_1.1.4            cachem_1.1.0          
#> [22] xfun_0.47              sass_0.4.9             sys_3.4.2             
#> [25] viridisLite_0.4.2      cli_3.6.3              formatR_1.14          
#> [28] futile.options_1.0.1   digest_0.6.37          grid_4.4.1            
#> [31] mvtnorm_1.3-1          lifecycle_1.0.4        RcppArmadillo_14.0.0-1
#> [34] timereg_2.0.6          scatterplot3d_0.3-44   pracma_2.4.4          
#> [37] evaluate_0.24.0        data.table_1.16.0      lambda.r_1.2.4        
#> [40] numDeriv_2016.8-1.1    listenv_0.9.1          codetools_0.2-20      
#> [43] buildtools_1.0.0       survival_3.7-0         optimx_2023-10.21     
#> [46] parallelly_1.38.0      rmarkdown_2.28         mets_1.3.5            
#> [49] tools_4.4.1            htmltools_0.5.8.1