Commit 87998fa0 authored by Kevin's avatar Kevin

merge

Merge branch 'master' of git.center-tbi.eu:kunzmann/gose-6mo-imputation

# Conflicts:
#	.gitignore
#	manuscript/manuscript.Rmd
#	manuscript/references.bib
parents 8e8409f1 943d2c27
data
output
output*
.snakemake
.Rproj.user
.cache
*.sif
*.zip
*.pdf
*.Rproj
*.Rhistory
*.out
*.docx
fetch
For information on how to get data access, see https://www.center-tbi.eu/data.
### Software dependencies
The workflow assumes a Linux command line.
To facilitate reproducibility, a
[singularity image](https://zenodo.org/record/2600384) with all software
dependencies (R packages etc.) is provided ![DOI:10.5281/zenodo.2600384](https://zenodo.org/badge/DOI/10.5281/zenodo.2600384.svg).
The workflow itself is automated using
[snakemake](https://snakemake.readthedocs.io/en/stable/index.html).
To fully leverage the container and snakemake workflow, the following software
dependencies must be available:
* [singularity](https://www.sylabs.io/guides/2.6/user-guide/index.html) 2.6.0+
* [wget](https://www.gnu.org/software/wget/) [optional], only for automatic
download of container image file
* [snakemake](https://snakemake.readthedocs.io/en/stable/getting_started/installation.html) 5.2.1+
[optional], only required for cluster execution; requires [python](https://www.python.org/download/releases/3.5.1/) 3.5.1+
## How-To ...
The singularity container image can be downloaded manually using its
digital object identifier.
Note that in this case the downloaded version of the container must match the
version specified in the URL given in the wget command of
`scripts/download_container.sh`.
It is strongly recommended to download the container via
```bash
./scripts/download_container.sh
```
to ensure the correct version.
The downloaded container image file's md5 sum is checked automatically and
an error is thrown in case of a mismatch.
Note that the image cannot be stored in this repository due to file-size
limitations.
Furthermore, the data-download script requires the neurobot user name and the
personal API key (see above) to be stored in the environment variables
`NEUROBOT_USR` and `NEUROBOT_API`, respectively, i.e.
```bash
export NEUROBOT_USR=[my-neurobot-username]
export NEUROBOT_API=[my-neurobot-api-key]
```
### ... execute workflow on a single machine
The workflow can be executed on a powerful desktop machine, although cluster
execution is recommended for speed (cf. below).
Given that singularity is installed and the container.sif file is present in
the repository root, simply invoke
```bash
singularity exec container.sif snakemake create_manuscript_v1_1
singularity exec container.sif snakemake impute_population_wide_msm_v1_1
```
All output is written to `output/`.
The first command creates all files necessary to compile the cross-validated
analysis of imputation performance (`output/v1.1/manuscript.docx`);
the second one only computes the final imputations for the entire study
population (`output/v1.1/data/imputation/msm`).
Depending on the number of cores and available RAM,
the cross-validated model comparison may take several days (3+) to complete.
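Assuming the standard snakemake command line interface, the degree of
parallelism can also be capped explicitly (the core count below is
illustrative):

```bash
singularity exec container.sif snakemake --cores 8 create_manuscript_v1_1
```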
### ... execute workflow on a cluster
Cluster execution requires a cluster-specific snakemake
[configuration](https://github.com/kkmann/center-6mo-gose-imputation/blob/master/cluster.json).
The `singularity_on_slurm_cluster` script assumes a slurm cluster.
It is recommended to execute only the actual model fitting on the cluster and
to do all preprocessing on the login node to avoid unnecessary queueing time:
```bash
./snakemake download_data_v1_1
singularity exec container.sif snakemake generate_folds_v1_1
```
Then, simply modify the `cluster.json` accordingly, make sure that
snakemake is installed, and execute
```bash
./snakemake_on_slurm_cluster create_manuscript_v1_1
./snakemake_on_slurm_cluster impute_population_wide_msm_v1_1
```
singularity: "docker://kkmann/gose-6mo-imputation@sha256:85724229d8f4243aaebd6228e5cc7833474577ac107f9719b00016765f9ee342"
singularity: "container.sif"
configfile: "config.yml"
"__default__" :
{
"account" : "MRC-BSU-SL2-CPU",
"time" : "06:00:00",
"time" : "03:00:00",
"n" : 1,
"partition" : "bsu-cpu"
}
#!/bin/bash
USERNAME="kkmann"
IMAGE="gose-6mo-imputation"
BUILDNAME=$USERNAME/$IMAGE
docker build --no-cache -t $BUILDNAME .
FROM rocker/verse:latest
MAINTAINER Kevin Kunzmann kevin.kunzmann@mrc-bsu.cam.ac.uk
# update apt
RUN sudo apt-get update
# install prerequisites
RUN sudo apt-get -y install libcurl4-openssl-dev curl bzip2
# install required R packages
RUN R -e "install.packages('diagram')"
RUN R -e "install.packages('rstan')"
RUN R -e "install.packages('brms')"
RUN R -e "install.packages('mice')"
RUN R -e "install.packages('ggalluvial')"
RUN R -e "install.packages('caret')"
RUN R -e "install.packages('e1071')"
RUN R -e "install.packages('msm')"
RUN R -e "install.packages('cowplot')"
RUN R -e "install.packages('pander')"
RUN R -e "devtools::install_github('kkmann/describr')"
---
title: "Model-based longitudinal imputation of cross-sectional GOSe in CENTER-TBI"
title: "Model-based longitudinal imputation of cross-sectional outcomes in traumatic brain injury"
output:
word_document: default
pdf_document: default
Assessments of global functional outcome such as the Glasgow Outcome Scale (GOS)
and the Glasgow Outcome Scale extended (GOSe) capture
meaningful differences across the full spectrum of recovery,
and are popular as endpoints in traumatic brain injury (TBI) studies
[@horton2018randomized].
However, missing outcome data is a common problem in TBI research,
and for longitudinal studies completion rates at six months can be
lower than 70% [@richter2019handling].
This is important since it is well known that complete-case analyses may
introduce bias or reduce power [@white2010bias].
Imputation of patient outcomes is gradually gaining acceptance in the TBI
field as a method of dealing with missing data.
Recent longitudinal studies have successfully employed techniques for both
single [@clifton2011; @silver2006; @skolnick2014] and
multiple imputation [@bulger2010; @kirkness2006; @wright2014; @robertson2014].
Last observation carried forward (LOCF) is a widely applied single-imputation
method for dealing with missing data in TBI research.
Typically, the 3-month outcome is substituted for missing 6-month data
[REFERENCE].
Although LOCF is easy to understand and implement, the technique is clearly
lacking in several respects.
Firstly, it is biased in that it neglects potential trends in the GOS(e) trajectories.
Secondly, a naive application of LOCF is also inefficient since it neglects
data observed shortly after the target time window.
This also implies that it is impossible to include additional covariates
to further reduce bias introduced by the imputation method and that LOCF
cannot be used to obtain multiply imputed data sets by design.
In this manuscript, three model-based imputation strategies for GOSe at
6 months (= 180 days) post-injury in the longitudinal CENTER-TBI study
[@center2015collaborative] are compared with LOCF.
While we acknowledge the principled superiority of multiple imputation over
single imputation in propagating imputation uncertainty,
we focus on single-imputation performance for primarily practical reasons.
CENTER-TBI is committed to providing a curated database to facilitate
subsequent analyses.
Since the primary endpoint in CENTER-TBI is functional outcome at 6 months,
a single default imputed value for as many study participants as possible is
desirable.
To this end, a single imputed value is to be included in the published database
together with a measure of uncertainty about the imputed value.
Subsequent analyses could then either use the imputed values or fit a custom
imputation model to adjust for different covariates.
Multiply imputed values can be sampled from the provided class probabilities or
the custom-fitted model, respectively.
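As an illustration, multiple imputations can be drawn from such stored class
probabilities along the following lines (a minimal sketch; the layout of the
probability matrix is an assumption for illustration, not the database
format):

```{r sample-from-class-probabilities, eval=FALSE}
# Sketch: draw m imputed GOSe values per individual from fitted class
# probabilities; `probs` is assumed to be a matrix with one row per
# individual and one column per GOSe category (1 to 8).
draw_imputations <- function(probs, m = 10) {
  t(apply(probs, 1, function(p) sample(1:8, size = m, replace = TRUE, prob = p)))
}
```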
We compare three different model-based approaches -
a mixed-effects model, a Gaussian process regression, and a multi-state model -
for imputing cross-sectional GOSe at 6 months exploiting the longitudinal
GOSe measurements.
Each model is fit in a version with and without baseline covariates.
# Study population
```{r read-population-data, include=FALSE}
df_baseline <- read_rds(paste0(params$data_dir, "/df_baseline.rds"))
n_pat <- nrow(df_baseline)
```
The CENTER-TBI project methods and design are described in detail
elsewhere [@center2015collaborative].
Participants with TBI were recruited into three strata:
(a) patients attending the emergency room,
(b) patients admitted to hospital but not intensive care,
and (c) patients admitted to intensive care.
Follow-up of participants was scheduled per protocol at 2 weeks, 3 months,
and 6 months in group (a) and at 3 months, 6 months, and 12 months in groups
(b) and (c).
Outcome assessments at all timepoints included the GOSe
[@jennett1981disability; @mcmillan2016glasgow].
The GOSe is an eight-point scale with the following categories:
(1) dead, (2) vegetative state, (3) lower severe disability,
(4) upper severe disability, (5) lower moderate disability,
(6) upper moderate disability, (7) lower good recovery,
(8) upper good recovery.
The study population for this empirical methods comparison comprises all
individuals from the CENTER-TBI database (total of n = 4509) whose GOSe status
was queried at least once within the first 18 months and who were still alive
180 days post-injury (n = `r n_pat`).
The rationale for conducting the comparison conditional on 6-months survival
is simply that the GOSe can only be missing at 6 months if the individual
is still alive, since the GOSe would be 1 (dead) otherwise.
**TODO:**
* ->LW: I explicitly asked for additional covariates that are needed from the DB,
happy to include them if you pass me the Neurobot codes!
* yes, the entire document is automatically generated to ensure reproducibility and
up-to-date data; once we agree on the manuscript we can pimp the table formatting and
add cross-references to the respective figures (that is only supported for pdf output which you could not edit directly)
```{r baseline-table-continuous, echo=FALSE, results='asis'}
summarizer <- function(x) {
```
```{r read-gose-data, include=FALSE}
df_gose <- read_rds(paste0(params$data_dir, "/df_gose.rds"))
n_gose_180 <- df_gose %>%
filter(between(Outcomes.DerivedCompositeGOSEDaysPostInjury, 180 - 14, 180 + 14)) %>%
group_by(gupi) %>%
n_groups()
n_gose_pp <- df_gose %>%
filter(between(Outcomes.DerivedCompositeGOSEDaysPostInjury, 5*30, 8*30)) %>%
group_by(gupi) %>%
n_groups()
```
Only GOSe observations between injury and 18 months post-injury are used,
since extremely late follow-ups provide little additional information.
This leads to a total of `r nrow(df_gose)` GOSe observations of the study
population being available for the analyses in this manuscript.
GOSe observations at 180 +/- 14 days post-injury are available for only
`r n_gose_180` (`r round(n_gose_180 / n_pat * 100, 1)`%) individuals;
for `r n_gose_pp` (`r round(n_gose_pp / n_pat * 100, 1)`%) individuals,
GOSe observations within the per-protocol window of 5-8 months post-injury
exist.
The distribution of GOSe sampling times and both absolute and
relative frequencies of the respective GOSe categories are shown in
Figure (???).
True observation times were mapped to categories by rounding to the
closest time point, i.e.,
the 6 months category contains observations up to 9 months post-injury.
Thus the figures include a small proportion of GOSe 1, representing patients
who died between 6 and 9 months post-injury.
```{r gose-plots, echo=FALSE, fig.cap="GOSe sampling time distribution and distribution at per-protocol time points.", fig.height=5, fig.width=7}
p_gose_times <- df_gose %>%
ggplot(aes(Outcomes.DerivedCompositeGOSEDaysPostInjury / 30)) +
geom_histogram(binwidth = 1) +
```
# Imputation methods
We compare last observation carried forward (LOCF) to a mixed-effects model (MM),
a Gaussian process regression (GP), and a multi-state model (MSM).
For all model-based approaches we additionally explore variants including the key
IMPACT [@steyerberg2008predicting] predictors as covariates.
## Last-observation-carried-forward
Since LOCF is widely used to impute missing outcomes in TBI studies,
it serves as the baseline method.
Here, LOCF is defined as the last GOSe observation before the
imputation time point of 180 days post-injury.
LOCF is not model-based and, by definition, only permits the imputation of a
GOSe value for subjects with at least one GOSe observation within the
first 180 days post-injury.
We account for this lack of complete coverage under LOCF by performing all
performance comparisons that include LOCF only on the subset of individuals
for which a LOCF-imputed value can be obtained (cf. Section ???).
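For illustration, the LOCF rule defined above amounts to the following
(a minimal sketch with assumed vector inputs, not the pipeline's actual
implementation):

```{r locf-sketch, eval=FALSE}
# Sketch: last GOSe observed at or before the imputation time point of
# 180 days post-injury; NA when no such observation exists.
locf_impute <- function(days, gose) {
  idx <- which(days <= 180)
  if (length(idx) == 0) return(NA_integer_)
  gose[idx[which.max(days[idx])]]
}
```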
## Mixed-effects model
Mixed-effects models are a widely used approach in longitudinal
data analysis and model individual deviations from a population mean trajectory
[@verbeke2009linear].
To account for the fact that the GOSe outcome is an ordered factor,
we employ a cumulative link function model with flexible intercepts [@Agresti2003].
The population mean is modeled as a spline function with knots at ???
to allow a non-linear population mean trajectory.
Patient-individual deviations from this population mean are modeled
as quadratic polynomials to allow sufficient flexibility (random effects).
Baseline covariates are added as linear fixed effects to the
population mean.
The model was fitted using Bayesian statistics via the brms
package [@brms2017; @brms2018] for the R environment for statistical
computing [@R2016] and the Stan modelling language [@stan2017].
Non-informative priors were used for the model parameters.
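A model of this structure can be specified in brms along the following lines
(a minimal sketch; the formula, variable names, and omitted sampler settings
are illustrative rather than the exact specification used in the analysis
scripts):

```{r mm-sketch, eval=FALSE}
library(brms)
# Cumulative link model (flexible intercepts) with a population-mean
# spline over time and patient-specific quadratic deviations.
fit_mm <- brm(
  GOSE ~ s(t) + (t + I(t^2) | gupi),
  data   = df_gose,
  family = cumulative(link = "logit")
)
```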
A potential drawback of the proposed longitudinal mixed-effects model
is the fact that the individual deviations from the population mean
are modeled globally using polynomials.
Since linear and quadratic terms are not identifiable for patients with
only one observed GOSe value, this implies large uncertainty about the
patient-specific effects for individuals with only one or two observations,
which fall back on the non-informative priors on these model parameters.
Thus, the overall uncertainty associated with model-based imputations
at exactly 180 days may become relatively large.
## Gaussian process model
Gaussian process regression allows flexible modelling of
both the individual GOSe trajectories as well as the population mean
in a Bayesian non-parametric way [@rasmussen2006].
This non-parametric paradigm leads to low model uncertainty in the
vicinity of actually observed GOSe outcomes.
To account for the discreteness of the GOSe outcome,
the continuous output of the Gaussian process model is rounded to the
nearest integer in 1 to 8 (GOSe categories).
The squared exponential covariance function with shared
length scale for all individuals is used to model intra-individual
dependency of GOSe outcomes.
The population mean trajectory of the Gaussian process is modeled as the
mean of an independent Gaussian process with pseudo observations at
`r config$gp$t_knots %>% as.numeric()` days post-injury
(also using a squared exponential covariance function).
This approach maintains flexibility of the population mean function
similar to the spline-based approach for the mixed effects model
while avoiding the computational complexity of a fully
Due to the sparseness of the data, the estimated length scale will naturally
tend towards extremely large values implying unrealistically
long-range dependency between observations.
We therefore limit the length scale to a maximum of 120 days (4 months) and
impose a Gaussian prior $\mathcal{N}(60, 14)$, i.e., with a mean of 60 days
and a standard deviation of 14 days.
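For concreteness, the squared exponential covariance between two vectors of
observation times, and the mapping of the continuous latent output back to
GOSe categories, could look as follows (a sketch with illustrative
hyperparameter defaults, not the fitted values):

```{r gp-kernel-sketch, eval=FALSE}
# Squared exponential covariance matrix for observation times t1, t2 (days).
sq_exp_cov <- function(t1, t2, length_scale = 60, sigma = 1) {
  sigma^2 * exp(-outer(t1, t2, function(a, b) (a - b)^2) / (2 * length_scale^2))
}
# Round the continuous GP output to the nearest GOSe category in 1..8.
gose_from_latent <- function(f) pmin(pmax(round(f), 1), 8)
```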
## Multi-state model
piecewise constant transition intensities were fitted to the intervals
[0, 90), [90, 270), and 270+ days post-injury.
The model was fit using the msm package [@msm2011] for the R environment
for statistical computing [@R2016].
Due to the relatively large number of 19 transition intensities in the proposed model
(cf. arrows in Figure ???, structure of transition graph),
inclusion of all baseline covariates turned out to be numerically unstable.
For the MSM model, instead of including all covariates,
only a model adjusting for age at injury via a proportional hazards approach was fit.
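In terms of the msm package, the fit described above corresponds roughly to
the following call (a sketch; `Q_init` and the variable names are
illustrative, and the actual transition structure follows the model's
transition graph):

```{r msm-sketch, eval=FALSE}
library(msm)
fit_msm <- msm(
  GOSE ~ days_post_injury, subject = gupi, data = df_gose,
  qmatrix    = Q_init,      # assumed initial transition intensity matrix
  pci        = c(90, 270),  # piecewise constant intensities on [0,90), [90,270), 270+
  covariates = ~ age        # proportional hazards adjustment for age at injury
)
```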
## Performance assessment
```{r misc, include=FALSE}
modelnames <- lapply(
list.dirs(
sprintf("%s/validation/posteriors", params$data_dir),
)
```
Model performance was assessed via three-fold cross validation on the subset
of individuals with a valid GOSe value within 180 +/- 14 days post-injury
(n = `r nrow(df_ground_truth)`).
All models were fit on the entire available data after removing the
180 +/- 14 days post-injury observations of the respective test fold,
thus mimicking a missing-completely-at-random missingness mechanism.
The distribution of GOSe values in the respective three test sets is well
balanced, cf. Figure ??? (Appendix).
Performance is assessed using the absolute-count and the normalized
(proportions) confusion matrices as well as
bias, mean absolute error (MAE), and root mean squared error (RMSE).
Bias indicates whether, on average, predicted values are systematically
lower (negative) or higher (positive) than observed values.
MAE and RMSE are both measures of average precision, where
RMSE puts more weight on large deviations than MAE does.
Comparisons in terms of bias, MAE, and RMSE tacitly assume that
GOSe values can be sensibly interpreted on an interval scale.
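In formulas, denoting the imputed value by $\hat{y}_i$ and the observed value
by $y_i$ for the $i$-th of $n$ test cases, these measures are

$$
\text{Bias} = \frac{1}{n}\sum_{i=1}^n (\hat{y}_i - y_i), \qquad
\text{MAE} = \frac{1}{n}\sum_{i=1}^n |\hat{y}_i - y_i|, \qquad
\text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^n (\hat{y}_i - y_i)^2}.
$$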
We therefore also consider $Pr[imp. > true] - Pr[imp. < true]$ as an
alternative measure of bias which does not require this tacit assumption.
Note that its scale is not directly comparable to that of the
other three quantities!
All measures are considered both conditional on the ground truth
(the left-out observed GOSe) as well as averaged over the entire test set.
LOCF, by design, cannot provide imputed values when there are no
observations before 180 days post-injury.
Since there is reason to assume that the 6-months GOSe of these individuals
is harder to predict, a valid comparison of LOCF with the other methods must
be based on the set of individuals for whom an LOCF imputation is possible.
```{r non-locf-ids, include=FALSE}
idx <- df_predictions %>%
filter(model == "LOCF", !complete.cases(.)) %>%
```
Overall, `r length(idx)` out of
`r df_predictions %>% filter(model == "LOCF") %>% nrow` test cases
(`r round(100 * length(idx) / (df_predictions %>% filter(model == "LOCF") %>% nrow), 1)`%) cannot be imputed with the LOCF approach.
In the entire study population, `r df_gose %>% group_by(gupi) %>% summarize(LOCF = any(Outcomes.DerivedCompositeGOSEDaysPostInjury <= 180)) %>% ungroup %>% summarize(n_LOCF = sum(!LOCF)) %>% .[["n_LOCF"]]`
individuals do not permit an LOCF imputation (`r round(100 * (df_gose %>% group_by(gupi) %>% summarize(LOCF = any(Outcomes.DerivedCompositeGOSEDaysPostInjury <= 180)) %>% ungroup %>% summarize(n_LOCF = sum(!LOCF)) %>% .[["n_LOCF"]]) / (df_gose$gupi %>% unique %>% length), 1)`%).
The overall performance of all fitted models in terms of bias, MAE, and RMSE
is depicted in Figure ??? both conditional on LOCF being applicable and,
excluding LOCF, on the entire test set.
```{r overall-comparison-all-methods, echo=FALSE, fig.height=9, fig.width=6}
plot_summary_measures <- function(df, label) {
df %>%
group_by(model, fold) %>%
summarize(
RMSE = mean((GOSE - prediction)^2, na.rm = TRUE) %>% sqrt,
MAE = mean(abs(GOSE - prediction), na.rm = TRUE),
Bias = mean(prediction, na.rm = TRUE) - mean(GOSE, na.rm = TRUE),
`Pr[imp. > true] - Pr[imp. < true]` = mean(prediction > GOSE, na.rm = TRUE) - mean(prediction < GOSE, na.rm = TRUE)
) %>%
ungroup %>%
gather(error, value, -model, -fold) %>%
mutate(
error = factor(error, c(
"Bias",
"Pr[imp. > true] - Pr[imp. < true]",
"MAE",
"RMSE"
))
) %>%
group_by(model, error) %>%
summarize(
mean_error = mean(value),
se_error = sd(value) / sqrt(n())
) %>%
ggplot(aes(x = model, y = mean_error)) +
geom_hline(yintercept = 0, color = "black") +
geom_point(size = .8) +
geom_errorbar(
aes(ymin = mean_error - 1.96*se_error, ymax = mean_error + 1.96*se_error),
width = .25
) +
facet_wrap(~error, nrow = 2) +
scale_y_continuous(name = "", breaks = seq(-2, 8, .25), limits = c(-.5, 1.25)) +
scale_x_discrete("") +
theme_bw() +
theme(
panel.grid.minor = element_blank(),
panel.grid.major.x = element_blank(),
axis.text.x = element_text(angle = 66, hjust = 1)
) +
ggtitle(label)
}
cowplot::plot_grid(
plot_summary_measures(
df_predictions %>% filter(!(gupi %in% idx)),
"Summary measures, LOCF subset"
),
plot_summary_measures(
df_predictions,
"Summary measures, full test set"
),
ncol = 1, align = "v", axis = "lr"
)
```
Firstly, LOCF is overall negatively biased, i.e.,
on average it imputes lower-than-observed GOSe values.
This reflects a population average trend towards continued
recovery within the first 6 months post-injury.
The fact that both ways of quantifying bias qualitatively agree
indicates that the interval-scale interpretation of GOSe tacitly
underlying the bias, MAE, and RMSE comparisons is not too restrictive.
In terms of accuracy, LOCF does perform worst, but differences between
methods are less pronounced than in terms of bias.
Notably, the RMSE difference between LOCF and the other methods is slightly
larger than the MAE difference, which indicates that LOCF tends to produce more
large deviations, i.e., across several GOSe categories.
Secondly, including baseline covariates notably affects only the GP regression
model.
Both the MM and MSM models perform more or less the same irrespective of
adjustment for baseline covariates.
This indicates that the additional predictive value of baseline
covariates over the information contained in at least one observed GOSe value
is limited.
Furthermore, note that both variants of the mixed effects model fail to correct
the overall bias of the imputed values.
We proceed with a detailed analysis of a subset of models, both in direct
comparison with LOCF and in the entire data set including those cases where
LOCF is not applicable.
In the following we only consider the baseline-adjusted Gaussian process model
and the multi-state model without baseline covariates ('MSM').
The rationale behind dropping baseline adjustment for MM and MSM is that
the additional complexity does not substantially alter overall performance.
On the other hand, the GP model benefits from the inclusion of the IMPACT
baseline covariates.
## Detailed comparison on the LOCF subset
We first consider results for the set of test cases which allow LOCF imputation
(n = `r df_predictions %>% filter(model == "LOCF") %>% nrow - length(idx)`).
Both the raw count as well as the relative (by left-out observed GOSe) confusion matrices
are presented in Figure ???.
```{r confusion-matrix-locf, warning=FALSE, message=FALSE, echo=FALSE, fig.cap="Confusion matrices on LOCF subset.", fig.height=9, fig.width=6}
plot_confusion_matrices <- function(df_predictions, models) {
df_average_confusion_matrices <- df_predictions %>%
filter(model %in% models) %>%
group_by(fold, model) %>%
do(
confusion_matrix = caret::confusionMatrix(
data = factor(.$prediction, levels = 1:8),
reference = factor(.$GOSE, levels = 1:8)
) %>%
as.matrix %>% as_tibble %>%
mutate(`Predicted GOSE` = row_number() %>% as.character) %>%
gather(`Observed GOSE`, n, 1:8)
) %>%
unnest %>%
group_by(model, `Predicted GOSE`, `Observed GOSE`) %>%
summarize(n = mean(n)) %>%
ungroup %>%
mutate(model = factor(model, models))
p_cnf_mtrx_raw <- df_average_confusion_matrices %>%
ggplot(aes(`Observed GOSE`, `Predicted GOSE`, fill = n)) +
geom_raster() +
geom_text(aes(
label = sprintf("%.1f", n) %>%
ifelse(. == "0.0", "", .)
),
size = 2
) +
geom_hline(yintercept = c(2, 4, 6) + .5, color = "black") +
geom_vline(xintercept = c(2, 4, 6) + .5, color = "black") +
scale_fill_gradient(low = "white", high = "#555555") +
coord_fixed(expand = FALSE) +
labs(x = "observed GOSe", y = "imputed GOSe", fill = "") +
theme_bw() +
theme(
panel.grid = element_blank()
) +
facet_wrap(~model, nrow = 2) +
ggtitle("Average confusion matrix accross folds (absolute counts)")
p_cnf_mtrx_colnrm <- df_average_confusion_matrices %>%
group_by(model, `Observed GOSE`) %>%
mutate(
`fraction (column)` = n / sum(n),
`fraction (column)` = ifelse(is.nan(`fraction (column)`), 0, `fraction (column)`)
) %>%
ggplot(aes(`Observed GOSE`, `Predicted GOSE`, fill = `fraction (column)`)) +
geom_raster() +
geom_hline(yintercept = c(2, 4, 6) + .5, color = "black") +
geom_vline(xintercept = c(2, 4, 6) + .5, color = "black") +
scale_fill_gradient("", low = "white", high = "black", limits = c(0, 1)) +
coord_fixed(expand = FALSE) +
labs(x = "observed GOSe", y = "imputed GOSe", fill = "") +
theme_bw() +
theme(
panel.grid = element_blank()
) +
facet_wrap(~model, nrow = 2) +
ggtitle("Average confusion matrix accross folds (column fraction)")
cowplot::plot_grid(p_cnf_mtrx_raw, p_cnf_mtrx_colnrm, ncol = 1, align = "v")
}
plot_confusion_matrices(
df_predictions %>% filter(!(gupi %in% idx)),
c("MSM", "GP + cov", "MM", "LOCF")
)
ggsave(filename = "confusion_matrices_locf.pdf", width = 7, height = 7)
ggsave(filename = "confusion_matrices_locf.png", width = 7, height = 7)
ggsave(filename = "confusion_matrices_locf.pdf", width = 6, height = 9)
ggsave(filename = "confusion_matrices_locf.png", width = 6, height = 9)
```
The absolute-count confusion matrices show that most imputed values are
within +/- one GOSe category of the observed ones.
However, they also reflect the category imbalance (cf. Figures ??? and ???,
Appendix) in the study population.
The performance conditional on the (in practice unknown) observed GOSe value
clearly shows that imputation performance for the most infrequent category 4
is most problematic.
This is, however, true across the range of methods considered.
Both the MSM and the MM models account for this by almost never imputing a
GOSe of 4.
Instead, the respective cases tend to be imputed to GOSe 3 or 5.
Additionally, the following table ??? lists some confusion probabilities
of particular interest.
[Comment: 1 -> > 1 is not relevant since our imputation is conditional
on not being 1 at 6 months; the comparison seems to favor LOCF since only
**TODO:**
* this section table is the one we David requested in our last meeting,
not entirely convinced though ...
* ... 1 -> > 1 is not relevant since our imputation is conditional
on not being 1 at 6 months
* ... the comparison seems to favor LOCF since only
upward confusions are considered (which LOCF by design tends to do less)]
* Is there a clinical interpretation along the way that '4' might constitue
a short-term transition state or is it just defined in a way that makes it
highly unlikely to be observed in practice?
```{r crossing-table, echo=FALSE, warning=FALSE, results='asis'}
models <- c("MSM", "GP + cov", "MM")
df_average_confusion_matrices <- df_predictions %>%
filter(model %in% models) %>%
filter(!(gupi %in% idx)) %>%
group_by(fold, model) %>%
do(
confusion_matrix = caret::confusionMatrix(
data = factor(.$prediction, levels = 1:8),
reference = factor(.$GOSE, levels = 1:8)
) %>%
as.matrix %>% as_tibble %>%
mutate(`Predicted GOSE` = row_number() %>% as.character) %>%
gather(`Observed GOSE`, n, 1:8)
) %>%
unnest %>%
group_by(model, `Predicted GOSE`, `Observed GOSE`) %>%
summarize(n = mean(n)) %>%
ungroup %>%
mutate(model = factor(model, models))
rbind(
df_average_confusion_matrices %>%
filter(model %in% c("LOCF", "MM", "GP + cov", "MSM")) %>%
group_by(model) %>%
filter(`True GOSE` <= 3) %>%
filter(`Observed GOSE` <= 3) %>%
mutate(n_total = sum(n)) %>%
filter(`Predicted GOSE` > 3) %>%
summarize(fraction = sum(n / n_total)) %>%
df_average_confusion_matrices %>%
filter(model %in% c("LOCF", "MM", "GP + cov", "MSM")) %>%
group_by(model) %>%
filter(`Observed GOSE` == 4) %>%
mutate(n_total = sum(n)) %>%
filter(`Predicted GOSE` > 4) %>%
summarize(fraction = sum(n / n_total)) %>%
df_average_confusion_matrices %>%
filter(model %in% c("LOCF", "MM", "GP + cov", "MSM")) %>%
group_by(model) %>%
filter(`True GOSE` < 8) %>%
filter(`Observed GOSE` < 8) %>%
mutate(n_total = sum(n)) %>%
filter(`Predicted GOSE` == 8) %>%
summarize(fraction = sum(n / n_total)) %>%
pander::pandoc.table("Some specific confusion percentages, LOCF subset.", digits = 3)
```
To better understand the overall performance assessment in Figure ???,
we also consider the performance conditional on the respective ground truth
(i.e. the observed GOSe values in the test sets).
The results are shown in Figure ??? (vertical bars are +/- one standard
error of the mean).
```{r error-scores-locf, echo=FALSE, fig.height=5, fig.width=9}
plot_summary_measures_cond <- function(df_predictions, models, label) {
df_predictions %>%
filter(model %in% models) %>%
group_by(model, fold, GOSE) %>%
summarize(
RMSE = mean((GOSE - prediction)^2, na.rm = TRUE) %>% sqrt,
MAE = mean(abs(GOSE - prediction), na.rm = TRUE),
Bias = mean(prediction, na.rm = TRUE) - mean(GOSE, na.rm = TRUE),
`Pr[imp. > true] - Pr[imp. < true]` = mean(prediction > GOSE, na.rm = TRUE) - mean(prediction < GOSE, na.rm = TRUE)
) %>%
gather(error, value, -model, -GOSE, -fold) %>%
group_by(GOSE, model, error, fold) %>%
summarize(
mean_per_fold = mean(value) # mean within fold
) %>%
group_by(GOSE, model, error) %>%
summarize(
mean = mean(mean_per_fold), # overall mean
se = sd(mean_per_fold) / sqrt(n())
) %>%
ungroup() %>%
mutate(
model = factor(model, models),
error = factor(error, c(
"Bias",
"Pr[imp. > true] - Pr[imp. < true]",
"MAE",
"RMSE"
))
) %>%
ggplot(aes(GOSE, color = model)) +
geom_hline(yintercept = 0, color = "black") +
geom_line(aes(y = mean)) +
geom_point(aes(y = mean)) +
xlab("GOSe") +
facet_wrap(~error) +
scale_y_continuous(name = "", breaks = seq(-2, 8, .25)) +
theme_bw() +
theme(
panel.grid.minor = element_blank(),
panel.grid.major.x = element_blank(),
legend.title = element_blank()
) +
ggtitle("Bias and accuracy by true GOSe value, LOCF subset")
geom_hline(yintercept = 0, color = "black") +
geom_errorbar(aes(ymin = mean - se, ymax = mean + se),
width = .2,
position = position_dodge(.33)
) +
geom_line(aes(y = mean), alpha = .66) +
xlab("GOSe") +
facet_wrap(~error, nrow = 1) +
scale_y_continuous(name = "", breaks = seq(-2, 8, .25)) +
theme_bw() +
theme(
panel.grid.minor = element_blank(),
panel.grid.major.x = element_blank(),
legend.title = element_blank()
) +
ggtitle(label)
}
ggsave(filename = "errors_stratified_locf.pdf", width = 7, height = 7)
ggsave(filename = "errors_stratified_locf.png", width = 7, height = 7)
plot_summary_measures_cond(
df_predictions %>% filter(!(gupi %in% idx)),
c("MSM", "GP + cov", "MM", "LOCF"),
"Summary measures by observed GOSe, LOCF subset"
)
ggsave(filename = "errors_stratified_locf.pdf", width = 9, height = 5)
ggsave(filename = "errors_stratified_locf.png", width = 9, height = 5)
```
Just as with the overall performance, differences are most pronounced in terms
of bias.
Interestingly, the conditional perspective reveals differences between bias
defined as the difference between mean imputed and mean observed values
(tacitly assuming at least an interval scale) and bias defined as the
difference in the probabilities of over- and undershooting the observed value.
Again, the category imbalance in the GOSe distribution explains the fact that
all model-based approaches tend to perform better for the most frequent
categories 6, 7, and 8 while sacrificing performance for the less frequent
categories 4 and 5 as compared to LOCF.
Bias-wise all methods exhibit a certain regression to the mean effect since low
categories tend to be confused with better (higher) GOSe on average while
high observed GOSe values are dampened (negative bias at 7, 8).
Since LOCF does not take the category imbalance into account and since it exhibits
a relatively large negative bias at the most frequent GOSe values, it is
overall negatively biased.
Interestingly, the conditional assessment of the GP regression's bias profile
reveals that the overall unbiasedness can be explained by the relatively high
positive and negative biases conditional on low/high GOSe values canceling out
in the overall population.
Since the accuracy results mirror this effect, the GP regression model seems
to suffer from an overly strong regression to the mean effect.
The MSM and MM models are fairly similar with respect to accuracy, but MSM
clearly dominates with respect to bias.
Note that irrespective of the exact definition of bias used, MSM dominates the
other model-based approaches.
Comparing LOCF and MSM, there is a slight advantage of MSM in terms of accuracy
for the majority classes 3, 7, and 8, which explains the overall difference
shown in Figure ???.
With respect to bias, MSM also performs better than LOCF for the most
frequently observed categories, but the extent of this improvement depends on
the performance measure.
The relative characteristics of the three considered approaches are comparable
to the LOCF subset.
**TODO**
p_cnf_mtrx_colnrm <- df_average_confusion_matrices %>%
group_by(model, `True GOSE`) %>%
mutate(
`fraction (column)` = n / sum(n),
`fraction (column)` = ifelse(is.nan(`fraction (column)`), 0, `fraction (column)`)
) %>%
ggplot(aes(`True GOSE`, `Predicted GOSE`, fill = `fraction (column)`)) +
geom_raster() +
geom_hline(yintercept = c(2, 4, 6) + .5, color = "black") +
geom_vline(xintercept = c(2, 4, 6) + .5, color = "black") +
scale_fill_gradient("", low = "white", high = "black", limits = c(0, 1)) +
coord_fixed(expand = FALSE) +
labs(x = "true GOSe", y = "imputed GOSe", fill = "") +
theme_bw() +
theme(
panel.grid = element_blank()
) +
facet_wrap(~model, nrow = 1) +
ggtitle("Average confusion matrix accross folds (column fraction)")
* decide whether figures go in appendix - David and I agree on them being actually the
primary analysis; we just need to convince people of the fact that LOCF should be dropped *first*. As always, I am open to debate this but we should just make a decision, figurexit or figuremain?
```{r confusion-matrix, warning=FALSE, message=FALSE, echo=FALSE, fig.cap="Confusion matrices, full training set without LOCF.", fig.height=9, fig.width=6}
plot_confusion_matrices(
df_predictions,
c("MSM", "GP + cov", "MM")
)
ggsave(filename = "confusion_matrices_all.pdf", width = 7, height = 7)
ggsave(filename = "confusion_matrices_all.png", width = 7, height = 7)
ggsave(filename = "confusion_matrices_all.pdf", width = 6, height = 9)
ggsave(filename = "confusion_matrices_all.png", width = 6, height = 9)
```
```{r crossing-table-full, echo=FALSE, warning=FALSE, results='asis'}
models <- c("MSM", "GP + cov", "MM")
df_average_confusion_matrices <- df_predictions %>%
filter(model %in% models) %>%
group_by(fold, model) %>%
do(
confusion_matrix = caret::confusionMatrix(
data = factor(.$prediction, levels = 1:8),
reference = factor(.$GOSE, levels = 1:8)
) %>%
as.matrix %>% as_tibble %>%
mutate(`Predicted GOSE` = row_number() %>% as.character) %>%
gather(`Observed GOSE`, n, 1:8)
) %>%
unnest %>%
group_by(model, `Predicted GOSE`, `Observed GOSE`) %>%
summarize(n = mean(n)) %>%
ungroup %>%
mutate(model = factor(model, models))
rbind(
df_average_confusion_matrices %>%
group_by(model) %>%
filter(`True GOSE` <= 3) %>%
filter(`Observed GOSE` <= 3) %>%
mutate(n_total = sum(n)) %>%
filter(`Predicted GOSE` > 3) %>%
summarize(fraction = sum(n / n_total)) %>%
df_average_confusion_matrices %>%
group_by(model) %>%
filter(`Observed GOSE` == 4) %>%
mutate(n_total = sum(n)) %>%
filter(`Predicted GOSE` > 4) %>%
summarize(fraction = sum(n / n_total)) %>%
df_average_confusion_matrices %>%
group_by(model) %>%
filter(`True GOSE` < 8) %>%
filter(`Observed GOSE` < 8) %>%
mutate(n_total = sum(n)) %>%
filter(`Predicted GOSE` == 8) %>%
summarize(fraction = sum(n / n_total)) %>%
pander::pandoc.table("Some specific confusion percentages, full data set.", digits = 3)
```
```{r error-scores-all, echo=FALSE, fig.height=5, fig.width=9}
plot_summary_measures_cond(
df_predictions %>% filter(!(gupi %in% idx)),
c("MSM", "GP + cov", "MM"),
"Summary measures by observed GOSe, full test set"
)
ggsave(filename = "imputation_error.pdf", width = 7, height = 7)
ggsave(filename = "imputation_error.png", width = 7, height = 7)
ggsave(filename = "imputation_error.pdf", width = 9, height = 5)
ggsave(filename = "imputation_error.png", width = 9, height = 5)
```
# Discussion
Handling missing data *post-hoc* to prevent biased analyses often requires
great effort.
It is thus of the utmost importance to implement measures for avoiding missing
data in the first place [comment: I strongly feel we should lead with this
sentence or something in the same spirit to make it absolutely clear that
statistics cannot be used to impute data out of nowhere.
Raising awareness of the complexity of missing data problems should rather be
seen as an incentive to invest more effort upfront in preventing missingness
in the first place ;)]
Nevertheless, in practice, missing values due to loss to follow-up will always
occur and should be addressed effectively.
There is a wide consensus that statistically sound imputation of missing values
is beneficial both for the reduction of bias and for increasing statistical
power.
The current gold standard for imputing missing values is multiple
imputation on a per-analysis basis including analysis-specific covariates to
further reduce bias and to preserve the imputation uncertainty in the
downstream analysis.
In practice, however, there are good reasons for providing a set of
single-imputed default values in large observational studies such as CENTER-TBI.
Consortia are increasingly committed to making their databases
available to a wider range of researchers.
In fact, more liberal data-sharing policies are becoming a core requirement
for funding bodies (cf. https://www.openaire.eu/).
In this context, it might not be possible to ensure that every analysis team
has the necessary statistical expertise to properly conduct a per-analysis
multiple imputation in the future.
Furthermore, the imputed values of a multiple-imputation procedure are
inherently random, and it is thus difficult to ensure consistency across
different analysis teams if the values themselves cannot be stored
directly in a database.
For this reason, as a practical way forward, we suggest providing a default
single imputation with appropriate measures of uncertainty for key outcomes
in the published database itself.
This mitigates problems with complete-case analyses and provides a
principled and consistent default approach to handling missing values.
Since we strongly suggest employing a model-based approach to imputation,
the fitted class probabilities can be provided in the core database
alongside the imputed values.
Based on these probabilities, it is easy to draw samples for a multiple imputation
analysis.
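
To sketch this idea concretely (object and column names below are
hypothetical), multiple-imputation draws then reduce to sampling from a
categorical distribution per patient:

```{r mi-draws-sketch, eval=FALSE}
# df_probs: one row per patient, columns p1, ..., p8 holding the fitted
# probabilities of GOSe values 1 to 8 (hypothetical names)
prob_cols <- paste0("p", 1:8)

draw_gose <- function(p) sample(1:8, size = 1, prob = p)

set.seed(42)
m <- 20  # number of imputations
draws <- replicate(m, apply(df_probs[, prob_cols], 1, draw_gose))
# 'draws' is a (patients x m) matrix; each column is one completed set
# of imputed GOSe values for a multiple-imputation analysis
```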
Wherever necessary and practical, a custom, analysis-specific multiple
imputation approach might still be employed.
In these cases, the model providing the single-imputed values may
be used as a starting point.
Several reasons disqualify LOCF as method of choice.
Not only is it inherently biased, but LOCF is also inefficient in that it
...
necessary in some cases to further reduce bias.
Finally, LOCF cannot produce an adequate measure of imputation uncertainty
since it is not model based.
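
For reference, LOCF itself amounts to no more than the following (a sketch
on hypothetical long-format data; note that it returns a single
deterministic value and no measure of uncertainty):

```{r locf-sketch, eval=FALSE}
library(dplyr)
library(tidyr)

# df_gose: one row per patient (gupi) and assessment, with columns
# 'months' (time post-injury) and 'gose' (hypothetical names)
df_locf <- df_gose %>%
  group_by(gupi) %>%
  arrange(months, .by_group = TRUE) %>%
  fill(gose, .direction = "down") %>%  # carry last observation forward
  ungroup()
```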
We draw two main conclusions from our comparison of three
alternative, model-based approaches.
Firstly, despite its theoretical drawbacks, LOCF is hard to beat in terms of
accuracy.
Still, small improvements are possible (cf. Section ???).
The main advantages of a model-based approach are thus a reduction of bias,
the ability to provide a measure of uncertainty together with the imputed
values
(or to use the very same model to draw multiple imputations),
as well as the possibility of including further analysis-specific covariates.
Secondly, we found that the inclusion of established baseline predictors for
GOSe at 6 months post-injury had little effect on the imputation quality.
Note that this does not refute their predictive value but only indicates that
there is little marginal benefit over knowing at least one other GOSe value.
Finally, differences between the considered model-based approaches tend to
be rather nuanced.
We nevertheless favor the multi-state model (MSM).
It is well-interpretable in terms of transition intensities.
An efficient implementation [@msm2011] in standard statistical software [@R2016] is
available.
It succeeds in eliminating the bias introduced by LOCF, provides imputed
values for the entire population, and yields a probabilistic output.
That is, besides the single predicted GOSe value, the fitted probabilities
for each GOSe value may be stored in the database.
Based on these values, it is easy to draw samples for a multiple imputation
analysis.
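
A minimal sketch of fitting such a model with the msm package [@msm2011]
is given below; the data frame and column names are hypothetical, and the
all-transitions-allowed structure is a crude default rather than the
specification used in our analysis:

```{r msm-sketch, eval=FALSE}
library(msm)

# df_gose_long: one row per patient (gupi) and assessment, with GOSe
# coded as integer states 1-8 and time in months post-injury
Q <- matrix(0.1, nrow = 8, ncol = 8)  # crude initial intensities,
diag(Q) <- 0                          # all transitions allowed; diagonal
                                      # entries are set internally by msm

fit <- msm(gose ~ months, subject = gupi, data = df_gose_long,
           qmatrix = Q, gen.inits = TRUE)

# fitted transition probabilities over a 6-months horizon; row i holds
# the probabilities of each GOSe value given current state i
pmatrix.msm(fit, t = 6)
```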
# Appendix / Supplemental Material
## Distribution of GOSe in validation folds
```{r marginal-GOSe-folds, echo=FALSE, fig.cap="Marginal GOSe distribution per validation fold."}
df_ground_truth %>%
ggplot(aes(Outcomes.DerivedCompositeGOSE)) +
  # ... (remaining plot code elided)
```
## Reproducible Research Strategy
CENTER-TBI is committed to reproducible research.
To this end, the entire source code to run the analyses is publicly available
at https://git.center-tbi.eu/kunzmann/gose-6mo-imputation.
Scripts for automatically downloading the required data from the central,
access-restricted 'Neurobot' database (https://neurobot.incf.org/) at
https://center-tbi.incf.org/ are provided.
The analysis is completely automated using the workflow management tool
'snakemake' [@koster2012snakemake], and a singularity [@kurtzer2017singularity]
container image containing all required dependencies is publicly available
from zenodo.org at https://zenodo.org/record/2600385 (DOI: 10.5281/zenodo.2600385).
Detailed step-by-step instructions on how to reproduce the analysis are
provided in the README.md file of the GitLab repository.
```{r zip-figures, include=FALSE}
# ... (chunk body elided)
```

% ...
pages={36--47},
year={2014},
publisher={American Medical Association}
}
@article{center2015collaborative,
title={Collaborative European neurotrauma effectiveness research in traumatic brain injury (CENTER-TBI): A prospective longitudinal observational study},
author={CENTER-TBI Participants and Investigators and others},
journal={Neurosurgery},
volume={76},
number={1},
pages={67--80},
year={2015},
publisher={Lippincott Williams and Wilkins}
}
@article{kurtzer2017singularity,
title={Singularity: Scientific containers for mobility of compute},
author={Kurtzer, Gregory M and Sochat, Vanessa and Bauer, Michael W},
journal={PLoS ONE},
volume={12},
number={5},
pages={e0177459},
year={2017},
publisher={Public Library of Science}
}
@article{mcmillan2016glasgow,
title={The Glasgow Outcome Scale—40 years of application and refinement},
author={McMillan, Tom and Wilson, Lindsay and Ponsford, Jennie and Levin, Harvey and Teasdale, Graham and Bond, Michael},
journal={Nature Reviews Neurology},
volume={12},
number={8},
pages={477},
year={2016},
publisher={Nature Publishing Group}
}
@article{jennett1981disability,
title={Disability after severe head injury: observations on the use of the Glasgow Outcome Scale.},
author={Jennett, B and Snoek, J and Bond, MR and Brooks, N},
journal={Journal of Neurology, Neurosurgery \& Psychiatry},
volume={44},
number={4},
pages={285--293},
year={1981},
publisher={BMJ Publishing Group Ltd}
}
@article{steyerberg2008predicting,
title={Predicting outcome after traumatic brain injury: development and international validation of prognostic scores based on admission characteristics},
author={Steyerberg, Ewout W and Mushkudiani, Nino and Perel, Pablo and Butcher, Isabella and Lu, Juan and McHugh, Gillian S and Murray, Gordon D and Marmarou, Anthony and Roberts, Ian and Habbema, J Dik F and others},
journal={PLoS Medicine},
volume={5},
number={8},
pages={e165},
year={2008},
publisher={Public Library of Science}
}
@book{verbeke2009linear,
title={Linear mixed models for longitudinal data},
author={Verbeke, Geert and Molenberghs, Geert},
year={2009},
publisher={Springer Science \& Business Media}
}
@article{koster2012snakemake,
title={Snakemake—a scalable bioinformatics workflow engine},
author={K{\"o}ster, Johannes and Rahmann, Sven},
journal={Bioinformatics},
volume={28},
number={19},
pages={2520--2522},
year={2012},
publisher={Oxford University Press}
}
@article{msm2011,
author = {Jackson, Christopher H.},
journal = {Journal of Statistical Software},
title = {{Multi-State Models for Panel Data: The msm Package for R}},
volume = {38},
year = {2011}
}
@article{white2010bias,
title={Bias and efficiency of multiple imputation compared with complete-case analysis for missing covariate values},
author={White, Ian R and Carlin, John B},
journal={Statistics in Medicine},
volume={29},
number={28},
pages={2920--2931},
year={2010},
publisher={Wiley Online Library}
}
@article{R2016,
archivePrefix = {arXiv},
arxivId = {arXiv:1011.1669v3},
% ... (remaining fields elided)
}

rule download_data:
    # ... (remaining fields elided in this excerpt)
    shell:
        """
        bash scripts/download_{wildcards.version}.sh
        """
rule download_data_v1_1:
    # aggregate target: triggers the download of all v1.1 data files
    input:
        "data/v1.1/df_baseline.rds",
        "data/v1.1/df_ctmri.rds",
        "data/v1.1/df_imaging.rds",
        "data/v1.1/df_labs.rds",
        "data/v1.1/df_gose.rds"
#!/bin/bash
set -e

# download the container image from zenodo
wget https://zenodo.org/record/2600385/files/container.sif

# verify the md5 checksum of the downloaded image
checksum=$(md5sum container.sif | cut -d ' ' -f 1)
if [ "$checksum" != "7db125c9c83621d78981e558546b1e88" ]; then
    echo "md5 mismatch!"
    exit 1
fi
Bootstrap: docker
From: rocker/verse:latest
%labels
Maintainer Kevin Kunzmann kevin.kunzmann@mrc-bsu.cam.ac.uk
%help
CENTER-TBI 6 months GOSe outcome imputation,
cf. https://git.center-tbi.eu/kunzmann/gose-6mo-imputation for details.
%post
apt-get update
apt-get -y install curl python3-pip
# snakemake drives the analysis workflow
pip3 install snakemake
# R packages required by the analysis scripts
R -e "install.packages('diagram')"
R -e "install.packages('rstan')"
R -e "install.packages('brms')"
R -e "install.packages('mice')"
R -e "install.packages('ggalluvial')"
R -e "install.packages('caret')"
R -e "install.packages('e1071')"
R -e "install.packages('msm')"
R -e "install.packages('cowplot')"
R -e "install.packages('pander')"
R -e "devtools::install_github('kkmann/describr')"
#!/bin/bash
# single input: snakemake target
# run the workflow locally using all available cores
ncpus=$(getconf _NPROCESSORS_ONLN)
snakemake "$1" --use-singularity -j "$ncpus"
#!/bin/bash
# single input: snakemake target
nohup snakemake "$1" --use-singularity -j 99 --cluster-config cluster.json --cluster "sbatch -A {cluster.account} -p {cluster.partition} -n {cluster.n} -c {threads} -t {cluster.time}" &