Abstract
Studies that investigate the performance of prognostic and predictive biomarkers are commonplace in medicine. Evaluating the performance of biomarkers is challenging in traumatic brain injury (TBI) and other conditions when both the time factor (i.e. time from injury to biomarker measurement) and different levels or doses of treatment are in play. Such factors need to be accounted for when assessing the biomarker’s performance in relation to a clinical outcome. The Hyperbaric Oxygen in Brain Injury Treatment (HOBIT) trial, a phase II randomized controlled clinical trial, seeks to determine the dose of hyperbaric oxygen therapy (HBOT) for treating severe TBI that has the highest likelihood of demonstrating efficacy in a phase III trial. HOBIT will study up to 200 participants with severe TBI. This paper discusses the statistical approaches used to assess the prognostic and predictive performance of the biomarkers studied in this trial, where prognosis refers to the association between a biomarker and the clinical outcome, while predictiveness refers to the ability of the biomarker to identify patient subgroups that benefit from therapy. Analyses based on initial biomarker levels accounting for different levels of HBOT and other baseline clinical characteristics, and analyses of longitudinal changes in biomarker levels, are discussed from a statistical point of view. Methods for combining biomarkers that are complementary in nature are also considered, and the relevant algorithms are illustrated in detail along with an extensive simulation study that assesses the performance of the statistical methods. Even though the discussed approaches are motivated by the HOBIT trial, their applications are broader: they can be applied in studies assessing the predictiveness and prognostic ability of biomarkers in relation to a well-defined therapeutic intervention and clinical outcome.
Keywords
Introduction
Currently, there are no therapeutic agents that have been shown to improve outcomes for patients with severe traumatic brain injury (TBI). Critical barriers to progress in developing treatments for severe TBI are the lack of predictive biomarkers for identifying patients likely to benefit from a promising intervention, as well as the lack of prognostic biomarkers for early and accurate prognostication of clinical outcomes. Clinical examination and brain CT imaging remain the fundamental tools for the initial evaluation of patients with severe TBI and for subject selection in clinical trials. Blood-based biomarkers might serve as adjunctive tools to clinical examination and CT imaging. They could make clinical trials more efficient by enabling the enrollment of subjects who are most likely to derive benefit. Three promising biomarkers that may be useful as both prognostic and predictive biomarkers in clinical trials involving patients with severe TBI are glial fibrillary acidic protein (GFAP), neurofilament light chain (NfL) and high-sensitivity C-reactive protein (hsCRP). GFAP and NfL are structural proteins found in astrocytes and neurons, respectively, and they are released after a TBI in amounts that are proportional to anatomic injury severity; their levels are correlated with TBI lesion volume as measured by head CT and they can quantify the extent of neuronal and glial loss following TBI.1–4 Levels of hsCRP, a nonspecific marker of inflammation, may complement the anatomic biomarkers since dysfunction in TBI is also highly mediated by neuroinflammation.5
Current gaps in knowledge limit the use of these biomarkers in TBI clinical trials. First, in human randomized controlled trials it is currently unknown whether a promising neuroprotective agent will have a treatment effect on these biomarkers, or whether the achievement of substantial treatment effects on these biomarkers reliably predicts clinical outcome. A poor clinical outcome is associated with a Glasgow Outcome Scale Extended (GOSE) score that is ≤4, while a favorable outcome refers to a score >4.
To address these questions and knowledge gaps, we are conducting a biomarker study that includes longitudinal measurements of three biomarkers (GFAP, NfL, and hsCRP) in the Hyperbaric Oxygen in Brain Injury Treatment (HOBIT) trial. The HOBIT trial will enroll up to 200 severe TBI subjects. It is a phase II randomized controlled clinical trial that seeks to determine the dose of hyperbaric oxygen therapy (HBOT) that has the highest likelihood of demonstrating efficacy in a subsequent phase III trial. The key biological pathways through which HBOT affects clinical outcome are: (1) a decrease in neural cell loss (which can be measured by GFAP and NfL); and (2) a decrease in systemic inflammation (which can be measured by hsCRP). The main mechanism is improvement in oxidative metabolism, which alleviates the energy depletion characteristic of severe TBI. A decrease in inflammatory markers in the brain has also been discussed in basic animal research studies (see Do and Woo, 2018, among others).
The goal of this manuscript is to illustrate the appropriate statistical methodologies and relevant modeling approaches for addressing these questions. Our aim is to provide statistical guidance for researchers who are interested in assessing the prognostic and predictive ability of biomarkers both at baseline and during subsequent follow-up measurements. We describe appropriate statistical strategies that accommodate biomarkers that are prognostic or predictive. In this paper, we discuss the differences between a prognostic and a predictive biomarker as well as the implications for the Receiver Operating Characteristic (ROC) analysis. We further discuss a statistical framework for assessing the performance of biomarkers at baseline. This involves a ROC analysis at baseline that aids in assessing the prognostic value of markers as well as the estimation of a cutoff for each marker that might be indicative of a subgroup that derives the most benefit from an experimental therapy. We illustrate the framework for adjusting for covariates that will make the cutoff estimation personalized as opposed to a ‘one size fits all’ notion. We illustrate how these covariates can be accommodated through Cox regression models even though we do not operate under a survival analysis framework. Such a proposed framework allows for natural assumptions such as a stochastic ordering between the biomarker distributions of the ‘favorable outcome’ and ‘unfavorable outcome’ groups.
For the assessment of biomarkers that are time-dependent, we illustrate the use of both a generalized estimating equations (GEE) framework as well as a mixed modeling framework, and we highlight the advantages and disadvantages of both. We also present simulations that evaluate the discussed approaches, illustrate their implementation, and provide insight into their utility. We close with a discussion of our conclusions and limitations.
Prognostic and predictive biomarkers
Distinguishing prognostic from predictive biomarkers may be a challenging task. Biomarkers measured at baseline that can identify the likelihood of a clinical outcome, disease progression or an event of interest are known as prognostic biomarkers. Intrinsic clinical characteristics can often also be considered prognostic (bio-)markers since they may be indicative of the final outcome of interest. Prognostic biomarkers may be used as part of the eligibility criteria for a clinical trial that enrolls patients who are at risk for an unfavorable outcome. For example, if a prognostic biomarker can determine at baseline those who will end up experiencing a favorable outcome from those who may not, then the latter group may be of interest for recruitment to an experimental therapy/study. Identifying prognostic biomarkers usually involves observational data.
On the other hand, predictive biomarkers are biomarkers that are used to identify patients who are more likely to exhibit a favorable outcome when treated with a specific therapy. For example, the control group may be receiving a standard therapy and comparisons are to be made against the experimental group that receives a new therapy. As discussed by the FDA,7 prognostic biomarkers and predictive biomarkers cannot generally be distinguished when only patients who have received a particular therapy are studied. Generally, to identify a predictive biomarker, one needs a comparison of a ‘control group’ versus a ‘treatment group’. In addition, a biomarker can be prognostic, predictive, or both. A more detailed discussion with some graphical examples is provided therein.
Baseline analysis for the TBI biomarkers
Receiver operating characteristic curves at baseline to assess the prognostic value for each biomarker
In this context, we will mainly focus on our control group, for which the same standard of care is administered. To assess whether the baseline biomarkers are prognostic, we need to investigate whether the baseline biomarker scores differ between individuals who ended up having a favorable outcome (GOSE >4) and those who had an unfavorable outcome (GOSE ≤4).
The ROC has a more convenient representation through the underlying cumulative distribution functions of the two groups, or equivalently, their survival functions. It is of the form $ROC(t)=S_1(S_0^{-1}(t)),\ t\in(0,1)$, where $S_0$ and $S_1$ denote the survival functions of the biomarker in the favorable and unfavorable outcome groups, respectively.
The AUC provides an overall measure of the degree of separation between the two outcome groups, and can be written as $AUC=\int_0^1 ROC(t)\,dt = P(X_1 > X_0)$, where $X_1$ and $X_0$ denote biomarker measurements from the unfavorable and favorable outcome groups, respectively.
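As an illustration, the empirical version of this AUC can be computed directly from the two groups’ biomarker values. The following is a minimal numpy sketch (the function name and the convention that higher values indicate an unfavorable outcome are our assumptions, not part of the trial’s analysis plan):

```python
import numpy as np

def empirical_auc(x_unfav, x_fav):
    """Empirical AUC: P(X_unfav > X_fav) + 0.5 * P(tie),
    assuming higher biomarker values indicate an unfavorable outcome."""
    x_unfav = np.asarray(x_unfav, dtype=float)
    x_fav = np.asarray(x_fav, dtype=float)
    diff = x_unfav[:, None] - x_fav[None, :]   # all pairwise differences
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Perfectly separated groups give AUC = 1
print(empirical_auc([3.0, 4.0], [1.0, 2.0]))  # 1.0
```

This pairwise form is equivalent to the Mann–Whitney statistic; for large samples a rank-based computation would be preferred.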
In our trial, the biomarkers at baseline depend on some baseline characteristics of the participants, denoted here by $Z$ (a vector of baseline covariates).
The area under it can be employed to assess the overall prognostic ability of the biomarker to distinguish participants with a favorable outcome from those with an unfavorable outcome. The ROC defined in (5) is formulated for any desired baseline profile of characteristics. This in turn allows us to explore the prognostic performance of the biomarker given a specific baseline profile that might be of clinical interest. It can be the case that the prognostic ability of a biomarker is higher for some specific baseline profiles and lower, or even poor, for others.
In the case where the baseline information does not affect the biomarker values, we can directly construct the ROC based on expression (2) and assess the discriminatory ability through the AUC presented in (3). To determine a cutoff beyond which an individual of the control group is more likely to exhibit an unfavorable outcome, we utilize the Youden index (see Youden13). This can be applied not only to the standard of care arm in a clinical trial, but to other arms as well. We hope to observe that the Youden index cutoff will be affected by the dose. This will allow us to calculate the probabilities that an individual of a given arm has a favorable outcome based on the arm’s cutoff. The Youden-based cutoff is given by $c^{*}=\arg\max_{c}\{Se(c)+Sp(c)-1\}$, where $Se(c)$ and $Sp(c)$ denote the sensitivity and specificity at cutoff $c$.
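The Youden-based cutoff can be estimated empirically by scanning the observed biomarker values; a minimal sketch (function name and the ‘higher is unfavorable’ convention are our assumptions):

```python
import numpy as np

def youden_cutoff(x_unfav, x_fav):
    """Return (cutoff, J): the cutoff maximizing Se(c) + Sp(c) - 1,
    where the rule classifies 'unfavorable' when x > c and higher
    biomarker values indicate an unfavorable outcome."""
    x_unfav, x_fav = np.asarray(x_unfav, float), np.asarray(x_fav, float)
    candidates = np.unique(np.concatenate([x_unfav, x_fav]))
    best_c, best_j = None, -np.inf
    for c in candidates:
        se = np.mean(x_unfav > c)     # sensitivity at cutoff c
        sp = np.mean(x_fav <= c)      # specificity at cutoff c
        j = se + sp - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j
```

Smoothed (parametric or kernel) versions of $Se$ and $Sp$ can be substituted for the empirical proportions when a smooth estimate of the cutoff is desired.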
For parametric and non-parametric methods regarding the estimation and the corresponding confidence intervals of the maximized Youden index and the Youden-based cutoff we refer to Bantis et al.
14
In the case where some variables contained in the baseline information ($Z$) are associated with the biomarker, the Youden-based cutoff becomes a function of the baseline profile rather than a single value.
Note that this form of the Youden index, i.e. $\max_{c}\{Se(c\mid Z)+Sp(c\mid Z)-1\}$, yields a cutoff that is tailored to each baseline profile.
In cases where the baseline profile is of no statistical significance, the optimized cutoff degenerates to a unique cutoff (in the sense that it will not be different for different baseline profiles). The biomarker performance at such a cutoff can be summarized by estimating the sensitivity (TPR) and the specificity (1 − FPR) at that cutoff, namely $Se(c^{*})$ and $Sp(c^{*})$.
Note that the cutoff
Such a formulation implies that given a favorable outcome we get
Note that for a given
Whether formulation (8) or (4) is preferable depends on the data set at hand, and in particular, on whether the underlying assumptions are plausible. It could be the case that a baseline characteristic affects the biomarker scores of the ‘favorable outcome’ group, yet it has no (or simply a different) impact for the ‘unfavorable outcome’ group (or vice versa). In that case, formulation (8) might be preferable to (4). Such claims can also be explored by fitting the models and assessing whether the impact of
Note that even though the discussion so far has focused on the prognostic value of the baseline measurements, prognostic value can also be assessed when biomarker measurements are taken repeatedly over time. A framework for handling such longitudinal settings is discussed later on.
Combining biomarkers at baseline to increase the individual prognostic value of each marker
Multiple ($p$) biomarkers measured at baseline can be combined into a single composite score in order to improve upon the discriminatory ability of each individual marker.
It is known in the literature that in such formulations, a smooth ROC will alleviate numerical problems regarding the optimization of the AUC, and in such scenarios one can employ kernel-based approaches. A kernel-based approach discussed in Yan et al.17 is based on the usual kernel density estimator applied to each biomarker.
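With a Gaussian kernel, the smoothed AUC has the convenient closed form $AUC_h=(n_1 n_0)^{-1}\sum_i\sum_j \Phi\{(x_{1i}-x_{0j})/h\}$. Below is a sketch; the choice of Silverman’s rule-of-thumb bandwidths combined across groups is our assumption, not necessarily the bandwidth rule used in the cited work:

```python
import numpy as np
from statistics import NormalDist

def iqr(x):
    """Interquartile range."""
    return float(np.subtract(*np.percentile(x, [75, 25])))

def smooth_auc(x_unfav, x_fav):
    """Kernel-smoothed AUC with a Gaussian kernel:
    AUC_h = mean over all pairs of Phi((x1_i - x0_j) / h),
    with per-group Silverman rule-of-thumb bandwidths (an assumption)."""
    x1, x0 = np.asarray(x_unfav, float), np.asarray(x_fav, float)
    h1 = 0.9 * min(x1.std(ddof=1), iqr(x1) / 1.34) * len(x1) ** (-0.2)
    h0 = 0.9 * min(x0.std(ddof=1), iqr(x0) / 1.34) * len(x0) ** (-0.2)
    h = np.sqrt(h1 ** 2 + h0 ** 2)                 # combined bandwidth
    cdf = np.vectorize(NormalDist().cdf)
    return float(cdf((x1[:, None] - x0[None, :]) / h).mean())
```

Because $\Phi$ is smooth in the coefficients of a linear combination, this estimator is differentiable and hence amenable to gradient-based optimization, unlike the step-function empirical AUC.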
An advantage of a kernel-based approach is the resulting smoothness of the ROC, which alleviates potential numerical problems. This is highlighted by Yin and Tian,20 among others. Nonetheless, the uniqueness of the optimal coefficient vector is not guaranteed, which motivates the anchor-based search algorithm that follows.
Combination Algorithm:
• Step 1: Set the first biomarker as an anchor and set its coefficient equal to 1.
• Step 2: Scan the remaining coefficients in the interval [−1, 1], and solve the optimization problem stated in (13).
• Step 3: Repeat Steps 1 and 2 by forcing the anchor coefficient to be equal to −1.
• Step 4: Repeat Steps 1 to 3 for each biomarker. Namely, proceed consecutively with the next biomarker as an anchor. Thus, the optimal AUC is equal to the maximum AUC attained over all anchors and coefficient configurations.
• Step 5: Extract the coefficients that correspond to the maximized AUC obtained in Step 4 and report them to finalize the reached optimal linear combination.
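The steps above can be sketched as a grid search. This is a simplified illustration: the coarse grid resolution and the use of the empirical (rather than kernel-smoothed) AUC as the objective are our simplifications, and the function names are hypothetical:

```python
import numpy as np

def empirical_auc(s1, s0):
    """Empirical AUC of composite scores (higher = unfavorable)."""
    d = s1[:, None] - s0[None, :]
    return (d > 0).mean() + 0.5 * (d == 0).mean()

def combine_markers(X_unfav, X_fav, grid=np.linspace(-1, 1, 21)):
    """Anchor-based scan for a linear combination maximizing the AUC.
    X_unfav, X_fav: (n, p) arrays of p markers per group.
    Returns (coefficient vector, maximized empirical AUC)."""
    p = X_unfav.shape[1]
    best_w, best_auc = None, -np.inf
    for anchor in range(p):                    # Step 4: each marker as anchor
        for sign in (1.0, -1.0):               # Steps 1 and 3: anchor = +1 / -1
            others = [k for k in range(p) if k != anchor]
            # Step 2: scan the remaining coefficients over the grid
            combos = np.stack(np.meshgrid(*([grid] * (p - 1))), -1).reshape(-1, p - 1)
            for coefs in combos:
                w = np.empty(p)
                w[anchor] = sign
                w[others] = coefs
                auc = empirical_auc(X_unfav @ w, X_fav @ w)
                if auc > best_auc:             # Step 5: keep the best combination
                    best_w, best_auc = w.copy(), auc
    return best_w, best_auc
```

Since the grid contains 0, every single-marker rule is inside the search space, so the combined training AUC can never fall below the best individual training AUC.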
After applying this algorithm, one obtains the vector of coefficients that yields the (approximately) optimal linear combination.
Note that making inferences and constructing confidence intervals around the AUC estimate based on the composite score is not straightforward, as one needs to account for the variability of the estimated coefficients. A percentile bootstrap can be employed as follows:
• Step 1: Resample with replacement, separately within each outcome group.
• Step 2: Based on the current bootstrap sample from Step 1, perform the ‘combination algorithm’ presented above and obtain the optimal coefficient vector.
• Step 3: Using the coefficients obtained in Step 2, compute the composite score and its corresponding AUC.
• Step 4: Repeat Steps 1–3 1000 times to obtain 1000 bootstrapped AUCs.
• Step 5: Report the (1 − α)100% percentile-based confidence interval using the α/2 and 1 − α/2 quantiles of the bootstrapped AUCs.
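The bootstrap scheme can be sketched as follows. The `fit_and_score` callable stands in for refitting the combination algorithm on each resample (its name and signature are hypothetical); passing a refitting routine is what propagates the coefficient variability into the interval:

```python
import numpy as np

def bootstrap_auc_ci(x_unfav, x_fav, fit_and_score, B=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC of a data-driven composite score.
    fit_and_score(b1, b0) must refit the combination on the bootstrap sample
    and return the two groups' composite scores (a sketch; B = 1000 follows
    the algorithm in the text)."""
    rng = np.random.default_rng(seed)
    n1, n0 = len(x_unfav), len(x_fav)
    aucs = []
    for _ in range(B):
        b1 = x_unfav[rng.integers(0, n1, n1)]   # resample within each group
        b0 = x_fav[rng.integers(0, n0, n0)]
        s1, s0 = fit_and_score(b1, b0)          # refit combination, get scores
        d = np.asarray(s1)[:, None] - np.asarray(s0)[None, :]
        aucs.append((d > 0).mean() + 0.5 * (d == 0).mean())
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

For a single marker, `fit_and_score` can simply return the resampled values unchanged; for a composite score it would rerun the full 5-step algorithm.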
There are other alternative approaches that result in linear combinations and utilize different objective functions, such as the Youden index.21 Other investigators have also focused on linear combinations of markers and consider maximizing the AUC (see Pepe et al.22 and Liu et al.23). A review as well as a comparison between these methods is provided in Yan et al.17
Assessing the risk of a favorable versus an unfavorable outcome at baseline - link of the ROC and the logistic regression model
We will explore two strategies for the assessment of the risk of an individual at baseline. One is determined by the well-known logistic regression model and the other is based on the modeling presented in the previous subsection. For the former, we have the usual formulation $\log\{P(Y=1\mid X,Z)/(1-P(Y=1\mid X,Z))\}=\beta_0+\beta_1 X+\boldsymbol{\beta}_2^{\prime}Z$, where $Y=1$ denotes an unfavorable outcome, $X$ the biomarker value at baseline, and $Z$ the baseline profile.
Such a formulation allows us to explore the risk of an unfavorable outcome given the biomarker value at baseline and the baseline profile. A unit increase of the biomarker multiplies the odds of an unfavorable outcome by $e^{\beta_1}$, holding the baseline profile fixed.
This model of the log-odds given the biomarker and the baseline characteristics depends on the pre-test probability of an unfavorable outcome (i.e. the marginal probability of $Y=1$).
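This dependence is concentrated in the intercept: refitting is not needed to report risks under a different pre-test probability, since a simple logit shift of the intercept suffices. A self-contained sketch (Newton–Raphson/IRLS fit in numpy; in practice standard software would be used, and the function names are ours):

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Logistic regression via Newton-Raphson (IRLS).
    X: (n, p) covariates without an intercept column.
    Returns (intercept, slope coefficients)."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))
        W = p * (1 - p)
        H = Xd.T @ (Xd * W[:, None])            # observed information (Hessian)
        beta = beta + np.linalg.solve(H, Xd.T @ (y - p))
    return beta[0], beta[1:]

def adjust_intercept(b0, sample_prev, target_prev):
    """Shift the intercept so predicted risks reflect a different pre-test
    probability (prevalence); slopes, and hence the induced ranking and
    ROC, are left unchanged."""
    logit = lambda q: np.log(q / (1 - q))
    return b0 + logit(target_prev) - logit(sample_prev)
```

The intercept correction is the standard prior-probability adjustment for logistic models; it leaves the ROC of the fitted score untouched, consistent with the link discussed below.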
An alternative strategy, which also depends on the pre-test probability $\pi=P(Y=1)$, is based on Bayes’ theorem: $P(Y=1\mid X=x)=\pi f_1(x)/\{\pi f_1(x)+(1-\pi)f_0(x)\}$, where $f_0$ and $f_1$ denote the biomarker densities of the favorable and unfavorable outcome groups, respectively.
The quantities involved can be estimated either parametrically or nonparametrically (e.g. through kernel density estimation).
Note that, in the discussed study (data), an important covariate is the so-called time to measurement. Even if all subjects released biomarkers at the same rate after injury, the time from injury to baseline measurement varies considerably between subjects. This variable can be straightforwardly accommodated as a covariate in the models described above.
Link of the logistic regression and the ROC
As discussed above, a usual approach that many investigators use to assess the discriminatory ability of a biomarker is the ROC, and similar to the logistic regression, it usually involves two variables: the value of a continuous marker (a continuous random variable, $X$) and a binary outcome indicator ($Y$).
We note here that the post-test risk of an unfavorable outcome is a monotone function of the density ratio. Since the density ratio transformation yields the optimal ROC, so does the predicted (post-test) risk. Moreover, since only the intercept is affected by the pre-test probability, the ranking of the subjects, and hence the induced ROC, remains unchanged.
By referring to the value of the marker as the upper $100t\%$ point of the ‘favorable outcome’ distribution, i.e. the cutoff $S_0^{-1}(t)$ that corresponds to a false positive rate of $t$, the post-test risk can be expressed as a function of $t$.
Note that when operating at a false positive rate of $t$, the corresponding true positive rate (sensitivity) equals $ROC(t)$.
Note that for proper (i.e. concave) ROC curves the post-test risk is maximized at the smallest false positive rates, since the slope of the ROC (the likelihood ratio) is decreasing in $t$.
Longitudinal analysis for assessing the predictiveness (or prognostic value) of the biomarkers - GEE versus mixed modeling
Exploring the time-dependent trajectory of the biomarkers through GEE
This section involves exploring the longitudinal trajectory of each marker and how it is affected by the different doses/treatments of HBOT. Namely, we first consider exploring the predictiveness of the biomarkers, since we are associating their trajectory with the treatment. Note that dose/treatment is considered a discrete variable. In our study, samples of whole blood will be collected every 8 h during the first 24 h post-enrollment. On study days 2, 3, 5, 7, 14 and 180, blood will be collected once a day. To assess the biomarkers’ time-trajectories we utilize marginal models under a GEE framework. As mentioned in Fitzmaurice,24 the term marginal indicates that the model for the mean response depends only on the covariates of interest, and not on any random effects or previous responses. Namely, the marginal model does not incorporate the individual-specific random effects that are used in the mixed modeling approach that we discuss in the next section. An appealing characteristic of the GEE is that we do not need any strict parametric assumptions for the error term, as opposed to a mixed model strategy that typically relies on multivariate normal assumptions. Under the GEE approach, the trajectory of a biomarker is modeled as a function of time, dose, outcome group, and their interactions.
Such a model implies that at baseline ($t=0$) the mean biomarker level depends only on the outcome group, with the dose terms vanishing.
This formulation implies that, within a specific outcome group, the mean trajectory takes a different form for each dose level.
We do not include Dose as a main effect since individuals are randomly assigned to different dose levels; randomization implies a common mean biomarker level at baseline, so dose can affect the trajectory only through its interactions with time.
Re-writing model (18) in terms of the available data, the mean trajectory can be expressed for each subject at the observed timepoints.
Figure 1. Time-dependent trajectory of a hypothetical biomarker for different doses of HBOT and for different groups (favorable vs. unfavorable outcome).
The estimation of the coefficients involved in (18) can be done through an iterative process called iteratively re-weighted least squares (IRLS) within the GEE framework, and nowadays this process is automated by many statistical software packages. The trajectory of each marker can be visualized accordingly for any given profile or any given dose of HBOT (see Figure 1 for a hypothetical example). The comparison of the levels of a single biomarker for different levels of doses at a given baseline profile can be based on the area between the two trajectories, either for the whole time-period of the study, or at a given time-interval of interest. This might involve consecutive timepoints $[t_j, t_{j+1}]$ of clinical interest.
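For intuition, under an independence working correlation and an identity link, the GEE point estimates coincide with ordinary least squares, while valid standard errors come from a cluster-robust (sandwich) covariance that accounts for within-subject correlation. A self-contained numpy sketch (in practice dedicated GEE software would be used; the function name is ours):

```python
import numpy as np

def gee_independence(X, y, cluster):
    """GEE with independence working correlation, identity link.
    Point estimates equal OLS; standard errors use the cluster-robust
    sandwich estimator, valid under within-subject correlation.
    X: (n, p) design including an intercept column; cluster: subject ids."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for cid in np.unique(cluster):
        m = cluster == cid
        g = X[m].T @ resid[m]              # per-subject score contribution
        meat += np.outer(g, g)
    cov = XtX_inv @ meat @ XtX_inv         # sandwich covariance
    return beta, np.sqrt(np.diag(cov))
```

With non-identity working correlations the point estimates change as well, but the sandwich form of the covariance is the same idea.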
Note that even though this section focuses on predictiveness, the prognostic value of a biomarker can also be assessed in a longitudinal fashion. In a setting where, by design, all individuals receive the same treatment (which in many settings may mean no treatment at all), longitudinal biomarker measurements can be associated with the outcome to assess prognostic value.
Exploring the time-dependent trajectory of the biomarkers through mixed modeling
An alternative approach to assess the trajectory of each biomarker over time is a mixed-modeling-based approach. Such an approach differs from a GEE-based approach in that random effects are also incorporated in the model; a feature that provides both advantages and disadvantages. Before we point them out (see next paragraph), we refer to the structure of a plausible mixed model for this study.
In the simplest case with two levels of dose, this model implies that the subject-specific mean at time $t$ takes a different form for each combination of outcome status and dose level.
Such a model incorporates the random effects (random intercepts and, possibly, random slopes), which capture the subject-specific deviations from the population-mean trajectory.
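For the simplest balanced random-intercept case, the variance components can be illustrated with a method-of-moments (one-way ANOVA) sketch; this is only for intuition, as ML/REML via a mixed-model package would be used in practice, and the function name is ours:

```python
import numpy as np

def random_intercept_varcomp(y_by_subject):
    """Method-of-moments estimates of the between-subject (random intercept)
    and within-subject variances from balanced repeated measures, via the
    one-way ANOVA decomposition."""
    Y = np.asarray(y_by_subject, float)       # shape (subjects, timepoints)
    n, k = Y.shape
    subj_means = Y.mean(axis=1)
    # within-subject mean square: estimates the residual variance
    msw = ((Y - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    # between-subject mean square: E[MSB] = sigma_e^2 + k * sigma_b^2
    msb = k * ((subj_means - Y.mean()) ** 2).sum() / (n - 1)
    var_within = msw
    var_between = max((msb - msw) / k, 0.0)   # truncate at zero
    return var_between, var_within
```

The ratio var_between / (var_between + var_within) is the intraclass correlation, which quantifies how strongly repeated measurements within a subject cluster together.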
Simulation section
Our simulations are reported in two parts: one that refers to the baseline performance of the markers and one that refers to the longitudinal analysis and their time-dependent behavior. More specifically, for the first part we assess the expected accuracy of a combination of three biomarkers under several scenarios when these are combined using the 5-Step algorithm previously described. For the second part we explore the power and size of tests for the significance of the areas between the dose-specific trajectories.
Combinations of markers at baseline
For the simulation study, we consider combining three biomarkers into a single pseudoscore to achieve higher discriminatory ability than any individual biomarker that is collected in the study. The biomarker scores were simulated from normal, lognormal, and gamma distributions in order to demonstrate the performance of Yan’s method17 in a wide variety of scenarios. The AUC is a measure of the performance of a biomarker; a properly ordered biomarker will have an AUC ranging from 0.5 to 1. For the simulation study we considered individual biomarkers with AUCs of 0.7, which is indicative of a moderately useful biomarker. The sample size of the study is 200, but dropout is expected to be as high as 25%, and therefore sample sizes of 200 and 150 were used for this simulation study. The number of participants who will achieve a positive outcome is unknown, and therefore we use allocation proportions of 0.1, 0.2, 0.3, 0.4, and 0.5 to cover a reasonable set of scenarios for the proportion of participants who achieve a positive outcome. Additionally, the biomarkers being explored in this study are expected to be correlated, and thus, we consider markers with correlations of 0.3, 0.5, and 0.7, to cover low, medium, and high correlations for the markers to be combined. The combined score for each of the scenarios was evaluated using both the kernel-based and the empirical estimate of the AUC on an independent testing dataset, where the sample size for both groups was 10,000. Using an independent testing dataset of a high sample size allows us to gauge how the combined score would perform in the population, rather than just in the sample at hand. It is important to evaluate the results on independent data in order to avoid the overfitting that is seen when only looking at training data.
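The normal scenarios can be generated as follows: under unit variances in both groups, a marker attains $AUC=\Phi(\delta/\sqrt{2})$ for a between-group mean shift $\delta$, so $\delta=\sqrt{2}\,\Phi^{-1}(0.7)$ yields individual AUCs of 0.7, and equicorrelation is induced via a Cholesky factor. A sketch of this data-generating step (parameter and function names are ours):

```python
import numpy as np
from statistics import NormalDist

def simulate_markers(n1, n0, auc=0.7, rho=0.3, p=3, seed=0):
    """Generate p equicorrelated standard-normal markers where each marker
    individually has the target AUC: mean shift delta = sqrt(2)*Phi^{-1}(AUC)
    between the unfavorable (n1) and favorable (n0) groups."""
    rng = np.random.default_rng(seed)
    delta = np.sqrt(2.0) * NormalDist().inv_cdf(auc)
    cov = np.full((p, p), rho) + (1.0 - rho) * np.eye(p)  # equicorrelation
    L = np.linalg.cholesky(cov)
    X1 = rng.standard_normal((n1, p)) @ L.T + delta       # unfavorable group
    X0 = rng.standard_normal((n0, p)) @ L.T               # favorable group
    return X1, X0
```

The same shift-and-correlate construction can be adapted to the lognormal and gamma scenarios by transforming or re-parameterizing the margins while matching the target individual AUC.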
When the biomarkers are simulated from normal distributions and the sample size is 150, the estimated AUCs for the combined score range from 0.6964 to 0.7539. Results for the normal scenarios are displayed in Figures 2 and 3 and Table 1 of the Web Appendix. The method performs poorly when the allocation of subjects to each group is highly unbalanced, and it performs better when the allocation of subjects is balanced. When the correlation is equal to 0.7, the combination struggles to achieve an AUC that is higher than 0.7. When only 10% of the subjects achieve a positive outcome, the combined score had an estimated AUC of 0.6964, which is below the AUC of the individual markers. At best, when the correlation is 0.7, the combined score achieved an estimated AUC of 0.7111, which is 0.0111 higher than the individual biomarkers. This demonstrates minimal improvement over the individual scores. When the correlation is equal to 0.3, the combined score achieves an estimated AUC between 0.7181 and 0.7539. This shows that when the biomarkers have low correlation, it is possible to achieve better performance than the individual scores. When the sample size is 200, we see improved performance compared to when the sample size is 150. In all instances, the estimated AUC of the combined score is above 0.7. When the correlation is equal to 0.7, the combined score achieved an estimated AUC between 0.7019 and 0.7138. Again, this demonstrates that combinations of biomarkers with high correlations struggle to provide additional information compared to the individual markers. When the correlation is equal to 0.3, the combined score achieved an estimated AUC between 0.7492 and 0.7580. This corresponds to an AUC that is between 7.0% and 8.3% higher than the individual markers, depending on the proportion of subjects who achieved a positive outcome.
Figure 2. The kernel-based estimates of the AUCs of the combined scores when the data are generated from normal distributions. The individual scores each have an AUC of 0.7.
Figure 3. The empirical-based estimates of the AUCs of the combined scores when the data are generated from normal distributions. The individual scores each have an AUC of 0.7.
When the biomarkers are simulated from lognormal distributions and the sample size is 150, the estimated AUCs are between 0.6809 and 0.7308 (results for the lognormal distributions are presented in Table 1 of the Web Appendix). Again, we see that as the correlation between the biomarkers increases, the AUC of the combined marker is smaller. When the correlation is 0.7, the combination failed to outperform the individual biomarkers, with the highest AUC being 0.6988. When the correlation is 0.3, the estimated AUCs range from 0.7171 to 0.7308. This again shows that when the sample size is 150, the benefit from combining biomarkers is limited. When the sample size is 200, the estimated AUCs are between 0.6853 and 0.7333. This shows only minor improvement compared to when the sample size is 150.
When the biomarkers are simulated from gamma distributions and the sample size is 150, the estimated AUCs are between 0.6922 and 0.7445 (results for the gamma distributions are presented in Tables 2 and 3 of the Web Appendix). When the correlation was equal to 0.7, the combined score provided an AUC of 0.7062 when the sample sizes were not equal. This is not a substantial improvement over the individual scores. When the correlation was equal to 0.3, the pseudoscore achieved an AUC that was between 4.9% and 6.4% higher than the individual markers. When the sample size is 200, the estimated AUCs are between 0.6959 and 0.7465. This demonstrates only minor improvement over when the sample size was 150. We see that when the correlation is 0.7 and the percent of participants with a positive outcome is 10%, the estimated AUC is 0.6959, which is lower than the individual marker AUCs of 0.7. When the correlation is equal to 0.3, and the percent of participants with a positive outcome is 50%, the estimated AUC is 0.7465. This corresponds to an AUC that is 6.6% higher than the individual markers.
In summary, the performance of the pseudoscore is highly dependent on the correlation of the biomarkers, as well as the proportion of participants who achieved positive outcomes. When the scores are generated from normal distributions, the sample size also played a large role in the performance of the pseudoscore. For the scenarios generated from lognormal or gamma distributions, the sample size appeared to play less of a role in the performance of the combined score, although the scenarios with sample sizes of 200 provided higher AUCs than the corresponding scenarios where the sample size was 150. Lower correlations and a more even sample size for positive/negative outcomes led to higher estimated AUCs. When the correlation was equal to 0.7, the pseudoscore struggled to outperform the individual markers in all scenarios, was frequently below 0.7, and was as low as 0.6853.
Time trajectory of the biomarkers through GEE to compare two dose levels
Reduced model
To evaluate this method for the time-dependent trajectory of a biomarker between two dose levels, we considered a model for the first four timepoints that are equally spaced. For illustration purposes we focus on the unfavorable outcome group and consider a model to detect dose effects (analogous to focusing only on the setting presented in the top-right panel of Figure 1). For the sake of simplicity we consider only two doses, with various sample sizes, to explore the power for detecting a dose effect on the trajectory.
We evaluated
We ran 1000 Monte Carlo simulations to estimate the variance, bias, coverage probability, confidence interval width, and power. For each iteration, we simulated values of the response variable y at each timepoint for the specified number of subjects in each dose group. Values were generated using prespecified values of the model coefficients.
Using the estimated values of
Table. Monte Carlo estimates, variance, MSE, power, coverage probability, and width of the 95% confidence interval for the estimated model coefficients.
Table. Power results for a simulation regarding the bootstrap-based inference around the area between the two dose trajectories.
The power for
Full model
We also considered the full model (18) in evaluating the time trajectory of the biomarkers among those with a favorable outcome and those with an unfavorable outcome between two dose levels. We considered the outcome at two dose levels with various sample sizes, to explore the power for detecting the relevant interaction effects.
We ran 1000 Monte Carlo simulations to estimate the variance, bias, coverage probability, confidence interval width, and power of
Table. Monte Carlo estimates, variance, MSE, power, size, coverage probability, and width of the 95% confidence interval for the estimated model coefficients.
Table. Estimated power and variance for a simulation regarding the bootstrap-based inference around the area between the trajectories.
Discussion
Our motivating clinical trial and data set focuses on patients with severe TBI. A question that needs to be investigated is whether total brain tissue hypoperfusion exposure is associated with biomarkers of astrocytic (GFAP) and axonal (UCH-L1, total Tau and NfL) injury. The behavior of these biomarkers needs to be explored so that investigators can use them for prognostic and predictive purposes. The HOBIT trial will also allow us to explore and compare different doses of HBOT for TBI. Assessing the involved comparisons and biomarker performance involves a variety of different statistical techniques and methods that operate under different assumptions, which we discuss in this paper.
Biomarker studies that refer to prognostic and predictive biomarkers are often described in the literature. Even though statistical methodologies are available for specific settings that include prognostic and predictive biomarkers, a comprehensive framework of biomarker data analysis for such studies is not readily available. In this paper we discuss the integration of techniques such as ROC analysis, logistic regression, Cox modeling, combinations of biomarkers, and time-dependent modeling, with the purpose of providing a framework of data analysis for biomarker studies that involve predictive and prognostic biomarkers. We distinguish the analysis that can be performed at baseline and present alternative approaches for accommodating covariates. We discuss the relevant ROC analysis and the underlying logistic regression model and present the link between them. We further illustrate the framework for exploring the time-dependent trajectories of the markers through the use of GEE and mixed modeling and discuss the advantages and disadvantages of each approach. We illustrate the performance of the discussed approaches through extensive simulations and refer to the underlying assumptions of each. We also comment on available software/routines for applying the presented methods that can help practitioners apply them to their data.
We highlight that the notions of prognosis and prediction are not mutually exclusive. A baseline marker could be prognostic at high and low levels, which could reliably predict bad and good outcomes in a manner that is not influenced at either end by exposure to the treatment. If intermediate levels were not influenced by treatment, the marker would be only prognostic. If intermediate levels led to good outcomes with treatment, but poor outcomes without treatment, the marker would also be predictive. This implies that baseline values should be explored for both prognostic and predictive uses. Similarly, repeated measures may be prognostic, predictive, both or neither, as can be seen and described in the caption of Figure 1.
An issue that is not covered in this paper refers to censored biomarker values due to limits of detection (LOD). Kernel-based imputation approaches for the construction of the underlying ROC over the whole spectrum of FPRs (even the region affected by the LOD) are given in Bantis et al.26 (the relevant routines are available at https://www.leobantis.net). We also note that the works of Cai27 and Heagerty and Pepe,28 which involve settings where interest lies in the time-dependent sensitivity and specificity of biomarkers, describe a more general framework.
Supplemental Material
Supplemental Material - Statistical assessment of the prognostic and the predictive value of biomarkers- A biomarker assessment framework with applications to traumatic brain injury biomarker studies
Supplemental Material for Statistical assessment of the prognostic and the predictive value of biomarkers: A biomarker assessment framework with applications to traumatic brain injury biomarker studies by Leonidas E Bantis, Kate J Young, John V Tsimikas, Brian R Mosier, Byron Gajewski, Sharon Yeatts, Renee L Martin, William Barsan, Robert Silbergleit, Gaylan Rockswold and Frederick K Korley in Research Methods in Medicine & Health Sciences
Footnotes
Declaration of conflicting interests
Funding
Supplemental Material
References
