Abstract
Background
Microarray gene expression analysis has in several studies been applied to elucidate the relation between clinical outcome and gene expression patterns in breast cancer and has demonstrated improvement of recurrence prediction.1–14 In some studies, genes in such profiles might be fully or partially missing in the test data used for validation due to the choice of microarray platform or the presence of missing values associated with a given probe. Furthermore, an obtained gene list can have few or no genes in common with other gene lists addressing the same clinical outcome,15,16 due to usage of different microarray platforms, different methods for measuring mRNA expression levels, variation in patient sampling,15 lab variation/measurement noise,17 and differences in data processing such as different normalization methods.18 Furthermore, a wide array of feature selection methods is available for gene selection, which also affects the constitution of such final gene lists.15 These feature selection methods encompass filter approaches, ie, selection of top features from a ranked list of genes; wrapper methods, where model selection algorithms are wrapped in the search process of feature subsets (eg, the Gini index in random forest);19 and embedded methods, where the feature selection is an integrated part of the classification method, such as iteratively eliminating redundant features with minimal information regarding classification performance.20 More recent approaches to gene selection include recursive feature elimination based on support vector machines (SVM-RFE). This approach uses the coefficients of the weight vector to compute a feature ranking score, and features with the smallest ranking scores are iteratively eliminated from the model, for example, N genes at a time.21 An advanced version combines SVM-RFE with a minimum-redundancy maximum-relevancy filter, where the relevance of each feature is determined by the mutual information between genes and class labels, and the redundancy is given by the mutual information among the genes.22
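The elimination loop at the core of such recursive feature elimination can be sketched briefly. The following is an illustrative Python sketch (the function names are ours); a plain least-squares fit stands in for the linear SVM whose weight vector SVM-RFE actually uses, so this shows only the recursive scheme, not the original method:

```python
import numpy as np

def rfe_ranking(X, y, n_keep=2):
    """Recursive feature elimination: repeatedly fit a linear model and
    drop the feature with the smallest absolute weight.  In SVM-RFE the
    weights come from a linear SVM; here a least-squares fit stands in."""
    remaining = list(range(X.shape[1]))
    eliminated = []                              # features in order of removal
    while len(remaining) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        worst = int(np.argmin(np.abs(w)))        # smallest ranking score
        eliminated.append(remaining.pop(worst))  # remove it and refit
    return remaining, eliminated                 # survivors rank highest
```

Because the model is refit on the surviving features at every pass, the ranking scores are recomputed after each elimination, which is what distinguishes recursive elimination from a one-shot ranking.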
In addition to the above-mentioned factors, the choice of classification method also impacts the final model and gene lists. Furthermore, the validations of such gene lists in independent data are very heterogeneous: the majority test for significant differences in survival, which only partly reflects the actual classification accuracy, while few studies conduct validations in terms of classification accuracy.
To overcome the above obstacles, individual genes could be considered part of a larger network, that is, their expression being controlled by the expression level of other genes, or a group of genes could belong to a specific pathway performing a well-defined task. These genes may be controlled by the same transcription factor or located in the same chromosomal region. Such groupings have been collected in public databases such as the Kyoto Encyclopedia of Genes and Genomes (KEGG),23 the Molecular Signature Database (MSigDB),24 and the Gene Ontology database (GO).25 In relation to breast cancer, for example, cell cycle upregulation or deregulation of other pathways is associated with metastasis.2,3,26,27 Furthermore, it has been shown that metastasis progression28 and tumor grading29 in breast cancer are associated with accumulated mutations in several genes, leading to amplification or inactivation of genes, and even large genomic losses or gains in specific chromosomal regions affecting gene expression levels.
Our previous studies showed that the expression levels of such specific entities, called metagenes (MGs), are significantly associated with metastatic outcome in breast cancer across eight different datasets.30,31 Several studies have defined metagenes/gene modules derived from microarray data using various methods, such as penalized matrix decomposition, which clusters related genes even without similar expression profiles,32 hierarchical clustering,33 correlation,34 or combining a priori protein-protein interactions with microarray gene expression data to define interaction networks as features.35,36 Few studies have attempted to use such predefined gene sets for prediction models. One such study used a compendium of microarray cancer genes for defining metagenes/gene modules by performing hierarchical clustering of these genes' expressions and seeding genes within the clusters into gene sets annotated in the public databases.33 A second study defined metagenes/gene modules as sets of significantly correlated or anticorrelated genes combined with prior information about the genes.34 One of the strengths of using gene sets as features is that this circumvents the necessity of sharing all genes between studies. Furthermore, grouping the genes together also reduces the dimensionality of the datasets and thus functions as feature reduction. Therefore, profiles consisting of MGs might be used for developing predictive classifiers that can be validated in independent data.
This study systematically assesses and compares the performance of MG- and SG- (single gene) feature sets, and of the MG- and SG-classifiers extracted from the same samples, in predicting metastasis outcome among lymph node–negative breast cancer patients who have not been treated with adjuvant therapy.
These comparisons were first made by model building and classification within the same dataset using 10-fold cross validation. Furthermore, the comparisons were also done across datasets in two ways: (1) application of the entire classifier to the test data and (2) transfer of only the classifier's features to the test data for model building and testing, evaluated by leave-one-out cross validation (LOOCV). In each case, we also examined two possible scenarios. In the first, the validations were conducted between studies using the same microarray platform (Affymetrix classifier/feature set validated on an independent Affymetrix dataset), while the second encompassed validations across studies with different platforms (Agilent-developed classifier/feature set validated on an independent Affymetrix dataset).
Results
Features and models
The 71 metagenes used in this study were determined as gene sets covering similar biological pathways, having common transcription factor binding sites, or consisting of genes located in the same chromosomal region (Supplementary Table 1), which in previous studies have proven to be associated with breast cancer metastasis across eight different datasets using a rank-based method (Fig. 1).30,31 An overview of the eight datasets is shown in Table 1. The final MGs consist, on average, of 21 genes, with the smallest MGs consisting of only 2 genes and the largest of 65 genes. This rank-based method was also applied to each gene across the same eight datasets, which led to identification of 283 rank-significant SGs (Supplementary Table 2) shared by the four datasets to be used later. Amongst the 283 SGs, 119 genes were also present on the gene lists underlying the MGs (data not shown). These 71 MGs and 283 SGs were used for selecting the optimal MG- and SG-feature sets and for development of their corresponding MG- and SG-classifiers.
Overview of datasets.
Designates datasets used for training only when validating AF3;
designates datasets used for training only when validating AF2.
The number of metagene and single gene features in the 24 models.

Metagene and single gene selection procedure.
Two (AG1 and AF1) of the eight datasets used to define the features were, therefore, solely used for training purposes (Table 1). AF1 corresponds to the Affymetrix-based Rotterdam dataset, and AG1 is a subset of the Agilent-based Amsterdam dataset. Both datasets contain only node-negative samples. The AF2 and AF3 datasets are both based on the Affymetrix platform and function as test sets for classifiers and feature sets developed and selected on the AG1 and AF1 datasets (Table 1). Furthermore, AF2 was used for training to validate on AF3 and vice versa (Table 1). The following three classification methods were used for model building: random forest (RF) and support vector machines with a radial-based kernel (R-SVM) or a sigmoid-based kernel (S-SVM). These were optimized to achieve the best mean of sensitivity and specificity, referred to as balanced accuracy (bAcc).
In order to build the models, the SGs or MGs within each training dataset were ranked according to their random forest importance values. For a given feature, this value reports the standardized drop in prediction accuracy when the class labels are permuted.37 This ranking was then used for model building by subsequently adding one feature at a time in a top-down forward wrapper approach, starting with the top two features. To avoid introducing bias during gene selection and training of the final classifier, and to obtain a reliable estimate of classification performance, 10 times repeated 10-fold cross validation bAcc was used as the optimization measure, as this optimization metric has previously been shown to give an excellent bias-variance balance.38 The above described procedure led to a total of 24 models, 12 composed of MG features and 12 of SG features (Table 2). In the metagene models, each metagene contributes one input value, calculated as the median expression level of the genes underlying that particular metagene. We, therefore, considered a metagene model composed of four metagenes to have four inputs or four features. These MG- and SG-models varied in complexity, consisting of 4 to 122 features (Table 2). Comparison of the number of features in each model shows a slightly higher complexity of SG-models: MG-models ranged from 4 to 67 features, with an average of 26 features per model (Table 2), while SG-models varied from 14 to 122 features, with an average of 34 features per model (Table 2). This suggests that each SG-feature is less informative than each MG-feature and that a larger number of SG-features is thus required to reach optimal performance.
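The ranking-plus-forward-wrapper procedure can be condensed into a short sketch. This is an illustrative Python version (the study itself used R with the randomForest package); a nearest-centroid rule stands in for the actual RF/SVM classifiers, and the importance ranking is assumed to be given:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    # mean of per-class recall, ie, (sensitivity + specificity) / 2
    classes = np.unique(y_true)
    return float(np.mean([np.mean(y_pred[y_true == c] == c) for c in classes]))

def centroid_predict(X_tr, y_tr, X_te):
    # nearest-centroid stand-in for the RF / SVM classifiers in the study
    c0 = X_tr[y_tr == 0].mean(axis=0)
    c1 = X_tr[y_tr == 1].mean(axis=0)
    return (((X_te - c1) ** 2).sum(1) < ((X_te - c0) ** 2).sum(1)).astype(int)

def repeated_cv_bacc(X, y, k=10, repeats=10, seed=0):
    # 10 times repeated 10-fold cross-validated balanced accuracy
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(repeats):
        for fold in np.array_split(rng.permutation(len(y)), k):
            mask = np.ones(len(y), dtype=bool)
            mask[fold] = False
            pred = centroid_predict(X[mask], y[mask], X[fold])
            accs.append(balanced_accuracy(y[fold], pred))
    return float(np.mean(accs))

def forward_wrapper(X, y, ranking):
    # add features one at a time in importance-ranked order, starting from
    # the top two, and keep the prefix maximizing repeated-CV bAcc
    best_k, best_acc = 2, -1.0
    for k in range(2, len(ranking) + 1):
        acc = repeated_cv_bacc(X[:, ranking[:k]], y)
        if acc > best_acc:
            best_k, best_acc = k, acc
    return list(ranking[:best_k]), best_acc
```

The wrapper always evaluates prefixes of the fixed importance ranking, so it explores at most n feature subsets rather than 2^n, which is what makes the procedure tractable.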
Internal performance of MG- and SG-models
To reduce variability and complexity and to keep validation parameters as constant as possible, the performance of the MG- and SG-models was evaluated within the same dataset from which they were initially developed. This internal model performance was evaluated using 10 times repeated 10-fold cross validation (Fig. 2). This validation scheme partitions the training data into 10 nearly equally sized folds. Subsequently, 10 iterations of training and validation are performed. During each of these iterations, a different fold of the training data is left out for validation, and the remaining folds are used for learning. The mean accuracy over all 10 validated folds is thus the 10-fold cross validated accuracy of the model. By repeating this process 10 times, a more robust and unbiased estimation of the generalization performance is obtained.39 It should, therefore, also be noted that the individual classification performances are artificially elevated due to information leakage caused by using the entire dataset for ranking. However, as metagenes are a simple linear combination of single genes, we assume that the comparisons between the MG- and SG-model performances are similarly affected by this leakage.

The internal dataset validation procedure.
As indicated by the internal classification performances, the models predicted outcome with high accuracy (Fig. 3). Comparison across the three classification methods showed that, in the majority of the training datasets, MG-models perform slightly better than SG-models (Fig. 3). However, these differences were only minor and nonsignificant, suggesting that the results tend to converge to a common optimization level independent of feature type.

Internal classification performance.
Feature set transferability
The feature sets within the MG- and SG-models were then transferred to independent datasets to examine if these features could be used to build a model for predicting outcome within the test data as illustrated in Figure 4A and B. The transferability was assessed by leave-one-out cross validation (LOOCV). Briefly, in LOOCV, one sample is left out at a time and used for testing, while the remaining samples are used for training a classifier, which is used to classify the left out sample. This process is repeated until all samples have been left out for testing.
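In a minimal Python sketch (the names are ours, and a nearest-centroid rule again stands in for the classifiers actually trained), the LOOCV loop looks like this:

```python
import numpy as np

def centroid_fit_predict(X_tr, y_tr, X_te):
    # toy stand-in classifier: assign the class with the nearest centroid
    c0 = X_tr[y_tr == 0].mean(axis=0)
    c1 = X_tr[y_tr == 1].mean(axis=0)
    return (((X_te - c1) ** 2).sum(1) < ((X_te - c0) ** 2).sum(1)).astype(int)

def loocv_predictions(X, y, fit_predict):
    """Leave-one-out CV: each sample is held out once and classified by a
    model trained on all remaining samples."""
    preds = np.empty(len(y), dtype=int)
    for i in range(len(y)):
        mask = np.ones(len(y), dtype=bool)
        mask[i] = False                                  # leave sample i out
        preds[i] = fit_predict(X[mask], y[mask], X[i:i + 1])[0]
    return preds
```

The LOOCV balanced accuracy is then computed from `preds` against the true labels; since every sample is tested exactly once, the estimate uses all samples, at the cost of one model fit per sample.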

Between-study classifier or feature set validation.
The results show that the transferred feature sets are able to classify samples with a mean LOOCV bAcc ranging from 54.0% to 72.0%, with the MG- and SG-gene lists having overall mean LOOCV bAccs of 65.7% and 63.4%, respectively (Fig. 5). This suggests that the majority of these gene lists perform significantly better than random and that the selected MG- and SG-feature sets, being optimal in one dataset, display transferability across studies and can be used to train and build a predictive model in independent samples.

Exported feature set classification performance.
Comparison of classifier performance based on MG- and SG-feature sets in independent datasets
Comparison of the classification performance between the transferred MG- and SG-feature sets across different platforms (AG1-feature sets validated in AF2 or AF3) (Fig. 4A) showed that MG-features significantly outperformed SG-features using R-SVM.
Classifier transferability and performance
We next used the entire classifier developed in the training sets, that is, the features together with the rules associated with the classifier, to classify the independent samples in the entire test data and, therefore, to determine if the classifier can be exported and used in the test data (Fig. 4A and B). These results showed that the classifiers are less transferable than the feature sets, which is reflected by the weak mean bAccs ranging from 52.5% to 61.5%, with MG- and SG-classifiers having overall mean bAccs of 56.0% and 58.2%, respectively (Fig. 6).

Exported classifier classification performance.
Comparison of classifier performance
Comparison of the external validation performance of MG- and SG-classifiers, when the classifier was developed on the Agilent dataset (AG1) and validated on the Affymetrix datasets (AF2 and AF3), showed that MG- and SG-classifiers performed equally well, with the exception of S-SVM, which favored the MG-classifiers (Fig. 6).
Comparison of MG- and SG-classifiers developed from an Affymetrix dataset and validated on another independent Affymetrix dataset showed that SG-classifiers performed significantly better than MG-classifiers for each of the three classification methods (RF, R-SVM, and S-SVM).
Discussion
A strong prediction model should be robust, reproducible, and ideally exportable to allow validation in independent datasets. In this study, predictive models based on metagenes or single genes being significantly associated with breast cancer metastasis outcome were developed. The metagene features are composed of gene lists sharing a biological consensus, while the single genes are features representing the expression of one gene. The study examined the transferability of MG- and SG-feature sets and classifiers and compared their classification performance on independent datasets. The genes present in the MG-feature sets were always present in the validation, as we had assured that the single genes and metagenes were shared by all four datasets. However, such sharing of gene lists between classifiers derived from different studies is rarely the case. The reason that different studies derive gene expression based classifiers having few or no genes in common is probably due to the selection bias caused by the microarray platform used for measuring gene expression, patient sampling, and the way the data were analyzed. In this respect, a study by Ein-Dor and coworkers showed that the same dataset could derive classifiers having comparable performances but differing substantially with respect to which genes are contained in these classifiers, thus also leading to a selection bias. 15 In our study, the problem with missing common genes was circumvented by using features shared by all four datasets used in the study.
We found that both MG- and SG-feature sets could be transferred to independent datasets and were able to build a predictive model achieving good results for classifying node-negative breast cancer metastasis outcome.
The finding that the SG-feature sets are exportable agrees well with the few predictive gene lists that have been exported to independent data. One such gene list is the 70-gene Mammaprint prognosis gene list embedded in the 70-gene correlation classifier originally developed to predict metastasis within five years (defined as poor prognosis) or no metastasis within five years (good prognosis).2 Transfer of this 70-gene Mammaprint list for model training and testing in independent datasets has proved successful using the original correlation classification method7,40 or SVM,7 while another study showed only random performance when validating this gene list in independent data.35 Due to missing genes, other studies have successfully validated only a subset of the 70 genes, by correlation,8 VFI-classification,8 and centroid classification.41 A second frequently validated gene list is the 76-gene list underlying the 76-gene classifier based on a regression-determined risk score trained to predict distant metastasis within five years among lymph node–negative breast cancer patients.3 Transfer of the 76-gene list for model building and testing in fully independent datasets has been proven with limited success in one study using SVM, achieving balanced accuracies of 37% to 64%,35 while a subset of 46 of the 76 genes performed well in predicting metastasis from low-malignant breast cancer, achieving a balanced accuracy of 75%.27 A third list is a "wound signature" containing 512 genes, which is able to predict increased risk of metastasis in three types of cancers, including breast cancer.42 The entire wound signature gene list has been transferred to independent data for model building and testing, obtaining results in the range of our transferred SG-gene lists using decision trees,43 and a subset of 252 genes has provided similar performance when predicting metastasis outcome among low-malignant breast cancer patients using SVM.27 A fourth study derived a 70-gene Cox-ranked gene list, which was developed to achieve optimal significant hazard ratios with respect to survival analysis in a cohort of patients with early breast cancer.5 Subsets of this list were transferred to two independent datasets for validation by model training and testing using nearest centroid classification, obtaining balanced accuracies of 72% and 59%.5
Validation of the classifiers with some of the gene lists mentioned above, using the original classification method and cutoffs for class/outcome prediction, has been conducted for the 70-gene profile, achieving a balanced accuracy of 63%,44 while validation of the 76-gene profile has achieved balanced accuracies of 53% to 65%,45 59%,44 and 62%.46 These reported balanced classification accuracies are similar to, or somewhat lower than, those obtained by the MG- and SG-classifiers in our study. Although these MG- and SG-classifier performances are far from optimal for clinical use, they do perform better for long-term outcome prediction than clinicopathological risk criteria, as illustrated by the findings from three studies. The study by Daemen and coworkers showed that the St. Gallen, NIH, and Nottingham prognostic index assessors predicted metastasis outcome in a group of 147 breast cancer patients with balanced accuracies of 51%, 52%, and 57%.47 Another study by Sun and coworkers demonstrated that the St. Gallen assessor predicted 5-year relapse-free survival in 97 node-negative breast cancer patients with only 51% balanced accuracy.48 Furthermore, the study by Schmidt and coworkers showed that the St. Gallen assessor predicted 5-year and 10-year distant metastasis-free survival with balanced accuracies of 57% and 54%, respectively, and that the Adjuvant! Online algorithm predicted the same endpoints with 56% bAcc in 410 node-negative patients.49
Interestingly, although the 76-gene profile was developed to predict outcome among lymph node–negative patients, as in our study, the majority of MG- and SG-classifiers obtained in our study actually performed slightly better than those reported by the above studies. However, the reported low performance is likely caused by the classifiers being specific to the dataset from which they were developed, and the results, therefore, cannot be generalized. Despite the fact that the primary tumors from both the training and test sets all are lymph node–negative, they might still not be very representative of each other, due to, for example, biological variation, follow-up time of the studies (affecting outcome-coding), and cross-study/lab variation, thus impairing classifier external classification performance. The lack of classifier transferability has been shown in other studies, for example, in the classification of normal versus tumor tissue 50 and in the prediction of pathologically complete response to breast cancer chemotherapy treatment. 51 This stresses the importance of homogeneity between the samples used for building classifiers and those used for validation.
We evaluated and compared the classification performance between MG- and SG-models within the same datasets. We found that MG- and SG-models had equal performance during internal model building and performance testing, with a trend of MG-models performing slightly better than SG-models. However, the model sizes could imply that SG- models need to contain more features than the MG-models to obtain a similar performance, suggesting that the individual MG-features are more informative than each SG-feature.
The study found no significant difference in performance between exported MG- and SG-feature sets when these were used for training and testing on independent datasets. In this setup, the models based on transferred features are trained and tested in the same independent dataset, rendering the measurements underlying the intra-training and testing iterations comparable. One explanation for this similar performance could be the considerable overlap between the genes constituting the metagenes and those underlying the list of 283 single genes, as 119 genes are shared. A second explanation could concern the way the metagenes were defined and scored. In our study, the MGs cover lists of genes having a biological consensus. The first advantage of being defined as such is that the metagenes are conserved and robust across microarray platforms. A second advantage is that the metagenes are narrowed down to the gene set enrichment analysis (GSEA) leading edge genes, thereby picking the best genes within the predefined lists. However, a limitation of the metagenes is that they only consider interactions among members within the defined metagene and do not take into account interactions with genes beyond the members of the defined pathways and gene sets. Furthermore, although the human genome has been sequenced, there are still many genes with unknown functions, and, thus, these have still not been annotated to a specified functional gene set. Such undiscovered networks might be captured by single gene classifiers composed of mixed biological predictors, and might be more easily discovered when using the same microarray technology, reducing the variation caused by the shift in microarray platforms.
In this study, the scoring of the metagenes was based on the median expression values among the GSEA leading edge within each predefined metagene. The fact that only a weak difference in performance between MG- and SG-models was detected suggests that the definition of metagenes and/or their expression calculation might not be able to detect such a significant difference in performance, at least not for metastasis outcome among lymph node–negative breast cancer patients. Other studies have scored gene module/metagene activity in a different way compared with our median expression score by either using the arithmetic mean, 52 the sum of discrete values within each module/set, 33 the expression values of a median gene in a module/gene set (defined as the gene within the gene set having the smallest sum of distances to other genes within the given gene set/module), 53 average rank score (calculated as the average rank of the relative expression levels in a pathway normalized by the total number of genes), 54 or by probabilistic inference using log-likelihood ratios. 55
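For illustration, three of the scoring variants mentioned above can be written side by side. This is an illustrative numpy sketch (the function name is ours), and the average-rank variant is only one plausible reading of the cited description:

```python
import numpy as np

def metagene_scores(expr):
    """expr: genes x samples expression matrix for one metagene's genes.
    Returns three alternative per-sample activity scores."""
    median_score = np.median(expr, axis=0)   # the score used in this study
    mean_score = expr.mean(axis=0)           # arithmetic-mean variant
    # rank each gene's expression across samples, then average the ranks
    # per sample and normalize (one possible average-rank formulation)
    ranks = expr.argsort(axis=1).argsort(axis=1) + 1
    avg_rank_score = ranks.mean(axis=0) / expr.shape[1]
    return median_score, mean_score, avg_rank_score
```

For small, symmetric gene sets the median and mean scores coincide, but the median is more robust to a single outlying gene, which is one motivation for the choice made in this study.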
Another reason could be that the exported MG- and SG-feature sets were used for both training and testing, and that we only used random forest and support vector machines as classification methods. This confinement was motivated by some of our unpublished results suggesting that classifiers based on random forest and support vector machines have better classification performance when validated in completely independent datasets than other classification methods such as logistic regression or neural networks. In this respect, an option is to examine whether the MG- and SG-feature set performances are also similar when using other classification methods.
Interestingly, other studies have found a similarly slight performance difference between models composed of gene modules or individual genes. These studies have also addressed breast cancer metastasis outcome, but using a different module definition and scoring. The gene modules in one study were defined as subsets of genes from gene compendia with correlated expression across arrays, and the study showed that classifiers consisting of gene set modules had a slightly better classification performance than classifiers of individual genes.33 A second, similar study, comparing the performance of classifiers consisting of gene sets defined by MSigDB with classifiers consisting of individual genes, found that the two groups of classifiers had similar performance, but that gene set classifiers were more stable.56 A third study by Blazadonakis and coworkers57 used the 70-gene Mammaprint gene list2 and a previously determined 59-gene list58 to extract the significant gene ontology biological processes (GOBPs) present. The genes underlying each of these GOBPs, beyond those present in the original gene lists, were used to construct a pool of genes for building new gene lists. Interestingly, comparison of classification performance between the original gene lists and the GOBP-derived gene lists in model training and testing within the validation datasets showed that the GOBP-derived gene lists performed slightly better than the gene lists from which they were derived.57 However, compared with our study, the validations in these three studies were confined to a single classification method, either a Bayes classifier33 or centroid classification.56,57 Also, the predicted outcome differed from that in our study, being defined either as a "good" or "bad" outcome relative to 5-year time to metastasis33,56 or 5-year breast cancer survival.57
The results from applying the classifiers developed in the training sets directly to the test sets revealed that SG-classifiers trained on an Affymetrix dataset and validated on an independent Affymetrix dataset performed significantly better than MG-classifiers. This suggests that single gene expression values are better for defining classification functions to classify independent data. Therefore, although the genes underlying each MG have been limited to the leading edge, signals from highly predictive genes might be diluted within the metagenes by the median expression scoring across the metagenes, and thus lose predictive power compared with sets of single genes, which have been picked by feature selection, making each of them highly predictive. This performance difference could also be due to classifier functions/rules based on single gene expression measurements being more transferable than those based on leading edge median expression measurements.
In contrast, no significant difference in performance was found between the MG- and SG-classifiers when the classifier was trained on an Agilent dataset and tested on an Affymetrix dataset. This suggests that when switching platforms, measurements underlying the classifier functions and rules differ significantly between the training and test set, that is, the Agilent and Affymetrix measurements being defined as log ratios and log intensities, respectively. The finding that the MG- and SG-classifiers are equally impaired suggests that Agilent-defined rules are not applicable to an Affymetrix dataset either when using median expression measures or individual gene expression measures, which agrees well with a previous finding showing poor correlations between corresponding measurements conducted on the Agilent and Affymetrix platforms. 59 Interestingly, a previous study has shown that Agilent log ratios show a bigger variability compared with Affymetrix log intensities, even after correcting the Agilent variance using log ratios. 60 The higher Agilent variability suggests that it would be less feasible to conduct Agilent to Agilent validations compared with Affymetrix to Affymetrix validations.
Conclusions
In this study we compared the performance of metagene- or single gene-based feature sets and classifiers. As the function of the genes within breast cancer predictive profiles, rather than the individual genes themselves, is frequently conserved, we expected that gene sets having a biological consensus would both have predictive power and potentially better validation performance than classical single gene lists when validated in new samples. Surprisingly, the metagene- and single gene-based features had equal performance. When comparing classifier performance in independent datasets, we found a significant difference between MG- and SG-classifier performances only when validation was conducted on datasets measured on the same microarray platform as that from which the classifiers were developed. In this situation, SG-classifiers significantly outperformed MG-classifiers.
Methods
Datasets used in this study
This study used a total of ten different datasets. Eight datasets were used for defining the metagene and single gene features. These comprised samples from the studies1,11–14 and samples from the Gene Expression Omnibus (GEO) series GSE2034, 3 GSE4796, 7 and GSE3494. 61 In the further study, GSE2034 (abbreviated AF1) and a subset of 151 node-negative samples from the Amsterdam dataset by van de Vijver 14 (abbreviated AG1) were used for internal validation within the same dataset by 10 times repeated 10-fold cross validation. Furthermore, these two datasets were also used for defining the gene sets used to build and train a classifier within the independent test datasets, and for building classifiers to be validated in the independent datasets. The following samples from two datasets were used as independent test datasets: 147 samples from GSE7390 40 (abbreviated AF2) and all samples from GSE11121 62 (abbreviated AF3).
Dataset processing
The normalizations performed in the original studies were retained, because the authors found these methods optimal for the eight datasets and because the initial ranking of metagenes and single genes was performed separately in each dataset. Furthermore, the normalizations of the four datasets AG1, AF1, AF2, and AF3 were retained for the reasons mentioned above. However, all four datasets were standardized to a mean of zero and a standard deviation of one. Calculations and classification were conducted in the R environment. For random forest and support vector machines, we used the randomForest and e1071 packages, respectively.
Single gene and metagene features
To determine which single genes should be used to build single gene-based gene expression profiles, we focussed on the eight publicly available datasets used in our two previous studies.30,31 The determination of single genes was done by applying the microarray meta-analysis described in our previous study 30 to the same individual gene expression values of each individual probe/gene used to derive the metagenes in the eight datasets. This method ranks each individual gene in each dataset according to its signal-to-noise ratio (SNR). The SNR identifies the features that discriminate between two classes by calculating a score that is highest for those features whose expression levels differ most on average between the two groups while favoring those with small deviations within the respective classes. The SNR for a feature g is

SNR(g) = (μ1 − μ2) / (σ1 + σ2)

In this formula, μ1 and μ2 denote the mean expression levels of the feature in class 1 and class 2, and σ1 and σ2 the corresponding standard deviations within each class.
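As a concrete illustration, the SNR score for a single feature can be computed as follows. This is a Python sketch of the Golub-type signal-to-noise ratio described above; the function name and the use of population standard deviations are our assumptions, not the authors' code.

```python
import numpy as np

def snr(expression, labels):
    """Signal-to-noise ratio for one feature: difference of class means
    divided by the sum of within-class standard deviations."""
    x = np.asarray(expression, dtype=float)
    y = np.asarray(labels)
    g1, g2 = x[y == 0], x[y == 1]
    # Large |SNR|: means far apart, small spread within each class.
    return (g1.mean() - g2.mean()) / (g1.std(ddof=0) + g2.std(ddof=0))
```

A feature with well-separated class means and tight within-class spread receives a large absolute score, which is what drives its rank within each dataset.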
Following gene ranking within each dataset, the meta-analysis calculates each gene's mean rank across datasets and determines whether this mean rank is significantly high or low, according to a significance cutoff at FDR ≤ 0.05. Using gene symbols, 283 genes (Supplementary Table 2) were found significant, were shared by the AG1, AF1, AF2, and AF3 datasets, and were used in the further analysis. The MGs are 71 selected gene sets, each covering a specific biological pathway, a chromosomal region, or a set of genes sharing a DNA transcription factor binding motif. In a previous meta-analysis across eight breast cancer microarray datasets using individual gene set enrichment score ranks calculated by GSEA, the 71 metagenes were shown to be associated with breast cancer metastasis.30,31 These are listed in Supplementary Table 1. The gene sets covering biological pathways are defined by KEGG (http://www.genome.ad.jp/KEGG), GenMapp (http://www.genmapp.org), and Biocarta (http://www.biocarta.com), while the transcription factor motifs are collected from TransFac (http://www.gene-regulation.com) and the sets of genes regulated by the same microRNA from miRBase (http://microrna.sanger.ac.uk). The gene sets belonging to these categories were defined using bioinformatic prediction as described in our previous study. 30 However, as only a fraction of the genes within each gene set display differential expression between the classes, we limited the final gene sets to the genes that constituted the leading edge in a GSEA analysis. These leading edge genes are considered to be the core genes driving the enrichment signal. 63 The leading edge genes thus form each of the final metagenes, and the score of each metagene is defined as the median expression of its leading edge genes.
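Given an expression matrix and the leading edge genes of one gene set, the metagene score defined above reduces to a per-sample median. The following Python sketch illustrates this; the variable and function names are ours, not the authors' code.

```python
import numpy as np

def metagene_score(expr, gene_index, leading_edge):
    """Metagene score per sample: median expression of the gene set's
    leading-edge genes.
    expr:        genes x samples expression matrix
    gene_index:  mapping {gene symbol -> row index in expr}
    leading_edge: gene symbols forming the leading edge of the set"""
    rows = [gene_index[g] for g in leading_edge if g in gene_index]
    # Median across the leading-edge genes, one score per sample (column).
    return np.median(expr[rows, :], axis=0)
```

Genes absent from the index (for example, missing from a platform) are simply skipped, which mirrors why a median-based metagene is more robust to partially missing gene lists than a fixed single gene signature.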
Classifier Building
SGs or MGs within each training dataset were ranked according to their random forest importance value. For each feature, this value reports the standardized drop in prediction accuracy when the values of that feature are permuted. 37 This ranking was used for model building by subsequently adding one feature at a time in a top-down forward-selection wrapper approach, starting with the top two features. To avoid creating bias during gene selection and training of the final classifier, or in the estimate of classification performance, we used 10 times repeated 10-fold cross validation accuracies as the performance measure, as this metric has previously been shown to give an excellent bias-variance balance. 38
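The forward-selection wrapper above can be sketched as follows. This is a scikit-learn stand-in for the authors' R/randomForest workflow; the function name, tree count, and use of repeated stratified cross validation are our assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def forward_select(X, y, n_repeats=10, n_splits=10, seed=0):
    """Rank features by RF importance, then grow the model one feature at a
    time starting from the top two, keeping the subset with the best
    repeated cross-validation accuracy."""
    rf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]  # descending importance
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=seed)
    best_k, best_acc = 2, -np.inf
    for k in range(2, X.shape[1] + 1):
        # Accuracy of a fresh RF trained on the top-k ranked features.
        acc = cross_val_score(
            RandomForestClassifier(n_estimators=200, random_state=seed),
            X[:, order[:k]], y, cv=cv).mean()
        if acc > best_acc:
            best_k, best_acc = k, acc
    return order[:best_k], best_acc
```

Ranking once on the full training data and then evaluating nested subsets keeps the search cheap; the repeated cross validation supplies the bias-variance balance mentioned above.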
In this study, the models were developed to achieve the best mean of sensitivity and specificity, thus forcing the overall accuracy toward a balanced sensitivity and specificity. Three different classification methods were used for model building: random forest (RF) 37 and SVM with a radial basis kernel (R-SVM) or a sigmoid kernel (S-SVM). 64
As all the classification methods have hyperparameters, we determined the optimal combination of these parameters using a grid search built into the 10-fold cross validation procedure. For this purpose the
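A hyperparameter grid search nested inside the cross validation can be sketched as follows. This is a scikit-learn approximation of the tuning the authors performed with R's e1071; the grid values and function name are purely illustrative.

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

def tune_svm(X, y, seed=0):
    """Grid search over SVM hyperparameters, scored by 10-fold
    cross validation on the training data only."""
    grid = {"C": [0.1, 1, 10], "gamma": [0.001, 0.01, 0.1]}  # illustrative
    search = GridSearchCV(
        SVC(kernel="rbf"), grid,
        cv=StratifiedKFold(10, shuffle=True, random_state=seed))
    search.fit(X, y)
    return search.best_params_, search.best_score_
```

Because the search is scored inside the cross-validation folds, the chosen hyperparameters never see the held-out test samples, avoiding optimistic bias.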
Classification performance assessment
We compared the performance of SG- and MG-profiles developed from the four datasets (AG1, AF1, AF2, and AF3) at three levels. At each level, we report the classification accuracy as the mean of sensitivity and specificity, referred to as the balanced accuracy (bAcc). Within the same dataset (AG1 or AF1), this was done using 10 times repeated 10-fold cross validation classification accuracies. For each combination of either MG or SG and classification method, this led to the generation of 100 different models. The internal performance of the MG- or SG-models was therefore defined as the mean performance of these 100 models. When transferring features (defined by AG1, AF1, AF2, or AF3) for classifier building and testing in independent data, leave-one-out cross validation (LOOCV) bAcc was used as the performance metric. Although 10 times repeated 10-fold cross validation is known to be a robust performance metric, we wanted to classify each patient in the validation datasets one by one to mimic the clinical situation, and the LOOCV bAcc was therefore chosen as the performance metric.
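The bAcc used throughout is computed directly from the confusion counts; a minimal Python sketch, assuming metastasis is coded as the positive class (label 1):

```python
def balanced_accuracy(y_true, y_pred):
    """bAcc = (sensitivity + specificity) / 2."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn)   # fraction of metastases correctly called
    spec = tn / (tn + fp)   # fraction of non-metastases correctly called
    return (sens + spec) / 2
```

Unlike raw accuracy, bAcc is unaffected by the class imbalance typical of clinical cohorts, where non-metastatic patients dominate.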
When transferring the best trained classifier (defined by AG1, AF1, AF2, or AF3) from the training sets to classify the independent samples, the classification corresponding to a single validation run upon all test samples is reported. During the transfer of features and classifiers, we examined classification performance both between different platforms and between similar platforms. The between-platform setting (Agilent validated on Affymetrix) covers AG1 validated on AF2 or AF3. The similar-platform setting covers validation using the same microarray platform as the external validation set (Affymetrix validated on Affymetrix), that is, AF1 on AF2 or AF3, but also AF2 validated on AF3 and vice versa.
Endpoint/outcome
Several studies that have developed classifiers predicting a dichotomous endpoint have optimized classifier performance using a particular longitudinal cutoff,2,3,8,22 thus excluding patients not experiencing an event within the longitudinal cutoff as well as patients experiencing events after this particular time point. These two circumstances could enhance the performance of these classifiers with respect to classifying early metastasis events, but they are prone to perform poorly when classifying late metastasis events.40,46 In contrast, a study by Thomassen and coworkers used the full follow-up time for classifier development, addressing whether a patient would ever develop a metastasis during the entire study period, 7 and thus has the strength of giving equal weight to both early and late metastasis events. Furthermore, time-to-event analysis can sometimes be misleading when considering classification, and transformation of time-to-event data into a binary outcome can blur prediction of the classes. 65 In addition, genes that are significantly correlated with survival time are not always optimal for classification. 66
The outcome differs among the eight datasets used for defining the single genes and metagenes; that is, local and regional recurrences are included in some studies. However, non-metastatic relapses constitute a minority in clinical cohorts. Therefore, the outcome is defined as metastasis or no metastasis after the time of diagnosis.
Comparison of external validation performance
To test the significance of the performance difference between MG- and SG-features/classifiers, we used a repeated down-sampling approach consisting of the following five steps: (1) the classification results of the MG- and SG-features/classifiers upon the entire test data were initially converted into a balanced test result by down-sampling. Down-sampling obtains a class-balanced dataset from an imbalanced dataset by removing randomly selected samples from the majority class until the number of remaining majority-class samples equals the sample size of the minority class. In this study, the majority class is the non-metastasis class. (2) The number of samples correctly classified by MGs but incorrectly by SGs, and vice versa, is counted; (3) the significance of the difference in these counts is determined using a χ2 test; 67 (4) The
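One round of the down-sampling and comparison steps above can be sketched as follows. The χ2 test on the two discordant counts with one degree of freedom is equivalent to McNemar's test without continuity correction; implementation details beyond the text, including the function name, are our assumptions.

```python
import random
from math import erf, sqrt

def downsample_and_compare(y_true, pred_mg, pred_sg, seed=0):
    """Down-sample the majority (non-metastasis, label 0) class to the
    minority size, count samples correct under MG but not SG (b) and
    vice versa (c), and compute a chi-square p-value on (b, c)."""
    rng = random.Random(seed)
    idx0 = [i for i, t in enumerate(y_true) if t == 0]
    idx1 = [i for i, t in enumerate(y_true) if t == 1]
    keep = set(rng.sample(idx0, len(idx1)) + idx1)   # balanced subset
    b = sum(1 for i in keep
            if pred_mg[i] == y_true[i] and pred_sg[i] != y_true[i])
    c = sum(1 for i in keep
            if pred_sg[i] == y_true[i] and pred_mg[i] != y_true[i])
    if b + c == 0:
        return b, c, 1.0
    chi2 = (b - c) ** 2 / (b + c)
    # Chi-square with 1 df: P(X > chi2) = 2 * (1 - Phi(sqrt(chi2))).
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(sqrt(chi2) / sqrt(2.0))))
    return b, c, p
```

Only the discordant samples carry information about which feature type is better, which is why the test is built on the counts b and c rather than the overall accuracies.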
List of abbreviations used
AF1, Affymetrix dataset 1 (Rotterdam dataset); AF2, Affymetrix dataset 2 (Transbig dataset); AF3, Affymetrix dataset 3 (Mainz dataset); AG1, Agilent dataset 1 (node-negative Amsterdam samples); bAcc, balanced accuracy; GEO, Gene Expression Omnibus; GO, Gene Ontology; GOBP, Gene Ontology Biological Process; GSEA, gene set enrichment analysis; KEGG, Kyoto Encyclopedia of Genes and Genomes; LOOCV, leave-one-out cross validation; MG, metagene; MsigDB, Molecular Signature Database; mRNA, messenger RNA; RF, random forest; R-SVM, support vector machine with a radial basis kernel; SG, single gene; S-SVM, support vector machine with a sigmoid kernel.
Author Contributions
Designed the study: MB, MT, QT, TK. Performed scripting, calculations, and data analysis: MB. Developed methods for statistical analysis: MB, QT. Provided the metagene datasets: MT. Wrote the manuscript: MB. Proofread the manuscript: MT, QT, TK.
Funding
Author(s) disclose no funding sources.
Competing Interests
Author(s) disclose no potential conflicts of interest.
Disclosures and Ethics
As a requirement of publication author(s) have provided to the publisher signed confirmation of compliance with legal and ethical obligations including but not limited to the following: authorship and contributorship, conflicts of interest, privacy and confidentiality and (where applicable) protection of human and animal research subjects. The authors have read and confirmed their agreement with the ICMJE authorship and conflict of interest criteria. The authors have also confirmed that this article is unique and not under consideration or published in any other publication, and that they have permission.
