Meta-analysis using SIAMCAT

About This Vignette

In this vignette, we demonstrate how SIAMCAT can facilitate metagenomic meta-analyses, focusing on both association testing and ML workflows. As an example, we use five different metagenomic studies of Crohn's disease (CD) for which taxonomic profiles are available. These studies are:

  1. metaHIT
  2. Lewis et al. 2015
  3. He et al. 2017
  4. Franzosa et al. 2019
  5. HMP2

Setup

library("tidyverse")
library("SIAMCAT")

First, we load the data for all studies, which are available for download from Zenodo. The raw data have been preprocessed, taxonomically profiled with mOTUs2, and then aggregated at the genus level.

# base url for data download
data.loc <- 'https://zenodo.org/api/files/d81e429c-870f-44e0-a44a-2a4aa541b6c1/'
# datasets
datasets <- c('metaHIT', 'Lewis_2015', 'He_2017', 'Franzosa_2019', 'HMP2')
# metadata
meta.all <- read_tsv(paste0(data.loc, 'meta_all_cd.tsv'))
# features
feat <- read.table(paste0(data.loc, 'feat_genus_cd.tsv'), 
                check.names = FALSE, stringsAsFactors = FALSE, quote = '', 
                sep='\t')
feat <- as.matrix(feat)
# check that metadata and features agree
stopifnot(all(colnames(feat) == meta.all$Sample_ID))

Let us have a look at the distribution of groups across the studies:

table(meta.all$Study, meta.all$Group)
##                
##                  CD CTR
##   Franzosa_2019  88  56
##   HMP2          583 357
##   He_2017        49  53
##   Lewis_2015    294  25
##   metaHIT        21  71

Some of the studies contain more than one sample for the same subject. For example, the HMP2 publication focused on the longitudinal aspect of CD. Therefore, we want to take this into account when training and evaluating the machine learning model (see the vignette about Machine learning pitfalls) and when performing the association testing. Thus, it will be convenient to create a second metadata table containing a single entry for each individual.

meta.ind <- meta.all %>% 
    group_by(Individual_ID) %>% 
    filter(Timepoint==min(Timepoint)) %>% 
    ungroup()
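
As a quick sanity check (a minimal sketch; the column names follow the metadata loaded above), we can verify that the reduced table indeed contains exactly one entry per individual:

# every individual should now appear exactly once
stopifnot(!any(duplicated(meta.ind$Individual_ID)))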

Compare Associations

Compute Associations with SIAMCAT

To test for associations, we can encapsulate each dataset into a different SIAMCAT object and use the check.associations function:

assoc.list <- list()
for (d in datasets){
    # filter metadata and convert to dataframe
    meta.train <- meta.ind %>% 
        filter(Study==d) %>% 
        as.data.frame()
    rownames(meta.train) <- meta.train$Sample_ID

    # create SIAMCAT object
    sc.obj <- siamcat(feat=feat, meta=meta.train, label='Group', case='CD')
    # test for associations
    sc.obj <- check.associations(sc.obj, log.n0=1e-05, 
                                feature.type = 'original')
    # extract the associations and save them in the assoc.list
    temp <- associations(sc.obj)
    temp$genus <- rownames(temp)
    assoc.list[[d]] <- temp %>% 
        select(genus, fc, auc, p.adj) %>% 
        mutate(Study=d)
}
# combine all associations
df.assoc <- bind_rows(assoc.list)
df.assoc <- df.assoc %>% filter(genus!='unclassified')
head(df.assoc)
##                                             genus fc auc p.adj   Study
## 159730 Thermovenabulum...1 159730 Thermovenabulum  0 0.5   NaN metaHIT
## 42447 Anaerobranca...2         42447 Anaerobranca  0 0.5   NaN metaHIT
## 1562 Desulfotomaculum...3   1562 Desulfotomaculum  0 0.5   NaN metaHIT
## 60919 Sanguibacter...4         60919 Sanguibacter  0 0.5   NaN metaHIT
## 357 Agrobacterium...5           357 Agrobacterium  0 0.5   NaN metaHIT
## 392332 Geoalkalibacter...6 392332 Geoalkalibacter  0 0.5   NaN metaHIT
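
Before focusing on individual genera, we can count how many genera are significantly associated in each study (a small sketch using the FDR-corrected P values computed above; the cutoff of 0.05 is an arbitrary example):

# number of genera with FDR-corrected P value below 0.05, per study
df.assoc %>% 
    filter(p.adj < 0.05) %>% 
    count(Study)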

Plot Heatmap for Interesting Genera

Now, we can compare the associations stored in the df.assoc tibble. For example, we can extract features that are strongly associated with the label (single-feature AUROC > 0.75 or < 0.25) in at least one of the studies and plot the generalized fold change as a heatmap.

genera.of.interest <- df.assoc %>% 
    group_by(genus) %>% 
    summarise(m=mean(auc), n.filt=any(auc < 0.25 | auc > 0.75), 
        .groups='keep') %>% 
    filter(n.filt) %>% 
    arrange(m)

After extracting the genera of interest, we can plot them:

df.assoc %>% 
    # take only genera of interest
    filter(genus %in% genera.of.interest$genus) %>% 
    # convert to factor to enforce an ordering by mean AUC
    mutate(genus=factor(genus, levels = rev(genera.of.interest$genus))) %>% 
    # convert to factor to enforce ordering again
    mutate(Study=factor(Study, levels = datasets)) %>% 
    # annotate the cells in the heatmap with stars
    mutate(l=case_when(p.adj < 0.01~'*', TRUE~'')) %>%  
    ggplot(aes(y=genus, x=Study, fill=fc)) + 
        geom_tile() + 
        scale_fill_gradient2(low = '#3B6FB6', high='#D41645', mid = 'white', 
            limits=c(-2.7, 2.7), name='Generalized\nfold change') + 
        theme_minimal() + 
        geom_text(aes(label=l)) +
        theme(panel.grid = element_blank()) + 
        xlab('') + ylab('') +
        theme(axis.text = element_text(size=6))

Study as Confounding Factor

Additionally, we can check how differences between studies might influence the variance of specific genera. To do so, we create a single SIAMCAT object which holds the complete dataset and then run the check.confounders function.

df.meta <- meta.ind %>% 
    as.data.frame()
rownames(df.meta) <- df.meta$Sample_ID
sc.obj <- siamcat(feat=feat, meta=df.meta, label='Group', case='CD')
## + starting create.label
## Label used as case:
##    CD
## Label used as control:
##    CTR
## + finished create.label.from.metadata in 0.001 s
## + starting validate.data
## +++ checking overlap between labels and features
## + Keeping labels of 504 sample(s).
## +++ checking sample number per class
## +++ checking overlap between samples and metadata
## + finished validate.data in 0.032 s
check.confounders(sc.obj, fn.plot = './confounder_plot_cd_meta.pdf',
                feature.type='original')
## Finished checking metadata for confounders, results plotted to: ./confounder_plot_cd_meta.pdf

The resulting variance plot shows that some genera are strongly impacted by differences between studies, whereas others are not. Of note, the genera that vary most with the label (CD vs. controls) show comparatively little variance across studies.

ML Meta-analysis

Train LASSO Models

Lastly, we can perform the machine learning (ML) meta-analysis: we first train one model for each dataset and then apply it to the other datasets using the holdout testing functionality of SIAMCAT. For datasets with repeated samples per subject, we block the cross-validation by subject in order not to bias the results (see also the vignette about Machine learning pitfalls).

# create tibble to store all the predictions
auroc.all <- tibble(study.train=character(0), 
                    study.test=character(0),
                    AUC=double(0))
# and a list to save the trained SIAMCAT objects
sc.list <- list()
for (i in datasets){
    # restrict to a single study
    meta.train <- meta.all %>% 
        filter(Study==i) %>% 
        as.data.frame()
    rownames(meta.train) <- meta.train$Sample_ID

    ## take into account repeated sampling by including a parameter
    ## in the create.data.split function:
    ## for studies with repeated samples, we want to block the
    ## cross-validation by the column 'Individual_ID'
    block <- NULL
    if (i %in% c('metaHIT', 'Lewis_2015', 'HMP2')){
        block <- 'Individual_ID'
        if (i == 'HMP2'){ 
            # for the HMP2 dataset, the number of repeated samples per subject 
            # needs to be reduced, because some subjects were sampled 
            # 20 times, others only 5 times
            meta.train <- meta.all %>% 
                filter(Study=='HMP2') %>% 
                group_by(Individual_ID) %>% 
                sample_n(5, replace = TRUE) %>% 
                distinct() %>% 
                as.data.frame()
            rownames(meta.train) <- meta.train$Sample_ID
        }
    }
    # create SIAMCAT object
    sc.obj.train <- siamcat(feat=feat, meta=meta.train, 
                            label='Group', case='CD')
    # normalize features
    sc.obj.train <- normalize.features(sc.obj.train, norm.method = 'log.std',
        norm.param=list(log.n0=1e-05, sd.min.q=0),feature.type = 'original')
    # Create data split
    sc.obj.train <- create.data.split(sc.obj.train,
        num.folds = 10, num.resample = 10, inseparable = block)
    # train LASSO model
    sc.obj.train <- train.model(sc.obj.train, method='lasso')

    ## apply trained models to other datasets

    # loop through datasets again
    for (i2 in datasets){
        if (i == i2){
            # make and evaluate cross-validation predictions (same dataset)
            sc.obj.train <- make.predictions(sc.obj.train)
            sc.obj.train <- evaluate.predictions(sc.obj.train)
            auroc.all <- auroc.all %>% 
                add_row(study.train=i, study.test=i,
                    AUC=eval_data(sc.obj.train)$auroc %>% as.double())
        } else {
            # make and evaluate on the external datasets
            # use meta.ind here, since we want only one sample per subject!
            meta.test <- meta.ind %>% 
                filter(Study==i2) %>%
                as.data.frame()
            rownames(meta.test) <- meta.test$Sample_ID
            sc.obj.test <- siamcat(feat=feat, meta=meta.test,
                                    label='Group', case='CD')
            # make holdout predictions
            sc.obj.test <- make.predictions(sc.obj.train, 
                                            siamcat.holdout = sc.obj.test)
            sc.obj.test <- evaluate.predictions(sc.obj.test)
            auroc.all <- auroc.all %>% 
                add_row(study.train=i, study.test=i2,
                    AUC=eval_data(sc.obj.test)$auroc %>% as.double())
        }
    }
    # save the trained model
    sc.list[[i]] <- sc.obj.train
}
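
If we want to inspect the cross-validation performance of a single model in more detail, we can use the standard SIAMCAT evaluation plot on the stored objects (a minimal sketch; the output file name is arbitrary):

# ROC and precision-recall curves for the model trained on metaHIT
model.evaluation.plot(sc.list[['metaHIT']], fn.plot = './eval_metaHIT.pdf')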

After training and applying all models, we can calculate the average AUROC for each dataset when it is used as an external test set:

test.average <- auroc.all %>% 
    filter(study.train!=study.test) %>% 
    group_by(study.test) %>% 
    summarise(AUC=mean(AUC), .groups='drop') %>% 
    mutate(study.train="Average")
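
For comparison, we can also compute the average within-study (cross-validation) performance, i.e. the mean over the diagonal of the AUROC matrix (a small sketch using the tibble assembled above):

# average AUROC for within-study cross-validation
auroc.all %>% 
    filter(study.train==study.test) %>% 
    summarise(AUC=mean(AUC))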

Now that we have the AUROC values, we can plot them as a heatmap:

# combine AUROC values with test average
bind_rows(auroc.all, test.average) %>% 
    # highlight cross validation versus transfer results
    mutate(CV=study.train == study.test) %>%
    # for facetting later
    mutate(split=case_when(study.train=='Average'~'Average', TRUE~'none')) %>% 
    mutate(split=factor(split, levels = c('none', 'Average'))) %>% 
    # convert to factor to enforce ordering
    mutate(study.train=factor(study.train, levels=c(datasets, 'Average'))) %>% 
    mutate(study.test=factor(study.test, 
                            levels=c(rev(datasets),'Average'))) %>% 
    ggplot(aes(y=study.test, x=study.train, fill=AUC, linewidth=CV, color=CV)) +
        geom_tile() + theme_minimal() +
        # text in tiles
        geom_text(aes(label=format(AUC, digits=2)), 
            col='white', size=2)+
        # color scheme
        scale_fill_gradientn(colours=rev(c('darkgreen','forestgreen', 
                                        'chartreuse3','lawngreen', 
                                        'yellow')), limits=c(0.5, 1)) +
        # axis position/remove boxes/ticks/facet background/etc.
        scale_x_discrete(position='top') + 
        theme(axis.line=element_blank(), 
                axis.ticks = element_blank(), 
                axis.text.x.top = element_text(angle=45, hjust=.1), 
                panel.grid=element_blank(), 
                panel.border=element_blank(), 
                strip.background = element_blank(), 
                strip.text = element_blank()) + 
        xlab('Training Set') + ylab('Test Set') + 
        scale_color_manual(values=c('#FFFFFF00', 'grey'), guide='none') + 
        scale_linewidth_manual(values=c(0, 1), guide='none') + 
        facet_grid(~split, scales = 'free', space = 'free')

Investigate Feature Weights

Now that we have the trained models (saved in the sc.list object), we can also extract the model weights using SIAMCAT and compare them to the associations we computed above.

weight.list <- list()
for (d in datasets){
    sc.obj.train <- sc.list[[d]]
    # extract the feature weights out of the SIAMCAT object
    temp <- feature_weights(sc.obj.train)
    temp$genus <- rownames(temp)
    # save selected info in the weight.list
    weight.list[[d]] <- temp %>% 
        select(genus, median.rel.weight, mean.rel.weight, percentage) %>% 
        mutate(Study=d) %>% 
        mutate(r.med=rank(-abs(median.rel.weight)), 
            r.mean=rank(-abs(mean.rel.weight)))
}
# combine all feature weights into a single tibble
df.weights <- bind_rows(weight.list)
df.weights <- df.weights %>% filter(genus!='unclassified')
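
Before plotting, we can also check how consistently individual genera were selected across studies; the percentage column reports the fraction of models in the repeated cross-validation in which a feature received a non-zero weight. A small sketch, with the cutoff of 0.5 as an arbitrary example:

# genera with a non-zero weight in at least half of the models,
# in all five studies
df.weights %>% 
    filter(percentage > 0.5) %>% 
    count(genus, name='n.studies') %>% 
    filter(n.studies==length(datasets))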

Using the combined table, we can plot another heatmap with the weights, focusing on the genera of interest for which we plotted the associations as a heatmap above.

# compute absolute feature weights
abs.weights <- df.weights %>% 
    group_by(Study) %>% 
    summarise(sum.median=sum(abs(median.rel.weight)),
                sum.mean=sum(abs(mean.rel.weight)),
                .groups='drop')

df.weights %>% 
    full_join(abs.weights, by='Study') %>% 
    # normalize by the absolute model size
    mutate(median.rel.weight=median.rel.weight/sum.median) %>% 
    # only include genera of interest
    filter(genus %in% genera.of.interest$genus) %>% 
    # highlight feature rank for the top 20 features
    mutate(r.med=case_when(r.med > 20~NA_real_, TRUE~r.med)) %>%
    # enforce the correct ordering by converting to factors again
    mutate(genus=factor(genus, levels = rev(genera.of.interest$genus))) %>% 
    mutate(Study=factor(Study, levels = datasets)) %>% 
    ggplot(aes(y=genus, x=Study, fill=median.rel.weight)) + 
        geom_tile() + 
        scale_fill_gradientn(colours=rev(
            c('#007A53', '#009F4D', "#6CC24A", 'white',
            "#EFC06E", "#FFA300", '#BE5400')), 
            limits=c(-0.15, 0.15)) +
        theme_minimal() + 
        geom_text(aes(label=r.med), col='black', size= 2) +
        theme(panel.grid = element_blank()) + 
        xlab('') + ylab('') +
        theme(axis.text = element_text(size=6))

Session Info

sessionInfo()
## R version 4.4.2 (2024-10-31)
## Platform: x86_64-pc-linux-gnu
## Running under: Ubuntu 24.04.1 LTS
## 
## Matrix products: default
## BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
## LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so;  LAPACK version 3.12.0
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_US.UTF-8        LC_COLLATE=C              
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## time zone: Etc/UTC
## tzcode source: system (glibc)
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] ggpubr_0.6.0     SIAMCAT_2.11.0   phyloseq_1.51.0  mlr3_0.21.1     
##  [5] lubridate_1.9.3  forcats_1.0.0    stringr_1.5.1    dplyr_1.1.4     
##  [9] purrr_1.0.2      readr_2.1.5      tidyr_1.3.1      tibble_3.2.1    
## [13] ggplot2_3.5.1    tidyverse_2.0.0  BiocStyle_2.35.0
## 
## loaded via a namespace (and not attached):
##   [1] RColorBrewer_1.1-3      sys_3.4.3               jsonlite_1.8.9         
##   [4] shape_1.4.6.1           magrittr_2.0.3          farver_2.1.2           
##   [7] corrplot_0.95           nloptr_2.1.1            rmarkdown_2.29         
##  [10] zlibbioc_1.52.0         vctrs_0.6.5             multtest_2.63.0        
##  [13] minqa_1.2.8             PRROC_1.3.1             rstatix_0.7.2          
##  [16] htmltools_0.5.8.1       progress_1.2.3          curl_6.0.1             
##  [19] broom_1.0.7             Rhdf5lib_1.29.0         Formula_1.2-5          
##  [22] rhdf5_2.51.0            pROC_1.18.5             sass_0.4.9             
##  [25] parallelly_1.39.0       bslib_0.8.0             plyr_1.8.9             
##  [28] palmerpenguins_0.1.1    mlr3tuning_1.2.0        cachem_1.1.0           
##  [31] uuid_1.2-1              buildtools_1.0.0        igraph_2.1.1           
##  [34] lifecycle_1.0.4         iterators_1.0.14        pkgconfig_2.0.3        
##  [37] Matrix_1.7-1            R6_2.5.1                fastmap_1.2.0          
##  [40] GenomeInfoDbData_1.2.13 future_1.34.0           digest_0.6.37          
##  [43] numDeriv_2016.8-1.1     colorspace_2.1-1        S4Vectors_0.45.2       
##  [46] mlr3misc_0.15.1         vegan_2.6-8             labeling_0.4.3         
##  [49] fansi_1.0.6             timechange_0.3.0        httr_1.4.7             
##  [52] abind_1.4-8             mgcv_1.9-1              compiler_4.4.2         
##  [55] beanplot_1.3.1          bit64_4.5.2             withr_3.0.2            
##  [58] backports_1.5.0         carData_3.0-5           ggsignif_0.6.4         
##  [61] LiblineaR_2.10-24       MASS_7.3-61             biomformat_1.35.0      
##  [64] permute_0.9-7           tools_4.4.2             ape_5.8                
##  [67] glue_1.8.0              lgr_0.4.4               nlme_3.1-166           
##  [70] rhdf5filters_1.19.0     grid_4.4.2              checkmate_2.3.2        
##  [73] gridBase_0.4-7          cluster_2.1.6           reshape2_1.4.4         
##  [76] ade4_1.7-22             generics_0.1.3          gtable_0.3.6           
##  [79] tzdb_0.4.0              data.table_1.16.2       hms_1.1.3              
##  [82] car_3.1-3               utf8_1.2.4              XVector_0.47.0         
##  [85] BiocGenerics_0.53.3     foreach_1.5.2           pillar_1.9.0           
##  [88] vroom_1.6.5             bbotk_1.3.0             splines_4.4.2          
##  [91] lattice_0.22-6          bit_4.5.0               survival_3.7-0         
##  [94] tidyselect_1.2.1        maketools_1.3.1         Biostrings_2.75.1      
##  [97] knitr_1.49              infotheo_1.2.0.1        gridExtra_2.3          
## [100] IRanges_2.41.1          stats4_4.4.2            xfun_0.49              
## [103] Biobase_2.67.0          matrixStats_1.4.1       stringi_1.8.4          
## [106] UCSC.utils_1.3.0        yaml_2.3.10             boot_1.3-31            
## [109] evaluate_1.0.1          codetools_0.2-20        BiocManager_1.30.25    
## [112] cli_3.6.3               munsell_0.5.1           jquerylib_0.1.4        
## [115] mlr3learners_0.8.0      Rcpp_1.0.13-1           GenomeInfoDb_1.43.1    
## [118] globals_0.16.3          parallel_4.4.2          prettyunits_1.2.0      
## [121] paradox_1.0.1           lme4_1.1-35.5           listenv_0.9.1          
## [124] glmnet_4.1-8            lmerTest_3.1-3          scales_1.3.0           
## [127] crayon_1.5.3            rlang_1.1.4             mlr3measures_1.0.0