--- title: "BERT-Vignette" output: BiocStyle::html_document: toc: true vignette: > %\VignetteIndexEntry{BERT-Vignette} %\VignetteEncoding{UTF-8} %\VignetteEngine{knitr::rmarkdown} editor_options: markdown: wrap: sentence --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) ``` # Introduction BERT (Batch-Effect Removal with Trees) offers flexible and efficient batch effect correction of omics data, while providing maximum tolerance to missing values. Tested on multiple datasets from proteomic analyses, BERT offered a typical 5-10x runtime improvement over existing methods, while retaining more numeric values and preserving batch effect reduction quality. As such, BERT is a valuable preprocessing tool for data analysis workflows, in particular for proteomic data. By providing BERT via Bioconductor, we make this tool available to a wider research community. An accompanying research paper is currently under preparation and will be made public soon. BERT addresses the same fundamental data integration challenges than the [HarmonizR][] package, which is released on Bioconductor in November 2023. However, various algorithmic modications and optimizations of BERT provide better execution time and better data coverage than HarmonizR. Moreover, BERT offers a more user-friendly design and a less error-prone input format. **Please note that our package *BERT* is neither affiliated with nor related to *Bidirectional Encoder Representations from Transformers* as published by Google.** _Please report any questions and issues in the GitHub forum, the BioConductor forum or directly contact [the authors](mailto:schumany@hsu-hh.de,schlumbohm@hsu-hh.de),_ # Installation Please download and install a current version of R ([Windows binaries](https://cran.r-project.org/bin/windows/base/release.html)). You might want to consider installing a development environment as well, e.g. [RStudio](https://posit.co/downloads/). Finally, BERT can be installed via Bioconductor using ```{r, eval=FALSE} if (!require("BiocManager", quietly = TRUE)){ install.packages("BiocManager") } BiocManager::install("BERT") ``` which will install all required dependencies. To install the development version of BERT, you can use devtools as follows ```{r, eval=FALSE} devtools::install_github("HSU-HPC/BERT") ``` which may require the manual installation of the dependencies `sva` and `limma`. ```{r, eval=FALSE} if (!require("BiocManager", quietly = TRUE)){ install.packages("BiocManager") } BiocManager::install("sva") BiocManager::install("limma") ``` # Data Preparation {#data-preparation} As input, BERT requires a dataframe[^1] with samples in rows and features in columns. For each sample, the respective batch should be indicated by an integer or string in a corresponding column labelled *Batch*. Missing values should be labelled as `NA`. A valid example dataframe could look like this: [^1]: Matrices and SummarizedExperiments work as well, but will automatically be converted to dataframes. ```{r} example = data.frame(feature_1 = stats::rnorm(5), feature_2 = stats::rnorm(5), Batch=c(1,1,2,2,2)) example ``` Note that each batch should contain at least two samples. Optional columns that can be passed are - `Label` A column with integers or strings indicating the (known) class for each sample. `NA` is not allowed. BERT may use this columns and `Batch` to compute quality metrics after batch effect correction. - `Sample` A sample name. 
Note that for a `SummarizedExperiment`, BERT tries to find all metadata information, including the mandatory batch information, using `colData`. For instance, a valid `SummarizedExperiment` might be defined as

```{r}
nrows <- 200
ncols <- 8
expr_values <- matrix(runif(nrows * ncols, 1, 1e4), nrows)
# colData also takes all other metadata information, such as Label, Sample,
# Covariables etc.
colData <- data.frame(Batch=c(1,1,1,1,2,2,2,2), Reference=c(1,1,0,0,1,1,0,0))
dataset_raw = SummarizedExperiment::SummarizedExperiment(assays=list(expr=expr_values), colData=colData)
```

# Basic Usage

BERT can be invoked by importing the `BERT` library and calling the `BERT` function. The batch effect corrected data is returned as a dataframe that mirrors the input dataframe[^2].

[^2]: In particular, the row and column names are in the same order and the optional columns are preserved.

```{r}
library(BERT)
# generate test data with 10% missing values as provided by the BERT library
dataset_raw <- generate_dataset(features=60, batches=10, samplesperbatch=10, mvstmt=0.1, classes=2)
# apply BERT
dataset_adjusted <- BERT(dataset_raw)
```

BERT uses the `logging` library to convey live information to the user during the adjustment procedure. The algorithm first verifies the shape and suitability of the input dataframe (lines 1-6 of the log) before continuing with the actual batch effect correction (lines 8-14). BERT measures batch effects before and after the correction step by means of the average silhouette width (ASW) with respect to batch and labels (lines 7 and 15). The ASW Label should increase in a successful batch effect correction, whereas low values ($\leq 0$) are desirable for the ASW Batch[^3]. Finally, BERT prints the total function execution time (including the computation time for the quality metrics).

[^3]: The optimum of ASW Label is 1, which, however, is typically not achieved on real-world datasets. Also, the optimum of ASW Batch can vary, depending on the class distributions of the batches.
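Since the returned dataframe mirrors the input (see the footnote above), simple sanity checks on the output are straightforward. A minimal sketch, reusing `dataset_raw` and `dataset_adjusted` from the previous chunk:

```{r}
# the adjusted dataframe keeps the sample/feature layout of the input,
# so metadata columns such as Batch can be compared directly
all(rownames(dataset_adjusted) == rownames(dataset_raw))
all(dataset_adjusted$Batch == dataset_raw$Batch)
```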
# Advanced Options

## Parameters

BERT offers a large number of parameters to customize the batch effect adjustment. The full function call, including all defaults, is

``` r
BERT(data, cores = NULL, combatmode = 1, corereduction=2, stopParBatches=2,
     backend="default", method="ComBat", qualitycontrol=TRUE, verify=TRUE,
     labelname="Label", batchname="Batch", referencename="Reference",
     samplename="Sample", covariatename=NULL, BPPARAM=NULL, assayname=NULL)
```

In the following, we list the respective meaning of each parameter (an example call combining several of them is shown after the list):

- `data`: The input dataframe/matrix/SummarizedExperiment to adjust. It must contain at least two samples per batch and at least two features. See [Data Preparation](#data-preparation) for detailed formatting instructions.
- `cores`: BERT uses [BiocParallel](https://bioconductor.org/packages/release/bioc/html/BiocParallel.html) for parallelization. If the user specifies a value for `cores`, BERT internally creates and uses a new instance of `BiocParallelParam`, which is, however, not exposed to the user. Setting this parameter can speed up the batch effect adjustment considerably, in particular for large datasets and on Unix-based operating systems. A value between $2$ and $4$ is a reasonable choice for typical commodity hardware. Multi-node computations are not supported as of now. If `cores` is not specified, BERT will default to `BiocParallel::bpparam()`, which may have been set by the user or the system. Additionally, the user can directly specify a specific instance of `BiocParallelParam` via the `BPPARAM` argument.
- `combatmode`: An integer that encodes the parameters to use for ComBat.

| Value | par.prior | mean.only |
|-------|-----------|-----------|
| 1     | TRUE      | FALSE     |
| 2     | TRUE      | TRUE      |
| 3     | FALSE     | FALSE     |
| 4     | FALSE     | TRUE      |

The value of this parameter is ignored if `method!="ComBat"`.

- `corereduction`: Positive integer indicating the factor by which the number of processes should be reduced, once no further adjustment is possible for the current number of batches.[^4] This parameter is used only if the user specifies a custom value for the parameter `cores`.
- `stopParBatches`: Positive integer indicating the minimum number of batches required at a hierarchy level to proceed with parallelized adjustment. If the number of batches is smaller, adjustment will be performed sequentially to avoid communication overheads.
- `backend`: The backend to use for inter-process communication. Possible choices are `default` and `file`, where the former refers to the default communication backend of the requested parallelization mode and the latter creates temporary `.rds` files for data communication. `default` is usually faster for small to medium-sized datasets.
- `method`: The method to use for the underlying batch effect correction steps. Should be either `ComBat`, `limma` for `limma::removeBatchEffect`, or `ref` for adjustment using specified references (cf. [Data Preparation](#data-preparation)). The underlying batch effect adjustment method for `ref` is a modified version of the `limma` method.
- `qualitycontrol`: A boolean to (de)activate the ASW computation. Deactivating the ASW computation accelerates the overall computation.
- `verify`: A boolean to (de)activate the initial format check of the input data. Deactivating this verification step accelerates the computation.
- `labelname`: A string containing the name of the column to use as class labels. The default is "Label".
- `batchname`: A string containing the name of the column to use as batch labels. The default is "Batch".
- `referencename`: A string containing the name of the column to use as reference labels. The default is "Reference".
- `samplename`: A string containing the name of the column to use as sample names. The default is "Sample".
- `covariatename`: A vector containing the names of columns with categorical covariables. The default is NULL, in which case all column names are matched against the pattern "Cov".
- `BPPARAM`: An instance of `BiocParallelParam` that will be used for parallelization. The default is NULL, in which case the value of `cores` determines the behaviour of BERT.
- `assayname`: If the user chooses to pass a `SummarizedExperiment` object, they need to specify here the name of the assay that BERT should be applied to. BERT then returns the input `SummarizedExperiment` with an additional assay labeled `assayname_BERTcorrected`.

[^4]: E.g., consider a BERT call with 8 batches and 8 processes. Further adjustment is not possible with this number of processes, since batches are always processed in pairs. With `corereduction=2`, the number of processes for the following adjustment steps would be set to $8/2=4$, which is the maximum number of usable processes for this example.
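To illustrate how several of these parameters interact, the following sketch (not evaluated) adjusts a hypothetical dataframe `my_data` whose batch and class columns use non-default names, with an explicitly constructed `BiocParallel` backend. The object `my_data` and its column names `"batch"` and `"group"` are assumptions for the sake of the example.

```{r, eval=FALSE}
# hypothetical input: a dataframe `my_data` with batches in column "batch"
# and class labels in column "group"
param <- BiocParallel::MulticoreParam(workers = 2) # use e.g. SnowParam() on Windows
dataset_adjusted <- BERT(my_data,
                         method = "limma",      # limma::removeBatchEffect as correction method
                         batchname = "batch",   # non-default batch column
                         labelname = "group",   # non-default label column
                         qualitycontrol = TRUE, # report ASW before and after adjustment
                         BPPARAM = param)       # explicit BiocParallel backend
```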
## Verbosity

BERT utilizes the `logging` package for output. The user can easily specify the verbosity of BERT by setting the global logging level in the script. For instance

```{r, eval=FALSE}
logging::setLevel("WARN") # set level to warn and upwards
result <- BERT(data, cores = 1) # BERT executes silently
```

## Choosing the Optimal Number of Cores

BERT exposes a number of parallelization parameters so as to provide users with maximum flexibility. For typical scenarios, however, the default parameters are well suited. For very large experiments ($>15$ batches), we recommend increasing the number of cores (a reasonable value is $4$, but larger values may be possible on your hardware). Most users should leave all parameters at their respective defaults.

# Examples

In the following, we present simple cookbook examples for BERT usage. Note that ASWs (and runtime) will most likely differ on your machine, since the data-generating process involves multiple random choices.

## Sequential Adjustment with limma

Here, BERT uses limma as the underlying batch effect correction algorithm (`method="limma"`) and performs all computations on a single process (the `cores` parameter is left at its default).

```{r}
# import BERT
library(BERT)
# generate data with 20 batches, 60 features, 15 samples per batch, 15% missing values and 2 classes
dataset_raw <- generate_dataset(features=60, batches=20, samplesperbatch=15, mvstmt=0.15, classes=2)
# BERT
dataset_adjusted <- BERT(dataset_raw, method="limma")
```

## Parallel Batch Effect Correction with ComBat

Here, BERT uses ComBat as the underlying batch effect correction algorithm (`method` is left at its default) and performs all computations on 2 processes (`cores=2`).

```{r}
# import BERT
library(BERT)
# generate data with 20 batches, 60 features, 15 samples per batch, 15% missing values and 2 classes
dataset_raw <- generate_dataset(features=60, batches=20, samplesperbatch=15, mvstmt=0.15, classes=2)
# BERT
dataset_adjusted <- BERT(dataset_raw, cores=2)
```
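For very large experiments, the parallel schedule can be tuned further with the `corereduction` and `stopParBatches` parameters described above. The following sketch is not evaluated and the chosen values are purely illustrative:

```{r, eval=FALSE}
# illustrative values only: start with 4 worker processes, divide the number of
# workers by 2 (corereduction=2) once no further parallel adjustment is possible
# with the current number of batches, and switch to sequential processing as soon
# as fewer than 4 batches remain on a hierarchy level (stopParBatches=4)
dataset_adjusted <- BERT(dataset_raw, cores=4, corereduction=2, stopParBatches=4)
```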
## Batch Effect Correction Using SummarizedExperiment

Here, BERT takes the input data from a `SummarizedExperiment` instead. Batch effect correction is then performed using ComBat as the underlying algorithm (`method` is left at its default) and all computations are performed on a single process (the `cores` parameter is left at its default).

```{r}
nrows <- 200
ncols <- 8
# SummarizedExperiments store samples in columns and features in rows (in contrast to BERT).
# BERT will automatically account for this.
expr_values <- matrix(runif(nrows * ncols, 1, 1e4), nrows)
# colData also takes further metadata information, such as Label, Sample,
# Reference or Covariables
colData <- data.frame("Batch"=c(1,1,1,1,2,2,2,2), "Label"=c(1,2,1,2,1,2,1,2), "Sample"=c(1,2,3,4,5,6,7,8))
dataset_raw = SummarizedExperiment::SummarizedExperiment(assays=list(expr=expr_values), colData=colData)
dataset_adjusted = BERT(dataset_raw, assayname = "expr")
```

## BERT with Covariables

BERT can utilize categorical covariables that are specified in columns `Cov_1, Cov_2, ...`. These columns are automatically detected and integrated into the batch effect correction process.

```{r}
# import BERT
library(BERT)
# set seed for reproducibility
set.seed(1)
# generate data with 5 batches, 60 features, 30 samples per batch, 15% missing values and 2 classes
dataset_raw <- generate_dataset(features=60, batches=5, samplesperbatch=30, mvstmt=0.15, classes=2)
# create covariable column with 2 possible values, e.g. male/female condition
dataset_raw["Cov_1"] = sample(c(1,2), size=dim(dataset_raw)[1], replace=TRUE)
# BERT
dataset_adjusted <- BERT(dataset_raw)
```

## BERT with References

In rare cases, class distributions across experiments may be severely skewed. In particular, a batch might contain classes that other batches don't contain. In these cases, samples of common conditions may serve as references (*bridges*) between the batches (`method="ref"`). BERT uses as references those samples whose condition is specified in the `Reference` column of the input. All other samples are co-adjusted. Please note that this strategy implicitly uses limma as the underlying batch effect correction algorithm.

```{r}
# import BERT
library(BERT)
# generate data with 4 batches, 6 features, 15 samples per batch, 15% missing values and 2 classes
dataset_raw <- generate_dataset(features=6, batches=4, samplesperbatch=15, mvstmt=0.15, classes=2)
# create reference column with default value 0. A 0 indicates that the respective sample should be co-adjusted only.
dataset_raw[, "Reference"] <- 0
# randomly select 2 references per batch and class - in practice, this choice will be
# determined by external requirements (e.g. the class is known for only these samples)
batches <- unique(dataset_raw$Batch) # all the batches
for(b in batches){ # iterate over all batches
    # references from class 1
    ref_idx = sample(which((dataset_raw$Batch==b)&(dataset_raw$Label==1)), size=2, replace=FALSE)
    dataset_raw[ref_idx, "Reference"] <- 1
    # references from class 2
    ref_idx = sample(which((dataset_raw$Batch==b)&(dataset_raw$Label==2)), size=2, replace=FALSE)
    dataset_raw[ref_idx, "Reference"] <- 2
}
# BERT
dataset_adjusted <- BERT(dataset_raw, method="ref")
```

# Issues

Issues can be reported in the GitHub forum, the Bioconductor forum, or directly to [the authors](mailto:schumany@hsu-hh.de,schlumbohm@hsu-hh.de).

# License

This code is published under the GPLv3.0 license and is available for non-commercial academic purposes.

# Reference

Please cite our manuscript if you use BERT for your research:

*Yannis Schumann, Simon Schlumbohm et al., BERT - Batch Effect Reduction Trees with Missing Value Tolerance, 2023*

# Session Info

```{r}
sessionInfo()
```