BERT-Vignette

Introduction

BERT (Batch-Effect Removal with Trees) offers flexible and efficient batch effect correction of omics data, while providing maximum tolerance to missing values. Tested on multiple datasets from proteomic analyses, BERT offered a typical 5-10x runtime improvement over existing methods, while retaining more numeric values and preserving batch effect reduction quality.

As such, BERT is a valuable preprocessing tool for data analysis workflows, in particular for proteomic data. By providing BERT via Bioconductor, we make this tool available to a wider research community. The accompanying research paper has been published in PROTEOMICS (see the Reference section at the end of this vignette).

BERT addresses the same fundamental data integration challenges as the [HarmonizR](https://github.com/HSU-HPC/HarmonizR) package, which was released on Bioconductor in November 2023. However, various algorithmic modifications and optimizations in BERT provide better execution time and better data coverage than HarmonizR. Moreover, BERT offers a more user-friendly design and a less error-prone input format.

Please note that our package BERT is neither affiliated with nor related to Bidirectional Encoder Representations from Transformers as published by Google.

Please report any questions and issues in the GitHub forum or the Bioconductor forum, or contact the authors directly.

Installation

Please download and install a current version of R (Windows binaries). You might want to consider installing a development environment as well, e.g. RStudio. Finally, BERT can be installed via Bioconductor using

if (!require("BiocManager", quietly = TRUE)){
    install.packages("BiocManager")
}
BiocManager::install("BERT")

which will install all required dependencies. To install the development version of BERT, you can use devtools as follows

devtools::install_github("HSU-HPC/BERT")

which may require the manual installation of the dependencies sva and limma.

if (!require("BiocManager", quietly = TRUE)){
    install.packages("BiocManager")
}
BiocManager::install("sva")
BiocManager::install("limma")

Data Preparation

As input, BERT requires a dataframe1 with samples in rows and features in columns. For each sample, the respective batch should be indicated by an integer or string in a corresponding column labelled Batch. Missing values should be labelled as NA. A valid example dataframe could look like this:

example = data.frame(feature_1 = stats::rnorm(5), feature_2 = stats::rnorm(5), Batch=c(1,1,2,2,2))
example
#>     feature_1   feature_2 Batch
#> 1  0.01386623 -0.62925270     1
#> 2 -0.24159216 -0.28920177     1
#> 3  1.12482991 -0.10920596     2
#> 4 -0.83095285  0.09400311     2
#> 5  0.17607188  0.69546424     2

Note that each batch should contain at least two samples. The following optional columns can be passed (a short example dataframe combining several of them is sketched after this list):

  • Label A column with integers or strings indicating the (known) class for each sample. NA is not allowed. BERT may use this column and Batch to compute quality metrics after batch effect correction.

  • Sample A sample name. This column is ignored by BERT and can be used to provide meta-information for further processing.

  • Cov_1, Cov_2, …, Cov_x: One or multiple columns with integers, indicating one or several covariate levels. NA is not allowed. If one or more of these columns are present, BERT will pass them as covariates to the underlying batch effect correction method. As an example, this functionality can be used to preserve differences between healthy/tumorous samples, if some of the batches exhibit strongly variable class distributions. Note that BERT requires at least two numeric values per batch and unique covariate level to adjust a feature. Features that don’t satisfy this condition in a specific batch are set to NA for that batch.

  • Reference A column with integers or strings indicating whether a sample should be used for “learning” the transformation for batch effect correction or whether the sample should be co-adjusted using the transformation learned from the other samples. NA is not allowed. This feature can be used if some batches contain unique classes or samples with unknown classes, which would prohibit the usage of covariate columns. If the column contains a 0 for a sample, this sample will be co-adjusted. Otherwise, the column should contain the respective class (encoded as integer or string). Note that BERT requires at least two references of common class per adjustment step and that the Reference column is mutually exclusive with covariate columns.
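
For illustration, a dataframe combining several optional columns (here Label, Sample and one covariate column Cov_1) could look like the sketch below. The values are arbitrary and chosen only so that each batch contains at least two samples per covariate level; the feature names are placeholders.

example = data.frame(feature_1 = stats::rnorm(8), feature_2 = stats::rnorm(8),
                     Batch = c(1,1,1,1,2,2,2,2),      # mandatory batch column
                     Label = c(1,1,2,2,1,1,2,2),      # known class per sample, no NA
                     Sample = paste0("sample_", 1:8), # ignored by BERT (meta-information)
                     Cov_1 = c(1,2,1,2,1,2,1,2))      # one categorical covariate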

Note that BERT tries to find all metadata information for a SummarizedExperiment, including the mandatory batch information, using colData. For instance, a valid SummarizedExperiment might be defined as

nrows <- 200
ncols <- 8
expr_values <- matrix(runif(nrows * ncols, 1, 1e4), nrows)
# colData also takes all other metadata information, such as Label, Sample,
# Covariables etc.
colData <- data.frame(Batch=c(1,1,1,1,2,2,2,2), Reference=c(1,1,0,0,1,1,0,0))
dataset_raw = SummarizedExperiment::SummarizedExperiment(assays=list(expr=expr_values), colData=colData)

Basic Usage

BERT can be invoked by importing the BERT library and calling the BERT function. The batch effect corrected data is returned as a dataframe that mirrors the input dataframe2.

library(BERT)
# generate test data with 10% missing values as provided by the BERT library
dataset_raw <- generate_dataset(features=60, batches=10, samplesperbatch=10, mvstmt=0.1, classes=2)
# apply BERT
dataset_adjusted <- BERT(dataset_raw)
#> 2025-02-28 07:05:13.666586 INFO::Formatting Data.
#> 2025-02-28 07:05:13.672952 INFO::Replacing NaNs with NAs.
#> 2025-02-28 07:05:13.67901 INFO::Removing potential empty rows and columns
#> 2025-02-28 07:05:13.868685 INFO::Found  600  missing values.
#> 2025-02-28 07:05:13.878993 INFO::Introduced 0 missing values due to singular proteins at batch/covariate level.
#> 2025-02-28 07:05:13.879446 INFO::Done
#> 2025-02-28 07:05:13.879819 INFO::Acquiring quality metrics before batch effect correction.
#> 2025-02-28 07:05:13.889518 INFO::Starting hierarchical adjustment
#> 2025-02-28 07:05:13.890156 INFO::Found  10  batches.
#> 2025-02-28 07:05:13.890545 INFO::Cores argument is not defined or BPPARAM has been specified. Argument corereduction will not be used.
#> 2025-02-28 07:05:15.64557 INFO::Using default BPPARAM
#> 2025-02-28 07:05:15.646084 INFO::Processing subtree level 1
#> 2025-02-28 07:05:16.951647 INFO::Processing subtree level 2
#> 2025-02-28 07:05:18.259906 INFO::Adjusting the last 1 batches sequentially
#> 2025-02-28 07:05:18.261234 INFO::Done
#> 2025-02-28 07:05:18.261615 INFO::Acquiring quality metrics after batch effect correction.
#> 2025-02-28 07:05:18.264859 INFO::ASW Batch was 0.525886433538828 prior to batch effect correction and is now -0.125485842708554 .
#> 2025-02-28 07:05:18.265285 INFO::ASW Label was 0.287724609911867 prior to batch effect correction and is now 0.814909470314648 .
#> 2025-02-28 07:05:18.266023 INFO::Total function execution time is  4.6131763458252  s and adjustment time is  4.37130951881409 s ( 94.76 )

BERT uses the logging library to convey live information to the user during the adjustment procedure. The algorithm first verifies the shape and suitability of the input dataframe (lines 1-6) before continuing with the actual batch effect correction (lines 8-14). BERT measures batch effects before and after the correction step by means of the average silhouette width (ASW) with respect to batch and labels (lines 7 and 15). The ASW Label should increase in a successful batch effect correction, whereas low values ( ≤ 0) are desirable for the ASW Batch3. Finally, BERT prints the total function execution time (including the computation time for the quality metrics).
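
For readers who want to inspect this metric themselves, the following is a minimal sketch of a comparable average silhouette width computation with respect to the Batch column, applied to the dataset_adjusted object from the call above. It assumes the cluster package and Euclidean distances and illustrates the idea of the metric; it is not necessarily identical to BERT’s internal implementation.

# hedged sketch of an ASW-style metric; not BERT's internal code
library(cluster)
feature_cols <- setdiff(colnames(dataset_adjusted), c("Batch", "Label"))
d <- stats::dist(dataset_adjusted[, feature_cols])  # missing values are excluded pairwise
sil <- cluster::silhouette(as.integer(factor(dataset_adjusted$Batch)), d)
mean(sil[, "sil_width"], na.rm = TRUE)  # low or negative values indicate removed batch effects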

Advanced Options

Parameters

BERT offers a large number of parameters to customize the batch effect adjustment. The full function call, including all defaults is

BERT(data, cores = NULL, combatmode = 1, corereduction = 2, stopParBatches = 2,
     backend = "default", method = "ComBat", qualitycontrol = TRUE, verify = TRUE,
     labelname = "Label", batchname = "Batch", referencename = "Reference",
     samplename = "Sample", covariatename = NULL, BPPARAM = NULL, assayname = NULL)

In the following, we list the respective meaning of each parameter (an example call combining several of them is shown after the list):

  • data: The input dataframe/matrix/SummarizedExperiment to adjust. See Data Preparation for detailed formatting instructions. The data must contain at least two samples per batch and at least two features.

  • cores: BERT uses BiocParallel for parallelization. If the user specifies a value for cores, BERT internally creates and uses a new instance of BiocParallelParam, which is, however, not exposed to the user. Setting this parameter can speed up the batch effect adjustment considerably, in particular for large datasets and on unix-based operating systems. A value between 2 and 4 is a reasonable choice for typical commodity hardware. Multi-node computations are not supported as of now. If cores is not specified, BERT defaults to BiocParallel::bpparam(), which may have been set by the user or the system. Additionally, the user can directly specify an instance of BiocParallelParam to be used via the BPPARAM argument.
  • combatmode: An integer that encodes the parameters to use for ComBat.

        Value   par.prior   mean.only
        1       TRUE        FALSE
        2       TRUE        TRUE
        3       FALSE       FALSE
        4       FALSE       TRUE

The value of this parameter will be ignored if method!="ComBat".

  • corereduction: Positive integer indicating the factor by which the number of processes should be reduced, once no further adjustment is possible for the current number of batches.4 This parameter is used only if the user specified a custom value for the cores parameter.

  • stopParBatches: Positive integer indicating the minimum number of batches required at a hierarchy level to proceed with parallelized adjustment. If the number of batches is smaller, adjustment will be performed sequentially to avoid communication overheads.

  • backend: The backend to use for inter-process communication. Possible choices are default and file, where the former refers to the default communication backend of the requested parallelization mode and the latter will create temporary .rds files for data communication. ‘default’ is usually faster for small to medium sized datasets.

  • method: The method to use for the underlying batch effect correction steps. Should be either ComBat, limma for limma::removeBatchEffect, or ref for adjustment using specified references (cf. Data Preparation). The underlying batch effect adjustment method for ref is a modified version of the limma method.

  • qualitycontrol: A boolean to (de)activate the ASW computation. Deactivating it accelerates the overall computation.

  • verify: A boolean to (de)activate the initial format check of the input data. Deactivating this verification step accelerates the computations.

  • labelname: A string containing the name of the column to use as class labels. The default is “Label”.

  • batchname: A string containing the name of the column to use as batch labels. The default is “Batch”.

  • referencename: A string containing the name of the column to use as reference labels. The default is “Reference”.

  • covariatename: A vector containing the names of columns with categorical covariables. The default is NULL, in which case all column names are matched against the pattern “Cov”.

  • BPPARAM: An instance of BiocParallelParam that will be used for parallelization. The default is NULL, in which case the value of cores determines the behaviour of BERT.

  • assayname: If the user chooses to pass a SummarizedExperiment object, they need to specify the name of the assay that they want to apply BERT to here. BERT then returns the input SummarizedExperiment with an additional assay labeled assayname_BERTcorrected.
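
As a brief illustration of how several of these parameters can be combined, consider the following sketch; the generated dataset and the concrete parameter choices are arbitrary and only meant to demonstrate the interface.

library(BERT)
# illustrative dataset; any valid input (see Data Preparation) works here
dataset_raw <- generate_dataset(features=60, batches=8, samplesperbatch=10, mvstmt=0.1, classes=2)
dataset_adjusted <- BERT(
    dataset_raw,
    method = "limma",                     # use limma instead of ComBat
    qualitycontrol = FALSE,               # skip the ASW computation
    verify = FALSE,                       # skip the initial format check
    BPPARAM = BiocParallel::SerialParam() # explicit sequential BiocParallel backend
)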

Verbosity

BERT utilizes the logging package for output. The user can easily specify the verbosity of BERT by setting the global logging level in the script. For instance

logging::setLevel("WARN") # set level to warn and upwards
result <- BERT(data,cores = 1) # BERT executes silently

Choosing the Optimal Number of Cores

BERT exposes a number of parameters for parallelization so as to provide users with maximum flexibility. For typical scenarios, however, the default parameters are well suited. For very large experiments ( > 15 batches), we recommend increasing the number of cores (a reasonable value is 4, but larger values may be possible on your hardware). Most users should leave all other parameters at their respective defaults.
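
For instance, a parallel adjustment of a large experiment (with dataset_raw generated as in the examples below) could be requested as in the following sketch; the value of cores is only a suggestion and should be matched to your hardware.

# hedged sketch: parallel adjustment on 4 processes
dataset_adjusted <- BERT(dataset_raw, cores = 4)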

Examples

In the following, we present simple cookbook examples for BERT usage. Note that ASWs (and runtime) will most likely differ on your machine, since the data generating process involves multiple random choices.

Sequential Adjustment with limma

Here, BERT uses limma as the underlying batch effect correction algorithm (method='limma') and performs all computations on a single process (the cores parameter is left at its default).

# import BERT
library(BERT)
# generate data with 20 batches, 60 features, 15 samples per batch, 15% missing values and 2 classes
dataset_raw <- generate_dataset(features=60, batches=20, samplesperbatch=15, mvstmt=0.15, classes=2)
# BERT
dataset_adjusted <- BERT(dataset_raw, method="limma")
#> 2025-02-28 07:05:18.303092 INFO::Formatting Data.
#> 2025-02-28 07:05:18.3036 INFO::Replacing NaNs with NAs.
#> 2025-02-28 07:05:18.304358 INFO::Removing potential empty rows and columns
#> 2025-02-28 07:05:18.306092 INFO::Found  2700  missing values.
#> 2025-02-28 07:05:18.324271 INFO::Introduced 0 missing values due to singular proteins at batch/covariate level.
#> 2025-02-28 07:05:18.324646 INFO::Done
#> 2025-02-28 07:05:18.325021 INFO::Acquiring quality metrics before batch effect correction.
#> 2025-02-28 07:05:18.332837 INFO::Starting hierarchical adjustment
#> 2025-02-28 07:05:18.333333 INFO::Found  20  batches.
#> 2025-02-28 07:05:18.333673 INFO::Cores argument is not defined or BPPARAM has been specified. Argument corereduction will not be used.
#> 2025-02-28 07:05:18.334083 INFO::Using default BPPARAM
#> 2025-02-28 07:05:18.334422 INFO::Processing subtree level 1
#> 2025-02-28 07:05:18.647633 INFO::Processing subtree level 2
#> 2025-02-28 07:05:18.946677 INFO::Processing subtree level 3
#> 2025-02-28 07:05:19.256489 INFO::Adjusting the last 1 batches sequentially
#> 2025-02-28 07:05:19.258234 INFO::Done
#> 2025-02-28 07:05:19.258724 INFO::Acquiring quality metrics after batch effect correction.
#> 2025-02-28 07:05:19.269151 INFO::ASW Batch was 0.476884899601833 prior to batch effect correction and is now -0.152987220699363 .
#> 2025-02-28 07:05:19.269592 INFO::ASW Label was 0.325605567235534 prior to batch effect correction and is now 0.785303041511127 .
#> 2025-02-28 07:05:19.270212 INFO::Total function execution time is  0.96719217300415  s and adjustment time is  0.924987554550171 s ( 95.64 )

Parallel Batch Effect Correction with ComBat

Here, BERT uses ComBat as the underlying batch effect correction algorithm (method is left at its default) and performs all computations on 2 processes (cores=2).

# import BERT
library(BERT)
# generate data with 20 batches, 60 features, 15 samples per batch, 15% missing values and 2 classes
dataset_raw <- generate_dataset(features=60, batches=20, samplesperbatch=15, mvstmt=0.15, classes=2)
# BERT
dataset_adjusted <- BERT(dataset_raw, cores=2)
#> 2025-02-28 07:05:19.295856 INFO::Formatting Data.
#> 2025-02-28 07:05:19.296398 INFO::Replacing NaNs with NAs.
#> 2025-02-28 07:05:19.297146 INFO::Removing potential empty rows and columns
#> 2025-02-28 07:05:19.298758 INFO::Found  2700  missing values.
#> 2025-02-28 07:05:19.317012 INFO::Introduced 0 missing values due to singular proteins at batch/covariate level.
#> 2025-02-28 07:05:19.317394 INFO::Done
#> 2025-02-28 07:05:19.31774 INFO::Acquiring quality metrics before batch effect correction.
#> 2025-02-28 07:05:19.325937 INFO::Starting hierarchical adjustment
#> 2025-02-28 07:05:19.326441 INFO::Found  20  batches.
#> 2025-02-28 07:05:19.827247 INFO::Set up parallel execution backend with 2 workers
#> 2025-02-28 07:05:19.828343 INFO::Processing subtree level 1 with 20 batches using 2 cores.
#> 2025-02-28 07:05:21.759526 INFO::Adjusting the last 2 batches sequentially
#> 2025-02-28 07:05:21.760543 INFO::Adjusting sequential tree level 1 with 2 batches
#> 2025-02-28 07:05:22.799623 INFO::Done
#> 2025-02-28 07:05:22.800094 INFO::Acquiring quality metrics after batch effect correction.
#> 2025-02-28 07:05:22.807284 INFO::ASW Batch was 0.460921474145384 prior to batch effect correction and is now -0.141553250480144 .
#> 2025-02-28 07:05:22.807628 INFO::ASW Label was 0.321144135754644 prior to batch effect correction and is now 0.848133169319522 .
#> 2025-02-28 07:05:22.808086 INFO::Total function execution time is  3.51231074333191  s and adjustment time is  3.47306108474731 s ( 98.88 )

Batch Effect Correction Using SummarizedExperiment

Here, BERT takes the input data as a SummarizedExperiment instead. Batch effect correction is then performed using ComBat as the underlying algorithm (method is left at its default) and all computations are performed on a single process (the cores parameter is left at its default).

nrows <- 200
ncols <- 8
# SummarizedExperiments store samples in columns and features in rows (in contrast to BERT).
# BERT will automatically account for this.
expr_values <- matrix(runif(nrows * ncols, 1, 1e4), nrows)
# colData also takes further metadata information, such as Label, Sample,
# Reference or Covariables
colData <- data.frame("Batch"=c(1,1,1,1,2,2,2,2), "Label"=c(1,2,1,2,1,2,1,2), "Sample"=c(1,2,3,4,5,6,7,8))
dataset_raw = SummarizedExperiment::SummarizedExperiment(assays=list(expr=expr_values), colData=colData)
dataset_adjusted = BERT(dataset_raw, assayname = "expr")
#> 2025-02-28 07:05:22.843132 INFO::Formatting Data.
#> 2025-02-28 07:05:22.843592 INFO::Recognized SummarizedExperiment
#> 2025-02-28 07:05:22.843905 INFO::Typecasting input to dataframe.
#> 2025-02-28 07:05:22.86383 INFO::Replacing NaNs with NAs.
#> 2025-02-28 07:05:22.864534 INFO::Removing potential empty rows and columns
#> 2025-02-28 07:05:22.866558 INFO::Found  0  missing values.
#> 2025-02-28 07:05:22.87042 INFO::Introduced 0 missing values due to singular proteins at batch/covariate level.
#> 2025-02-28 07:05:22.870745 INFO::Done
#> 2025-02-28 07:05:22.871066 INFO::Acquiring quality metrics before batch effect correction.
#> 2025-02-28 07:05:22.873516 INFO::Starting hierarchical adjustment
#> 2025-02-28 07:05:22.873961 INFO::Found  2  batches.
#> 2025-02-28 07:05:22.874271 INFO::Cores argument is not defined or BPPARAM has been specified. Argument corereduction will not be used.
#> 2025-02-28 07:05:22.874612 INFO::Using default BPPARAM
#> 2025-02-28 07:05:22.874898 INFO::Adjusting the last 2 batches sequentially
#> 2025-02-28 07:05:22.875452 INFO::Adjusting sequential tree level 1 with 2 batches
#> 2025-02-28 07:05:22.909021 INFO::Done
#> 2025-02-28 07:05:22.909466 INFO::Acquiring quality metrics after batch effect correction.
#> 2025-02-28 07:05:22.9124 INFO::ASW Batch was 0.00565746401776073 prior to batch effect correction and is now -0.0851979186383523 .
#> 2025-02-28 07:05:22.912806 INFO::ASW Label was -0.00486800867426994 prior to batch effect correction and is now 0.0102896056050668 .
#> 2025-02-28 07:05:22.913364 INFO::Total function execution time is  0.0702228546142578  s and adjustment time is  0.0351474285125732 s ( 50.05 )
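
As described in the Parameters section, BERT adds the corrected values as an additional assay named assayname_BERTcorrected (here expr_BERTcorrected). A minimal sketch for retrieving them:

# extract the batch-effect corrected assay from the returned SummarizedExperiment
corrected <- SummarizedExperiment::assay(dataset_adjusted, "expr_BERTcorrected")
dim(corrected)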

BERT with Covariables

BERT can utilize categorical covariables that are specified in columns Cov_1, Cov_2, .... These columns are automatically detected and integrated into the batch effect correction process.

# import BERT
library(BERT)
# set seed for reproducibility
set.seed(1)
# generate data with 5 batches, 60 features, 30 samples per batch, 15% missing values and 2 classes
dataset_raw <- generate_dataset(features=60, batches=5, samplesperbatch=30, mvstmt=0.15, classes=2)
# create covariable column with 2 possible values, e.g. male/female condition
dataset_raw["Cov_1"] = sample(c(1,2), size=dim(dataset_raw)[1], replace=TRUE)
# BERT
dataset_adjusted <- BERT(dataset_raw)
#> 2025-02-28 07:05:22.935888 INFO::Formatting Data.
#> 2025-02-28 07:05:22.936399 INFO::Replacing NaNs with NAs.
#> 2025-02-28 07:05:22.937109 INFO::Removing potential empty rows and columns
#> 2025-02-28 07:05:22.938637 INFO::Found  1350  missing values.
#> 2025-02-28 07:05:22.93928 INFO::BERT requires at least 2 numeric values per batch/covariate level. This may reduce the number of adjustable features considerably, depending on the quantification technique.
#> 2025-02-28 07:05:22.971034 INFO::Introduced 0 missing values due to singular proteins at batch/covariate level.
#> 2025-02-28 07:05:22.971489 INFO::Done
#> 2025-02-28 07:05:22.971791 INFO::Acquiring quality metrics before batch effect correction.
#> 2025-02-28 07:05:22.974938 INFO::Starting hierarchical adjustment
#> 2025-02-28 07:05:22.975363 INFO::Found  5  batches.
#> 2025-02-28 07:05:22.975663 INFO::Cores argument is not defined or BPPARAM has been specified. Argument corereduction will not be used.
#> 2025-02-28 07:05:22.976029 INFO::Using default BPPARAM
#> 2025-02-28 07:05:22.976336 INFO::Processing subtree level 1
#> 2025-02-28 07:05:23.138969 INFO::Adjusting the last 2 batches sequentially
#> 2025-02-28 07:05:23.140505 INFO::Adjusting sequential tree level 1 with 2 batches
#> 2025-02-28 07:05:23.182359 INFO::Done
#> 2025-02-28 07:05:23.182916 INFO::Acquiring quality metrics after batch effect correction.
#> 2025-02-28 07:05:23.18665 INFO::ASW Batch was 0.492773245691086 prior to batch effect correction and is now -0.0377157224767566 .
#> 2025-02-28 07:05:23.187077 INFO::ASW Label was 0.40854766060101 prior to batch effect correction and is now 0.895560693013661 .
#> 2025-02-28 07:05:23.187648 INFO::Total function execution time is  0.251779079437256  s and adjustment time is  0.207047939300537 s ( 82.23 )

BERT with references

In rare cases, class distributions across experiments may be severely skewed. In particular, a batch might contain classes that other batches don’t contain. In these cases, samples of common conditions may serve as references (bridges) between the batches (method="ref"). BERT uses as references those samples that have a condition specified in the “Reference” column of the input; all other samples are co-adjusted. Please note that this strategy implicitly uses limma as the underlying batch effect correction algorithm.

# import BERT
library(BERT)
# generate data with 4 batches, 6 features, 15 samples per batch, 15% missing values and 2 classes
dataset_raw <- generate_dataset(features=6, batches=4, samplesperbatch=15, mvstmt=0.15, classes=2)
# create reference column with default value 0. A 0 indicates that the respective sample should be co-adjusted only.
dataset_raw[, "Reference"] <- 0
# randomly select 2 references per batch and class - in practice, this choice will be determined by external requirements (e.g. class known for only these samples)
batches <- unique(dataset_raw$Batch) # all the batches
for(b in batches){ # iterate over all batches
    # references from class 1
    ref_idx = sample(which((dataset_raw$Batch==b)&(dataset_raw$Label==1)), size=2, replace=FALSE)
    dataset_raw[ref_idx, "Reference"] <- 1
    # references from class 2
    ref_idx = sample(which((dataset_raw$Batch==b)&(dataset_raw$Label==2)), size=2, replace=FALSE)
    dataset_raw[ref_idx, "Reference"] <- 2
}
# BERT
dataset_adjusted <- BERT(dataset_raw, method="ref")
#> 2025-02-28 07:05:23.224037 INFO::Formatting Data.
#> 2025-02-28 07:05:23.22459 INFO::Replacing NaNs with NAs.
#> 2025-02-28 07:05:23.225237 INFO::Removing potential empty rows and columns
#> 2025-02-28 07:05:23.225974 INFO::Found  60  missing values.
#> 2025-02-28 07:05:23.228869 INFO::Introduced 0 missing values due to singular proteins at batch/covariate level.
#> 2025-02-28 07:05:23.229271 INFO::Done
#> 2025-02-28 07:05:23.229628 INFO::Acquiring quality metrics before batch effect correction.
#> 2025-02-28 07:05:23.231772 INFO::Starting hierarchical adjustment
#> 2025-02-28 07:05:23.232292 INFO::Found  4  batches.
#> 2025-02-28 07:05:23.232647 INFO::Cores argument is not defined or BPPARAM has been specified. Argument corereduction will not be used.
#> 2025-02-28 07:05:23.233105 INFO::Using default BPPARAM
#> 2025-02-28 07:05:23.233479 INFO::Processing subtree level 1
#> 2025-02-28 07:05:23.308564 INFO::Adjusting the last 2 batches sequentially
#> 2025-02-28 07:05:23.310099 INFO::Adjusting sequential tree level 1 with 2 batches
#> 2025-02-28 07:05:23.330261 INFO::Done
#> 2025-02-28 07:05:23.330746 INFO::Acquiring quality metrics after batch effect correction.
#> 2025-02-28 07:05:23.333049 INFO::ASW Batch was 0.440355021914032 prior to batch effect correction and is now -0.087480278736629 .
#> 2025-02-28 07:05:23.333488 INFO::ASW Label was 0.373906827748893 prior to batch effect correction and is now 0.919791677398366 .
#> 2025-02-28 07:05:23.334024 INFO::Total function execution time is  0.110060214996338  s and adjustment time is  0.09806227684021 s ( 89.1 )

Issues

Issues can be reported in the GitHub forum, the Bioconductor forum or directly to the authors.

License

This code is published under the GPLv3.0 License and is available for non-commercial academic purposes.

Reference

Please cite our manuscript, if you use BERT for your research: Schumann Y, Gocke A, Neumann J (2024). Computational Methods for Data Integration and Imputation of Missing Values in Omics Datasets. PROTEOMICS. ISSN 1615-9861, doi:10.1002/pmic.202400100

Session Info

sessionInfo()
#> R version 4.4.2 (2024-10-31)
#> Platform: x86_64-pc-linux-gnu
#> Running under: Ubuntu 24.04.2 LTS
#> 
#> Matrix products: default
#> BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
#> LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so;  LAPACK version 3.12.0
#> 
#> locale:
#>  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
#>  [3] LC_TIME=en_US.UTF-8        LC_COLLATE=C              
#>  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
#>  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
#>  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
#> 
#> time zone: Etc/UTC
#> tzcode source: system (glibc)
#> 
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base     
#> 
#> other attached packages:
#> [1] BERT_1.3.6       BiocStyle_2.35.0
#> 
#> loaded via a namespace (and not attached):
#>  [1] tidyselect_1.2.1            blob_1.2.4                 
#>  [3] Biostrings_2.75.4           fastmap_1.2.0              
#>  [5] janitor_2.2.1               XML_3.99-0.18              
#>  [7] digest_0.6.37               timechange_0.3.0           
#>  [9] lifecycle_1.0.4             cluster_2.1.8              
#> [11] statmod_1.5.0               survival_3.8-3             
#> [13] KEGGREST_1.47.0             invgamma_1.1               
#> [15] RSQLite_2.3.9               magrittr_2.0.3             
#> [17] genefilter_1.89.0           compiler_4.4.2             
#> [19] rlang_1.1.5                 sass_0.4.9                 
#> [21] tools_4.4.2                 yaml_2.3.10                
#> [23] knitr_1.49                  S4Arrays_1.7.3             
#> [25] bit_4.5.0.1                 DelayedArray_0.33.6        
#> [27] abind_1.4-8                 BiocParallel_1.41.2        
#> [29] BiocGenerics_0.53.6         sys_3.4.3                  
#> [31] grid_4.4.2                  stats4_4.4.2               
#> [33] xtable_1.8-4                edgeR_4.5.2                
#> [35] iterators_1.0.14            logging_0.10-108           
#> [37] SummarizedExperiment_1.37.0 cli_3.6.4                  
#> [39] rmarkdown_2.29              crayon_1.5.3               
#> [41] generics_0.1.3              httr_1.4.7                 
#> [43] DBI_1.2.3                   cachem_1.1.0               
#> [45] stringr_1.5.1               splines_4.4.2              
#> [47] parallel_4.4.2              AnnotationDbi_1.69.0       
#> [49] BiocManager_1.30.25         XVector_0.47.2             
#> [51] matrixStats_1.5.0           vctrs_0.6.5                
#> [53] Matrix_1.7-2                jsonlite_1.9.0             
#> [55] sva_3.55.0                  comprehenr_0.6.10          
#> [57] IRanges_2.41.3              S4Vectors_0.45.4           
#> [59] bit64_4.6.0-1               maketools_1.3.2            
#> [61] locfit_1.5-9.11             foreach_1.5.2              
#> [63] limma_3.63.5                jquerylib_0.1.4            
#> [65] annotate_1.85.0             glue_1.8.0                 
#> [67] codetools_0.2-20            lubridate_1.9.4            
#> [69] stringi_1.8.4               GenomeInfoDb_1.43.4        
#> [71] GenomicRanges_1.59.1        UCSC.utils_1.3.1           
#> [73] htmltools_0.5.8.1           GenomeInfoDbData_1.2.13    
#> [75] R6_2.6.1                    evaluate_1.0.3             
#> [77] lattice_0.22-6              Biobase_2.67.0             
#> [79] png_0.1-8                   memoise_2.0.1              
#> [81] snakecase_0.11.1            bslib_0.9.0                
#> [83] SparseArray_1.7.6           nlme_3.1-167               
#> [85] mgcv_1.9-1                  xfun_0.51                  
#> [87] MatrixGenerics_1.19.1       buildtools_1.0.0

  1. Matrices and SummarizedExperiments work as well, but will automatically be converted to dataframes.

  2. In particular, the row and column names are in the same order and the optional columns are preserved.

  3. The optimum of ASW Label is 1, which is typically however not achieved on real-world datasets. Also, the optimum of ASW Batch can vary, depending on the class distributions of the batches.

  4. E.g. consider a BERT call with 8 batches and 8 processes. Further adjustment is not possible with this number of processes, since batches are always processed in pairs. With corereduction=2, the number of processes for the following adjustment steps would be set to 8/2 = 4, which is the maximum number of usable processes for this example.