This vignette explains how to specify non-default machine learning frameworks and their hyperparameters when applying Infinity Flow. We will assume here that the basic usage of Infinity Flow has already been covered; if you are not familiar with this material, I suggest you first look at the basic usage vignette.
This vignette will cover:

- the regression_functions argument
- the extra_args_regression_params argument

Here is a single R code chunk that recapitulates all of the data preparation covered in the basic usage vignette.
if(!require(devtools)){
install.packages("devtools")
}
if(!require(infinityFlow)){
library(devtools)
install_github("ebecht/infinityFlow")
}
library(infinityFlow)
data(steady_state_lung)
data(steady_state_lung_annotation)
data(steady_state_lung_backbone_specification)
dir <- file.path(tempdir(), "infinity_flow_example")
input_dir <- file.path(dir, "fcs")
write.flowSet(steady_state_lung, outdir = input_dir)
#> [1] "/tmp/Rtmp2X9tic/infinity_flow_example/fcs"
write.csv(steady_state_lung_backbone_specification, file = file.path(dir, "backbone_selection_file.csv"), row.names = FALSE)
path_to_fcs <- file.path(dir, "fcs")
path_to_output <- file.path(dir, "output")
path_to_intermediary_results <- file.path(dir, "tmp")
backbone_selection_file <- file.path(dir, "backbone_selection_file.csv")
targets <- steady_state_lung_annotation$Infinity_target
names(targets) <- rownames(steady_state_lung_annotation)
isotypes <- steady_state_lung_annotation$Infinity_isotype
names(isotypes) <- rownames(steady_state_lung_annotation)
input_events_downsampling <- 1000
prediction_events_downsampling <- 500
cores = 1L
The infinity_flow() function, which encapsulates the complete Infinity Flow computational pipeline, uses two arguments to respectively select regression models and their hyperparameters. These two arguments are both lists and should have the same length. The idea is that the first list, regression_functions, will be a list of model templates (XGBoost, Neural Networks, SVMs…) to train, while the second will be used to specify their hyperparameters. The list of templates is then fit to the data using parallel computing with socketing (using the parallel package through the pbapply package), which is more memory efficient.
regression_functions argument

This argument is a list of functions which specifies how many models to train per well and which ones. Each type of machine learning model is supported through a wrapper in the infinityFlow package, and has a name of the form fitter_*. See below for the complete list:
print(grep("fitter_", ls("package:infinityFlow"), value = TRUE))
#> [1] "fitter_glmnet" "fitter_linear" "fitter_nn" "fitter_svm"
#> [5] "fitter_xgboost"
| fitter_ function | Backend | Model type |
|---|---|---|
| fitter_xgboost | XGBoost | Gradient boosted trees |
| fitter_nn | Tensorflow/Keras | Neural networks |
| fitter_svm | e1071 | Support vector machines |
| fitter_glmnet | glmnet | Generalized linear and polynomial models |
| fitter_linear | stats | Linear and polynomial models |
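Each of these wrappers is exported by infinityFlow, so if in doubt you can inspect a wrapper's formal arguments (or read its help page) directly, for example:

args(infinityFlow::fitter_xgboost)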
These functions rely on optional package dependencies (so that you do not need to install e.g. Keras if you are not planning to use it). We do however need to make sure that these dependencies are met:
optional_dependencies <- c("glmnetUtils", "e1071")
unmet_dependencies <- setdiff(optional_dependencies, rownames(installed.packages()))
if(length(unmet_dependencies) > 0){
install.packages(unmet_dependencies)
}
for(pkg in optional_dependencies){
library(pkg, character.only = TRUE)
}
In this vignette we will train all of these models. Note that if you do this on your own data, it may take quite a bit of memory (remember that the output expression matrix will be a numeric matrix with (prediction_events_downsampling x number of wells) rows and (number of wells x number of models) imputed columns, in addition to the backbone channels).
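As a rough back-of-the-envelope estimate (assuming a double-precision matrix at 8 bytes per value, and counting only the imputed columns), you can gauge the memory footprint before running the pipeline:

n_wells <- length(targets) ## number of exploratory (Infinity) wells
n_models <- 4 ## e.g. XGBoost, SVM, LASSO2 and LM as trained below
## rows = prediction_events_downsampling x n_wells; columns = n_wells x n_models
prediction_events_downsampling * n_wells * n_wells * n_models * 8 / 1e6 ## approximate size in MB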
To train multiple models, we create a list of these fitter_* functions and assign it to the regression_functions argument that will be fed to the infinity_flow function. The names of this list will be used to name your models.
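For example, the following definition reproduces the four models used throughout the rest of this vignette; the names (XGBoost, SVM, LASSO2, LM) are arbitrary labels and are the ones that appear in the fitting log and in the output column names below.

regression_functions <- list(
    XGBoost = fitter_xgboost, ## Gradient boosted trees (XGBoost)
    SVM = fitter_svm, ## Support vector machines (e1071)
    LASSO2 = fitter_glmnet, ## L1-penalized polynomial model of degree 2 (glmnet)
    LM = fitter_linear ## Linear model (stats)
)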
extra_args_regression_params argument

This argument is a list of lists (so of the form list(list(...), list(...), etc.)) of length length(regression_functions). Each element of the extra_args_regression_params object is thus a list. This lower-level list is used to pass named arguments to the corresponding machine learning fitting function. The list extra_args_regression_params is matched with the list of machine learning models regression_functions using the order of the elements in these two lists (i.e. the first regression model is matched with the first element of the list of arguments, the second elements are matched together, and so on).
backbone_size <- table(read.csv(backbone_selection_file)[,"type"])["backbone"]
extra_args_regression_params <- list(
## Passed to the first element of `regression_functions`, e.g. XGBoost. See ?xgboost for which parameters can be passed through this list
list(nrounds = 500, eta = 0.05),
# ## Passed to the second element of `regression_functions`, e.g. neural networks through keras::fit. See https://keras.rstudio.com/articles/tutorial_basic_regression.html
# list(
# object = { ## Specifies the network's architecture, loss function and optimization method
# model = keras_model_sequential()
# model %>%
# layer_dense(units = backbone_size, activation = "relu", input_shape = backbone_size) %>%
# layer_dense(units = backbone_size, activation = "relu", input_shape = backbone_size) %>%
# layer_dense(units = 1, activation = "linear")
# model %>%
# compile(loss = "mean_squared_error", optimizer = optimizer_sgd(lr = 0.005))
# serialize_model(model)
# },
# epochs = 1000, ## Number of maximum training epochs. The training is however stopped early if the loss on the validation set does not improve for 20 epochs. This early stopping is hardcoded in fitter_nn.
# validation_split = 0.2, ## Fraction of the training data used to monitor validation loss
# verbose = 0,
# batch_size = 128 ## Size of the minibatches for training.
# ),
## Passed to the SVM element of `regression_functions` (fitter_svm). See help(svm, "e1071") for possible arguments
list(type = "nu-regression", cost = 8, nu=0.5, kernel="radial"),
## Passed to the LASSO2 element (fitter_glmnet). This should contain a mandatory argument `degree` which specifies the degree of the polynomial model (1 for linear, 2 for quadratic, etc.). Here we use degree = 2, corresponding to our LASSO2 model. Other arguments are passed to getS3method("cv.glmnet", "formula")
list(alpha = 1, nfolds=10, degree = 2),
## Passed to the LM element (fitter_linear). This only accepts a degree argument specifying the degree of the polynomial model. Here we use degree = 1, corresponding to a linear model.
list(degree = 1)
)
We can now run the pipeline with these custom arguments to train all the models.
if(length(regression_functions) != length(extra_args_regression_params)){
stop("Number of models and number of lists of hyperparameters mismatch")
}
imputed_data <- infinity_flow(
regression_functions = regression_functions,
extra_args_regression_params = extra_args_regression_params,
path_to_fcs = path_to_fcs,
path_to_output = path_to_output,
path_to_intermediary_results = path_to_intermediary_results,
backbone_selection_file = backbone_selection_file,
annotation = targets,
isotype = isotypes,
input_events_downsampling = input_events_downsampling,
prediction_events_downsampling = prediction_events_downsampling,
verbose = TRUE,
cores = cores
)
#> Using directories...
#> input: /tmp/Rtmp2X9tic/infinity_flow_example/fcs
#> intermediary: /tmp/Rtmp2X9tic/infinity_flow_example/tmp
#> subset: /tmp/Rtmp2X9tic/infinity_flow_example/tmp/subsetted_fcs
#> rds: /tmp/Rtmp2X9tic/infinity_flow_example/tmp/rds
#> annotation: /tmp/Rtmp2X9tic/infinity_flow_example/tmp/annotation.csv
#> output: /tmp/Rtmp2X9tic/infinity_flow_example/output
#> Parsing and subsampling input data
#> Downsampling to 1000 events per input file
#> Concatenating expression matrices
#> Writing to disk
#> Logicle-transforming the data
#> Backbone data
#> Exploratory data
#> Writing to disk
#> Transforming expression matrix
#> Writing to disk
#> Harmonizing backbone data
#> Scaling expression matrices
#> Writing to disk
#> Fitting regression models
#> Randomly selecting 50% of the subsetted input files to fit models
#> Fitting...
#> XGBoost
#>
#> 6.910216 seconds
#> SVM
#>
#> 0.8790326 seconds
#> LASSO2
#>
#> 4.272688 seconds
#> LM
#>
#> 0.07119823 seconds
#> Imputing missing measurements
#> Randomly drawing events to predict from the test set
#> Imputing...
#> XGBoost
#>
#> 0.5974102 seconds
#> SVM
#>
#> 0.6611042 seconds
#> LASSO2
#>
#> 0.6033077 seconds
#> LM
#>
#> 0.03700638 seconds
#> Concatenating predictions
#> Writing to disk
#> Performing dimensionality reduction
#> 08:35:12 UMAP embedding parameters a = 1.262 b = 1.003
#> 08:35:12 Read 5000 rows and found 17 numeric columns
#> 08:35:12 Using Annoy for neighbor search, n_neighbors = 15
#> 08:35:12 Building Annoy index with metric = euclidean, n_trees = 50
#> 0% 10 20 30 40 50 60 70 80 90 100%
#> [----|----|----|----|----|----|----|----|----|----|
#> **************************************************|
#> 08:35:13 Writing NN index file to temp file /tmp/Rtmp2X9tic/file105b21b5943b
#> 08:35:13 Searching Annoy index using 1 thread, search_k = 1500
#> 08:35:13 Annoy recall = 100%
#> 08:35:14 Commencing smooth kNN distance calibration using 1 thread with target n_neighbors = 15
#> 08:35:14 Initializing from normalized Laplacian + noise (using irlba)
#> 08:35:14 Commencing optimization for 1000 epochs, with 101762 positive edges using 1 thread
#> 08:35:21 Optimization finished
#> Exporting results
#> Transforming predictions back to a linear scale
#> Exporting FCS files (1 per well)
#> Plotting
#> Chopping off the top and bottom 0.005 quantiles
#> Shuffling the order of cells (rows)
#> Producing plot
#> Background correcting
#> Transforming background-corrected predictions. (Use logarithm to visualize)
#> Exporting FCS files (1 per well)
#> Plotting
#> Chopping off the top and bottom 0.005 quantiles
#> Shuffling the order of cells (rows)
#> Producing plot
Our model names are appended to the predicted markers in the output. For more discussion about the outputs (including output files written to disk and plots), see the basic usage vignette.
print(imputed_data$bgc[1:2, ])
#> FSC-A FSC-H FSC-W SSC-A SSC-H SSC-W CD69-CD301b
#> 2 83701.14 1.003590 2.0971782 3891.54 -0.342262 1.7557083 0.7013766
#> 4 42699.96 -0.618584 -0.5407421 2580.14 -0.658474 -0.9282587 -0.9260009
#> Zombie MHCII CD4 CD44 CD8 CD11c CD11b
#> 2 189.17816 1.1951794 -0.07933989 -0.5638311 -0.2042948 0.2372857 -0.79507468
#> 4 -23.56643 0.7030321 -0.38494943 -0.1014641 -0.5803698 -2.2500122 0.06054808
#> F480 Ly6C Lineage CD45a488 FJComp-PE(yg)-A CD24
#> 2 -0.8521883 -0.5664014 0.16517319 0.4685077 0.3075018 1.2638631
#> 4 -0.6408979 -1.3133402 0.03089091 0.2742081 0.5367503 0.6763594
#> CD103 Time CD137.LASSO2_bgc CD137.LM_bgc CD137.SVM_bgc
#> 2 -1.5088670 2643.0068 0.14272468 -0.17840523 -0.07316507
#> 4 -0.7173135 841.1022 0.01398339 -0.08699039 0.64268015
#> CD137.XGBoost_bgc CD28.LASSO2_bgc CD28.LM_bgc CD28.SVM_bgc CD28.XGBoost_bgc
#> 2 0.5721219 -0.1909882 -0.55579351 -0.5073503 -0.2842640
#> 4 0.2177440 -0.1256295 -0.06556825 -0.3711287 -0.2891188
#> CD49b(pan-NK).LASSO2_bgc CD49b(pan-NK).LM_bgc CD49b(pan-NK).SVM_bgc
#> 2 -0.1547419 -0.5186678 0.3795304
#> 4 -0.1424706 -0.2297050 -0.6072535
#> CD49b(pan-NK).XGBoost_bgc KLRG1.LASSO2_bgc KLRG1.LM_bgc KLRG1.SVM_bgc
#> 2 -0.5064793 -0.2695766 -0.04078163 0.2696200
#> 4 -0.3729103 -0.2878668 -0.35536081 -0.2837179
#> KLRG1.XGBoost_bgc Ly-49c/F/I/H.LASSO2_bgc Ly-49c/F/I/H.LM_bgc
#> 2 -0.06787265 -0.04537662 0.1693365
#> 4 -0.46895977 -0.03439971 -0.2385264
#> Ly-49c/F/I/H.SVM_bgc Ly-49c/F/I/H.XGBoost_bgc Podoplanin.LASSO2_bgc
#> 2 -0.335342 -0.3592746 -0.1019954
#> 4 -0.539743 -0.1477841 -0.5296539
#> Podoplanin.LM_bgc Podoplanin.SVM_bgc Podoplanin.XGBoost_bgc SHIgG.LASSO2_bgc
#> 2 0.3320326 -0.4110926 -0.05816504 -1.272078e-16
#> 4 -0.6828789 -0.9613203 -0.83495415 -1.272078e-16
#> SHIgG.LM_bgc SHIgG.SVM_bgc SHIgG.XGBoost_bgc SSEA-3.LASSO2_bgc SSEA-3.LM_bgc
#> 2 5.960279e-17 -1.272078e-16 -5.960279e-17 0.1574289 0.23750846
#> 4 -9.740645e-17 -1.272078e-16 9.740645e-17 0.1076609 -0.02336495
#> SSEA-3.SVM_bgc SSEA-3.XGBoost_bgc TCR Vg3.LASSO2_bgc TCR Vg3.LM_bgc
#> 2 0.06904159 0.06144702 0.2872816 0.1023300
#> 4 -0.40655506 -0.09298442 -0.1457136 -0.3676173
#> TCR Vg3.SVM_bgc TCR Vg3.XGBoost_bgc rIgM.LASSO2_bgc rIgM.LM_bgc
#> 2 0.4061311 0.228236347 1.272078e-16 -1.123071e-16
#> 4 0.5845448 -0.001624498 1.272078e-16 -1.123071e-16
#> rIgM.SVM_bgc rIgM.XGBoost_bgc UMAP1 UMAP2 PE_id
#> 2 1.272078e-16 0 577.7376 637.4236 1
#> 4 1.272078e-16 0 311.2962 944.6063 1
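Since the model name is embedded in each imputed column name, you can for instance extract the background-corrected predictions of a single model by matching on that suffix:

## Columns imputed by the XGBoost model only
xgboost_cols <- grep("XGBoost", colnames(imputed_data$bgc), value = TRUE)
head(imputed_data$bgc[, xgboost_cols])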
Neural networks won’t build within knitr for me, but here is an example of the syntax if you want to use them.
Note: there is an issue with serialization of the neural networks and socketing since I updated to R-4.0.1. If you want to use neural networks, please make sure to set cores = 1L, as in the example below.
optional_dependencies <- c("keras", "tensorflow")
unmet_dependencies <- setdiff(optional_dependencies, rownames(installed.packages()))
if(length(unmet_dependencies) > 0){
install.packages(unmet_dependencies)
}
for(pkg in optional_dependencies){
library(pkg, character.only = TRUE)
}
invisible(eval(try(keras_model_sequential()))) ## avoids conflicts with flowCore...
if(!is_keras_available()){
install_keras() ## Install keras using the R interface - can take a while
}
if (!requireNamespace("BiocManager", quietly = TRUE)){
install.packages("BiocManager")
}
BiocManager::install("infinityFlow")
library(infinityFlow)
data(steady_state_lung)
data(steady_state_lung_annotation)
data(steady_state_lung_backbone_specification)
dir <- file.path(tempdir(), "infinity_flow_example")
input_dir <- file.path(dir, "fcs")
write.flowSet(steady_state_lung, outdir = input_dir)
write.csv(steady_state_lung_backbone_specification, file = file.path(dir, "backbone_selection_file.csv"), row.names = FALSE)
path_to_fcs <- file.path(dir, "fcs")
path_to_output <- file.path(dir, "output")
path_to_intermediary_results <- file.path(dir, "tmp")
backbone_selection_file <- file.path(dir, "backbone_selection_file.csv")
targets <- steady_state_lung_annotation$Infinity_target
names(targets) <- rownames(steady_state_lung_annotation)
isotypes <- steady_state_lung_annotation$Infinity_isotype
names(isotypes) <- rownames(steady_state_lung_annotation)
input_events_downsampling <- 1000
prediction_events_downsampling <- 500
## Passed to fitter_nn, e.g. neural networks through keras::fit. See https://keras.rstudio.com/articles/tutorial_basic_regression.html
regression_functions <- list(NN = fitter_nn)
backbone_size <- table(read.csv(backbone_selection_file)[,"type"])["backbone"]
extra_args_regression_params <- list(
list(
object = { ## Specifies the network's architecture, loss function and optimization method
model = keras_model_sequential()
model %>%
layer_dense(units = backbone_size, activation = "relu", input_shape = backbone_size) %>%
layer_dense(units = backbone_size, activation = "relu", input_shape = backbone_size) %>%
layer_dense(units = 1, activation = "linear")
model %>%
compile(loss = "mean_squared_error", optimizer = optimizer_sgd(lr = 0.005))
serialize_model(model)
},
epochs = 1000, ## Number of maximum training epochs. The training is however stopped early if the loss on the validation set does not improve for 20 epochs. This early stopping is hardcoded in fitter_nn.
validation_split = 0.2, ## Fraction of the training data used to monitor validation loss
verbose = 0,
batch_size = 128 ## Size of the minibatches for training.
)
)
imputed_data <- infinity_flow(
regression_functions = regression_functions,
extra_args_regression_params = extra_args_regression_params,
path_to_fcs = path_to_fcs,
path_to_output = path_to_output,
path_to_intermediary_results = path_to_intermediary_results,
backbone_selection_file = backbone_selection_file,
annotation = targets,
isotype = isotypes,
input_events_downsampling = input_events_downsampling,
prediction_events_downsampling = prediction_events_downsampling,
verbose = TRUE,
cores = 1L
)
Thank you for following this vignette. I hope you made it to the end without too much headache and that it was informative. General questions about proper usage of the package are best asked on the Bioconductor support site to maximize visibility for future users. If you encounter bugs, feel free to raise an issue on infinityFlow’s GitHub.
sessionInfo()
#> R version 4.4.1 (2024-06-14)
#> Platform: x86_64-pc-linux-gnu
#> Running under: Ubuntu 24.04.1 LTS
#>
#> Matrix products: default
#> BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
#> LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so; LAPACK version 3.12.0
#>
#> Random number generation:
#> RNG: L'Ecuyer-CMRG
#> Normal: Inversion
#> Sample: Rejection
#>
#> locale:
#> [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
#> [3] LC_TIME=en_US.UTF-8 LC_COLLATE=C
#> [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
#> [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
#> [9] LC_ADDRESS=C LC_TELEPHONE=C
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
#>
#> time zone: Etc/UTC
#> tzcode source: system (glibc)
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] e1071_1.7-16 glmnetUtils_1.1.9 infinityFlow_1.17.0
#> [4] flowCore_2.17.0 rmarkdown_2.28
#>
#> loaded via a namespace (and not attached):
#> [1] sass_0.4.9 generics_0.1.3 class_7.3-22
#> [4] gtools_3.9.5 shape_1.4.6.1 lattice_0.22-6
#> [7] digest_0.6.37 evaluate_1.0.1 grid_4.4.1
#> [10] iterators_1.0.14 fastmap_1.2.0 xgboost_1.7.8.1
#> [13] foreach_1.5.2 jsonlite_1.8.9 Matrix_1.7-1
#> [16] glmnet_4.1-8 survival_3.7-0 pbapply_1.7-2
#> [19] codetools_0.2-20 jquerylib_0.1.4 cli_3.6.3
#> [22] rlang_1.1.4 RProtoBufLib_2.17.0 Biobase_2.67.0
#> [25] RcppAnnoy_0.0.22 uwot_0.2.2 matlab_1.0.4.1
#> [28] splines_4.4.1 cachem_1.1.0 yaml_2.3.10
#> [31] cytolib_2.19.0 tools_4.4.1 raster_3.6-30
#> [34] parallel_4.4.1 BiocGenerics_0.53.0 buildtools_1.0.0
#> [37] R6_2.5.1 png_0.1-8 proxy_0.4-27
#> [40] matrixStats_1.4.1 stats4_4.4.1 lifecycle_1.0.4
#> [43] S4Vectors_0.43.2 irlba_2.3.5.1 terra_1.7-83
#> [46] bslib_0.8.0 data.table_1.16.2 Rcpp_1.0.13
#> [49] xfun_0.48 sys_3.4.3 knitr_1.48
#> [52] htmltools_0.5.8.1 maketools_1.3.1 compiler_4.4.1
#> [55] sp_2.1-4