Title: | TRONCO, an R package for TRanslational ONCOlogy |
---|---|
Description: | The TRONCO (TRanslational ONCOlogy) R package collects algorithms to infer progression models via the approach of Suppes-Bayes Causal Network, both from an ensemble of tumors (cross-sectional samples) and within an individual patient (multi-region or single-cell samples). The package provides a parallel implementation of algorithms that process binary matrices where each row represents a tumor sample and each column a single-nucleotide or a structural variant driving the progression; a 0/1 value models the absence/presence of that alteration in the sample. The tool can import data from plain, MAF or GISTIC format files, and can fetch it from the cBioPortal for cancer genomics. Functions for data manipulation and visualization are provided, as well as functions to import/export such data to other bioinformatics tools for, e.g., clustering or detection of mutually exclusive alterations. Inferred models can be visualized and tested for their confidence via bootstrap and cross-validation. TRONCO is used for the implementation of the Pipeline for Cancer Inference (PICNIC). |
Authors: | Marco Antoniotti [ctb], Giulio Caravagna [aut], Luca De Sano [cre, aut] , Alex Graudenzi [aut], Giancarlo Mauri [ctb], Bud Mishra [ctb], Daniele Ramazzotti [aut] |
Maintainer: | Luca De Sano <[email protected]> |
License: | GPL-3 |
Version: | 2.39.0 |
Built: | 2024-11-08 06:16:03 UTC |
Source: | https://github.com/bioc/TRONCO |
This file contains a TRONCO compliant dataset
data(aCML)
TRONCO compliant dataset
A standard TRONCO object
Luca De Sano
data from http://www.nature.com/ng/journal/v45/n1/full/ng.2495.html
AND hypothesis
AND(...)
... |
Atoms of the co-occurrence pattern, given either as labels or as partially lifted vectors. |
Vector to be added to the lifted genotype resolving the co-occurrence pattern
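A minimal sketch of how an AND pattern is typically used, passing it to hypothesis.add (documented below); the pattern label and the choice of genes are illustrative and assume both genes carry events in test_dataset.
data(test_dataset)
# add a co-occurrence (AND) hypothesis over two genes of the dataset
dataset = hypothesis.add(test_dataset, 'AND_ASXL1_TET2', AND('ASXL1', 'TET2'))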
Annotate a description on the selected dataset
annotate.description(x, label)
x |
A TRONCO compliant dataset. |
label |
A string |
A TRONCO compliant dataset.
data(test_dataset) annotate.description(test_dataset, 'new description')
Annotate stage information on the selected dataset
annotate.stages(x, stages, match.TCGA.patients = FALSE)
x |
A TRONCO compliant dataset. |
stages |
A list of stages. Rownames must match the sample list of x. |
match.TCGA.patients |
Match using TCGA notations (only first 12 characters) |
A TRONCO compliant dataset.
data(test_dataset) data(stage) test_dataset = annotate.stages(test_dataset, stage) as.stages(test_dataset)
Extract the adjacency matrix of a TRONCO model. The matrix is indexed with colnames/rownames which
represent genotype keys - these can be resolved with function keysToNames
. It is possible to
specify a subset of events to build the matrix, or a subset of models if multiple reconstructions have
been performed. Also, either the prima facie matrix or the post-regularization matrix can be extracted.
as.adj.matrix(x, events = as.events(x), models = names(x$model), type = "fit")
x |
A TRONCO model. |
events |
A subset of events as of as.events(x), all by default. |
models |
A subset of reconstructed models, all by default. |
type |
Either the prima facie ('pf') or the post-regularization ('fit') matrix, 'fit' by default. |
The adjacency matrix of a TRONCO model.
data(test_model) as.adj.matrix(test_model) as.adj.matrix(test_model, events=as.events(test_model)[5:15,]) as.adj.matrix(test_model, events=as.events(test_model)[5:15,], type='pf')
Return a dataset where all events for a gene are merged in a unique event, i.e.,
a gene-level alteration disregarding the event type. Input 'x' is checked
to be a TRONCO compliant dataset - see is.compliant
.
as.alterations(x, new.type = "Alteration", new.color = "khaki", silent = FALSE)
x |
A TRONCO compliant dataset. |
new.type |
The types label of the new event type, 'Alteration' by default. |
new.color |
The color of the event |
silent |
A parameter to disable/enable verbose messages. |
A TRONCO compliant dataset with alteration profiles.
data(muts) as.alterations(muts)
Returns a dataframe with all the bootstrap scores in a TRONCO model. It is possible to specify a subset of events or models if multiple reconstructions have been performed.
as.bootstrap.scores(x, events = as.events(x), models = names(x$model))
x |
A TRONCO model. |
events |
A subset of events as of as.events(x), all by default. |
models |
A subset of reconstructed models, all by default. |
All the bootstrap scores in a TRONCO model
data(test_model) as.bootstrap.scores(test_model) as.bootstrap.scores(test_model, events=as.events(test_model)[5:15,])
Return the colors associated to each type of event in 'x', which should be a
TRONCO compliant dataset - see is.compliant
.
as.colors(x)
x |
A TRONCO compliant dataset. |
A named vector of colors.
data(test_dataset) as.colors(test_dataset)
Extract the conditional probabilities from a TRONCO model. The return matrix is indexed with rownames which
represent genotype keys - these can be resolved with function keysToNames
. It is possible to
specify a subset of events to build the matrix, or a subset of models if multiple reconstructions have
been performed. Also, either the observed or fit probabilities can be extracted.
as.conditional.probs( x, events = as.events(x), models = names(x$model), type = "observed" )
x |
A TRONCO model. |
events |
A subset of events as of as.events(x), all by default. |
models |
A subset of reconstructed models, all by default. |
type |
Either the observed ('observed') or fit ('fit') probabilities, 'observed' by default. |
data(test_model) as.conditional.probs(test_model) as.conditional.probs(test_model, events=as.events(test_model)[5:15,])
The conditional probabilities in a TRONCO model.
Return confidence information for a TRONCO model. The available measures are: temporal priority (tp),
probability raising (pr), hypergeometric test (hg), parametric (pb), non parametric (npb) or
statistical (sb) bootstrap, entropy loss (eloss), prediction error (prederr).
Confidence is available only once a model has been reconstructed with any of the algorithms implemented
in TRONCO. If more than one model has been reconstructed - for instance via multiple regularizations -
confidence information is appropriately nested. The requested confidence is specified via
vector parameter conf
.
as.confidence(x, conf, models = names(x$model))
x |
A TRONCO model. |
conf |
A vector with any of 'tp', 'pr', 'hg', 'npb', 'pb', 'sb', 'eloss', 'prederr' or 'posterr'. |
models |
The name of the models to extract, all by default. |
A list of matrices with the event-to-event confidence.
data(test_model) as.confidence(test_model, conf='tp') as.confidence(test_model, conf=c('tp', 'hg'))
Return the description annotating the dataset, if any. Input 'x' should be
a TRONCO compliant dataset - see is.compliant
.
as.description(x)
x |
A TRONCO compliant dataset. |
The description annotating the dataset, if any.
data(test_dataset) as.description(test_dataset)
Return all events involving certain genes and of a certain type in 'x', which should be a
TRONCO compliant dataset - see is.compliant
.
as.events(x, genes = NA, types = NA, keysToNames = FALSE)
x |
A TRONCO compliant dataset. |
genes |
The genes to consider, if NA all available genes are used. |
types |
The types of events to consider, if NA all available types are used. |
keysToNames |
If TRUE, return a list of mnemonic names composed of type + gene |
A matrix with 2 columns (event type, gene name) for the events found.
data(test_dataset) as.events(test_dataset) as.events(test_dataset, types='ins_del') as.events(test_dataset, genes = 'TET2') as.events(test_dataset, types='Missing')
Return the list of events present in selected patterns
as.events.in.patterns(x, patterns = NULL)
x |
A TRONCO compliant dataset. |
patterns |
A list of patterns for which the list will be returned |
A list of events present in patterns which constitute CAPRI's hypotheses
data(test_dataset) as.events.in.patterns(test_dataset) as.events.in.patterns(test_dataset, patterns='XOR_EZH2')
Return a list of events which are observed in the input samples list
as.events.in.sample(x, sample)
x |
A TRONCO compliant dataset |
sample |
Vector of sample names |
A list of events which are observed in the input samples list
data(test_dataset) as.events.in.sample(test_dataset, c('patient 1', 'patient 7'))
Return the genotypes for a certain set of genes and type of events. Input 'x' should be a
TRONCO compliant dataset - see is.compliant
. In this case column names are substituted
with events' types.
as.gene(x, genes, types = NA)
x |
A TRONCO compliant dataset. |
genes |
The genes to consider, if NA all available genes are used. |
types |
The types of events to consider, if NA all available types are used. |
A matrix, subset of as.genotypes(x)
with colnames substituted with events' types.
data(test_dataset) as.gene(test_dataset, genes = c('EZH2', 'ASXL1'))
Return all gene symbols for which a certain type of event exists in 'x', which should be a
TRONCO compliant dataset - see is.compliant
.
as.genes(x, types = NA)
x |
A TRONCO compliant dataset. |
types |
The types of events to consider, if NA all available types are used. |
A vector of gene symbols for which a certain type of event exists
data(test_dataset) as.genes(test_dataset)
Return the list of genes present in selected patterns
as.genes.in.patterns(x, patterns = NULL)
x |
A TRONCO compliant dataset. |
patterns |
A list of patterns for which the list will be returned |
A list of genes present in patterns which constitute CAPRI's hypotheses
data(test_dataset) as.genes.in.patterns(test_dataset) as.genes.in.patterns(test_dataset, patterns='XOR_EZH2')
Return all genotypes for input 'x', which should be a TRONCO compliant dataset
see is.compliant
.
Function keysToNames
can be used to translate colnames to events.
as.genotypes(x)
x |
A TRONCO compliant dataset. |
A TRONCO genotypes matrix.
data(test_dataset) as.genotypes(test_dataset)
Return the hypotheses in the dataset which constitute CAPRI's hypotheses.
as.hypotheses(x, cause = NA, effect = NA)
x |
A TRONCO compliant dataset. |
cause |
A list of genes to use as causes |
effect |
A list of genes to use as effects |
The hypotheses in the dataset which constitute CAPRI's hypotheses.
data(test_dataset) as.hypotheses(test_dataset)
Extract the joint probabilities from a TRONCO model. The return matrix is indexed with rownames/colnames which
represent genotype keys - these can be resolved with function keysToNames
. It is possible to
specify a subset of events to build the matrix, or a subset of models if multiple reconstructions have
been performed. Also, either the observed or fit probabilities can be extracted.
as.joint.probs( x, events = as.events(x), models = names(x$model), type = "observed" )
x |
A TRONCO model. |
events |
A subset of events as of as.events(x), all by default. |
models |
A subset of reconstructed models, all by default. |
type |
Either the observed ('observed') or fit ('fit') probabilities, 'observed' by default. |
The joint probabilities in a TRONCO model.
data(test_model) as.joint.probs(test_model) as.joint.probs(test_model, events=as.events(test_model)[5:15,])
Returns a dataframe with the average/stdev entropy loss scores of a TRONCO model. It is possible to specify a subset of models if multiple reconstructions have been performed.
as.kfold.eloss(x, models = names(x$model), values = FALSE)
x |
A TRONCO model. |
models |
A subset of reconstructed models, all by default. |
values |
If TRUE, also show the values |
The entropy loss scores in a TRONCO model
data(test_model_kfold) as.kfold.eloss(test_model_kfold) as.kfold.eloss(test_model_kfold, models='capri_aic')
Returns a dataframe with all the posterior classification error scores in a TRONCO model. It is possible to specify a subset of events or models if multiple reconstructions have been performed.
as.kfold.posterr( x, events = as.events(x), models = names(x$model), values = FALSE, table = FALSE )
x |
A TRONCO model. |
events |
A subset of events as of as.events(x), all by default. |
models |
A subset of reconstructed models, all by default. |
values |
If TRUE, also show the values |
table |
Keep the original table (default FALSE) |
All the posterior classification error scores in a TRONCO model
data(test_model_kfold) data(test_model) as.kfold.posterr(test_model_kfold) as.kfold.posterr(test_model_kfold, events=as.events(test_model)[5:15,])
Returns a dataframe with all the prediction error scores in a TRONCO model. It is possible to specify a subset of events or models if multiple reconstructions have been performed.
as.kfold.prederr( x, events = as.events(x), models = names(x$model), values = FALSE, table = FALSE )
x |
A TRONCO model. |
events |
A subset of events as of as.events(x), all by default. |
models |
A subset of reconstructed models, all by default. |
values |
If TRUE, also show the values |
table |
Keep the original table (default FALSE) |
The prediction error scores in a TRONCO model
data(test_model_kfold) as.kfold.prederr(test_model_kfold) as.kfold.prederr(test_model_kfold, models='capri_aic')
Extract the marginal probabilities from a TRONCO model. The return matrix is indexed with rownames which
represent genotype keys - these can be resolved with function keysToNames
. It is possible to
specify a subset of events to build the matrix, or a subset of models if multiple reconstructions have
been performed. Also, either the observed or fit probabilities can be extracted.
as.marginal.probs( x, events = as.events(x), models = names(x$model), type = "observed" )
x |
A TRONCO model. |
events |
A subset of events as of as.events(x), all by default. |
models |
A subset of reconstructed models, all by default. |
type |
Either the observed ('observed') or fit ('fit') probabilities, 'observed' by default.
The marginal probabilities in a TRONCO model.
data(test_model) as.marginal.probs(test_model) as.marginal.probs(test_model, events=as.events(test_model)[5:15,])
Extract the models from a reconstructed object.
as.models(x, models = names(x$model))
x |
A TRONCO model. |
models |
The name of the models to extract, e.g. 'bic', 'aic', 'caprese', all by default. |
The models in a reconstructed object.
data(test_model) as.models(test_model)
Get parameters of a model
as.parameters(x)
x |
A TRONCO model. |
A list of parameters
data(test_model) as.parameters(test_model)
Given a cohort and a pathway, return the cohort with events restricted to genes involved in the pathway. This might contain a new 'pathway' genotype with an alteration mark if any of the involved genes are altered.
as.pathway( x, pathway.genes, pathway.name, pathway.color = "yellow", aggregate.pathway = TRUE, silent = FALSE )
x |
A TRONCO compliant dataset. |
pathway.genes |
Gene (symbols) involved in the pathway. |
pathway.name |
Pathway name for visualization. |
pathway.color |
Pathway color for visualization. |
aggregate.pathway |
If TRUE drop the events for the genes in the pathway. |
silent |
A parameter to disable/enable verbose messages. |
Extract the subset of events for genes which are part of a pathway.
data(test_dataset) p = as.pathway(test_dataset, c('ASXL1', 'TET2'), 'test_pathway')
Return the patterns in the dataset which constitute CAPRI's hypotheses.
as.patterns(x)
x |
A TRONCO compliant dataset. |
The patterns in the dataset which constitute CAPRI's hypotheses.
data(test_dataset) as.patterns(test_dataset)
Return all sample IDs for input 'x', which should be a TRONCO compliant dataset - see is.compliant
.
as.samples(x)
x |
A TRONCO compliant dataset. |
A vector of sample IDs
data(test_dataset) as.samples(test_dataset)
Returns a dataframe with all the selective advantage relations in a
TRONCO model. Confidence is also shown - see as.confidence
. It is possible to
specify a subset of events or models if multiple reconstructions have
been performed.
as.selective.advantage.relations( x, events = as.events(x), models = names(x$model), type = "fit" )
x |
A TRONCO model. |
events |
A subset of events as of as.events(x), all by default. |
models |
A subset of reconstructed models, all by default. |
type |
Either Prima Facie ('pf') or fit ('fit') probabilities, 'fit' by default. |
All the selective advantage relations in a TRONCO model
data(test_model) as.selective.advantage.relations(test_model) as.selective.advantage.relations(test_model, events=as.events(test_model)[5:15,]) as.selective.advantage.relations(test_model, events=as.events(test_model)[5:15,], type='pf')
Return the association sample -> stage, if any. Input 'x' should be a
TRONCO compliant dataset - see is.compliant
.
as.stages(x)
x |
A TRONCO compliant dataset. |
A matrix with 1 column annotating stages and rownames as sample IDs.
data(test_dataset) data(stage) test_dataset = annotate.stages(test_dataset, stage) as.stages(test_dataset)
Return the types of events for a set of genes which are in 'x', which should be a
TRONCO compliant dataset - see is.compliant
.
as.types(x, genes = NA)
x |
A TRONCO compliant dataset. |
genes |
A list of genes to consider, if NA all genes are used. |
A vector with the types of events for the selected genes.
data(test_dataset) as.types(test_dataset) as.types(test_dataset, genes='TET2')
Return the list of types present in selected patterns
as.types.in.patterns(x, patterns = NULL)
x |
A TRONCO compliant dataset. |
patterns |
A list of patterns for which the list will be returned |
A list of types present in patterns which constitute CAPRI's hypotheses
data(test_dataset) as.types.in.patterns(test_dataset) as.types.in.patterns(test_dataset, patterns='XOR_EZH2')
Change the color of an event type
change.color(x, type, new.color)
x |
A TRONCO compliant dataset. |
type |
An event type |
new.color |
The new color (either HEX or R Color) |
A TRONCO compliant dataset.
data(test_dataset) dataset = change.color(test_dataset, 'ins_del', 'red')
Verify if the input data are consolidated, i.e., whether there are events with 0 or 1 probability, or events indistinguishable in terms of observations
consolidate.data(x, print = FALSE)
x |
A TRONCO compliant dataset. |
print |
A boolean value stating whether or not to print the summary |
The list of events with 0 probability, 1 probability, or that are indistinguishable.
data(test_dataset) consolidate.data(test_dataset)
This dataset contains an example of GISTIC input for a CRC cohort of patients.
data(crc_gistic)
GISTIC score
A gistic file
Daniele Ramazzotti
data from http://www.nature.com/nature/journal/v487/n7407/full/nature11252.html
This dataset contains an example of MAF input for a CRC cohort of patients.
data(crc_maf)
Mutation Annotation Format (MAF)
A MAF file
Daniele Ramazzotti
data from http://www.nature.com/nature/journal/v487/n7407/full/nature11252.html
This dataset contains an example of plain input for a CRC cohort of patients.
data(crc_plain)
plain data
A plain input
Daniele Ramazzotti
data from http://www.nature.com/nature/journal/v487/n7407/full/nature11252.html
Delete an event from the dataset
delete.event(x, gene, type)
x |
A TRONCO compliant dataset. |
gene |
The name of the gene to delete. |
type |
The name of the type to delete. |
A TRONCO compliant dataset.
data(test_dataset) test_dataset = delete.event(test_dataset, 'TET2', 'ins_del')
Delete a gene
delete.gene(x, gene)
x |
A TRONCO compliant dataset. |
gene |
The name of the gene to delete. |
A TRONCO compliant dataset.
data(test_dataset) test_dataset = delete.gene(test_dataset, 'TET2')
Delete a hypothesis from the dataset based on a selected event. Checks if the selected event exists in the dataset and deletes its associated hypotheses
delete.hypothesis(x, event = NA, cause = NA, effect = NA)
x |
A TRONCO compliant dataset. |
event |
Can be an event or pattern name |
cause |
Can be an event or pattern name |
effect |
Can be an event or pattern name |
A TRONCO compliant dataset.
data(test_dataset) delete.hypothesis(test_dataset, event='TET2') delete.hypothesis(test_dataset, cause='EZH2') delete.hypothesis(test_dataset, event='XOR_EZH2')
Delete a reconstructed model from the dataset
delete.model(x)
x |
A TRONCO compliant dataset. |
A TRONCO compliant dataset.
data(test_model) model = delete.model(test_model) has.model(model)
Delete a pattern and every associated hypotheses from the dataset
delete.pattern(x, pattern)
x |
A TRONCO compliant dataset. |
pattern |
A pattern name |
A TRONCO compliant dataset.
data(test_dataset) delete.pattern(test_dataset, pattern='XOR_EZH2')
Delete samples from selected dataset
delete.samples(x, samples)
x |
A TRONCO compliant dataset. |
samples |
An array of sample names |
A TRONCO compliant dataset.
data(test_dataset) dataset = delete.samples(test_dataset, c('patient 1', 'patient 4'))
Delete an event type
delete.type(x, type)
x |
A TRONCO compliant dataset. |
type |
The name of the type to delete. |
A TRONCO compliant dataset.
data(test_dataset) test_dataset = delete.type(test_dataset, 'Pattern')
Return the events duplicated in x
, if any. Input 'x' should be
a TRONCO compliant dataset - see is.compliant
.
duplicates(x)
x |
A TRONCO compliant dataset. |
A subset of as.events(x)
with duplicated events.
data(test_dataset) duplicates(test_dataset)
Binds events from one or more datasets, which must be defined over the same set of samples.
ebind(..., silent = FALSE)
... |
the input datasets |
silent |
A parameter to disable/enable verbose messages. |
A TRONCO compliant dataset.
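A hedged sketch, splitting the bundled dataset by gene via events.selection and binding the two parts back together; both parts keep the full sample set, as ebind requires.
data(test_dataset_no_hypos)
d1 = events.selection(test_dataset_no_hypos, filter.in.names = 'TET2')
d2 = events.selection(test_dataset_no_hypos, filter.out.names = 'TET2')
# the two subsets are defined over the same samples, so they can be re-bound
dataset = ebind(d1, d2)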
Convert the internal representation of genotypes to numeric, if it is not already.
enforce.numeric(x)
x |
A TRONCO compliant dataset. |
A TRONCO compliant dataset with genotypes in numeric format.
data(test_dataset) test_dataset = enforce.numeric(test_dataset)
Convert the internal representation of genotypes to character, if it is not already.
enforce.string(x)
x |
A TRONCO compliant dataset. |
A TRONCO compliant dataset with genotypes in character format.
data(test_dataset) test_dataset = enforce.string(test_dataset)
Select a subset of the input genotypes 'x'. Selection can be done by frequency and gene symbols.
events.selection( x, filter.freq = NA, filter.in.names = NA, filter.out.names = NA, silent = FALSE )
x |
A TRONCO compliant dataset. |
filter.freq |
A value in [0,1] which constrains the minimum frequency of selected events |
filter.in.names |
gene symbols which will be included |
filter.out.names |
gene symbols which will NOT be included |
silent |
A parameter to disable/enable verbose messages. |
A TRONCO compliant dataset.
data(test_dataset) dataset = events.selection(test_dataset, 0.3)
Create a GraphML object which can be imported into Cytoscape. This function is based on the tronco.plot function.
export.graphml(x, file, ...)
x |
A TRONCO compliant dataset |
file |
Where to save the output |
... |
parameters for tronco.plot |
data(test_model) export.graphml(test_model, file='text.xml', scale.nodes=0.3)
Create an input file for MUTEX (ref: https://code.google.com/p/mutex/ )
export.mutex( x, filename = "tronco_to_mutex", filepath = "./", label.mutation = "SNV", label.amplification = list("High-level Gain"), label.deletion = list("Homozygous Loss") )
x |
A TRONCO compliant dataset. |
filename |
The name of the file |
filepath |
The path where to save the file |
label.mutation |
The event type to use as mutation |
label.amplification |
The event type to use as amplification (can be a list) |
label.deletion |
The event type to use as deletion (can be a list) |
A MUTEX example matrix
data(crc_gistic) dataset = import.GISTIC(crc_gistic) export.mutex(dataset)
Create a .mat file which can be used with NBS clustering (ref: http://chianti.ucsd.edu/~mhofree/wordpress/?page_id=26)
export.nbs.input(x, map_hugo_entrez, file = "tronco_to_nbs.mat")
x |
A TRONCO compliant dataset. |
map_hugo_entrez |
Hugo_Symbol-Entrez_Gene_Id map |
file |
output file name |
Extract a map Hugo_Symbol -> Entrez_Gene_Id from a MAF input file. If some genes map to ID 0 a warning is raised.
extract.MAF.HuGO.Entrez.map(file, sep = "\t")
file |
MAF filename |
sep |
MAF separator, default \'\t\' |
A map Hugo_Symbol -> Entrez_Gene_Id.
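A hedged sketch tying this function to export.nbs.input above; the MAF path is illustrative and the calls are commented out because they read from and write to disk.
# map = extract.MAF.HuGO.Entrez.map('TCGA_mutations.maf')            # illustrative path
# dataset = import.MAF('TCGA_mutations.maf')
# export.nbs.input(dataset, map_hugo_entrez = map, file = 'tronco_to_nbs.mat')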
Generate PDF and LaTeX tables
genes.table.report( x, name, dir = getwd(), maxrow = 33, font = 10, height = 11, width = 8.5, fill = "lightblue", silent = FALSE )
x |
A TRONCO compliant dataset. |
name |
filename |
dir |
working directory |
maxrow |
maximum number of row per page |
font |
document fontsize |
height |
table height |
width |
table width |
fill |
fill color |
silent |
A parameter to disable/enable verbose messages. |
LaTeX code
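A hedged sketch; the report name is illustrative and the call is commented out because it writes PDF/LaTeX files to disk.
data(test_dataset)
# genes.table.report(test_dataset, name = 'test_report', dir = tempdir())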
Return true if there are duplicated events in the TRONCO dataset 'x', which should be
a TRONCO compliant dataset - see is.compliant
. Events are identified by a gene
name, e.g., a HuGO_Symbol, and a type label, e.g., c('SNP', 'KRAS')
has.duplicates(x)
x |
A TRONCO compliant dataset. |
TRUE if there are duplicated events in x
.
data(test_dataset) has.duplicates(test_dataset)
Return true if there is a reconstructed model in the TRONCO dataset 'x', which should be
a TRONCO compliant dataset - see is.compliant
.
has.model(x)
x |
A TRONCO compliant dataset. |
TRUE if there is a reconstructed model in x
.
data(test_dataset) has.model(test_dataset)
Return true if the TRONCO dataset 'x', which should be a TRONCO compliant dataset
- see is.compliant
- has stage annotations for samples. Some sample stages
might be annotated as NA, but not all.
has.stages(x)
x |
A TRONCO compliant dataset. |
TRUE if the TRONCO dataset has stage annotations for samples.
data(test_dataset) has.stages(test_dataset) data(stage) test_dataset = annotate.stages(test_dataset, stage) has.stages(test_dataset)
Add a new hypothesis by creating a new event and adding it to the compliant genotypes
hypothesis.add( data, pattern.label, lifted.pattern, pattern.effect = "*", pattern.cause = "*" )
data |
A TRONCO compliant dataset. |
pattern.label |
Label of the new hypothesis. |
lifted.pattern |
Vector to be added to the lifted genotype resolving the pattern related to the new hypothesis |
pattern.effect |
Possible effects for the pattern. |
pattern.cause |
Possible causes for the pattern. |
A TRONCO compliant object with the added hypothesis
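A minimal sketch, assuming both genes below carry events in the dataset; the pattern label is illustrative.
data(test_dataset_no_hypos)
# add a soft-exclusivity (OR) hypothesis as a new event of the dataset
dataset = hypothesis.add(test_dataset_no_hypos, 'OR_ASXL1_TET2', OR('ASXL1', 'TET2'))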
Add all the hypotheses related to a group of events
hypothesis.add.group( x, FUN, group, pattern.cause = "*", pattern.effect = "*", dim.min = 2, dim.max = length(group), min.prob = 0, silent = FALSE )
x |
A TRONCO compliant dataset. |
FUN |
Type of pattern to be added, e.g., co-occurrence, soft or hard exclusivity. |
group |
Group of events to be considered. |
pattern.cause |
Possible causes for the pattern. |
pattern.effect |
Possible effects for the pattern. |
dim.min |
Minimum cardinality of the subgroups to be considered. |
dim.max |
Maximum cardinality of the subgroups to be considered. |
min.prob |
Minimum probability associated to each valid group. |
silent |
A parameter to disable/enable verbose messages. |
A TRONCO compliant object with the added hypotheses
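A minimal sketch, assuming the listed genes are present in the dataset; subgroup sizes are bounded via dim.min and dim.max.
data(test_dataset_no_hypos)
# add an OR pattern for every subgroup of size 2 or 3 of the given genes
dataset = hypothesis.add.group(test_dataset_no_hypos, OR, group = c('ASXL1', 'TET2', 'EZH2'), dim.min = 2, dim.max = 3)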
Add all the hypotheses related to homologous events
hypothesis.add.homologous( x, pattern.cause = "*", pattern.effect = "*", genes = as.genes(x), silent = FALSE )
x |
A TRONCO compliant dataset. |
pattern.cause |
Possible causes for the pattern. |
pattern.effect |
Possible effects for the pattern. |
genes |
List of genes to be considered as possible homologous. For these genes, all the types of mutations will be considered functionally equivalent. |
silent |
A parameter to disable/enable verbose messages. |
A TRONCO compliant object with the added hypotheses
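A minimal sketch: with the default arguments a homologous pattern is attempted for every gene of the dataset that has more than one event type.
data(test_dataset_no_hypos)
dataset = hypothesis.add.homologous(test_dataset_no_hypos)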
Import a matrix of 0/1 alterations as a TRONCO compliant dataset. Input "geno" can be either a dataframe or a file name. In any case the dataframe or the table stored in the file must have a column for each altered gene and a row for each sample. Colnames will be used to determine gene names; if data is loaded from file, the first column will be assigned as rownames. For details and examples regarding the loading functions provided by the package we refer to the Vignette Section 3.
import.genotypes(geno, event.type = "variant", color = "Darkgreen")
geno |
Either a dataframe or a filename |
event.type |
Any 1 in "geno" will be interpreted as a an observed alteration labeled with type "event.type" |
color |
The color used for visualization of events labeled with type "event.type" |
A TRONCO compliant dataset
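A minimal sketch of the dataframe route; sample and gene names are illustrative.
# a 3-samples x 2-genes 0/1 dataframe: rownames are sample IDs, colnames are gene symbols
geno = data.frame(GENE_A = c(1, 0, 1), GENE_B = c(0, 1, 1), row.names = c('S1', 'S2', 'S3'))
dataset = import.genotypes(geno, event.type = 'mutation', color = 'steelblue')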
Transform GISTIC scores for CNAs into a TRONCO compliant object. Input can be either a matrix, with columns for each altered gene and rows for each sample; in this case colnames/rownames must be provided. If input is a character an attempt to load a table from file is performed. In this case the input table format should be consistent with TCGA data for focal CNA; there should hence be: one column for each sample, one row for each gene, a column Hugo_Symbol with every gene name and a column Entrez_Gene_Id with every gene's Entrez ID. A valid GISTIC score should be any value of: "Homozygous Loss" (-2), "Heterozygous Loss" (-1), "Low-level Gain" (+1), "High-level Gain" (+2). For details and examples regarding the loading functions provided by the package we refer to the Vignette Section 3.
import.GISTIC( x, filter.genes = NULL, filter.samples = NULL, silent = FALSE, trim = TRUE, rna.seq.data = NULL, rna.seq.up = NULL, rna.seq.down = NULL )
x |
Either a dataframe or a filename |
filter.genes |
A list of genes |
filter.samples |
A list of samples |
silent |
A parameter to disable/enable verbose messages. |
trim |
Remove the events without occurrence |
rna.seq.data |
Either a dataframe or a filename |
rna.seq.up |
TODO |
rna.seq.down |
TODO |
A TRONCO compliant representation of the input CNAs.
data(crc_gistic) gistic = import.GISTIC(crc_gistic)
Import mutation profiles from a Mutation Annotation Format (MAF) file. All mutations are aggregated as a
unique event type labeled "Mutation" and assigned a color according to the default of function
import.genotypes
. If this is a TCGA MAF file, a check for multiple samples per patient is performed
and a warning is raised if these occur. Customized MAF files can be imported as well, provided that
they have columns Hugo_Symbol, Tumor_Sample_Barcode and Variant_Classification.
Custom filters are possible (via filter.fun) to avoid loading the full MAF data. For details and examples
regarding the loading functions provided by the package we refer to the Vignette Section 3.
import.MAF( file, sep = "\t", is.TCGA = TRUE, filter.fun = NULL, to.TRONCO = TRUE, irregular = FALSE, paste.to.Hugo_Symbol = NULL, merge.mutation.types = TRUE, silent = FALSE )
file |
MAF filename |
sep |
MAF separator, default \'\t\' |
is.TCGA |
TRUE if this MAF is from TCGA; thus its sample codenames can be interpreted |
filter.fun |
A filter function applied to each row. This is expected to return TRUE/FALSE. |
to.TRONCO |
If FALSE returns a dataframe with MAF data, not a TRONCO object |
irregular |
If TRUE seeks only for columns Hugo_Symbol, Tumor_Sample_Barcode and Variant_Classification |
paste.to.Hugo_Symbol |
If a list of column names, these will be pasted to each Hugo_Symbol to yield names such as PHC2.chr1.33116215.33116215 |
merge.mutation.types |
If TRUE, all mutations are considered equivalent, regardless of their Variant_Classification value. Otherwise, distinct Variant_Classification values yield distinct event types. |
silent |
A parameter to disable/enable verbose messages. |
A TRONCO compliant representation of the input MAF
data(maf) mutations = import.MAF(maf) mutations = annotate.description(mutations, 'Example MAF') mutations = TCGA.shorten.barcodes(mutations) oncoprint(mutations)
Add an adjacency matrix as a model to a TRONCO compliant object. Input model can be either a dataframe or a file name.
import.model(tronco_object, model, model.name = "imported_model")
tronco_object |
A TRONCO compliant object |
model |
Either a dataframe or a filename |
model.name |
Name of the imported model |
A TRONCO compliant object
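A hedged sketch that re-imports an adjacency matrix extracted from the bundled model; whether a bare matrix (rather than a dataframe) is accepted in this exact form is an assumption.
data(test_model)
adj = as.adj.matrix(test_model)$capri_bic      # adjacency matrix of the BIC model
model = import.model(test_model, adj, model.name = 'reimported_bic')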
Create a list of unique Mutex groups for a given fdr cutoff. The currently supported Mutex version is Jan 8, 2015 (ref: https://code.google.com/p/mutex/ ).
import.mutex.groups(file, fdr = 0.2, display = TRUE)
file |
Mutex results ("ranked-groups.txt" file) |
fdr |
cutoff for fdr |
display |
print summary table of extracted groups |
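A hedged sketch; the call is commented out because it needs a Mutex 'ranked-groups.txt' result file on disk, and the path shown is illustrative.
# groups = import.mutex.groups('ranked-groups.txt', fdr = 0.2, display = TRUE)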
Intersect samples and events of two datasets
intersect.datasets(x, y, intersect.genomes = TRUE)
x |
A TRONCO compliant dataset. |
y |
A TRONCO compliant dataset. |
intersect.genomes |
If FALSE, only the samples are intersected |
A TRONCO compliant dataset.
data(test_dataset)
Check if 'x' is compliant with TRONCO's input: that is if it has dataframes x$genotypes, x$annotations, x$types and x$stage (optional)
is.compliant( x, err.fun = "[ERR]", stage = !(all(is.null(x$stages)) || all(is.na(x$stages))) )
x |
A TRONCO compliant dataset. |
err.fun |
string which identifies the function which called is.compliant |
stage |
boolean flag to check the x$stages dataframe |
On error, the computation is stopped.
data(test_dataset) is.compliant(test_dataset)
Merge a list of events into a unique event
join.events(x, ..., new.event, new.type, event.color)
x |
A TRONCO compliant dataset. |
... |
A list of events to merge |
new.event |
The name of the resultant event |
new.type |
The type of the new event |
event.color |
The color of the new event |
A TRONCO compliant dataset.
data(muts) dataset = join.events(muts, 'G1', 'G2', new.event='test', new.type='banana', event.color='yellow')
For an input dataset merge all the events of two or more distinct types (e.g., say that missense and indel mutations are events of a unique "mutation" type)
join.types(x, ..., new.type = "new.type", new.color = "khaki", silent = FALSE)
x |
A TRONCO compliant dataset. |
... |
type to merge |
new.type |
label for the new type to create |
new.color |
color for the new type to create |
silent |
A parameter to disable/enable verbose messages. |
A TRONCO compliant dataset.
data(test_dataset_no_hypos) join.types(test_dataset_no_hypos, 'ins_del', 'missense_point_mutations') join.types(test_dataset_no_hypos, 'ins_del', 'missense_point_mutations', new.type='mut', new.color='green')
Convert colnames/rownames of a matrix into intelligible event names, e.g., change a key G23 into 'Mutation KRAS'. If a name is not found, the original name is left unchanged.
keysToNames(x, matrix)
x |
A TRONCO compliant dataset. |
matrix |
A matrix with colnames/rownames which represent genotypes keys. |
The matrix with intelligible colnames/rownames.
data(test_model) adj_matrix = as.adj.matrix(test_model, events=as.events(test_model)[5:15,])$capri_bic keysToNames(test_model, adj_matrix)
This dataset contains a standard MAF input for TRONCO
data(maf)
Mutation Annotation Format (MAF)
A standard TRONCO object
Luca De Sano
fake data
A simple mutation dataset without hypotheses
data(muts)
TRONCO compliant dataset
A standard TRONCO object
Luca De Sano
fake data
Convert an intelligible event name to a key, e.g., change 'Mutation KRAS' into G23. If a name is not found, an error is raised.
nameToKey(x, name)
x |
A TRONCO compliant dataset. |
name |
A intelligible event name |
A TRONCO dataset key name
data(test_model) adj_matrix = as.adj.matrix(test_model, events=as.events(test_model)[5:15,])$bic
Return the number of events in the dataset involving a certain gene or type of event.
nevents(x, genes = NA, types = NA)
x |
A TRONCO compliant dataset. |
genes |
The genes to consider, if NA all available genes are used. |
types |
The types of events to consider, if NA all available types are used. |
The number of events in the dataset involving a certain gene or type of event.
data(test_dataset) nevents(test_dataset)
Return the number of genes in the dataset involving a certain type of event.
ngenes(x, types = NA)
x |
A TRONCO compliant dataset. |
types |
The types of events to consider, if NA all available types are used. |
The number of genes in the dataset involving a certain type of event.
data(test_dataset) ngenes(test_dataset)
Return the number of hypotheses in the dataset
nhypotheses(x)
x |
the dataset. |
data(test_dataset) nhypotheses(test_dataset)
Return the number of patterns in the dataset
npatterns(x)
x |
the dataset. |
data(test_dataset) npatterns(test_dataset)
Return the number of samples in the dataset.
nsamples(x)
x |
A TRONCO compliant dataset. |
The number of samples in the dataset.
data(test_dataset) nsamples(test_dataset)
Return the number of types in the dataset.
ntypes(x)
x |
A TRONCO compliant dataset. |
The number of types in the dataset.
data(test_dataset) ntypes(test_dataset)
oncoprint: plot a genotype. For details and examples regarding visualization through oncoprints, we refer to the Vignette Section 4.4.
oncoprint( x, excl.sort = TRUE, samples.cluster = FALSE, genes.cluster = FALSE, file = NA, ann.stage = has.stages(x), ann.hits = TRUE, stage.color = "YlOrRd", hits.color = "Purples", null.color = "lightgray", border.color = "white", text.cex = 1, font.column = NA, font.row = NA, title = as.description(x), sample.id = FALSE, hide.zeroes = FALSE, legend = TRUE, legend.cex = 0.5, cellwidth = NA, cellheight = NA, group.by.label = FALSE, group.by.stage = FALSE, group.samples = NA, gene.annot = NA, gene.annot.color = "Set1", show.patterns = FALSE, annotate.consolidate.events = FALSE, txt.stats = paste(nsamples(x), " samples\n", nevents(x), " events\n", ngenes(x), " genes\n", npatterns(x), " patterns", sep = ""), gtable = FALSE, ... )
x |
A TRONCO compliant dataset |
excl.sort |
Boolean value, if TRUE sorts samples to enhance exclusivity of alterations |
samples.cluster |
Boolean value, if TRUE clusters samples (columns). Default FALSE |
genes.cluster |
Boolean value, if TRUE clusters genes (rows). Default FALSE |
file |
If not NA, write the plot to file |
ann.stage |
Boolean value to annotate stage classification; the default depends on has.stages(x) |
ann.hits |
Boolean value to annotate the number of events in each sample, default is TRUE |
stage.color |
RColorbrewer palette to color stage annotations. Default is 'YlOrRd' |
hits.color |
RColorbrewer palette to color hits annotations. Default is 'Purples' |
null.color |
Color for the Oncoprint cells with 0s, default is 'lightgray' |
border.color |
Border color for the Oncoprint, default is 'white' (no border) |
text.cex |
Title and annotations cex, multiplied by font size 7 |
font.column |
If NA, half of font.row is used |
font.row |
If NA, max(c(15 * exp(-0.02 * nrow(data)), 2)) is used, where data is the data visualized in the Oncoprint |
title |
Oncoprint title, default is as.description(x) |
sample.id |
If TRUE shows samples name (columns). Default is FALSE |
hide.zeroes |
If TRUE, trim the data (hide events with no observations) |
legend |
If TRUE shows a legend for the types of events visualized. Default is TRUE |
legend.cex |
Default 0.5; determines legend size if legend is TRUE |
cellwidth |
Default NA, sets autoscale cell width |
cellheight |
Default NA, sets autoscale cell height |
group.by.label |
Sort samples (rows) by event label - useful when multiple events per gene are available |
group.by.stage |
Default FALSE; sort samples by stage. |
group.samples |
If this samples -> group map is provided, samples are grouped accordingly and sorted according to the number of mutations per sample. |
gene.annot |
Genes'groups, e.g. list(RAF=c('KRAS','NRAS'), Wnt=c('APC', 'CTNNB1')). Default is NA. |
gene.annot.color |
Either a RColorBrewer palette name or a set of custom colors matching names(gene.annot) |
show.patterns |
If TRUE shows also a separate oncoprint for each pattern. Default is FALSE |
annotate.consolidate.events |
Default is FALSE. If TRUE an annotation for events to consolidate is shown. |
txt.stats |
By default, shows summary statistics for the shown data (n, m, |G| and |P|) |
gtable |
If TRUE return the gtable object |
... |
other arguments to pass to pheatmap |
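A minimal sketch of a basic oncoprint on the bundled dataset; the gene grouping passed to gene.annot in the second call is illustrative.
data(test_dataset)
oncoprint(test_dataset)
# with a custom gene grouping and no legend
oncoprint(test_dataset, gene.annot = list(drivers = c('TET2', 'EZH2')), legend = FALSE)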
Export input for cBio visualization at http://www.cbioportal.org/public-portal/oncoprinter.jsp
oncoprint.cbio( x, file = "oncoprint-cbio.txt", hom.del = "Homozygous Loss", het.loss = "Heterozygous Loss", gain = "Low-level Gain", amp = "High-level Gain" )
x |
A TRONCO compliant dataset. |
file |
name of the file where to save the output |
hom.del |
type of Homozygous Deletion |
het.loss |
type of Heterozygous Loss |
gain |
type of Gain |
amp |
type of Amplification |
A file containing instructions for the cBio visualization tool
data(crc_gistic) gistic = import.GISTIC(crc_gistic) oncoprint.cbio(gistic)
OR hypothesis
OR(...)
... |
Atoms of the soft exclusive pattern, given either as labels or as partially lifted vectors. |
Vector to be added to the lifted genotype resolving the soft exclusive pattern
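As with AND above, an OR pattern is typically passed to hypothesis.add; a minimal sketch assuming both genes are present in the dataset.
data(test_dataset_no_hypos)
dataset = hypothesis.add(test_dataset_no_hypos, 'OR_EZH2_TET2', OR('EZH2', 'TET2'))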
Sort the internal genotypes according to event frequency.
order.frequency(x, decreasing = TRUE)
x |
A TRONCO compliant dataset. |
decreasing |
Sort in decreasing order. Default TRUE |
A TRONCO compliant dataset with the internal genotypes sorted according to event frequency.
data(test_dataset) order.frequency(test_dataset)
Visualize pathway information
pathway.visualization( x, title = paste("Pathways:", paste(names(pathways), collapse = ", ", sep = "")), file = NA, pathways.color = "Set2", aggregate.pathways, pathways, ... )
x |
A TRONCO compliant dataset |
title |
Plot title |
file |
To generate a PDF, a filename has to be given |
pathways.color |
A RColorBrewer color palette |
aggregate.pathways |
Boolean parameter |
pathways |
Pathways |
... |
Additional parameters |
plot information
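A hedged sketch; the exact format expected for the pathways argument is an assumption (a named list of gene vectors), so the plotting call is left commented.
data(test_dataset)
pathway.list = list(test_pathway = c('ASXL1', 'TET2'))    # illustrative pathway definition
# pathway.visualization(test_dataset, pathways = pathway.list, aggregate.pathways = FALSE)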
A function to draw clustered heatmaps where one has better control over some graphical parameters such as cell size, etc.
pheatmap( mat, color = colorRampPalette(rev(brewer.pal(n = 7, name = "RdYlBu")))(100), kmeans_k = NA, breaks = NA, border_color = "grey60", cellwidth = NA, cellheight = NA, scale = "none", cluster_rows = TRUE, cluster_cols = TRUE, clustering_distance_rows = "euclidean", clustering_distance_cols = "euclidean", clustering_method = "complete", cutree_rows = NA, cutree_cols = NA, treeheight_row = ifelse(cluster_rows, 50, 0), treeheight_col = ifelse(cluster_cols, 50, 0), legend = TRUE, legend_breaks = NA, legend_labels = NA, annotation_row = NA, annotation_col = NA, annotation = NA, annotation_colors = NA, annotation_legend = TRUE, drop_levels = TRUE, show_rownames = TRUE, show_colnames = TRUE, main = NA, fontsize = 10, fontsize_row = fontsize, fontsize_col = fontsize, display_numbers = FALSE, number_format = "%.2f", number_color = "grey30", fontsize_number = 0.8 * fontsize, gaps_row = NULL, gaps_col = NULL, labels_row = NULL, labels_col = NULL, filename = NA, width = NA, height = NA, silent = FALSE, legend.cex = 1, txt.stats = NA, ... )
mat |
numeric matrix of the values to be plotted. |
color |
vector of colors used in heatmap. |
kmeans_k |
the number of kmeans clusters to make, if we want to aggregate the rows before drawing the heatmap. If NA then the rows are not aggregated. |
breaks |
a sequence of numbers that covers the range of values in mat and is one element longer than color vector. Used for mapping values to colors. Useful, if needed to map certain values to certain colors, to certain values. If value is NA then the breaks are calculated automatically. |
border_color |
color of cell borders on heatmap, use NA if no border should be drawn. |
cellwidth |
individual cell width in points. If left as NA, then the values depend on the size of plotting window. |
cellheight |
individual cell height in points. If left as NA, then the values depend on the size of plotting window. |
scale |
character indicating if the values should be centered and scaled in either the row direction or the column direction, or none. Corresponding values are 'row', 'column' and 'none'. |
cluster_rows |
boolean values determining if rows should be clustered or an hclust object. |
cluster_cols |
boolean values determining if columns should be clustered. |
clustering_distance_rows |
distance measure used in clustering rows. Possible values are 'correlation' for Pearson correlation and all the distances supported by dist, such as 'euclidean'. |
clustering_distance_cols |
distance measure used in clustering columns. Possible values the same as for clustering_distance_rows. |
clustering_method |
clustering method used. Accepts the same values as hclust. |
cutree_rows |
number of clusters the rows are divided into, based on the hierarchical clustering (using cutree), if rows are not clustered, the argument is ignored |
cutree_cols |
similar to cutree_rows, but for columns. |
treeheight_row |
the height of a tree for rows, if these are clustered. Default value 50 points. |
treeheight_col |
the height of a tree for columns, if these are clustered. Default value 50 points. |
legend |
logical to determine if legend should be drawn or not. |
legend_breaks |
vector of breakpoints for the legend. |
legend_labels |
vector of labels for the legend_breaks. |
annotation_row |
data frame that specifies the annotations shown on left side of the heatmap. Each row defines the features for a specific row. The rows in the data and in the annotation are matched using corresponding row names. Note that color schemes takes into account if variable is continuous or discrete. |
annotation_col |
similar to annotation_row, but for columns. |
annotation |
deprecated parameter that currently sets the annotation_col if it is missing |
annotation_colors |
list for specifying annotation_row and annotation_col track colors manually. It is possible to define the colors for only some of the features. Check examples for details. |
annotation_legend |
boolean value showing if the legend for annotation tracks should be drawn. |
drop_levels |
logical to determine if unused levels are also shown in the legend |
show_rownames |
boolean specifying if row names are to be shown. |
show_colnames |
boolean specifying if column names are to be shown. |
main |
the title of the plot |
fontsize |
base fontsize for the plot |
fontsize_row |
fontsize for rownames (Default: fontsize) |
fontsize_col |
fontsize for colnames (Default: fontsize) |
display_numbers |
logical determining if the numeric values are also printed to the cells. If this is a matrix (with same dimensions as original matrix), the contents of the matrix are shown instead of original values. |
number_format |
format strings (C printf style) of the numbers shown in cells. For example "%.2f" shows 2 decimal places and "%.1e" shows exponential notation. |
number_color |
color of the text |
fontsize_number |
fontsize of the numbers displayed in cells |
gaps_row |
vector of row indices that show where to put gaps in the heatmap. Used only if the rows are not clustered. See cutree_rows for a way to introduce gaps automatically. |
gaps_col |
similar to gaps_row, but for columns. |
labels_row |
custom labels for rows that are used instead of rownames. |
labels_col |
similar to labels_row, but for columns. |
filename |
file path where to save the picture. Filetype is decided by the extension in the path. Currently following formats are supported: png, pdf, tiff, bmp, jpeg. Even if the plot does not fit into the plotting window, the file size is calculated so that the plot would fit there, unless specified otherwise. |
width |
manual option for determining the output file width in inches. |
height |
manual option for determining the output file height in inches. |
silent |
do not draw the plot (useful when using the gtable output) |
legend.cex |
Determines the legend size if legend = TRUE (default 1, as in the usage above). |
txt.stats |
By default, shows summary statistics for the displayed data (n, m, |G| and |P|). |
... |
graphical parameters for the text used in the plot, passed to grid.text (see gpar). |
The function also allows aggregating the rows using kmeans clustering. This is advisable if the number of rows is so large that R can no longer handle their hierarchical clustering, roughly more than 1000. Instead of showing all the rows separately, one can cluster the rows in advance and show only the cluster centers. The number of clusters can be tuned with the parameter kmeans_k.
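A minimal sketch of this row aggregation, assuming the modified pheatmap keeps the kmeans_k behaviour of the original package:

# aggregate the 20 rows of a random matrix into 3 kmeans centroids before plotting
set.seed(42)
mat = matrix(rnorm(200), nrow = 20, ncol = 10)
pheatmap(mat, kmeans_k = 3)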
This is a modified version of the original pheatmap (https://cran.r-project.org/web/packages/pheatmap/index.html) edited in accordance with GPL-2.
Invisibly a list of components: tree_row, the clustering of rows as an hclust object; tree_col, the clustering of columns as an hclust object; kmeans, the kmeans clustering of rows if the parameter kmeans_k was specified.
Raivo Kolde <[email protected]>
# Create test matrix
test = matrix(rnorm(200), 20, 10)
test[1:10, seq(1, 10, 2)] = test[1:10, seq(1, 10, 2)] + 3
test[11:20, seq(2, 10, 2)] = test[11:20, seq(2, 10, 2)] + 2
test[15:20, seq(2, 10, 2)] = test[15:20, seq(2, 10, 2)] + 4
colnames(test) = paste("Test", 1:10, sep = "")
rownames(test) = paste("Gene", 1:20, sep = "")

# Draw heatmaps
pheatmap(test)
Return the first n recurrent events
rank.recurrents(x, n)
rank.recurrents(x, n)
x |
A TRONCO compliant dataset. |
n |
The number of events to rank |
the first n recurrent events
data(test_dataset) dataset = rank.recurrents(test_dataset, 10)
data(test_dataset) dataset = rank.recurrents(test_dataset, 10)
Rename a gene
rename.gene(x, old.name, new.name)
rename.gene(x, old.name, new.name)
x |
A TRONCO compliant dataset. |
old.name |
The name of the gene to rename. |
new.name |
The new name |
A TRONCO compliant dataset.
data(test_dataset) test_dataset = rename.gene(test_dataset, 'TET2', 'gene x')
data(test_dataset) test_dataset = rename.gene(test_dataset, 'TET2', 'gene x')
Rename an event type
rename.type(x, old.name, new.name)
rename.type(x, old.name, new.name)
x |
A TRONCO compliant dataset. |
old.name |
The type of event to rename. |
new.name |
The new name |
A TRONCO compliant dataset.
data(test_dataset) test_dataset = rename.type(test_dataset, 'ins_del', 'deletion')
data(test_dataset) test_dataset = rename.type(test_dataset, 'ins_del', 'deletion')
Filter a dataset based on selected samples id
samples.selection(x, samples)
samples.selection(x, samples)
x |
A TRONCO compliant dataset. |
samples |
A list of samples |
A TRONCO compliant dataset.
data(test_dataset) dataset = samples.selection(test_dataset, c('patient 1', 'patient 2'))
data(test_dataset) dataset = samples.selection(test_dataset, c('patient 1', 'patient 2'))
Binds samples from one or more datasets, which must be defined over the same set of events
sbind(...)
sbind(...)
... |
the input datasets |
A TRONCO compliant dataset.
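No example is shipped with this page; a minimal sketch, assuming the two subsets are obtained with samples.selection and as.samples (both part of TRONCO):

data(test_dataset)
# split the cohort into two subsets defined over the same events, then re-bind them
all = as.samples(test_dataset)
part1 = samples.selection(test_dataset, all[1])
part2 = samples.selection(test_dataset, all[-1])
rebound = sbind(part1, part2)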
Split cohort (samples) into groups, return either all groups or a specific group.
ssplit(x, clusters, idx = NA)
ssplit(x, clusters, idx = NA)
x |
A TRONCO compliant dataset. |
clusters |
A list of clusters. Rownames must match samples list of x |
idx |
ID of a specific group present in clusters. If NA, all groups will be extracted. |
A TRONCO compliant dataset.
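No example is provided here; a minimal sketch, assuming 'clusters' can be given as a one-column data frame whose row names are the sample names (mirroring the 'stages' format used by annotate.stages) and that idx accepts a group label:

data(test_dataset)
# assign each sample to one of two hypothetical groups, G1 and G2
groups = data.frame(cluster = rep(c('G1', 'G2'), length.out = length(as.samples(test_dataset))),
    row.names = as.samples(test_dataset))
# extract only the samples in group G1 (idx = NA would return every group)
g1 = ssplit(test_dataset, clusters = groups, idx = 'G1')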
This dataset contains stage information for the patients in test_dataset
data(stage)
data(stage)
Vector of stages
A list of stages
Luca De Sano
fake data
Map clinical data from the TCGA format
TCGA.map.clinical.data(file, sep = "\t", column.samples, column.map)
TCGA.map.clinical.data(file, sep = "\t", column.samples, column.map)
file |
A file with the clinical data |
sep |
file delimiter |
column.samples |
Required columns |
column.map |
Map to the required columns |
a map
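A hypothetical sketch of a call; the file name and the column names below are illustrative, not real TRONCO defaults:

# map a tab-separated clinical file onto sample barcodes; 'clinical.txt',
# 'bcr_patient_barcode' and 'tumor_stage' are hypothetical names
clinical = TCGA.map.clinical.data(file = 'clinical.txt', sep = '\t',
    column.samples = 'bcr_patient_barcode', column.map = 'tumor_stage')
# the resulting map can then be attached to a dataset, e.g., with annotate.stages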
Check if there are multiple samples in x, according to the TCGA barcode naming convention
TCGA.multiple.samples(x)
TCGA.multiple.samples(x)
x |
A TRONCO compliant dataset. |
A list of barcodes. NA if no duplicated barcode is found
data(test_dataset) TCGA.multiple.samples(test_dataset)
data(test_dataset) TCGA.multiple.samples(test_dataset)
If there are multiple samples in x, according to the TCGA barcode naming convention, remove them
TCGA.remove.multiple.samples(x)
TCGA.remove.multiple.samples(x)
x |
A TRONCO compliant dataset. |
A TRONCO compliant dataset
data(test_dataset) TCGA.remove.multiple.samples(test_dataset)
data(test_dataset) TCGA.remove.multiple.samples(test_dataset)
Keep only the first 12 characters of each sample barcode, provided this does not create duplicates
TCGA.shorten.barcodes(x)
TCGA.shorten.barcodes(x)
x |
A TRONCO compliant dataset. |
A TRONCO compliant dataset
data(test_dataset) TCGA.shorten.barcodes(test_dataset)
data(test_dataset) TCGA.shorten.barcodes(test_dataset)
This dataset contains a complete test dataset
data(test_dataset)
data(test_dataset)
TRONCO compliant dataset
A standard TRONCO object
Luca De Sano
fake data
This dataset contains a complete test dataset without hypotheses
data(test_dataset_no_hypos)
data(test_dataset_no_hypos)
TRONCO compliant dataset
A standard TRONCO object
Luca De Sano
fake data
This dataset contains a model reconstructed with CAPRI
data(test_model)
data(test_model)
TRONCO compliant dataset
A standard TRONCO object
Luca De Sano
fake data
This dataset contains a model reconstructed with CAPRI, with k-fold cross-validation results
data(test_model_kfold)
data(test_model_kfold)
TRONCO compliant dataset
A standard TRONCO object
Luca De Sano
fake data
Deletes all events which have frequency 0 in the dataset.
trim(x)
trim(x)
x |
A TRONCO compliant dataset. |
A TRONCO compliant dataset.
data(test_dataset) test_dataset = trim(test_dataset)
data(test_dataset) test_dataset = trim(test_dataset)
Bootstrap a reconstructed progression model. For details and examples regarding the statistical assessment of an inferred model, we refer to the Vignette Section 7.
tronco.bootstrap( reconstruction, type = "non-parametric", nboot = 100, cores.ratio = 1, silent = FALSE )
tronco.bootstrap( reconstruction, type = "non-parametric", nboot = 100, cores.ratio = 1, silent = FALSE )
reconstruction |
The output of tronco.capri or tronco.caprese |
type |
Parameter to define the type of sampling to be performed, e.g., non-parametric for uniform sampling. |
nboot |
Number of bootstrap sampling to be performed when estimating the model confidence. |
cores.ratio |
Percentage of cores to use: coresRate * (numCores - 1) |
silent |
A parameter to disable/enable verbose messages. |
A TRONCO compliant object with reconstructed model
data(test_model) boot = tronco.bootstrap(test_model, nboot = 1, cores.ratio = 0)
data(test_model) boot = tronco.bootstrap(test_model, nboot = 1, cores.ratio = 0)
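The bootstrap scores stored in the model can then be inspected; a short sketch, assuming the as.confidence accessor with the non-parametric bootstrap ('npb') key:

data(test_model)
boot = tronco.bootstrap(test_model, nboot = 1, cores.ratio = 0)
# retrieve the non-parametric bootstrap confidence of the fitted edges
as.confidence(boot, conf = 'npb')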
Reconstruct a progression model using CAPRESE algorithm. For details and examples regarding the inference process and on the algorithm implemented in the package, we refer to the Vignette Section 6.
tronco.caprese(data, lambda = 0.5, silent = FALSE, epos = 0, eneg = 0)
tronco.caprese(data, lambda = 0.5, silent = FALSE, epos = 0, eneg = 0)
data |
A TRONCO compliant dataset. |
lambda |
Coefficient to combine the raw estimate with a correction factor into a shrinkage estimator. |
silent |
A parameter to disable/enable verbose messages. |
epos |
Error rate of false positive errors. |
eneg |
Error rate of false negative errors. |
A TRONCO compliant object with reconstructed model
data(test_dataset_no_hypos) recon = tronco.caprese(test_dataset_no_hypos)
data(test_dataset_no_hypos) recon = tronco.caprese(test_dataset_no_hypos)
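The epos/eneg parameters can be used to account for noisy calls; a minimal sketch with arbitrary small error rates:

data(test_dataset_no_hypos)
# same reconstruction, assuming 1% false positive and 1% false negative rates
recon_noise = tronco.caprese(test_dataset_no_hypos, lambda = 0.5, epos = 0.01, eneg = 0.01)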
Reconstruct a progression model using CAPRI algorithm. For details and examples regarding the inference process and on the algorithm implemented in the package, we refer to the Vignette Section 6.
tronco.capri( data, command = "hc", regularization = c("bic", "aic"), do.boot = TRUE, nboot = 100, pvalue = 0.05, min.boot = 3, min.stat = TRUE, boot.seed = NULL, silent = FALSE, epos = 0, eneg = 0, restart = 100 )
tronco.capri( data, command = "hc", regularization = c("bic", "aic"), do.boot = TRUE, nboot = 100, pvalue = 0.05, min.boot = 3, min.stat = TRUE, boot.seed = NULL, silent = FALSE, epos = 0, eneg = 0, restart = 100 )
data |
A TRONCO compliant dataset. |
command |
Parameter to define the heuristic search to be performed. Hill Climbing ('hc') and Tabu search ('tabu') are currently available. |
regularization |
Select the regularization for the likelihood estimation, e.g., BIC, AIC. |
do.boot |
A parameter to disable/enable the estimation of the error rates given the reconstructed model. |
nboot |
Number of bootstrap sampling (with rejection) to be performed when estimating the selective advantage scores. |
pvalue |
Pvalue to accept/reject the valid selective advantage relations. |
min.boot |
Minimum number of bootstrap sampling to be performed. |
min.stat |
A parameter to disable/enable the minimum number of bootstrap sampling required besides nboot if any sampling is rejected. |
boot.seed |
Initial seed for the bootstrap random sampling. |
silent |
A parameter to disable/enable verbose messages. |
epos |
Error rate of false positive errors. |
eneg |
Error rate of false negative errors. |
restart |
An integer, the number of random restarts. |
A TRONCO compliant object with reconstructed model
data(test_dataset) recon = tronco.capri(test_dataset, nboot = 1)
data(test_dataset) recon = tronco.capri(test_dataset, nboot = 1)
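A further sketch selecting the Tabu search and a single regularizer (values as described for the command and regularization parameters above):

data(test_dataset)
# Tabu search with BIC regularization only; nboot kept low just to keep the example fast
recon_tabu = tronco.capri(test_dataset, command = 'tabu', regularization = 'bic', nboot = 1)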
Reconstruct a progression model using Chow Liu algorithm combined with probabilistic causation. For details and examples regarding the inference process and on the algorithm implemented in the package, we refer to the Vignette Section 6.
tronco.chowliu( data, regularization = c("bic", "aic"), do.boot = TRUE, nboot = 100, pvalue = 0.05, min.boot = 3, min.stat = TRUE, boot.seed = NULL, silent = FALSE, epos = 0, eneg = 0 )
tronco.chowliu( data, regularization = c("bic", "aic"), do.boot = TRUE, nboot = 100, pvalue = 0.05, min.boot = 3, min.stat = TRUE, boot.seed = NULL, silent = FALSE, epos = 0, eneg = 0 )
data |
A TRONCO compliant dataset. |
regularization |
Select the regularization for the likelihood estimation, e.g., BIC, AIC. |
do.boot |
A parameter to disable/enable the estimation of the error rates given the reconstructed model. |
nboot |
Number of bootstrap sampling (with rejection) to be performed when estimating the selective advantage scores. |
pvalue |
Pvalue to accept/reject the valid selective advantage relations. |
min.boot |
Minimum number of bootstrap sampling to be performed. |
min.stat |
A parameter to disable/enable the minimum number of bootstrap sampling required besides nboot if any sampling is rejected. |
boot.seed |
Initial seed for the bootstrap random sampling. |
silent |
A parameter to disable/enable verbose messages. |
epos |
Error rate of false positive errors. |
eneg |
Error rate of false negative errors. |
A TRONCO compliant object with reconstructed model
data(test_dataset_no_hypos) recon = tronco.chowliu(test_dataset_no_hypos, nboot = 1)
data(test_dataset_no_hypos) recon = tronco.chowliu(test_dataset_no_hypos, nboot = 1)
Reconstruct a progression model using Edmonds algorithm combined with probabilistic causation. For details and examples regarding the inference process and on the algorithm implemented in the package, we refer to the Vignette Section 6.
tronco.edmonds( data, regularization = "no_reg", score = "pmi", do.boot = TRUE, nboot = 100, pvalue = 0.05, min.boot = 3, min.stat = TRUE, boot.seed = NULL, silent = FALSE, epos = 0, eneg = 0 )
tronco.edmonds( data, regularization = "no_reg", score = "pmi", do.boot = TRUE, nboot = 100, pvalue = 0.05, min.boot = 3, min.stat = TRUE, boot.seed = NULL, silent = FALSE, epos = 0, eneg = 0 )
data |
A TRONCO compliant dataset. |
regularization |
Select the regularization for the likelihood estimation, e.g., BIC, AIC. |
score |
Select the score for the estimation of the best tree, e.g., pointwise mutual information (pmi), conditional entropy (entropy). |
do.boot |
A parameter to disable/enable the estimation of the error rates given the reconstructed model. |
nboot |
Number of bootstrap sampling (with rejection) to be performed when estimating the selective advantage scores. |
pvalue |
Pvalue to accept/reject the valid selective advantage relations. |
min.boot |
Minimum number of bootstrap sampling to be performed. |
min.stat |
A parameter to disable/enable the minimum number of bootstrap sampling required besides nboot if any sampling is rejected. |
boot.seed |
Initial seed for the bootstrap random sampling. |
silent |
A parameter to disable/enable verbose messages. |
epos |
Error rate of false positive errors. |
eneg |
Error rate of false negative errors. |
A TRONCO compliant object with reconstructed model
data(test_dataset_no_hypos) recon = tronco.edmonds(test_dataset_no_hypos, nboot = 1)
data(test_dataset_no_hypos) recon = tronco.edmonds(test_dataset_no_hypos, nboot = 1)
Reconstruct a progression model using Gabow algorithm combined with probabilistic causation. For details and examples regarding the inference process and on the algorithm implemented in the package, we refer to the Vignette Section 6.
tronco.gabow( data, regularization = "no_reg", score = "pmi", do.boot = TRUE, nboot = 100, pvalue = 0.05, min.boot = 3, min.stat = TRUE, boot.seed = NULL, silent = FALSE, epos = 0, eneg = 0, do.raising = TRUE )
tronco.gabow( data, regularization = "no_reg", score = "pmi", do.boot = TRUE, nboot = 100, pvalue = 0.05, min.boot = 3, min.stat = TRUE, boot.seed = NULL, silent = FALSE, epos = 0, eneg = 0, do.raising = TRUE )
data |
A TRONCO compliant dataset. |
regularization |
Select the regularization for the likelihood estimation, e.g., BIC, AIC. |
score |
Select the score for the estimation of the best tree, e.g., pointwise mutual information (pmi), conditional entropy (entropy). |
do.boot |
A parameter to disable/enable the estimation of the error rates given the reconstructed model. |
nboot |
Number of bootstrap sampling (with rejection) to be performed when estimating the selective advantage scores. |
pvalue |
Pvalue to accept/reject the valid selective advantage relations. |
min.boot |
Minimum number of bootstrap sampling to be performed. |
min.stat |
A parameter to disable/enable the minimum number of bootstrap sampling required besides nboot if any sampling is rejected. |
boot.seed |
Initial seed for the bootstrap random sampling. |
silent |
A parameter to disable/enable verbose messages. |
epos |
Error rate of false positive errors. |
eneg |
Error rate of false negative errors. |
do.raising |
Whether or not to use the raising condition as a prior. |
A TRONCO compliant object with reconstructed model
data(test_dataset_no_hypos) recon = tronco.gabow(test_dataset_no_hypos, nboot = 1)
data(test_dataset_no_hypos) recon = tronco.gabow(test_dataset_no_hypos, nboot = 1)
Perform a k-fold cross-validation using the function bn.cv to estimate the entropy loss. For details and examples regarding the statistical assessment of an inferred model, we refer to the Vignette Section 7.
tronco.kfold.eloss( x, models = names(as.models(x)), runs = 10, k = 10, silent = FALSE )
tronco.kfold.eloss( x, models = names(as.models(x)), runs = 10, k = 10, silent = FALSE )
x |
A reconstructed model (the output of tronco.capri or tronco.caprese) |
models |
The names of the selected regularizers (bic, aic or caprese) |
runs |
a positive integer number, the number of times cross-validation will be run |
k |
a positive integer number, the number of groups into which the data will be split |
silent |
A parameter to disable/enable verbose messages. |
data(test_model) tronco.kfold.eloss(test_model, k = 2, runs = 2)
data(test_model) tronco.kfold.eloss(test_model, k = 2, runs = 2)
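The cross-validation statistics are attached to the model; a sketch, assuming the as.kfold.eloss accessor and that the bundled test_model_kfold object already carries k-fold results:

data(test_model_kfold)
# summary of the entropy loss estimated via k-fold cross-validation
as.kfold.eloss(test_model_kfold)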
Perform a k-fold cross-validation using the function bn.cv and scan every node to estimate its posterior classification error.
tronco.kfold.posterr( x, models = names(as.models(x)), events = as.events(x), runs = 10, k = 10, cores.ratio = 1, silent = FALSE )
tronco.kfold.posterr( x, models = names(as.models(x)), events = as.events(x), runs = 10, k = 10, cores.ratio = 1, silent = FALSE )
x |
A reconstructed model (the output of tronco.capri) |
models |
The names of the selected regularizers (bic, aic or caprese) |
events |
a list of events. |
runs |
a positive integer number, the number of times cross-validation will be run |
k |
a positive integer number, the number of groups into which the data will be split |
cores.ratio |
Percentage of cores to use. coresRate * (numCores - 1) |
silent |
A parameter to disable/enable verbose messages. |
data(test_model) tronco.kfold.posterr(test_model, k = 2, runs = 2, cores.ratio = 0)
data(test_model) tronco.kfold.posterr(test_model, k = 2, runs = 2, cores.ratio = 0)
Perform a k-fold cross-validation using the function bn.cv and scan every node to estimate its prediction error. For details and examples regarding the statistical assessment of an inferred model, we refer to the Vignette Section 7.
tronco.kfold.prederr( x, models = names(as.models(x)), events = as.events(x), runs = 10, k = 10, cores.ratio = 1, silent = FALSE )
tronco.kfold.prederr( x, models = names(as.models(x)), events = as.events(x), runs = 10, k = 10, cores.ratio = 1, silent = FALSE )
x |
A reconstructed model (the output of tronco.capri) |
models |
The names of the selected regularizers (bic, aic or caprese) |
events |
a list of events. |
runs |
a positive integer number, the number of times cross-validation will be run |
k |
a positive integer number, the number of groups into which the data will be split |
cores.ratio |
Percentage of cores to use. coresRate * (numCores - 1) |
silent |
A parameter to disable/enable verbose messages. |
data(test_model) tronco.kfold.prederr(test_model, k = 2, runs = 2, cores.ratio = 0)
data(test_model) tronco.kfold.prederr(test_model, k = 2, runs = 2, cores.ratio = 0)
tronco.pattern.plot: plot a genotype
tronco.pattern.plot( x, group = as.events(x), to, gap.cex = 1, legend.cex = 1, label.cex = 1, title = paste(to[1], to[2]), mode = "barplot" )
tronco.pattern.plot( x, group = as.events(x), to, gap.cex = 1, legend.cex = 1, label.cex = 1, title = paste(to[1], to[2]), mode = "barplot" )
x |
A TRONCO compliant dataset |
group |
A list of events (see as.events() for details) |
to |
A target event |
gap.cex |
cex parameter for gap |
legend.cex |
cex parameter for legend |
label.cex |
cex parameter for label |
title |
Plot title. Default paste(to[1], to[2]) |
mode |
can be 'circos' or 'barplot' |
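No example is shipped with this page; a hedged sketch, assuming the TET2 insertion/deletion event used in the which.samples example is present in the dataset:

data(test_dataset)
# compare the whole set of events (the default group) against the TET2 'ins_del' target
tronco.pattern.plot(test_dataset,
    group = as.events(test_dataset),
    to = c('TET2', 'ins_del'),
    legend.cex = 0.8,
    mode = 'barplot')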
Plots a progression model from a reconstructed dataset. For details and examples regarding the visualization of an inferred model, we refer to the Vignette Section 7.
tronco.plot( x, models = names(x$model), fontsize = NA, height = 2, width = 3, height.logic = 1, pf = FALSE, disconnected = FALSE, scale.nodes = NA, title = as.description(x), confidence = NA, p.min = 0.05, legend = TRUE, legend.cex = 1, edge.cex = 1, label.edge.size = NA, expand = TRUE, genes = NULL, relations.filter = NA, edge.color = "black", pathways.color = "Set1", file = NA, legend.pos = "bottom", pathways = NULL, lwd = 3, samples.annotation = NA, export.igraph = FALSE, create.new.dev = TRUE, ... )
tronco.plot( x, models = names(x$model), fontsize = NA, height = 2, width = 3, height.logic = 1, pf = FALSE, disconnected = FALSE, scale.nodes = NA, title = as.description(x), confidence = NA, p.min = 0.05, legend = TRUE, legend.cex = 1, edge.cex = 1, label.edge.size = NA, expand = TRUE, genes = NULL, relations.filter = NA, edge.color = "black", pathways.color = "Set1", file = NA, legend.pos = "bottom", pathways = NULL, lwd = 3, samples.annotation = NA, export.igraph = FALSE, create.new.dev = TRUE, ... )
x |
A reconstructed model (the output of the inference by a tronco function) |
models |
A vector containing the names of the algorithms used (caprese, capri_bic, etc) |
fontsize |
For node names. Default NA for automatic rescaling |
height |
Proportion node height - node width. Default height 2 |
width |
Proportion node height - node width. Default width 3 |
height.logic |
Height of logical nodes. Default 1 |
pf |
Should I print Prima Facie? Default False |
disconnected |
Should I print disconnected nodes? Default False |
scale.nodes |
Node scaling coefficient (based on node frequency). Default NA (autoscale) |
title |
Title of the plot. Default as.description(x) |
confidence |
Should I add confidence information? None if NA |
p.min |
p-value cutoff. Default 0.05 |
legend |
Should I visualise the legend? |
legend.cex |
CEX value for legend. Default 1.0 |
edge.cex |
CEX value for edge labels. Default 1.0 |
label.edge.size |
Size of edge labels. Default NA for automatic rescaling |
expand |
Should I expand hypotheses? Default TRUE |
genes |
Visualise only genes in this list. Default NULL, visualise all. |
relations.filter |
Filter relations to display according to this function. Default NA |
edge.color |
Edge color. Default 'black' |
pathways.color |
RColorBrewer color set for pathways. Default 'Set1'. |
file |
String containing filename for PDF output. If NA no PDF output will be provided |
legend.pos |
Legend position. Default 'bottom'. |
pathways |
A vector containing pathways information as described in as.patterns() |
lwd |
Edge base lwd. Default 3 |
samples.annotation |
List of samples to search for events in the model. |
export.igraph |
If TRUE export the generated igraph object |
create.new.dev |
If TRUE create a new graphical device when calling tronco.plot. Set this to FALSE, e.g., if you do not wish to create a new device when executing the command with export.igraph = TRUE. |
... |
Additional arguments for RGraphviz plot function |
Information about the reconstructed model
data(test_model) tronco.plot(test_model)
data(test_model) tronco.plot(test_model)
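A slightly richer sketch, rescaling nodes by frequency, shrinking the legend and exporting the underlying igraph object (parameter names as in the table above):

data(test_model)
# scale node size with event frequency and keep a handle to the igraph representation
g = tronco.plot(test_model, scale.nodes = 0.6, legend.cex = 0.7, export.igraph = TRUE)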
Reconstruct a progression model using Prim algorithm combined with probabilistic causation. For details and examples regarding the inference process and on the algorithm implemented in the package, we refer to the Vignette Section 6.
tronco.prim( data, regularization = "no_reg", do.boot = TRUE, nboot = 100, pvalue = 0.05, min.boot = 3, min.stat = TRUE, boot.seed = NULL, silent = FALSE, epos = 0, eneg = 0 )
tronco.prim( data, regularization = "no_reg", do.boot = TRUE, nboot = 100, pvalue = 0.05, min.boot = 3, min.stat = TRUE, boot.seed = NULL, silent = FALSE, epos = 0, eneg = 0 )
data |
A TRONCO compliant dataset. |
regularization |
Select the regularization for the likelihood estimation, e.g., BIC, AIC. |
do.boot |
A parameter to disable/enable the estimation of the error rates given the reconstructed model. |
nboot |
Number of bootstrap sampling (with rejection) to be performed when estimating the selective advantage scores. |
pvalue |
Pvalue to accept/reject the valid selective advantage relations. |
min.boot |
Minimum number of bootstrap sampling to be performed. |
min.stat |
A parameter to disable/enable the minimum number of bootstrap sampling required besides nboot if any sampling is rejected. |
boot.seed |
Initial seed for the bootstrap random sampling. |
silent |
A parameter to disable/enable verbose messages. |
epos |
Error rate of false positive errors. |
eneg |
Error rate of false negative errors. |
A TRONCO compliant object with reconstructed model
data(test_dataset_no_hypos) recon = tronco.prim(test_dataset_no_hypos, nboot = 1)
data(test_dataset_no_hypos) recon = tronco.prim(test_dataset_no_hypos, nboot = 1)
Print to console a short report of a dataset 'x', which should be a TRONCO compliant dataset - see is.compliant.
view(x, view = 5)
view(x, view = 5)
x |
A TRONCO compliant dataset. |
view |
The number of events and samples to show in the preview (default 5). |
data(test_dataset) view(test_dataset)
data(test_dataset) view(test_dataset)
Return a list of samples with specified alteration
which.samples(x, gene, type, neg = FALSE)
which.samples(x, gene, type, neg = FALSE)
x |
A TRONCO compliant dataset. |
gene |
A list of gene names |
type |
A list of types |
neg |
If FALSE return the list, if TRUE return as.samples() - list |
A list of samples
data(test_dataset) which.samples(test_dataset, 'TET2', 'ins_del') which.samples(test_dataset, 'TET2', 'ins_del', neg=TRUE)
data(test_dataset) which.samples(test_dataset, 'TET2', 'ins_del') which.samples(test_dataset, 'TET2', 'ins_del', neg=TRUE)
XOR hypothesis
XOR(...)
XOR(...)
... |
Atoms of the hard exclusive pattern given either as labels or as partially lifted vectors. |
Vector to be added to the lifted genotype resolving the hard exclusive pattern
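A minimal sketch of how such a pattern is typically attached to a dataset, assuming the hypothesis.add function (part of TRONCO) and that both genes below are present in test_dataset:

data(test_dataset)
# hard-exclusivity (XOR) pattern between two genes, added as a testable hypothesis
test_hypo = hypothesis.add(test_dataset, 'XOR_EZH2_TET2', XOR('EZH2', 'TET2'))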