Software for reckoning AnVIL/Terra usage

Introduction

Detailed billing information is essential for cost and resource planning in cloud-based analysis projects, but it can be difficult to obtain. The goal of this software is to help users of the Terra/AnVIL platform access these data as easily as possible.

The Google Cloud Platform console can be used to acquire information at varying levels of detail. For example, it is simple to generate a display like the following.

Cost track from Google console.

However, the cost track here sums up charges for various activities related to CPU usage, storage, and network use. Our aim is to provide R functions that help present information on charges arising from the use of AnVIL.

To whet the appetite, we will show how to run an exploratory app that looks like:

Early view of reckoning app.

Installation

Install the AnVILBilling package with

if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager", repos = "https://cran.r-project.org")
BiocManager::install("AnVILBilling")

Once installed, load the package with

library(AnVILBilling)

Setup

The functions in this vignette require a user to connect the billing export in the Google Cloud Platform project associated with Terra/AnVIL to a BigQuery dataset.

General information on this process can be found here:

https://cloud.google.com/billing/docs/how-to/export-data-bigquery

In order to set this up with the AnVIL/Terra system:

  1. Create a new project in Google under the billing account linked to the Terra project
  2. Create a BigQuery dataset to store the billing export
  3. Go to the Billing tab in the Google Cloud console and enable export to this dataset
  4. Make sure the user of this software has the BigQuery scope on the billing project

Once this is accomplished you will be able to see

exportview

with values appropriate to your project and account configuration substituted for ‘landmarkMark2’ (the compute project name), ‘bjbilling’ (the Google project with BigQuery scope that is used to transmit cost data on landmarkMark2 to BigQuery), and ‘anvilbilling’ (the BigQuery dataset name where next-day cost values are stored).

Obtaining billing data

Overview

Billing data are generally available one day after charges are incurred. They are stored in a partitioned BigQuery table and can be queried with the bigrquery package.
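For orientation, a direct query of the export table with bigrquery might look like the following sketch. The project, dataset, and table names are hypothetical placeholders, and running it requires BigQuery authentication; the AnVILBilling functions described below spare you from writing such SQL by hand.

```r
library(bigrquery)

# All identifiers here are hypothetical placeholders; substitute the
# billing project, dataset, and export table from your own configuration.
sql <- "
  SELECT sku.description AS sku, SUM(cost) AS total_cost
  FROM `bjbilling.anvilbilling.gcp_billing_export_v1_XXXXXX`
  WHERE DATE(_PARTITIONTIME) BETWEEN '2020-01-28' AND '2020-01-29'
  GROUP BY sku
  ORDER BY total_cost DESC
"
tbl <- bq_project_query("bjbilling", sql)  # prompts for Google authentication
bq_table_download(tbl)                     # retrieve results as a tibble
```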

Setting up a request

In order to generate a request you need:

  1. start: date of start of reckoning
  2. end: date of end of reckoning
  3. project: GCP project id
  4. dataset: GCP dataset id for billing data in BigQuery
  5. table: GCP table for billing data in BigQuery

Then you can use the function

setup_billing_request(start, end, project, dataset, table, billing_code)

to create a request.

Once you have a request object, then you can get the billing data associated with that request using the reckon() function on your billing request.
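Concretely, a request and its reckoning might be set up as in the following sketch. All values shown are hypothetical placeholders, and reckon() will query BigQuery, so authentication and a configured billing export are required.

```r
library(AnVILBilling)

# Placeholder values only; substitute your own billing export configuration
req <- setup_billing_request(
  start        = "2020-01-28",   # date of start of reckoning
  end          = "2020-01-29",   # date of end of reckoning
  project      = "bjbilling",    # GCP project id
  dataset      = "anvilbilling", # BigQuery dataset holding the export
  table        = "gcp_billing_export_v1_XXXXXX",  # hypothetical table name
  billing_code = "bjbilling"
)
rec <- reckon(req)  # queries BigQuery; needs authentication
```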

Output

The result of a reckoning on a billing request is an instance of avReckoning.

We took a snapshot of usage in a project we work on, and it is available as demo_rec. This request represents one day of usage in AnVIL/Terra.

suppressPackageStartupMessages({
library(AnVILBilling)
library(dplyr)
library(magrittr)
library(BiocStyle)
})

demo_rec
## AnVIL reckoning info for project  bjbilling 
##   starting 2020-01-28, ending 2020-01-29.
## There are  1599  records.
## Available keys:
##  [1] "goog-resource-type"               "goog-metric-domain"              
##  [3] "goog-dataproc-cluster-name"       "goog-dataproc-cluster-uuid"      
##  [5] "goog-dataproc-location"           "cromwell-workflow-id"            
##  [7] "goog-pipelines-worker"            "terra-submission-id"             
##  [9] "wdl-task-name"                    "security"                        
## [11] "ad-anvil_devs"                    "ad-auth_anvil_anvil_gtex_v8_hg38"
## --- 
## Use ab_reckoning() for full table

The available keys for the billing object are shown.

For Terra, three of the most useful keys are:

  1. terra-submission-id : this key is associated with a Cromwell workflow execution
  2. cromwell-workflow-id : this key is associated with a job in a workflow, so if you apply a workflow to multiple inputs, you will have one terra-submission-id and multiple cromwell-workflow-ids
  3. goog-dataproc-cluster-name : this key is associated with a Jupyter notebook or RStudio cluster on Terra. The user of these resources can be determined using Bioconductor’s AnVIL package.

The following code, to be run while the cluster is in use, retrieves its runtime details:

library(AnVIL)
leo = Leonardo()            # service object for the Leonardo API
leo$getRuntime(clustername) # clustername holds the goog-dataproc-cluster-name value

Drilling down

Given a key type, we want to know associated values.

v = getValues(demo_rec@reckoning, "terra-submission-id")
v
## [1] "terra-196d3163-4eef-46e8-a7e6-e71c0012003d"

To understand activities associated with this submission, we subset the table.

s = subsetByKeyValue(demo_rec@reckoning, "terra-submission-id", v)
s
## # A tibble: 955 × 17
##    billing_account_id   service          sku              usage_start_time   
##    <chr>                <list>           <list>           <dttm>             
##  1 015E39-38569D-3CC771 <named list [2]> <named list [2]> 2020-01-28 20:00:00
##  2 015E39-38569D-3CC771 <named list [2]> <named list [2]> 2020-01-28 20:00:00
##  3 015E39-38569D-3CC771 <named list [2]> <named list [2]> 2020-01-28 20:00:00
##  4 015E39-38569D-3CC771 <named list [2]> <named list [2]> 2020-01-28 20:00:00
##  5 015E39-38569D-3CC771 <named list [2]> <named list [2]> 2020-01-28 20:00:00
##  6 015E39-38569D-3CC771 <named list [2]> <named list [2]> 2020-01-28 20:00:00
##  7 015E39-38569D-3CC771 <named list [2]> <named list [2]> 2020-01-28 20:00:00
##  8 015E39-38569D-3CC771 <named list [2]> <named list [2]> 2020-01-28 20:00:00
##  9 015E39-38569D-3CC771 <named list [2]> <named list [2]> 2020-01-28 20:00:00
## 10 015E39-38569D-3CC771 <named list [2]> <named list [2]> 2020-01-28 20:00:00
## # ℹ 945 more rows
## # ℹ 13 more variables: usage_end_time <dttm>, project <list>, labels <list>,
## #   system_labels <list>, location <list>, export_time <dttm>, cost <dbl>,
## #   currency <chr>, currency_conversion_rate <dbl>, usage <list>,
## #   credits <list>, invoice <list>, cost_type <chr>

The following fields are available in this object:

names(s)
##  [1] "billing_account_id"       "service"                 
##  [3] "sku"                      "usage_start_time"        
##  [5] "usage_end_time"           "project"                 
##  [7] "labels"                   "system_labels"           
##  [9] "location"                 "export_time"             
## [11] "cost"                     "currency"                
## [13] "currency_conversion_rate" "usage"                   
## [15] "credits"                  "invoice"                 
## [17] "cost_type"

You can drill down further to see which products were used during the submission:

AnVILBilling:::getSkus(s)
##  [1] "Storage PD Capacity"                                                   
##  [2] "SSD backed PD Capacity"                                                
##  [3] "Network Inter Zone Ingress"                                            
##  [4] "Network Intra Zone Ingress"                                            
##  [5] "External IP Charge on a Standard VM"                                   
##  [6] "Custom Instance Ram running in Americas"                               
##  [7] "Custom Instance Core running in Americas"                              
##  [8] "Licensing Fee for Shielded COS (CPU cost)"                             
##  [9] "Licensing Fee for Shielded COS (RAM cost)"                             
## [10] "Network Internet Ingress from APAC to Americas"                        
## [11] "Network Internet Ingress from EMEA to Americas"                        
## [12] "Network Google Egress from Americas to Americas"                       
## [13] "Network Internet Ingress from China to Americas"                       
## [14] "Network Google Ingress from Americas to Americas"                      
## [15] "Network Internet Egress from Americas to Americas"                     
## [16] "Network Internet Ingress from Americas to Americas"                    
## [17] "Network Internet Ingress from Australia to Americas"                   
## [18] "Network HTTP Load Balancing Ingress from Load Balancer"                
## [19] "Network Inter Region Ingress from Americas to Americas"                
## [20] "Network Egress via Carrier Peering Network - Americas Based"           
## [21] "Network Ingress via Carrier Peering Network - Americas Based"          
## [22] "Licensing Fee for Container-Optimized OS from Google (CPU cost)"       
## [23] "Licensing Fee for Container-Optimized OS from Google (RAM cost)"       
## [24] "Licensing Fee for Container-Optimized OS - PCID Whitelisted (CPU cost)"
## [25] "Licensing Fee for Container-Optimized OS - PCID Whitelisted (RAM cost)"

You can also get the cost for a workflow using:

data(demo_rec) # loads demo_rec
v = getValues(demo_rec@reckoning, "terra-submission-id")[1] # for instance
getSubmissionCost(demo_rec@reckoning,v)
## [1] 0.054044
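In essence this value is the sum of the cost column over the rows labeled with the given submission id. A toy base-R illustration of the idea (not the package's internal code):

```r
# Toy data frame standing in for a subset of the billing table
toy <- data.frame(
  submission = c("terra-aaa", "terra-aaa", "terra-bbb"),
  cost       = c(0.03, 0.02, 0.10)
)
# Sum the costs of rows carrying the submission id of interest
sum(toy$cost[toy$submission == "terra-aaa"])  # 0.05
```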

And the ram usage as well:

data(demo_rec) # loads demo_rec
v = getValues(demo_rec@reckoning, "terra-submission-id")[1] # for instance
getSubmissionRam(demo_rec@reckoning,v)
##                                 submissionID      workflow
## 1 terra-196d3163-4eef-46e8-a7e6-e71c0012003d runterratrial
## 2 terra-196d3163-4eef-46e8-a7e6-e71c0012003d runterratrial
## 3 terra-196d3163-4eef-46e8-a7e6-e71c0012003d runterratrial
## 4 terra-196d3163-4eef-46e8-a7e6-e71c0012003d runterratrial
## 5 terra-196d3163-4eef-46e8-a7e6-e71c0012003d runterratrial
##                                      cromwellID
## 1 cromwell-4dde8ce1-a8e5-47ba-a261-120ae8c7556c
## 2 cromwell-8edb23d6-a7d2-4b1f-96b4-496e96c1d707
## 3 cromwell-b1dcdbe1-b4ec-4428-8570-e2ab883087d0
## 4 cromwell-664b3519-7c7a-42ba-bb00-562ea1f650fd
## 5 cromwell-176a9f09-483c-4eb1-abf0-e5de2fdf36a7
##                                       sku       amount         unit
## 1 Custom Instance Ram running in Americas 1.676111e+12 byte-seconds
## 2 Custom Instance Ram running in Americas 1.737314e+12 byte-seconds
## 3 Custom Instance Ram running in Americas 1.607392e+12 byte-seconds
## 4 Custom Instance Ram running in Americas 1.573032e+12 byte-seconds
## 5 Custom Instance Ram running in Americas 1.582695e+12 byte-seconds
##     pricingUnit amountInPricingUnit
## 1 gibibyte hour           0.4336111
## 2 gibibyte hour           0.4494444
## 3 gibibyte hour           0.4158333
## 4 gibibyte hour           0.4069444
## 5 gibibyte hour           0.4094444
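The amountInPricingUnit column is the raw usage converted from byte-seconds to gibibyte hours, i.e. divided by 2^30 bytes per GiB and 3600 seconds per hour. Checking the first row:

```r
# byte-seconds -> gibibyte hours: divide by (2^30 bytes/GiB * 3600 s/hour)
1.676111e12 / (2^30 * 3600)  # ~0.4336111, matching amountInPricingUnit, row 1
```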

Using the exploratory app

To simplify reporting on costs, we have introduced browse_reck, which authenticates the user to Google BigQuery and uses user-specified inputs to identify the interval of days for which usage data are sought. The function can be called with no arguments, or you can supply the email address of the Google identity to be used in working with Google Cloud Platform projects and BigQuery.
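A minimal invocation, with a placeholder email address:

```r
library(AnVILBilling)
browse_reck()                   # will prompt for a Google identity
browse_reck("user@example.org") # or supply the identity directly
```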

Session Information

sessionInfo()
## R version 4.4.1 (2024-06-14)
## Platform: x86_64-pc-linux-gnu
## Running under: Ubuntu 24.04.1 LTS
## 
## Matrix products: default
## BLAS:   /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3 
## LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/libopenblasp-r0.3.26.so;  LAPACK version 3.12.0
## 
## locale:
##  [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
##  [3] LC_TIME=en_US.UTF-8        LC_COLLATE=C              
##  [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
##  [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C            
## [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       
## 
## time zone: Etc/UTC
## tzcode source: system (glibc)
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
## [1] magrittr_2.0.3      dplyr_1.1.4         AnVILBilling_1.17.0
## [4] BiocStyle_2.33.1   
## 
## loaded via a namespace (and not attached):
##  [1] shinytoastr_2.2.0   tidyr_1.3.1         plotly_4.10.4      
##  [4] sass_0.4.9          utf8_1.2.4          generics_0.1.3     
##  [7] digest_0.6.37       timechange_0.3.0    evaluate_1.0.1     
## [10] grid_4.4.1          fastmap_1.2.0       jsonlite_1.8.9     
## [13] DBI_1.2.3           promises_1.3.0      BiocManager_1.30.25
## [16] httr_1.4.7          purrr_1.0.2         fansi_1.0.6        
## [19] viridisLite_0.4.2   scales_1.3.0        lazyeval_0.2.2     
## [22] jquerylib_0.1.4     cli_3.6.3           shiny_1.9.1        
## [25] rlang_1.1.4         munsell_0.5.1       bit64_4.5.2        
## [28] cachem_1.1.0        yaml_2.3.10         tools_4.4.1        
## [31] gargle_1.5.2        colorspace_2.1-1    ggplot2_3.5.1      
## [34] httpuv_1.6.15       DT_0.33             buildtools_1.0.0   
## [37] vctrs_0.6.5         R6_2.5.1            mime_0.12          
## [40] lubridate_1.9.3     lifecycle_1.0.4     fs_1.6.4           
## [43] htmlwidgets_1.6.4   bit_4.5.0           pkgconfig_2.0.3    
## [46] pillar_1.9.0        bslib_0.8.0         later_1.3.2        
## [49] gtable_0.3.6        data.table_1.16.2   glue_1.8.0         
## [52] Rcpp_1.0.13         xfun_0.48           tibble_3.2.1       
## [55] tidyselect_1.2.1    sys_3.4.3           knitr_1.48         
## [58] xtable_1.8-4        htmltools_0.5.8.1   rmarkdown_2.28     
## [61] maketools_1.3.1     compiler_4.4.1      bigrquery_1.5.1