--- title: "Get started with pins" output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Get started with pins} %\VignetteEngine{knitr::rmarkdown} %\usepackage[utf8]{inputenc} --- ```{r, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) # Fake testing environment to get consistent hashes Sys.setenv("TESTTHAT" = "true") options(pins.quiet = FALSE) ``` The pins package helps you publish data sets, models, and other R objects, making it easy to share them across projects and with your colleagues. You can pin objects to a variety of "boards", including local folders (to share on a networked drive or with dropbox), Posit Connect, Amazon S3, and more. This vignette will introduce you to the basics of pins. ```{r} library(pins) ``` ## Getting started Every pin lives in a pin *board*, so you must start by creating a pin board. In this vignette I'll use a temporary board which is automatically deleted when your R session is over: ```{r} board <- board_temp() ``` In real-life, you'd pick a board depending on how you want to share the data. Here are a few options: ```{r, eval = FALSE} board <- board_local() # share data across R sessions on the same computer board <- board_folder("~/Dropbox") # share data with others using dropbox board <- board_folder("Z:\\my-team\pins") # share data using a shared network drive board <- board_connect() # share data with Posit Connect ``` ## Reading and writing data Once you have a pin board, you can write data to it with `pin_write()`: ```{r} mtcars <- tibble::as_tibble(mtcars) board %>% pin_write(mtcars, "mtcars") ``` The first argument is the object to save (usually a data frame, but it can be any R object), and the second argument gives the "name" of the pin. The name is basically equivalent to a file name: you'll use it when you later want to read the data from the pin. The only rule for a pin name is that it can't contain slashes. After you've pinned an object, you can read it back with `pin_read()`: ```{r} board %>% pin_read("mtcars") ``` You don't need to supply the file type when reading data from a pin because pins automatically stores the file type in the [metadata](#metadata). ## How and what to store as a pin As you can see from the output in the previous section, pins has chosen to save this example data to an `.rds` file. But you can choose another option depending on your goals: - `type = "rds"` uses `writeRDS()` to create a binary R data file. It can save any R object (including trained models) but it's only readable from R, not other languages. - `type = "csv"` uses `write.csv()` to create a CSV file. CSVs are plain text and can be read easily by many applications, but they only support simple columns (e.g. numbers, strings), can take up a lot of disk space, and can be slow to read. - `type = "parquet"` uses `nanoparquet::write_parquet()` to create a Parquet file. [Parquet](https://parquet.apache.org/) is a modern, language-independent, column-oriented file format for efficient data storage and retrieval. Parquet is an excellent choice for storing tabular data but requires the [nanoparquet](https://nanoparquet.r-lib.org/) package. - `type = "arrow"` uses `arrow::write_feather()` to create an Arrow/Feather file. - `type = "json"` uses `jsonlite::write_json()` to create a JSON file. Pretty much every programming language can read json files, but they only work well for nested lists. - `type = "qs"` uses `qs::qsave()` to create a binary R data file, like `writeRDS()`. 
Note that when the data lives elsewhere, pins takes care of downloading and caching so that it's only re-downloaded when needed. That said, most boards transmit pins over HTTP, and this is going to be slow and possibly unreliable for very large pins. As a general rule of thumb, we don't recommend using pins with files over 500 MB. If you find yourself routinely pinning data larger than this, you might need to reconsider your data engineering pipeline.

Storing your data/object as a pin works well when you write from a single source or process. It is _not_ appropriate when multiple sources or processes need to write to the same pin; since the pins package reads and writes files, it cannot manage concurrent writes.

- **Good** use for pins: an ETL pipeline that stores a model or summarized dataset once a day
- **Bad** use for pins: a Shiny app that collects data from users, who may be using the app at the same time

## Metadata

Every pin is accompanied by some metadata that you can access with `pin_meta()`:

```{r}
board %>% pin_meta("mtcars")
```

This shows you the metadata that's generated by default. This includes:

- `title`, a brief textual description of the dataset.
- an optional `description`, where you can provide more details.
- the date-time when the pin was `created`.
- the `file_size`, in bytes, of the underlying files.
- a unique `pin_hash` that you can supply to `pin_read()` to ensure that you're reading exactly the data that you expect.

When creating the pin, you can override the default description or provide additional metadata that is stored with the data:

```{r, echo=FALSE}
Sys.sleep(1)
```

```{r}
board %>% pin_write(mtcars,
  description = "Data extracted from the 1974 Motor Trend US magazine, and comprises fuel consumption and 10 aspects of automobile design and performance for 32 automobiles (1973–74 models).",
  metadata = list(
    source = "Henderson and Velleman (1981), Building multiple regression models interactively. Biometrics, 37, 391–411."
  )
)
board %>% pin_meta("mtcars")
```

While we'll do our best to keep the automatically generated metadata consistent over time, I'd recommend manually capturing anything you really care about in `metadata`.

## Versioning

In many situations it's useful to version pins, so that writing to an existing pin does not replace the existing data, but instead adds a new copy. There are two ways to turn versioning on:

-   When you create a board, you can turn versioning on for every pin in that board:

    ```{r, eval = FALSE}
    board2 <- board_temp(versioned = TRUE)
    ```

-   When you write a pin, you can specifically request that versioning be turned on for that pin:

    ```{r, eval = FALSE}
    board2 <- board_temp()
    board2 %>% pin_write(mtcars, versioned = TRUE)
    ```

Most boards have versioning on by default. The primary exception is `board_folder()`, since that stores data on your computer and there's no automated way to clean up the data you're saving.
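
If you do want versions on a folder board, you can opt in when you create it. A minimal sketch, assuming a hypothetical shared network path:

```{r, eval = FALSE}
# Opt in to versioning for a folder board; the path is just an example.
board_shared <- board_folder("Z:\\my-team\\pins", versioned = TRUE)
```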
Once you have turned versioning on, every `pin_write()` will create a new version:

```{r}
board2 <- board_temp(versioned = TRUE)
board2 %>% pin_write(1:5, name = "x", type = "rds")
board2 %>% pin_write(2:6, name = "x", type = "rds")
board2 %>% pin_write(3:7, name = "x", type = "rds")
```

You can list all the available versions with `pin_versions()`:

```{r}
board2 %>% pin_versions("x")
```

You can delete a specific older version with `pin_version_delete()` or sets of older versions with `pin_versions_prune()`.

By default, `pin_read()` will return the most recent version:

```{r}
board2 %>% pin_read("x")
```

But you can request an older version by supplying the `version` argument:

```{r, eval = FALSE}
board2 %>% pin_read("x", version = "20210520T173110Z-49519")
```

## Reading and writing files

So far we've focussed on `pin_write()` and `pin_read()`, which work with R objects. pins also provides the lower-level `pin_upload()` and `pin_download()`, which work with files on disk. You can use them to share types of data that are otherwise unsupported by pins.

`pin_upload()` works like `pin_write()` but instead of an R object you give it a vector of paths. I'll start by creating a few files in the temp directory:

```{r}
paths <- file.path(tempdir(), c("mtcars.csv", "alphabet.txt"))
write.csv(mtcars, paths[[1]])
writeLines(letters, paths[[2]])
```

Now I can upload those to the board:

```{r}
board %>% pin_upload(paths, "example")
```

`pin_download()` returns a vector of paths:

```{r}
board %>% pin_download("example")
```

It's now your job to handle them. You should treat these paths as internal implementation details --- never modify them and never save them for use outside of pins.

Note that you can't `pin_read()` something you pinned with `pin_upload()`:

```{r, error = TRUE}
board %>% pin_read("example")
```

But you can `pin_download()` something that you've pinned with `pin_write()`:

```{r}
board %>% pin_download("mtcars")
```

## Caching

The primary purpose of pins is to make it easy to share data. But pins is also designed to help you spend as little time as possible downloading data. `pin_read()` and `pin_download()` automatically cache remote pins: they maintain a local copy of the data (so it's fast) but always check that it's up-to-date (so your analysis doesn't use stale data).

Wouldn't it be nice if you could take advantage of this feature for any dataset on the internet? That's the idea behind `board_url()` --- you can assemble your own board from datasets, wherever they live on the internet. For example, this code creates a board containing a single pin, `penguins`, that refers to some fun data I found on GitHub:

```{r}
my_data <- board_url(c(
  "penguins" = "https://raw.githubusercontent.com/allisonhorst/palmerpenguins/master/inst/extdata/penguins_raw.csv"
))
```

You can read this data by combining `pin_download()` with `read.csv()`[^1]:

[^1]: Here I'm using `read.csv()` to reduce the dependencies of the pins package. For real code I'd recommend using `data.table::fread()` or `readr::read_csv()`.

```{r}
my_data %>%
  pin_download("penguins") %>%
  read.csv(check.names = FALSE) %>%
  tibble::as_tibble()
```

`board_url()` requires a bit more work than `download.file()` or similar, but it has a big payoff: the data will only be re-downloaded when it changes.
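
To make that payoff concrete, here's a minimal sketch that wraps the pipeline above in a helper (the name `read_penguins` is purely illustrative); repeated calls reuse the cached copy and only go back to the network when the file at the URL has changed:

```{r, eval = FALSE}
# Hypothetical helper around the pipeline above: pin_download() checks the
# remote file and reuses the local cache when it hasn't changed.
read_penguins <- function() {
  my_data %>%
    pin_download("penguins") %>%
    read.csv(check.names = FALSE) %>%
    tibble::as_tibble()
}

penguins <- read_penguins()  # downloads on first use
penguins <- read_penguins()  # later calls read from the local cache
```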