This note shares an experiment comparing the performance of a number of data processing systems available in
R. Our example problem is finding the top-ranking item per group, with groups defined by three string columns and order defined by a single numeric column. This is a common and frequently needed task.
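For concreteness, the task can be sketched in base R (the data and column names here are illustrative, not taken from the benchmark itself):

```r
# Example data: three string grouping columns and one numeric ordering column.
d <- data.frame(
  g1 = c("a", "a", "b", "b"),
  g2 = c("x", "x", "y", "y"),
  g3 = c("p", "p", "q", "q"),
  v  = c(1, 5, 3, 2),
  stringsAsFactors = FALSE
)

# Sort by the group keys, with v descending within each group.
ord <- order(d$g1, d$g2, d$g3, -d$v)
d <- d[ord, , drop = FALSE]

# Keep the first (largest-v) row of each (g1, g2, g3) group.
key <- paste(d$g1, d$g2, d$g3, sep = "\r")
top_per_group <- d[!duplicated(key), , drop = FALSE]
```

The benchmarked systems each have their own idiom for this, but all of them are computing the equivalent of `top_per_group` above.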
Continue reading Timings of a Grouped Rank Filter Task
Some more Practical Data Science with R news.
Practical Data Science with R is the book we wish we had when we started in data science. Practical Data Science with R, Second Edition is the revision of that book with the packages we wish had been available at that time (in particular
wrapr). A second edition also lets us correct some omissions, such as not demonstrating
For your part: please help us get the word out about this book. Practical Data Science with R, Second Edition, R in Action, Second Edition, and Think Like a Data Scientist are Manning’s August 20th 2018 “Deal of the Day” (use code
dotd082018au at https://www.manning.com/dotd).
For our part, we are busy revising chapters and setting up a new GitHub repository for examples, code, and other reader resources.
The data.table R package is really good at sorting. Below is a comparison of it versus
dplyr for a range of problem sizes.
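A minimal sketch of such a timing comparison, assuming both packages are installed (the problem size and columns here are illustrative):

```r
library(data.table)
library(dplyr)

# Build a moderately large example table.
n <- 1e6
d <- data.frame(
  x = runif(n),
  y = sample(letters, n, replace = TRUE),
  stringsAsFactors = FALSE
)
dt <- as.data.table(d)

# data.table's sort:
system.time(r1 <- dt[order(x, y)])

# dplyr's arrange:
system.time(r2 <- arrange(d, x, y))
```

The full comparison in the linked article repeats such timings over a range of problem sizes.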
Continue reading data.table is Really Good at Sorting
Derek Jones recently discussed a possible future for the
R ecosystem in “StatsModels: the first nail in R’s coffin”.
This got me thinking on the future of
CRAN (which I consider vital to
R, and vital in distributing our work) in the era of super-popular meta-packages. Meta-packages are convenient, but they have a profoundly negative impact on the packages they exclude.
tidyverse advertises a popular
R universe where the vital package
data.table never existed.
tidymodels is shaping up to be a popular universe where our own package
vtreat never existed, except possibly as a footnote to
Users currently (with some luck) discover packages like ours and then (because they trust
CRAN) feel able to try them. With popular walled gardens, that becomes much less likely. It is one thing for a standard package to duplicate another package (that is hard to avoid, and is how work legitimately competes); it is quite another for a big-brand meta-package to pre-pick winners (and losers).
All I can say is: please give
vtreat a chance and a try. It is a package for preparing messy real-world data for predictive modeling. In addition to re-coding high cardinality categorical variables (into what we call effect-codes after Cohen, or impact-codes), it deals with missing values, can be parallelized, can be run on databases, and has years of production experience baked in.
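A small sketch of typical vtreat use on a numeric outcome (the data and column names are invented for illustration):

```r
library(vtreat)

# Example frame: x_cat is a high-cardinality categorical variable,
# x_num is numeric with missing values, y is the numeric outcome.
set.seed(2018)
d <- data.frame(
  x_cat = sample(paste0("level_", 1:20), 100, replace = TRUE),
  x_num = c(rnorm(90), rep(NA_real_, 10)),
  y     = rnorm(100),
  stringsAsFactors = FALSE
)

# Design a treatment plan from the data, then apply it to get a
# purely numeric, missing-value-free frame ready for modeling.
plan <- designTreatmentsN(d, varlist = c("x_cat", "x_num"), outcomename = "y")
d_treated <- prepare(plan, d)
```

In real work the treatment plan should be designed on data disjoint from the data it is later applied to (or via vtreat's cross-frame methods), to avoid over-fit.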
Some places to start with
Not a full
R article, but a quick note demonstrating by example the advantage of being able to collect many expressions and pack them into a single
Continue reading Collecting Expressions in R
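As a rough base-R illustration of the idea, several expressions can be collected into a named list and then applied in one pass:

```r
# Collect derived-column expressions into a single named list.
d <- data.frame(l = c(2, 3), w = c(4, 5))
exprs <- list(
  area      = quote(l * w),
  perimeter = quote(2 * (l + w))
)

# Evaluate each collected expression against the data frame.
for (nm in names(exprs)) {
  d[[nm]] <- eval(exprs[[nm]], envir = d)
}
```

The linked note shows the same idea in a more convenient packaged form.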
rquery and rqdatatable are new
R packages for data wrangling, either at scale (in databases, or in big data systems such as Apache Spark) or in-memory. The packages speed up both execution (through optimizations) and development (through a good mental model and up-front error checking) for data wrangling tasks.
Win-Vector LLC‘s John Mount will be speaking on the
rquery and rqdatatable packages at the East Bay R Language Beginners Group, Tuesday, August 7, 2018 (Oakland, CA).
Continue reading John Mount speaking on rquery and rqdatatable
In this note we will show how to speed up work in
R by partitioning data and using process-level parallelization. We will show the technique with several packages, including
dplyr. The methods shown will also work with base-
R and other packages.
For each of the above packages we speed up work by using
wrapr::execute_parallel which in turn uses
wrapr::partition_tables to partition unrelated
data.frame rows, and then distributes the pieces to different processors for execution.
rqdatatable::ex_data_table_parallel conveniently bundles all of these steps together when working with
The partitioning is specified by the user preparing a grouping column that tells the system which sets of rows must be kept together in a correct calculation. We are going to try to demonstrate everything with simple code examples, and minimal discussion.
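The same partition-then-parallelize pattern can be sketched with base R's parallel package (this does not show wrapr's internals; the grouping column, data, and per-partition function are illustrative):

```r
library(parallel)

# Example data with a grouping column g; rows sharing a g value
# must stay together for the calculation to be correct.
d <- data.frame(
  g = rep(c("a", "b", "c", "d"), each = 25),
  x = runif(100)
)

# Partition: split rows by the grouping column.
parts <- split(d, d$g)

# Distribute the partitions to worker processes.
cl <- makeCluster(2)
results <- parLapply(cl, parts, function(di) {
  di$x_centered <- di$x - mean(di$x)  # a per-group calculation
  di
})
stopCluster(cl)

# Re-assemble the processed partitions.
d_result <- do.call(rbind, results)
```

wrapr::partition_tables and wrapr::execute_parallel bundle this partition/distribute/re-assemble cycle so the user only supplies the grouping column and the per-partition work.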
Continue reading Speed up your R Work
We are pleased to announce that seplyr version 0.5.8 is now available on CRAN.
seplyr is an R package that provides a thin wrapper around elements of the dplyr package and (now with version 0.5.8) the tidyr package. The intent is to give the part-time R user the ability to easily program over functions from the popular dplyr and tidyr packages. Our assumption is always that a data scientist most often comes to R to work with data, not to tinker with the programming language itself.
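A small sketch of the intended style, assuming seplyr's standard-evaluation adapters (group_by_se, summarize_se) behave as described in its documentation:

```r
library(seplyr)

d <- data.frame(
  g = c("a", "a", "b"),
  x = c(1, 2, 5),
  stringsAsFactors = FALSE
)

# Column names are ordinary character values, so the calling program
# can compute them or store them in variables before use.
grouping_cols <- "g"
d_grouped <- group_by_se(d, grouping_cols)
summarize_se(d_grouped, "mean_x" := "mean(x)")
```

Because everything is passed as strings, wrapping such steps in reusable functions requires none of the quoting/unquoting machinery that non-standard evaluation otherwise demands.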
Continue reading seplyr 0.5.8 Now Available on CRAN