
vtreat up on CRAN!

Nina Zumel and I are proud to announce our R vtreat variable treatment library has just been accepted by CRAN!


It will take some time for the vtreat package to progress to various CRAN mirrors, but as of now you can install vtreat with the command:

install.packages('vtreat',
   repos='http://cran.r-project.org/')

This replaces the need to use devtools to install from the GitHub version, as in:

devtools::install_github('WinVector/vtreat')

The purpose of the vtreat library is to reliably prepare data for supervised machine learning. We try to leave as much as possible to the machine learning algorithms themselves, but we do cover the truly necessary and typically ignored precautions. The library is designed to produce a data.frame that is entirely numeric, and it takes common precautions to guard against the following real-world data issues (a minimal usage sketch follows the list):

  • Categorical variables with very many levels.

    We re-encode such variables as a family of indicator or dummy variables for common levels plus an additional impact code (also called “effects coded” in Jacob Cohen, Patricia Cohen, Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, 2nd edition, 1983). This allows principled use (including smoothing) of huge categorical variables (like zip-codes) when building models. This is critical for some libraries (such as randomForest, which has hard limits on the number of allowed levels).

  • Novel categorical levels.

    A common problem when deploying a classifier to production is encountering new levels (levels not seen during training) during model application. We deal with this by encoding categorical variables in a possibly redundant manner: reserving a dummy variable for every level (not the more common all-but-a-reference-level scheme). This is in fact the correct representation for regularized modeling techniques, and it lets us code novel levels as all dummies simultaneously zero (which is a reasonable thing to try). This encoding, while limited, is cheaper than the fully Bayesian solution of computing a weighted sum over previously seen levels during model application.

  • Missing/invalid values (NA, NaN, +/-Inf).

    Variables with these issues are re-coded as two columns. The first column is a clean copy of the variable (with missing/invalid values replaced with either zero or the grand mean, depending on the user's choice of the scale parameter). The second column is a dummy or indicator that marks if the replacement has been performed. This is simpler than imputation of missing values, and allows the downstream model to attempt to use missingness as a useful signal (which it often is in industrial data).

  • Extreme values.

    Variables can be restricted to stay in ranges seen during training. This can defend against some run-away classifier issues during model application.

  • Constant and near-constant variables.

    Variables that “don’t vary” or “nearly don’t vary” are suppressed.

  • Need for estimated single-variable model effect sizes and significances.

    It is a dirty secret that even popular machine learning techniques need some variable pruning (when exposed to very wide data frames, see here and here). We make the necessary effect size estimates and significances easily available and supply initial variable pruning.
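
To make the workflow concrete, here is a minimal sketch (the toy data and column names are ours, not from the package documentation) showing designTreatmentsN() and prepare() handling a categorical variable, a missing value, and a novel level at application time:

library(vtreat)

# toy training frame: a categorical variable, a numeric variable with
# a missing value, and a numeric outcome y
dTrain <- data.frame(
   x = c('a', 'a', 'b', 'b', 'c', 'c'),
   n = c(1, NA, 3, 4, 5, 6),
   y = c(1, 1, 2, 2, 3, 3)
)

# design the variable treatments on the training data (numeric outcome)
treatments <- designTreatmentsN(dTrain, c('x', 'n'), 'y')

# prepare() returns an entirely numeric frame: dummy/impact codes for x,
# a cleaned copy of n, and an indicator marking where n was replaced
dTrainTreated <- prepare(treatments, dTrain)

# a novel level 'd' (never seen during training) is coded with all
# dummy variables simultaneously zero
dApp <- data.frame(x = 'd', n = NA_real_)
dAppTreated <- prepare(treatments, dApp)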

These are all awful things that often lurk in real-world data. Automating these steps makes them easy enough that you actually perform them, and leaves the analyst time to look for additional data issues. For example, this allowed us to essentially automate a number of the steps taught in chapters 4 and 6 of Practical Data Science with R (Zumel, Mount; Manning 2014) into a very short worksheet (though we think, for understanding, it is essential to work all the steps by hand as we did in the book).

The idea is: data.frames prepared with the vtreat library are somewhat safe to train on, as some precaution has been taken against all of the above issues. Also of interest are the vtreat variable significances (which help with initial variable pruning, a necessity when there are a large number of columns) and vtreat::prepare(scale=TRUE), which re-encodes all variables into effect units, making them suitable for y-aware dimension reduction (variable clustering, or principal component analysis) and for geometry-sensitive machine learning techniques (k-means, knn, linear SVM, and more). You may want to do more than the vtreat library does (such as Bayesian imputation, variable clustering, and more), but you certainly do not want to do less.
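
As an illustration, here is a brief sketch of the scale=TRUE idea, continuing the toy example above (the y-aware PCA step is our addition; see the package vignette for the authoritative treatment):

# re-encode variables into effect units, then run a y-aware PCA
dTrainScaled <- prepare(treatments, dTrain, scale = TRUE)
vars <- setdiff(colnames(dTrainScaled), 'y')

# the variables are already y-scaled, so skip further standardization
pca <- prcomp(dTrainScaled[, vars, drop = FALSE],
              center = FALSE, scale. = FALSE)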

The original announcement is getting a bit out of date, so we hope to write a new article on vtreat soon. Until then we suggest running vignette('vtreat') in R to produce a rendered version of the package vignette. You can also check out the package manual, now available online.

There have been a number of recent substantial improvements to the library, including:

  • Out of sample scoring.
  • Ability to use a parallel cluster (sketched after this list).
  • More general calculation of effect sizes and significances.
  • Addition of collaring or Winsorising to defend against outliers.
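
For example, a hedged sketch of the parallel option, reusing the toy dTrain from above (we believe the argument is named parallelCluster; confirm against the documentation of your installed version):

library(parallel)

cl <- makeCluster(2)
# design treatments using the cluster to spread the per-variable work
treatments <- designTreatmentsN(dTrain, c('x', 'n'), 'y',
                                parallelCluster = cl)
stopCluster(cl)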

Some of our related articles (which should make clear some of our motivations and design decisions):

A short example of current best practice using vtreat (variable coding, train, test split) is here.
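
In outline the practice looks like the following sketch (toy data and names are ours, not the linked example): design the treatments on a calibration split, then fit and evaluate the model on separately prepared train and test splits.

library(vtreat)

set.seed(2325)
d <- data.frame(
   x = sample(letters[1:5], 300, replace = TRUE),
   n = rnorm(300)
)
d$y <- d$n + ifelse(d$x %in% c('a', 'b'), 1, 0) + rnorm(300)

# three-way split: impact codes are estimated on the 'cal' rows only,
# so they are not fit on the same rows used to train or score the model
group <- sample(c('cal', 'train', 'test'), nrow(d), replace = TRUE)
treatments <- designTreatmentsN(d[group == 'cal', ], c('x', 'n'), 'y')
dTrainTreated <- prepare(treatments, d[group == 'train', ])
dTestTreated <- prepare(treatments, d[group == 'test', ])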

7 thoughts on “vtreat up on CRAN!”

  1. John,

    Say you have a high cardinality variable like SIC or NAICS code that has some hierarchy to it. Perhaps for a 4 digit SIC code you do not want to default to the global mean but rather the 3 digit SIC code family of which the 4 digit code is a member. Can vtreat handle this situation, where you want more versatility than simply using the global mean?

    Perhaps putting vtreat in a loop and executing each 3 digit family of rows separately or something like that…

    1. Great question and great idea.

      Basically a “domain aware” method is usually going to do better than a completely general approach like vtreat.

      The extreme case of having useful domain knowledge is when you can use the high-cardinality variable as a join key against some domain-specific lookup table that brings in one or more columns that greatly improve your model.

      Another case is like your example: you don't have an external data source you can join to, but you know how codes are grouped. For example, United States zip codes are designed so that the 3-digit prefixes are somewhat reasonable regions. So presenting the column twice to vtreat (once at 5 digits and once with only the initial 3 digits) under different names can be very effective. The 3-digit stats may have much less variance (so are better estimated), and the 5-digit variants may supply additional push when needed. A further refinement would be to fit the 5-digit vtreat codes on the residuals of the 3-digit codes (something vtreat doesn't currently do). One could try to simulate this by first fitting a 3-digit model and then fitting a 5-digit model on a new y (either the residuals of the 3-digit model for regression, or regressing on deviance as a pseudo-gradient for classification).

      You wouldn’t use 4-digit prefixes of zip-codes, as they are not designed to be geographically sensible.

      Further ideas would be to perform some sort of hierarchical regression on the different granularities of the variable.
