
New screencast: using R and RStudio to install and experiment with Apache Spark

I have a new short screencast up: using R and RStudio to install and experiment with Apache Spark.
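For those who want to try things before (or while) watching, here is a minimal sketch of the sort of setup the screencast covers, assuming the sparklyr package (the toolchain named in the workshop below) and a local Java runtime:

    # install sparklyr, then a local Apache Spark to experiment against
    install.packages("sparklyr")
    library(sparklyr)
    spark_install()                        # downloads and installs a local Spark

    sc <- spark_connect(master = "local")  # connect to the local Spark from R
    mtcars_tbl <- copy_to(sc, mtcars)      # copy a data frame into Spark
    head(mtcars_tbl)                       # dplyr-style operations run in Spark

    spark_disconnect(sc)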

More material from my recent Strata workshop Modeling big data with R, sparklyr, and Apache Spark can be found here.


Step-Debugging magrittr/dplyr Pipelines in R with wrapr and replyr

In this screencast we demonstrate how to easily and effectively step-debug magrittr/dplyr pipelines in R using wrapr and replyr.



Continue reading Step-Debugging magrittr/dplyr Pipelines in R with wrapr and replyr


vtreat: prepare data

This article is on preparing data for modeling in R using vtreat.
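As a quick taste, here is a minimal sketch of the basic vtreat workflow on made-up data (designTreatmentsN() and prepare() are the package's entry points for numeric outcomes):

    library(vtreat)

    # training data: categorical x, numeric outcome y
    dTrain <- data.frame(x = c("a", "a", "b"), y = c(1, 2, 3))
    # application data contains a level ("c") never seen in training
    dTest <- data.frame(x = c("b", "c"))

    # design a treatment plan from the training data
    treatments <- designTreatmentsN(dTrain, varlist = "x", outcomename = "y")

    # prepare() returns an all-numeric, NA-free frame safe to model on
    dTestTreated <- prepare(treatments, dTest, pruneSig = NULL)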

Continue reading vtreat: prepare data


Iteration and closures in R

I recently read an interesting thread on unexpected behavior in R when creating a list of functions in a loop or iteration. The issue is solved, but I am going to take the liberty of re-stating and slowing down the discussion of the problem (and its fix) for clarity.

The issue is: are references or values captured during iteration?

Many users expect values to be captured. Most programming language implementations capture variables or references (leading to strange aliasing issues). The behavior is confusing (especially in R, which pushes so far in the direction of value-oriented semantics) and is best demonstrated with concrete examples.
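Here is a concrete sketch of the surprise (my own example; the thread's may differ):

    # build a list of functions in a loop
    fns <- list()
    for(i in 1:3) {
      fns[[i]] <- function() i   # captures the environment holding i, not i's value
    }
    fns[[1]]()   # returns 3, not 1: all three functions see the final i

    # one fix: give each function its own environment, and force the value
    fns <- lapply(1:3, function(i) { force(i); function() i })
    fns[[1]]()   # returns 1, as expected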



Please read on for some of the history and future of this issue. Continue reading Iteration and closures in R


The Zero Bug

I am going to write about an insidious statistical, data analysis, and presentation fallacy I call “the zero bug” and the habits you need to cultivate to avoid it.


The zero bug

Here is the zero bug in a nutshell: common data aggregation tools often cannot “count to zero” from examples, and this causes problems (a small illustration follows below). Please read on for what this means, the consequences, and how to avoid the problem. Continue reading The Zero Bug
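To make the nutshell concrete, here is a small sketch of my own (assuming the dplyr package; it illustrates the idea, and is not necessarily the article's own example):

    library(dplyr)

    d <- data.frame(x = c("a", "a", "b"), stringsAsFactors = FALSE)

    # aggregation reports only the groups it actually sees
    d %>% group_by(x) %>% summarize(n = n())
    # rows: a (n = 2), b (n = 1) -- no zero row for a level such as "c"

    # one fix: declare the full universe of keys up front
    table(factor(d$x, levels = c("a", "b", "c")))
    # a b c
    # 2 1 0   -- now the tool can "count to zero"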


Evolving R Tools and Practices

One of the distinctive features of the R platform is how explicit and user controllable everything is. This allows the style of use of R to evolve fairly rapidly. I will discuss this and end with some new notations, methods, and tools I am nominating for inclusion into your view of the evolving “current best practice style” of working with R. Continue reading Evolving R Tools and Practices


Using the Bizarro Pipe to Debug magrittr Pipelines in R

I have just finished and released a free new R video lecture demonstrating how to use the “Bizarro pipe” to debug magrittr pipelines. I think R dplyr users will really enjoy it.
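The trick itself fits in a few lines. The “Bizarro pipe” replaces %>% with ->.; (a right-assignment into a variable named dot, followed by a statement separator), so each pipeline stage becomes an ordinary statement you can execute and inspect one line at a time. A sketch, assuming the dplyr package:

    library(dplyr)

    # a typical magrittr pipeline
    mtcars %>% filter(cyl == 4) %>% summarize(mean_mpg = mean(mpg))

    # the same pipeline written with the Bizarro pipe: each line is a
    # complete statement, so you can step through them and examine `.`
    mtcars              ->.;
    filter(., cyl == 4) ->.;
    summarize(., mean_mpg = mean(mpg))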



Please read on for the link to the video lecture. Continue reading Using the Bizarro Pipe to Debug magrittr Pipelines in R


Upgrading to macOS Sierra (nee OSX) for R users

A good fraction of R users use Apple computers. Apple machines historically have sat at a sweet spot of convenience, power, and utility:

  • Convenience: Apple machines are available at retail stores, come with purchasable support, and can run a lot of common commercial software.
  • Power: R packages such as parallel and Rcpp work better on top of a Posix environment.
  • Utility: OSX was good at interoperating with the Linux your big data systems are likely running on, and some R packages expect a native operating system supporting a Posix environment (which historically has not been a Microsoft Windows strength, despite claims to the contrary).

Frankly the trade-off is changing:

  • Apple is neglecting its computer hardware and operating system in favor of phones and watches. And (for claimed license prejudice reasons) the lauded OSX/macOS “Unix userland” is woefully out of date (try “bash --version” in an Apple Terminal; it is about 10 years out of date!).
  • Microsoft Windows Unix support is improving (Windows 10 bash is interesting, though R really can’t take advantage of that yet).
  • Linux hardware support is improving (though not fully there for laptops, modern trackpads, touch screens, or even some wireless networking).

Our current R platform remains Apple macOS. But our next purchase is likely a Linux laptop with the addition of a legal copy of Windows inside a virtual machine (for commercial software not available on Linux). It has been a while since Apple last “sparked joy” around here, and if Linux works out we may have a few Apple machines sitting on the curb with paper bags over their heads (Marie Kondo’s advice for humanely disposing of excess inanimate objects that “see”, such as unloved stuffed animals with eyes and laptops with cameras).


That being said: how does one update an existing Apple machine to macOS Sierra and then restore enough functionality to resume working? Please read on for my notes on the process. Continue reading Upgrading to macOS Sierra (nee OSX) for R users


Why do Decision Trees Work?

In this article we will discuss the machine learning method called “decision trees”, moving quickly over the usual “how decision trees work” and spending time on “why decision trees work.” We will write from a computational learning theory perspective, and hope this helps make both decision trees and computational learning theory more comprehensible. The goal of this article is to set up terminology so we can state in one or two sentences why decision trees tend to work well in practice.
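For readers who want a concrete object to keep in mind while reading, here is a minimal sketch of fitting a decision tree in R (using the rpart package; the article is about the theory, not this particular code):

    library(rpart)

    # fit a small classification tree on a built-in data set
    model <- rpart(Species ~ ., data = iris)
    print(model)   # the model is a recursive partition: a few axis-aligned cuts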

Continue reading Why do Decision Trees Work?


A Theory of Nested Cross Simulation

[Reader’s Note. Some of our articles are applied and some of our articles are more theoretical. The following article is more theoretical, and requires fairly formal notation to even work through. However, it should be of interest as it touches on some of the fine points of cross-validation that are quite hard to perceive or discuss without the notational framework. We thought about including some “simplifying explanatory diagrams”, but so many entities are being introduced and manipulated by the processes we are describing that we found equation notation to be, in fact, cleaner than the diagrams we attempted and rejected.]

Please consider either of the following common predictive modeling tasks:

  • Picking hyper-parameters, fitting a model, and then evaluating the model.
  • Variable preparation/pruning, fitting a model, and then evaluating the model.

In each case you are building a pipeline where “y-aware” (or outcome aware) choices and transformations made at each stage affect later stages. This can introduce undesirable nested model bias and over-fitting.

Our current standard advice to avoid nested model bias is either:

  • Split your data into 3 or more disjoint pieces, such as separate sets for variable preparation/pruning, model fitting, and model evaluation.
  • Reserve a test set for evaluation and use “simulated out of sample data” or “cross-frame”/“cross simulation” techniques to simulate dividing data among the first two model construction stages.

The first practice is simple and computationally efficient, but statistically inefficient. This may not matter if you have a lot of data, as in “big data”. The second procedure is more statistically efficient, but is also more complicated and has some computational cost. For convenience the cross simulation method is supplied as a ready-to-go procedure in our R data cleaning and preparation package vtreat.
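Here is a sketch of both practices on made-up data (mkCrossFrameNExperiment() is vtreat's cross-frame entry point for numeric outcomes):

    library(vtreat)

    set.seed(2017)
    d <- data.frame(x = sample(letters, 200, replace = TRUE))
    d$y <- rnorm(nrow(d))

    # practice 1: three disjoint pieces (simple, but statistically inefficient)
    grp <- sample(c("prep", "train", "test"), nrow(d), replace = TRUE)

    # practice 2: cross simulation via vtreat; the returned crossFrame is
    # built so each treated value comes from treatments designed on other
    # rows, reducing nested model bias
    ce <- mkCrossFrameNExperiment(d, varlist = "x", outcomename = "y")
    trainFrame <- ce$crossFrame   # fit the downstream model on this
    treatments <- ce$treatments   # fixed plan for preparing future data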

What would it look like if we insisted on using cross simulation or simulated out of sample techniques for all three (or more) stages? Please read on to find out.

[Image: “Clean all the things!”, Hyperbole and a Half, copyright Allie Brosh (use allowed in some situations with attribution)]