I recently read an interesting thread on unexpected behavior in R when creating a list of functions in a loop or iteration. The issue is solved, but I am going to take the liberty of restating and slowing down the discussion of the problem (and its fix) for clarity.
The issue is: are references or values captured during iteration?
Many users expect values to be captured. Most programming language implementations instead capture variables or references (leading to strange aliasing issues). It is confusing (especially in R, which pushes so far in the direction of value-oriented semantics) and best demonstrated with concrete examples.
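To make this concrete, here is a minimal R sketch of the behavior (the variable names are illustrative):

```r
# Closures created in a `for` loop all share one environment,
# so every function sees the loop variable's *final* value:
fns <- list()
for (i in 1:3) {
  fns[[i]] <- function() i
}
fns[[1]]()  # returns 3, not 1

# Building the list with lapply() gives each closure its own
# binding of i, so the expected values are captured:
fns2 <- lapply(1:3, function(i) { force(i); function() i })
fns2[[1]]()  # returns 1
```

The `force(i)` call makes the capture explicit; with `lapply()` each anonymous function call has its own copy of `i`, which is what most users expect a loop to do.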
I am going to write about an insidious statistical, data analysis, and presentation fallacy I call “the zero bug” and the habits you need to cultivate to avoid it.
The zero bug
Here is the zero bug in a nutshell: common data aggregation tools often cannot “count to zero” from examples, and this causes problems. Please read on for what this means, the consequences, and how to avoid the problem.
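A quick R illustration of the failure mode (a sketch, ahead of the full discussion): a tabulation built only from observed rows has no way to report a count of zero for a category that never occurs.

```r
d <- c("a", "a", "b")   # the category "c" is legal, but unobserved

table(d)                 # counts a=2, b=1; "c" silently vanishes
length(table(d))         # 2, not 3

# Declaring the full set of levels lets the zero count show up:
table(factor(d, levels = c("a", "b", "c")))   # a=2, b=1, c=0
```

The fix is the same in spirit everywhere: carry the set of expected categories separately from the data, so absence can be reported as an explicit zero rather than a missing row.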
Recently Dirk Eddelbuettel pointed out that our R function debugging wrappers would be more convenient if they were available in a low-dependency micro package dedicated to little else. Dirk is a very smart person, and like most R users we are deeply in his debt; so we (Nina Zumel and myself) listened and immediately moved the wrappers into a new micro-package: wrapr.
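For readers unfamiliar with the idea, here is a toy sketch of what such a function debugging wrapper does (illustrative only: the name `DebugWrap` is made up and this is not wrapr's actual interface). The wrapper saves the arguments of a failing call, so a crash deep inside a long pipeline can be reproduced and inspected later.

```r
# Toy debugging wrapper: on error, stash the failing function,
# its arguments, and the error in a global variable, then re-signal.
# (A sketch of the general technique, not wrapr's API.)
DebugWrap <- function(f, saveTo) {
  force(f); force(saveTo)
  function(...) {
    args <- list(...)
    tryCatch(
      do.call(f, args),
      error = function(e) {
        assign(saveTo, list(fn = f, args = args, error = e),
               envir = globalenv())
        stop(e)
      }
    )
  }
}
```

After a failure you can re-run `do.call(lastError$fn, lastError$args)` interactively to step into the problem, which is the whole value of this pattern.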
One of the distinctive features of the R platform is how explicit and user controllable everything is. This allows the style of use of R to evolve fairly rapidly. I will discuss this and end with some new notations, methods, and tools I am nominating for inclusion into your view of the evolving “current best practice style” of working with R.
Are you attending or considering attending Strata / Hadoop World 2017 San Jose? Are you interested in learning to use R to work with Spark and h2o? Then please consider signing up for my 3 1/2 hour workshop soon. We are about half full now, but I really want to fill the room while making sure that the people who most want to attend get in.
Win-Vector LLC is partnering with RStudio to produce and present some awesome material that will allow you to perform data science at scale using R to control Spark and even h2o.
The links to the event are below. To make sure you get to participate please sign up soon!
Modeling big data with R, sparklyr, and Apache Spark (by RStudio and Win-Vector LLC)
03/14/2017 1:30pm – 5:00pm PDT (210 minutes)
Strata & Hadoop World West, San Jose Convention Center, CA; Room: LL21 C/D
Win-Vector LLC’s John Mount will teach how to use R to control big data analytics and modeling. This is in-depth training to prepare you to use R, Spark, sparklyr, h2o, and rsparkling.
This will be hands-on exercises with R, sparklyr, and h2o using RStudio Server Pro (generously provided by RStudio!).
Sponsored by RStudio and
Office Hour with John Mount (Win-Vector LLC)
03/15/2017 2:40pm – 3:20pm PDT (40 minutes)
Strata & Hadoop World West, San Jose Convention Center, CA; Room: Table B
Come and ask me questions about data science, machine learning, R, statistics, or whatever you like.
A good fraction of R users use Apple computers. Apple machines historically have sat at a sweet spot of convenience, power, and utility:
Convenience: Apple machines are available at retail stores, come with purchasable support, and can run a lot of common commercial software.
Power: R packages such as parallel and Rcpp work better on top of a POSIX environment.
Utility: OSX was good at interoperating with the Linux your big data systems are likely running on, and some R packages expect a native operating system supporting a POSIX environment (which historically has not been a Microsoft Windows strength, despite claims to the contrary).
Frankly the trade-off is changing:
Apple is neglecting its computer hardware and operating system in favor of phones and watches. And (for claimed license prejudice reasons) the lauded OSX/macOS “Unix userland” is woefully out of date (try “bash --version” in an Apple Terminal; it is about 10 years out of date!).
Microsoft Windows Unix support is improving (Windows 10 bash is interesting, though R really can’t take advantage of that yet).
Linux hardware support is improving (though not fully there for laptops, modern trackpads, touch screens, or even some wireless networking).
Our current R platform remains Apple macOS. But our next purchase is likely a Linux laptop with the addition of a legal copy of Windows inside a virtual machine (for commercial software not available on Linux). It has been a while since Apple last “sparked joy” around here, and if Linux works out we may have a few Apple machines sitting on the curb with paper bags over their heads (Marie Kondo’s advice for humanely disposing of excess inanimate objects that “see”, such as unloved stuffed animals with eyes and laptops with cameras).
In this article we will discuss the machine learning method called “decision trees”, moving quickly over the usual “how decision trees work” and spending time on “why decision trees work.” We will write from a computational learning theory perspective, and hope this helps make both decision trees and computational learning theory more comprehensible. The goal of this article is to set up terminology so we can state in one or two sentences why decision trees tend to work well in practice.