A good fraction of R users use Apple computers. Apple machines historically have sat at a sweet spot of convenience, power, and utility:
- Convenience: Apple machines are available at retail stores, come with purchasable support, and can run a lot of common commercial software.
- Power: R packages such as Rcpp work better on top of a Posix environment.
- Utility: OSX was good at interoperating with the Linux your big data systems are likely running on, and some R packages expect a native operating system supporting a Posix environment (which historically has not been a Microsoft Windows strength, despite claims to the contrary).
Frankly the trade-off is changing:
- Apple is neglecting its computer hardware and operating system in favor of phones and watches. And (for claimed license prejudice reasons) the lauded OSX/macOS “Unix userland” is woefully out of date (try “bash --version” in an Apple Terminal; it is about 10 years out of date!).
- Microsoft Windows Unix support is improving (Windows 10 bash is interesting, though R really can’t take advantage of that yet).
- Linux hardware support is improving (though not fully there for laptops, modern trackpads, touch screens, or even some wireless networking).
Our current R platform remains Apple macOS. But our next purchase is likely a Linux laptop with the addition of a legal copy of Windows inside a virtual machine (for commercial software not available on Linux). It has been a while since Apple last “sparked joy” around here, and if Linux works out we may have a few Apple machines sitting on the curb with paper bags over their heads (Marie Kondo’s advice for humanely disposing of excess inanimate objects that “see”, such as unloved stuffed animals with eyes and laptops with cameras).
That being said: how does one update an existing Apple machine to macOS Sierra and then restore enough functionality to resume working? Please read on for my notes on the process. Continue reading Upgrading to macOS Sierra (nee OSX) for R users
Data preparation and cleaning are some of the most important steps of predictive analytics and data science tasks. They are laborious, are where most of the errors are made, are your last line of defense against wild data, and hold the biggest opportunities for improving outcomes. Yet no matter how much time you spend on them, they remain a neglected topic. Data preparation isn’t as self-contained or genteel as tweaking machine learning models or tuning hyperparameters; and that is one of the reasons data preparation represents such an important practical opportunity for improvement.
Photo: NY – http://nyphotographic.com/, License: Creative Commons 3 – CC BY-SA 3.0
Our group is distributing a detailed writeup of the theory and operation behind our R realization of a set of sound data preparation and cleaning procedures called vtreat here: arXiv:1611.09477 [stat.AP]. This is where you can find out what vtreat does, decide if it is appropriate for your problem, or even find a specification allowing the use of the techniques in non-R environments (such as Spark, and many others).
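To give a feel for the workflow, here is a minimal sketch of the standard vtreat design/prepare pattern for a numeric outcome (the tiny synthetic frame and variable names are invented for this illustration; it assumes the vtreat package is installed):

```r
library("vtreat")

# small synthetic frame: a categorical variable with a missing value,
# plus a numeric outcome to predict
d <- data.frame(
  x = c("a", "a", "b", "b", NA),
  y = c(1, 1, 2, 2, 3)
)

# design a treatment plan for a numeric outcome ("N" for numeric)
treatments <- designTreatmentsN(d, varlist = "x", outcomename = "y")

# prepare() applies the plan, yielding a purely numeric frame
# (no NAs, no raw string levels) safe to hand to a modeling procedure
dTreated <- prepare(treatments, d)
print(colnames(dTreated))
```

The same treatment plan can later be applied to new application data, which is part of what makes the procedure safe to use in production.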
We have submitted this article for formal publication, so it is our intent you can cite this article (as it stands) in scientific work as a pre-print, and later cite it from a formally refereed source.
Or alternately, below is the tl;dr (“too long; didn’t read”) form. Continue reading Data Preparation, Long Form and tl;dr Form
Consider the problem of “parametric programming” in R: that is, writing correct code before knowing some details, such as the names of the columns your procedure will have to be applied to in the future. Our latest version of replyr::let makes such programming easier.
Archie’s Mechanics #2 (1954) copyright Archie Publications
(edit: great news! CRAN just accepted our replyr 0.2.0 fix release!)
Please read on for examples comparing standard notations and replyr::let. Continue reading Comparative examples using replyr::let
Consider the following common problem: computing per-group ranks for a data set (say the infamous iris example data set). Suppose we want the rank of each Sepal.Length on a per-Species basis. Frankly this is an “ugh” problem for many analysts: it involves grouping, ordering, and window functions all at the same time. It is also not likely ever the analyst’s end goal, but a sub-step needed to transform data on the way to the prediction, modeling, analysis, or presentation they actually wish to get back to.
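For concreteness, here is one base-R sketch of the calculation (an illustration of the task itself, not the “Grouped-Ordered-Apply” pattern the article develops):

```r
# base-R sketch: per-Species rank of Sepal.Length using ave()
data(iris)

ranked <- iris
ranked$rankCol <- ave(
  iris$Sepal.Length,
  iris$Species,
  FUN = function(v) rank(v, ties.method = "first")
)

# each of the three species is ranked 1..50 independently
head(ranked[, c("Species", "Sepal.Length", "rankCol")])
```

Even this compact version mixes grouping (`Species`), ordering (`rank`), and a window-style calculation in one expression, which is exactly what makes the problem awkward.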
Iris, by Diliff – Own work, CC BY-SA 3.0, Link
In our previous article in this series we discussed the general ideas of “row-ID independent data manipulation” and “Split-Apply-Combine”. Here, continuing with our example, we will specialize to a data analysis pattern I call: “Grouped-Ordered-Apply”. Continue reading Organize your data manipulation in terms of “grouped ordered apply”
R picked up a nifty way to organize sequential calculations in May of 2014: magrittr by Stefan Milton Bache and Hadley Wickham. magrittr is now quite popular and has also become the backbone of current dplyr practice.
If you read my last article on assignment carefully you may have noticed I wrote some code that was equivalent to a magrittr pipeline without using the “%>%” operator. This note will expand (tongue in cheek) that notation into an alternative to magrittr that you should never use.
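As a hedged taste of the flavor of the trick (a base-R illustration of pipe-free sequencing, not necessarily the exact notation the article develops): sequential steps can be chained by right-assigning each intermediate result into a throwaway variable.

```r
# base-R sequencing without %>%: right-assign each step into "."
4 -> .; sqrt(.) -> .; exp(.) -> .

# "." now holds the same value as the nested-call form
print(. == exp(sqrt(4)))  # TRUE
```

Reading left to right, this behaves much like `4 %>% sqrt %>% exp`, which is the joke: the pipeline semantics were hiding in plain sight in base R’s assignment operators.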
Superman #169 (May 1964, copyright DC)
What follows is a joke (though everything does work as I state it does, nothing is faked). Continue reading magrittr’s Doppelgänger
R has a number of assignment operators (at least “<-”, “=”, and “->”; plus “<<-” and “->>”, which have different semantics).
R-style guides routinely insist on “<-” as being the only preferred form. In this note we are going to try to make the case for “->” when using magrittr pipelines. [edit: After reading this article, please be sure to read Konrad Rudolph’s masterful argument for using only “=” for assignment. He also demonstrates a function to land values from pipelines (though that is not his preference). All joking aside, the value-landing part of the proposal does not violate current style guidelines.]
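A minimal base-R illustration of the value-landing idea (the variable name is invented for the example; the same pattern lets a multi-line pipeline’s result land at its end):

```r
# right assignment reads in the same direction as the calculation:
# the result "lands" at the end of the line
sqrt(16) -> landedValue

print(landedValue)  # 4
```

The appeal is purely ergonomic: the eye follows the data flow left to right and the name receiving the result appears where the flow terminates.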
Don Quijote and Sancho Panza, by Honoré Daumier
Continue reading The Case For Using -> In R
Statisticians and data scientists want a neat world where data is arranged in a table such that every row is an observation or instance, and every column is a variable or measurement. This is how data.frames describe themselves (try “str(data.frame(x=1:2))” in an R console to see this) and it is part of the tidy data manifesto. Getting to this state of “ready to model” format (often called a denormalized form by relational algebra types) often requires quite a bit of data manipulation.
SQL (structured query language) and dplyr can make the data arrangement process less burdensome, but using them effectively requires “index-free thinking,” where the data are not thought of in terms of row indices. We will explain and motivate this idea below. Continue reading The case for index-free data manipulation
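A small base-R illustration of the contrast described above (a made-up example): selecting rows by a property of the data, rather than by row position, keeps code correct even when rows are reordered.

```r
d <- data.frame(
  id = c(3, 1, 2),
  val = c("c", "a", "b"),
  stringsAsFactors = FALSE
)

# index-dependent: fragile, silently wrong if the row order changes
byIndex <- d[1, "val"]

# index-free: select by a value carried in the data itself
byValue <- d$val[d$id == 1]

print(byIndex)  # "c"
print(byValue)  # "a"
```

SQL has no notion of row position at all, which is why index-free thinking transfers so directly to databases and to systems like Spark.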
When writing reusable code or packages you often do not know the names of the columns or variables you need to work over. This is what I call “parametric treatment of variables.” This can be a problem when using R libraries that assume you know the variable names. The R data manipulation library dplyr currently supports parametric treatment of variables through “underbar forms” (methods of the form dplyr::*_), but their use can get tricky.
Rube Goldberg machine 1931 (public domain).
Better support for parametric treatment of variable names would be a boon to dplyr users. To this end the replyr package now has a method designed to re-map parametric variable names to known concrete variable names. This allows concrete dplyr code to be used as if it were parametric. Continue reading Parametric variable names and dplyr
Practical Data Science with R, Zumel, Mount; Manning 2014 is a book Nina Zumel and I are very proud of.
I have written before how I think this book stands out and why you should consider studying from it.
Please read on for some additional comments on the intent of different sections of the book. Continue reading Teaching Practical Data Science with R
Nina Zumel and I have been doing a lot of writing on the (important) details of re-encoding high cardinality categorical variables for predictive modeling. These are variables that essentially take on string-values (also called levels or factors) and vary through many such levels. Typical examples include zip-codes, vendor IDs, and product codes.
In a sort of “burying the lede” way I feel we may not have sufficiently emphasized that you really do need to perform such re-encodings. Below is a graph (generated in R, code available here) of the kind of disaster you see if you throw such variables into a model without any pre-processing or post-controls.
In the above graph each dot represents the performance of a model fit on synthetic data. The x-axis is model performance (in this case pseudo R-squared, 1 being perfect and below zero worse than using an average). The training pane represents performance on the training data (perfect, but over-fit) and the test pane represents performance on held-out test data (an attempt to simulate future application data). Notice the test performance implies these models are dangerously worse than useless.
Please read on for how to fix this. Continue reading You should re-encode high cardinality categorical variables
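One standard family of repairs is “impact coding”: replace each level with a smoothed estimate of the outcome mean for that level, estimated on data disjoint from the model-training data. Below is a hedged base-R sketch of the general idea (function names and the synthetic data are invented for this illustration; it is not the exact procedure from the article):

```r
# impact-code a high-cardinality categorical variable:
# replace each level with (smoothed level mean) - (grand mean)
impactCode <- function(x, y, smoothing = 10) {
  grandMean <- mean(y)
  levelSums <- tapply(y, x, sum)
  levelCounts <- tapply(y, x, length)
  # shrink rare levels toward the grand mean to limit over-fit
  levelEst <- (levelSums + smoothing * grandMean) /
    (levelCounts + smoothing)
  as.numeric(levelEst[as.character(x)] - grandMean)
}

# synthetic example: 26 levels, a few of which carry signal
set.seed(2017)
x <- sample(letters, 100, replace = TRUE)
y <- ifelse(x %in% c("a", "b", "c"), 1, 0) + rnorm(100, sd = 0.1)
coded <- impactCode(x, y)
```

The crucial discipline (emphasized throughout our writing on this) is that the level estimates must come from data not used to fit the downstream model, or you re-create exactly the over-fit shown in the graph above.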