We have just released two new free video lectures on vectors from a programmer’s point of view. I am experimenting with which ideas programmers find interesting about vectors, which concepts they consider safe starting points, and how best to condense and present the material.
Please check the lectures out.
What R users now call piping, popularized by Stefan Milton Bache and Hadley Wickham, is inline function application (this is notationally similar to, but distinct from, the powerful interprocess communication and concurrency tool introduced to Unix by Douglas McIlroy in 1973). In object-oriented languages this sort of notation for function application has been called “method chaining” since the days of Smalltalk (~1972). Let’s take a look at method chaining in Python, in terms of pipe notation.
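A minimal sketch (class and method names are my own, not from the post) of how method chaining in Python reads like a data pipeline:

```python
# A tiny fluent wrapper: each method returns a new wrapper,
# so calls chain left-to-right, like data flowing through a pipe.
class Chain:
    def __init__(self, values):
        self.values = list(values)

    def keep(self, pred):
        return Chain(v for v in self.values if pred(v))

    def apply(self, f):
        return Chain(f(v) for v in self.values)

    def to_list(self):
        return self.values

# Reads left-to-right as: take 1..5, keep the evens, square them.
result = Chain(range(1, 6)).keep(lambda v: v % 2 == 0).apply(lambda v: v * v).to_list()
print(result)  # [4, 16]
```

Each step feeds its result to the next, which is exactly the reading order a pipe operator gives you.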
Continue reading Piping is Method Chaining
Starting With Data Science
A rigorous hands-on introduction to data science for software engineers.
Win Vector LLC is now offering a 4-day on-site intensive data science course. The course targets software engineers familiar with Python and introduces them to the basics of current data science practice. It is designed as an interactive, in-person (not remote or video) course.
Continue reading Starting With Data Science: A Rigorous Hands-On Introduction to Data Science for Software Engineers
R Tip: use inline operators for legibility.
A Python feature I miss when working in R is the convenience of Python’s + operator. In Python, + does the right thing for some built-in data types:

- It concatenates lists: [1, 2] + [3] is [1, 2, 3].
- It concatenates strings: 'a' + 'b' is 'ab'.
- And, of course, it adds numbers: 1 + 2 is 3.

The inline notation is very convenient and legible. In this note we will show how to use a related notation in R.
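The behaviors above can be checked directly in Python:

```python
# Python's + dispatches on the operands' type:
lists = [1, 2] + [3]    # list concatenation
text = 'a' + 'b'        # string concatenation
total = 1 + 2           # numeric addition
print(lists, text, total)  # [1, 2, 3] ab 3
```

The same symbol does three different, sensible things depending on type, which is what makes the notation so compact.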
Continue reading R Tip: Use Inline Operators For Legibility
While developing the R package, I took a little extra time to port the core algorithm from C++ to both R and Python. This means I can time the exact same algorithm implemented nearly identically in each of these three languages, so I can extract some comparative “apples to apples” timings. Please read on for a summary of the results.
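A minimal sketch (my own example function, not the package’s algorithm) of how one might collect the Python side of such comparative timings with the standard-library `timeit` module:

```python
import timeit

def running_max(xs):
    # Simple linear-scan example algorithm; a nearly identical loop
    # could be written in R and C++ for apples-to-apples comparison.
    best, out = float('-inf'), []
    for x in xs:
        if x > best:
            best = x
        out.append(best)
    return out

data = list(range(10000))
# Repeat the measurement and keep the minimum, the usual way to
# reduce noise from other processes sharing the machine.
t = min(timeit.repeat(lambda: running_max(data), number=10, repeat=5))
print(f"best of 5: {t:.4f} s for 10 runs")
```

Using the minimum over repeats, rather than the mean, is the conventional choice for this kind of micro-benchmark.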
Continue reading Timing the Same Algorithm in R, Python, and C++
According to a KDD poll, fewer respondents (by rate) used only R in 2017 than in 2016. At the same time, more respondents (by rate) used only Python in 2017 than in 2016.
Let’s take this as an excuse to take a quick look at what happens when we try a task in both systems.
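To make the comparison concrete, here is a hypothetical example (my own toy task, not the one from the post) of the kind of small job one might run in both systems; this is the plain-Python side, where in R it might be a one-line `aggregate()` call:

```python
from collections import defaultdict

# Example task: per-group means over (group, value) rows.
rows = [('a', 1.0), ('a', 3.0), ('b', 2.0)]
sums, counts = defaultdict(float), defaultdict(int)
for key, value in rows:
    sums[key] += value
    counts[key] += 1
means = {k: sums[k] / counts[k] for k in sums}
print(means)  # {'a': 2.0, 'b': 2.0}
```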
Continue reading Running the Same Task in Python and R
Trick question: is a 10,000 cell numeric data.frame big or small?

In the era of “big data,” 10,000 cells is minuscule. Such data could fit on fewer than 1,000 punched cards (or less than half a box).
The joking answer is: it is small when they are selling you the system, but can be considered unfairly large later.
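The punched-card arithmetic is easy to check (assuming the classic 80-column card, a box of 2,000 cards, and an assumed 8-column field per numeric value):

```python
cells = 10_000
columns_per_card = 80        # classic IBM punched card
columns_per_cell = 8         # assumed fixed field width per numeric value
cells_per_card = columns_per_card // columns_per_cell  # 10 values per card
cards = cells / cells_per_card
boxes = cards / 2000         # a standard box held 2,000 cards
print(cards, boxes)  # 1000.0 0.5
```

At roughly ten values per card, the whole data.frame is about a thousand cards, i.e. about half a box.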
Continue reading Is 10,000 Cells Big?
I recently got back from Strata West 2017 (where I ran a very well received workshop on Spark). One thing that really stood out for me at the exhibition hall was datashader from Continuum Analytics. I had the privilege of having Peter Wang himself demonstrate datashader for me and answer a few of my questions.
I am so excited about datashader’s capabilities that I literally will not wait for the functionality to be exposed in rbokeh. I am going to leave my usual rmarkdown world and dust off a Jupyter Notebook just to use datashader plotting. This is worth trying, even for diehard R users.
Continue reading Datashader is a big deal
Beginning analysts and data scientists often ask: “how does one remember and master the seemingly endless number of classifier metrics?”
My concrete advice is:
- Read Nina Zumel’s excellent series on scoring classifiers.
- Keep notes.
- Settle on one or two metrics as you move from project to project. We prefer “AUC” early in a project (when you want a flexible score) and “deviance” late in a project (when you want a strict score).
- When working on practical problems, work with your business partners to find out which of precision/recall or sensitivity/specificity best matches their business needs. If you have time, show them and explain the ROC plot, and invite them to price and pick points along the ROC curve that best fit their business goals. Finance partners will rapidly recognize the ROC curve as “the efficient frontier” of classifier performance and be very comfortable working with this summary.
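The two scores we recommend can be computed in a few lines of plain Python (my own sketch, using the Mann-Whitney formulation of AUC and deviance as minus twice the log-likelihood):

```python
import math

def auc(scores, labels):
    # Mann-Whitney formulation: the probability a random positive
    # outscores a random negative (ties count as half a win).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def deviance(probs, labels):
    # -2 times the log-likelihood of the observed labels
    # under the predicted probabilities.
    ll = sum(math.log(p if y == 1 else 1 - p) for p, y in zip(probs, labels))
    return -2.0 * ll

scores = [0.9, 0.7, 0.4, 0.2]
labels = [1, 1, 0, 0]
print(auc(scores, labels))                      # 1.0 (perfect ranking)
print(round(deviance(scores, labels), 3))
```

Note AUC only looks at the ordering of scores (flexible), while deviance punishes every miscalibrated probability (strict), which is why we prefer the first early and the second late in a project.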
That being said, it always seems like there is a bit of gamesmanship: somebody always brings up yet another score, often apparently in the hope that you may not have heard of it. The choice of measure can signal one’s pedigree (precision/recall implies a data mining background, sensitivity/specificity a medical science background) and can be used to befuddle others.
Stanley Wyatt illustration from “Mathmanship” Nicholas Vanserg, 1958, collected in A Stress Analysis of a Strapless Evening Gown, Robert A. Baker, Prentice-Hall, 1963
The rest of this note is some help in dealing with this menagerie of common competing classifier evaluation scores.
Continue reading A budget of classifier evaluation measures
At the Strata+Hadoop World “R Day” Tutorial (Tuesday, March 29, 2016, San Jose, California) we spent some time on classifier measures derived from the so-called “confusion matrix.”
We repeated our usual admonition to not use “accuracy itself” as a project quality goal (business people tend to ask for it as it is the word they are most familiar with, but it usually isn’t what they really want).
One reason not to use accuracy: an example where a classifier that does nothing is “more accurate” than one that actually has some utility. (Figure credit Nina Zumel, slides here)
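A toy illustration of the accuracy trap (my own numbers, not those from the figure): with imbalanced classes, a classifier that does nothing beats a genuinely useful one on accuracy.

```python
# 95% of examples are negative.
labels = [1] * 5 + [0] * 95

do_nothing = [0] * 100                   # never flags anything
useful = [1] * 15 + [0] * 85             # finds all 5 positives, with 10 false alarms

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

print(accuracy(do_nothing, labels))  # 0.95
print(accuracy(useful, labels))      # 0.9
```

The do-nothing classifier is “more accurate” (0.95 vs 0.9) even though it recovers none of the positives, which is why accuracy alone is a poor project goal.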
And we worked through the usual bestiary of other metrics (precision, recall, sensitivity, specificity, AUC, balanced accuracy, and many more).
Please read on to see what stood out.
Continue reading A bit on the F1 score floor