What was data science before it was called data science?


“Data Science” is obviously a trendy term making its way through the hype cycle. Either nobody is good enough to be a data scientist (unicorns) or everybody is too good to be a data scientist (or the truth is somewhere in the middle).




Gartner hype cycle (Wikipedia).

And there is a quarter that grumbles that we are merely talking about statistics under a new name (see here and here).

It has always been the case that advances in data engineering (such as punch cards, or data centers) make analysis practical at new scales (though I still suspect Map/Reduce was a plot designed to trick engineers into being excited about ETL and report generation).



Data Science 1832: Semen Korsakov card.

However, in the 1940s and 1950s the field was called “operations research” (even when performed by statisticians). When you read John F. Magee (2002) “Operations Research at Arthur D. Little, Inc.: The Early Years”, Operations Research 50(1):149-153 http://dx.doi.org/10.1287/opre.50.1.149.17796 you really come away with the impression you are reading about a study of online advertising performed in the 1940s (okay, mail advertising, but mail was “the email of its time”).

In this spirit, next week we will write about the sequential analysis solution for A/B-testing, invented in the 1940s by one of the greats of statistics and operations research: Abraham Wald (whom we have written about before).
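As a preview, here is a minimal sketch (in Python, with invented conversion rates and error targets) of Wald's sequential probability ratio test applied to a stream of Bernoulli A/B outcomes. This is just an illustration of the idea, not the notation or worked example we will use next week.

```python
import math

def sprt_bernoulli(outcomes, p0=0.05, p1=0.07, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for a Bernoulli rate.
    p0/p1 are the hypothesized 'no lift' and 'lift' conversion rates."""
    upper = math.log((1 - beta) / alpha)   # cross above: decide in favor of p1
    lower = math.log(beta / (1 - alpha))   # cross below: decide in favor of p0
    llr = 0.0
    for x in outcomes:                     # x is 1 (conversion) or 0 (no conversion)
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1 (rate looks like p1)"
        if llr <= lower:
            return "accept H0 (rate looks like p0)"
    return "keep sampling"
```

The attraction is that the test can stop early on strong evidence instead of committing to a fixed sample size up front.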



Abraham Wald

Bitcoin’s status isn’t as simple as ruling if it is more a private token or a public ledger


There is a lot of current interest in various “crypto currencies” such as Bitcoin, but that does not mean there have not been previous combined ledger and token recording systems. Others have noticed the relevance of Crawfurd v The Royal Bank (the case where money became money), and we are going to write about this yet again.

Very roughly: a Bitcoin is a cryptographic secret that is considered to have some value. Bitcoins are individual data tokens, and duplication is prevented through a distributed shared ledger (called the blockchain). As interesting as this is, we want to point out that notional value existing both in ledgers and as possessed tokens has quite a long precedent.
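To make the "tokens plus shared ledger" idea concrete, here is a deliberately toy sketch. It has none of Bitcoin's actual cryptography, mining, or distribution, and all names and identifiers are invented for illustration.

```python
# Toy illustration: the same value tracked both as tokens held by owners
# and as an append-only ledger of transfers.  Duplication ("double spend")
# is rejected by consulting the ledger state, not by inspecting the token.
ledger = []                       # append-only list of (token_id, sender, receiver)
ownership = {"token-1": "alice"}  # current holder of each token

def transfer(token_id, sender, receiver):
    if ownership.get(token_id) != sender:
        raise ValueError("rejected: sender does not currently hold this token")
    ledger.append((token_id, sender, receiver))
    ownership[token_id] = receiver

transfer("token-1", "alice", "bob")      # fine: alice holds the token
# transfer("token-1", "alice", "carol")  # would raise: alice no longer holds it
```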

This helps us remember that important questions about Bitcoins (such as: are they a currency or a commodity?) will be determined by regulators, courts, and legislators. It will not be a simple, inevitable consequence of some detail of implementation, as this has never been the case for other forms of value (gold, coins, bank notes, stock certificates, or bank account balances).

Value has often been recorded in combinations of ledgers and tokens, so many of these issues have been seen before (though they have never been as simple as one would hope). Historically the rules that apply to such systems are subtle, and not completely driven by whether the system primarily resides in ledgers or primarily resides in portable tokens. So we shouldn’t expect determinations involving Bitcoin to be simple either.

What I would like to do with this note is point out some fun examples and end with the interesting case of Crawfurd v The Royal Bank, as brought up by “goonsack” in 2013. Continue reading Bitcoin’s status isn’t as simple as ruling if it is more a private token or a public ledger

Baking priors


There remains a bit of a two-way snobbery that Frequentist statistics is what we teach (as so-called objective statistics remain the same no matter who works with them) and Bayesian statistics is what we do (as it tends to directly estimate the posterior probabilities we are actually interested in). Nina Zumel hit the nail on the head when she wrote an article explaining that the appropriateness of a type of statistical theory depends on the type of question you are trying to answer, not on your personal prejudices.

We will discuss a few more examples that have been on our minds, including one I am calling “baking priors.” This final example will demonstrate some of the advantages of allowing researchers to document their priors.
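As a placeholder for what documenting a prior can look like in code (this is a generic Beta-Binomial example, not the bread experiment discussed in the article, and the counts are invented), consider:

```python
from scipy.stats import beta

# Documented prior: we believe the success rate is around 0.3,
# with weight equivalent to roughly 10 prior observations.
prior_a, prior_b = 3, 7

successes, failures = 12, 18             # hypothetical observed data
posterior = beta(prior_a + successes, prior_b + failures)

print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```

Writing the prior down (here: the pair 3, 7 and the reasoning behind it) is what lets others audit or re-run the analysis with their own prior.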


Figure 1: two loaves of bread.
Continue reading Baking priors

Thumbs up for Anaconda


One of the things I like about R is that, because it is not used for systems programming, you can expect to install your own current version of R without interference from some system version of R that is deliberately being held back at an older release (for reasons of script compatibility). R is conveniently distributed as a single package (with automated installation of additional libraries).

Want to do some data analysis? Install R, load your data, and go. You don’t expect to spend hours on system administration just to get back to your task.

Python, being a popular general-purpose language, does not have this advantage, but thanks to Anaconda from Continuum Analytics you can skip (or at least delegate) a lot of the system-environment-imposed pain. With Anaconda, trying out Python packages (Jupyter, scikit-learn, pandas, numpy, sympy, cvxopt, bokeh, and more) becomes safe and pleasant. Continue reading Thumbs up for Anaconda

A deeper theory of testing


In some of my recent public talks (for example: here and here) I have mentioned a desire for “a deeper theory of fitting and testing.” I thought I would expand on what I meant by this.

In this note I am going to cover a lot of different topics to try to suggest some perspective. I won’t have my usual luxury of fully defining my terms or working concrete examples. Hopefully a number of these ideas (which are related, but don’t seem to easily synthesize together) will be the subjects of their own later articles.

Introduction

The focus of this article is this: the true goal of predictive analytics is always to build a model that works well in production. Training and testing procedures are designed to simulate this unknown future model performance, but they can be expensive and they can also fail.

What we want is a good measure of future model performance, and to apply that measure in picking a model without running deep into Goodhart’s law (“When a measure becomes a target, it ceases to be a good measure.”).

Most common training and testing procedures are destructive in the sense that they use up data (data used for one step may not be safely reused for another step in an unbiased fashion; an example is excess generalization error). In this note I thought I would expand on ideas for improving statistical efficiency, or getting more out of your training data while avoiding overfitting.
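One standard (and data-hungry) discipline is to carve the data into disjoint pieces so that no step reuses rows consumed by an earlier step. A minimal sketch, with a stand-in data frame and invented split fractions:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Stand-in data frame; in practice this is your modeling data.
df = pd.DataFrame({"x": range(100), "y": [i % 2 for i in range(100)]})

# Each split "uses up" rows: data seen by one step can no longer give an
# unbiased performance estimate for a later step.
train, holdout = train_test_split(df, test_size=0.4, random_state=1)
calibrate, test = train_test_split(holdout, test_size=0.5, random_state=1)

# train     -> fit the models
# calibrate -> compare candidate models / tune hyper-parameters
# test      -> touched once, at the very end, for the final performance estimate
```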


Destructive testing.

I will outline a few variations of model construction and testing techniques that one should keep in mind.

Continue reading A deeper theory of testing

How do you know if your model is going to work?


Authors: John Mount and Nina Zumel.

Our four part article series collected into one piece.

  • Part 1: The problem
  • Part 2: In-training set measures
  • Part 3: Out of sample procedures
  • Part 4: Cross-validation techniques

“Essentially, all models are wrong, but some are useful.”


George Box

Here’s a caricature of a data science project: your company or client needs information (usually to make a decision). Your job is to build a model to predict that information. You fit a model, perhaps several, to available data and evaluate them to find the best. Then you cross your fingers that your chosen model doesn’t crash and burn in the real world.

We’ve discussed detecting if your data has a signal. Now: how do you know that your model is good? And how sure are you that it’s better than the models that you rejected?


Geocentric illustration Bartolomeu Velho, 1568 (Bibliothèque Nationale, Paris)

Notice the Sun in the 4th revolution about the earth. A very pretty, but not entirely reliable model.

In this latest “Statistics as it should be” article, we will systematically look at what to worry about and what to check. This is standard material, but presented in a “data science” oriented manner: we are going to consider scoring system utility in terms of service to a negotiable business goal (one of the many ways data science differs from pure machine learning).

Continue reading How do you know if your model is going to work?

How do you know if your model is going to work? Part 2: In-training set measures


Authors: John Mount and Nina Zumel.

When fitting and selecting models in a data science project, how do you know that your final model is good? And how sure are you that it’s better than the models that you rejected? In this Part 2 of our four part mini-series “How do you know if your model is going to work?” we develop in-training set measures.

Previously we worked on:

  • Part 1: Defining the scoring problem

In-training set measures

The most tempting procedure is to score your model on the data used to train it. The attraction is that this avoids the statistical inefficiency of denying some of your data to the training procedure.

Run it once procedure

A common way to assess score quality is to run your scoring function on the data used to build your model. We might try comparing several models scored by AUC or deviance (normalized to factor out sample size) on their own training data, as shown below.

Figures: AUC and normalized deviance for each model, scored on its own training data.

What we have done is take five popular machine learning techniques (random forest, logistic regression, gbm, GAM logistic regression, and elastic net logistic regression) and plot their performance in terms of AUC and normalized deviance on their own training data. For AUC larger numbers are better, and for deviance smaller numbers are better. Because we have evaluated multiple models we are starting to get a sense of scale. We should suspect an AUC of 0.7 on training data is good (though random forest achieved a training AUC of almost 1.0), and we should be acutely aware that evaluating models on their own training data has an optimistic bias (the model has seen the training data, so it has a good chance of doing well on it; put another way, training data is not exchangeable with future data for the purpose of estimating model performance).
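A rough sketch of how such a "score on your own training data" comparison might be computed (synthetic data, only two of the model families, and not the code behind the plots above; purely for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, log_loss

rng = np.random.RandomState(1)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + rng.normal(size=1000) > 1.4).astype(int)   # uncommon positive class

for name, model in [("logistic regression", LogisticRegression()),
                    ("random forest", RandomForestClassifier(random_state=1))]:
    model.fit(X, y)
    p = model.predict_proba(X)[:, 1]           # scores on the *training* data
    auc = roc_auc_score(y, p)
    norm_deviance = 2 * log_loss(y, p)          # deviance divided by sample size
    print(name, "training AUC:", round(auc, 3),
          "normalized deviance:", round(norm_deviance, 3))
```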

There are two more Gedankenexperiment models that any data scientist should always have in mind:

  1. The null model (on the graph as “null model”). This is the performance of the best constant model (a model that returns the same answer for all datums). In this case it is a model that scores each and every row as having an identical 7% chance of churning. This is an important model that you want to do better than. It is also a model you are often competing against as a data scientist, as it is the “what if we treat everything in this group the same” option (often the business process you are trying to replace).

    The data scientist should always compare their work to the null model on deviance (the null model AUC is trivially 0.5), and logistic regression packages routinely report this statistic (a minimal computation is sketched after this list).

  2. The best single variable model (on the graph as “best single variable model”). This is the best model built using only one variable or column (in this case using a GAM logistic regression as the modeling method). This is another model the data scientist wants to outperform, as it represents the “maybe one of the columns is already the answer” case (if so, that would be very good for the business, as they could get good predictions without modeling infrastructure).

    The data scientist should definitely compare their model to the best single variable model. Until you significantly outperform the best single variable model you have not outperformed what an analyst can find with a single pivot table.
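A minimal sketch of the null-model comparison from item 1 (the 7% churn rate and the row count are invented for illustration):

```python
import numpy as np

y = np.array([1] * 70 + [0] * 930)       # 7% churn in a hypothetical 1000-row set
p_null = np.full(len(y), y.mean())       # constant model: everyone gets the base rate

# Normalized deviance of the constant (null) model; any useful model
# should beat this number (smaller is better).
null_deviance = -2 * np.mean(y * np.log(p_null) + (1 - y) * np.log(1 - p_null))
print("null model normalized deviance:", round(null_deviance, 3))
# Its AUC is trivially 0.5: a constant score cannot rank rows.
```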

At this point it would be tempting to pick the random forest model as the winner as it performed best on the training data. There are at least two things wrong with this idea:

Continue reading How do you know if your model is going to work? Part 2: In-training set measures

How do you know if your model is going to work? Part 1: The problem


Authors: John Mount and Nina Zumel.

“Essentially, all models are wrong, but some are useful.”


George Box

Here’s a caricature of a data science project: your company or client needs information (usually to make a decision). Your job is to build a model to predict that information. You fit a model, perhaps several, to available data and evaluate them to find the best. Then you cross your fingers that your chosen model doesn’t crash and burn in the real world.

We’ve discussed detecting if your data has a signal. Now: how do you know that your model is good? And how sure are you that it’s better than the models that you rejected?


Geocentric illustration Bartolomeu Velho, 1568 (Bibliothèque Nationale, Paris)

Notice the Sun in the 4th revolution about the earth. A very pretty, but not entirely reliable model.

In this latest “Statistics as it should be” series, we will systematically look at what to worry about and what to check. This is standard material, but presented in a “data science” oriented manner: we are going to consider scoring system utility in terms of service to a negotiable business goal (one of the many ways data science differs from pure machine learning).

To organize the ideas into digestible chunks, we are presenting this article as a four part series (to be finished over the next 3 Tuesdays). This part (part 1) sets up the specific problem.

Continue reading How do you know if your model is going to work? Part 1: The problem

I was wrong about statistics


I’ll admit it: I have been wrong about statistics. However, that isn’t what this article is about. This article is less about some of the statistical mistakes I have made, as a mere working data scientist, and more a rant about the hectoring tone of corrections from some statisticians (both when I have been right and when I have been wrong).


Used wrong (image Justin Baeder, some rights reserved).

Continue reading I was wrong about statistics

Text encoding is a convoluted mess


Modern text encoding is a convoluted mess where costs can easily exceed benefits. I admit we are in a world that has moved beyond ASCII (which at best served only English, and even then without full punctuation). But modern text encoding standards (utf-x, Unicode) have metastasized to the point that you spend more time working around them than benefiting from them.
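A small Python illustration of where the time goes: the same accented character round-trips fine under UTF-8, but turns to mojibake the moment anything in the pipeline guesses the wrong encoding.

```python
s = "café"
data = s.encode("utf-8")           # b'caf\xc3\xa9' on the wire or on disk

print(data.decode("utf-8"))        # 'café'  -- correct round trip
print(data.decode("latin-1"))      # 'cafÃ©' -- classic mojibake from a wrong guess

try:
    data.decode("ascii")           # ASCII cannot represent 'é' at all
except UnicodeDecodeError as err:
    print("ascii decode failed:", err)
```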


“ASCII Code Chart-Quick ref card” by Namazu-tron. Licensed under Public Domain via Wikimedia Commons.
Continue reading Text encoding is a convoluted mess