Free gradient boosting lecture

November 21st, 2015.


We have always regretted that we didn’t get to cover gradient boosting in Practical Data Science with R (Manning 2014). To try to make up for that, we are sharing (for free) our GBM lecture from our (paid) video course Introduction to Data Science.

(link, all support material here).
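For concreteness, here is a minimal sketch of fitting a gradient boosted model in R with the gbm package (our own illustration on synthetic data, not an excerpt from the lecture):

library(gbm)
set.seed(2015)
d <- data.frame(x1 = runif(200), x2 = runif(200))
d$y <- as.numeric(d$x1 + d$x2 + rnorm(200, sd = 0.1) > 1)

# Fit a boosted classifier; cv.folds lets gbm.perf() pick the tree count.
model <- gbm(y ~ x1 + x2,
             data = d,
             distribution = "bernoulli",
             n.trees = 500,
             interaction.depth = 2,
             shrinkage = 0.05,
             cv.folds = 5)

# Choose the number of trees by cross-validation to avoid overfit.
best <- gbm.perf(model, method = "cv")
pred <- predict(model, newdata = d, n.trees = best, type = "response")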

Please help us get the word out by sharing/Tweeting!

Fluid use of data

November 19th, 2015.


Nina Zumel and I recently wrote a few articles and series on best practices in testing models and data:

What stands out in these presentations is that the simple practice of a static test/train split is merely a convenience, adopted to cut down on operational complexity and the difficulty of teaching. It is in no way optimal. That is, slightly more complicated procedures can build better models from a given set of data.

Suggested static cal/train/test experiment design from the vtreat data treatment library.
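To make the contrast concrete, here is a minimal base-R sketch (ours, not from the articles above) of k-fold cross-validation, in which every row is scored by a model that never saw it during training, instead of relying on a single static split:

set.seed(2015)
d <- data.frame(x = rnorm(100))
d$y <- d$x + rnorm(100)

k <- 5
fold <- sample(rep(seq_len(k), length.out = nrow(d)))

# Every row gets an out-of-sample prediction, so all 100 rows
# contribute to the error estimate.
cvPred <- numeric(nrow(d))
for (i in seq_len(k)) {
  model <- lm(y ~ x, data = d[fold != i, ])
  cvPred[fold == i] <- predict(model, newdata = d[fold == i, ])
}
sqrt(mean((cvPred - d$y)^2))  # cross-validated RMSE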
Continue reading Fluid use of data

Fast food, fast publication

November 8th, 2015.


The following article is getting quite a lot of press right now: David Just and Brian Wansink (2015), “Fast Food, Soft Drink, and Candy Intake is Unrelated to Body Mass Index for 95% of American Adults”, Obesity Science & Practice, forthcoming (in a new pay-for-placement journal). Obviously it is a sensational contrary position (some coverage: here, here, and here).

I thought I would take a peek to learn about the statistical methodology (see here for some commentary). I would say the kindest thing you can say about the paper is: its problems are not statistical.

At this time the authors don’t seem to have supplied their data preparation or analysis scripts and the paper “isn’t published yet” (though they have had time for a press release), so we have to rely on their pre-print. Read on for excerpts from the work itself (with commentary). Continue reading Fast food, fast publication

Bitcoin’s status isn’t as simple as ruling if it is more a private token or a public ledger

November 7th, 2015.


There is a lot of current interest in various “crypto currencies” such as Bitcoin, but that does not mean there have not been previous combined ledger and token recording systems. Others have noticed the relevance of Crawfurd v The Royal Bank (the case where money became money), and we are going to write about this yet again.

Very roughly: a Bitcoin is a cryptographic secret that is considered to have some value. Bitcoins are individual data tokens, and duplication is prevented through a distributed shared ledger (called the blockchain). As interesting as this is, we want to point out that notional value existing both in ledgers and as possessed tokens has quite a long precedent.

This helps us remember that important questions about Bitcoins (such as: are they a currency or a commodity?) will be determined by regulators, courts, and legislators. It will not be a simple, inevitable consequence of some detail of implementation, as this has never been the case for other forms of value (gold, coins, bank notes, stock certificates, or bank account balances).

Value has often been recorded in combinations of ledgers and tokens, so many of these issues have been seen before (though they have never been as simple as one would hope). Historically the rules that apply to such systems are subtle, and not completely driven by whether the system primarily resides in ledgers or primarily resides in portable tokens. So we shouldn’t expect determinations involving Bitcoin to be simple either.

What I would like to do with this note is point out some fun examples and end with the interesting case of Crawfurd v The Royal Bank, as brought up by “goonsack” in 2013. Continue reading Bitcoin’s status isn’t as simple as ruling if it is more a private token or a public ledger

Don’t use stats::aggregate()

October 31st, 2015.


When working with an analysis system (such as R) there are usually good reasons to prefer using functions from the “base” system over using functions from extension packages. However, base functions are sometimes locked into unfortunate design compromises that can now be avoided. In R’s case I would say: do not use stats::aggregate().
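As one hedged illustration of the kind of design compromise we mean (our guess at a representative pain point, not necessarily the example in the post): the formula interface of stats::aggregate() silently drops rows where a grouping variable is NA.

d <- data.frame(g = c("a", "a", NA, "b"),
                x = c(1, 2, 3, 4))

# The NA group vanishes without warning (the formula method defaults to na.omit).
aggregate(x ~ g, data = d, FUN = sum)
#   g x
# 1 a 3
# 2 b 4

# tapply() with an explicit NA level keeps all of the data visible.
tapply(d$x, addNA(d$g), sum)
#    a    b <NA>
#    3    4    3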

Read on for our example. Continue reading Don’t use stats::aggregate()

Who is allowed to call themselves a data scientist?

October 30th, 2015.


It has been popular to complain that the current terms “data science” and “big data” are so vague as to be meaningless. While these terms are quite high on the hype cycle, even the American Statistical Association was forced to admit that data science actually exists.

Gartner hype cycle (Wikipedia).

Given we agree data science exists, who is allowed to call themselves a data scientist? Continue reading Who is allowed to call themselves a data scientist?

Thank you Joseph Rickert!

October 16th, 2015.


A bit of text we are proud to steal from our good friend Joseph Rickert:

Then, for some very readable background material on SVMs I recommend section 13.4 of Applied Predictive Modeling and sections 9.3 and 9.4 of Practical Data Science with R by Nina Zumel and John Mount. You will be hard pressed to find an introduction to kernel methods and SVMs that is as clear and useful as this last reference.

For more on SVMs see the original article on the Revolution Analytics blog.

A simple differentially private-ish procedure

October 13th, 2015.


Authors: John Mount and Nina Zumel

Nina and I were noodling with some variations of differentially private machine learning, and think we have found a variation of a standard practice that is fairly efficient in establishing a useful privacy condition (though, as commenters pointed out, not actual differential privacy).
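For reference, here is the standard textbook building block (the classic Laplace mechanism, not the procedure from our note): add Laplace noise scaled to sensitivity/epsilon to a query result.

# Sample from a Laplace(0, scale) distribution by inverting its CDF.
rlaplace <- function(n, scale) {
  u <- runif(n, min = -0.5, max = 0.5)
  -scale * sign(u) * log(1 - 2 * abs(u))
}

# A counting query has sensitivity 1 (one record moves the count by at most 1),
# so noise with scale 1/epsilon yields epsilon-differential privacy.
privateCount <- function(x, epsilon) {
  sum(x) + rlaplace(1, scale = 1 / epsilon)
}

set.seed(2015)
privateCount(rbinom(1000, 1, 0.3), epsilon = 0.1)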


Read on for the idea and a rough analysis. Continue reading A simple differentially private-ish procedure

Baking priors

October 13th, 2015.


There remains a bit of a two-way snobbery: Frequentist statistics is what we teach (as so-called objective statistics remain the same no matter who works with them), and Bayesian statistics is what we do (as it tends to directly estimate the posterior probabilities we are actually interested in). Nina Zumel hit the nail on the head when she wrote an article explaining that the appropriateness of a type of statistical theory depends on the type of question you are trying to answer, not on your personal prejudices.

We will discuss a few more examples that have been on our minds, including one I am calling “baking priors.” This final example will demonstrate some of the advantages of allowing researchers to document their priors.
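As a tiny preview of what documenting a prior can look like (a generic conjugate example of ours, not the bread example below):

# A Beta(a, b) prior on a success rate is an explicit, documented belief;
# with binomial observations the posterior update is just adding counts.
priorA <- 3
priorB <- 3                  # documented prior: rate believed to be near 0.5
successes <- 8
failures  <- 2

postA <- priorA + successes  # conjugate update
postB <- priorB + failures
postA / (postA + postB)      # posterior mean: 11/16 = 0.6875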

Figure 1: two loaves of bread.
Continue reading Baking priors

Thumbs up for Anaconda

October 10th, 2015.


One of the things I like about R is that, because it is not used for systems programming, you can expect to install your own current version of R without interference from some system copy of R deliberately held back at an older version (for reasons of script compatibility). R is conveniently distributed as a single package (with automated installation of additional libraries).

Want to do some data analysis? Install R, load your data, and go. You don’t expect to spend hours on system administration just to get back to your task.

Python, being a popular general-purpose language, does not have this advantage, but thanks to Anaconda from Continuum Analytics you can skip (or at least delegate) a lot of the pain the system environment imposes. With Anaconda, trying out Python packages (Jupyter, scikit-learn, pandas, numpy, sympy, cvxopt, bokeh, and more) becomes safe and pleasant. Continue reading Thumbs up for Anaconda