
Laplace noising versus simulated out of sample methods (cross frames)

Nina Zumel recently mentioned the use of Laplace noise in “count codes” by Misha Bilenko (see here and here) as a known method to break the overfit bias that comes from using the same data to design impact codes and to fit a next-level model. It is a fascinating technique, inspired by differential privacy methods, that Nina and I respect but don’t actually use in production.
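To make the idea concrete, here is a minimal sketch in R of Laplace-noised count codes on simulated data. This is my own illustration, not Bilenko's implementation: the simulated data, the noise scale b, and the clipping are all choices made just for this example.

```r
set.seed(2016)
d <- data.frame(
  x = sample(letters, 1000, replace = TRUE),  # a many-level categorical variable
  y = rbinom(1000, 1, 0.3)                    # a binary outcome
)

# Laplace(0, b) sampler: the difference of two independent Exp(1/b) draws
rlaplace <- function(n, b) rexp(n, rate = 1/b) - rexp(n, rate = 1/b)

b <- 1  # noise scale (in differential-privacy terms, roughly sensitivity/epsilon)
stats <- aggregate(y ~ x, data = d,
                   FUN = function(v) c(pos = sum(v), n = length(v)))
posNoised <- stats$y[, "pos"] + rlaplace(nrow(stats), b)
nNoised   <- stats$y[, "n"]   + rlaplace(nrow(stats), b)
# noised per-level positive rate, clipped back into [0, 1]
stats$code <- pmin(pmax(posNoised / pmax(nNoised, 1), 0), 1)
d$xCode <- stats$code[match(d$x, stats$x)]   # the noised count-code column
```

The point of the noise is that the per-level codes no longer memorize the training outcomes exactly, so a downstream model fit on the same rows overfits less.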


[Image: Nested dolls, Wikimedia Commons]

Please read on for my discussion of some of the limitations of the technique, how we solve the problem for impact coding (also called “effects codes”), and a worked example in R.
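As a preview of the cross-frame alternative, here is a hedged sketch using the vtreat package. The function mkCrossFrameCExperiment and the derived column name x_catB are vtreat's API as I recall it; consult the documentation of your installed version.

```r
library(vtreat)

set.seed(2016)
d <- data.frame(
  x = sample(letters, 1000, replace = TRUE),
  y = rbinom(1000, 1, 0.3) == 1
)

cfe <- mkCrossFrameCExperiment(d,
                               varlist       = "x",
                               outcomename   = "y",
                               outcometarget = TRUE)
# crossFrame carries impact codes built by cross-validation: the code stored
# in each row was fit without using that row's y, simulating out-of-sample
# application and avoiding the nested-model overfit bias
trainFrame <- cfe$crossFrame
model <- glm(y ~ x_catB, data = trainFrame, family = binomial)
```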


A bit more on impact coding

Dr. Nina Zumel recently published an excellent tutorial on a modeling technique she called impact coding. It is a pragmatic machine learning technique that has helped with more than one client project. Impact coding is a bridge from Naive Bayes (where each variable’s impact is added in without regard to the known effects of any other variable) to Logistic Regression (where dependencies between variables and levels are completely accounted for). A natural question is: can we pick up more of the positive features of each model?
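A minimal self-contained sketch of that bridge, on simulated data (the names and the dependence structure are invented for illustration): impact-code two categorical variables, then compare a Naive-Bayes-style score, where the impacts are simply added, with a logistic regression fit over the same codes, where the weights are learned jointly.

```r
set.seed(42)
n <- 2000
d <- data.frame(x1 = sample(letters[1:5], n, replace = TRUE),
                x2 = sample(letters[1:5], n, replace = TRUE))
d$y <- rbinom(n, 1, ifelse(d$x1 == d$x2, 0.7, 0.3))  # variables deliberately interact

logit <- function(p) log(p / (1 - p))
impactCode <- function(xcol, y) {
  lvlRate <- tapply(y, xcol, mean)                        # per-level outcome rate
  (logit(lvlRate) - logit(mean(y)))[as.character(xcol)]   # deviation in log-odds
}
d$x1Code <- impactCode(d$x1, d$y)
d$x2Code <- impactCode(d$x2, d$y)

nbScore <- d$x1Code + d$x2Code             # Naive-Bayes-style: impacts added with
                                           # implicit weight 1, dependence ignored
lrFit <- glm(y ~ x1Code + x2Code,          # logistic regression: weights fit jointly,
             data = d, family = binomial)  # partially accounting for dependence
```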


Modeling Trick: Impact Coding of Categorical Variables with Many Levels

One of the shortcomings of regression (both linear and logistic) is that it doesn’t handle categorical variables with a very large number of possible values (for example, postal codes). You can get around this, of course, by moving to another modeling technique, such as Naive Bayes; however, you then lose some of the advantages of regression: namely, the model’s explicit estimates of each variable’s explanatory value, and explicit insight into and control of variable-to-variable dependence.

Here we discuss one modeling trick that allows us to keep categorical variables with a large number of values, and at the same time retain much of logistic regression’s power.
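As a teaser, here is a minimal sketch of one such trick in base R, on simulated data: replace a high-cardinality categorical variable with its smoothed per-level deviation in log-odds from the global rate, and feed that single numeric column to the regression. The smoothing constant and the names are illustrative choices, not a fixed recipe.

```r
set.seed(2012)
d <- data.frame(
  zip = sample(sprintf("z%03d", 1:500), 5000, replace = TRUE),  # 500-level categorical
  y   = rbinom(5000, 1, 0.2)                                    # binary outcome
)

eps   <- 1e-3                                     # smoothing, guards against 0/1 rates
logit <- function(p) log((p + eps) / (1 - p + eps))
base  <- logit(mean(d$y))                         # global log-odds
lvl   <- tapply(d$y, d$zip, mean)                 # per-level outcome rate
impact <- logit(lvl) - base                       # per-level impact code
d$zipImpact <- impact[as.character(d$zip)]        # one numeric column replaces 500 levels
model <- glm(y ~ zipImpact, data = d, family = binomial)
```

Note that building the codes and fitting the regression on the same rows is exactly the nested-model overfit discussed in the first post above; in practice one wants noising, a held-out calibration set, or cross-frames.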
