
Slides from the PyData2019 data_algebra lightning talk

Slides from my PyData2019 data_algebra lightning talk are here.


Practical Data Science with R, 2nd Edition: Introduction Video

Nina and I have prepared a quick introduction video for Practical Data Science with R, 2nd Edition.

We are really proud of both editions of the book. This book can help an R user directly experience the data science style of working with data and machine learning techniques.

The book is available now at manning.com.

Please check it out!


Nina Zumel and John Mount speaking on vtreat at PyData LA 2019

As we have announced before, we have ported the R version of vtreat to a new Python version of vtreat.

Our latest news is: we are speaking about the Python version at PyData LA 2019 (Thursday 10:50 AM–11:35 AM in Track 2 Room).



Practical Data Science with R, 2nd Edition, IS OUT!!!!!!!

Practical Data Science with R, 2nd Edition author Dr. Nina Zumel, with a fresh author’s copy of her book!



When Cross-Validation is More Powerful than Regularization

Regularization is a way of avoiding overfit by restricting the magnitude of model coefficients (or in deep learning, node weights). A simple example of regularization is the use of ridge or lasso regression to fit linear models in the presence of collinear variables or (quasi-)separation. The intuition is that smaller coefficients are less sensitive to idiosyncrasies in the training data, and hence less likely to overfit.
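(As a concrete, hypothetical illustration, not taken from the original article, here is a minimal ridge-regression sketch using the glmnet package; the data frame d and its column names are placeholders.)

# minimal ridge regression sketch, assuming a data frame d with
# numeric predictors x1, x2 and outcome y
library(glmnet)

X = as.matrix(d[, c("x1", "x2")])   # glmnet expects a numeric matrix
fit = glmnet(X, d$y,
             alpha = 0,             # alpha = 0 selects the ridge (L2) penalty
             lambda = 0.1)          # larger lambda shrinks coefficients harder
coef(fit)                           # shrunken coefficients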

Cross-validation is a way to safely reuse training data in nested model situations. This includes both the case of setting hyperparameters before fitting a model, and the case of fitting models (let's call them base learners) that are then used as variables in downstream models, as shown in Figure 1. In either situation, using the same data twice can lead to models that are overtuned to idiosyncrasies in the training data, and more likely to overfit.

Figure 1 Properly nesting models with cross-validation

In general, if any stage of your modeling pipeline involves looking at the outcome (we’ll call that a y-aware stage), you cannot directly use the same data in the following stage of the pipeline. If you have enough data, you can use separate data in each stage of the modeling process (for example, one set of data to learn hyperparameters, another set of data to train the model that uses those hyperparameters). Otherwise, you should use cross-validation to reduce the nested model bias.

Cross-validation is relatively computationally expensive; regularization is relatively cheap. Can you mitigate nested model bias by using regularization techniques instead of cross-validation?

The short answer: no, you shouldn't. But, as we've written before, demonstrating this is more memorable than simply saying "Don't do that."

A simple example

Suppose you have a system with two categorical variables. The variable x_s has 10 levels, and the variable x_n has 100 levels. The outcome y is a function of x_s, but not of x_n (but you, the analyst building the model, don’t know this). Here’s the head of the data.

##     x_s  x_n           y
## 2  s_10 n_72  0.34228110
## 3  s_01 n_09 -0.03805102
## 4  s_03 n_18 -0.92145960
## 9  s_08 n_43  1.77069352
## 10 s_08 n_17  0.51992928
## 11 s_01 n_78  1.04714355
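(For reference, here is a hedged sketch of how data of this shape might be generated, along with a holdout set used later; the article's actual generation code, seed, and effect sizes live in the code linked later in this post and will differ.)

# sketch: generate data of this shape, plus a holdout set
set.seed(3266)
Nall = 500
x_s = sample(sprintf("s_%02d", 1:10),  Nall, replace = TRUE)
x_n = sample(sprintf("n_%02d", 1:100), Nall, replace = TRUE)
svals = rnorm(10)                         # hidden per-level effects for x_s
names(svals) = sprintf("s_%02d", 1:10)
y = as.numeric(svals[x_s]) + rnorm(Nall)  # y depends on x_s only, not on x_n
d = data.frame(x_s = x_s, x_n = x_n, y = y, stringsAsFactors = FALSE)

is_train = runif(Nall) <= 0.5             # train/holdout split
dtrain = d[is_train, ]
dtest  = d[!is_train, ]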

With most modeling techniques, a categorical variable with K levels is equivalent to K or K-1 numerical (indicator or dummy) variables, so this system actually has around 110 variables. In real-life situations where a data scientist is working with high-cardinality categorical variables, or with a lot of categorical variables, the number of actual variables can begin to swamp the size of the training data, and/or bog down the machine learning algorithm.
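(You can check this count directly with model.matrix(), using the sketch data above.)

# sketch: count the indicator columns R would create for the two factors
ncol(model.matrix(~ x_s + x_n, data = dtrain))   # roughly 1 + 9 + 99 columns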

One way to deal with these issues is to represent each categorical variable by a single variable model (or base learner), and then use the predictions of those base learners as the inputs to a bigger model. So instead of fitting a model with 110 indicator variables, you can fit a model with two numerical variables. This is a simple example of nested models.

Figure 2 Impact coding as an example of nested models

We refer to this procedure as “impact coding,” and it is one of the data treatments available in the vtreat package, specifically for dealing with high-cardinality categorical variables. But for now, let’s go back to the original problem.

The naive way

For this simple example, you might try representing each variable as the expected value of y - mean(y) in the training data, conditioned on the variable’s level. So the ith “coefficient” of the one-variable model would be given by:

v_i = E[y | x = s_i] − E[y]

where s_i is the ith level. Let's show this with the variable x_s (the code for all the examples in this article is here):

##     x_s      meany      coeff
## 1  s_01  0.7998263  0.8503282
## 2  s_02 -1.3815640 -1.3310621
## 3  s_03 -0.7928449 -0.7423430
## 4  s_04 -0.8245088 -0.7740069
## 5  s_05  0.7547054  0.8052073
## 6  s_06  0.1564710  0.2069728
## 7  s_07 -1.1747557 -1.1242539
## 8  s_08  1.3520153  1.4025171
## 9  s_09  1.5789785  1.6294804
## 10 s_10 -0.7313895 -0.6808876
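The table above can be reproduced, and applied to the data, with a few lines of base R. A minimal sketch, assuming the dtrain frame from the sketch above (the article's linked code may differ in details):

# naive impact coding: per-level conditional mean of y, minus the grand mean,
# computed on the same data we will later model on
grandmean = mean(dtrain$y)

impact_code = function(xcol, y, grandmean) {
  meany = tapply(y, xcol, mean)   # E[y | x = s_i]
  meany - grandmean               # v_i = E[y | x = s_i] - E[y]
}

codes_s = impact_code(dtrain$x_s, dtrain$y, grandmean)
codes_n = impact_code(dtrain$x_n, dtrain$y, grandmean)

# look up each row's code to build the vs and vn columns
dtrain_treated = transform(dtrain,
                           vs = codes_s[as.character(x_s)],
                           vn = codes_n[as.character(x_n)])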

In other words, whenever the value of x_s is s_01, the one variable model vs returns the value 0.8503282, and so on. If you do this for both variables, you get a training set that looks like this:

##     x_s  x_n           y         vs         vn
## 2  s_10 n_72  0.34228110 -0.6808876 0.64754957
## 3  s_01 n_09 -0.03805102  0.8503282 0.54991135
## 4  s_03 n_18 -0.92145960 -0.7423430 0.01923877
## 9  s_08 n_43  1.77069352  1.4025171 1.90394159
## 10 s_08 n_17  0.51992928  1.4025171 0.26448341
## 11 s_01 n_78  1.04714355  0.8503282 0.70342961

Now fit a linear model for y as a function of vs and vn.

model_raw = lm(y ~ vs + vn,
               data=dtrain_treated)
summary(model_raw)
## 
## Call:
## lm(formula = y ~ vs + vn, data = dtrain_treated)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.33068 -0.57106  0.00342  0.52488  2.25472 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.05050    0.05597  -0.902    0.368    
## vs           0.77259    0.05940  13.006   <2e-16 ***
## vn           0.61201    0.06906   8.862   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.8761 on 242 degrees of freedom
## Multiple R-squared:  0.6382, Adjusted R-squared:  0.6352 
## F-statistic: 213.5 on 2 and 242 DF,  p-value: < 2.2e-16

Note that this model gives significant coefficients to both vs and vn, even though y is not a function of x_n (or vn). Because you used the same data to fit the one variable base learners and to fit the larger model, you have overfit.

The right way: cross-validation

The correct way to impact code (or to nest models in general) is to use cross-validation techniques. Impact coding with cross-validation is already implemented in vtreat; note the similarity between this diagram and Figure 1 above.

Figure 3 Cross-validated data preparation with vtreat

The training data is used both to fit the base learners (as we did above) and also to create a data frame of cross-validated base learner predictions (called a cross-frame in vtreat). This cross-frame is used to train the overall model. Let's fit the correct nested model, using vtreat.
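(For intuition, here is a hand-rolled sketch of what computing out-of-fold impact codes could look like. This is a simplified illustration, not vtreat's actual procedure, and it assumes the dtrain frame from above.)

# sketch: cross-validated impact coding for one variable; each row's code
# is computed from the folds that row does NOT belong to
cross_code = function(xcol, y, nfolds = 5) {
  fold = sample(rep(seq_len(nfolds), length.out = length(y)))
  out = numeric(length(y))
  for(f in seq_len(nfolds)) {
    holdout = (fold == f)
    grandmean = mean(y[!holdout])
    meany = tapply(y[!holdout], xcol[!holdout], mean)
    code = meany[as.character(xcol[holdout])] - grandmean
    code[is.na(code)] = 0    # levels unseen outside the fold get code 0
    out[holdout] = code
  }
  out
}

vs_cross = cross_code(dtrain$x_s, dtrain$y)
vn_cross = cross_code(dtrain$x_n, dtrain$y)

Now, the vtreat version: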

library(vtreat)
library(wrapr)
xframeResults = mkCrossFrameNExperiment(dtrain, 
                                  qc(x_s, x_n), "y",
                                  codeRestriction = qc(catN), 
                                  verbose = FALSE)
# the plan uses the one-variable models to treat data
treatmentPlan = xframeResults$treatments
# the cross-frame
dtrain_treated = xframeResults$crossFrame

head(dtrain_treated)
##     x_s_catN   x_n_catN           y
## 1 -0.6337889 0.91241547  0.34228110
## 2  0.8342227 0.82874089 -0.03805102
## 3 -0.7020597 0.18198634 -0.92145960
## 4  1.3983175 1.99197404  1.77069352
## 5  1.3983175 0.11679580  0.51992928
## 6  0.8342227 0.06421659  1.04714355
variables = setdiff(colnames(dtrain_treated), "y")

model_X = lm(mk_formula("y", variables), 
             data=dtrain_treated)
summary(model_X)
## 
## Call:
## lm(formula = mk_formula("y", variables), data = dtrain_treated)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.2157 -0.7343  0.0225  0.7483  2.9639 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.04169    0.06745  -0.618    0.537    
## x_s_catN     0.92968    0.06344  14.656   <2e-16 ***
## x_n_catN     0.10204    0.06654   1.533    0.126    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.055 on 242 degrees of freedom
## Multiple R-squared:  0.4753, Adjusted R-squared:  0.471 
## F-statistic: 109.6 on 2 and 242 DF,  p-value: < 2.2e-16

This model correctly determines that x_n (and its one-variable model x_n_catN) does not affect the outcome. We can compare the performance of this model to the naive model on holdout data.
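(A sketch of how such a comparison might be computed, assuming a holdout frame dtest and holdout prediction vectors ypred_naive and ypred_crossval; the names are chosen to match the table below.)

# sketch: holdout performance comparison
rmse = function(y, pred) sqrt(mean((y - pred)^2))
rsquared = function(y, pred) 1 - sum((y - pred)^2)/sum((y - mean(y))^2)

data.frame(
  rmse     = c(rmse(dtest$y, ypred_naive), rmse(dtest$y, ypred_crossval)),
  rsquared = c(rsquared(dtest$y, ypred_naive), rsquared(dtest$y, ypred_crossval)),
  row.names = c("ypred_naive", "ypred_crossval"))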

                   rmse  rsquared
ypred_naive    1.303778 0.2311538
ypred_crossval 1.093955 0.4587089

The correct model has a much smaller root-mean-squared error and a much larger R-squared than the naive model when applied to new data.

An attempted alternative: regularized models

But cross-validation is so complicated. Can’t we just regularize? As we’ll show in the appendix of this article, for a one-variable model, L2-regularization is simply Laplace smoothing. Again, we’ll represent each “coefficient” of the one-variable model as the Laplace smoothed value minus the grand mean.

v_i = ∑_{x_j = s_i} y_j / (count_i + λ) − E[y]

where count_i is the frequency of s_i in the training data, and λ is the smoothing parameter (usually 1). If λ = 1, then the first term on the right is just adding one to the frequency of the level and then taking the “adjusted conditional mean” of y.

Again, let’s show this for the variable x_s.

##     x_s      sum_y count_y   grandmean         vs
## 1  s_01  20.795484      26 -0.05050187  0.8207050
## 2  s_02 -37.302227      27 -0.05050187 -1.2817205
## 3  s_03 -22.199656      28 -0.05050187 -0.7150035
## 4  s_04 -14.016649      17 -0.05050187 -0.7282009
## 5  s_05  19.622340      26 -0.05050187  0.7772552
## 6  s_06   3.129419      20 -0.05050187  0.1995218
## 7  s_07 -35.242672      30 -0.05050187 -1.0863585
## 8  s_08  36.504412      27 -0.05050187  1.3542309
## 9  s_09  33.158549      21 -0.05050187  1.5577086
## 10 s_10 -16.821957      23 -0.05050187 -0.6504130
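(Again, a minimal sketch of the computation, assuming the dtrain frame from above and λ = 1.)

# Laplace-smoothed impact coding: add lambda to the denominator of the
# conditional mean before subtracting the grand mean
lambda = 1
grandmean = mean(dtrain$y)

laplace_code = function(xcol, y, lambda, grandmean) {
  sum_y   = tapply(y, xcol, sum)      # sum of y within each level
  count_y = tapply(y, xcol, length)   # frequency of each level
  sum_y/(count_y + lambda) - grandmean
}

vs_reg = laplace_code(dtrain$x_s, dtrain$y, lambda, grandmean)
vn_reg = laplace_code(dtrain$x_n, dtrain$y, lambda, grandmean)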

After applying the one variable models for x_s and x_n to the data, the head of the resulting treated data looks like this:

##     x_s  x_n           y         vs         vn
## 2  s_10 n_72  0.34228110 -0.6504130 0.44853367
## 3  s_01 n_09 -0.03805102  0.8207050 0.42505898
## 4  s_03 n_18 -0.92145960 -0.7150035 0.02370493
## 9  s_08 n_43  1.77069352  1.3542309 1.28612835
## 10 s_08 n_17  0.51992928  1.3542309 0.21098803
## 11 s_01 n_78  1.04714355  0.8207050 0.61015422

Now fit the overall model:

## 
## Call:
## lm(formula = y ~ vs + vn, data = dtrain_treated)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.30354 -0.57688 -0.02224  0.56799  2.25723 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.06665    0.05637  -1.182    0.238    
## vs           0.81142    0.06203  13.082  < 2e-16 ***
## vn           0.85393    0.09905   8.621  8.8e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.8819 on 242 degrees of freedom
## Multiple R-squared:  0.6334, Adjusted R-squared:  0.6304 
## F-statistic: 209.1 on 2 and 242 DF,  p-value: < 2.2e-16

Again, both variables look significant. Even with regularization, the model is still overfit. Comparing the performance of the models on holdout data, you see that the regularized model does a little better than the naive model, but not as well as the correctly cross-validated model.

                   rmse  rsquared
ypred_naive    1.303778 0.2311538
ypred_crossval 1.093955 0.4587089
ypred_reg      1.267648 0.2731756

The Moral of the Story

Unfortunately, regularization is not enough to overcome nested model bias. Whenever you apply a y-aware process to your data, you have to use cross-validation methods (or a separate data set) at the next stage of your modeling pipeline.

Appendix: Derivation of Laplace Smoothing as L2-Regularization

Without regularization, the optimal one-variable model for y in terms of a categorical variable with K levels {s_j} is a set of K coefficients v such that

f(\mathbf{v}) := \sum\limits_{i=1}^N (y_i - v_i)^2

is minimized, where N is the number of data points and v_i is shorthand for the coefficient attached to the level taken by the ith datum. L2-regularization adds a penalty to the magnitude of v, so that the goal is to minimize

f(\mathbf{v}) := \sum\limits_{i=1}^N (y_i - v_i)^2 + \lambda \sum\limits_{j=1}^K {v_j}^2

where λ is a known smoothing hyperparameter, usually set (in this case) to 1.

To minimize the above expression for a single coefficient v_j, take the derivative with respect to v_j and set it to zero:

\sum\nolimits_{x_i = s_j} -2 (y_i - v_j) + 2 \lambda v_j = 0 \\
\sum\nolimits_{x_i = s_j} -y_i + \sum\nolimits_{x_i = s_j} v_j + \lambda v_j = 0 \\
\sum\nolimits_{x_i = s_j} -y_i + \text{count}_j \, v_j + \lambda v_j = 0

where count_j is the number of times the level s_j appears in the training data. Now solve for v_j:

v_j (\text{count}_j + \lambda) = \sum\nolimits_{x_i = s_j} y_i \\
v_j = \sum\nolimits_{x_i = s_j} y_i / (\text{count}_j + \lambda)

This is Laplace smoothing. Note that it is also the one-variable equivalent of ridge regression.
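(As a quick numeric sanity check, not from the original article, you can minimize the penalized loss for a single level with optimize() and confirm it recovers the closed-form Laplace-smoothed value.)

# verify: argmin_v of sum((y_i - v)^2) + lambda*v^2 equals sum(y_i)/(count + lambda)
set.seed(2019)
y_level = rnorm(7, mean = 0.5)   # made-up observations for a single level
lambda = 1

penalized_loss = function(v) sum((y_level - v)^2) + lambda*v^2
v_numeric = optimize(penalized_loss, interval = c(-10, 10))$minimum
v_closed  = sum(y_level)/(length(y_level) + lambda)

c(v_numeric = v_numeric, v_closed = v_closed)   # these agree (up to optimizer tolerance)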


Free R/datascience Extract: Evaluating a Classification Model with a Spam Filter

We are excited to share a free extract of Zumel, Mount, Practical Data Science with R, 2nd Edition, Manning 2019: Evaluating a Classification Model with a Spam Filter.


This section reflects an important design decision in the book: teach model evaluation first, and as a step separate from model construction.

It is funny, but it takes some effort to teach in this way. New data scientists want to dive into the details of model construction first, and statisticians are used to getting model diagnostics as a side-effect of model fitting. However, to compare different modeling approaches one really needs good model evaluation that is independent of the model construction techniques.

This teaching style has worked very well for us both in R and in Python (it is considered one of the merits of our LinkedIn AI Academy course design):

One of the best data science courses I’ve taken. The course focuses on model selection and evaluation which are usually underestimated. Thanks to John Mount, the teacher and the co-authors of Practical Data Science with R. #AI200

(Note: Nina Zumel leads on the course design, which is the heavy lifting; John Mount just got tasked with being the one delivering it.)

Zumel, Mount, Practical Data Science with R, 2nd Edition is coming out in print very soon. Here is a discount code to help you get a good deal on the book:

Take 37% off Practical Data Science with R, Second Edition by entering fcczumel3 into the discount code box at checkout at manning.com.


AI for Engineers

For the last year we (Nina Zumel and myself, John Mount) have had the honor of teaching the AI200 portion of LinkedIn’s AI Academy.


John Mount at the LinkedIn campus

Nina Zumel designed most of the material, and John Mount has been delivering it and bringing her feedback. We’ve just started our 9th cohort. We adjust the course each time. Our students teach us a lot about how one thinks about data science. We bring that forward to each round of the course.

Roughly the goal is the following.

If every engineer, product manager, and project manager had some hands-on experience with data science and AI (deep neural nets), they would be more likely both to think of using these techniques in their work and to introduce the instrumentation required to have useful data in the first place.

This will have huge downstream benefits for LinkedIn. Our group is thrilled to be a part of this.

We are looking for more companies that want an on-site data science intensive for their teams (either in Python or in R).


vtreat Cross Validation

Nina Zumel finished new documentation on how vtreat’s cross validation works, which I want to share here.

vtreat is a system that makes data preparation for machine learning a “one-liner” (available in R or available in Python). We have a set of starting-off points here. These documents describe what vtreat does for you; just find the one that matches your task, and you should have a good start for solving data science problems in R or in Python.

The latest documentation covers a bit of how vtreat works, and how to control some of the details of the work it is doing for you.

The new documentation is:

Please give one of the examples a try, and consider adding vtreat to your data science workflow.


New vtreat Documentation (Starting with Multinomial Classification)

Nina Zumel finished some great new documentation showing how to use Python vtreat to prepare data for multinomial classification. And I have finally finished porting the documentation to R vtreat. So we now have good introductions on how to use vtreat to prepare data for the common tasks of:

That makes 8 introductions to start with. To use vtreat you only have to work through one introduction (the one matching the task you have at hand, in the language you are using).

As I have said before:

  • vtreat helps with project-blocking issues commonly seen in real-world data: missing values, re-coding categorical variables, and dealing with high-cardinality categorical variables.
  • If you aren’t using a tool like vtreat in your data science projects, you are really missing out (and making more work for yourself).

Why Do We Plot Predictions on the x-axis?

When studying regression models, one of the first diagnostic plots most students learn is the plot of residuals versus the model’s predictions (that is, with the predictions on the x-axis). Here’s a basic example.

# build an "ideal" linear process.
set.seed(34524)
N = 100
x1 = runif(N)
x2 = runif(N)
noise = 0.25*rnorm(N)
y = x1 + x2 + noise
df = data.frame(x1=x1, x2=x2, y=y)

# Fit a linear regression model
model = lm(y~x1+x2, data=df)
summary(model)
## 
## Call:
## lm(formula = y ~ x1 + x2, data = df)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.73508 -0.16632  0.02228  0.19501  0.55190 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.16706    0.07111   2.349   0.0208 *  
## x1           0.90047    0.09435   9.544 1.30e-15 ***
## x2           0.81444    0.09288   8.769 6.07e-14 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.2662 on 97 degrees of freedom
## Multiple R-squared:  0.6248, Adjusted R-squared:  0.6171 
## F-statistic: 80.78 on 2 and 97 DF,  p-value: < 2.2e-16
# plot it
library(ggplot2)

df$pred = predict(model, newdata=df)
df$residual = with(df, y-pred)

# standard residual plot
ggplot(df, aes(x=pred, y=residual)) + 
  geom_point(alpha=0.5) + geom_hline(yintercept=0, color="red") +
  geom_smooth(se=FALSE) + 
  ggtitle("Standard residual plot",
          subtitle = "linear model and process")

Standard Residual Plot

In the above plot, we’re plotting the residuals as a function of model prediction, and comparing them to the line y = 0, using a smoothing curve through the residuals. The idea is that for a well-fit model, the smoothing curve should approximately lie on the line y = 0. This is true not only for linear models, but for any model that captures most of the explainable variance, and for which the unexplainable variance (the noise) is IID and zero mean.

If the residuals aren’t zero mean independently of the model’s predictions, then either you are missing some explanatory variables, or your model does not have the correct structure or an appropriate inductive bias. A simple example of the second case is trying to fit a linear model to a process where the outcome is quadratically (or otherwise non-linearly) related to the inputs. To see this, let’s make an example quadratic system while deliberately failing to supply that structure to the model.

# a simple quadratic example
x3 = runif(N)
qf = data.frame(x1=x1, x2=x2, x3=x3)
qf$y = x1 + x2 + 2*x3^2 + 0.25*noise

# Fit a linear regression model
model2 = lm(y~x1+x2+x3, data=qf)
# summary(model2)

qf$pred = predict(model2, newdata=qf)
qf$residual = with(qf, y-pred)

ggplot(qf, aes(x=pred, y=residual)) + 
  geom_point(alpha=0.5) + geom_hline(yintercept=0, color="red") +
  geom_smooth(se=FALSE) + 
  ggtitle("Standard residual plot",
          subtitle = "linear model, quadratic process")

Standard Residual Plot, quadratic process

In this case, the smoothing line on the residuals doesn’t approximate the line y = 0; when the model predicts a value in the range 1 to about 2.3, it tends to be overpredicting; otherwise, it tends to underpredict. This is an instance of a pathology called “structure in the residuals.”

The Peril of Outcomes on the x-axis

What happens if you erroneously plot the residuals versus the true outcome, instead of the predictions? Let’s try this with the model for the linear process (which we know is a well-fit model):

# the wrong residual graph
ggplot(df, aes(x=y, y=residual)) + 
  geom_point(alpha=0.5) + geom_hline(yintercept=0, color="red") +
  geom_smooth(se=FALSE) + 
  ggtitle("Incorrect residual plot",
          subtitle = "linear model and process")

Incorrect residual plot, linear process

If you make this plot when you meant to make the other, you will give yourself a nasty shock. Plotting residuals versus the outcome will always look more or less like the above graph. You might think that for a good model, the outcome and the prediction are close to each other, so the residual graphs should look about the same no matter which quantity you plot on the x-axis, right? Why do they look so different?

Reversion to mediocrity (or the mean)

One reason that the proper residual graph (for a well-fit model) should smooth out to the line y = 0 is a phenomenon known as reversion to mediocrity, or regression to the mean.

Imagine that you have an ideal process that always produces a single value y. You don’t actually observe this “true value”; instead, what you observe is y plus (IID, zero mean) noise. You can build a “model” for this process that predicts the mean of the observations, in this case the value 0.1033149. Then you can calculate the residuals of your “model” in the usual way.
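(Here is a small sketch of that thought experiment. The seed is arbitrary, so the observed mean will not be exactly the 0.1033149 quoted above.)

# sketch: a "process" whose true value is constant, observed with IID noise
set.seed(1234)
y_obs = 0.1 + 0.25*rnorm(100)   # observations = true value + noise
pred = mean(y_obs)              # the "model": always predict the mean
residual = y_obs - pred

library(ggplot2)

# correct plot: residuals vs prediction -- one vertical stripe centered at 0
ggplot(data.frame(pred = pred, residual = residual),
       aes(x = pred, y = residual)) +
  geom_point(alpha = 0.5) + geom_hline(yintercept = 0, color = "red")

# incorrect plot: residuals vs observed outcome -- a perfect diagonal,
# since residual = y_obs - mean(y_obs) is a linear function of y_obs
ggplot(data.frame(y = y_obs, residual = residual),
       aes(x = y, y = residual)) +
  geom_point(alpha = 0.5) + geom_hline(yintercept = 0, color = "red")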

When you plot the residuals as a function of the prediction, all the datums fall at the same horizontal coordinate of the graph, centered around zero, and approximately equally distributed between positive and negative. The “smoothing line” through this graph is simply the point (0.1033149, 0) – that is, the graph is centered at zero.

Residuals plotted against the prediction (correct)

On the other hand, if you plot the residuals as a function of the observed outcome, all the observations will be sorted so that the observations with positive noise are to the right of the observations with negative noise, and the smoothing line through the graph no longer looks like the line y = 0.

Residuals plotted against the observed outcome (incorrect)

For a process that varies as a function of the input, you can think of the prediction corresponding to an input X as the mean of all the observations corresponding to X, and the idea is the same.

Incidentally, this regression to the mean is also why model predictions tend to have less range than the original training data.
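(You can check this on the earlier well-fit linear model, using the df and pred columns from the first example.)

# the predictions are less spread out than the observed outcomes
sd(df$pred); sd(df$y)        # sd of the predictions is smaller
range(df$pred); range(df$y)  # and their range is narrower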

Plotting observations versus predictions

Sometimes instead of plotting residuals versus the predictions, I plot observations versus predictions. In this case, you want to check that the predictions lie approximately on the line y = x. This isn’t a standard diagnostic plot, but it does give a better sense of the magnitude of the errors relative to the magnitudes of the outcomes. Again, the important thing to remember is that the predictions go on the x-axis.

Here’s the correct plot:

# standard prediction plot
ggplot(df, aes(x=pred, y=y)) + 
  geom_point(alpha=0.5) + geom_abline(color="red") + 
  geom_smooth(se=FALSE) + 
  ggtitle("Standard prediction plot")

standard prediction plot

And here’s the wrong plot:

# the "wrong" way
ggplot(df, aes(x=y, y=pred)) + 
  geom_point(alpha=0.5) + geom_abline(color="red") +
  geom_smooth(se=FALSE) + 
  ggtitle("Incorrect prediction plot")

wrong prediction plot

Notice how the wrong plot again seems to show pathological structure where none exists.

Conclusion

The above examples show why you should always take care to plot your model diagnostics as functions of the predictions, and not of the observations. Most students have heard this already, but we feel that demonstrating why will be more memorable than simply saying “make it so.”