
Is R base::subset() really that bad?

[Image: The Hitchhiker's Guide to the Galaxy]



R Tip: Force Named Arguments

R tip: force the use of named arguments when designing function signatures.

R’s named function argument binding is a great aid in writing correct programs. It is a good idea, if practical, to force optional arguments to be usable only by name. To do this, declare the additional arguments after “...” and confirm that none were accidentally captured by the “... trap”, using a checker such as wrapr::stop_if_dot_args().

Example:

#' Increment x by inc.
#' 
#' @param x item to add to
#' @param ... not used for values, forces later arguments to bind by name
#' @param inc (optional) value to add
#' @return x+inc
#'
#' @examples
#'
#' f(7) # returns 8
#'
f <- function(x, ..., inc = 1) {
   wrapr::stop_if_dot_args(substitute(list(...)), "f")
   x + inc
}

f(7)
#> [1] 8

f(7, inc = 2)
#> [1] 9


f(7, q = mtcars)
#> Error: f unexpected arguments: q = mtcars

f(7, 2)
#> Error: f unexpected arguments: 2 

By R’s function evaluation rules, any unexpected/undeclared arguments are captured by the “...” argument. “wrapr::stop_if_dot_args()” then inspects “...” for such values and throws an error if it finds any. The "f" string is included in the error message; here I chose the name of the function. The “substitute(list(...))” part is R’s way of making the contents of “...” available for inspection.

You can also use the technique on required arguments. wrapr::stop_if_dot_args() is a simple low-dependency helper function intended to make writing code such as the above easier. This is under the rubric that hidden errors are worse than thrown exceptions. It is best to find and signal problems early, and near the cause.

The idea is that you should not expect a user to remember the positions of more than 1 to 3 arguments; the rest should be referable only by name. Do not make your users count along long sequences of arguments; the human brain may have special cases only for small sequences.

If you have a procedure with 10 parameters, you probably missed some.

Alan Perlis, “Epigrams on Programming”, ACM SIGPLAN Notices 17 (9), September 1982, pp. 7–13.

Note that the “substitute(list(...))” part is the standard R idiom for capturing the unevaluated contents of “...”; I felt it best to use standard R as much as possible instead of introducing additional magic invocations.
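
For illustration, here is the idiom in isolation (a toy function, not part of wrapr):

g <- function(...) {
   # capture the unevaluated contents of "..."
   substitute(list(...))
}

g(a = 1, b)
#> list(a = 1, b)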


R Tip: Use [[ ]] Wherever You Can

R tip: use [[ ]] wherever you can.

In R, [[ ]] is the operator that (when supplied a simple scalar argument) pulls a single element out of lists (while the [ ] operator pulls out sub-lists).

For vectors [[ ]] and [ ] appear to be synonyms (modulo the issue of names). However, for a vector [[ ]] checks that the indexing argument is a scalar, so if you intend to retrieve one element this is a good way of getting an extra check and documenting intent. Also, when writing reusable code you may not always be sure if your code is going to be applied to a vector or list in the future.

It is safer to get into the habit of always using [[ ]] when you intend to retrieve a single element.

Example with lists:

list("a", "b")[1]
#> [[1]]
#> [1] "a"

list("a", "b")[[1]]
#> [1] "a"

Example with vectors:

c("a", "b")[1]
#> [1] "a"

c("a", "b")[[1]]
#> [1] "a"
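
The extra checking is the point: if an index accidentally has length greater than one, [ ] silently returns multiple elements, while [[ ]] signals an error (the exact message varies with R version):

c("a", "b")[c(1, 2)]
#> [1] "a" "b"

c("a", "b")[[c(1, 2)]]
#> Error: attempt to select more than one element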

The idea is: in situations where both [ ] and [[ ]] apply we rarely see [[ ]] being the worse choice.


Note on this article series.

This R tips series is a collection of short, simple notes on R best practices and additional packaged tools. The intent is to show both how to perform common tasks and how to avoid common pitfalls. I hope to share about 20 of these, roughly every other day, to learn from the community which issues resonate, and also to introduce some features from some of our packages. It is an opinionated series that will sometimes touch on coding style and also try to showcase appropriate Win-Vector LLC R tools.


R Tip: Use qc() For Fast Legible Quoting

Here is an R tip. Need to quote a lot of names at once? Use qc().
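
qc() is “quoting concatenate”: it behaves like c(), but quotes its arguments for you, returning a character vector:

library("wrapr")

qc(mpg, cyl, wt)
#> [1] "mpg" "cyl" "wt"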

This is particularly useful in selecting columns from data.frames:

library("wrapr")  # get qc() definition

head(mtcars[, qc(mpg, cyl, wt)])

#                    mpg cyl    wt
# Mazda RX4         21.0   6 2.620
# Mazda RX4 Wag     21.0   6 2.875
# Datsun 710        22.8   4 2.320
# Hornet 4 Drive    21.4   6 3.215
# Hornet Sportabout 18.7   8 3.440
# Valiant           18.1   6 3.460

Or even to install many packages at once:

install.packages(qc(vtreat, cdata, WVPlots))
# shorter than the alternative:
#  install.packages(c("vtreat", "cdata", "WVPlots"))

We Want to be Playing with a Moderate Number of Powerful Blocks

Many data scientists (and even statisticians) often suffer under one of the following misapprehensions:

  • They believe a technique doesn’t work in their current situation (when in fact it does), leading to useless precautions and missed opportunities.
  • They believe a technique does work in their current situation (when in fact it does not), leading to failed experiments or incorrect results.

I feel this happens less often if you are working with observable and composable tools of the proper scale. Somewhere between monolithic all-in-one systems and ad-hoc one-off coding there is a cognitive sweet spot where great work can be done.



Is 10,000 Cells Big?

Trick question: is a 10,000 cell numeric data.frame big or small?

In the era of "big data" 10,000 cells is minuscule. Such data could fit on fewer than 1,000 punched cards (less than half a box).
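
A quick way to confirm how small this is in memory (the exact size varies slightly by R version):

d <- as.data.frame(matrix(runif(10000), ncol = 10))
object.size(d)
#> about 80 kilobytes: 10,000 doubles at 8 bytes each, plus overhead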


[Image: a punched card]

The joking answer is: it is small when they are selling you the system, but can be considered unfairly large later.



Why No Exact Permutation Tests at Scale?

Here at Win-Vector LLC we like permutation tests. Our team has written on them (for example: How Do You Know if Your Data Has Signal?) and they are used to estimate significances in our sigr and WVPlots R packages. For example, permutation methods are used to estimate the significance reported in the following ROC plot.

[Figure: ROC plot with permutation-estimated significance]

Permutation tests have their own literature and issues (see, for example: Permutation, Parametric and Bootstrap Tests of Hypotheses, Springer-Verlag, NY, 1994; 3rd edition, 2005, among other references).

In our R packages the permutation tests are estimated by a sampling procedure, and not computed exactly (or deterministically). It turns out this is likely a necessary concession; a complete exact permutation test procedure at scale would be big news. Please read on for my comments on this issue.
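
To make “estimated by a sampling procedure” concrete, here is a minimal Monte Carlo permutation test for a difference in group means (a toy sketch, not the sigr implementation):

set.seed(2018)
x <- rnorm(100, mean = 0.2)
y <- rnorm(100)
observed <- mean(x) - mean(y)
pooled <- c(x, y)
n <- length(x)
# permute the group labels many times, recomputing the test statistic
perm_stats <- replicate(10000, {
   idx <- sample.int(length(pooled), n)
   mean(pooled[idx]) - mean(pooled[-idx])
})
# estimated two-sided significance: fraction of permutations at least as extreme
mean(abs(perm_stats) >= abs(observed))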



Supercharge your R code with wrapr

I would like to demonstrate some helpful wrapr R notation tools that really neaten up your R code.
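
As a small taste (the full post covers more; this particular pipeline is my own toy example), here is wrapr’s dot-arrow pipe %.>%, which writes computations left to right with an explicit dot for the argument:

library("wrapr")

c(4, 9, 16) %.>% sqrt(.) %.>% sum(.)
#> [1] 9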


[Image: a blown and tubbed 1968 AMX. Img: Christopher Ziemnowicz.]



Latest vtreat up on CRAN

There is a new version of the R package vtreat now up on CRAN.

vtreat is an essential data preparation system for predictive modeling; it helps defend your modeling work against real-world data issues, including:

  • High cardinality categorical variables
  • Rare levels (including new or novel levels during application) in categorical variables
  • Missing data (random or systematic)
  • Irrelevant variables/columns
  • Nested model bias, and other over-fit issues.

vtreat also includes excellent, citable documentation: vtreat: a data.frame Processor for Predictive Modeling.
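
As a sense of the basic workflow (a minimal sketch; see the documentation above for proper usage, such as designing treatments on separate calibration data), one designs treatments from data and then prepares frames for modeling:

library("vtreat")

# toy data: a categorical variable with a missing value, numeric outcome y
d <- data.frame(x = c("a", "a", "b", "b", NA, "c"),
                y = c(1, 1, 2, 2, 3, 3))

# design variable treatments for the numeric outcome y
treatments <- designTreatmentsN(d, varlist = "x", outcomename = "y")

# produce an all-numeric, NA-free frame ready for modeling
d_treated <- prepare(treatments, d)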

For this release I want to thank everybody who generously donated their time to submit an issue or contribute a pull request. In particular:

  • Vadim Khotilovich, who found and fixed a major performance problem in the y-stratified sampling.
  • Lawrence Wu, who has been donating documentation fixes.
  • Peter Hurford, who has been donating documentation fixes.

Data Reshaping with cdata

I’ve just shared a short webcast on data reshaping in R using the cdata package.

(link)
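
For a quick taste of the kind of reshaping discussed (a sketch assuming a recent cdata version):

library("cdata")

d <- data.frame(id = c(1, 2),
                AUC = c(0.6, 0.8),
                R2 = c(0.2, 0.3))

# move the measurement columns into rows ("wide" to "tall")
d_tall <- unpivot_to_blocks(d,
                            nameForNewKeyColumn = "measure",
                            nameForNewValueColumn = "value",
                            columnsToTakeFrom = c("AUC", "R2"))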

We also have two really nifty articles on the theory and methods:

Please give it a try!

This is the material I recently presented at the January 2017 BARUG Meetup.
