R Tip: be wary of “...”.
The following code example contains an easy error in using the R function unique().
vec1 <- c("a", "b", "c")
vec2 <- c("c", "d")
unique(vec1, vec2)
#  "a" "b" "c"
Notice none of the novel values from vec2 are present in the result. Our mistake was that we (improperly) tried to use unique() with multiple value arguments, as one would use union(). Also notice no error or warning was signaled: we used unique() incorrectly and nothing pointed this out to us. What compounded our error was R's “...” function signature feature.
In this note I will talk a bit about how to defend against this kind of mistake. I am going to apply the principle that a design that makes committing mistakes more difficult (or even impossible) is a good thing, and not a sign of carelessness, laziness, or weakness. I am well aware that every time I admit to making a mistake (I have indeed made the above mistake) those who claim to never make mistakes have a laugh at my expense. Honestly I feel the reason I see more mistakes is I check a lot more.
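As the text notes, the function that actually combines values from multiple vectors is union(). A minimal sketch of the corrected call:

```r
vec1 <- c("a", "b", "c")
vec2 <- c("c", "d")

# union() is the set-combining function we wanted;
# it merges both vectors and drops duplicates.
union(vec1, vec2)
#  "a" "b" "c" "d"
```

Unlike unique(), union() treats both arguments as value carriers, so none of vec2 is silently lost.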
Continue reading R Tip: Be Wary of “…”
R Tip: use isTRUE().
A lot of R functions are type unstable, which means they return different types or classes depending on details of their values.
For example, consider all.equal(): it returns the logical value TRUE when the items being compared are equal:
all.equal(1:3, c(1, 2, 3))
#  TRUE
However, when the items being compared are not equal, all.equal() instead returns a message:
all.equal(1:3, c(1, 2.5, 3))
#  "Mean relative difference: 0.25"
This can be inconvenient when using functions similar to all.equal() as tests in if()-statements and other program control structures.
The saving function is isTRUE(). isTRUE() returns TRUE if its argument value is equivalent to TRUE, and returns FALSE otherwise. Having such a reliable predicate makes R programming much easier.
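A small sketch of the pattern: wrapping the type-unstable all.equal() result in isTRUE() makes it safe to use as an if() condition.

```r
# all.equal() returns a character message here, not FALSE,
# so using it directly as an if() condition would fail
comparison <- all.equal(1:3, c(1, 2.5, 3))

# isTRUE() maps "exactly TRUE" to TRUE and everything else to FALSE
if (isTRUE(comparison)) {
  print("vectors are equal")
} else {
  print("vectors differ")
}
#  "vectors differ"
```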
Continue reading R Tip: use isTRUE()
R tip: use slices.
R has a very powerful array slicing ability that allows for some very slick data processing.
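The original post's examples are not reproduced here; as a small illustration of the idea, matrix slicing with [ , ] selects whole rows, columns, or sub-matrices in one step:

```r
m <- matrix(1:6, nrow = 2)   # 2 rows, 3 columns, filled column-wise

m[1, ]        # first row: 1 3 5
m[, 2]        # second column: 3 4
m[, c(1, 3)]  # a 2x2 sub-matrix holding columns 1 and 3
```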
Continue reading R Tip: Use Slices
R tip: first organize your tasks in terms of data, values, and desired transformations of values, not in terms of concrete functions or code.
I know I write a lot about coding in R. But it is in the service of supporting statistics, analysis, predictive analytics, and data science.
R without data is like going to the theater to watch the curtain go up and down.
(Adapted from Ben Katchor’s Julius Knipl, Real Estate Photographer: Stories, Little, Brown, and Company, 1996, page 72, “Excursionist Drama 2”.)
Usually you come to R to work with data. If you think and plan in terms of data and values (including introducing more data to control processing), you will usually work in a much faster, more explainable, and maintainable fashion.
Continue reading R Tip: Think in Terms of Values
Here is an R tip. Want to re-map a column of values? Use a named vector as the mapping.
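A minimal sketch of the technique (the example values are my own, not from the original post): the names of the vector are the old values, and the entries are the new values.

```r
# mapping: old value -> new value
map <- c("a" = "Apple", "b" = "Banana", "c" = "Cherry")

x <- c("a", "c", "a", "b")

# indexing the named vector by value performs the re-mapping;
# unname() drops the carried-over names from the result
unname(map[x])
#  "Apple" "Cherry" "Apple" "Banana"
```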
Continue reading R Tip: Use Named Vectors to Re-Map Values
Another R tip. Need to replace a name in some R code or make R code re-usable? Use let().
Continue reading R Tip: Use let() to Re-Map Names
R tip: use stringsAsFactors = FALSE.
R often uses a concept of factors to re-encode strings. This can be too early and too aggressive. Sometimes a string is just a string.
It is often claimed Sigmund Freud said “Sometimes a cigar is just a cigar.”
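A quick demonstration (note: starting with R 4.0.0 the default is already stringsAsFactors = FALSE, so the explicit flag matters most on earlier R versions):

```r
# with stringsAsFactors = TRUE (the pre-R-4.0.0 default),
# character columns are silently converted to factors
d1 <- data.frame(x = c("a", "b", "c"), stringsAsFactors = TRUE)
class(d1$x)
#  "factor"

# with stringsAsFactors = FALSE the strings stay strings
d2 <- data.frame(x = c("a", "b", "c"), stringsAsFactors = FALSE)
class(d2$x)
#  "character"
```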
Continue reading R Tip: Use stringsAsFactors = FALSE
If you are working with predictive modeling or machine learning in R, this is the R tip that is going to save you the most time and deliver the biggest improvement in your results.
R Tip: Use the vtreat package for data preparation in predictive analytics and machine learning projects.
When attempting predictive modeling with real-world data you quickly run into difficulties beyond what is typically emphasized in machine learning coursework:
- Missing, invalid, or out of range values.
- Categorical variables with large sets of possible levels.
- Novel categorical levels discovered during test, cross-validation, or model application/deployment.
- Large numbers of columns to consider as potential modeling variables (both statistically hazardous and time consuming).
- Nested model bias poisoning results in non-trivial data processing pipelines.
Any one of these issues can add to project time and decrease the predictive power and reliability of a machine learning project. Many real-world projects encounter all of these issues, and they are often ignored, leading to degraded performance in production.
vtreat systematically and correctly deals with all of the above issues in a documented, automated, parallel, and statistically sound manner.
vtreat can fix or mitigate these domain independent issues much more reliably and much faster than by-hand ad-hoc methods.
This leaves the data scientist or analyst more time to research and apply critical domain dependent (or knowledge based) steps and checks.
If you are attempting high-value predictive modeling in R, you should try out vtreat and consider adding it to your workflow.
Continue reading R Tip: Use the vtreat Package For Data Preparation