R users have been enjoying the benefits of
SQL query generators for quite some time, most notably using the
dbplyr package. I would like to talk about some features of our own
rquery query generator, concentrating on derived result re-use.
In our cdata R package and training materials we emphasize record-oriented thinking and how to design a transform control table. We now have an exciting new feature: control table keys.
The user can now control which columns of a
cdata control table are the keys, including composite keys (that is, keys spread across more than one column). This is easiest to demonstrate with an example.
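As a minimal sketch (assuming cdata's blocks_to_rowrecs() and its controlTableKeys argument; the column names and data here are illustrative, not from the original article):

```r
library(cdata)

# block-structured data: each record cell is keyed by both
# measurement and group (a composite key)
d <- data.frame(
  id          = c(1, 1, 1, 1),
  measurement = c("height", "height", "weight", "weight"),
  group       = c("before", "after", "before", "after"),
  value       = c(1.2, 1.3, 5.0, 4.8)
)

# control table: the first two columns together form the composite key;
# the "value" column names the wide result columns
control_table <- data.frame(
  measurement = c("height", "height", "weight", "weight"),
  group       = c("before", "after", "before", "after"),
  value       = c("height_before", "height_after",
                  "weight_before", "weight_after"),
  stringsAsFactors = FALSE
)

# declare which control table columns are the keys
wide <- blocks_to_rowrecs(
  d,
  keyColumns       = "id",
  controlTable     = control_table,
  controlTableKeys = c("measurement", "group")
)
```

The intent is that the composite key (measurement, group) jointly identifies each cell of a record, which a single key column could not do.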
Composing functions and sequencing operations are core programming concepts.
Some notable realizations of sequencing or pipelining operations include:
- CMS Pipelines.
- F#'s forward pipe operator.
- Haskell's Data.Function (&).
The idea is: many important calculations can be considered as a sequence of transforms applied to a data set. Each step may be a function taking many arguments. It is often the case that only one of each function’s arguments is primary, and the rest are parameters. For data science applications this is particularly common, so having convenient pipeline notation can be a plus. An example of a non-trivial data processing pipeline can be found here.
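The idea can be sketched in R's native pipe notation (|>, available in R 4.1 and later; the data and steps here are illustrative):

```r
# each step's primary argument is the data flowing through the pipe;
# the remaining arguments are parameters of that step
d <- data.frame(x = c(5, 1, 4, 2, 3))

result <- d |>
  transform(y = x * 2) |>   # derive a new column
  subset(y > 4) |>          # keep only some rows
  head(n = 2)               # take the first two remaining rows

print(result)
```

Written without the pipe, the same calculation would be a nested expression, head(subset(transform(d, ...), ...), ...), which reads inside-out rather than in execution order.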
One of the design goals of the cdata
R package is that very powerful and arbitrary record transforms should be convenient and take only one or two steps. In fact, the goal is to take just about any record shape to any other in two steps: first convert to row-records, then re-block the data into an arbitrary record shape (please see here and here for the concepts).
But as with all general ideas, it is much easier to see what we mean by the above with a concrete example.
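The two-step pattern can be sketched as follows (assuming the cdata functions blocks_to_rowrecs() and rowrecs_to_blocks(); the data and shape descriptions are illustrative):

```r
library(cdata)

# data in a "tall" block shape: one row per (id, measurement)
d <- data.frame(
  id          = c(1, 1, 2, 2),
  measurement = c("AUC", "R2", "AUC", "R2"),
  value       = c(0.7, 0.4, 0.8, 0.5)
)

# control table describing the incoming block shape
incoming_shape <- data.frame(
  measurement = c("AUC", "R2"),
  value       = c("AUC", "R2"),
  stringsAsFactors = FALSE
)

# step 1: blocks -> row-records (one row per id, columns AUC and R2)
row_recs <- blocks_to_rowrecs(
  d, keyColumns = "id", controlTable = incoming_shape)

# control table describing the desired outgoing block shape
outgoing_shape <- data.frame(
  metric = c("area under curve", "R squared"),
  score  = c("AUC", "R2"),
  stringsAsFactors = FALSE
)

# step 2: row-records -> the new block shape
blocks <- rowrecs_to_blocks(
  row_recs, controlTable = outgoing_shape, columnsToCopy = "id")
```

The row-record form acts as a universal intermediate: any block shape can be reached from it with a single control table.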
To make teaching
R quasi-quotation easier it would be nice if
R string-interpolation and quasi-quotation both used the same notation. They are related concepts, so a common notation would be clarifying and would help in teaching both. We will define each term, and demonstrate the relation between the two concepts.
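The distinction can be illustrated in base R alone (using sprintf() for string interpolation and bquote() for quasi-quotation; the packages discussed supply their own, richer notations):

```r
nm <- "x"

# string interpolation: substitute a value into a string
sprintf("mean of %s", nm)         # the string "mean of x"

# quasi-quotation: substitute a value into an unevaluated expression
expr <- bquote(mean(.(as.name(nm))))
expr                              # the expression mean(x)
eval(expr, list(x = c(1, 2, 3)))  # evaluates to 2
```

In both cases we splice a computed value into a template; the difference is whether the template is text or code.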
R Tip: use inline operators for legibility.
Consider, for example, Python's + operator:

- It concatenates lists: [1, 2] + [3] is [1, 2, 3].
- It concatenates strings: 'a' + 'b' is 'ab'.
- And, of course, it adds numbers: 1 + 2 is 3.
The inline notation is very convenient and legible. In this note we will show how to use a related notation in R.
While working on a variation of the
RcppDynProg algorithm we derived the following beautiful identity of 2 by 2 real matrices:
Here the superscript "top" denotes the transpose operation, ||.||^2_2 denotes the sum-of-squares norm, and the single |.| denotes the determinant.
This means I can time the exact same algorithm implemented nearly identically in each of these three languages. So I can extract some comparative “apples to apples” timings. Please read on for a summary of the results.
One often hears that
R cannot be fast (false), or more correctly, that for fast code in
R you may have to consider "vectorizing."
A lot of knowledgeable
R users are not comfortable with the term “vectorize”, and not really familiar with the method.
“Vectorize” is just a slightly high-handed way of saying:
R naturally stores data in columns (or in column-major order), so if you are not coding to that pattern you are fighting the language.
In this article we will make the above clear by working through a non-trivial example of writing vectorized code.
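As a small warm-up (not the worked example itself), compare an explicit element-by-element loop to its vectorized equivalent:

```r
# non-vectorized: an explicit loop over elements
slow_sq_dist <- function(x, y) {
  total <- 0
  for (i in seq_along(x)) {
    total <- total + (x[i] - y[i])^2
  }
  total
}

# vectorized: operate on whole columns at once, letting R's
# primitives do the looping in compiled code
fast_sq_dist <- function(x, y) {
  sum((x - y)^2)
}

x <- runif(1e5)
y <- runif(1e5)
stopifnot(isTRUE(all.equal(slow_sq_dist(x, y), fast_sq_dist(x, y))))
```

Both compute the same quantity; the vectorized form is both shorter and much faster, because the per-element work happens below the interpreter.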
RcppDynProg is a new
R package that implements simple, but powerful, table-based dynamic programming. This package can be used to optimally solve the minimum cost partition into intervals problem (described below) and is useful in building piecewise estimates of functions (shown in this note).
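A minimal sketch of the intended use, assuming RcppDynProg exports a solve_for_partition() function taking x, y, and a per-segment penalty (please check the package documentation for the exact interface):

```r
library(RcppDynProg)

set.seed(2019)
x <- seq(0, 10, by = 0.05)
y <- sin(x) + rnorm(length(x), sd = 0.2)

# find an optimal partition of x into intervals, trading fit
# quality against a per-segment penalty
parts <- solve_for_partition(x, y, penalty = 1)
print(parts)
```

The penalty controls the trade-off: a larger penalty yields fewer, longer intervals in the returned partition.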