Nina Zumel and I have two new tutorials on fluid data wrangling/shaping. They are written in a parallel structure, with the R version of the tutorial nearly identical to the Python version. Please check them out!
This reflects our opinion on the “which is better for data science, R or Python?” question: both are great. So start with one, and expect to eventually work with both (if you are lucky).
For quite a while we have been teaching that estimating variable re-encodings on the exact same data one later naively uses to train a model leads to an undesirable nested model bias. The vtreat package (both the R version and the Python version) incorporates a cross-frame method that allows one to use all the training data both to learn variable re-encodings and to correctly train a subsequent model (for an example, please see our recent PyData LA talk).
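For concreteness, here is a minimal sketch of the cross-frame pattern using the Python version of vtreat. The synthetic data is invented for illustration; please see the package documentation for the full API.

```python
import numpy as np
import pandas as pd
import vtreat

# Invented synthetic training data: one categorical variable, binary outcome.
n = 500
d = pd.DataFrame({"x_cat": np.random.choice(["a", "b", "c"], size=n)})
d["y"] = np.where(d["x_cat"] == "a", 0.7, 0.3) > np.random.uniform(size=n)

# Build a treatment plan for a binary classification problem.
plan = vtreat.BinomialOutcomeTreatment(outcome_name="y", outcome_target=True)

# fit_transform() returns a cross-validated "cross frame": each row's
# re-encodings come from models that did not see that row, so this same
# data can safely be used to fit a downstream model.
cross_frame = plan.fit_transform(d, d["y"])
```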
The next version of vtreat will warn the user if they have improperly used the same data for both vtreat impact code inference and downstream modeling. So in addition to us warning you not to do this, the package now also checks for and warns against this situation. vtreat has had methods for avoiding nested model bias for a very long time; we are now adding warnings to confirm users are actually using them.
Set up the Example
This example is excerpted from some of our classification documentation.
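We won't reproduce that documentation here; instead, here is an invented stand-in for that kind of setup, followed by the naive re-use pattern the new version is designed to warn about (column names and data are hypothetical).

```python
import numpy as np
import pandas as pd
import vtreat

# Invented stand-in for the documentation's synthetic classification data.
n = 1000
d = pd.DataFrame({
    "x_num": np.random.normal(size=n),
    "x_cat": np.random.choice(["a", "b", "c", "d"], size=n),
})
d["y"] = (d["x_num"] + np.where(d["x_cat"] == "a", 1.0, 0.0)
          + np.random.normal(size=n)) > 0.5

plan = vtreat.BinomialOutcomeTreatment(outcome_name="y", outcome_target=True)

# The naive (biased) pattern: fit the re-encodings on d, then transform
# the very same d and train a model on the result. This is the situation
# the upcoming version of vtreat checks for and warns about.
plan.fit(d, d["y"])
d_treated = plan.transform(d)  # expect a warning here

# The correct pattern uses the cross frame instead:
# d_treated = plan.fit_transform(d, d["y"])
```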
I’d like to share some new timings on a grouped in-place aggregation task. A client of mine was seeing some slow performance, so I decided to time a very simple abstraction of one of the steps of their workflow.
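The client's actual workflow is not shown here; as a rough stand-in, a grouped in-place aggregation in Pandas (attaching a per-group summary back onto every row) can be timed along these lines, with sizes and column names invented.

```python
import timeit
import numpy as np
import pandas as pd

# Invented stand-in for the task: many rows, a moderate number of groups.
n = 1_000_000
d = pd.DataFrame({
    "g": np.random.randint(0, 10_000, size=n),  # group id
    "v": np.random.normal(size=n),              # value to aggregate
})

def grouped_in_place_max(d):
    # "In place" here means the result keeps one row per input row,
    # with each group's max attached to every row of that group.
    d = d.copy()
    d["v_max"] = d.groupby("g")["v"].transform("max")
    return d

print(timeit.timeit(lambda: grouped_in_place_max(d), number=5))
```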
I’ve been writing a lot about category theory interpretations of data-processing pipelines and some of the improvements we feel they are driving in both the data_algebra and rquery packages. I think I’ve found an even better category theory re-formulation of the package, which I will describe here.
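As a small example of the kind of pipeline being interpreted: data_algebra operators compose into a single pipeline value, which is the sort of structure the category theory discussion concerns. A sketch follows, with invented data and with method names per our reading of the package documentation (they may differ across versions).

```python
import pandas as pd
from data_algebra.data_ops import describe_table

d = pd.DataFrame({"x": [1, 2, 3], "g": ["a", "a", "b"]})

# Operators compose into one pipeline object; nothing executes
# until the pipeline is applied to data.
ops = (
    describe_table(d, table_name="d")
    .extend({"x2": "x * x"})
    .project({"x2_sum": "x2.sum()"}, group_by=["g"])
)

print(ops.transform(d))
```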
In our recent note What is new for rquery December 2019 we mentioned an ugly processing pipeline that translates into SQL of varying size/quality depending on the query generator we use. In this note we try a near-relative of that query in the data_algebra.
The data_algebra package is a query generator that can act on either Pandas data frames or on SQL tables. This is discussed on the project site and in the examples directory. In this note we will set up some technical terminology that will allow us to discuss some of the underlying design decisions. These are things that, when done well, the user doesn’t have to think much about. Discussing such design decisions at length can obscure some of their charm, but we would like to point out some features here.
Slides from my PyData2019 data_algebra lightning talk are here.