Posts Tagged ‘Mathematical Bedside Reading’

Kernel Methods and Support Vector Machines de-Mystified

October 7th, 2011

We give a simple explanation of the interrelated machine learning techniques called kernel methods and support vector machines. We hope to characterize and de-mystify some of the properties of these methods. To do this we work some examples and draw a few analogies. The familiar, no matter how wonderful, is not perceived as mystical.
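
As a hedged taste of the central idea (this is a sketch of mine, not the post's worked examples): a kernel evaluates an inner product in a feature space without ever constructing the feature vectors. Below, the degree-2 polynomial kernel on two-dimensional inputs agrees with an explicit feature map; all names and numbers are invented for illustration.

```scala
// Minimal sketch of the "kernel trick" (illustrative, assumed setup).
// For x, y in R^2 the polynomial kernel k(x, y) = (x . y)^2 equals the
// ordinary inner product <phi(x), phi(y)> for the explicit feature map
// phi(x) = (x1^2, sqrt(2) x1 x2, x2^2), without ever building phi.
object KernelDemo extends App {
  def dot(a: Array[Double], b: Array[Double]): Double =
    a.zip(b).map { case (u, v) => u * v }.sum

  // explicit degree-2 feature map
  def phi(x: Array[Double]): Array[Double] =
    Array(x(0) * x(0), math.sqrt(2) * x(0) * x(1), x(1) * x(1))

  // the kernel: same value, computed in the original space
  def k(x: Array[Double], y: Array[Double]): Double =
    math.pow(dot(x, y), 2)

  val x = Array(1.0, 2.0)
  val y = Array(3.0, 0.5)
  println(dot(phi(x), phi(y))) // 16.0, via explicit features
  println(k(x, y))             // 16.0, via the kernel alone
}
```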

What Did Theorists Do Before The Age Of Big Data?

August 2nd, 2010

We have been living in the age of “big data” for some time now. This is an age where incredible things can be accomplished through the effective application of statistics and machine learning at large scale (see, for example, “The Unreasonable Effectiveness of Data,” Alon Halevy, Peter Norvig, and Fernando Pereira, IEEE Intelligent Systems (2009)). But I have been thinking about the period before this: the period before we had easy access to so much data, before most computation was aggregation, and before we accepted numerical-analysis-style convergence as “efficient.” A small problem I needed to solve (as part of a bigger project) reminded me what theoretical computer scientists did then: we worried about provable worst-case efficiency.

Gradients via Reverse Accumulation

July 14th, 2010

We extend the ideas from Automatic Differentiation with Scala to include reverse accumulation. Reverse accumulation is a non-obvious improvement to automatic differentiation that can in many cases vastly speed up the calculation of gradients.
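
The post develops this in Scala; as a minimal hedged sketch of the reverse-accumulation idea (not the post's code), each intermediate value records its local partial derivatives, and one backward sweep from the output accumulates the whole gradient, however many inputs there are.

```scala
// Minimal reverse-accumulation sketch (illustrative, assumed design).
// Each node stores its value plus (child, local partial) edges; backward()
// pushes adjoints from the output toward the inputs.
class Dual(val value: Double) {
  var grad: Double = 0.0
  var children: List[(Dual, Double)] = Nil

  def +(that: Dual): Dual = {
    val out = new Dual(value + that.value)
    out.children = List((this, 1.0), (that, 1.0))
    out
  }

  def *(that: Dual): Dual = {
    val out = new Dual(value * that.value)
    out.children = List((this, that.value), (that, this.value))
    out
  }

  // Simple recursive sweep; a production version would process nodes in
  // reverse topological order to avoid re-walking shared subexpressions.
  def backward(adjoint: Double = 1.0): Unit = {
    grad += adjoint
    for ((child, partial) <- children) child.backward(adjoint * partial)
  }
}

object ReverseDemo extends App {
  val x = new Dual(2.0)
  val y = new Dual(3.0)
  val f = x * y + x
  f.backward()
  println(x.grad) // df/dx = y + 1 = 4
  println(y.grad) // df/dy = x     = 2
}
```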

“Easy” Portfolio Allocation

January 14th, 2010

This is an elementary mathematical finance article. This means that if you know some math (linear algebra, differential calculus) you can find a quick solution to a simple finance question. The topic was inspired by a recent article in The American Mathematical Monthly (Volume 117, Number 1, January 2010, pp. 3–26): “Finding Good Bets in the Lottery, and Why You Shouldn’t Take Them” by Aaron Abrams and Skip Garibaldi, which remarked that optimal asset allocation is now an undergraduate exercise. That may well be, but there are a lot of people with very deep mathematical backgrounds who have yet to see this. We will fill in the details here. The style is terse, but the content should be about what you would expect from one day of lecture in a mathematical finance course.
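
As a hedged illustration of the kind of exercise meant (one standard formulation, not necessarily the one worked in the post): maximizing expected return minus a quadratic risk penalty, mu'w - (lambda/2) w'(Sigma)w, has the closed-form solution w = (1/lambda) Sigma^(-1) mu. The two-asset sketch below uses invented numbers.

```scala
// Unconstrained mean-variance allocation, two assets (illustrative numbers).
// Solution: w = (1/lambda) * Sigma^{-1} * mu; the 2x2 inverse is done by hand.
object AllocationDemo extends App {
  val mu = Array(0.08, 0.05) // expected excess returns (invented)
  val sigma = Array(          // return covariance matrix (invented)
    Array(0.04, 0.01),
    Array(0.01, 0.02))
  val lambda = 2.0            // risk-aversion parameter (invented)

  val det = sigma(0)(0) * sigma(1)(1) - sigma(0)(1) * sigma(1)(0)
  val inv = Array(
    Array( sigma(1)(1) / det, -sigma(0)(1) / det),
    Array(-sigma(1)(0) / det,  sigma(0)(0) / det))

  val w = Array(
    (inv(0)(0) * mu(0) + inv(0)(1) * mu(1)) / lambda,
    (inv(1)(0) * mu(0) + inv(1)(1) * mu(1)) / lambda)
  println(w.mkString("w = [", ", ", "]"))
}
```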

What is the gambler’s equivalent of Amdahl’s Law?

October 8th, 2009

While executing some statistical detective work for a client we had a major “aha!” moment and realized that something like Amdahl’s Law, rephrased in terms of probability, would solve everything. We finished our work using direct methods and moved on. But it remains an interesting question: what is the probabilist’s (or gambler’s) equivalent of Amdahl’s Law?
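
For reference, the deterministic law the question riffs on: if a fraction p of a task can be sped up by a factor of s and the remaining 1 - p cannot, Amdahl’s Law gives the overall speedup as

```latex
S(s) = \frac{1}{(1 - p) + p/s}, \qquad \lim_{s \to \infty} S(s) = \frac{1}{1 - p}
```

so the untouched fraction caps the achievable speedup no matter how large s grows.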

Good Graphs: Graphical Perception and Data Visualization

August 28th, 2009

What makes a good graph? When faced with a slew of numeric data, graphical visualization can be a more efficient way of getting a feel for the data than going through the rows of a spreadsheet. But do we know if we are getting an accurate or useful picture? How do we pick an effective visualization that neither obscures important details nor drowns us in confusing clutter? In 1985, William Cleveland published a text called The Elements of Graphing Data, inspired by Strunk and White’s classic writing handbook The Elements of Style. The Elements of Graphing Data puts forward Cleveland’s philosophy about how to produce good, clear graphs, not only for presenting one’s experimental results to peers, but also for the purposes of data analysis and exploration. Cleveland’s approach is based on a theory of graphical perception: how well the human perceptual system accomplishes certain tasks involved in reading a graph. For a given data analysis task, the goal is to align the information being presented with the perceptual tasks the viewer performs best.

The Data Enrichment Method

April 30th, 2009

We explore some of the ideas from the seminal paper “The Data-Enrichment Method” (Henry R. Lewis, Operations Research (1957), vol. 5, no. 4, pp. 1–5). The paper explains a technique for improving the quality of statistical inference by increasing the effective size of the data-set. This is called “Data-Enrichment.”

Now more than ever we must be familiar with the consequences of these important techniques, especially if we don’t know whether we might already be victims of them.

A Quick Appreciation of the Sharpe Ratio

September 30th, 2008

The current state of the global financial markets has gotten more people than usual worrying about the technical aspects of finance. One method for reasoning about investment returns and risk is a tool called the Sharpe Ratio. It is well worth reviewing this measure and seeing how, if used properly, it doesn’t favor any of the mistakes that underlie our current financial crisis.
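
For reference, the Sharpe Ratio is the mean return in excess of a risk-free benchmark divided by the standard deviation of returns. A minimal sketch with an invented return series:

```scala
// Sharpe ratio of a return series (illustrative, invented data).
object SharpeDemo extends App {
  val returns = Array(0.04, -0.01, 0.03, 0.02, -0.02, 0.05) // period returns
  val riskFree = 0.01                            // per-period risk-free rate

  val mean = returns.sum / returns.length
  val variance =
    returns.map(r => (r - mean) * (r - mean)).sum / (returns.length - 1)
  val sharpe = (mean - riskFree) / math.sqrt(variance)
  println(f"Sharpe ratio: $sharpe%.3f")
}
```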