Lester Mackey

Dr. Mackey is a machine learning researcher at Microsoft Research New England and an adjunct professor at Stanford University. His PhD (Computer Science, 2012) and MA (Statistics, 2011) are both from the University of California, Berkeley, and his undergraduate degree (Computer Science, 2007) is from Princeton University.

He is involved in Stanford’s Statistics for Social Good initiative and features the following quote on his website:

Quixotic though it may sound, I hope to use computer science and statistics to change the world for the better.

In 2023, Dr. Mackey was awarded a MacArthur Fellowship, popularly known as a “Genius Grant.”

Topics covered

According to his personal website, Dr. Mackey’s areas of research are:

  • statistical machine learning
  • scalable algorithms
  • high-dimensional statistics
  • approximate inference
  • probability

Relevant work

When data is collected in an adaptive manner, even simple methods like ordinary least squares can exhibit non-normal asymptotic behavior. As an undesirable consequence, hypothesis tests and confidence intervals based on asymptotic normality can lead to erroneous results. We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation. Our proposed methods take advantage of the covariance structure present in the dataset and provide sharper estimates in directions for which more information has accrued. We establish an asymptotic normality property for our proposed online debiasing estimators under mild conditions on the data collection process and provide asymptotically exact confidence intervals…
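The distributional anomaly described in this abstract is easy to see in simulation. Below is a minimal sketch (illustrative parameters throughout; this is not the paper’s online debiasing estimator): rewards for two identical arms are collected with an epsilon-greedy rule, and the plain sample mean of an arm, which is the ordinary least squares estimate in this simple design, comes out biased, while the same estimator under i.i.d. sampling does not.

```python
import numpy as np

rng = np.random.default_rng(0)

def arm0_estimates(adaptive, n=100, eps=0.1, trials=5_000):
    """Monte Carlo sample-mean estimates of arm 0's reward mean (true value 0)."""
    estimates = []
    for _ in range(trials):
        counts, sums = np.zeros(2), np.zeros(2)
        for t in range(n):
            if adaptive and t >= 2 and rng.random() > eps:
                # Greedy choice based on the data collected so far.
                arm = int(np.argmax(sums / np.maximum(counts, 1)))
            else:
                arm = int(rng.integers(2))  # uniform (non-adaptive) sampling
            counts[arm] += 1
            sums[arm] += rng.normal(0.0, 1.0)  # both arms have mean 0
        if counts[0] > 0:
            estimates.append(sums[0] / counts[0])
    return np.array(estimates)

print(f"bias under i.i.d. sampling:   {arm0_estimates(False).mean():+.4f}")
print(f"bias under adaptive sampling: {arm0_estimates(True).mean():+.4f}")
```

Under the greedy rule the estimate is systematically biased downward (an arm keeps being sampled while it looks good and stops being sampled once it looks bad), which is exactly the kind of anomaly the proposed online debiasing estimators are designed to correct.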

This work develops central limit theorems for cross-validation and consistent estimators of its asymptotic variance under weak stability conditions on the learning algorithm. Together, these results provide practical, asymptotically-exact confidence intervals for k-fold test error and valid, powerful hypothesis tests of whether one learning algorithm has smaller k-fold test error than another. These results are also the first of their kind for the popular choice of leave-one-out cross-validation. In our real-data experiments with diverse learning algorithms, the resulting intervals and tests outperform the most popular alternative methods from the literature…
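As a concrete illustration of the kind of interval these results justify, here is a minimal sketch, assuming squared-error loss and using scikit-learn’s Ridge as a stand-in learning algorithm (the paper’s exact variance estimators and comparison test are not reproduced): collect one held-out loss per example across the k folds, then form an asymptotically-normal interval around their mean.

```python
import numpy as np
from scipy import stats
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

# Illustrative data and learner; any estimator with fit/predict would do.
X, y = make_regression(n_samples=500, n_features=10, noise=1.0, random_state=0)

# One held-out squared-error loss per example, gathered across the k folds.
losses = np.empty(len(y))
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = Ridge().fit(X[train_idx], y[train_idx])
    losses[test_idx] = (model.predict(X[test_idx]) - y[test_idx]) ** 2

# Asymptotically-normal 95% confidence interval for the 10-fold test error.
n, err = len(losses), losses.mean()
half_width = stats.norm.ppf(0.975) * losses.std(ddof=1) / np.sqrt(n)
print(f"10-fold test error: {err:.3f}, 95% CI: ({err - half_width:.3f}, {err + half_width:.3f})")
```

The same per-example losses support a paired test of whether one algorithm has smaller k-fold test error than another: difference the two loss vectors and apply the identical normal approximation to the differences.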

Other

The precursor to Kaggle was the Netflix Prize, a $1 million award offered by Netflix for the most accurate prediction of the ratings people give to the movies they watch. As undergraduates, Dr. Mackey and two friends led the competition for a few hours in its first year. Later, Dr. Mackey’s team merged with several others to form The Ensemble. Their final submission came in second with exactly the same error rate as the winning entry, which had been submitted 20 minutes earlier. Sigh.

