Lars Peter Hansen explains the work that won him the 2013 Nobel.
In October, Lars Peter Hansen, the David Rockefeller Distinguished Service Professor in Economics and Statistics, shared the 2013 Nobel Prize in Economics with finance professor Eugene Fama, MBA'63, PhD'64, and Yale's Robert Shiller, all cited for their contributions to asset price analysis. Hansen was recognized for his work on a statistical technique called the Generalized Method of Moments (GMM), a method that lets you "do something without having to do everything," he says. Building on these theoretical advances, Hansen and his coauthors applied the methods to better understand the linkages between financial markets and the macroeconomy.
The GMM is designed to help statisticians draw more accurate conclusions about the relationships between variables in a stochastic economic model without having to formally model the dynamic evolution of all of its components. It can also be used to estimate and test existing models, including those advocated by Fama and Shiller. The GMM can function as a skeleton key, unlocking the strengths and weaknesses of the elements of an economic model that pertain to the returns on alternative investment opportunities.
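The core idea — estimating a model's parameters from a handful of moment conditions rather than a full specification — can be illustrated with a toy example. The sketch below is not Hansen's formulation; it fits the mean and variance of simulated normal data by minimizing a quadratic form in three sample moment conditions, using an identity weighting matrix (the usual first step of two-step GMM):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=5000)  # simulated data: mu=1, sigma^2=4

def moments(theta, x):
    """Sample averages of three moment conditions; each should be ~0 at the truth."""
    mu, sigma2 = theta
    g1 = x - mu                              # E[x] = mu
    g2 = x**2 - (mu**2 + sigma2)             # E[x^2] = mu^2 + sigma^2
    g3 = x**3 - (mu**3 + 3 * mu * sigma2)    # E[x^3] under normality (overidentifying)
    return np.array([g1.mean(), g2.mean(), g3.mean()])

def gmm_objective(theta, x, W):
    gbar = moments(theta, x)
    return gbar @ W @ gbar  # quadratic form in the sample moments

W = np.eye(3)  # identity weighting matrix for the first step
result = minimize(gmm_objective, x0=[0.0, 1.0], args=(x, W), method="Nelder-Mead")
mu_hat, sigma2_hat = result.x
print(mu_hat, sigma2_hat)  # close to the true values 1.0 and 4.0
```

Because there are three moment conditions but only two parameters, the model is overidentified: the leftover moment provides a check on the model itself, which is the sense in which GMM both estimates and tests.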
You work in econometrics, at the intersection between economics and statistics. What is the benefit of adding mathematical modeling to economics?
Some people view the role of formal mathematics with skepticism, arguing that the field is too mathematical. One reason I use math is to provide clarity. By the time you translate something into mathematics, you have to be clear about what you're trying to say, and you open the door to a formal analysis of the strengths and weaknesses of the models that interest us.
What are the practical applications of the Generalized Method of Moments?
A lot of it is about assessing and testing models. Some of the papers I received the most attention for exposed problems in models, exposing where their gaps were. How do we fill those gaps? How do we build new models that address the empirical shortcomings of the initial ones?
In a paper that I wrote with Stanford's Ken Singleton, we took a model that was in common use among economists and showed that if you looked at it from a formal statistical perspective, it had some major problems in terms of connecting the macroeconomy and financial markets.
My work isn't telling people how to go out and make money; it's asking how can we build models that work better and are more useful for thinking about important policy questions.
When you write down a model, it's a simplification.
- Lars Hansen
How can your research affect the average citizen?
I'm interested in the concept of uncertainty, how people respond to and cope with it. This work should have ramifications for how people make decisions.
When we build models as economists, and when people build and use models in the private sector, these models are not perfect. If you take them too literally, you can make mistakes. How do you use models in sensible ways without pretending that they get everything correct? That's a really important challenge for the private sector, the public sector, and for academic economists interested in uncertainty.
How does your method take into account that uncertainty?
When you write down a model, it's a simplification. It's not going to capture everything in reality by its very nature. How do you approach that? Well, you can start doing things like sensitivity analysis—to see what happens if you start changing aspects of the model. What happens to the probabilities? How much do they change, which probabilities change a lot, and which ones don't change much?
These questions often do not translate into complete probabilistic statements because when a model is wrong, if you knew how to fix it, then you could make it right. Typically you don't quite know how to fix it, but you analyze it to at least get some idea of where the important sensitivities are. From the decision-maker's standpoint, you want to know where a mistake in the model would have the biggest consequences.
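The kind of sensitivity check Hansen describes can be sketched with a toy model. All of the numbers below are illustrative assumptions, not drawn from his work: a normal model of annual returns, where we perturb each parameter by 10 percent and ask which perturbation moves a tail probability more.

```python
from scipy.stats import norm

# Illustrative baseline model: annual returns ~ Normal(mean=0.06, vol=0.15)
mu, sigma = 0.06, 0.15
threshold = -0.20  # event of interest: a loss worse than 20%

def p_loss(mu, sigma):
    """Probability the return falls below the threshold under the model."""
    return norm.cdf(threshold, loc=mu, scale=sigma)

base = p_loss(mu, sigma)

# Perturb each parameter by 10% and see which probability changes more
dp_mu = p_loss(mu * 1.1, sigma) - base      # change from perturbing the mean
dp_sigma = p_loss(mu, sigma * 1.1) - base   # change from perturbing the volatility
print(base, dp_mu, dp_sigma)
```

In this toy setup, the tail probability is far more sensitive to the volatility assumption than to the mean, which is the decision-relevant information: a mistake in that part of the model has the bigger consequences.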
How does this work affect policy making?
The financial crisis exposed gaps in our understanding of the linkages between financial markets and the macroeconomy. Those gaps now have to be repaired. If you look at what's required in policy-making circles, there is pressure to ask how we can do a better job of monitoring financial markets.
Recently, a lot of that has been done in very informal and ad hoc ways because the available models were not very useful as guides. So what I've been trying to do, through the Becker Friedman Institute and with scholars all around the country, is determine how we can build the next generation of models, ones that will be useful for quantitative purposes and will put us in a better position to think about sensible ways to monitor financial markets. —M.S.
- 1978: PhD (Economics) - University of Minnesota
- 1974: BS (Mathematics) - Utah State University
- 1981–present: University of Chicago
- 1978–1981: Carnegie-Mellon University
- 2013: Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (w/ E. Fama, R. Shiller)
- 2011: BBVA Foundation Frontiers of Knowledge Award in Economics, Finance and Management
- 2008: CME Group-MSRI Prize in Innovative Quantitative Applications
- 2006: Erwin Plein Nemmers Prize in Economics
- 1984: Frisch Medal, Econometric Society (w/ K. Singleton)