There is a narrative within our field that macroeconomics has lost its way. While I have some sympathy with this narrative, I think it is a better description of the field 10 years ago than of the field today. Today, macroeconomics is in the process of regaining its footing. Because of this, in my view, the state of macroeconomics is actually better than it has been for quite some time.
The most important problem with macro over the past few decades has been that it has been too theoretical. When I say this, I don’t at all mean to say that theory is useless. To the contrary, theory is an essential element of a healthy science. But a healthy science needs a balance between theory and empirical work. Macro lost this balance in the 1980s and is only regaining it now.
Most narratives about the evolution of macro focus on the evolution of macroeconomic theory and the rational expectations revolution in particular. An under-appreciated part of this story is that the rational expectations revolution shifted the field away from empirical work. This was partly because building models that met the higher standards of rigor set by Lucas and his co-revolutionaries was a challenging and therefore highly absorbing task. But that isn’t the only reason.
For reasons that are not entirely clear to me, a very substantial fraction of macroeconomists came to believe that the Lucas critique implied that quasi-experimental empirical methods could not be used in macro. The idea that changes in policy could radically alter empirical regularities (i.e., the Lucas critique) somehow came to be interpreted as implying that the only way to do empirical work in macro was to write down fully specified general equilibrium models of the whole economy and evaluate the entire model (either by full-information inference methods or by moment matching). Sargent, for example, placed enormous emphasis on the idea of “cross-equation restrictions.” This line of thinking seems to have led large numbers of macroeconomists astray, for several decades, in how they thought about empirical work in macro.
This misunderstanding was never complete. There were isolated pockets of empirical work in macro that employed instrumental variables methods — e.g., using lags as instruments when estimating Phillips curves or Euler equations. The structural VAR literature also managed to carve out some limited understanding of how “identifying assumptions” could be used to move away from whole-model inference. And throughout, there was a small minority of empirical macro researchers who understood the value of quasi-experimental methods. But a large fraction of macroeconomists rejected such analysis as unsound, and an even larger fraction (myself included) were muddled and inconsistent in their thinking about when and how quasi-experimental methods could be used in macro (often rejecting these methods out of hand in unfamiliar settings while being perfectly happy to use them in other, more familiar settings).
This misunderstanding seriously held back progress in empirical macroeconomics for a generation. Over this period, applied micro experienced a credibility revolution, which led various types of quasi-experimental methods to become vastly more important in many subfields of economics. Macro was largely left behind on this front. Two things are worth noting about this. First, quasi-experimental work is particularly difficult in macro because identification is harder in a general equilibrium setting. Second, substantial parts of applied micro became pretty unbalanced in the other direction, with theory largely falling by the wayside. Recently, as macro has been catching up on the empirical side, it seems that more and more researchers in applied micro have also started embracing more thoroughly the complementarity of quasi-experimental methods and serious structural modeling, i.e., having “the best of both worlds,” as Todd and Wolpin recently put it. In this regard, macro was probably ahead of the curve and may even have helped influence our applied micro colleagues.
During the time theory was dominant in macro, much progress was made on the theoretical front. But being so dominated by theory, the field was very exposed to another problem: models in which markets work well are (usually) easier to solve than models in which markets work poorly. This simple fact has huge consequences because it imparts a bias on economic theory towards models in which markets work well: since such models are easier to solve, researchers tend to work with them. The default assumption about a market is typically that it is perfectly competitive. Researchers will often introduce a carefully constructed friction in a critical place in their model and focus their analysis on the implications of this friction. But all other markets in the model are typically modeled as perfectly competitive for simplicity.
The typical researcher is so used to assuming that virtually all markets are perfectly competitive that they are often completely blind to the consequences of these assumptions. They take certain implications of these perfect-markets assumptions as though they were inevitable consequences of logic, as opposed to consequences of obviously false simplifying assumptions that they and everyone they know have made for years and years. One example of this that resonates strongly with me is the notion that marginal propensities to consume (MPCs) are trivially small in many macro models. This has wide-ranging consequences for the behavior of these models (e.g., stimulus checks are useless). For many years, I had never encountered a model where this was not true (or had at least forgotten any such instances). So, I was largely brainwashed to believe that these features of these models were just how things must be. But then at some point I came to appreciate how differently models with uninsurable idiosyncratic risk behave, and it was like being hit by a ton of bricks. How could I have not realized how critical the simplifying assumption of perfect markets was in this regard! (There are many other such examples. A high-profile one in labor economics has to do with the implications of raising minimum wages.)
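To see why frictionless models deliver tiny MPCs, consider a back-of-the-envelope calculation (a stylized illustration, assuming a frictionless, infinite-horizon permanent-income consumer facing a constant real interest rate r): a one-time windfall raises consumption only by its annuity value.

```latex
% Stylized illustration (not a result from any particular paper): the MPC out
% of a one-time windfall W in a frictionless, infinite-horizon permanent-income
% model with constant real interest rate r is the annuity value of W:
\[
  \Delta c \;=\; \frac{r}{1+r}\, W
  \qquad\Longrightarrow\qquad
  \mathrm{MPC} \;=\; \frac{r}{1+r} \;\approx\; 0.04
  \quad \text{at an annual real rate of } r = 0.04 .
\]
% With uninsurable idiosyncratic risk and borrowing constraints, constrained
% households consume most of a windfall right away, so the relevant MPCs can
% be an order of magnitude larger.
```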
One of the critical roles of empirical work is to confront theorists and policymakers with facts that help them see that the models they are using are not well suited to analyzing whatever feature of reality they are interested in. Macro’s muddled understanding of the value of quasi-experimental methods was a huge handicap for the development of the field in this regard. Some facts are simple and don’t need quasi-experimental methods to establish (the equity premium is a good example). But many critically important facts are beyond reach without quasi-experimental methods (e.g., estimates of MPCs, fiscal multipliers, the slope of the Phillips curve, the intertemporal elasticity of substitution, the effects of monetary shocks, etc.). Without a robust set of such estimates to guide the development of theory, the theoretical literature is rudderless and at risk of getting lost at sea.
Thankfully, things have now started to improve very rapidly on this front in macro. Especially among younger researchers, the cross-equation restriction fog is lifting and the value of quasi-experimental methods is starting to be understood more clearly and more generally. It is, for example, becoming better and better understood that with the help of an instrument (or some other source of exogenous variation) one can estimate various types of causal effects without specifying a full structural model of the whole economy. (Panel data methods and various non-traditional datasets have also helped a lot.)
One still sometimes faces questions along the following lines: “but aren’t X and Y endogenous variables that are jointly determined in general equilibrium?” even when one has spent a huge amount of time explaining the nature of the exogenous variation that one is exploiting. And one still faces blank stares on occasion when one responds to such questions with a version of “Yes, so are P and Q in a supply and demand setting, but that doesn’t mean that estimating the slope of a demand curve by IV is impossible.” However, such instances are becoming less frequent.
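To illustrate the supply-and-demand point, here is a minimal simulation sketch (a stylized linear supply-and-demand system with a hypothetical cost shifter z serving as the instrument; all parameter values are made up for illustration). Price and quantity are jointly determined in equilibrium, yet an instrument that shifts only the supply curve recovers the demand slope, while a naive regression of Q on P does not.

```python
# Minimal sketch (hypothetical linear model): P and Q are jointly determined,
# but a supply shifter z that is unrelated to demand shocks identifies the
# demand slope via IV, while OLS of Q on P mixes supply and demand.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

beta_d = -1.0   # true demand slope:  q = beta_d * p + u_d
beta_s = 0.5    # supply slope:       q = beta_s * p + gamma * z + u_s
gamma = 1.0     # strength of the cost shifter (the instrument)

z = rng.normal(size=n)    # observed supply shifter (instrument)
u_d = rng.normal(size=n)  # unobserved demand shock
u_s = rng.normal(size=n)  # unobserved supply shock

# Market clearing: solve the two equations for the equilibrium price and quantity.
p = (gamma * z + u_s - u_d) / (beta_d - beta_s)
q = beta_d * p + u_d

ols = np.cov(q, p)[0, 1] / np.var(p, ddof=1)    # biased by simultaneity
iv = np.cov(q, z)[0, 1] / np.cov(p, z)[0, 1]    # IV (Wald) estimate of the demand slope

print(f"OLS: {ols:.2f}   IV: {iv:.2f}   true demand slope: {beta_d:.2f}")
```

Nothing about the joint determination of p and q prevents the IV estimate from recovering the demand slope; what matters is the validity of the exclusion restriction, i.e., that the instrument moves supply but not demand.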
The upside of this is that credible estimates of more and more critical empirical statistics are emerging in macro, and this is starting to guide theoretical and policy work in a more and more serious way. Let me take a few examples. We now have a substantial body of high-quality work indicating that MPCs are quite large. This is the basic empirical fact favoring heterogeneous-agent New Keynesian (HANK) models over traditional New Keynesian models. But this fact also has important consequences when it comes to the macroeconomic effects of policies that supplement people’s incomes during recessions. We also have a substantial body of high-quality work indicating that fiscal multipliers are large in the cross-section. This fact points in the same direction as the high-MPC fact: macro stimulus can raise output substantially in circumstances when monetary policy is accommodative (e.g., at the zero lower bound). Furthermore, we have more and more work indicating that the slope of the Phillips curve is modest. This implies that a boom that leads to overheating of the economy will have modest effects on inflation as long as inflation expectations remain anchored. (There are many more good examples.)
Macro has a lot of lost time to make up for when it comes to making use of quasi-experimental empirical methods. This creates a stock-flow problem. The stock of empirical work using quasi-experimental methods in macroeconomics remains low, and the challenges of doing such work remain high due to the difficulties of identification in a general equilibrium setting. But the flow is very different from the stock: there is a substantial flow of high-quality quasi-experimental empirical work in macro. Looking at this flow, one can reasonably argue that the field is trending strongly towards a healthy balance between theory and empirical work. I am optimistic that this will, over time, result in more and more people concluding that macro has “found its way” again. I certainly think it has.