Press release - Working Paper 245

Robustifying optimal monetary policy using simple rules as cross-checks

There are two main approaches to specifying monetary policy in the literature: optimal policy and simple instrument rules. By 'optimal policy', we mean minimizing a specific loss function using all information embedded in the model.

Simple instrument rules, on the other hand, specify how the monetary policy instrument - the key interest rate - should respond to a subset of the information available to the policy-maker. The original Taylor (1993) rule is an example of a simple rule where the central bank responds to a subset of the information set, namely the rate of inflation and the output gap. By construction, simple rules lead to a higher loss than optimal policy when evaluated in a given model, but the excess loss depends both on how restricted the simple rule is and on the model itself.
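For reference, Taylor's (1993) original rule, which assumes a 2 percent equilibrium real rate and a 2 percent inflation target, can be written as

```latex
i_t = \pi_t + 0.5\, y_t + 0.5\,(\pi_t - 2) + 2 ,
```

where $i_t$ is the federal funds rate (in percent), $\pi_t$ is inflation over the previous four quarters, and $y_t$ is the percentage deviation of output from trend.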

In addition to providing a rough description of actual policy, simple rules have a normative motivation: they are considered more robust to model uncertainty than optimal policy. Taylor and Wieland (2012) provide a survey and discussion of the literature on simple robust rules. In this literature, the model simulations are commonly based on the assumption that the central bank commits to the simple rule in a mechanical way. However, as pointed out by Svensson (2003), full commitment to a simple rule like the Taylor rule is unrealistic, and no central bank actually does this in practice. Svensson therefore rejects simple rules, from both a positive and a normative perspective. Instead, he advocates optimal policy (or 'targeting rules'), arguing that it is a more reasonable description of monetary policy, since the central bank is treated as an optimizing agent in the same way as households and firms, and that optimal policy leads to better outcomes than simple rules.

Although Svensson's critique may be justified, the fact that central banks do not commit themselves to following simple instrument rules like the Taylor rule mechanically does not imply that monetary policy is not influenced by such rules at all. On the contrary, we find it reasonable to assume that monetary policy in practice has, at least to some extent, been influenced by the vast literature on simple robust rules. Indeed, Kahn (2012) provides thorough documentation of how simple Taylor-type rules have influenced the Federal Open Market Committee's decisions and how they are used as cross-checks on interest rate decisions in many central banks. An illustrative example is the FOMC meeting on 31 January-1 February 1995, where the Greenbook suggested a 150 basis point increase in the federal funds rate to 7 percent. FOMC member Janet Yellen expressed the following concern: "I do not disagree with the Greenbook strategy. But the Taylor rule and other rules… call for a rate in the 5 percent range, which is where we already are. Therefore, I am not imagining another 150 basis points".

Similar references to the Taylor rule can also be found in accounts of policy meetings held at other central banks.
We will therefore argue that a realistic description of the monetary policy process is one where the central bank conducts optimal policy using all available information, but uses simple rules as cross-checks (or guidelines). This approach seems consistent with how policy-makers form their interest rate decisions in practice. For example, Yellen (2012) describes the assessments as follows: "One approach I find helpful in judging an appropriate path for policy is based on optimal control techniques. [...]. An alternative approach that I find helpful [...] is to consult prescriptions from simple policy rules".

While the existing literature on robustness assumes either optimal policy or full commitment to a simple robust rule, we take an intermediate approach. We introduce a modified loss function extended with a term penalizing deviations of the interest rate from the level implied by a simple rule.
Our approach is inspired by Rogoff's (1985) seminal paper on the optimal degree of commitment to an intermediate target, in which he argues that "it is not generally optimal to legally constrain the central bank to hit its intermediate target (or follow its rule) exactly" (p. 1169). While Rogoff's proposal was aimed at reducing the inflationary bias under discretion, we consider partial commitment to rules designed to make policy more robust. In other words, we analyze whether the loss across different models tends to be lower if the central bank minimizes a modified loss function with weight on a simple rule. The idea of extending the loss function with a term involving a simple interest rate rule is novel, but the idea of robustifying optimal policy through modified loss functions is not new. Orphanides and Williams (2008) show that a loss function with reduced weight on the unemployment gap and on interest rate stability is more robust to wrong assumptions about private agents' expectations formation (i.e., rational expectations versus learning).
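As a stylized illustration (the notation here is ours; the paper's exact specification may differ), the standard period loss $L_t$ can be augmented with a quadratic penalty on deviations of the interest rate from the level implied by a simple rule, $i_t^{SR}$:

```latex
\tilde{L}_t = (1 - \lambda)\, L_t + \lambda \left( i_t - i_t^{SR} \right)^2 , \qquad 0 \le \lambda \le 1 ,
```

where $\lambda = 0$ corresponds to fully optimal policy and $\lambda = 1$ to mechanical commitment to the simple rule; intermediate weights capture the partial commitment analyzed in the paper.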

Our approach of using cross-checks in the modified loss function is also related to Beck and Wieland (2009). They consider a policy where the central bank conducts optimal policy in "normal" times, but extends the loss function with a money growth term when money growth is outside a critical range. Our specification differs in using simple interest rate rules, rather than money growth, as the cross-check, and in letting the simple rule always enter the operational loss function rather than only when the deviation is outside a critical range. The novelty of our modified loss function is that it builds a bridge between the two alternative monetary policy approaches: optimal policy and simple robust rules, making it possible to analyze intermediate solutions.

To analyze the robustness properties of the modified loss function, we consider three alternative models for the US economy: the Smets and Wouters (2007) model, the Rudebusch and Svensson (1999) model, and the Fuhrer and Moore (1995) model. We assume that the Smets-Wouters model is the central bank's reference model due to its influence on the models used for policy simulations at central banks in practice. We have re-estimated the other two models on the dataset from Smets and Wouters (2007) to obtain comparable estimates of the variances of the shocks, and thereby of the losses in the different models. The two alternative models have been thoroughly investigated in the robustness literature, which makes it easier to compare our results with those obtained earlier. Moreover, and most importantly, these models represent very different views on issues such as inflation persistence and expectation formation. The Rudebusch-Svensson model is completely backward-looking, while the Fuhrer-Moore model is partly forward-looking and partly backward-looking. As we will show, backward-looking models imply a very different monetary policy as far as inertia is concerned, and they are therefore a natural alternative to the largely forward-looking reference model.

We consider different simple rules, including the classical Taylor rule; an optimal simple Bayesian rule, which minimizes the (weighted) average of the losses in the different alternative models; and a minimax rule, which minimizes the maximum loss across the alternative models. We find that by placing a weight on any of these rules, the central bank can insure against very bad outcomes if the reference model is wrong. Even though the simple Bayesian rule and the minimax rule are derived optimally using the alternative models, the classical Taylor rule, with coefficients of 1.5 on inflation and 0.5 on the output gap, does surprisingly well, and not significantly worse than the simple optimized rules. Another interesting finding is that the weight on a simple rule is always strictly smaller than one. Thus, a robust monetary policy is to lean towards simple rules, but not to follow them mechanically. We therefore find support for the common view among proponents of simple rules, namely that they should be used as guidelines, but not as mechanical formulas for interest rate-setting.
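The distinction between the Bayesian and the minimax criterion can be sketched in a few lines of code. The per-model loss functions below are hypothetical stand-ins (not the paper's estimated models), and the grid search over the rule coefficients is purely illustrative:

```python
# Illustrative sketch: choose Taylor-type rule coefficients (a on inflation,
# b on the output gap) by grid search, comparing a Bayesian criterion
# (weighted-average loss across models) with a minimax criterion
# (worst-case loss across models). The loss functions are made up.

import itertools

# Hypothetical loss each model delivers under the simple rule (a, b).
model_losses = [
    lambda a, b: (a - 1.4) ** 2 + (b - 0.6) ** 2 + 1.0,        # "model 1"
    lambda a, b: (a - 1.8) ** 2 + 0.5 * (b - 0.4) ** 2 + 1.2,  # "model 2"
    lambda a, b: 0.5 * (a - 1.5) ** 2 + (b - 0.5) ** 2 + 0.9,  # "model 3"
]
weights = [1 / 3, 1 / 3, 1 / 3]  # equal prior weights for the Bayesian rule

grid = [round(0.1 * k, 1) for k in range(0, 31)]  # coefficients in [0, 3]

def bayesian_rule():
    # Minimize the weighted-average loss across the models.
    return min(itertools.product(grid, grid),
               key=lambda ab: sum(w * L(*ab)
                                  for w, L in zip(weights, model_losses)))

def minimax_rule():
    # Minimize the maximum (worst-case) loss across the models.
    return min(itertools.product(grid, grid),
               key=lambda ab: max(L(*ab) for L in model_losses))

print("Bayesian rule (a, b):", bayesian_rule())
print("Minimax  rule (a, b):", minimax_rule())
```

By construction, the minimax rule can never have a worse worst-case loss than the Bayesian rule, while the Bayesian rule can never have a worse average loss; which criterion is preferable depends on the policy-maker's attitude towards model uncertainty.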