
The Fisher Paradox

November 24, 2014

There is a bit of a paradox underlying much of monetary economics. If real rates are independent of monetary factors, then a reduction in the nominal rate should be accompanied by a reduction in the expected rate of inflation (and vice versa). Yet we typically observe, at least in the short run, that when the central bank lowers its interest rate target, it causes a higher rate of inflation. Of course, both old monetarists and market monetarists reconcile this by saying “never reason from a price change” (always good advice) and instead reasoning from a change in the money supply (and the expected future money supply), assuming sticky prices in the short run. They then separate the effects on interest rates into the well-known liquidity, income, and Fisher effects, which allows the real rate to change in the short run and the nominal rate to go either way.
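In symbols, the tension is just the standard Fisher equation (the notation here is mine, but the relationship is the textbook one):

$$ i = r + \mathbb{E}[\pi] $$

where $i$ is the nominal rate, $r$ the real rate, and $\mathbb{E}[\pi]$ the expected rate of inflation. If $r$ is pinned down by non-monetary factors, then a permanently lower $i$ arithmetically requires lower expected inflation; the conventional short-run observation points the other way, and that is the paradox.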

That’s all perfectly reasonable, but lately a school of thought known as the “neo-Fisherites” has been emerging and bringing this issue back into the discussion. Nick Rowe (for one) has recently been taking them to task (here, here and here).

Now let me say for starters that I suspect everything Nick says about these papers is correct, and I’m not trying to defend them. I agree that denying that lowering rates raises inflation runs contrary to all observation, and I suspect (though I haven’t read the papers yet) that his analysis of them as lacking in economic intuition and relying on strange assumptions to “rig” the results in favor of their prior beliefs is most likely spot on. That is how I feel about most modern economic papers I read, sadly. However, I think that beneath the snow job and the tiny pebble of wrongness there is actually a kernel of insight (or at least the pebble started out as a kernel before it got all mangled and turned to the dark side), and it is closely related to the stuff I have been trying to say. So I will try to flesh it out a little in a way that does not contradict everything we know about how monetary policy actually works.

Note that this actually began as a discussion of monetary and “fiscal” policy, which I intend to get to, but I will put that off for a future post, since dealing with this Fisher paradox will be enough to fill a lengthy post by itself. Keep in mind, though, that adding that piece in will be important for making this model look like the real world. (And also keep in mind that I don’t mean what other people mean when I say “fiscal policy.” Frankly, it’s almost tongue-in-cheek. All macro is monetary.)

A Reply To Nick Rowe on Robustness

November 22, 2014

This is a reply to Nick Rowe’s post on the fragility/robustness of equilibria. For the record, I agree entirely with his overarching macroeconomic point. I’m just nit-picking the technical details here (which I believe is what he’s looking for).

Here are Nick’s definitions.

Let G be a game, let S be a set of strategies in that game (one for each player), and let S* be a Nash equilibrium in that game. Assume a large number of players, and a continuous strategy space, if it helps (because that’s what I have in my mind).

Suppose that a small fraction n of the players deviate by a small amount e from S* (their hands tremble slightly), and that the remaining players know this. Let S*’ (if it exists) be a Nash equilibrium in the modified game.

  1. If S*’ does not exist, then S* is a fragile Nash equilibrium.

  2. If S*’ does not approach S* in the limit as n approaches zero, then S* is a fragile Nash equilibrium.

  3. If S*’ does not approach S* in the limit as e approaches zero, then S* is a fragile Nash equilibrium.

  4. But if S*’ does exist, and S*’ approaches S* in the limit as n or e approaches zero, then S* is a robust Nash equilibrium.

[This began as a comment on the original post, so I will proceed in the second person.]

Nick,

I think the wheel you are reinventing is basically the idea of trembling hand perfection. I’m not quite an expert on that, but I think I know enough game theory to go out on a limb here. So, taking the definition from Wikipedia:

First we define a perturbed game. A perturbed game is a copy of a base game, with the restriction that only totally mixed strategies are allowed to be played. A totally mixed strategy is a mixed strategy where every pure strategy is played with non-zero probability. This is the “trembling hands” of the players; they sometimes play a different strategy than the one they intended to play. Then we define a strategy set S (in a base game) as being trembling hand perfect if there is a sequence of perturbed games that converge to the base game in which there is a series of Nash equilibria that converge to S.
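To make that concrete, here is a toy 2×2 game (my own made-up example, not from your post or from Wikipedia) with two Nash equilibria, only one of which survives trembles:

```python
# Toy 2x2 game. Rows: player 1 plays U or D; columns: player 2 plays L or R.
# Payoffs (p1, p2): (U,L)=(1,1)  (U,R)=(2,0)  (D,L)=(0,2)  (D,R)=(2,2)
# Both (U,L) and (D,R) are Nash equilibria, but (D,R) only holds because
# U and D tie exactly against R; a tremble breaks the tie.
p1 = {('U', 'L'): 1, ('U', 'R'): 2, ('D', 'L'): 0, ('D', 'R'): 2}

def p1_payoff(own, opp_main, eps):
    # Opponent intends opp_main but trembles to the other column w.p. eps.
    opp_other = 'R' if opp_main == 'L' else 'L'
    return (1 - eps) * p1[(own, opp_main)] + eps * p1[(own, opp_other)]

eps = 0.01
# Against L-with-trembles, U remains the strict best response: (U,L) survives.
print(p1_payoff('U', 'L', eps), p1_payoff('D', 'L', eps))  # 1.01 vs 0.02
# Against R-with-trembles, U strictly beats D: (D,R) is not TH perfect.
print(p1_payoff('U', 'R', eps), p1_payoff('D', 'R', eps))  # 1.99 vs 1.98
```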

I think the main difference between what you are doing and the TH concept is that you are limiting the errors to a “small fraction” of the players, whereas the TH definition above assumes that all players have some probability of making a mistake. (It also assumes that all players know this, not only the “non-trembling” players, which is only natural since there aren’t any such players.)

Now, I believe your game will pass both the traditional trembling-hand perfection criterion and your modified “robustness/fragility” criteria, for the same reasons, but let’s work with the standard modification since we don’t have to deal with two types of player. So let us assume that everyone chooses a “target” speed St, and let individual i’s actual speed be Sti + ei, where ei is an error with some distribution; assume that everyone’s errors are identically distributed and that everyone knows the distribution.

Now there are two issues here. First, there is the issue of the number of players. If it is finite, I believe (though I haven’t done the math) that the game will break down even in its original form, because when everyone is going the speed limit, any individual driver will be able to change the average slightly by changing their own speed, and will therefore be able to get paid by doing so, so everyone will want to do this. (Although there might be (in fact, I bet there would be) an equilibrium where half of them drive over the speed limit and half drive under, and the average speed is S*.)
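The arithmetic behind that is simple enough to spell out (the numbers are mine, just for illustration):

```python
# With N drivers all at the limit, one driver who deviates by d moves
# the average by d / N -- small, but nonzero for any finite N, so a
# single driver can always nudge the average and collect the payment.
for N in (10, 1_000, 1_000_000):
    d = 5.0  # one driver goes 5 over
    print(f"N={N:>9}: average shifts by {d / N}")
```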

However, if we assume an infinite number of players, then this won’t be a problem, and the equilibrium (the one in question, that is) of the base game will be as you say. But now we have another issue to deal with.

First of all, let me say that the thing which makes TH difficult to deal with is the bit “there is a sequence of perturbed games that converge to the base game,” which could mean a lot of different things. But let us assume that the sequence we are interested in is e converging to zero. The problem here is that the thing that matters for each individual’s payoff is the average speed. If the mean of e is zero, everyone is choosing St = S*, and there are an infinite number of players, then the average speed Sbar will always be exactly S*, and the equilibrium will work no matter what the distribution of e (so long as it is mean zero). This is because the distribution of the sample mean collapses onto the population mean as the sample size goes to infinity (the law of large numbers). (And note that if the mean of e is not zero, I’m pretty sure everyone can just adjust their target to account for it and you will still have an equilibrium.)
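A quick simulation makes the point (the specific numbers, a 60 mph limit and a ±2 tremble, are just mine for illustration):

```python
import random

S_star = 60.0  # stand-in for the speed limit

def average_speed(n, spread=2.0):
    # Each driver targets S* exactly but trembles by a mean-zero error.
    return sum(S_star + random.uniform(-spread, spread) for _ in range(n)) / n

for n in (10, 1_000, 100_000):
    print(n, round(average_speed(n), 4))
# The average hugs S* ever more tightly as n grows; in the infinite
# limit the trembles never move it at all, so no one's payoff changes.
```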

I believe this will be the case in your formulation as well, since a fraction of an infinite number of players is still an infinite number, and the distribution of the mean of their errors will still be degenerate. So essentially we have an equilibrium that doesn’t work under any circumstances with a finite number of drivers, and is not ruled out under any circumstances by our proposed refinements.

What we need in order to rule this out is some way of saying that the average speed Sbar might vary for some reason. For instance, if there were some error which was random but applied to every driver (like weather or traffic, or “real” shocks in the case of the macroeconomy), that would probably blow the equilibrium up in a way that would prevent it from converging. Although, as I said above, I think you might be able to find an equilibrium where some people choose a target a bit over S* and some a bit under, with the amount over/under decreasing as the distribution of e collapses to zero, which could be said to be “converging to S*.”
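Here is the same simulation with a common shock added (the shock’s form and size are again just assumptions for illustration):

```python
import random

S_star = 60.0

def avg_with_common_shock(n, idio=2.0, common=2.0):
    # 'shared' hits every driver at once (weather, traffic, etc.), so it
    # does not average out no matter how many drivers there are.
    shared = random.uniform(-common, common)
    return sum(S_star + shared + random.uniform(-idio, idio)
               for _ in range(n)) / n

print([round(avg_with_common_shock(100_000), 2) for _ in range(5)])
# The average still bounces around S* by roughly the common shock's size.
```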

This is interesting stuff, though; I’m glad you got me thinking about it. There is a sort of fundamental dilemma underlying this, I think: much of game theory (and economics) is built around finding conditions under which everyone is indifferent and calling that an equilibrium. For instance, any mixed-strategy equilibrium basically requires the payoff function to be flat over some range of strategies. But that ends up looking a lot like the kind of thing you want to rule out when you start looking for some kind of “stability” criterion.
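Matching pennies is the cleanest illustration of that flatness (my example, not something from your post). Say you win 1 if you match your opponent and lose 1 otherwise, and your opponent plays heads with probability $p$. Then

$$ E[u(\text{heads})] = p - (1-p) = 2p - 1, \qquad E[u(\text{tails})] = (1-p) - p = 1 - 2p, $$

and the mixed equilibrium requires $p = 1/2$, which is exactly the point where both payoffs are equal (to zero), so your payoff is flat across every mixture you could play.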

So what we kind of want to do is have a way of determining whether the nature of an equilibrium is such that, if you “unflattened” it a little bit, each individual would have a maximum in the general neighborhood of that equilibrium that is somehow qualitatively similar, as opposed to “unflattening” it a little and finding a minimum there, which is sort of the case we have here. However, this is a highly untechnical way of putting things.
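A loose way to put it slightly more technically (this is just my sketch): take a flat payoff and “unflatten” it with a small quadratic term,

$$ u_\varepsilon(s) = c - \varepsilon (s - s^*)^2 \quad \text{versus} \quad u_\varepsilon(s) = c + \varepsilon (s - s^*)^2 . $$

Both collapse to the same flat payoff as $\varepsilon$ goes to zero, but the first leaves a maximum at $s^*$ (the “qualitatively similar” case) while the second leaves a minimum there (everyone wants to flee the equilibrium), which is roughly the distinction I’m groping for.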

In this case, we only get an equilibrium in the base game because we made the payoff function flat at that equilibrium by assuming an infinite number of players. But doing that makes other things “flat” in a sense (it makes the distribution of the average speed collapse to the target speed), which makes the equilibrium hard to rule out. What I think you and I would both like to say is something like: “let’s assume a ‘large’ number of players, such that the effect each of their speeds has on the average is functionally zero, but there is still some random variation in the average.” Then we could say that even a slight variation in the average would torpedo the equilibrium, and we would be happy. But man, it’s hard to do that rigorously! (I had a similar problem in my dissertation which I never really solved.)

Another thing you could probably do for this particular case is to put it in the context of a dynamic game and place some restriction on people’s beliefs, like: everyone observes the average speed of the previous day and chooses their target speed based on the assumption that it will be the same today. Then ask what would happen if you had one day where the average speed was slightly above or below the speed limit. Would it work back toward the equilibrium, or would it shoot off to someplace else? Here, I think, it would obviously do the latter. It’s just that with an infinite number of players and an error with mean zero, we can’t get it to depart from the equilibrium in the first place.
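Just to show the mechanics of that (the linear response rule below is purely my assumption, not something derived from the actual payoffs):

```python
# Stylized adjustment dynamic: drivers observe yesterday's average and
# respond so that today's average ends up b times as far from the limit
# S*. Whether b > 1 actually holds is the whole question; I am assuming
# it here just to show what "shooting off" looks like.
S_star = 60.0
b = 1.5                 # assumed over-reaction; any b > 1 diverges
avg = S_star + 0.1      # one day slightly above the speed limit

for day in range(6):
    print(day, round(avg, 3))
    avg = S_star + b * (avg - S_star)   # the gap grows by factor b daily
```

With any response coefficient above one, the initial 0.1 gap compounds instead of dying out; with b below one it would crawl back to S*.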

Incidentally, I have been working on a bit of an apology for the neo-Fisherites. I agree with the “90 percent snow job with a tiny pebble of wrongness” analysis (great line, by the way), but I think there is a kernel of solid intuition in there; it’s just being applied carelessly. I’ll have that up soon.