A Reply To Nick Rowe on Robustness
This is a reply to Nick Rowe’s post on the fragility/robustness of equilibria. For the record, I agree entirely with his overarching, macroeconomic point. I’m just nitpicking the technical details here (which I believe is what he’s looking for).
Here are Nick’s definitions.
Let G be a game, let S be a set of strategies in that game (one for each player), and let S* be a Nash equilibrium in that game. Assume a large number of players, and a continuous strategy space, if it helps (because that’s what I have in my mind).
Suppose that a small fraction n of the players deviate by a small amount e from S* (their hands tremble slightly), and that the remaining players know this. Let S*’ (if it exists) be a Nash equilibrium in the modified game.

If S*’ does not exist, then S* is a fragile Nash equilibrium.

If S*’ does not approach S* in the limit as n approaches zero, then S* is a fragile Nash equilibrium.

If S*’ does not approach S* in the limit as e approaches zero, then S* is a fragile Nash equilibrium.

But if S*’ does exist, and S*’ approaches S* in the limit as n or e approaches zero, then S* is a robust Nash equilibrium.
[This began as a comment on the original post so I will proceed in the second person]
Nick,
I think the wheel you are reinventing is basically the idea of trembling hand perfection. I’m not quite an expert on that, but I think I know enough game theory to go out on a limb here. So, taking the definition from Wikipedia:
First we define a perturbed game. A perturbed game is a copy of a base game, with the restriction that only totally mixed strategies are allowed to be played. A totally mixed strategy is a mixed strategy where every pure strategy is played with nonzero probability. This is the “trembling hands” of the players; they sometimes play a different strategy than the one they intended to play. Then we define a strategy set S (in a base game) as being trembling hand perfect if there is a sequence of perturbed games that converge to the base game in which there is a series of Nash equilibria that converge to S.
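To make that concrete, here is a quick numerical sketch. The 2x2 payoffs are a standard textbook example, not your speeding game, and checking a single small tremble is only a rough stand-in for the full "sequence of perturbed games" in the definition:

```python
import numpy as np

# Payoffs for a 2x2 game (a standard textbook example, not the speeding game):
# rows = player 1's strategies (U, D), columns = player 2's (L, R).
# Both (U, L) and (D, R) are Nash equilibria of this game.
A = np.array([[1.0, 2.0],   # player 1's payoffs
              [0.0, 2.0]])
B = np.array([[1.0, 0.0],   # player 2's payoffs
              [2.0, 2.0]])

def survives_tremble(row, col, eps=0.01):
    """Check whether the pure profile (row, col) stays a best reply for
    each player when the opponent trembles with probability eps."""
    p2 = np.full(2, eps); p2[col] = 1 - eps   # player 2's totally mixed strategy
    p1 = np.full(2, eps); p1[row] = 1 - eps   # player 1's totally mixed strategy
    best_row = np.argmax(A @ p2)   # player 1's best reply to the trembling p2
    best_col = np.argmax(p1 @ B)   # player 2's best reply to the trembling p1
    return bool(best_row == row and best_col == col)

print(survives_tremble(0, 0))  # (U, L) survives the tremble: True
print(survives_tremble(1, 1))  # (D, R) does not: False
```

Here (D, R) is a Nash equilibrium of the base game, but once player 2 plays L with any positive probability, D is strictly worse than U for player 1, which is exactly the kind of equilibrium the refinement throws out.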
I think the main difference between what you are doing and the TH concept is that you are limiting the errors to a “small fraction” of the players, whereas the TH definition above assumes that all players have some probability of making a mistake. (Also, it assumes that all players know this, not only the “non-trembling” players, which is only natural since there aren’t any such players.)
Now, I believe your game will pass both the traditional trembling-hand perfection criterion and your modified “robustness/fragility” criteria for the same reasons, but let’s work with the standard formulation since we don’t have to deal with two types. So let us assume that each individual i chooses a “target” speed St_i, and let their actual speed be St_i + e_i, where e_i is an error with some distribution; assume that everyone’s errors are identically distributed and everyone knows the distribution.
Now there are two issues here. First, there is the issue of the number of players. If it is finite, I believe (though I haven’t done the math) that the game will break down even in its original form: when everyone is going the speed limit, any individual driver can change the average slightly by changing their own speed, and can therefore get paid by doing so, so everyone will want to do this. (Although, there might (in fact I bet there would) be an equilibrium where half of them drive over the speed limit and half drive under and the average speed is S*.)
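Just to put numbers on the finite-player point: with N drivers at the limit, a lone deviation of size d moves the average by d/N, which is nonzero for any finite N but vanishes as N grows (the speeds here are made up, obviously):

```python
# With N drivers all at the limit S_star, one driver deviating by d shifts
# the average speed by d / N (hypothetical numbers for illustration).
S_star, d = 60.0, 5.0
for N in (10, 1_000, 100_000):
    speeds = [S_star] * N
    speeds[0] += d                     # one driver deviates by d
    shift = sum(speeds) / N - S_star   # equals d / N (up to float error)
    print(N, shift)
```

So with a finite number of drivers each one has a real handle on the average, and that handle is exactly what disappears in the infinite-player limit.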
However, if we assume an infinite number of players, then this won’t be a problem and the equilibrium (the one in question that is) to the base game will be as you say. However, now we have another issue to deal with.
First of all, let me say that the thing which makes TH difficult to deal with is the bit “there is a sequence of perturbed games that converge to the base game” which could mean a lot of different things. But let us assume that the sequence we are interested in is e converging to zero. But the problem here is that the thing that matters to each individual’s payoff is the average speed. And if the mean of e is zero, and everyone is choosing St=S* and there are an infinite number of them, then the average speed Sbar will always be S* and the equilibrium will work no matter what the distribution of e (so long as it is mean zero). This is because the distribution of sample means converges to the population mean as the sample size approaches infinity. (And note that if the mean of e is not zero, I’m pretty sure they can all just adjust their target to account for it and you will still have an equilibrium.)
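Here is that law-of-large-numbers point in simulation form. The speed limit and error spread are made-up numbers, and the normal distribution is just one convenient mean-zero choice:

```python
import numpy as np

rng = np.random.default_rng(0)
S_star, sigma = 60.0, 2.0   # hypothetical speed limit and error spread

# Spread of the average speed Sbar across 1,000 simulated "days", for a
# growing number of drivers n. It shrinks like sigma / sqrt(n), so with
# "infinitely many" drivers the distribution of Sbar is degenerate at S_star.
for n in (10, 1_000, 10_000):
    errors = rng.normal(0.0, sigma, size=(1_000, n))
    sbar = S_star + errors.mean(axis=1)
    print(n, round(sbar.std(), 4), "theory:", round(sigma / n**0.5, 4))
```

The simulated standard deviation of Sbar tracks sigma/sqrt(n), which is the sense in which the average speed simply cannot vary once there are infinitely many drivers with independent mean-zero errors.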
I believe this will be the case in your formulation as well since a fraction of the infinite number of players will still be an infinite number and the distribution of the mean of their errors will still be degenerate. So essentially we have an equilibrium that doesn’t work under any circumstances with a finite number of drivers and is not ruled out under any circumstances with our proposed refinements.
What we need in order to rule this out is some way of saying that the average speed Sbar might vary for some reason. For instance, if there were some error e which were random but applied to every driver (like weather or traffic or something, or “real” shocks in the case of the macroeconomy), that would probably blow it up in a way that would prevent it from converging. Although, like I said above, I think you might be able to find an equilibrium where some people choose a target a bit above S* and some a bit below, with the amount above/below decreasing as the distribution of e collapses to zero, which could be said to be “converging to S*.”
This is interesting stuff though, I’m glad you got me thinking about it. There is a sort of fundamental dilemma underlying this I think, which is that much of game theory (and economics) is built around finding conditions under which everyone is indifferent and calling it an equilibrium. For instance, any mixed-strategy equilibrium basically requires the payoff function to be flat over some range of strategies. But that ends up looking a lot like the kind of thing you want to rule out when you start looking for some kind of “stability” criteria.
So what we kind of want to do is have a way of determining whether the nature of an equilibrium is such that if you “unflattened” it a little bit, each individual would have a maximum in the general neighborhood of that equilibrium that is somehow qualitatively similar as opposed to “unflattening” it a little and finding a minimum there which is sort of the case we have here. However, this is a highly untechnical way of putting things.
In this case, we only get an equilibrium to the base game there because we made the payoff function flat in that equilibrium by assuming an infinite number of players. But doing that makes other things “flat” in a sense (makes the distribution of the average speed collapse to the target speed) which makes it hard to rule out. What I think you and I would both like to say is something like “let’s assume a ‘large’ number of players such that the effect each of their speeds has on the average is functionally zero but that there is still some random variation in the average.” Then we could say that even a slight variation in the average would torpedo the equilibrium and we would be happy. But man it’s hard to do that rigorously! (I had a similar problem in my dissertation which I never really solved.)
Another thing you could probably do for this particular case is put it in the context of a dynamic game and put some restriction on peoples’ beliefs, like: everyone observes the average speed of the previous day and chooses their target speed based on the assumption that it will be the same today. Then ask what would happen if you had one day where the average speed were slightly above or below the speed limit: would it work back toward the equilibrium, or would it shoot off to someplace else? Here, I think obviously, it would do the latter. It’s just that with an infinite number of players and an error with mean zero, we can’t get it to depart from the equilibrium in the first place.
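To sketch what I mean, here is a toy version of that dynamic. The adjustment rule is my own stand-in, not your actual payoffs: I simply assume that in the sign-wrong case tomorrow’s average overshoots away from the limit by a factor k > 1, while in the sign-right case it is pulled back (k < 1):

```python
# A toy belief-based adjustment dynamic (an assumed stand-in, not the exact
# game): each day drivers target yesterday's average, and tomorrow's average
# ends up at S_star + k * (yesterday's average - S_star). The sign-wrong case
# is modeled as k > 1 (deviations amplify), sign-right as k < 1 (they decay).
def simulate(k, S_star=60.0, shock=0.1, days=8):
    sbar = S_star + shock            # one day's average is slightly off
    path = [sbar]
    for _ in range(days):
        sbar = S_star + k * (sbar - S_star)
        path.append(sbar)
    return path

print(simulate(k=0.5))   # "sign right": the deviation dies back toward 60
print(simulate(k=2.0))   # "sign wrong": the deviation explodes away from 60
```

The point is just that the same tiny one-day shock either decays geometrically or compounds geometrically depending on the sign of the feedback, which is the latter case here.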
Incidentally, I have been working on a bit of an apology for the neo-Fisherites. I agree with the “90 percent snow job with a tiny pebble of wrongness” analysis (great line by the way), but I think there is a kernel of solid intuition in there, it’s just being applied carelessly. I’ll have that soon.
Mike: this is very helpful. Thanks for this.
I’m still thinking it through. A couple of minor points.
The difference between a small percentage of the players deviating, and every player having a small probability of deviating, probably doesn’t matter much for me. I would happily go with the second. But I see the problem with an infinite number of players. I would need an aggregate tremble.
With a finite number of players, I have two games: one where “the cops get the sign right”, and a second where “the cops get the sign wrong”. If the cops get the sign right, a finite number of players doesn’t affect the results much. Starting at S*, if one driver increases speed, that causes the average speed to increase, so the cops start imposing fines on those who drive above the average speed. Knowing that, the single driver won’t deviate from S*. But if the cops get the sign wrong, the one driver who deviates from S* will pay a negative fine, so will choose to deviate. So S* is not a NE with a finite number of drivers.
We could do a dynamic version of my game, where the players are learning. That is basically what Peter Howitt did, and others who followed him. I was looking for something simpler, so we could say something similar for even a one-shot game.
I keep having thoughts about the shape of the payoff function, and then realising my thoughts are not quite right. That’s what I’m still thinking about.
“I keep having thoughts about the shape of the payoff function, and then realising my thoughts are not quite right. That’s what I’m still thinking about.”
Me too! That thing I said about “unflattening” is sort of right and sort of not quite right but it’s a complicated thing to wrap one’s mind around. Also I think I had something slightly different in mind when I was talking about an equilibrium where some are above the speed limit and some are below so I think that part is not quite right. But yeah, overall I get your point and I agree with it. The sign matters. It’s just that finding a technical (and general) way of saying why the sign matters is tricky.