Archive for the ‘Micro’ Category

A Reply To Nick Rowe on Robustness

November 22, 2014


This is a reply to Nick Rowe’s post on the fragility/robustness of equilibria. For the record, I agree entirely with his overarching, macroeconomic point. I’m just nit-picking the technical details here (which I believe is what he’s looking for).

Here are Nick’s definitions.

Let G be a game, let S be a set of strategies in that game (one for each player), and let S* be a Nash equilibrium in that game. Assume a large number of players, and a continuous strategy space, if it helps (because that’s what I have in my mind).

Suppose that a small fraction n of the players deviate by a small amount e from S* (their hands tremble slightly), and that the remaining players know this. Let S*’ (if it exists) be a Nash equilibrium in the modified game.

  1. If S*’ does not exist, then S* is a fragile Nash equilibrium.

  2. If S*’ does not approach S* in the limit as n approaches zero, then S* is a fragile Nash equilibrium.

  3. If S*’ does not approach S* in the limit as e approaches zero, then S* is a fragile Nash equilibrium.

  4. But if S*’ does exist, and S*’ approaches S* in the limit as n or e approaches zero, then S* is a robust Nash equilibrium.

[This began as a comment on the original post so I will proceed in the second person]


I think the wheel you are reinventing is basically the idea of trembling hand perfection. I'm not quite an expert on that, but I think I know enough game theory to go out on a limb here. So, taking the definition from Wikipedia:

First we define a perturbed game. A perturbed game is a copy of a base game, with the restriction that only totally mixed strategies are allowed to be played. A totally mixed strategy is a mixed strategy where every pure strategy is played with non-zero probability. This is the “trembling hands” of the players; they sometimes play a different strategy than the one they intended to play. Then we define a strategy set S (in a base game) as being trembling hand perfect if there is a sequence of perturbed games that converge to the base game in which there is a series of Nash equilibria that converge to S.
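To make the definition concrete, here is a quick check of the standard two-by-two illustration (the payoffs are the usual textbook ones, assumed purely for illustration; this is not Nick's game):

```python
# Trembling-hand check for a standard 2x2 example (payoffs assumed):
# row chooses U/D, column chooses L/R, with
# (U,L)=(1,1), (U,R)=(2,0), (D,L)=(0,2), (D,R)=(2,2).
# Both (U,L) and (D,R) are Nash equilibria, but only (U,L) survives trembles.

row_payoff = {("U", "L"): 1, ("U", "R"): 2, ("D", "L"): 0, ("D", "R"): 2}

def expected_row_payoff(action, prob_L):
    """Row's expected payoff when column plays L with probability prob_L."""
    return prob_L * row_payoff[(action, "L")] + (1 - prob_L) * row_payoff[(action, "R")]

# At (D,R): suppose column trembles, playing L with small probability eps.
eps = 0.01
u_up = expected_row_payoff("U", eps)    # 1*eps + 2*(1-eps)
u_down = expected_row_payoff("D", eps)  # 0*eps + 2*(1-eps)
# U strictly beats D under any tremble, so D is not a best response in the
# perturbed game and (D,R) is not trembling-hand perfect.
print(u_up > u_down)  # True
```

The other equilibrium, (U,L), survives: U stays a strict best response for the row player no matter how small the column player's tremble is.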

I think the main difference between what you are doing and the TH concept is that you are limiting the errors to a "small fraction" of the players, whereas the TH definition above assumes that all players have some probability of making a mistake. (It also assumes that all players know this, not only the "non-trembling" players, which is only natural since there aren't any such players.)

Now, I believe your game will pass both the traditional trembling-hand perfect criteria and your modified "robustness/fragility" criteria for the same reasons, but let's work with the standard modification since we don't have to deal with two types. So let us assume that each individual i chooses a "target" speed Sti and that their actual speed is Sti + ei, where ei is an error with some distribution; assume that everyone's errors are identically distributed and everyone knows the distribution.

Now there are two issues here. First, there is the issue of the number of players. If it is finite, I believe (though I haven't done the math) that the game will break down even in its original form, because when everyone is going the speed limit, any individual driver will be able to change the average slightly by changing their own speed, and will therefore be able to get paid by doing so, and so everyone will want to do this. (Although there might be (in fact, I bet there would be) an equilibrium where half of them drive over the speed limit and half drive under, and the average speed is S*.)

However, if we assume an infinite number of players, then this won’t be a problem and the equilibrium (the one in question that is) to the base game will be as you say. However, now we have another issue to deal with.

First of all, let me say that the thing which makes TH difficult to deal with is the bit "there is a sequence of perturbed games that converge to the base game," which could mean a lot of different things. But let us assume that the sequence we are interested in is e converging to zero. The problem here is that the thing that matters to each individual's payoff is the average speed. And if the mean of e is zero, everyone is choosing St=S*, and there are an infinite number of them, then the average speed Sbar will always be exactly S* and the equilibrium will work no matter what the distribution of e (so long as it is mean zero). This is because the sample mean converges to the population mean as the sample size approaches infinity. (And note that if the mean of e is not zero, I'm pretty sure they can all just adjust their targets to account for it and you will still have an equilibrium.)

I believe this will be the case in your formulation as well since a fraction of the infinite number of players will still be an infinite number and the distribution of the mean of their errors will still be degenerate. So essentially we have an equilibrium that doesn’t work under any circumstances with a finite number of drivers and is not ruled out under any circumstances with our proposed refinements.
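The degenerate-mean point is easy to check numerically. A minimal sketch, with an assumed uniform error distribution:

```python
import random

# Sketch: each of N drivers targets the speed limit S* and adds an i.i.d.
# mean-zero tremble (assumed uniform on [-e, e]). The standard deviation of
# the *average* speed shrinks roughly like 1/sqrt(N), so with infinitely
# many drivers the average is exactly S* and the trembles never move it.
random.seed(0)

def sd_of_average_speed(n_drivers, e, s_star=60.0, trials=300):
    """Std. dev. of the average speed across simulated days."""
    means = []
    for _ in range(trials):
        day = [s_star + random.uniform(-e, e) for _ in range(n_drivers)]
        means.append(sum(day) / n_drivers)
    mu = sum(means) / trials
    return (sum((m - mu) ** 2 for m in means) / trials) ** 0.5

for n in (10, 1000, 10000):
    print(n, sd_of_average_speed(n, e=5.0))
```

The printed spread collapses toward zero as N grows, which is exactly why a fraction of an infinite number of trembling drivers still can't budge the average.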

What we need in order to rule this out is some way of saying that the average speed Sbar might vary for some reason. For instance if there were some error e which were random but applied to every driver (like weather or traffic or something, or “real” shocks in the case of the macroeconomy), that would probably blow it up in a way that would prevent it from converging, although I think you might be able to find one, like I said above, where some people choose a target a bit over and some a bit lower than S* and the amount over/under decreases as the distribution of e collapses to zero, which could be said to be “converging to S.”

This is interesting stuff though, I’m glad you got me thinking about it. There is a sort of fundamental dilemma underlying this I think, which is that much of game theory (and economics) is built around finding conditions under which everyone is indifferent and calling it an equilibrium. For instance, any mixed-strategy equilibrium basically requires the payout function to be flat over some range of strategies. But that ends up looking a lot like the kind of thing you want to rule out when you start looking for some kind of “stability” criteria.

So what we kind of want to do is have a way of determining whether the nature of an equilibrium is such that if you “unflattened” it a little bit, each individual would have a maximum in the general neighborhood of that equilibrium that is somehow qualitatively similar as opposed to “unflattening” it a little and finding a minimum there which is sort of the case we have here. However, this is a highly untechnical way of putting things.

In this case, we only get an equilibrium to the base game there because we made the payoff function flat in that equilibrium by assuming an infinite number of players. But doing that makes other things “flat” in a sense (makes the distribution of the average speed collapse to the target speed) which makes it hard to rule out. What I think you and I would both like to say is something like “let’s assume a ‘large’ number of players such that the effect each of their speeds has on the average is functionally zero but that there is still some random variation in the average.” Then we could say that even a slight variation in the average would torpedo the equilibrium and we would be happy. But man it’s hard to do that rigorously! (I had a similar problem in my dissertation which I never really solved.)

Another thing you could probably do for this particular case is put it in the context of a dynamic game and put some restriction on people's beliefs, like: everyone observes the average speed of the previous day and chooses their target speed based on the assumption that it will be the same today. Then ask what would happen if you had one day where the average speed were slightly above or below the speed limit. Would it work back toward the equilibrium or would it shoot off to someplace else? Here, I think obviously, it would do the latter. It's just that with an infinite number of players and an error with mean zero, we can't get it to depart from the equilibrium in the first place.

Incidentally, I have been working on a bit of an apology for the neo-Fisherites. I agree about the “90 percent snow job with a tiny pebble of wrongness” analysis (great line by the way) but I think there is a kernel of solid intuition in there, it’s just being applied carelessly. I’ll have that soon.

Walras with Money

October 2, 2014

As I've been saying, in the standard Walrasian model you don't get absolute prices, you get only relative prices, and you have to apply an arbitrary restriction in order to make them look like absolute prices (like requiring that all prices sum to 1, or something similar); these relative prices can be multiplied by any scalar (a "price level") without changing the solution. So what if, just for fun, we try to add money in, make it an economy where all goods are traded for money, try to get a price level, and see if we can characterize a general glut? This is, I suspect, exactly what most economists have in mind when they imagine a general glut, and I assume it has been done before, but I don't recall seeing anyone put it explicitly in this context.

Let’s say you have an economy with n “real” goods and you also have money. The quantity of all of the goods produced as well as the quantity of money are determined exogenously. People only care about the quantity of each good they consume as well as their (average) real money balances (m/P) where m is the quantity of money an individual holds and P is the price level somehow defined. (For instance, we might let P be the sum of all nominal prices or the average nominal price or something along those lines such that we can characterize the price vector as a vector of relative prices–somehow defined–multiplied by the price level). So we have utility functions that look like this.

U(X1, X2, …, Xn, m/P)

And assume, for ease of exposition, that this function is separable in money so that we can write:

Ux(X1, X2, …, Xn) + Um(m/P) = U(X1, X2, …, Xn, m/P)

And everyone has a budget constraint that looks like this.

Sum[Pi(Xi - Xi')] + m - m' = 0

Where Xi is the quantity of good i consumed, Xi’ is the initial endowment of good i, Pi is the price of good i and m’ is the initial endowment of money (nominal).

Now assume that you have a Walrasian auctioneer calling out nominal prices until every market clears. If you take out the money part and just have Ux() and the Xs in the budget constraints, then you will get a vector of relative prices that clears all markets. If you say that one price is fixed too low, then you get excess demand for that good and excess supply of some other good(s). If you then add to the model by saying that people change their demands for other goods in response to the constraint on their ability to purchase the good with the fixed price and you then have the Walrasian auctioneer call out prices for the other goods until those markets all clear conditional on that constraint, then you have what Nick Rowe has been talking about.

But if you have no money and the Walrasian auctioneer calls out prices which are all too high what happens? The answer is: that question doesn’t make any sense. Without money, he is only calling out relative prices. It’s impossible for them to all be too high. If the supposedly “too high” prices are all exactly half of the supposedly correct prices, then they are the same prices and the markets all clear. If the relative prices change, then you have a case where there is excess demand for some good(s) and excess supply for some good(s) and what happens depends on how you alter the model from the original to account for the persistence of this phenomenon.
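This "can't all be too high" point is just homogeneity of degree zero of excess demand in prices, which is easy to see in a toy exchange economy (a sketch with assumed Cobb-Douglas preferences and endowments):

```python
# Sketch: two-good, two-consumer Cobb-Douglas exchange economy (all numbers
# assumed for illustration). Excess demand depends only on relative prices,
# so scaling every price by the same factor changes nothing.

endowments = [(10.0, 2.0), (2.0, 10.0)]   # (good 1, good 2) for each consumer
alphas = [0.3, 0.7]                        # Cobb-Douglas share spent on good 1

def excess_demand(p1, p2):
    z1 = z2 = 0.0
    for (w1, w2), a in zip(endowments, alphas):
        wealth = p1 * w1 + p2 * w2
        z1 += a * wealth / p1 - w1          # demand minus endowment, good 1
        z2 += (1 - a) * wealth / p2 - w2    # demand minus endowment, good 2
    return z1, z2

print(excess_demand(1.0, 2.0))
print(excess_demand(50.0, 100.0))  # same relative price, same excess demands
```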

In order to even consider the possibility of all prices being “too high” or “too low,” we have to change the model. We have to put money in. Luckily I did that already. So return to that formulation.

With money, the solution will be a vector of prices such that the sum of the excess demands for all real goods equals zero and everyone is holding their desired quantity of money. This means that the marginal utility of a dollar will be equal to the marginal utility of one dollar's worth of each good. This allows us to get an actual set of nominal prices (and by extension, a price level).

So let us assume that the relative price vector called out by the Walrasian auctioneer is the "correct" one (the one which would clear all markets in the case with no money). What if the price level is too low? Then real money balances are large, and even if the real goods are allocated efficiently, the marginal utility of a dollar's worth of some good will be higher than the marginal utility of a dollar of money balances for at least some people, and they will try to trade dollars for goods. Since the number of dollars is fixed exogenously, they can't all do this at once. There will be an excess demand for goods and an excess supply of dollars.

The only way to alleviate this situation will be for the Walrasian auctioneer to call out a higher price level. As he does this, the quantity of real money balances will fall (the nominal value stays the same but the price level rises) and their marginal utility will rise. At some point, the marginal utility of a dollar will be equal to the marginal utility of a dollar's worth of any other good (since we are assuming the equilibrium relative prices) and that will be the equilibrium price level—the level at which people are just willing to hold the quantity of dollars that exist.
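Here is a sketch of that equilibrium condition under an assumed separable log specification, Um(m/P) = ln(m/P): equating the marginal utility of a dollar of balances (1/m) with the marginal utility of a dollar's worth of each good ((1/Pi)(1/Xi)) gives Pi = M/Xi for a representative holder of the money stock M, so doubling M doubles every nominal price while relative prices stay put.

```python
# Sketch under assumed log utility: U = sum(ln X_i) + ln(m/P).
# Equating 1/m (marginal utility of a dollar of balances) with
# (1/P_i) * (1/X_i) (marginal utility of a dollar's worth of good i)
# pins down nominal prices: P_i = M / X_i.

def nominal_prices(money_supply, quantities):
    return [money_supply / x for x in quantities]

goods = [4.0, 10.0, 25.0]        # fixed quantities of each good (assumed)
p_low = nominal_prices(100.0, goods)
p_high = nominal_prices(200.0, goods)

print(p_low)    # [25.0, 10.0, 4.0]
print(p_high)   # [50.0, 20.0, 8.0] -- doubling M doubles every nominal price
# Relative prices are identical; only the price level moved.
```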

Conversely, if the Walrasian auctioneer calls out a price level that is too high, people will want to hold more dollars than there are and the only way to alleviate this is for the price level to fall. This is a general glut. If, for instance, the money supply contracts, prices will need to fall to bring things into equilibrium. If they can’t fall because they are “sticky” for some reason, then you may get a general glut in which the excess supply of real goods is offset by an excess demand for money.

Now does this contradict Walras’ Law? Not exactly. Since we changed the model, we have to change the characterization of the law before we can ask a question like that. If what you mean by “Walras’ Law” in this context is that an excess supply in the market for some real good, measured in dollars, must be offset by an excess demand in the market for another real good, measured in dollars, then no. If what you mean is that an excess supply of goods must be offset by an excess demand for something, potentially money, then yes. Is the latter characterization of the law meaningless? Maybe some would say yes but I think that a lot of people out there could benefit from carefully considering in what sense “Walras’ Law” applies in an economy with money and in my book, that makes it pretty useful.

For the record, this is pretty standard stuff, I don’t think I’m saying anything groundbreaking here. I also think there is more to the story but saying groundbreaking things is hard. I’ll get around to it eventually.


More on Walras’ Law

October 1, 2014

I have taken a hiatus from blogging to deal with moving, a new job, weddings, etc., and am trying to get back into the habit, so I figure I will finish up a post on Walras' Law that I mostly wrote a while ago.  The topic may be a little stale now but whatever.  After all, this debate seems to have been going on for years.  I have a bunch of outstanding business with Nick Rowe but am having difficulty putting it all together.  After this little warm-up, I will try to work through that backlog.

Following the latest [at the original conception of this post] installment from Nick Rowe, it is pretty clear to me that there are three distinct issues which are all mixing together in the discussion so I want to try to separate them.  I will go through them in increasing order of significance.

1.  Is Walras’ Law useless?

I say no but that’s because I’m a micro guy at heart (and in training).  And for the record, I think I got kind of a weak acquiescence out of Nick on this so I don’t think there is very much room between our views but just for the record, here is my argument.

This is the entry from the index of Mas-Colell, Whinston and Green (the standard graduate micro text).

Walras’ Law: 23, 27, 28, 30-2, 52, 54, 59, 75, 80, 87, 109, 582, 585, 589, 599, 601, 602, 604, 780

Why am I telling you this?  Because I’m trying to demonstrate that if you want to expunge Walras’ Law from the record, you will need to totally rewrite microeconomics.  You can’t solve the Walrasian model without it.  You can bad-mouth the Walrasian model all you want, I’m not saying it perfectly represents every aspect of a real economy but if you want to tear down the pillars of that model (rather than adding on to it) you are essentially taking a wrecking ball to the rock on which our church is built.  Some people will argue for doing that, for sure, but it’s a rather extreme position which I don’t think is what folks like Nick really want.

Now the real issue is some people like to misuse the law by applying it carelessly to other models without doing the necessary work to determine whether it actually makes sense or not in those contexts.  This, I think, is what Nick objects to.  I didn’t carefully go through all of the above sections but I would be willing to bet that nowhere in there does it say that Walras’ Law proves that if we observe a shortage in some market because the price mechanism is not functioning in the way specified in the model, then there must also be a surplus in some other market.
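For what it's worth, the textbook content of the law is just that budget constraints force the value of aggregate excess demand to zero at any price vector, equilibrium or not. A sketch in an assumed Cobb-Douglas exchange economy:

```python
# Sketch: Walras' Law in a two-consumer, three-good Cobb-Douglas exchange
# economy (all numbers assumed). Because every consumer spends exactly their
# wealth, p.z(p) = 0 holds at ANY strictly positive price vector.

endowments = [(8.0, 1.0, 3.0), (1.0, 6.0, 2.0)]  # two consumers, three goods
shares = [(0.2, 0.5, 0.3), (0.6, 0.1, 0.3)]      # expenditure shares, sum to 1

def excess_demands(prices):
    z = [0.0] * len(prices)
    for w, a in zip(endowments, shares):
        wealth = sum(p * wi for p, wi in zip(prices, w))
        for i, p in enumerate(prices):
            z[i] += a[i] * wealth / p - w[i]   # demand minus endowment
    return z

prices = (1.0, 3.0, 0.5)              # deliberately not an equilibrium
z = excess_demands(prices)
value = sum(p * zi for p, zi in zip(prices, z))
print(abs(value) < 1e-9)  # True -- budget constraints alone force p.z = 0
```

Note that the law says nothing here about what happens when some price is fixed and quantities are rationed; that requires changing the model, which is the point of the next section.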

2.  What if some price doesn’t adjust?

The Walrasian model is a model of price adjustment.  If you want to hold some price constant and ration quantity somehow, you are changing the model.  That’s fine, but you can’t take a “Law” from a different model and just try to slap it carelessly onto your new model.  If you fix the price of some good and put a quantity constraint on buyers of that good, you can find a vector of prices for the other goods such that all other markets clear given that constraint.  Whether this “violates” Walras’ Law is a nonsensical question because that law can’t be stated in the same way in the new model.

If you want to have an analogue for Walras’ Law in your new model, you have to redefine things.  The way I would go about doing this would be to treat it as a model of price adjustment in the markets for the n-1 goods, since there is nothing happening endogenously in the other market (at least nothing interesting, you have a kind of “corner solution” where you run into the constraint).  Then you would get a version of the law that applies in the subset of the market where the price mechanism is functioning in the same way that it functions in the original Walrasian model.

Alternatively, if you want to get a bit more esoteric, you can define excess demand for each good in real terms (in quantities of other goods).  This will complicate your model because you will need a lot more prices, but then you can take the price vector to be all prices, including the fixed price, and you will find that even when the remaining markets “clear” given the constraint, there is still some “excess supply” (assuming a shortage in the fixed market) of those goods relative to the good whose price is too high.  This is the sense in which Walras’ Law indicates something about such a market that is true but this phenomenon will not show up if you just look at any one of those markets and see if there is a shortage or surplus at the prevailing money prices (which is another reason to keep it, but only if you use it carefully).

This is all consistent with everything Nick has said but it is worth mentioning that the issue isn’t whether we think of it as one market for n goods or n markets for goods and money.  The issue is what constraints we put on people’s behavior and how we define things like excess demand and Walras’ law in the presence of these constraints.  The original model is set up in such a way that defining this in terms of money is equivalent (at least in equilibrium) to defining it in real terms and makes the model simpler.  But the reason it is equivalent is that when all prices can freely adjust, the marginal rate of substitution between any good and any other good has to be equal to the ratio of their prices in equilibrium so the marginal value of apples measured in dollars worth of bananas has to be equal to the marginal value of apples measured in dollars worth of papayas.  This means that instead of measuring the marginal value of each good in relation to each other good and getting a price of each good in terms of every other good, we can just measure the marginal value of each good in terms of dollars and get a price of each good in terms of dollars and have only n prices rather than n(n-1)/2 prices.  The whole matrix of relative prices in equilibrium can be expressed by this vector of dollar prices because of the equilibrium conditions on all of the marginal rates of substitution.

But once you stick in a price that doesn’t adjust, this will not be the case in equilibrium.  The marginal value of a good will be equal to the same dollar amount of every good whose price is free to adjust but not of the good whose price is fixed.  So how do we define excess demand?  In real terms or nominal terms?  The answer is: it doesn’t matter, it’s just two ways of describing the thing that happens in the model.  The important thing is whether we understand what is going on in the model.  If you just memorized Walras’ Law, without really appreciating what it means and tried to clumsily apply it to every model, then you probably don’t understand.  But by the same token, if you were never taught Walras’ Law at all, then you probably never understood the original model and you still probably don’t understand.  (Neither of these is meant to apply to Nick, who, I think, completely understands what is going on in the model.)

3.  What is the role of money in all of this? (And is a general glut possible?)

While the most recent rounds of Walras-bashing have centered mainly on the issue above, the original debate (which started years ago) was mostly about general gluts.  Walras’ law seems to imply that such a thing is impossible, yet we seem to observe them.  This is a different question from the one above.  Above the question is can one market be out of equilibrium while all others are in equilibrium?  Here, the question is can all markets be out of equilibrium in the same direction (excess supply) at the same time?

This is where the role of money becomes critical.  The Walrasian model is not a model of money.  Money is used as a rhetorical device to streamline the model.  There is no attempt made in that model to characterize the demand for money, the velocity of money or anything like that.  It is assumed that people don’t care about money, they only care about “real” goods and that money is nothing more than a mechanism which somehow allows the market to work perfectly, eliminating any frictions and allowing the “Walrasian auctioneer” to call out more complicated matrices of relative prices as a relatively simple vector of nominal prices.  (Though it is worth noting that this does restrict the set of possible relative prices.)

So this raises not the question "does Walras' Law hold in the real world?" but the question "is that really how money works?"  And the answer to that is obviously no.  Since the answer is no, it is dangerous, again, to take a simple conclusion from such a model and clumsily try to apply it to the real world.  But, also again, that doesn't make the model worthless.  Another question one might ask is: does money work kind of like that sometimes?  This is sufficiently vague to admit of no concrete answer but there is room to argue in the affirmative, I think.  A better question is: how does the actual nature of money differ from that assumed in the model, and what are the possible consequences of that difference?  It's questions like this that allow us to climb onto the shoulders of giants like Walras and hopefully see a bit further over the horizon.

Of course, I have a lot of thoughts about that which I will mostly avoid getting into here.  But here is a question that I think is worth pondering.  If a technology were developed tomorrow that allowed barter to be carried out frictionlessly, like with the Walrasian auctioneer, what would happen to the value of money?  Would it go to zero?  (Hint: no.)




Further Reflections on Austrian Economics

July 4, 2014

Oddly enough, the appearance of Major Freedom in the comments section of my last post has got me wondering if I have got Austrians all wrong.  I used to see that guy comment on other blogs and always completely miss the point and go on and on about stuff that made no sense.  Some people would always agree with him and they would go down some “Austrian” rabbit hole and everyone else, including me, learned to just skip those long blocks of text.  But since I felt obliged to respond, at least initially, on my own blog I had to go through the ordeal of trying to make counter-arguments to arguments that barely grazed the issues I had tried to address in the first place and it was very frustrating.  And then I started wondering: is this the type of person who has shaped my view of Austrian economics?

The short answer is no, but the long answer, I think, is kind of nuanced.  A lot of it comes from people talking on TV, like Peter Schiff.  And yes, I have read some Hayek and some Rothbard and some stuff like that.  I thought Hayek had some interesting ideas.  I thought Rothbard was mind-numbing.  I don't really know what Mises thought, I just know what they say about what he thought.  So it's not just Major Freedom and company.  Although, I am sure that for a lot of people, such encounters account for the vast majority of their run-ins with so-called "Austrians."  And I think, ironically, that this accounts for much of the severe disdain most "mainstream" economists have for all things "Austrian."

So I think I have dug one layer deeper than most because I am a libertarian, and so I have had quite a bit of exposure to somewhat more serious, less troll-like "Austrians."  But commenter John S. points out (and I have heard from some others) that there are really two schools of Austrians: the Auburn school, which is essentially what I am complaining about, and the more reasonable GMU school, and apparently they don't get along very well.  So I can't help but wonder if I am being unfair to the latter.  I am trying to look into it a little.

I watched this debate between Caplan and Boettke, which I remembered watching years ago and finding interesting.  Essentially, Caplan represents my view perfectly in every respect.  And then Boettke comes out and, as far as I can tell, doesn't really disagree with anything Caplan says.  I get the impression that they both agree on practically everything except what to call each other.  Boettke thinks Caplan is an Austrian and Caplan thinks Boettke is neoclassical, and while Caplan makes points about methodology, Boettke talks about the history of economics and who said what when and a bunch of stuff that I don't really know about but that to me is not that important.  I care about the methodology.  And that is what the people in the Auburn camp are always griping about.  However, from what I can discern, Boettke seems pretty reasonable to me (though I do think "radical uncertainty," or whatever, is not a useful concept).

So my position is essentially this.  Speaking solely in terms of micro, the basic, neoclassical, consumer choice model (and the model of markets which is built on it) is good, and the arguments I have heard from so-called Austrians against it are all dumb.  Now what I wonder is: do Boettke and the GMU "Austrians" agree with me that this is a perfectly good model, or do they agree with the Auburn guys that it is all garbage?  And on a side note, do they agree that diminishing marginal utility is a logical necessity or, for that matter, that it is important in any way?

I get the feeling that they aren’t completely comfortable with this model because Caplan also wrote this which makes many of the same points I tried to make, along with some others which are also excellent, and he knows the GMU guys pretty well I think.  On a related note, maybe I should have been talking about “indifference” instead of continuous quantities.  Indifference doesn’t drive people to action.  Fine, but people act until they reach a state of marginal indifference.  That’s actually pretty much the central tenet of neoclassical economics.  But I digress.  Also, I probably need to learn a bit from him about how to get along with Austrians better.

But if the answer is that they agree with me about this model, then I am left wondering what is the difference between us.  If they say “nothing, you’re an Austrian” then I am unsatisfied and I would say “no you are mainstream” and we would be having what I consider a pointless argument (very similar to the debate above).  There are still some issues regarding the degree to which our analysis should be driven by preexisting normative beliefs.  Maybe I will say more on that later but for the most part, in my mind, if you drop all the criticisms of mainstream economics (at least the core of micro) and just say “we want to look into the role of the entrepreneur more” or whatever, then I have no problem but then why make a point of differentiating yourself from the mainstream so much?

At any rate, though I am genuinely interested in answering these questions, I think they are all dodging the real issue, which is those blasted people.  It may be the case that the GMU Austrians are not that nuts, but they aren't the ones on TV or blowing up the comment sections of every econ blog.  And if you go to any kind of libertarian gathering and try to talk to somebody who fancies themselves a part-time Austrian economist (which will be half the people there), chances are they will be suffering from a lot of confusion brought on by the popularity of that kind of thinking in those circles.  So to me that is the issue that must be dealt with.  It would be nice if the GMU types could draw people away from that but they don't seem to be very successful at this.  I don't know what the answer is.  I just know that it's a problem.  And admitting you have a problem is the first step toward recovery.


More on Diminishing Marginal Utility (or: This is Why Austrian Economics Drives Me Crazy)

July 2, 2014

For those who are new to this blog, I am a pretty staunch libertarian, free-market kind of guy.  So naturally, there was a point in my life when I gravitated toward Austrian economics.  But the thing that really drove me away was when I realized what they believe about utility, especially “diminishing marginal utility.”  There are quite a few things that I think Austrians are wrong about (hyperinflation and Cantillon effects for instance) but the utility thing was special because it is a case where the confusion is plainly obvious if you really understand the mainstream model of consumer choice.

Of course, I figured I would just explain this to them and we would all live happily ever after.  Needless to say, that never seems to work.  But at this point it has become sort of my white whale: convincing an Austrian–just one Austrian–that diminishing marginal utility is nonsense.  Then recently I stumbled upon this paper on Mises.org in which an Austrian explains exactly what I have been trying, in vain, to explain.  So, what the hell, might as well give it one more try.

Here is a rough outline of the debate.

1.  Austrians claim that utility is inherently ordinal and that cardinal utility is nonsense.

Mainstream economists agree (at least officially) and have a model in which utility is purely ordinal but Austrians don’t realize it because it doesn’t look ordinal to them.

From my first year graduate text:

Toward the end of the nineteenth century, perhaps initially from introspection, the concept of utility as a cardinal measure of some inner level of satisfaction was discarded.  More importantly, though, economists, particularly Pareto, became aware that no refutable implications of cardinality were derivable that were not also derivable from the concept of utility as a strictly ordinal index of preferences.  As we shall see presently, all of the known implications of the utility maximization hypothesis are derivable from the assumption that consumers are merely able to rank all commodity bundles, without regard to the intensity of satisfaction gained by consuming a particular commodity bundle . . .

. . . To say that utility is an ordinal concept is therefore to say that the utility function is arbitrary up to any monotonic (i.e., monotonically increasing) transformation.

2.  Austrians don’t seem to believe in assuming things that can’t be proven to be necessarily true logically.

To this end they selectively reject whatever hypotheses mainstream economists arrive at because they all require some set of assumptions.

3. The one thing that Austrians feel comfortable claiming that they can prove logically without any assumptions whatsoever (except that people act) is the “law of diminishing marginal utility.”

In order to arrive at this conclusion logically, they construct a framework with “means” and “ends” and postulate that a person will always use a good (means) for the highest valued use (end) first and therefore, as they get more of the good, the value of the marginal use falls and thus you get diminishing marginal utility.  However, this is not a conclusion which logically must be true.  It is, rather, the result of an assumption which Austrians don’t notice that they are making.  McCulloch is somewhat unusual among Austrians in that he realized this and pointed it out in 1977.  This is why the paper caught my attention.  Now that we know an Austrian said it, can we all agree that this is the case?

[See pp. 251, 252 (you can thank for copy/paste protecting the document….)]




Notice particularly the line: “Bilimovic argues as if these are valid deductions from a rank-ordering on W, but that is not the case unless we assume that the wants are unrelated.”  So there you have it, an Austrian saying exactly what I have been trying to say.  But then, having said that, he goes on to assume that “unrelatedness” and continues to deduce the law of diminishing marginal utility based on that assumption.  This wouldn’t be that annoying if he didn’t then say this:

Note that the Austrian principle of diminishing marginal utility is a theorem, rather than an assumption as with Gossen, Jevons and Walras. [p. 255]

Okay, it’s a theorem, but it is only a theorem in the sense that it follows directly from the assumption of unrelatedness of uses.  In other words, in no way does it represent something that logically must be true.  It is just something that is true if the assumptions made to get to it are true.  And we know that that assumption need not be, and probably usually isn’t, true.  This doesn’t make the theorem meaningless but it does make it no different from all of the conclusions of the mainstream model which Austrians like to claim are useless….
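The role of the unrelatedness assumption can be made concrete with a toy example.  In the sketch below (all uses, names and numbers are my own, purely illustrative), when wants are unrelated, assigning each successive unit to the next-highest-valued remaining use mechanically produces nonincreasing marginal values.  But a related want, one that requires two units, makes the marginal value of the second unit higher than the first:

```python
# Unrelated wants: each unit of a good goes to the highest-valued remaining
# use, so the sequence of marginal values cannot rise.  (Uses and numbers
# are invented for illustration.)
use_values = [10, 7, 4, 1]  # drinking, cooking, washing, watering plants
marginal_unrelated = sorted(use_values, reverse=True)
assert all(a >= b for a, b in zip(marginal_unrelated, marginal_unrelated[1:]))

# A related want breaks the deduction.  Suppose washing is worth 4 and needs
# one unit of water, while brewing is worth 9 but needs TWO units.  The best
# you can do with one unit is wash (value 4); with two units you brew (value 9).
total_value = {0: 0, 1: 4, 2: 9}
marginal_related = [total_value[n] - total_value[n - 1] for n in (1, 2)]
print(marginal_related)  # [4, 5]: marginal utility rises from the 1st to the 2nd unit
```

Nothing here is deduced from action alone; the result flips depending on what you assume about how the wants relate, which is exactly the point.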

And what’s more, the assumption made here is more restrictive than those typically made in the mainstream model of nonsatiation, substitution and quasi-concavity.  So, essentially there is no intellectual reason to cling to this means-ends framework and the notion of diminishing marginal utility.  Frankly, I don’t even understand what Austrians think the significance of diminishing marginal utility is.  If I had to guess, I would suspect that they might say that it implies downward sloping demand (and in some cases upward sloping supply) curves, and that is sort of true but the mainstream model does it much better.

It is diminishing marginal value that implies downward sloping demand.  Value, meaning the willingness to trade off one good for another.  For this to be diminishing, you only need to have the ratio of the marginal utilities (the marginal rate of substitution) diminishing.  It is possible for this to be the case even if there is increasing marginal utility of both goods.  Now, it is true that diminishing marginal utility of all goods will give you diminishing marginal value, so in that sense, diminishing marginal utility does imply downward sloping demand curves.  But you don’t need to go that far.  All it takes is quasi-concavity.  That is why the mainstream model assumes quasi-concavity and not diminishing marginal utility, because it is the smallest assumption required to get the type of refutable implications that the model gets.  So let’s review.
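A quick numerical sketch of that claim (the utility function is invented purely for illustration): with U(x, y) = (xy)², the marginal utility of x is increasing in x at any fixed y, yet the MRS = y/x still diminishes as you slide along an indifference curve:

```python
# U(x, y) = (x*y)**2: marginal utility of x INCREASES in x (holding y fixed),
# yet the marginal rate of substitution still diminishes along an
# indifference curve.  (Function chosen purely for illustration.)

def utility(x, y):
    return (x * y) ** 2

def mu_x(x, y):            # dU/dx = 2*x*y**2
    return 2 * x * y ** 2

def mu_y(x, y):            # dU/dy = 2*x**2*y
    return 2 * x ** 2 * y

def mrs(x, y):             # marginal rate of substitution = MU_x / MU_y
    return mu_x(x, y) / mu_y(x, y)

# Increasing marginal utility of x at fixed y = 1:
print([mu_x(x, 1) for x in (1, 2, 4)])           # [2, 4, 8]

# ...but diminishing MRS along the indifference curve x*y = 4 (y = 4/x):
print([mrs(x, 4 / x) for x in (1.0, 2.0, 4.0)])  # [4.0, 1.0, 0.25]
```

So the demand-relevant property (diminishing MRS, i.e. quasi-concavity) holds here even though marginal utility is rising in both goods.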

1. Diminishing marginal utility is an assumption.

2. It is more restrictive than the assumptions in the mainstream consumer choice model.

3. Therefore, the mainstream model is better.

This follows logically, therefore you can’t question it.  If you don’t see the logic, I will just dismiss you as someone who clearly doesn’t understand logic.  See what I did there?  But seriously, Austrians, this is an intervention.  I’m telling you this for your own good because I love you, I love the things that you love like free markets, property rights and individual liberty, and I want what’s best for you.  The mainstream model is just a better version of your model.  It’s that simple.  Let me put snarkiness aside for a moment and try to explain why.

1. Means/ends is pointless.

The means/ends framework adds nothing.  It only makes it easier to confuse yourself and others.  Economics is about choosing between scarce alternatives.  That means we need alternatives, and we need preferences over those alternatives.  That’s it.  If you are choosing quantities of two goods, all we need to know (by which I mean assume) are your preferences over different combinations of those goods.  It makes no difference why your preferences are what they are or what “ends” you are applying the goods to.

The only thing the means/ends framework accomplishes is to take the case where a consumer has preferences over combinations of the good and make it into a two-stage problem where the stages are essentially identical.  Instead of just saying “they have certain preferences over combinations of the good” you say “they have certain preferences over different ends and the goods can each be used for different ends in different ways.”  But the only thing that matters is their preferences over the different combinations of goods because that is the decision we are trying to model.  So you try to logically deduce what those preferences look like based on what you assumed about the preferences over “ends” and the connection between the ends and the means.  Then you claim that what you are saying about their preferences over combinations of goods is not an assumption, that it follows from logical deduction.  But it only follows logically from the (possibly implicit) assumptions you made about their preferences over ends and the connection between ends and means.  You just buried the assumptions one stage deeper.  But this gains you nothing; it just makes everything needlessly complicated.  The only thing it accomplishes is to make it easier for you to apply false reasoning in connecting the two levels: implicitly assuming something that is not necessarily true, deceiving yourself into thinking that it is necessarily true because you don’t notice that you are assuming it, and then believing that you have proven something which you haven’t.

So why not get rid of all of that nonsense and just say that people have preferences over different combinations of goods?  I think there are two possible Austrian answers to this.  One is that this is methodologically unacceptable because we are not allowed to make assumptions about people’s preferences that aren’t objectively and immutably true.  But the Austrian making this argument must not have been reading carefully, because that is what you are doing anyway and the assumption you are making is more restrictive than the one I am making.  The other argument is that then we wouldn’t be able to sit around and talk about how ridiculous mainstream economics is, because then our model would be exactly like theirs.  (Okay, so maybe a little snarkiness.  A fish has gotta swim.)

2. You need more than just action.

Nothing follows logically from the single axiom that people act.  Their preferences matter.  Since we can’t observe preferences but only action, there is nothing we can say about those preferences that must be true a priori.  And if you can’t say anything about those preferences that must be true, you can’t say anything about action, period.  If you try, you will just end up assuming something without realizing it.  A careful and responsible approach to modeling action must be very explicit about what it assumes and try to cut those assumptions down to the smallest, simplest, most realistic and least restrictive assumptions possible for the model to “work” and tell you something interesting.  This is what the mainstream model has done and–I can’t stress this enough–our assumptions are less restrictive than yours!

3.  Continuous quantities!

First of all, discrete quantities are not more realistic.  Gasoline, flour, tap water, electricity, labor, cheese, ground beef, and a million other goods are actually measured in continuous quantities.  But more importantly, the consumption of any good, properly understood, should be modeled as consumption per some unit of time.  So it’s not the number of cars you buy on a given day on the horizontal axis of the “cars” market, it is the average number of cars you use up in a year or something like that, which is a rate and is inherently continuous.  But even more importantly, it makes everything so much easier and more sensible to use continuous quantities.  I sometimes wonder if you are purposely sabotaging yourselves by forcing yourselves to work with a model that is so unmanageable that no worthwhile conclusions can possibly be drawn from it.  It doesn’t have to be that way.

4.  It really is ordinal utility.

Just because we use a function to represent preferences doesn’t mean they are cardinal.  The numbers have no significance in the model beyond identifying which bundles are preferred to the one in question, which ones it is preferred to and which are neither (in which case the person is indifferent between them).  That is all the numbers mean.  You can plug in any function that preserves the rank ordering and you will get the same results.  This is something we are aware of and are careful about (at least some of us are).  Representing ordinal preferences this way allows us to apply a much higher degree of logic to the problem in much simpler ways than your framework.  This is a benefit, not a drawback.  Typically, we don’t even assign numbers, or even a particular function; we just say “let there be some function which conforms to the minimum necessary assumptions mentioned above.”  The fact that we put an actual function to it with numerical values in order to teach undergraduate students does not make the underlying model cardinal.
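The rank-ordering point is easy to check numerically.  In this sketch (the bundles and functions are toy examples of my own), applying a monotonic transformation, here a log followed by a shift, leaves the ordering of bundles untouched:

```python
import math

# Ordinality check: a monotonic transformation of a utility function
# preserves the ranking of bundles.  (Bundles and functions are toy examples.)
bundles = [(1, 4), (2, 2), (3, 1), (4, 4)]

def u(x, y):
    return x * y                      # one representation of the preferences

def v(x, y):
    return math.log(u(x, y)) + 100    # monotonic transform: log, then shift

rank_u = sorted(bundles, key=lambda b: u(*b))
rank_v = sorted(bundles, key=lambda b: v(*b))
print(rank_u == rank_v)  # True: both indices order the bundles identically
```

The numerical values of u and v are wildly different, but every refutable implication of the model depends only on the ordering, which is identical.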

So there it is.  If you acquiesce on these points, you arrive at the standard mainstream model.  This has been an Austrian intervention.  Sure it’s one guy doing an intervention on a whole gang of people who all act as a mutual support group for each other, and yes, that does seem to run counter to the established rules for interventions.  So maybe it’s wishful thinking to expect it to be successful.  But that doesn’t mean I can’t waste my life trying.  Maybe if I can get through to one person, and then he can get through to one person, one day all of us true, old-school, ordinal-utility types will be able to band together and have an intervention with the Scott Sumners of the world when they say things like this (good grief!):

As an aside, I believe about 90% of all negative and positive utility in life occurs during dreams, as the feelings tend to be more intense than during waking hours.  (We forget most dreams.) It is only the bigotry of awake people (who control the printing presses) that privileges waking life.


Scarcity is Real (But it’s not What You Should Be Afraid of)

April 29, 2014 7 comments

[Note: for some reason, when I post this it always takes out the spaces between paragraphs.  I can’t seem to find a way to fix this and wordpress refuses to respond to my help request….]

I recently came across a WSJ article entitled “The Scarcity Fallacy.”  Since one of my biggest beefs is with people (typically on the left) denying scarcity, it immediately got my dander up.  However, the article ended up being nothing like what I was expecting.  As it turns out, it was a critique of the perpetual doomsday predictions made by environmentalists, which I completely agree with.  But I still don’t like the way he frames it as a question of whether “scarcity” is or is not real, with environmentalists on the pro-scarcity side and economists on the anti-scarcity side.  After all, scarcity is the entire foundation of economics.  It’s just that we mean something different by the word than what most people think of.

Here is how the dictionary defines scarcity:

scar·ci·ty: noun, plural scar·ci·ties.

1. insufficiency or shortness of supply; dearth.
2. rarity; infrequency.
This seems to be what most people have in mind.  Alternatively, this is what my intro text has to say about it.

The term scarce means that there are not enough of the items humans find desirable to satisfy everyone’s wants.  If goods were handed out free to all who wanted them in unrestricted quantities, there would simply not be enough to go around. . .
Economics is concerned with this central issue.  Economics is the study of how scarce resources, that have alternative uses, are allocated amongst competing ends. . .
. . . it is impossible to enact laws that eliminate the underlying scarcity of goods and resources.  The horrible truth is that scarcity is a pervasive empirical fact about the world.  It is caused by the demands on the world’s resources by consumers of those resources–mainly humans–in amounts greater than the earth would produce on its own.  We cannot legislate scarcity out of existence any more than we can abolish the law of gravity.
Got that?  So if you are an economist, scarcity is the starting point of any analysis.  If a good weren’t scarce in an economic sense, there would be no reason for concern about running out.  But scarcity doesn’t occur when the quantity available falls below some arbitrary level that causes it to be deemed “rare” or of “insufficient supply.”  From the moment people figured out that you could use oil to make kerosene and burn it to light your house at night, oil was a scarce resource, even when it was basically bubbling up from the ground “Beverly-Hillbilly style.”
The real debate has two components.  On the surface, the question is: is the scarce nature of a good going to become an acute and severe problem on a societal level?  So your peak-oil types would have you believe that at some point we will suddenly “run out” of oil and then all sorts of catastrophes will follow.  The other side, which Ridley calls “economists,” says it’s no big deal because we will think of something else.  But this is really not the fundamental issue either.
The real debate underlying all of this is about the best way of dealing with scarcity.  There are essentially two sides.  One side I will call “marketeers.”  This side thinks that the allocation of scarce resources is generally best left to markets.  This is the side I am on but, sadly, I think it is incorrect to suggest that most economists are on this side.  On the other side are “central planners” who think that if people are left alone, they will collectively wander carelessly into some catastrophe and that the government needs to step in at every turn to make sure they don’t do this.
The really sad thing though is that I don’t think the environmentalist types (ecologists, climatologists, and so on) really understand markets.  They get that we are using resources and that it is possible to use them up.  They look at current rates of usage and trends over time and try to extrapolate these into the future in some empirical way, and if that leads to the conclusion that we will use everything up by a certain time, they freak out and go all Chicken-Little on us.  In short, they imagine that resources are allocated in an arbitrary way.  And if the way they are being allocated (which they assume is arbitrary) doesn’t seem like the ideal way to them, they naturally want the government to intervene and arbitrarily reallocate them in the way that they think is best.  (And of course, this will require the government to maintain a staff of ecologists, climatologists, etc. to perpetually determine the right allocations.)
But market allocations aren’t arbitrary.  Markets tend to allocate goods to their highest-value use.  And in the case of temporal allocation, they are a mechanism for aggregating estimates about the future.  Take oil for instance.  If we had a free market for oil (we don’t but whether what we have can be approximated by a free market is debatable), then you would have a whole bunch of people making calculations about the current and future supply and demand for oil and the market would aggregate those calculations into a price.  If you thought the market price was too high or too low, you could enter the market and put your thumb on one side of the scale or the other.  If you thought that we were heading recklessly toward “peak oil” you could buy oil, either in barrels or in the ground, or in the form of futures or options or whatever.  If you were right, you would make money.  But you would also push the current price of oil up and save some for future consumption out of current consumption.  The more people felt this way, the higher the price would go and the more would be saved.
This is how markets allocate scarce goods over time.  The difference between this and a group of bureaucrats making estimates and then forcing them on the economy is that the people who make the estimates have a financial interest in getting them right and the process of aggregation is open to anyone who wants to be involved not just an enlightened few who are hand-picked by the political elite.
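The mechanism described above can be sketched with toy numbers (all invented, not data about any actual oil market): a speculator who expects a higher future price keeps buying, and thereby bidding up, oil today until the expected gain no longer covers storage and interest:

```python
# Toy intertemporal-arbitrage sketch (all numbers invented).  Speculators buy
# today while the expected future price exceeds today's price plus carrying
# costs; their buying raises today's price until the gap closes, which shifts
# some oil from current consumption to future consumption.

def arbitrage_gap(price_now, expected_future, storage_cost, interest_rate):
    """Profit per barrel from buying now and selling later."""
    return expected_future - (price_now * (1 + interest_rate) + storage_cost)

price_now, expected_future = 80.0, 100.0
storage_cost, interest = 3.0, 0.05

while arbitrage_gap(price_now, expected_future, storage_cost, interest) > 0.01:
    price_now += 0.01      # each purchase nudges today's price up a little

# Today's price gets bid up toward the no-arbitrage level (100 - 3) / 1.05:
print(round(price_now, 2))
```

The higher current price is exactly the signal that rations current consumption and rewards whoever estimated the future correctly.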
Now when it comes to innovation, it is true that economists tend to have more faith in this phenomenon saving us from increased scarcity than environmentalists do.  But again, the real issue is what process is best to foster this innovation?  The central planners, again, would like the government to step in and subsidize it in a myriad of ways.  But the marketeers believe the market does this best as well.  The reasoning is fairly simple.  If you have a free market for oil and it becomes increasingly scarce, the price goes up.  When the price of oil goes up, the incentive to find alternatives increases.  This puts people to work trying to find those alternatives because there is a lot of money in it.  And the better the alternative solution, the more money you can make with it.  The better the prospects for alternatives, the less upward pressure there will be on oil prices.  So there are a lot of complicated problems involved but people have incentives to figure them out because if they do they can make money.
So it’s true that oil barons saved the whales and fertilizer and the internal combustion engine saved the rainforests.  But this didn’t just happen automatically because of some natural phenomenon called “innovation” that constantly marches forward as the calendar turns over or because some politician decreed that we need more innovation and diverted funds to it.  The incredible amount of innovation over the last 200 years happened because there were (relatively) free markets, and that meant that there was money in innovation.  There was money in innovation in a free market because goods were/are scarce (and getting “scarcer”).
So don’t be worried about running out of fresh water because of the free market.  It’s perfectly foreseeable that in the future people will demand fresh water.  If we are shaping up to be seriously short on it, you can bet that someone will come up with a way to get the salt out of it because it will become profitable to do so.  And don’t worry about running out of electricity because of the market; it’s just a matter of turning a generator.  We use oil and coal because they are currently the most efficient ways to turn generators, but if supplies get short and prices go up, people will find other ways to make them turn because the economic benefits of turning them are enormous.  But while the market allows nearly limitless potential for people to make improvements on all of these problems, the heavy hand of government offers nearly unlimited potential to screw up the workings of the market.  That is what you should be afraid of.  And yet, I can’t help but suspect that in spite of their constant attempts to manage innovation and the use of scarce resources, if things ever do go wrong, it will be “the free market” that gets blamed.

Technology and Outsourcing

February 24, 2014 11 comments

This post is inspired by the following question on Twitter.

Does anything need to be done to deal with the loss of jobs due to technology or outsourcing?

To which I replied:

 short answer: no (long answer more than 140 characters)

The poster then pointed out that the long answer would fit in a blog post, which was a good point, so here we are.  Let’s start with the long version of the short answer.

In a free market it would not be necessary to do anything about this.  This assertion is standard fare in introductory econ classes and is based on the notion of comparative advantage.  Essentially, the disconnect between economists and others on this issue comes down to a difference in thinking about the labor market.  Many people think about the market being made up of a fixed supply of jobs that have to be distributed among some number of workers.  They then conclude that these things reduce the number of jobs.  This is not a very good way to think about a market.

Economists see an exchange between two parties with a supply and demand for labor.  In a free market we expect “jobs” to exist if the cost of labor is lower than the value of what it produces.  Jobs that are worth more will pay more and people will find their most productive occupations by seeking the highest wage/compensation.

Essentially, if there were a totally free market, it would make no sense to say jobs were created or lost.  If someone invented a robot that could build widgets really cheap and all the widget makers got laid off, they would simply find other jobs.  Their comparative advantage would go from making widgets to making something else.  This may make those individuals worse off (though it may not) but the total output of society and the total benefits would increase because it would get cheaper to make widgets which would make them cheaper and everyone would be able to have more of them.  (The same argument applies to outsourcing.)

By the way, these concerns have been around for hundreds of years and so far technology has not destroyed the working class.

Now for the actual long answer.

As I said, the above analysis assumes a free market.  In reality, what we have is far from a free market.  There are a ton of laws and regulations which gum up the works of this process.  Here are some examples:

Minimum wage: If the value of your labor in your most efficient production is less than the minimum wage, you’re out of luck.  You might be willing to work making widgets for $6/hour and somebody might be willing to hire you and it might cost people in China $6.50/hour worth of other goods, but if the minimum wage is $7.25 then the widgets get made in China anyway and you end up unemployed.  This makes you worse off as well as the consumers of widgets who must pay more for them.

Unions: Similar situation.  You might be willing to work at a certain wage but the union won’t let you because they have “negotiated” a higher wage for themselves by creating barriers to entry to keep you out.

Licensing: Let’s say after losing your job at the widget factory your new comparative advantage is as a hair stylist.  But you can’t just go out and do that.  You have to go to beauty school for two years, pay a bunch of fees and pass some tests.  Don’t have the time/money for that?  Too bad for you.

Labor Laws: So you could make a widget cheaper than the Chinese but in order for someone to hire you to do so, they would have to pay a bunch of taxes, get insurance in case you stub your toe on the way to work and try to sue them, comply with a million OSHA regulations, provide you with healthcare, etc.  If the benefit of your labor is not great enough to make it worth it to them to do all of this, again, you are out of luck.

Here are some cases from an older Stossel show.  The moving company that had to get permission from its competition to enter the market is my personal favorite.

So the real answer to the question “should anything be done” is yes, we should liberalize the labor market by getting rid of all these ridiculous regulations which are designed to protect some special interest group.  If we did that, then no further meddling would be necessary.  Of course the other side will argue that we have to do a bunch of other interventions in the economy to try to mitigate the damage that they blame on things like outsourcing and technology but that damage is really the result of those things combined with all the interventions we already have.

Categories: Micro