*This essay is written in the style of Morgenstern’s “Thirteen critical points in contemporary economic theory” (1972)*

Methodological thinkers in economics usually translate decision theory, as it pertains to economic decisions, into choice theory, that is, rational choice theory. The cutting edge may go as far as to admit bounded rationality into its choice theory, but criticism rarely goes further than that. Choice theory as *the* decision theory of economic individuals is widespread and almost never questioned by practitioners of the science. In their first- and second-year foundational courses, graduate students are rarely exposed to caveats to traditional choice theory, except perhaps a chapter or two on Thalerian behavioral economics and the results of economic experiments of the type pioneered by Vernon Smith.

The standard departures from rational choice theory creeping into the mainstream of economic thought certainly never include a departure from analytical closedness. Real analysis is ever and always the formalism of choice theory and of the interactions between economic individuals. The edgiest of departures from mid-20th century rational choice theory still rely on reliable heuristics, partial optimization, or alternative learning methods to arrive at decisions. Real analysis is the formalism of virtually all of mathematical economics as we know it, for reasons I’ll touch on below but that are ultimately out of the scope of this paper.

Note that much of my analysis draws on the discoveries of scientists working in computability theory and computational complexity. I will refer to a few here, but a general reading is encouraged. The 20th-century crowd is composed mostly of mathematicians and computer scientists, namely Kurt Gödel, Alan Turing, and Stephen Wolfram. Two spokespeople for the 21st-century crowd are the mathematical economist K. Vela Velupillai and the computer scientist Gregory Chaitin, though others, notably several Austrian economists, have their heads in this game.

It is important to understand, going forward, that the social systems studied by economists are complex systems. They are complex enough, moreover, to be capable of computational universality. What is so special about computational universality? Wolfram’s Principle of Computational Equivalence (PCE) implies that a system capable of computational universality cannot be emulated by any system that is not itself capable of computational universality. That is, to predict the behavior of such a system, our model must be as computationally sophisticated as the economic process it is attempting to emulate. The very reason we build models is to reduce a complex process into manageable parts; the PCE implies that complex-enough systems are not coherently reducible to some interacting set of component parts, and that there is no shortcut: the only general way to determine the outcome is to run the process itself, step by step. This feature of any complex-enough system is called *computational irreducibility*. Social systems are computationally irreducible.
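To make computational irreducibility concrete, here is a minimal sketch (a toy of my own, in Python, not anything from the economics literature) of rule 110, the elementary cellular automaton Wolfram highlights, which Matthew Cook proved computationally universal. No closed-form shortcut for its behavior is known: to learn the state after n steps, you perform all n steps.

```python
def rule110_step(cells):
    """One synchronous update of elementary cellular automaton rule 110
    on a ring: bit (4*left + 2*center + right) of the number 110 gives
    the new value of each cell."""
    n = len(cells)
    return [(110 >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

def evolve(cells, steps):
    """The only general way known to learn the state after `steps`
    updates is to perform all of them: irreducibility in miniature."""
    for _ in range(steps):
        cells = rule110_step(cells)
    return cells

# A single live cell on a ring of 64 sites, run for 50 steps.
state = [0] * 64
state[32] = 1
final = evolve(state, 50)
```

Nothing about the ring size or step count is essential; the point is that `evolve` contains a loop we cannot, in general, collapse into a formula.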

The mathematics underlying real analysis is axiomatic mathematics. Axiomatic mathematics takes a small set of axioms and then, by deduction, derives a large body of implications. It relies on classical proof theory to conduct its derivations, and classical proof theory relies on the law of the excluded middle (LEM). The LEM states that for any proposition P, either P or its negation ~P holds; there is no third option. Derivations by contradiction, the workhorse of classical existence proofs, rest on it. A theory from which both P and ~P can be derived is called inconsistent, in that it contradicts itself; an inconsistent theory proves everything, so its implications carry no information and its predictions are meaningless.
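In symbols (a standard formulation, not specific to any source cited here), the LEM and the proof-by-contradiction schema it licenses are:

```latex
\text{LEM:}\quad P \lor \neg P
\qquad\qquad
\text{Reductio:}\quad (\neg P \Rightarrow \bot) \Rightarrow P
```

Constructive logic accepts the weaker conclusion $\neg\neg P$ from $(\neg P \Rightarrow \bot)$, but rejects the final classical step from $\neg\neg P$ to $P$.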

In 1931, Kurt Gödel, an Austrian mathematician and a friend of Oskar Morgenstern, proved that in any consistent axiomatic system rich enough to express arithmetic there exist true propositions that can be neither proved nor refuted within the system. The proposition at the heart of Gödel’s proof is a formalized cousin of the liar paradox (“This statement is false”): a statement that asserts, in effect, “This statement is not provable.” If the statement were provable, the system would prove a falsehood and thus be inconsistent; so, if the system is consistent, the statement is unprovable, and therefore exactly what it asserts is the case. It is true but unprovable. Such propositions are undecidable within the system: the axioms cannot settle P or ~P, even though classical logic insists that one of them must hold.
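In modern notation, the sentence at the heart of the incompleteness proof is usually presented as a fixed point of the provability predicate:

```latex
G \;\longleftrightarrow\; \neg\,\mathrm{Prov}\!\left(\ulcorner G \urcorner\right)
```

Here $\mathrm{Prov}$ is the system’s provability predicate and $\ulcorner G \urcorner$ is the code (Gödel number) of $G$; consistency of the system then forces $G$ to be true yet unprovable.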

Both comparative statics and general equilibrium theory have been shown to exhibit pathological behavior: we are faced with undecidable propositions, non-computable solutions, or both. Velupillai explains that “A reasonable and effective mathematisation of economics entails Diophantine formalisms. These come with natural undecidabilities and uncomputabilities” (Velupillai, 2005). Velupillai reviews the formal underpinnings of general equilibrium theory as formulated by Debreu (1959). When the theory is redeveloped in constructive mathematics, where the LEM does not hold and undecidabilities can therefore be treated rigorously, several of the theorems underlying Debreu’s proof of the existence of a general equilibrium fail (Velupillai, 2005, p. 862). To put it more plainly, the existence of general equilibrium as formulated by Debreu cannot be established constructively; the classical proof leans on non-constructive machinery.
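A small illustration of why Diophantine formalisms carry “natural undecidabilities”: by the Matiyasevich (MRDP) theorem, no algorithm decides, for an arbitrary polynomial with integer coefficients, whether it has an integer root. The best general procedure is the naive search sketched below (my own toy code), which halts when a solution exists but can never correctly report “no solution.”

```python
from itertools import count, product

def diophantine_search(poly, n_vars, give_up=None):
    """Enumerate integer tuples in growing boxes, halting if a root of
    `poly` is found. By the MRDP theorem no procedure can, in general,
    also halt with a correct 'no solution' answer: this is only a
    semi-decision procedure."""
    for radius in count(0):
        if give_up is not None and radius > give_up:
            return None  # giving up proves nothing about non-existence
        for xs in product(range(-radius, radius + 1), repeat=n_vars):
            if max(map(abs, xs), default=0) == radius and poly(*xs) == 0:
                return xs

# Pell-like equation x^2 - 2y^2 = 1; (1, 0) and (3, 2) are solutions.
sol = diophantine_search(lambda x, y: x * x - 2 * y * y - 1, 2, give_up=5)
```

The `give_up` bound is the honest admission built into the code: terminating the search tells us nothing about whether a solution exists further out.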

Choice theory rests on several presumptions, among them: 1) existence, completeness, and transitivity of preferences; 2) revealed preference; and 3) the existence, continuity, and (up to monotone transformation) uniqueness of a utility function representing preferences in the reals, that is, a real-valued representation of the ordinal, complete, and transitive preference relation (Kreps, 2012). Given these assumptions, together with the usual continuity and convexity conditions, maxima in the case of individual utility and minima in the case of costs exist and are unique. The completeness, or analytical closedness, of a decision theory is a necessary condition for solutions to exist; an incomplete decision theory is one that contains undecidable propositions. Choice theory and its conclusions are consistent only if its axioms hold, including those by which solutions (maxima and minima) are proved to exist and to be unique. Similarly, for social choice theory, equilibria must exist, and be unique.
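The first of these presumptions is easy to make concrete on a finite set of alternatives. The sketch below (a toy of my own, not drawn from Kreps) checks a weak preference relation for completeness and transitivity and, when they hold, constructs an ordinal utility by counting weakly dominated alternatives:

```python
from itertools import product

def is_complete(alts, prefers):
    """Completeness: every pair of alternatives is ranked in at least
    one direction by the weak preference relation."""
    return all(prefers(a, b) or prefers(b, a)
               for a, b in product(alts, repeat=2))

def is_transitive(alts, prefers):
    """Transitivity: a >= b and b >= c together imply a >= c."""
    return all(prefers(a, c)
               for a, b, c in product(alts, repeat=3)
               if prefers(a, b) and prefers(b, c))

def utility(alts, prefers):
    """On a finite, complete, transitive relation, 'how many
    alternatives a weakly beats' is an ordinal utility representing
    the preferences."""
    return {a: sum(prefers(a, b) for b in alts) for a in alts}

# Toy relation on bundles of a single good: more is weakly better.
alts = [1, 2, 3]
more_is_better = lambda a, b: a >= b
u = utility(alts, more_is_better)
```

On infinite alternative spaces, by contrast, the representation theorem leans on exactly the non-constructive real analysis discussed above; the finite case is the only one a computer can check exhaustively.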

In 1972, Oskar Morgenstern wrote the essay “Thirteen critical points in contemporary economic theory.” Many of Morgenstern’s criticisms of economic formalism, which during the height of Samuelsonian Keynesianism had been considered a nearly completed science, have stood the test of time. In the essay, Morgenstern addressed the problems with arriving at equilibria using the traditional methods, and argued that game theory and its panoply of strategies might serve as a more rigorous replacement for comparative statics. He was correct that the calculability of solutions to linear programming problems, at the scale required by the sheer number of variables in realistic economic problems, was an issue. But are game-theoretic solutions, like Nash equilibria, any more calculable? And what about undecidable problems in comparative statics and game theory?

Take the linear programming methodology, wherein economists solve traditional problems in comparative statics over a large number of variables in order to, for instance, calculate equilibrium price vectors. The more realism we inject into our model, the more variables we need to include. Whether we can solve a large system of linear equations depends on our computing power and on the complexity of the system; we can solve a much larger system now than we could in Morgenstern’s day. But it is quite possible that any algorithm we develop to realistically represent a price vector using linear programming methods would never halt, that is, that the price vector itself would be non-computable. Morgenstern did not foresee the theoretical non-computability of equilibrium states, despite his criticism of linear programming on other grounds. He was, understandably, married to game theory as the future of mathematical economics.
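As a miniature of the methodology: if excess demand in each market is linear in prices, z(p) = b − Ap, then the equilibrium price vector solves Ap = b. The toy solver below (my own sketch, with made-up coefficients) uses Gaussian elimination, whose O(n³) cost in the number of markets is exactly the computational bottleneck Morgenstern worried about at realistic scale:

```python
def solve_linear(A, b):
    """Gauss-Jordan elimination with partial pivoting: solve A p = b
    for the equilibrium price vector p. Cost grows as O(n^3) in the
    number of markets n."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col] / M[col][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Two markets with excess demand z(p) = b - A p; equilibrium at z = 0.
A = [[2.0, -1.0],
     [-1.0, 2.0]]
b = [10.0, 4.0]
prices = solve_linear(A, b)
```

The linear case is the benign one: it always halts. The text’s point is that realistic, nonlinear equilibrium computations carry no such guarantee.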

Is game theory an analytical way out of the general equilibrium theory briar patch? It turns out that it isn’t: the existence of Nash equilibria is itself proved with the same non-constructive fixed-point theorems, so Velupillai’s analysis applies to game theory just as it does to GET. All economics built on real-analytic foundations, which is to say all of mathematical economics, fails the same test.
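The calculability question for game theory can at least be posed concretely. For finite games, pure-strategy Nash equilibria can be found by brute enumeration, as in this toy sketch of mine; but the profile space explodes combinatorially, and computing mixed equilibria of general finite games is PPAD-complete (Daskalakis, Goldberg, and Papadimitriou), so no efficient general algorithm is known.

```python
def pure_nash(payoff_a, payoff_b):
    """Enumerate every strategy profile of a two-player finite game and
    return those from which neither player gains by deviating alone."""
    rows, cols = len(payoff_a), len(payoff_a[0])
    equilibria = []
    for i in range(rows):
        for j in range(cols):
            best_a = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(rows))
            best_b = all(payoff_b[i][j] >= payoff_b[i][l] for l in range(cols))
            if best_a and best_b:
                equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma payoffs: strategy 0 = cooperate, 1 = defect.
A = [[3, 0],
     [5, 1]]
B = [[3, 5],
     [0, 1]]
eqs = pure_nash(A, B)
```

Mutual defection, profile (1, 1), is the unique pure equilibrium here; the brute-force search that finds it scales as the product of the strategy spaces and says nothing about mixed equilibria.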

Why are the shaky foundations of traditional mathematical economics relatively unknown to economic scholars, who typically learn to prove a set is compact their first year in graduate school, if not earlier?

The Stanford economists Levin and Milgrom explain, in a short introduction to choice theory, why it remains popular despite the growing number of deviations from the model that we observe in reality (especially behaviorally): “…despite the shortcomings of the rational choice model, it remains a remarkably powerful tool for policy analysis…[m]any of the “objectionable” simplifying features of the rational choice model combine to make such an analysis feasible.” (Levin & Milgrom, p. 24) For good or ill, academic economists often moonlight as policy advisors, and that may explain at least part of why the field holds so tightly to axiomatic mathematical analysis: the advisor with no GDP target to offer is out of a job that an economist willing to supply one will happily fill. The need of social science scholars, and of those they advise, for some sense of control over social outcomes is a related point, but out of the scope of this discussion.

What, then, is the way forward? Velupillai hinted at it, as have agent-based computational economists like Borrill & Tesfatsion (2011): constructive mathematics. Constructive mathematics differs from traditional (axiomatic) mathematics in that it does not assume the law of the excluded middle for arbitrary propositions. Bishop-style constructive mathematics, for instance, requires that all existence proofs be constructive, in that they can be implemented (at least in principle) on a computer. That is, an object is said to exist only if it can be explicitly constructed by a finite procedure. Functions, in constructive mathematics, are implementable algorithms, defined by how they are to be computed.
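A flavor of what a constructive existence proof looks like in practice: instead of asserting that a root exists, one supplies a procedure that approximates it to any requested precision. The bisection sketch below is such a witness (my own toy; it quietly assumes that sign queries on f are exactly decidable, which is itself a nontrivial constructive assumption):

```python
def approximate_root(f, lo, hi, eps):
    """Constructive existence witness: given f(lo) < 0 < f(hi), bisect
    until the bracketing interval is shorter than eps and return its
    midpoint. The 'proof' that a root exists IS this procedure."""
    assert f(lo) < 0 < f(hi)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Approximate sqrt(2) as the root of x^2 - 2 on [1, 2].
root = approximate_root(lambda x: x * x - 2, 1.0, 2.0, 1e-9)
```

The contrast with a classical proof is the point: the classical intermediate value theorem asserts a root with no recipe, while the constructive version hands you `approximate_root` and a precision guarantee.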

The procedural construction of the empirical patterns we recognize in our own social outcomes may be the beginning of a way towards a more rigorous development of economic theory. I, for one, am very optimistic: in my view, economics is a wide-open field littered with $100 bills for the taking.


**References**

Borrill, P. L., & Tesfatsion, L. (2011). Agent-based modeling: The right mathematics for the social sciences? In The Elgar companion to recent economic methodology (p. 228).

Debreu, G. (1959). Theory of value: An axiomatic analysis of economic equilibrium. Cowles Foundation Monograph 17. New York: Wiley.

Kreps, D. M. (2012). Microeconomic foundations I: choice and competitive markets (Vol. 1). Princeton University Press.

Levin, J., & Milgrom, P. Introduction to choice theory. http://web.stanford.edu/~jdlevin/Econ%20202/Choice%20Theory.pdf

Morgenstern, O. (1972). Thirteen critical points in contemporary economic theory: An interpretation. Journal of Economic Literature, 10(4), 1163-1189.

Smith, A. (1759). The theory of moral sentiments.

Velupillai, K. V. (2005). The unreasonable ineffectiveness of mathematics in economics. Cambridge Journal of Economics, 29(6), 849-872.

Wolfram, S. (2002). A new kind of science. Champaign, IL: Wolfram Media.

Hi,

A friend of mine forwarded your post to me and I found it very interesting.

Nevertheless, I didn’t expect it to end (or be left open) with agent-based modeling, if I am not mistaken. I believe agent-based modeling still belongs to the same category of mathematical modeling approaches that you criticized, even if in a decentralized and open-ended way.

I think that if we look for an abstract ground for the problems you correctly discussed, a stage for the underlying causes that govern many disciplinary problems, we may end up at the problem of universals and how we represent the objects of interest. In my opinion, that representation has always rested on a set-theoretical idealization of the problem of universals, which demands an arbitrary closedness and consistency.

And of course, as you noted with regard to choice theory, its solidity creates a common language and a sort of political power that has nothing to do with its validity.

In my PhD thesis, I investigated the domain of computational urban modeling, which, seen from the angle of computation, has many similarities to computational economics.

I would be very happy to discuss these issues further with you, and also if you could take a look at my thesis: http://e-collection.library.ethz.ch/eserv/eth:48219/eth-48219-02.pdf

Best

Vahid Moosavi