An Ecologically Rational Analysis of Nudge Theory

Here’s a short essay on how nudge theory would be interpreted in an ecologically rational frame. This essay is a bit of a teaser for a paper I’m working on about nudge theory and its parallels to the socialist calculation debate. Enjoy! 

1. Introduction

In 2008, Chicago economist Richard Thaler and Harvard Law School professor Cass Sunstein published Nudge: Improving Decisions about Health, Wealth, and Happiness. The authors contrast so-called Econs, “rational” individuals who maximize utility using some kind of optimization method, with Humans, who sometimes optimize but sometimes make “mindless” decisions that display empirical biases not present in “rational” decision-making. A “nudge,” in the words of Thaler and Sunstein, is “any factor that significantly alters the behavior of Humans, even though it would be ignored by Econs.” Humans respond to incentives and nudges, which, when properly deployed, “improve people’s lives, and help solve many of society’s major problems” (Thaler & Sunstein 2008: 8).

Nudges change the context in which choices are made, not the content of choices. That is, how an agent makes her choice depends on the set of choices she perceives and the methods by which she can transform those choices into the outcome she desires. Suppose we imagine a buffet table with salad on one side and cake on the other. An agent whose firm end is to cut her calories will choose the salad; an agent whose primary end is being polite and whose secondary end is cutting calories may take cake only if no one is watching her; and the agent who values cake over her self-respect may eat cake now and beat herself up over it later.

Nudges of the type proposed by Thaler and Sunstein require a belief about how a particular agent would, in her own estimation, make a better choice if she were facing a different choice context. Thaler and Sunstein argue that since individuals always make choices in a particular context anyway—and will face a variety of pre-existing influences—nudging is liberty-preserving. The authors model the effectiveness of nudges in a rational choice framework, whereby successful nudges assist agents in becoming more “rational” and thereby result in welfare gains. Hansen & Jespersen see pro-nudging arguments as splitting into two distinct camps: the “libertarian” camp, which constrains nudge policy to be preference- and choice-preserving, and the “democratic” camp, which would constrain nudge policy no more than traditional interventions are constrained, that is, subject to final review by voters in a representative democracy (Hansen & Jespersen 2013: 12).

It is in the best judgment of this author that agents cannot act rationally in the manner delineated in traditional optimization-based equilibrium economics, due to computational and decidability issues (see discussions in Koppl, Velupillai, Wagner, Simon, and Axtell, for instance); rather, they exhibit what Gerd Gigerenzer has called “ecological rationality” (Gigerenzer & Todd 1999), or what Herbert Simon famously called “bounded rationality” (Simon 1996). Simon’s studies of behavior, in his words, were mostly of the “residual categories” of human behavior, as the reigning theory of choice at the time, and still currently, took into account utility gains only with respect to a very few parameters (Simon 1977).

As we’ve learned since, the “residual categories” of behavior are no less relevant to the decisions people make than the few parameters taken into account by rational choice models. Given the bounded rationality frame made necessary by expanding the analysis of human behavior to an analytically intractable set of factors, it makes sense to analyze the effects of nudges in a boundedly rational frame replete with Simon-esque “residual categories.” Candelo & Wagner (hereafter, C & W) take a similar tack in an upcoming article on behavioral economics in an ecologically rational frame, and I see my analysis as an extension or refinement of theirs.

2. The cake-slice nudge

If we reject neoclassical theory as in C & W, we immediately see that nudges purporting to move a social system closer to a Pareto social welfare optimum are out of the question, analytically. Without the objective measurement device provided by rational choice theory, we can say little about social welfare, as defined. However, that doesn’t mean we can’t investigate the effects of nudges on the efficiency of agent computation, how well agents are able to learn a solution given a problem to solve, and whether nudges advantage the preferences and ends of policy makers.

An explicatory device used in C & W’s analysis is to outline how behavioral economics ignores Simon’s “residual categories” of behavior. They reference an experiment that shows intransitive preference orderings arising from a choice between differently sized slices of cake (C & W: 7; Rizzo 2015). First, an agent is offered three differently sized slices of cake, and takes the medium-sized slice. The experimenter then removes the largest slice and asks the agent to choose again; this time, the agent chooses the smallest of the remaining slices. C & W explain how what looks like intransitive preferences to a rational choice theorist could be explained by invoking the residual categories of choice ignored by the rational choice theorist: the agent may value being seen as polite, and translate that into never taking the largest slice offered. Therefore, in our analysis, a nudge could not be based on making an agent more rational, as there is no sense in which a choice made isn’t rational in the sense of acting according to one’s values.
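
To see how a single heuristic rationalizes both choices, here is a minimal sketch (my own illustration, not C & W’s or Rizzo’s formalism) of a “never take the largest slice on offer” rule:

    # Hypothetical politeness heuristic: prefer more cake, but never
    # take the largest slice currently on offer.
    def choose_slice(slices):
        """Pick the biggest slice that is not the biggest one offered."""
        candidates = [s for s in slices if s < max(slices)]
        return max(candidates) if candidates else min(slices)

    print(choose_slice([1, 2, 3]))  # -> 2: the medium slice
    print(choose_slice([1, 2]))     # -> 1: the smallest, once the 3 is removed

To a rational choice theorist observing only outcomes, medium is chosen over small and then small over medium: apparent intransitivity. With the residual category (politeness) included, both choices follow from one consistent rule.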

An example of a nudge would be to explicitly offer people a choice between two sizes of cake slice at first, perhaps written as such on the menu, and to offer the large size only on request. This is an example of Thaler and Sunstein’s so-called choice architecture, where no choices are removed but it becomes more costly to make a choice deemed “bad” by policy makers. The policy implications of such a nudge are obvious, and touted with much gusto in hand-wringing documentaries like Morgan Spurlock’s “Super Size Me,” where Spurlock suggests that putting a cap on portion sizes offered to customers will have positive effects on health while having no negative effects on the utility of the customers. The Nudge-esque version of Spurlock’s suggestion would be to remove the super size category from the menu, but still make it available upon request. In effect, the policy maker sees herself not as manipulating market actors but as nullifying existing manipulations.

3. Transferring utility from “bad” decision-makers to “good” decision-makers

Nudge theorists claim they are giving agents avenues to make better choices that agents truly wish to make. How nudge theorists are supposed to know agents wish to make the choices nudge theorists wish them to make is typically explained using anecdotes. Regardless, the implication of nudge theory is that the agent learning mechanism is broken or needs a helping hand. Nudge theorists recognize on some level that instead of optimizing, agents rely on heuristics in order to make choices. Altering the context in which the heuristics of choice operate can short-circuit choice to favor behaviors preferred by policy makers. People can be manipulated, an ancient truth perhaps best explicated by George Orwell in his “Politics and the English Language.” But can a mass of people be manipulated in a general way with no negative consequences for anyone, at the time of the intervention or thereafter?

In an ecologically rational frame, our objects of analysis are no longer an agent’s budget constraint, the set of goods available on the market, and the price vector of those goods. By deviating from traditional optimization in creative ways, agents can lower the cost of making the combinatorially large number of decisions facing them at any particular moment. Agents have different methods of making choices from which to choose. Decision-making methods are as combinatorially diverse as combinations of goods, and the competition between methods is as fierce as that between goods on the market, with a far smaller multitude of “winning” decision-making methods emerging from the process. Some of these methods are for sale in the form of books, courses, and paid expertise, or granted in exchange for supporting an idea and its leaders, but many more can be gleaned from pure observation and trial-and-error. Analogous to choosing which goods to consume, choosing a decision-making method depends strongly on the characteristics and subjective preferences of the agent.

Making it cheaper to engage in some methods of decision-making is analogous to subsidizing some goods. Agents who would have preferred “bad” decision-making methods will find their preferred methods costlier, or choose less preferred methods by default at a loss of utility unbeknownst to them. There’s no analogy to neoclassical social welfare in an ecologically rational regime, but we can say that a transfer of utility has taken place from agents who would have made the choices policy makers deem as “bad” to agents policy makers deem as “good.” Therefore nudges, as implemented in the real, ecologically rational world, have real, negative effects on some agents in order to benefit others.

References

Candelo, R., Wagner, R. (upcoming). Pareto’s Theory of Action and Behavioral Economics.

Gigerenzer, G., & Todd, P. M. (1999). Simple heuristics that make us smart. Oxford University Press, USA.

Hansen, P. G., & Jespersen, A. M. (2013). Nudge and the manipulation of choice: A framework for the responsible use of the nudge approach to behaviour change in public policy. Eur. J. Risk Reg., 3.

Orwell, G. (2013) (originally, 1946). Politics and the English language. Penguin UK.

Simon, H. A. (1977). ‘The logic of heuristic decision-making,’ in R. S. Cohen and M.W. Wartofsky, eds., Models of Discovery. Boston: D. Reidel.

Simon, H. A. (1996). The sciences of the artificial. MIT press.

Thaler, R., & Sunstein, C. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

My problem with Kirznerian surprise

The problem I see with Kirznerian “surprise,” as Kirzner elaborates in his “Entrepreneurial Discovery and the Competitive Market Process” (1997, pp. 71-2), is that his exposition of the nature of surprise makes the process of entrepreneurial discovery rather mystical, dependent on some inscrutable quality called “alertness.”

Yes, people do not search the neoclassical way. But people do search in other ways: they try to find new perspectives with which to look at old problems, they purposefully take risks to jolt themselves and their thinking out of their comfort zone, they pool their knowledge with new people and experts, they go to school to learn more about an area they believe will be fruitful to their entrepreneurial search, and so on.

Kirzner was right that the neoclassical approach to understanding how entrepreneurs make profits from arbitrage was likely incorrect, as it presumed a calculable equilibrium that was within reach given enough search. (Fixed points of this sort are generally incalculable in complex social systems.) But by stripping individuals of all possible deliberative heuristics of search, Kirzner missed an opportunity to talk about how entrepreneurs choose which heuristics to use in which contexts, and what kinds of social institutions might have emerged to support this process of choice.

I think Kirznerian surprise can still go through as a viable idea if we take a surprising solution to be one thoroughly outside an entrepreneur’s expectations — they cannot assign a probability to its existence because they are unable to conceive of it, and thus they have no reliable, methodical way of searching for it. A surprising solution is an utter novelty, but a useful novelty, for an entrepreneur’s ends. There may be no methodical way to produce useful novelty, or even to know when you should be trying to produce it, because you cannot calculate whether your ends would be better furthered by a novelty you cannot yet conceive of.

Published in Public Choice online: My book review of Colander and Kupers “Complexity and the Art of Public Policy”

My book review of Colander and Kupers “Complexity and the Art of Public Policy” has been published online in the journal Public Choice.

http://link.springer.com/article/10.1007/s11127-016-0344-5

This is my first economics publication. You might recognize the review from an earlier blog post of mine, though the published review is more polished and detailed.

Book Review: Complexity and the Art of Public Policy

In Complexity and the Art of Public Policy, David Colander and Roland Kupers introduce the “complexity frame” of policy-making, and argue for bottom-up policy-making like nudging individuals to make decisions more in line with the goals of the public choosers.

In Part 1, “The Complexity Frame for Policy,” Colander and Kupers reduce the old policy debate to one of methodology. Social scientists, they say, have been raised on a strict diet of equilibrium theory and market failure. Those social scientists who accept the market failure hypothesis use equilibrium theory, wielded by well-meaning apolitical interventionists, as a panacea for all mixed-economy ills. Those social scientists who do not accept the market failure hypothesis use equilibrium theory, wielded by marginalist freedom-lovers, as a panacea for all mixed-economy ills. Colander and Kupers demarcate their position in this starkly painted landscape as strictly neither one (control) nor the other (market anarchism), for methodological and practical reasons.

The complexity frame is neutral regarding the level of government, for it doesn’t make assumptions about the nature of government. It does assume, as my professor Richard Wagner likes to say, that “Government is with us, and will probably always be with us.” The nature of government will likely change as a result of the implications of doing policy in the complexity frame. How it will change as an institution – how it is formed, what incentives must propel public choosers compared to now – is addressed somewhat in later parts of the book, but never satisfactorily, in my view. What new things government should do to change social outcomes is elaborated in greater detail.

The complexity frame as set forth in the book closely mirrors how social systems have been described as complex adaptive systems, especially over the past twenty years. Equilibrium theory is described as an inappropriate tool with which to investigate market behavior for many reasons, not the least of which is that equilibrium theory is a static theory in a reality of constant change. The main takeaway is that trends, like the statistics we associate with macroeconomic variables, are generated from underlying, bottom-up processes. As such, the authors claim, policy measures should strive to be as bottom-up as possible, to subvert or mimic natural social processes like shifts in norms.

In Part 2, “Exploring the Foundations,” Colander and Kupers work hard to separate their approach to doing economics from more traditional approaches. For the most part, their methodological case is solid. Where they come in weak is in detailing the goals that bottom-up policy should strive towards. They shower praise upon J. S. Mill, Smith, Pigou, Marshall, and Keynes, and signal disapproval of Milton Friedman, George Stigler, Leonard Read, and Don Boudreaux. Their choice of heroes and anti-heroes colors the book with an ideological tint that I believe obscures their overall message of bottom-up change.

Influencing norms is the vehicle for the authors’ bottom-up policy change (“norms” appears 100 times in the book). The authors suggest that individual decision-making isn’t based on utility and other kinds of calculations; rather, experimental and behavioral economics show that a good deal of decision-making uses heuristics and shortcuts like adhering to prevailing norms. Changing how people make decisions, therefore, is how Colander and Kupers want to effect real and lasting change in the emergent patterns of the social system. Changing norms as a way of changing emergent patterns of social action is not unlike Adam Smith’s famous “man of system,” who arranges the pieces on the great chess-board of society without considering that every piece has a principle of motion of its own; in Smith’s view, government works best when these two movements align.

Further, according to the authors, policy effectiveness can’t be measured simply as a function of what they call “material welfare” (p. 87). “Social welfare” is itself an important goal. Sadly, they fail to express how social welfare is supposed to be measured as part of policy effectiveness accounting, and how the determination of social welfare itself might be colored by the ideology of the authors and, more importantly, the policy-makers. The power of using “material” measurements as a metric for social welfare is their dissociation from any particular subjective measure of good. The authors do not adequately find a replacement metric that isn’t heavily subjective.

The new “policy patterns” suggested by the authors are a laundry list of systems theoretical phenomena: nonlinearity (p. 113), emergence (p. 116), multiple equilibria (p. 117), path dependence and lock-in (p. 118), phase transitions (p. 121), diversity (p. 121), power laws (p. 123), networks (p. 125), and agent-based modeling (p. 127). Multiple equilibria and lock-in are the focus of their ensuing policy suggestions. The authors also rely heavily on game theoretical constructions, as a kind of panacea for traditional utility maximization equilibrium theorizing. The problem is that computing a Nash equilibrium is, in general, computationally intractable (PPAD-complete, with many related decision problems NP-hard). So while the authors in many places indicate they are aware of the mathematical challenges facing complex system theorizing, they aren’t consistent in their modeling and analysis advice.
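
Even the easy special case illustrates the combinatorics. A brute-force search for pure-strategy Nash equilibria (a sketch of my own, with a made-up prisoner’s dilemma payoff table) must check every profile against every unilateral deviation, and the number of profiles grows as the product of the players’ strategy-set sizes; the general mixed-strategy case is harder still:

    from itertools import product

    def pure_nash_equilibria(payoffs, strategy_counts):
        """Return all pure-strategy profiles from which no player
        can gain by unilaterally deviating.
        payoffs: dict mapping a strategy profile (tuple) to a tuple
        of per-player payoffs."""
        equilibria = []
        for profile in product(*[range(n) for n in strategy_counts]):
            if all(
                payoffs[profile][i] >= payoffs[profile[:i] + (d,) + profile[i+1:]][i]
                for i in range(len(strategy_counts))
                for d in range(strategy_counts[i])
            ):
                equilibria.append(profile)
        return equilibria

    # Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
    pd = {(0, 0): (3, 3), (0, 1): (0, 5),
          (1, 0): (5, 0), (1, 1): (1, 1)}
    print(pure_nash_equilibria(pd, (2, 2)))  # -> [(1, 1)]: mutual defection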

In Chapter 9 of Part 2, the authors bring forward a scheme of “nudging” individual behaviors and outcomes. Norms are the primary target of nudges, the authors describe, but nudging is really about manipulating decision-making heuristics. As described above, people don’t always or perhaps even usually utilize maximization calculations when making decisions. Often they choose based on some set of heuristics. The power of policy is to exploit heuristics to steer people to decisions deemed overall “better” for society by some metric of social welfare.

If you’re mystified as to how these nudges are dissimilar from the kind of top-down control governments have always tried to enact, so am I. The heart of top-down control isn’t in the method prescribed to actualize goals; it’s in the determination by a governing group of public choosers that some goals are better than others, without sufficient proof that these goals are calculably better (as the problem is fundamentally subjective and thus incalculable).

In Parts 3 and 4, Colander and Kupers suggest that while governance structures are created from the bottom up, these bottom-up structures are somehow free of the apparent behavioral flaws of the individuals whose decisions led to their emergence. To suggest that one (the private individual) is driven by self-gain while the other (the public chooser) is driven by social welfare ignores how most governance structures have traditionally arisen. It also ignores the numerous non-governance organizations that have arisen to promote social welfare throughout history.

The authors introduce the idea of a “for-benefit corporation,” which is pretty much an old-fashioned fraternal organization blessed and nudged into existence by favorable (top-down) policies. They do not examine why fraternal and other charitable organizations vanished in the first place, instead leaning on their implicit assumption throughout the book of an almost Pigovian market-failure theory of private lock-in due to suboptimal norms and individual decision-making heuristics.

More egregious than their academic oversights and ideological biases, however, is their waving away of economic truths that hold regardless of the complexity of the social system. The primary truth ignored is that resources arrogated to government have alternative uses in the private sphere. The authors list many examples of nudging policy in which they explicitly encourage a trial-and-error method of blank checks written to bureaucrats for jolting social systems out of states those same bureaucrats deem “locked-in” to some suboptimal “basin of attraction” (sounds like Pigovian welfare theory, doesn’t it?). How officials are able to calculate which methods will jolt society in which direction, and what the more optimal basin of attraction is, remains a mystery unexplained in this text.

I saw little indication of new theory in the book. Most of what I saw was an attempt to re-package Pigovian social welfare theory and utilitarianism in complex systems theoretical trappings. It ignores what I see as clearly the biggest hurdle to policy-making in a complexity regime: how government as we know it is premised on a group of people with the power to arrest and tax being able to dream up goals that will make everyone else better off than in that group’s absence.

Overall Impressions:

I enjoyed the introduction of complexity ideas to the realm of public policy-making. I agreed that the top-down “control” method is a naive view of how policy changes (or should change) outcomes.

My main problem with the text is that although it rejects equilibrium theory and Pigovian welfare economics in their traditional senses, it fully embraces them in a modern sense, without an acknowledgment of why rejection of one does not imply rejection of both. For example, some of the most-used terms in the book are: 1) lock-in (used 45 times), 2) nudge/nudging (56 times), and 3) basin (of attraction) (19 times). These ideas are not qualitatively unlike 1) price/wage/variable “stickiness,” 2) short-term intervention to jolt a system out of a suboptimal sticky point, or long-term intervention to prevent it from ever reaching the suboptimal sticky point, and 3) multiple equilibria, some of which are associated with suboptimal states where everyone could be better off if they were somehow jolted towards a different point on whatever plane of relationships is being considered.

I do not think it is useful to recast complexity-aware social welfare theory as such a close cousin to social welfare theory’s first iteration. It discards in large part the power and insight of complex systems theory, and does not challenge the policy-making status quo nearly as heartily as it deserves.

A Conless Macroeconomics?

I’m reading Vela Velupillai’s “Variations on the Theme of Conning in Mathematical Economics” (2007), which is one of his many contributions on the subject of the pathologies introduced into economic theorizing by insisting on the sole use of axiomatic mathematics (this is the kind of mathematics we think of when we think of mathematics, in which the Axiom of Choice and the Law of the Excluded Middle hold; for more on this, see my post).

Standard macro theory is a linear aggregation of (axiomatic math-based) micro theory. Its descriptive power suffers from both its presumed linearity and its roots in axiomatic math. But that might not be the real issue. The real issue may be that students are not instructed on how problematic a linear, axiomatic macro might be for the descriptive, predictive, and prescriptive powers of their models: that, in effect, they’re being conned into believing standard macro methodology is the only methodology available for modeling macro systems, and that it’s just a matter of generating complicated-enough systems of equations to get at the heart of real macro dynamics.

My question, on the back of Prof. Velupillai’s article about conless mathematical economics, is: what would a conless macroeconomics look like? The prevalence, indeed the routine nature, of nonlinearity and feedback effects in social dynamics would seem to preclude a linear aggregation of presumptively equilibrating micro states. Given the likelihood of undecidabilities at more complex levels like the social level, standard macro methodology has two big strikes against it. There are more strikes against it in its blatant disregard of non-trivial features of social systems: that actors are not homogeneous and that social networks matter, to name a few. You don’t have to poke around its structure for long before standard macro theory starts looking like a house of cards: a con that only works given fundamentally wrong assumptions about human behavior and about the social behavior that emerges from the interaction of many individuals.

So, again: what would a conless macroeconomics look like? I have some ideas that I will outline in future posts, but I’m curious as to what you think.

Undecidability, Incompleteness and Irreducibility in Mathematical Economics

This essay is written in the style of Morgenstern’s “Thirteen critical points in contemporary economic theory” (1972).

Methodological thinkers in economics usually translate decision theory, as it pertains to economic decisions, into choice theory, that is, rational choice theory. The cutting edge may go as far as to admit bounded rationality into their choice theory, but criticism rarely goes further than that. Choice theory as the decision theory of economic individuals is widespread and almost never questioned by practitioners of the science. In their first- and second-year foundational courses, graduate students are rarely exposed to caveats to traditional choice theory, except perhaps a chapter or two on Thalerian behavioral economics and the results of economic experiments of the type pioneered by Vernon Smith.

The standard departures from rational choice theory creeping into the mainstream of economic thought certainly never include a departure from analytical closedness. Real analysis is ever and always the formalism of choice theory and of the interactions between economic individuals. The edgiest of departures from mid-20th century rational choice theory still rely on reliable heuristics, partial optimization, or alternative learning methods to arrive at decisions. Real analysis is the formalism of virtually all of mathematical economics as we know it, for reasons I’ll touch on below but that are ultimately out of the scope of this paper.

Note that much of my analysis draws from the discoveries of scientists working in the field of computational complexity. I will refer to a few here, but a general reading is encouraged. The 20th-century crowd comprises mostly mathematicians and computer scientists, namely Kurt Gödel, Alan Turing, and Stephen Wolfram. Two of the spokespeople for the 21st-century crowd are the mathematical economist V. Velupillai and the computer scientist Gregory Chaitin, though others, notably several Austrian economists, have their heads in this game.

It is important to understand, going forward, that the social systems studied by economists are complex systems. They are complex enough, moreover, to be capable of computational universality. What’s so special about computational universality? Wolfram’s Principle of Computational Equivalence (PCE) implies that a system capable of computational universality cannot be emulated except by a system that is itself capable of computational universality. That is, in order to get an accurate solution to any given problem, our model must be as complex as the economic process it is attempting to emulate. The very reason we build models is to reduce a complex process into manageable parts; the PCE implies that complex-enough systems are not coherently reducible to some interacting set of component parts. This feature of any complex-enough system is called computational irreducibility. Social systems are computationally irreducible.
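
A standard illustration (mine, not one from the economics literature) is Rule 110, a one-dimensional cellular automaton that has been proved computationally universal. As far as anyone knows, there is no general shortcut to its state at step t; you have to simulate all t steps:

    # Rule 110: a 1-D cellular automaton proved capable of universal
    # computation. Its evolution is computationally irreducible: in
    # general, the state at step t can only be obtained by running
    # all t steps.
    RULE = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

    def step(cells):
        n = len(cells)
        return [RULE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                for i in range(n)]

    cells = [0] * 40 + [1]  # start from a single live cell
    for t in range(20):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)

If a rule this simple already resists reduction, an economy of heterogeneous, interacting agents is not going to collapse into a few aggregate equations.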

The mathematics underlying real analysis is called axiomatic mathematics. Axiomatic mathematics takes a small set of axioms and then, by deduction, derives a large number of implications. Axiomatic mathematics relies on proof theory to conduct its derivations, and classical proof theory relies on the law of the excluded middle (LEM). The LEM states that any proposition P either holds or its negation ~P holds; there is no third option. Classical derivations of mathematical results lean on the LEM, most visibly in proofs by contradiction. A theory in which some proposition and its negation are both derivable is called an inconsistent theory, in that the theory contradicts itself; since anything follows from a contradiction, the implications of inconsistent theories are unprovable in any meaningful sense, and their predictions meaningless.
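
In symbols, the LEM and its characteristic non-constructive use look like this:

    \[ P \,\vee\, \neg P \qquad \text{(LEM: every proposition holds or its negation does)} \]
    \[ \neg\neg P \;\Rightarrow\; P \qquad \text{(double-negation elimination, classically equivalent to LEM)} \]
    \[ \neg \forall x\, \neg\varphi(x) \;\Rightarrow\; \exists x\, \varphi(x) \qquad \text{(an existence proof that produces no witness } x\text{)} \]

The last schema is the one constructivists object to: it asserts that something exists without giving any method for finding it.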

In 1931, Kurt Gödel, an Austrian mathematician and friend of Oskar Morgenstern, proved that any consistent axiomatic system rich enough to express arithmetic contains true propositions that can be neither proved nor refuted within the system. Gödel’s proof is modeled on the liar paradox, the simple statement “This statement is false”: if the statement is true, then it is false; and if it is false, then it is true. Both the statement and its negation appear to hold, and we have unearthed an inconsistency. Gödel’s crucial move was to replace “false” with “unprovable”: the statement “This statement is unprovable,” formalized within arithmetic, can be neither proved nor refuted by a consistent system, and is therefore true but undecidable within it.
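
Schematically, via the diagonal lemma, Gödel constructed an arithmetic sentence G satisfying

    \[ G \;\leftrightarrow\; \neg\,\mathrm{Prov}(\ulcorner G \urcorner), \]

where Prov is the system’s provability predicate and ⌜G⌝ is the code of G. If the system is consistent, it proves neither G nor its negation: incompleteness, rather than the liar’s outright contradiction.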

Both comparative statics and general equilibrium theory have been shown to exhibit pathological behavior, such that we are faced with undecidable propositions, non-computability of solutions, or both. V. Velupillai, a mathematical economist, explains that “A reasonable and effective mathematisation of economics entails Diophantine formalisms. These come with natural undecidabilities and uncomputabilities” (Velupillai, 2005). Velupillai reviews the formal underpinnings of general equilibrium theory as formulated by Debreu (1959). When one departs from axiomatic mathematics and develops theories in the realm of constructive mathematics, where the LEM does not hold universally and undecidabilities can thus be taken into account rigorously, several of the theorems that underlie Debreu’s proof of the existence of a general equilibrium turn out to be invalid (Velupillai, 2005, p. 862). To put it more plainly, the proof of existence of general equilibrium as formulated by Debreu rests on non-constructive theorems, so the equilibrium whose existence is proved cannot, in general, be computed.
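
To make “natural undecidabilities” concrete, consider Hilbert’s tenth problem: by the MRDP theorem, there is no algorithm that decides whether an arbitrary Diophantine equation has an integer solution. The best one can do in general is search, as in this sketch of mine (the example polynomial is arbitrary); the procedure halts if a solution exists, but there is no general bound telling you when to give up:

    from itertools import count, product

    def search_diophantine(p, n_vars):
        """Look for an integer solution of p(x1, ..., xn) = 0 by
        enumerating tuples within a growing bound. Halts when a
        solution is found; may run forever if none exists."""
        for bound in count(0):
            for xs in product(range(-bound, bound + 1), repeat=n_vars):
                if p(*xs) == 0:
                    return xs  # an explicit witness

    # x^2 + y^2 - 25 = 0 has solutions, so this halts quickly.
    print(search_diophantine(lambda x, y: x * x + y * y - 25, 2))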

Choice theory rests on several presumptions, a few of which are: 1) existence, completeness, and transitivity of preferences; 2) revealed preferences; and 3) the existence, continuity, and uniqueness of a utility relation between preferences and the set of the reals — that is, a cardinalization of the ordinal, complete, and transitive set of preferences (Kreps, 2012). Given these assumptions, maxima (in the case of individual utility) and minima (in the case of costs) exist and are unique. The completeness — or analytical closedness — of decision theory is a necessary condition for solutions to exist. An incomplete decision theory is one which contains undecidable propositions. Choice theory and its conclusions are consistent only if its axioms hold, including the axioms whereby solutions (maxima and minima) are proved to exist and to be unique. Similarly, for social choice theory, equilibria must exist, and be unique.
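
Stated formally, in the standard textbook notation (a weak preference relation over a choice set X, as in Kreps):

    \[ \text{Completeness: } \forall x, y \in X,\; x \succeq y \;\text{ or }\; y \succeq x \]
    \[ \text{Transitivity: } x \succeq y \;\wedge\; y \succeq z \;\Rightarrow\; x \succeq z \]
    \[ \text{Representation: } \exists\, u : X \to \mathbb{R} \;\text{ with }\; x \succeq y \iff u(x) \geq u(y) \]

The representation result is what licenses the move from ordinal preferences to real-valued maximization, and it is exactly where real analysis, with its non-constructive baggage, enters choice theory.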

In 1972, Oskar Morgenstern wrote the essay “Thirteen critical points in contemporary economic theory.” Many of Morgenstern’s criticisms of economic formalism, which during the height of Samuelsonian Keynesianism had been considered a nearly completed science, have stood the test of time. In his “Thirteen critical points…” Morgenstern addressed the problems with arriving at equilibria using the traditional methods. Morgenstern believed that game theory and its panoply of strategies might serve as a more rigorous replacement for comparative statics. Morgenstern was correct that calculating solutions to linear programming problems at the scale demanded by the sheer number of variables in realistic economic problems was an issue. But are game theoretic solutions, like Nash equilibria, any more calculable? And what about undecidable problems in comparative statics and game theory?

Take the linear programming methodology, wherein economists solve traditional problems in comparative statics in a large number of variables in order to, for instance, calculate equilibrium price vectors. When it comes to employing the linear programming methodology, the more realism we inject into our model, the more variables we need to include. Whether or not we can solve a large set of linear equations depends on our computing power and the complexity of the system. For instance, we can solve a much larger set of linear equations now than we could in Morgenstern’s day. But it is quite possible that any algorithm we develop to realistically compute a price vector would never halt, that is, the price vector itself would be non-computable. Morgenstern didn’t foresee the theoretical non-computability of equilibrium states in his argument, despite his criticism of linear programming on other grounds. Morgenstern was, understandably, married to game theory as the future of mathematical economics.
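
For contrast, here is the comfortable, tractable case: a toy linear market (coefficients invented for illustration) in which excess demand is linear in prices, so the market-clearing price vector solves a linear system. The point above is that realism pushes us off this terrain: once the map from prices to excess demands is complex enough, no algorithm is guaranteed to halt with the answer.

    import numpy as np

    # Toy linear market: excess demand for each good is A @ p - b,
    # so market clearing means solving A @ p = b for the price vector p.
    A = np.array([[ 2.0, -0.5, -0.3],
                  [-0.4,  1.5, -0.2],
                  [-0.1, -0.6,  1.8]])
    b = np.array([1.0, 0.8, 1.2])

    p = np.linalg.solve(A, b)        # O(n^3): easy for small, linear systems
    print(p, np.allclose(A @ p, b))  # clearing prices and a residual check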

Is game theory an analytical way out of the general equilibrium theory briar patch? It turns out that it isn’t; we could simply apply Velupillai’s analysis to game theory in the same way we did to GET. All economics built on real analysis foundations — that is to say, all of mathematical economics — fails the same test.

Why are the shaky foundations of traditional mathematical economics relatively unknown to economic scholars, who typically learn to prove a set is compact their first year in graduate school, if not earlier?

The Stanford economists Levin and Milgrom explain in a short introduction to choice theory why it remains popular, despite the increasing number of deviations from the model we observe in reality (especially behaviorally): “…despite the shortcomings of the rational choice model, it remains a remarkably powerful tool for policy analysis…[m]any of the ‘objectionable’ simplifying features of the rational choice model combine to make such an analysis feasible” (Levin & Milgrom, p. 24). It is for good or ill that academic economists often moonlight as policy advisors, but it may explain at least part of the reason why the field holds so tightly to axiomatic mathematical analysis. Refuse to supply a GDP target and you’re out of a job, one that an economist willing to supply the target will happily fill. The need for social science scholars and those they advise to have some sense of control over social outcomes is another point that, while related, is out of the scope of this discussion.

What, then, is the way forward? Velupillai hinted at it, and so have agent-based computational economists like Borrill & Tesfatsion (2011): constructive mathematics. Constructive mathematics is differentiated from traditional (axiomatic) mathematics in that it does not assume the law of the excluded middle holds for every proposition. Bishop-style constructive mathematics, for instance, requires that all existence proofs be constructive, in that they can be implemented (at least in principle) on a computer. That is, an object is said to exist only if it can be explicitly constructed and exhibited. Functions, in constructive mathematics, are implementable algorithms; a function’s definition is inseparable from how it could be implemented in code.
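
A small sketch of the constructive spirit (my example, not Bishop’s own): constructive analysis replaces “a root exists” with an algorithm producing an approximate root to any desired accuracy. Bisection is exactly such a witness-producing procedure:

    def approximate_root(f, a, b, eps):
        """Constructive (approximate) intermediate value theorem:
        given continuous f with f(a) < 0 < f(b), produce x with
        |f(x)| < eps by bisection. The existence claim just *is*
        this algorithm."""
        while True:
            mid = (a + b) / 2.0
            if abs(f(mid)) < eps:
                return mid  # an explicit witness, not a bare existence claim
            if f(mid) < 0:
                a = mid
            else:
                b = mid

    print(approximate_root(lambda x: x * x - 2.0, 0.0, 2.0, 1e-9))  # ~ sqrt(2)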

The procedural construction of the empirical patterns we recognize in our own social outcomes may be the beginning of a way towards a more rigorous development of economic theory. I, for one, am very optimistic: in my view, economics is a wide-open field littered with $100 bills for the taking.


References

Borrill, P. L., & Tesfatsion, L. (2011). Agent-based modeling: The right mathematics for the social sciences? The Elgar Companion to Recent Economic Methodology, 228.

Debreu, G. (1959). Theory of Value: An axiomatic analysis of economic equilibrium. Cowles Foundation, Yale University. New York.

Kreps, D. M. (2012). Microeconomic foundations I: choice and competitive markets (Vol. 1). Princeton University Press.

Levin, J., & Milgrom, P. “Introduction to Choice Theory.” http://web.stanford.edu/~jdlevin/Econ%20202/Choice%20Theory.pdf

Morgenstern, O. (1972). Thirteen critical points in contemporary economic theory: An interpretation. Journal of Economic Literature, 10(4), 1163-1189.

Smith, A. (1759). The theory of moral sentiments.

Velupillai, K. V. (2005). The unreasonable ineffectiveness of mathematics in economics. Cambridge Journal of Economics, 29(6), 849-872.

Wolfram, S. (2002). A new kind of science (Vol. 5). Champaign: Wolfram media.

Introduction, and Blogging

I am a second-year PhD student in economics at George Mason University. Given that the first whirlwind year is over, and comprehensives are passed, I wanted to start writing again. I plan to blog something short or long once a week, probably on Fridays. I’m also working on several publications at the moment, and may include interesting snippets if they’re standalone-enough. I’m also into agent-based modeling, and so may occasionally post my own code or links to neat code that I find interesting or useful.