Sunday, June 12, 2011

Some Developments in Economic Theory Since 1940

An old document of mine (I have not yet figured out how to upload a PDF file to this page):

Some Developments in Economic Theory Since 1940: An Eyewitness Account

Kenneth J. Arrow
Department of Economics, Stanford University, Stanford, California 94305;
email: arrow@stanford.edu

Key Words: econometrics, general equilibrium, uncertainty, dynamics, information
Abstract: Any psychologist who has studied eyewitness accounts knows first of all how unreliable they are. I therefore submit this informal account of some developments in economic theory without research, just as a set of recollections. It is also not an autobiography, nor a systematic account of my own work. Rather, I consider primarily those developments in economic theory that have had both general interest in the field and special concern for me. For example, as social choice theory is still a specialized field, I am not going to discuss it at all. Further, as it has turned out, I emphasize the developments of technique, although to some extent I refer to some of the underlying visions of the economy to which they are applied. To evaluate my eyewitness testimony, I give the reader some idea of my background and, in particular, the path that led me to be an economist and that has influenced my work and my perception of the development of economics in general. (As my friend Paul David keeps on reminding me and the rest of the world, all development is path-dependent.) I then follow up with four major aspects of economic research in the last 60 years, the period of my scholarly activity. One, econometric methodology and practice, is of such fundamental importance that it cannot go unnoticed, although I played no role in it. With the other three, general equilibrium, dynamic processes, and uncertainty and information, I was more intimately involved.

1. SOME PERSONAL BACKGROUND
I do have to say a few words as to how I came to study economics, as this background
plays some role in my range of interests, and also something of the state of education in
economics at that time. I was fully exposed to the anxieties induced by the Great Depression,
which affected my family acutely, and economic security was a strong goal. I had a
variety of intellectual interests: mathematics, history, and logic; I found the first dominant
and majored in it at the College of the City of New York, but I took quite a few courses in
other subjects, even economics. The question was, how could I make a living out of
mathematics? Having an intuitive feeling for the importance of flexibility, later a research
theme of mine, I found and pursued three lines: high school teacher of mathematics,
statistician, and actuary. The first eventually failed simply because there were no vacancies.
I took courses in statistics and learned a lot about graphic presentation; there was one
course given in the mathematics department by someone who knew little of the subject
but knew excellent references. I read on my own and was fascinated by the papers of
R.A. Fisher and, above all, those of J. Neyman and E.S. Pearson. I also took some of the
examinations of the Actuarial Society, on basic mathematics (not deep but tricky), and
somehow read some of the material on subjects like moral hazard and adverse selection,
knowledge that turned out to be very useful to myself and to economic analysis many
years later. I also had a summer job as an actuarial clerk (sheer accident; I was looking for
any job and passed by an insurance company, so I simply walked in and asked if they
could use someone), which taught me a good deal about pricing.
When I graduated in 1940, I really had no clear employment, so I decided to go to
graduate school to study mathematical statistics. There were in fact few places to study
statistics; one was at Columbia, where Harold Hotelling taught. My family could not
afford any support, so going to Columbia meant I could live at home. There was no
department of statistics and no degree in it, but there was a listing of courses called
Statistics. They were given by Hotelling and an assistant paid for, not by Columbia but
by the Carnegie Corporation. The “assistant” when I was there was Abraham Wald, an
incredible stroke of good fortune. I naively enrolled for graduate study in the Mathematics
Department, as the closest match to statistics. When I wanted to apply for a scholarship
for the following year, Hotelling made it plain that mathematics departments had no
interest in statistics, that his appointment was in Economics, and that the chances were
much better there. I changed my department accordingly. When I have told this story to
former students, they have been shocked that I had followed economic incentives.
Hotelling did give a course called “mathematical economics,” which I took in my first
term. I was fascinated, although in retrospect it was very limited, being confined to the
theories of the firm and of the consumer with many commodities. I became very expert at
manipulating bordered Hessians, an ability for which I have found little use. All the
important things that Hotelling had done in economics, on exhaustible resources, welfare
theorems, or depreciation, I had to learn from my own reading, not from his course.
The statistics courses given by Hotelling and Wald were superb. We were at the
frontiers very quickly, to which indeed they were major contributors.
The teaching and organization of the Economics Department were polar opposites.
I was in fact exposed to the usual courses and requirements for only one year, 1941–1942.
The reader may find it hard to believe, but there was no course in price theory. Its place was
taken by a course in the history of economic thought. I do not know exactly how this came
about, but I would guess that it was the influence of the best-known member of the
department, Wesley Clair Mitchell. Mitchell was a pioneer in emphasizing the accumulation
of data rather than theory of any kind and was the founder of the National Bureau of
Economic Research. To him the great economic problem was the presence of business
cycles; at that moment of history, this emphasis did not seem misplaced. Mitchell usually
taught the course in the history of economic thought, but he was on leave that year. He was
replaced by John Maurice Clark, certainly an interesting thinker, an excellent writer, and an
incredibly boring lecturer.
The department felt that it had to have price theory as a required subject on the oral
comprehensive examination, even if it did not offer a course. To this end, a student was
supposed to arrange with a member of the department a list of books and articles to be
read and to be examined on. The hot topic in price theory at that time was imperfect
competition; I proposed the books of Edward Chamberlin (1933) and Joan Robinson
(1933) and some subsequent papers, and they were accepted by Clark. This work was
indeed what later became non-zero-sum game theory.
Mitchell ordinarily gave a seminar in business cycles, which, to him, was indeed the
most important economic problem. Because he was on leave, the seminar was given by
his chief deputy at the National Bureau of Economic Research, Arthur F. Burns (who
later, of course, became the Chairman of the Federal Reserve System). As I was primarily
interested in statistics and was taking courses in economics just to meet requirements,
I felt that this was an opportunity to learn what a very differently oriented
tradition had to offer. The National Bureau approach was primarily the accumulation
of data with the view that understanding would emerge. There was a statistical methodology,
later published by Burns & Mitchell (1946). The method was about as antithetical
to what I had learned in my mathematical statistics courses as was conceivable.
R.A. Fisher, Hotelling, or Wald would start with a model in which certain parameters
were unknown. The aim was to make observations and use them to draw inferences
about the unknown parameters. Burns and Mitchell had a description from which, even
in principle, it would be impossible to draw any causal inferences; at most, there might
be some kind of prediction about leads and lags, and even for that purpose, their
method seemed to me to throw away most of the relevant information. When Burns
and Mitchell finally published their work, it was subjected to a withering review article
by Tjalling Koopmans (1947).
Apart from reading Burns and Mitchell in the manuscript stage, Burns’s course centered
on the reading of Joseph Schumpeter’s Business Cycles (1939). The latter attempted to
show that the innovation process whose importance he had done so much to emphasize
was also responsible for business cycles. Among other implications, he claimed to find
three kinds of cycles of various lengths, one nested within the other. Burns was concerned
with denying this empirical argument. The implications for growth, which now are taken
to be of dominant interest, were given no stress.
Remember, this is a course on business cycles in 1941–1942. Perhaps the reader will
think of Sherlock Holmes’s calling Inspector Gregory’s attention to “the curious incident
of the dog in the night-time,” in the story “The Silver Blaze.” “‘The dog did nothing in the
night-time.’ ‘That was the curious incident,’ remarked Sherlock Holmes.” The dog in
question is, of course, Keynes; The General Theory (1936) went unmentioned.
I must add that, despite my antipathy to Burns’s statistical methodology and his lack of
attention to the most important current topic in economic thought relevant to Burns’s
chief concern, economic fluctuations, I found him one of the most brilliant and knowledgeable
economists I have ever met.
I do not want to leave the impression that graduate education in economics at Columbia
at that period was a total loss. The courses in public finance, by Robert Haig and Carl
Shoup, were very solid, and Carter Goodrich’s course notes for American economic
history were gems (his oral examination was the most innovative and educational that I
have ever experienced).
The content of graduate education in economics at Columbia was far from typical. In
fact, graduate education then differed across universities much more than it does today.
Price theory was a serious subject at Harvard (Wassily Leontief) and Chicago (Jacob
Viner). Alvin Hansen at Harvard was bringing Keynes’s doctrines to the United States
and attracting as students the leading macroeconomists of the postwar period. Edward
Mason at Harvard was also the leader in industrial organization and again trained many
of the next generation. Chicago was already the leader in stressing laissez-faire doctrines,
as well as emphasizing the quantity theory of money, and this was well before Milton
Friedman and George Stigler joined the faculty.

2. ECONOMETRICS
Perhaps no development in economics since World War II has been as important as the
steady use of formal statistical models. The Econometric Society had been founded in
1933, largely due to the drive of the great Norwegian economist Ragnar Frisch, with the
support of Irving Fisher and Joseph Schumpeter (and the financial support of Alfred R.
Cowles III). It was devoted to the development and use of formal statistical models,
mathematically expressed, in the analysis of economic data. The first steps had been the
use of existing regression methods, applied in economics indeed not long after their
applications in evolutionary biology. Although demand and supply curves had been fitted
intermittently, a much bolder attempt was that begun by Jan Tinbergen in the 1930s to
develop and fit complete systems of equations in order to grasp the nature of business
cycles (see Tinbergen 1939).
Hotelling had been President of the Econometric Society and invited me, as a student,
to join. Econometrica quickly became my main source of information about developments
in economics. Paul Samuelson’s papers were, of course, very important on the theoretical
side, but the developments in statistical method were then equally important to me. It
occurred to me that regression analysis had been developed for single equations and that
statistical inference for a complete system might have some new points. However, I
quickly learned that this point had occurred to others. A number of the European refugee
scholars then living in New York had formed a seminar on mathematical economics and
econometrics within the framework of the National Bureau of Economic Research; the
leader was Jacob Marschak, then a professor at the New School for Social Research. At
one meeting in the spring of 1942, a new approach to statistical inference in simultaneous-equations
systems was presented by a Norwegian scholar caught in the United States by
the German invasion of Norway, Trygve Haavelmo. It was disarmingly presented, as an
application of Neyman-Pearson theory to the problem, and I did not understand the full
extent to which new concepts were implied. There was present a tall, very thin Dutch
econometrician, identified as Tjalling Koopmans, who, alone of those present, seemed to
grasp the full novelty. I already knew his name because he had published a paper on the
distribution of the serial correlation coefficient (Koopmans 1942), which Hotelling had
regarded as a major unsolved problem, and an important step in time-series analysis. But,
more than anyone else, he recognized the implications of Haavelmo’s proposals.
Koopmans continued his development of simultaneous-equations estimation along
with others stimulated and led by him. His institutional setting was the Cowles Commission
for Research in Economics. This research organization, founded by Cowles, the
financial supporter of the Econometric Society and an investment manager, had started in
Colorado but was housed at the University of Chicago beginning in 1939. In 1943,
Marschak was invited to join the department at Chicago and to be Director of the
Commission. The next year Koopmans joined him in the department and at the Commission
and assembled an extraordinary staff of statisticians and econometricians: Theodore
Anderson Jr., Haavelmo, Leonid Hurwicz, Roy Leipnik, and Herman Rubin. Within less
than two years, the basic concept of identification was clearly articulated and estimation
methods were developed; with further improvements leading to more flexible methods of
estimating single equations of a complete system, the results were published in Koopmans
(1950). Lawrence Klein joined the Cowles Commission to apply these methods to a
complete macroeconomic model (Klein 1950), inaugurating the era of large-scale econometric
models.
I was first in touch with the Cowles Commission and learned of its work in 1946, and I
joined their staff in 1947, remaining there for a little over two years. Although my interests
developed differently, I was immersed in these statistical developments.
To be sure, the use of econometric methods has gone in directions different than those
envisaged by the Cowles pioneers. There is little emphasis on estimating complete systems.
But the underlying importance of identification and its frequent implementation through
the use of instrumental variables are now standard (for a survey, see Angrist & Krueger
2001, which traces pre-Cowles uses, some, but not all, of which were known to and cited
in the Cowles literature).
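As a schematic illustration (standard textbook notation, not the Cowles Commission's own formulation), the instrumental-variables idea behind identification can be stated as follows.

```latex
% Instrumental variables, stated schematically (textbook notation, not the
% Cowles Commission's). Structural equation with an endogenous regressor:
\[
  y = X\beta + u, \qquad \mathrm{E}[u \mid X] \neq 0, \qquad \mathrm{E}[Z'u] = 0 ,
\]
% where Z collects the instruments. In the just-identified case (as many
% instruments as regressors) the IV estimator is
\[
  \hat{\beta}_{\mathrm{IV}} = (Z'X)^{-1} Z'y .
\]
% Identification requires the instruments to be correlated with the regressors
% (relevance) while being excluded from the structural error (exogeneity).
```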
The wide diffusion of econometrics depended not only on methodological innovation,
but also on two other factors, the great increase in the collection of available data and the
revolution in computing. I have no special knowledge of the first, but I do have one
anecdote about the second. In my courses in statistics, Hotelling had expressed his concern
about the barrier to the use of statistical methods caused by the difficulty of inverting
matrices. He proposed various tricks, but the standard Gauss-Doolittle method (as it was
called), i.e., successive elimination of variables, was the standard. Having had occasion to
fit regressions, usually with eight variables as part of my work in the Army when using
statistical methods for weather forecasting, I can testify with pain to the difficulty; inverting
an 8 × 8 normal matrix took me about eight hours (and, to anticipate the reader’s
reaction, I performed this task very well). Hotelling had mentioned an idea current at the
Bell Laboratories, the research arm of the then-dominant telephone carrier, American
Telephone and Telegraph, that, logically, diodes could replace relays and so add and
multiply at much higher speeds. During World War II, it became publicly known that an
electronic computer, with the acronym ENIAC, had been built for the U.S. Navy and was
being used to compute ballistic tables. Just before the end of my military service, one of the
creators of the ENIAC, John Mauchly, came to my office at Langley Field in Virginia. The
computation for the Navy was almost complete, and he was looking for new work.
Somehow he learned that I might be a good customer. I was awaiting discharge (hostilities
had ceased), had no authority to give him a contract, and had no intention of starting
a new project, but I was eager to learn ENIAC’s capability. When I described the number
of additions, multiplications, and divisions needed, he told me that ENIAC could do my
eight-hour job in five minutes. I had learned from my parents always to distrust salesmen,
so I called a naval weather officer I knew to check. He confirmed the numbers but added
that they applied only when ENIAC was running; it was down 80% of the time. Still, I
figured, that meant 25 minutes on the average, a large gain indeed, and I assumed that
reliability problems could be solved.
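To give a rough sense of the arithmetic behind the eight-hour computation described above, here is a minimal sketch, in modern code, of an eight-variable least-squares fit done by forming the normal equations and eliminating variables successively (the spirit of the Gauss-Doolittle procedure, not its exact worksheet layout); the data are randomly generated placeholders.

```python
# Minimal sketch: fit a regression by forming the normal equations (X'X) b = X'y
# and solving them with Gaussian elimination -- the kind of hand computation
# (in spirit, not exact worksheet form) described in the text above.
import random

def solve_gauss(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]      # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in reversed(range(n)):                          # back substitution
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

# Synthetic data: 100 observations on 8 regressors (placeholder values only).
random.seed(0)
X = [[random.gauss(0, 1) for _ in range(8)] for _ in range(100)]
true_beta = [0.5, -1.0, 2.0, 0.0, 1.5, -0.5, 0.25, 3.0]
y = [sum(b * x for b, x in zip(true_beta, row)) + random.gauss(0, 0.1) for row in X]

# Normal equations and their solution.
XtX = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(8)] for a in range(8)]
Xty = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(8)]
print([round(v, 3) for v in solve_gauss(XtX, Xty)])   # should be close to true_beta
```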
It was not for another decade that the availability of computer time and ease of
programming permitted considerable use of computers in econometrics, but thereafter
computational time ceased to be a significant factor, not only in estimation but also in
deriving the distributions of the statistics.

3. GENERAL EQUILIBRIUM
The central question in most of my work in economics has regarded, one way or another,
the concept of general competitive equilibrium and its range of applicability, including the
limits on that range. Hotelling’s course provided the microfoundations, in studying the
behavior of firms and of consumers under competitive conditions with many commodities,
but the role of the markets in drawing the individual actors together was taken for
granted.
When I enrolled as a student in the Economics Department at Columbia (1941), I was
assigned a desk in the library stacks near the economics book collection. As is my wont
when placed in the neighborhood of books, I immediately started browsing and ran across
a work by an economist whose name I had never heard mentioned, J.R. Hicks’s Value and
Capital (1939). As apparently happened to other like-minded economics students (e.g., my
good friend, Frank H. Hahn) at that time, it gave me a powerful orientation to economic
analysis. It showed how the techniques of static analysis (already familiar to me from
Hotelling’s course but expressed with more verve and style) could be applied to events
unfolding in time. Savings and investment could be analyzed using the same tools. If the
reader thinks this is obvious, I recommend that he read the controversies over capital
theory, as exemplified by the papers of Frank Knight (1936) and Friedrich von Hayek’s
book (1941) and try to find out, as I tried then, what in the world were the questions being
debated. Hicks had a simple approach: Decisions on commodity consumption and production
today are made jointly with consumption and production in the future. Therefore,
simply put a time subscript on commodities; use the static formalism with the enlarged
commodity space. (To be fair, after reading Hicks, one can understand that Hayek was
saying much the same, but I defy anyone to learn that from reading Hayek alone.)
Of course, Hicks’s reformulation did not end the difficulties in theories of savings and
investment; rather, it enabled one to understand what they were. In brief, as Hicks explained,
the problem was the nonexistence of a full set of futures markets. There were a
few commodity futures markets, and, more importantly, there were markets for credit. As
Hicks emphasized, the nonexistent futures markets were replaced in the calculations of
firms and households by expectations, and the resulting supply and demand behavior
determined “temporary” equilibrium prices and allocations on the current and existing
futures markets. As Hicks showed, one consistent set of expectations was the equilibrium
set that would have been obtained if all markets existed (what later became known as
“rational expectations”).
My enthusiasm for Value and Capital did not exclude a critical attitude to many of its
details. (Indeed, I regard an uncritical attitude to any work as a sign that it is not very
interesting.) On returning from military service, I planned to write a dissertation which
would redo Value and Capital properly, a very foolish idea. I had two motivations. One
was to supply a theoretical model as a basis for econometric estimation.
The other was a strong interest in planning. I would have described myself as a
socialist, although one that had a strong belief in the usefulness of markets. Market
socialism was a widespread view. Hotelling held it. It had been popularized especially by
the works of O. Lange (reprinted in Lipincott 1938) and A.P. Lerner (1946). In the
immediate postwar period, the idea of national planning to supplement markets was
common in Western Europe, and allocation in effect was treated, in principle, as the
solution of a general equilibrium system (although with many simplifications). It appears
in retrospect that the planning had little effect (good or bad) on the development of the
European economies, but a great deal of intellectual energy was expended (see, e.g.,
Malinvaud & Bacharach 1967).
There were two particular points I was considering in attempting to improve the Hicks
model. One was the role of expectations and, in particular, the importance of uncertainty.
Hicks recognized this, but his handling was, as he acknowledged, unsatisfactory. In his
treatment, firms or households might be uncertain about future prices, but they acted as if
there were a certainty-equivalent price. From what I already understood about behavior
under uncertainty, I knew that there were precautionary actions that guarded against
uncertainty and that would not be taken under any conditions known for certain (e.g.,
portfolio diversification), a point already recognized by such economists as Frank Knight,
Irving Fisher, and Albert G. Hart (for a survey I wrote at that time, see Arrow 1951a,
especially section 2).
The second point was the theory of investment by a multiowner firm. Consider a
modern corporation with publicly traded stock; in theory, at least, it has many owners. It
is true that they all have the same objective, maximization of the sum of discounted
profits. But, in the absence of a full set of futures markets, each one computes the optimal
policies according to her expectations of future prices. There is no reason why these
expectations could not differ. As a result, the owners (stockholders) might have different
views on the optimal policy of the firm. It occurred to me that a natural definition of firm
choice would be based on the voting powers of the stockholders. Specifically, investment
project A would be preferred to project B if a (share-weighted) majority of the stockholders
expected a higher sum of discounted profits from A than from B. It occurred to
me that if this is to be a rational theory of the firm, the preference relation so defined had
better be transitive (a concept I had learned in an undergraduate course from the great
logician Alfred Tarski). It took about 15 minutes of figuring to see that majority voting
could easily be intransitive, thus rediscovering what Condorcet had found in 1785, although
I did not become aware of this until 1952. At this moment, I regarded this
discovery as an obstacle, and put it aside until the problem arose in a different form.
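A minimal, hypothetical numerical illustration (not Arrow's own example) of how the share-weighted majority relation can fail to be transitive:

```python
# Hypothetical illustration of how share-weighted majority voting among
# stockholders can be intransitive (a Condorcet cycle).
from itertools import combinations

# Each stockholder holds some shares and ranks three investment projects,
# most preferred first.
stockholders = [
    {"shares": 40, "ranking": ["A", "B", "C"]},
    {"shares": 35, "ranking": ["B", "C", "A"]},
    {"shares": 25, "ranking": ["C", "A", "B"]},
]

def majority_prefers(x, y):
    """True if a share-weighted majority ranks project x above project y."""
    votes_for_x = sum(s["shares"] for s in stockholders
                      if s["ranking"].index(x) < s["ranking"].index(y))
    total = sum(s["shares"] for s in stockholders)
    return votes_for_x > total / 2

for x, y in combinations("ABC", 2):
    print(f"{x} over {y}: {majority_prefers(x, y)}, {y} over {x}: {majority_prefers(y, x)}")
# A beats B (65 of 100 shares) and B beats C (75), yet C beats A (60), so the
# induced "firm preference" is cyclic and hence not transitive.
```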
The economic behavior of a multiowner firm, however, was still a problem, and its
importance was (independently, of course) stressed much later by Ekern & Wilson (1974).
Their paper gave rise to a considerable literature on “stockholder unanimity.” To my
mind, the issue has still never been resolved.
On top of these considerations, I somehow (I cannot remember how) became aware
that the existence of solutions to the system of equations defining a general competitive
equilibrium had been questioned and that my teacher, Abraham Wald, had given some
kind of answer. On returning from military service, I asked him about it. He gave the
reply, which retrospectively appears rather curious, that it was a difficult problem. I had
learned some German in college, but reading papers in that language was sufficiently
difficult that I took his answer as license not to pursue it. What later intrigued both Gerard
Debreu and myself, when we discussed the matter, was that Wald, in an expository paper
(Wald 1936), announced (section II) an existence theorem for equilibrium with pure
exchange, which, although somewhat less general than the one we published, seemed to
require the same techniques. Wald referred to the use of “subtle methods of modern
mathematics,” a phrase that might indeed refer to identifying general equilibrium with
the fixed point of a suitable transformation. The paper in which the proof was to appear
was never published because of the Anschluss (the German annexation of Austria in
1938). Wald’s interests shifted to mathematical statistics on his migration to the United
States; still, one might have supposed that he would use the occasion of my question to
publish the paper or at least to describe it to me.
A large influence was the publication of von Neumann & Morgenstern’s (1944) great
work on game theory. The normal form was very enlightening, and the minimax theorem
fascinating. But zero-sum games were hardly in the spirit of economics, where gains from
trade are at the very core of the theory. “Economic behavior” was embodied, in this work,
in the theory of cooperative games, a subject whose very meaning is still none too clear.
But if the actual content of the theory was difficult to apply, the techniques were not.
What I learned, reinforced by association with the game theorists at the RAND Corporation,
was the usefulness of convex set theory in understanding what economic theory was
all about. This was reinforced by the emergence of George Dantzig’s linear programming,
and its enthusiastic reception and economic interpretation by Koopmans (see Koopmans
1951). The use of separating hyperplanes enabled me to overcome a discomfort I had with
welfare economics; equating marginal rates of substitution did not account for corners,
cases where households or firms had zero purchases of some good. Interpreting prices as
dual to the household’s and firm’s problem completed the link back to individual optimization
theory (Arrow 1951b). That these ideas were spreading widely is attested to by the
virtually simultaneous and very similar treatment of welfare economics by Debreu (1951).
The role of game theory in economics and much more changed with the publication of
Nash’s (1950) brief paper on equilibrium in noncooperative games. Many economists,
with some knowledge of imperfect competition theory, recognized that conceptually he
had long been anticipated by Cournot (1838). The paper, however, supplied a firm basis
by providing an existence theorem, although, as with the minimax theorem, it depended
on introducing a wider view of possible actions, the mixed strategies.
But its greatest immediate impact on many of us was not the substance but the method
of proof. We became acquainted with fixed-point theorems, specifically, Kakutani’s version
with point-to-set mappings, as a way of proving existence. This is one situation where
the significance of the originating paper (in this case, Nash’s) was clear, for it was followed
up by a number of scholars, working independently. It struck me fairly soon that Nash’s
method of proof must be applicable to proving the existence of general competitive
equilibrium. When I got down to systematic work, I constructed a somewhat complicated
game whose equilibrium was the same as competitive equilibrium. The “players” were the
consumers, a set of “anti-consumers” choosing the marginal utilities of income (to minimize
the sum of the consumer’s utility, and the product of the budget surplus and the
marginal utility of income), the firms, and a “market player” choosing prices to minimize
the value of excess demands. There was a technical problem: The competitive game was
played over unbounded strategy domains, but that was not serious. I “proved” the existence of equilibrium
and circulated the paper.
I then received a manuscript from Gerard Debreu. His formulation of the economy was
virtually identical to mine; it showed the common background, particularly in Koopmans’s
work on the production structure in activity analysis, but also a very strong
similarity of outlook. His proof was different in that it used a game where the strategy
domains of some players (the consumers) depended on the strategic choices of others. It
then turned out we had both made the same mistake in that we did not guard against the
possibility of a discontinuity in consumer demand as income approached zero. Some extra
conditions were needed.
This took time. Others, with less meticulous descriptions of the economy, assumed the
continuity that we wanted to derive from other postulates. Lionel McKenzie, equally
influenced by Nash, wrote the first published proof of existence (McKenzie 1954), followed
shortly by Arrow & Debreu (1954).
During this period, I had been brooding about the incorporation of uncertainty into
general equilibrium. I had learned in my undergraduate days that probability distributions
were taken over some underlying space, that the element was the individual “state of
nature.” Assumptions that distributions were normal or could be characterized by two or
three moments seemed not to get to the heart of the matter. While I was still in Chicago (to
1949), Leonard J. Savage was developing his axiomatic characterization, which derived
beliefs in the form of probability distributions as part of rational behavior under uncertainty.
Again, the primitive idea was that of bets on states of nature. It occurred to me,
then, that the natural formulation of behavior under uncertainty was to characterize
commodities as indexed by the state of nature in which they are to be used. That is, the
elementary contract is to deliver one unit of a specified commodity if a specified state of
nature occurs. Any other contract could be regarded as a bundle of these elementary
contingent contracts. This was completely analogous to Hicks’s characterization of commodities
as indexed by time.
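Schematically, and in notation chosen here for illustration rather than that of the 1953 paper, the construction is:

```latex
% Contingent commodities, stated schematically (illustrative notation, not
% that of the 1953 paper). Let s = 1, ..., S index the states of nature and
% e_s denote the elementary contract paying one unit of the specified
% commodity if and only if state s occurs. A contract promising z_s units in
% state s is the bundle
\[
  z = \sum_{s=1}^{S} z_s \, e_s ,
\]
% so if the elementary contingent contracts trade at prices q_1, ..., q_S,
% the contract z is valued at
\[
  \sum_{s=1}^{S} q_s z_s ,
\]
% exactly parallel to Hicks's device of indexing commodities by date.
```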
I saw lots of issues to be tackled, but my schedule was altered by an exogenous event.
With about three months’ notice, I was invited to a conference on the economics of risk
bearing organized by two scholars from the French nationalized electricity industry (Électricité
de France), Pierre Massé and Georges Morlat, to be held in a suburb of Paris. As I
also had other research obligations, my paper was very short, taking account only of pure
exchange in a single period. Further, the conference was held entirely in French, and the
paper (Arrow 1953) was translated for the occasion and published in French. (The original
English text was finally published in 1964). Debreu promptly took up the task of completing
the analysis, particularly by incorporating production and extending the time horizon
to many periods (Debreu 1959, chapter 7).
Despite my pride in this accomplishment, there was an inadequacy that Koopmans
suggested to me at the time. In effect, I was ascribing uncertainty ultimately to exogenous
events. The contracts I called for were contingent on the states of nature, which, in turn,
determined the general equilibrium. But, he urged, much uncertainty was endogenous to
the economic system. Put as I would understand it today, many contingent contracts are
traded, but they are contingencies determined by the equilibrium values of prices and
quantities. For example, the value of a common stock depends on the profits of the
corporation (actually, on expectations of such profits). This is a function of input and
output prices (under competitive conditions), not of exogenous events. Current events
certainly suggest the importance of this viewpoint, but this is not the place to expand on it.
To keep this account in reasonable length, I omit discussion of further work in general
equilibrium theory. I had the benefit of working with many collaborators, most notable
Leonid Hurwicz and Frank H. Hahn. There were a series of papers on the stability
of general competitive equilibrium, mostly but not entirely with Hurwicz, reprinted in
Arrow & Hurwicz (1977), and a survey of the entire field, with a number of new results
(Arrow & Hahn 1971).
4. UNCERTAINTY AND INFORMATION
Statistics is about information and its best use. Along with uncertainty comes the recognition
that it may be reduced by the acquisition of information. Indeed, information is
precisely the observation of a random variable, so that subsequent behavior should be
and can be based on the distribution of the variables of interest conditional on the value of
the observed variable. This principle has guided much of my work, although it has not
given rise to nearly as coherent a body of analysis as general equilibrium theory.
In the summer of 1948, at the RAND Corporation, Meyer A. Girshick (then a staff
member there, later Professor of Statistics at Stanford, unfortunately deceased at an early
age) became interested in the foundations of the then-new concept of sequential statistical
analysis, and drew David Blackwell and me, both summer visitors, into working with him.
Sequential analysis had been one of the great creations of Abraham Wald; developed
during World War II, the ideas were set out in his book (Wald 1947), and its optimality
properties extended further by Wald & Wolfowitz (1948). However, the general logic was
unclear. On study, one could see that there was a recursive element, in that at each stage
there was an optimization problem of the same form. We were able to clarify the underlying
logic (Arrow et al. 1949). The usefulness of recursive methods became apparent to me
for further uses. Richard Bellman (1957) has stated that this paper suggested to him the
fundamental notion of dynamic programming.
The concept of general equilibrium under uncertainty, referred to in Section 3, is one
example of the influence of probabilistic thinking. The influence of uncertainty on individual
behavior had been recognized by economists, as I note above. Interest in the field
was sharpened by the revival of the expected-utility theory of behavior under risk. This
had, of course, been first advanced by Daniel Bernoulli (1738!), as a resolution of the
St. Petersburg paradox, which was circulating among the probability theorists of the early
eighteenth century. But Bernoulli showed a further implication of expected-utility theory,
that there was a market for actuarially unfair insurance (he considered marine insurance).
It is a small step from this observation to an explanation of portfolio diversification.
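For concreteness, the paradox and Bernoulli's resolution in their standard textbook form (the details of Bernoulli's own presentation differ somewhat):

```latex
% The St. Petersburg gamble in its standard textbook form: a fair coin is
% tossed until the first head, and the payoff is 2^k if the head arrives on
% toss k. The expected money value diverges,
\[
  \sum_{k=1}^{\infty} 2^{-k} \cdot 2^{k} = \sum_{k=1}^{\infty} 1 = \infty ,
\]
% yet expected utility under Bernoulli's logarithmic utility is finite,
\[
  \sum_{k=1}^{\infty} 2^{-k} \ln 2^{k} = \ln 2 \sum_{k=1}^{\infty} k \, 2^{-k} = 2 \ln 2 ,
\]
% which is why a bounded willingness to pay for the gamble is consistent with
% expected-utility maximization.
```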
There seems to have been no genuine application of the expected-utility theory of
individual behavior under risk, even after the rise of neoclassical theory gave central
importance to utility-based explanations, before the 1940s. Attention was revived by von
Neumann & Morgenstern’s derivation (1947, pp. 617–32) in their second edition of the
expected-utility theory from more primitive and highly appealing assumptions about
rational behavior under risk.
There was considerable resistance; I know that I felt that there must be something wrong
with the argument. A good part of the resistance was that the newer and more mathematical
thinkers were convinced that utility was an ordinal, not a cardinal concept. After a few years
of discussion at conferences and in the published literature, it became clear that there was no
contradiction between expected-utility theory and the choice-based concepts of preference
that led to the ordinalist position. The discussion had, in my view, the effect of making the
expected-utility theory well-known and available for application to specific economic issues.
To me, the greatest stimulation came from the work of James Tobin (1958), which used
the expected-utility theory to derive liquidity preference (i.e., the demand for money for
other than transaction motives). I gave a course on the economics of uncertainty in 1962,
and my attempts to clarify Tobin’s work led to a more systematic treatment. In particular, I
was led to define the concepts of relative and absolute risk aversion and to hypothesize
that the former was increasing and the latter decreasing as wealth increased. Having been
invited to give the Yrjö Jahnsson lectures in Helsinki in December 1963, I presented, and
published, these results there (Arrow 1965, lecture 2). The concepts of relative and absolute
risk aversion were developed independently by John W. Pratt (1964), again one of
those multiple discoveries that shows how the time was ripe.
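In the now-standard notation (for a utility-of-wealth function u with u' > 0 and u'' < 0), the two measures and the hypotheses just mentioned are:

```latex
% The Arrow-Pratt measures in now-standard notation:
\[
  A(w) = -\frac{u''(w)}{u'(w)} \quad \text{(absolute risk aversion)}, \qquad
  R(w) = -\,w\,\frac{u''(w)}{u'(w)} = w\,A(w) \quad \text{(relative risk aversion)}.
\]
% The hypotheses referred to in the text: A(w) decreasing and R(w) increasing
% as wealth w increases.
```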
The presence of uncertainty has one very important implication, the possibility and
importance of information. If there are two random variables that are not independent
and if the payoffs to a range of possible actions are affected by one, then observing the
other means that one’s behavior should rationally be guided by the conditional distribution
of the variable of interest given the observed variable. This is what statistical inference
is all about; this is also the subject matter of Claude Shannon’s (1948) theory of communication,
which attracted widespread interest.
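Put formally, in a standard decision-theoretic rendering rather than a formula drawn from the papers cited:

```latex
% With payoff-relevant variable X, an observed signal Y not independent of X,
% actions a, and payoff U(a, X), the rational action conditions on the observation,
\[
  a^{*}(y) = \arg\max_{a} \; \mathrm{E}\!\left[ U(a, X) \mid Y = y \right],
\]
% and the ex ante value of observing Y is
\[
  \mathrm{E}_{Y}\!\left[ \max_{a} \mathrm{E}[U(a,X) \mid Y] \right] - \max_{a} \mathrm{E}[U(a,X)] \;\ge\; 0 .
\]
```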
The importance of information was more thoroughly emphasized to me by
Jacob Marschak (“Jascha,” as we all knew him), my boss as Research Director of the
Cowles Commission in Chicago, and one who became very interested in the economics of
information by individuals and by groups (“teams,” in his terminology), for whom information
not only existed but was transmitted (see Marschak & Radner 1972 for the theory
of teams and Marschak 1959 for the applications of Shannon’s information measures to
microeconomic behavior). Another source, with the greatest influence of all, has been the
introduction of search theory to economic choice, the very important paper of George
Stigler (1961). Although I have written a significant number of papers inspired by these
ideas, they have had little impact.
However, the idea that information was an important consideration in economic analysis
was expressed in a paper on the welfare economics of medical care (Arrow 1963), one
that has had considerable influence, not only in the original field of application. This was
initiated by a request from the Ford Foundation, at that time a major supporter of
economic and other social science research. Victor Fuchs was then an officer of the
foundation. He thought it would be interesting to have pairs of studies in each of several
fields of interest to public policy, each pair consisting of one study by someone engaged in
the field and one by a theorist, to provoke the use of theoretical tools in that area. Fuchs
invited me to be the theorist considering the economics of medical care. I have always
found it difficult to resist challenges like this, and indeed they have been most stimulating.
The insurer did not have as much information about a particular case as the physician, so
the former could not hold payments to some minimum, and the patient was not in a
position to know how well and how devotedly the physician was handling his case.
My own view at the time (and today) is that this situation of asymmetric information,
as it came to be called, was met by the creation of social institutions rather different from
the market. In any case, I understood the applicability of the terms I had learned in my
brief brushes with the actuarial world, moral hazard and adverse selection, now so widely
used in economic analysis.
The concept of asymmetric information has become a dominant analytic tool, to my
mind, the most important development of economic theory after 1950. The scope of
application has gone considerably beyond the medical field, most especially to finance.
Much of it comes under the headings of “principal-agent theory” and “mechanism design”
(for a useful survey, see Laffont & Martimort 2002).
With the same background and with exposure in my summers at the RAND Corporation
to problems of the development of new weapons, I took a similar view to the
production of information (i.e., innovation) (Arrow 1962a). Here, the economic meaning
of information as a commodity became even clearer. It was scarce and costly to acquire,
and it might have value as an input (into treatment or into production of ordinary
commodities), but it was not used up in the sense that ordinary commodities are; if one
agent uses or transmits information, she still has it.
5. DYNAMIC SYSTEMS
In the development of any theory of capital, including inventories or even the holding of
money and securities, an adequate theory of behavior over time (optimal or otherwise) is
called for. This was as true of operations research, with which I was much involved in the
1950s and 1960s, as it was of economic theory. The formal treatment of future commodities
by Hicks did not adequately exploit the fundamentally recursive nature of action over time,
a point I learned from the work on sequential analysis with Blackwell and Girshick, mentioned
above. The point is that the laws governing motion from one time period to the next
are the same over time, although the initial conditions would, of course, be time-varying.
The importance of dynamic systems in this sense was recognized in economics in the 1930s,
with primary emphasis on the explanation of business cycles; a clear statement of the issues,
with strong emphasis on the role of stochastic factors, was the pioneering paper of Frisch
(1933), which was followed up empirically by the work of Tinbergen, discussed above.
Curiously, I had brushed up against dynamic analysis on two separate occasions and
had not really gone further into its economic implications. One was a master’s thesis in
mathematics in which I reviewed the then-existing literature on stochastic processes. The
other, more original, was a paper written while serving as a weather officer, in a research
unit. The question was posed as to how to use forecasts of winds to minimize the flight
time of an airplane. There was a literature on this question, by some very distinguished
mathematicians (Ernst Zermelo, Tullio Levi-Civita), but they all assumed a flat Earth.
They were using the calculus of variations in a somewhat unusual application. I had to
learn the calculus of variations (and also struggle with my limited German, the language in
which the papers were written) and succeeded in showing how to vary the heading of the
airplane so as to minimize flight time over a spherical Earth (Arrow 1949).
In the summer of 1950, the Office of Naval Research organized a working group on the
question of inventory holdings, a major issue with the military establishment. I worked
with Jacob Marschak and with the mathematician Theodore Harris, an expert in stochastic
processes (Arrow et al. 1951). We quickly realized the existing literature needed a
formulation that was both dynamic and stochastic. There were two discrete branches of
inventory analysis, one emphasizing dynamic deterministic models marked by increasing
returns and the other planning for a stochastic demand. Our model, which has had a great
influence, combined these. We drew from the first branch the idea that the optimal policy
would have the so-called (s, S) form, that is, order when the inventory gets down to a
prescribed level, denoted by s, and the amount ordered was the amount needed to get the
total inventory to a fixed level, S. We found the optimal levels of s and S, given the relevant
information, that is, the distribution of demands, the fixed cost of ordering as such, the
cost per unit ordered, and the loss if a demand could not be met.
This was not a true dynamic programming solution, in that the form of the policy was
given instead of being shown to be optimal. Indeed, Dvoretzky et al. (1952, pp. 194–96)
showed by example that, under certain assumptions about costs and distributions, the
(s, S) policy was not optimal. A very plausible sufficient condition for optimality of the
(s, S) policy, involving costs only, was found a few years later by Scarf (1960).
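A minimal simulation sketch of an (s, S) policy, with hypothetical costs and a stand-in demand distribution (not the Arrow-Harris-Marschak model itself), may make the form of the policy concrete:

```python
# Hypothetical sketch of an (s, S) ordering policy under random demand:
# when stock falls to s or below, order enough to bring it back up to S.
import random

def simulate_sS(s, S, periods=2_000, fixed_order_cost=10.0, unit_cost=1.0,
                holding_cost=0.1, shortage_cost=5.0, seed=0):
    """Average per-period cost of the (s, S) policy under these assumed costs."""
    rng = random.Random(seed)
    stock, total_cost = S, 0.0
    for _ in range(periods):
        if stock <= s:                                   # reorder point reached
            order = S - stock
            total_cost += fixed_order_cost + unit_cost * order
            stock = S
        demand = rng.randint(0, 9)                       # stand-in demand distribution
        sold = min(stock, demand)
        total_cost += shortage_cost * (demand - sold)    # penalty for unmet demand
        stock -= sold
        total_cost += holding_cost * stock               # end-of-period holding cost
    return total_cost / periods

# Crude search for the least-cost (s, S) pair under these assumptions.
best = min(((simulate_sS(s, S), s, S)
            for s in range(0, 20) for S in range(s + 1, 40)), key=lambda t: t[0])
print(f"approx. best policy: s={best[1]}, S={best[2]}, avg cost {best[0]:.2f}")
```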
A new research emphasis in postwar economics was that on economic growth. The
systematic analysis by explicit dynamic models was made prominent by Solow’s (1956)
famous paper; as he made clear, he was following up and clarifying earlier work, especially
of Roy Harrod and Evsey Domar. He allowed for a wide variety of aggregate production
functions and assumed both a fixed savings rate and an exogenously given rate of Hicks-neutral
technological growth. Although the basic approach became widespread almost
immediately, a number of economists felt both of these last assumptions were limiting.
Some stressed the alternative assumption that the savings rate was itself rationally determined,
to optimize some function of present and future consumptions. This led to the
theory of optimal economic growth. Others were concerned that technological change was
itself the result of economic and other decisions, not to be taken as exogenously given,
leading to endogenous growth theory.
Optimal growth theory had in fact already been started, in two (independent) papers
with similar roots in the calculus of variations, those of Frank Ramsey (1928) and Hotelling
(1931). On the production side, Ramsey introduced the now-standard assumption of
an aggregate production function, with output depending on capital (labor being assumed
fixed) and divided between investment and consumption. The maximand was the integral
of utilities over time (i.e., zero discounting). Hotelling had a very novel idea, making
aggregate output a function of both capital and an exhaustible resource. This was the first
really thoughtful analysis of exhaustible resources; his maximand, like most of the
subsequent literature, is the integral of discounted utilities.
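In schematic modern notation (not that of the original papers), the two problems are:

```latex
% Ramsey's problem, schematically: capital k, consumption c, aggregate
% production f(k), utilities added without discounting,
\[
  \max_{c(\cdot)} \int_{0}^{\infty} u\bigl(c(t)\bigr)\,dt
  \quad \text{s.t.} \quad \dot{k}(t) = f\bigl(k(t)\bigr) - c(t).
\]
% Hotelling's problem makes output depend on an exhaustible resource stock x,
% extracted at rate r, with a discounted-utility maximand,
\[
  \max \int_{0}^{\infty} e^{-\rho t} u\bigl(c(t)\bigr)\,dt
  \quad \text{s.t.} \quad \dot{k} = f(k, r) - c, \qquad \dot{x} = -r, \qquad x(t) \ge 0 .
\]
```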
The first important developments of Ramsey’s work came in the independent papers of
David Cass (1965) and Koopmans (1965). The results were very important, but the
methods employed were ad hoc and not easily used for variations and generalizations.
Koopmans (1960) had also given a strong argument that, with an infinite horizon,
discounting was essential to avoid clearly paradoxical conclusions. There have been a
number of subsequent papers that have given other similar arguments, and I am persuaded
completely that it is incoherent to treat all future generations as equivalent to the present
and to each other (despite Ramsey’s precedent).
All this, like the work on inventory theory cited earlier, was in the spirit of dynamic
programming. The formalisms did not explicitly present a set of prices that played a
dominant role in allocation. (Actually, they could have easily been derived but were not.)
A different, although logically equivalent, formalism, the optimal control theory developed
by the Russian mathematician L.S. Pontryagin and his associates (Pontryagin
et al. 1962), filled the bill admirably, although they certainly did not have economic
applications in mind. Their methodology immediately enabled a wide variety of dynamic
optimization problems to be solved, in the sense of deriving a set of differential equations
characterizing the optimal paths.
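Applied to a one-sector growth problem of the kind just described, the textbook form of the maximum principle reads (again schematically):

```latex
% Textbook maximum principle for the discounted growth problem, schematically:
% current-value Hamiltonian with costate (shadow price) lambda,
\[
  H(k, c, \lambda) = u(c) + \lambda\,\bigl(f(k) - c\bigr),
\]
% conditions along an optimal path:
\[
  \frac{\partial H}{\partial c} = 0 \;\Rightarrow\; u'(c) = \lambda, \qquad
  \dot{\lambda} = \rho\,\lambda - \frac{\partial H}{\partial k}
               = \bigl(\rho - f'(k)\bigr)\lambda, \qquad
  \dot{k} = f(k) - c .
\]
% The costate lambda plays the role of an implicit price of capital, which is
% what makes this formalism convenient for allocation questions.
```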
The Pontryagin theory has certainly had wide repercussions in discussions of dynamic
problems, particularly those in which public policy issues (externalities) are at stake.
Mordecai Kurz and I soon undertook a major research project into the criteria for public investment,
making extensive use of these methods (Arrow & Kurz 1970).
The latest set of applications by many, including myself, has been stimulated by the
problems of climate change. I have especially learned by working with colleagues such as
Karl-Göran Mäler and Partha Dasgupta (Arrow et al. 2003a,b), discussing, for example,
extensions to optimal dynamic behavior when there are some nonoptimalities (dynamic
second-best policies).
The other development, endogenous growth theory, has also grown considerably. It
necessarily introduces noneconomic factors and economic factors that enter in unusual
ways. I was led to an early exemplar of this approach through the concept of “learning by
doing.” Industrial engineers had found that, when producing the same commodity, the
cost of production fell with cumulative output in a fairly systematic fashion, evidently
because there were a number of improvements in production techniques as the result of
experience. I suggested that the same model might work at a macroeconomic level (Arrow
1962b). Although I think this point is well accepted, there are many other endogenous
factors in technical change; many of them can be found in Aghion & Howitt (1998).
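The engineers' regularity referred to here is usually written as a power law in cumulative output; this is a standard formulation rather than necessarily the exact specification of Arrow (1962b):

```latex
% The "learning curve" in its usual power-law form: unit cost after cumulative
% output Q,
\[
  c(Q) = c_{1}\, Q^{-\beta}, \qquad \beta > 0 ,
\]
% so each doubling of cumulative output lowers unit cost by the factor
% 2^{-beta}, reflecting improvements learned through production experience.
```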
6. SOCIAL VALUES IN ECONOMIC BEHAVIOR
I conclude with a speculation as to current and future developments. Economic analysis has
tended to be based on individual behavior. There is assumed to be only one set of social
institutions, namely, markets. Perhaps governments also matter when dealing with market
failures, also called externalities. But clearly social institutions and social interactions
among individuals also matter, and perhaps significantly. I note, for example, that two-thirds
of public expenditures in the United States (those on health, retirement, and education)
are essentially for the production of private goods, yet are accepted and only argued
about on the margin. Even the concern about climate change, addressed to the welfare of
generations far in the future, must be regarded as a strong social obligation not naturally
reducible to individual motivation. It is also easy to identify information channels outside
the market and to note other social restrictions on behavior. Network theory has been
imported into economics as a tool, although it may not be all that useful or needed.
DISCLOSURE STATEMENT
The author is not aware of any affiliations, memberships, funding, or financial holdings
that might be perceived as affecting the objectivity of this review.
LITERATURE CITED
Aghion P, Howitt P. 1998. Endogenous Growth Theory. Cambridge, MA: MIT Press
Angrist J, Krueger AB. 2001. Instrumental variables and the search for identification: from supply and
demand to natural experiments. J. Econ. Perspect. 15:69–85
Arrow KJ. 1949. On the use of winds in flight planning. J. Meteorol. 6:150–59
Arrow KJ. 1951a. Alternative approaches to the theory of choice in risk-taking situations. Econometrica
19:404–37
Arrow KJ. 1951b. An extension of the basic theorems of classical welfare economics. In Proc. 2nd
Berkeley Symp. Math. Stat. Probab., ed. J Neyman, pp. 507–32. Berkeley: Univ. Calif. Press
Arrow KJ. 1953. Le rôle des valeurs boursières pour la répartition la meilleure des risques. Économétrie
11:41–47
Arrow KJ. 1962a. Economic welfare and the allocation of resources for invention. In The Rate and
Direction of Inventive Activity: Economic and Social Factors, ed. RR Nelson, pp. 609–25.
Princeton, NJ: Princeton Univ. Press (NBER)
Arrow KJ. 1962b. The economic implications of learning by doing. Rev. Econ. Stud. 29:155–73
Arrow KJ. 1963. Uncertainty and the welfare economics of medical care. Am. Econ. Rev. 53:941–73
Arrow KJ. 1964. The role of securities in the optimal allocation of risk bearing. Rev. Econ. Stud.
31:91–96
Arrow KJ. 1965. Aspects of the Theory of Risk-Bearing. Helsinki: Yrjö Jahnsson Säätiö
Arrow KJ, Blackwell D, Girshick MA. 1949. Bayes and minimax solutions of sequential decision
problems. Econometrica 17:213–44
Arrow KJ, Dasgupta P, Mäler K-G. 2003a. The genuine savings criterion and the value of population.
Econ. Theory. 21:217–25
Arrow KJ, Dasgupta P, Mäler K-G. 2003b. Evaluating projects and assessing sustainable development
in imperfect economies. In Economics for an Imperfect World, ed. R Arnott, B Greenwald,
pp. 299–330. Cambridge, MA: MIT Press
Arrow KJ, Debreu G. 1954. Existence of an equilibrium for a competitive economy. Econometrica
22:265–90
Arrow KJ, Hahn FH. 1971. General Competitive Analysis. Edinburgh: Oliver and Boyd
Arrow KJ, Harris TE, Marschak J. 1951. Optimal inventory theory. Econometrica 19:250–72
Arrow KJ, Hurwicz L. 1977. Studies in Resource Allocation Processes. Cambridge, UK: Cambridge
Univ. Press
Arrow KJ, Kurz M. 1970. Public Investment, the Rate of Return, and Optimal Fiscal Policy. Baltimore:
Johns Hopkins Press
Bellman R. 1957. Dynamic Programming. Princeton, NJ: Princeton Univ. Press
Bernoulli D. 1738. Specimen theoriae novae de mensura sortis. Comm. Acad. Sci. Imp. Petropolitanae
5:175–92
An English translation is available: Bernoulli D. 1954. Exposition of a new theory on the measurement
of risk. Econometrica 22:23–36
Burns AF, Mitchell WC. 1946. Measuring Business Cycles. New York: Natl. Bur. Econ. Res.
Cass D. 1965. Optimum growth in an aggregative model of capital accumulation. Rev. Econ. Stud.
32:233–40
Chamberlin EH. 1933. The Theory of Monopolistic Competition. Cambridge, MA: Harvard Univ.
Press
Cournot AA. 1838. Recherches sur les principes mathématiques de la théorie des richesses. Paris:
Rivière
Debreu G. 1951. The coefficient of resource utilization. Econometrica 19:273–92
Debreu G. 1959. Theory of Value. New York: Wiley
Dvoretzky A, Kiefer J, Wolfowitz J. 1952. The inventory problem: I. Case of known distributions of
demand. Econometrica 20:187–222
Ekern S, Wilson R. 1974. On the theory of the firm in an economy with incomplete markets. Bell J.
Econ. Manag. Sci. 5:171–80
Frisch R. 1933. Propagation problems and impulse problems in dynamic economics. In Economic
Essays in Honour of Gustav Cassel, pp. 171–205. London: Cass
Hicks JR. 1939. Value and Capital. Oxford: Clarendon
Hotelling H. 1931. The economics of exhaustible resources. J. Polit. Econ. 39:137–75
Keynes JM. 1936. The General Theory of Employment, Interest and Money. New York: Harcourt
Brace
Klein LR. 1950. Economic Fluctuations in the United States, 1921–41. New York: Wiley
Knight FH. 1936. The quantity of capital and the rate of interest. J. Polit. Econ. 44:433–63, 612–42
Koopmans TC. 1942. Serial correlation and quadratic forms in normal variables. Ann. Math. Stat.
13:14–33
Koopmans TC. 1947. Measurement without theory. Rev. Econ. Stat. 29:161–72
Koopmans TC, ed. 1950. Statistical Inference in Dynamic Economic Models. New York: Wiley
Koopmans TC, ed. 1951. Activity Analysis of Production and Allocation. New York: Wiley
Koopmans TC. 1960. Stationary ordinal utility and impatience. Econometrica 28:287–309
Koopmans TC. 1965. On the concept of optimal economic growth. Pontificae Acad. Sci. Scripta Varia
28:225–300
Laffont J-J, Martimort D. 2002. The Theory of Incentives. Princeton, NJ: Princeton Univ. Press
Lerner AP. 1946. The Economics of Control. New York: Macmillan
Lipincott B, ed. 1938. The Economic Theory of Socialism. Minneapolis: Univ. Minn. Press
Malinvaud E, Bacharach MOL, eds. 1967. Activity Analysis in the Theory of Growth and Planning.
New York: Macmillan
Marschak J. 1959. Remarks on the economics of information. In Contributions to Scientific Research
into Management, pp. 79–98. Los Angeles: Western Data Processing Center, Univ. Calif.
Marschak J, Radner R. 1972. Economic Theory of Teams. New Haven, CT: Yale Univ. Press
McKenzie L. 1954. On equilibrium in Graham’s model of world trade and other competitive systems.
Econometrica 22:147–61
Nash JF Jr. 1950. Equilibrium points in n-person games. Proc. Natl. Acad. Sci. USA 36:48–49
Pontryagin LS, Boltyanskii VG, Gamkrelidze RV, Mischenko EF. 1962. The Mathematical Theory of
Optimal Processes. New York: Interscience
Pratt JW. 1964. Risk aversion in the small and in the large. Econometrica 32:122–36
Ramsey FP. 1928. A mathematical theory of savings. Econ. J. 38:543–59
Robinson J. 1933. The Economics of Imperfect Competition. London: Macmillan
Scarf H. 1960. The optimality of (S, s) policies in the dynamic inventory problem. In Mathematical
Methods in the Social Sciences, ed. KJ Arrow, S Karlin, P Suppes, pp. 196–202. Stanford, CA:
Stanford Univ. Press
Schumpeter J. 1939. Business Cycles. New York: McGraw-Hill. 2 vols.
Shannon CE. 1948. A mathematical theory of communication. Bell Syst. Technol. J. 27:379–423,
623–56
Solow RM. 1956. A contribution to the theory of economic growth. Q. J. Econ. 70:65–94
Stigler GJ. 1961. The economics of information. J. Polit. Econ. 69:213–25
Tinbergen J. 1939. Business Cycles in the United States, 1919–1932. Geneva: League of Nations
Tobin J. 1958. Liquidity preference as behavior towards risk. Rev. Econ. Stud. 25:65–86
von Hayek FA. 1941. The Pure Theory of Capital. Chicago: Univ. Chicago Press
von Neumann J, Morgenstern O. 1944. Theory of Games and Economic Behavior. Princeton: Princeton
Univ. Press
von Neumann J, Morgenstern O. 1947. Theory of Games and Economic Behavior. Princeton: Princeton
Univ. Press. 2nd ed.
Wald A. 1936. Über einige Gleichungssysteme der mathematischen Ökonomie. Z. Nationalökonomie
7:637–70
Wald A. 1947. Sequential Analysis. New York: Wiley
Wald A, Wolfowitz J. 1948. Optimum character of the sequential probability ratio test. Ann. Math.
Stat. 19:326–39
