Equity Markets

From: John Conover <john@email.johncon.com>
Subject: Equity Markets
Date: Sun, 17 Nov 1996 21:48:46 -0800


Hi Dave. You had asked the question whether fractal analysis of the
equity markets included such things as market "moods," "nervousness,"
"sentiments," and "beliefs" of the investors. Yes it does-in point of
fact, fractal analysis assumes that these inductively rationalized
"beliefs" are the "engine" that make the markets work. The how and why
is a bit complicated. The attached is from Chapter 2, "Fractal
Analysis of Various Market Segments in the North American Electronics
Industry," John Conover, 1995, so there are some copyright issues. So
please limit distribution.

I chose to present the issues from a game-theoretic tautology,
instead of fractal analysis, since the logic is easier to follow. There
is only one equation-the equation for compound interest. Pay
particular attention to the footnotes regarding self-referential logic
systems-that is the key. The understanding as to why the prisoner's
dilemma has no solution, unlike the game of Mora, is a key
point. There are references in the bibliography to delve further into
the issues.

        John

Generalization

    Consider the general equation for a fractal, which is also known
    as a random walk, or Brownian motion.

    R      = R (1 + (f  * F )) .................................. (1
     n + 1    n       n    n

    Where R is the value of the capital, or cumulative returns, of an
    investment in the n'th time interval, f is the fraction of the
    capital placed at risk, in the n'th interval, and F is a function
    of a random variable. A typical, illustrative application would be
    for a simple tossed coin game. Equation 1 is the recursive
    discrete time formula for the exponential function. If F has as
    many wins as losses, one should, obviously, not play the
    game. However, if F has more wins than losses, say P = wins /
    (wins + losses) , then one's wager fraction, f, should be f = 2P -
    1. This will maximize the exponential growth of the capital. The
    issue discussed is the extensibility of Equation 1 to other than
    simple games, for example the equity markets. This will be done
    with an analysis of a two person mixed strategy game, Mora. Then
    the analysis of a two person game, the prisoner's dilemma, where
    it can be shown that there is no complete and consistent strategy
    will be presented. Finally, a full multi-agent market where the
    strategies are, by necessity, inductive and incomplete will be
    discussed. (Note that in Equation 1, if F = 1 for all n, then the
    equation becomes the simple compound interest formula.)
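
    The growth-maximizing wager in Equation 1 can be checked
    numerically. The following Python sketch (illustrative only; the
    chapter itself presents no code) computes the expected logarithmic
    growth rate of the capital for the simple tossed coin game, and
    confirms that f = 2P - 1 beats nearby wager fractions:

```python
import math

def expected_log_growth(p, f):
    # Expected log-growth per play of Equation 1 for a tossed coin:
    # win probability p, wager fraction f, F in {+1, -1}.
    return p * math.log(1.0 + f) + (1.0 - p) * math.log(1.0 - f)

p = 0.55                # 55% of the tosses are wins
f_star = 2.0 * p - 1.0  # the optimal wager fraction, f = 2P - 1

# f = 2P - 1 yields a higher growth rate than nearby fractions.
for f in (f_star - 0.05, f_star + 0.05):
    assert expected_log_growth(p, f_star) > expected_log_growth(p, f)
```

    Sweeping f over a finer grid shows the growth rate peaking at
    f = 2P - 1 and turning negative as f approaches unity, ie.,
    over-wagering destroys the capital even in a favorable game.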

    To reiterate the general concepts presented so far, a fractal is a
    cumulative sum of a random process. In the literature, it is
    sometimes called a Brownian motion, or "random walk," process
    since, at any time, the next element in the process time series is
    a random increment added to the current element in the time
    series:

        "We emphasize that in Brownian motion it is not the position
        of the particle at one time that is independent of the
        position of the particle at another; it is the displacement of
        that particle in one time interval that is independent of the
        displacement of the particle during another time interval."

    This is a subtle concept. Note that the term "cumulative sum"
    really means that in any time interval, the position of the
    particle is dependent only on the position of the particle in the
    previous time interval, and a random displacement. But the
    position in the previous time interval was dependent only on the
    position in the time interval prior to that, and another
    displacement, and so on, ie., to make a fractal process, we need
    only know where the particle is at the current time, and add a
    displacement to it, for each interval in time. The subtlety is
    that we need only know where the particle is, and not where it has
    been, to calculate where it will be.
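
    This "need only know where the particle is" property is easy to
    render in code. The short Python sketch below (an illustration,
    not from the chapter) builds a fractal time series purely from the
    current position plus an independent random displacement:

```python
import random

def fractal_walk(steps, seed=0):
    # Each new position is the current position plus an independent
    # random displacement -- no memory of where the particle has been.
    rng = random.Random(seed)
    position = 0.0
    series = [position]
    for _ in range(steps):
        position += rng.gauss(0.0, 1.0)
        series.append(position)
    return series

walk = fractal_walk(1000)
assert len(walk) == 1001 and walk[0] == 0.0
```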

    This section will use this concept, and expand the concept of the
    random process to include game-theoretic issues by introducing
    iterated two player mixed strategy games, then a simple
    self-referencing game where no formal strategy can exist, and
    finally multi-player games, where the random process is generated
    by the inconsistency of the self-referential, inductive reasoning
    among the players. In all cases, the iterated time series of such
    games will be argued to be fractal, in nature.

    The Game of Mora

        A simple coin tossing game was analyzed previously.  In this
        section, those concepts will be expanded to include games of
        strategy. The game of Mora, following [pp. 434, Bronowski], is
        very old, (being mentioned in Sanskrit,) and is played between
        two players and, in its simplest version, goes as follows. The
        two players move simultaneously. Each shows either one or two
        fingers, and at the same time guesses whether the other player
        is showing one or two fingers. If both players guess right, or
        both guess wrong, no money changes hands. However, if only one
        player guesses right, the player wins from the other as many
        coins as the two players together showed fingers. The possible
        outcomes of any game are as follows if your call is right, and
        your opponent's wrong:

            1 Guessing your opponent will show 1 finger and showing 1
            finger you will win 2 coins.

            2 Guessing your opponent will show 2 fingers and showing 1
            finger you will win 3 coins.

            3 Guessing your opponent will show 1 finger and showing 2
            fingers you will win 3 coins.

            4 Guessing your opponent will show 2 fingers and showing 2
            fingers you will win 4 coins.

        The game is fair, but a player who knows the right strategy
        will, with average luck, win against one who does not. The
        right strategy is to ignore courses 1) and 4), and to play
        courses 2) and 3) in the ratio of 7 to 5, ie., the right
        strategy is, in any 12 iterations of the game, to play course
        2) on the average 7 times, and course 3) on the average 5
        times. Obviously, your opponent must not know which course you
        are going to play, so the two courses must be intermixed
        randomly.

        The game is zero-sum, meaning that what one player wins, the
        other loses. The mathematical method by which the best
        strategy was found is called game theory. However, it is not
        hard to verify that the strategy is effective by calculating
        what happens when your opponent counters by using course 1),
        2), 3), or 4), above. Namely, if your opponent chooses course:

            1 Course 1), will, on the average, win 7 times out of 12,
            and will win only 2 coins for each win; whereas losses
            will occur 5 times out of 12, and those losses will be 3
            coins for each loss-making an average loss of 1 coin in 12
            iterations of the game.

            2 Course 2), will have no coins change hands, since either
            both players are right, or both are wrong.

            3 Course 3), will have no coins change hands, since either
            both players are right, or both are wrong.

            4 Course 4), will, on the average, win 5 times out of 12,
            and will win 4 coins for each win; whereas losses will
            occur 7 times out of 12, and those losses will be 3 coins
            for each loss-making an average loss of 1 coin in 12
            iterations of the game.

        As in the previously analyzed coin tossing game, the
        objective of each player is to maximize the number of coins
        won over many iterations of the game, ie., to maximize the
        cumulative returns of the game. Note that each player's
        capital will fluctuate, depending on the outcome of a
        particular iteration-and that fluctuation will be random, and
        either 0, 2, 3, or 4 coins. We would expect the time
        series representing the fluctuations in a player's capital to
        be a random walk, which could be represented by a formula
        similar to Equation 1.

        It is often convenient to represent the game as a table, which
        lists all the possibilities of the courses for both players,
        and how much each player would win or lose for each
        course, ie., a "payoff matrix," where one player's
        alternatives are represented by the columns in Table 1, and
        the other player's alternatives are represented by the
        rows. The payoff to a particular game solution is the
        intersection of the row and column of the course played by the
        two players.

                    Table 1, The Game of Mora, Payoff Matrix.

                    +--------------------------------------+
                    |Finger, Guess | 1,1 | 1,2 | 2,1 | 2,2 |
                    +--------------+-----+-----+-----+-----+
                    |          1,1 |   0 |   2 |  -3 |   0 |
                    |          1,2 |  -2 |   0 |   0 |   3 |
                    |          2,1 |   3 |   0 |   0 |  -4 |
                    |          2,2 |   0 |  -3 |   4 |   0 |
                    +--------------+-----+-----+-----+-----+
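
        The average losses claimed above can be checked mechanically
        against Table 1. The following Python sketch (illustrative;
        exact arithmetic via fractions) plays the 7-to-5 mixed
        strategy against each of the opponent's four pure courses:

```python
from fractions import Fraction

# Table 1, transcribed: each entry is the payoff to the row player;
# courses in order 1)=(1,1), 2)=(1,2), 3)=(2,1), 4)=(2,2).
payoff = [
    [ 0,  2, -3,  0],
    [-2,  0,  0,  3],
    [ 3,  0,  0, -4],
    [ 0, -3,  4,  0],
]

# The right strategy: courses 2) and 3) in the ratio 7 to 5.
mix = [Fraction(0), Fraction(7, 12), Fraction(5, 12), Fraction(0)]

# The opponent's expected winnings against each pure counter-course.
opponent_gain = [
    -sum(mix[row] * payoff[row][col] for row in range(4))
    for col in range(4)
]

# Courses 1) and 4) lose 1 coin in 12 on average; 2) and 3) break even.
assert opponent_gain == [Fraction(-1, 12), 0, 0, Fraction(-1, 12)]
```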

        The optimal strategy for a game as simple as Mora can be
        derived by game-theoretic methodology[1] [pp. 56, Luce],
        [pp. 441, Hillier], [pp. 419, Dorfman], [pp. 209, Saaty],
        [pp. 127, Singh], [pp. 435, Strang], [pp. 258, Nering],
        [pp. 67, Karloff], [pp. 105, Kaplan], but in many games of
        interest, the rules are too complicated, and may even change
        over time[2].  In these scenarios, the strategy can be derived
        empirically, over time, using "adaptive control" computational
        methodologies. For example, if the strategy of Mora were not
        known, the optimal ratio of courses could be determined by
        varying the ratio, and observing the effect on the cumulative
        reserves over many iterations of the game. Note that such a
        methodology can be problematical since your opponent may be
        doing the same thing. An example of such a scenario is
        presented in the next section.
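
        The empirical, adaptive derivation just described can be
        sketched as a grid search: vary the ratio of courses 2) and
        3), score each ratio by its worst-case expected payoff over
        the opponent's pure replies, and keep the best. Under that
        (assumed) scoring rule, the Python sketch below recovers the
        7-to-5 ratio:

```python
from fractions import Fraction

# Table 1 payoff to the row player, transcribed again so the sketch
# is self-contained; courses 1)=(1,1), 2)=(1,2), 3)=(2,1), 4)=(2,2).
payoff = [
    [ 0,  2, -3,  0],
    [-2,  0,  0,  3],
    [ 3,  0,  0, -4],
    [ 0, -3,  4,  0],
]

def worst_case(q):
    # Row player's expected payoff against the opponent's best pure
    # reply, playing course 2) with probability q, course 3) with 1-q.
    mix = [0, q, 1 - q, 0]
    return min(
        sum(mix[r] * payoff[r][c] for r in range(4)) for c in range(4)
    )

# Sweep the ratio over a grid, as an adaptive player might: the
# maximin lands exactly on q = 7/12, ie., the 7-to-5 ratio.
best_q = max((Fraction(k, 48) for k in range(49)), key=worst_case)
assert best_q == Fraction(7, 12)
assert worst_case(best_q) == 0
```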

    Prisoner's Dilemma

        A simple mixed strategy zero-sum game was analyzed in the
        previous section. In the game of Mora, the optimal strategy
        does not depend on how your opponent plays the game over
        time. The prisoner's dilemma game is qualitatively
        different. It is also one of the most commonly studied
        scenarios in game theory[3] [pp. 94, Luce], [Poundstone:PD],
        [pp. 262, Waldrop], [pp. 262, Casti:C], [pp. 295, Casti:AR],
        [pp. 199, Casti:PL] [pp. 297, Casti:SFC], [pp. 439, Strang],
        and [pp. 155, Kaplan] [pp. 170, Davis].

        The rules of the game are simple. There are two players, and
        each player has only two choices for each iteration of the
        "game," and those choices are to choose either "A" or "B." If
        both players pick "A," then each wins 3 coins. If one picks
        "A," and the other "B," then the player picking "B" wins 6
        coins, and the other player gets nothing. However, if both
        players pick "B," then both win 1 coin.

        The payoff matrix for the prisoner's dilemma game is shown in
        Table 2, where, as before, one player's alternatives are
        represented by the columns, the other player's alternatives
        are represented by the rows. The payoff to a particular game
        solution is the intersection of the row and column of the
        course played by the two players.

              Table 2, The Prisoner's Dilemma Game, Payoff Matrix.

                             +-------+-----+-----+
                             |Choice |   A |   B |
                             +-------+-----+-----+
                             |     A | 3,3 | 6,0 |
                             |     B | 6,0 | 1,1 |
                             +-------+-----+-----+

        The prisoner's dilemma is not a zero-sum game-neither player
        can ever lose any money. So there is an incentive to always
        play. The choice "A" is known as a "cooperation strategy," and
        the choice "B" is known as the "defection strategy" for each
        player. It is a very subtle and devious game. Here is why,
        and the logic you would go through. Just before you played an
        iteration of the game, you would think:

            1 If you choose "A," there are two possible scenarios:

                i If your opponent chooses "A," you would get 3 coins,
                and your opponent would get 3 coins.

                ii If your opponent chooses "B," you would get
                nothing, and your opponent would get 6 coins.

            2 If you choose "B," there are also two possible
            scenarios:

                i If your opponent chooses "A," you would get 6 coins,
                and your opponent would get nothing.

                ii If your opponent chooses "B," you would get one
                coin, and your opponent would get one coin.

        Note that by choosing "A," the best you could do is to win 3
        coins, and the worst is to win nothing. But, by choosing "B,"
        the best you could make is 6 coins, and the worst is one
        coin. It would appear, at least initially, that "B," is the
        best choice, regardless of what your opponent does.

        But now the logic of the game gets subtle. Your opponent will
        determine the same strategy, and will never play "A."  So you
        both make one coin with every iteration of the game.  But
        you could make 3 coins-if you cooperated, by both playing "A."
        But if you do that, there is an incentive for either player to
        play "B," if he knows the other player is going to play "A,"
        and thus make 6 coins. And we are right back where we
        started. Indeed, a very diabolical game.
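
        The deadlock of this reasoning can be written out
        mechanically. In the Python sketch below (an illustration of
        Table 2, not part of the chapter), "B" is the best reply to
        either choice, yet mutual "B" pays each player only 1 coin
        against the 3 coins of mutual "A":

```python
# Table 2, transcribed: payoff[(you, opponent)] = (your coins, theirs).
payoff = {
    ("A", "A"): (3, 3),
    ("A", "B"): (0, 6),
    ("B", "A"): (6, 0),
    ("B", "B"): (1, 1),
}

def best_reply(opponent_choice):
    # "B" strictly dominates: it pays more whatever the opponent does.
    return max("AB", key=lambda me: payoff[(me, opponent_choice)][0])

assert best_reply("A") == "B"   # 6 coins beat 3
assert best_reply("B") == "B"   # 1 coin beats nothing
# Yet mutual defection pays (1, 1) while mutual cooperation pays (3, 3).
```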

        It is an important concept that you will be basing your
        decision whether to cooperate, ie., choose "A," or defect,
        ie., choose "B," based on how you think your opponent is going
        to play. But your opponent's decision will be based on
        consideration of how you are going to play. Which, in turn,
        will be based on how you think your opponent will play, ad
        infinitum. It is circular logic, or more correctly, the game
        strategy is "self-referential" [pp. 17, pp. 465, Hofstadter]
        [pp. 361, pp. 379, Casti:SFC], [pp. 335, Casti:PL], [pp. 356,
        Casti:AR], [pp. 84, pp. 103, pp. 215, Hodges], [pp. 101,
        Penrose][4].

        This presents a problem in defining an optimal strategy for
        playing the game of the iterated prisoner's dilemma since no
        "theory of operation" of a self-referential system can ever be
        proposed that will be both consistent and complete, ie.,
        whatever theory is proposed, it will not cover all
        circumstances, or provide inconsistent results in other
        circumstances [pp. 465, pp. 471, Hofstadter],
        [Arthur:CIEAFM]. The best way to play the game is deductively
        indeterminate. This indeterminacy pervades economics and game
        theory [Abstract, Arthur:CIEAFM].

        However, just because such problems do not have axiomatized,
        provably robust solutions does not mean that good strategies
        do not exist. For example, the "tit-for-tat" strategy
        [pp. 239, Poundstone:PD] has been shown to be very
        effective. The objective is to avoid letting the game
        degenerate into both players playing defection strategies. It
        is very simple, and consists of cooperating, ie., playing "A,"
        on the first iteration of the game, and then doing whatever the
        other player did on the previous iteration[5]. Note that it is
        a "nice" strategy, (in the jargon of game theory, a "nice"
        strategy is one that never defects first.) It is also a
        "provocable" strategy-it defects in response to a defection by
        the opponent. It is also a "forgiving" strategy-the opponent
        can implicitly "learn" that there is an incentive for
        cooperating after a defection[6]. An important concept of the
        tit-for-tat strategy is that, unlike the game of Mora, the
        strategy does not have to be kept secret. When one is faced by
        an opponent that is playing tit-for-tat, one can do no better
        than to cooperate. This makes tit-for-tat a stable strategy.
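
        Tit-for-tat is simple enough to state in a few lines of code.
        The Python sketch below (illustrative; the strategy and
        function names are mine) pits it against an always-defect
        opponent over a short match, where it loses only the opening
        iteration, and against itself, where it cooperates throughout:

```python
# Table 2 payoffs: (row score, column score) for each pair of choices.
payoff = {("A", "A"): (3, 3), ("A", "B"): (0, 6),
          ("B", "A"): (6, 0), ("B", "B"): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first; thereafter echo the opponent's last move.
    return their_history[-1] if their_history else "A"

def always_defect(my_history, their_history):
    return "B"

def play(strategy1, strategy2, rounds):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1 = strategy1(h1, h2)
        m2 = strategy2(h2, h1)
        p1, p2 = payoff[(m1, m2)]
        score1 += p1
        score2 += p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

# Tit-for-tat loses only the opening round to always-defect...
assert play(tit_for_tat, always_defect, 10) == (9, 15)
# ...and settles into full cooperation against itself.
assert play(tit_for_tat, tit_for_tat, 10) == (30, 30)
```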

        Unfortunately, tit-for-tat does not do so well when the
        opponent occasionally defects, and then returns to a generally
        cooperative strategy. Neither does it do well when the other
        player is playing a random strategy. As in the case of the
        game of Mora, the strategy can be derived empirically, over
        time, using adaptive control computational methodologies. The
        subject of "inductive reasoning" as an adaptive control
        methodology is considered in the section on "Multi-player
        Games."

        As in the previously analyzed coin tossing game, the objective
        of each player is to maximize the number of coins won over
        many iterations of the game, ie., to maximize the cumulative
        returns of the game. Note that each player's capital will
        fluctuate, depending on the outcome of a particular
        iteration-and that fluctuation will be random, and either 0,
        1, 3, or 6 coins. We would expect the time series
        representing the fluctuations in a player's capital to be a
        random walk, which could be represented by a formula similar
        to Equation 1[7]. Computer simulations of the co-evolving
        strategies of iterated multi-player prisoner's dilemma scenarios
        where the individual players "learn" how to cooperate further
        support the hypothesis [pp. 170, Davis].

    Multi-Player Games

        A simple coin tossing game was analyzed previously.  In the
        section describing the game of Mora, those concepts were
        expanded to include zero-sum games of mixed strategy, using
        the game of Mora as an example. It was shown that in these
        types of games, the optimal strategy does not depend on how
        your opponent plays the game over time. In the section describing
        the Prisoner's Dilemma, a nonzero-sum game, the prisoner's
        dilemma, was analyzed and it was shown that the strategy for
        the game is deductively indeterminate since the game's logic
        is self-referential. The reason for this was that one player's
        strategy depended on how the other player plays the game over
        time. In both cases, the cumulative sum of winnings of a
        player was shown to have characteristics of a random walk,
        Brownian motion fractal. In this section, these concepts will
        be expanded to include multi-player games, where the players
        use inductive reasoning to determine a set of perceptions,
        expectations, and beliefs concerning the best way to play the
        game. These types of scenarios are typical of industrial
        manufacturing and equity markets.

        Inductive Reasoning

            Paraphrasing[8] [Arthur:CIEAFM], actions taken by economic
            decision makers are typically predicated on hypotheses
            or predictions about future states of the world that are
            themselves, in part, the consequence of these hypotheses
            or predictions. Predictions or expectations can then become
            self-referential and deductively indeterminate. In such
            situations, agents predict not deductively, but
            inductively. They form subjective expectations or
            hypotheses about what determines the world they
            face. These expectations are formulated, used, tested,
            modified in a world that forms from others' subjective
            expectations. This results in individual expectations
            trying to prove themselves against others'
            expectations. The result is an ecology of co-evolving
            expectations that can often only be analyzed by
            computational means. This co-evolution of expectations
            explains phenomena seen in real equity markets that appear
            as anomalies to standard finance theory [Arthur:CIEAFM],
            [Arthur:IRABR].

            This concept views such "games" in psychological terms: as
            a collection of beliefs, anticipations, expectations,
            cognitions, and interpretations; with decision-making and
            strategizing and action-taking predicated upon beliefs and
            expectations. Of course this view and the standard
            economic views are related-activities follow from beliefs
            and expectations, which are mediated by the physical
            economy [Arthur:CIEAFM].

            This is a very useful concept because it essentially
            states that economic agents make their choices based upon
            their current beliefs or hypotheses about future prices,
            interest rates, or a competitor's future move in a
            market. These choices, when aggregated, in turn shape the
            prices, interest rates, market strategies, etc., that the
            agents face. These beliefs or hypotheses of the agents are
            largely individual, subjective, and private. They are
            constantly tested and modified in a world that forms from
            their own and others' actions [Arthur:CIEAFM].

            In the aggregate, the economy will consist of a vast
            collection of these beliefs or hypotheses, constantly
            being formulated, acted upon, changed and discarded; all
            interacting and competing and evolving and
            co-evolving. Beyond the simplest problems in economics,
            this ecological view of the economy becomes inevitable
            [Arthur:CIEAFM].

            The "standard way" to handle predictive beliefs in
            economics is to assume identical agents who possess
            perfect rationality and arrive at shared, logical
            conclusions about the economic environment. When these
            expectations are validated as predictions, then they
            are in equilibrium, and are called "rational
            expectations." Rational expectations often are not robust
            since many agents can arrive at different conclusions from
            the same data, causing some to deviate in their
            expectations, causing others to predict something
            different and then deviate too [Arthur:CIEAFM].

            [Arthur:CIEAFM] cites the "El Farol Bar" problem as an
            example. Assume one hundred people must decide
            independently each week whether to go to the bar. The rule
            is that if a person predicts that more than, say, 60 will
            attend, it will be too crowded, and that person will stay
            home; if fewer than 60, the person will go to the bar. As
            trivial as this
            seems, it destroys the possibility of long-run shared,
            rational expectations.  If all believe "few" will go, then
            "all" will go, thus invalidating the expectations. And,
            if all believe "many" will go, then "none" will go,
            invalidating those expectations. Like the iterated
            prisoner's dilemma, predictions of how many will attend
            depend on others' predictions, and others' predictions of
            others' predictions. Once again, there is no rational
            means to arrive at deduced "a-priori" predictions. The
            important concept is that expectation formation is a
            self-referential process. The problem of logically forming
            expectations then becomes ill-defined, and rational
            deduction can not be consistent or complete. This
            indeterminacy of expectation-formation is by no means an
            anomaly within the real economy. On the contrary, it
            pervades all of economics and game theory [Arthur:CIEAFM].
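
            The El Farol dynamics can be simulated with even crude
            predictors. In the Python sketch below (the three
            predictor rules are my own illustrative choices, not
            Arthur's), each agent fixes on one naive predictor and
            goes only when it forecasts attendance below the
            threshold:

```python
import random

def el_farol(agents=100, threshold=60, weeks=52, seed=1):
    # Each agent fixes on one naive predictor of next week's
    # attendance and goes only if the prediction is below the
    # crowding threshold. The three rules are illustrative choices.
    rng = random.Random(seed)
    history = [rng.randrange(agents + 1) for _ in range(3)]  # seed weeks
    rules = [rng.randrange(3) for _ in range(agents)]
    for _ in range(weeks):
        attendance = 0
        for rule in rules:
            if rule == 0:                 # same as last week
                guess = history[-1]
            elif rule == 1:               # two-week average
                guess = (history[-1] + history[-2]) // 2
            else:                         # mirror image of last week
                guess = agents - history[-1]
            if guess < threshold:
                attendance += 1
        history.append(attendance)
    return history

h = el_farol()
assert len(h) == 55 and all(0 <= a <= 100 for a in h)
```

            Because each agent's forecast feeds into the attendance
            that the other agents then forecast, the series never
            converges to a shared, self-validating prediction.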

            It is an important concept that this view of industrial
            and financial markets addresses such notions as market
            "psychology," "moods," and "jitters."  Markets do turn out
            to be reasonably efficient, as predicted by standard
            financial theory, but the statistics show that trading
            volume and price volatility in real markets are a great
            deal higher than the standard theories
            predict. Statistical tests also show that technical
            trading can produce consistent, if modest, long-run
            profits. And the crash of 1987 showed dramatically that
            sudden price changes do not always reflect rational
            adjustments to news in the market [Arthur:CIEAFM].

            In this market model, inductive reasoning prevails as the
            "engine" of the market since no deductive hypothesis is
            possible because of the Gödelian issues of
            self-referential arbitrage.

            It should be pointed out that inductive reasoning in such
            scenarios is not an exact process, and usually relies, to
            some extent, on correlation between events in the
            economy. In self-referential processes, single simplex
            statistical evaluations are not possible, and this can
            lead to misinterpretation of the significance of the
            statistics of the events [pp. 50, Casti:SFC][9].

        A multi-player, self-referential model of an equities market

            Suppose that throughout a trading day, agents line up to
            buy or sell a stock. When a particular agent's turn comes,
            the agent has the option to try to increase or decrease
            the price of the stock from the transaction price of the
            previous agent, (by lowering the price to sell stock the
            agent owns, or raising the price to buy stock from another
            agent.) The agent will have to make this decision based on
            beliefs concerning the beliefs of the agents in the rest
            of the market. This decision process will vary as
            different agents post their transaction through the day,
            based on their personal set of beliefs, cognitions, and
            hypotheses concerning the market.  We would expect the
            time series representing the fluctuations in a stock's
            price to be a random walk, which could be represented by a
            formula similar to Equation 1 [pp. 8, Arthur:CIEAFM].
            Empirical analysis of many stocks tends to support the
            hypothesis that stock prices can be "modeled" as a random
            walk, or fractional Brownian motion fractal. Additionally,
            computer models of stock market asset pricing under
            inductive reasoning with many agents have been initiated
            and further support the hypothesis [pp. 8, Arthur:CIEAFM].
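
            A stripped-down version of this sequential-agent market
            can be simulated directly from Equation 1. In the Python
            sketch below (an illustration only; each agent's belief is
            reduced to a fair coin flip), the resulting price series
            is a multiplicative random walk:

```python
import random

def market_walk(days=250, p0=100.0, seed=2):
    # Sequential agents nudge the last transaction price up or down
    # by a small fraction; private beliefs are reduced, for this
    # sketch only, to a fair coin flip.
    rng = random.Random(seed)
    prices = [p0]
    for _ in range(days):
        f = 0.01                      # fraction of the price moved
        F = rng.choice((-1.0, 1.0))   # belief-driven direction
        prices.append(prices[-1] * (1.0 + f * F))  # Equation 1
    return prices

prices = market_walk()
assert len(prices) == 251 and all(p > 0 for p in prices)
```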

            Stability Issues

                In the section describing the prisoner's dilemma the
                issues of process stability were mentioned. Note that
                not all processes are stable. For example, consider a
                stock market scenario that historically had cyclic or
                periodic increases and decreases in price. The value
                at the bottom of the cycle would increase, (because
                the agents in the market could exploit a "buy low,
                sell high" strategy that would be predictable,) and
                the price advantage would be arbitraged away, and the
                cyclic phenomena would disappear. Cyclic phenomena
                would then be considered as an unstable
                process-similar to the El Farol Bar problem mentioned
                above. However, note that if the agents in the market
                believed that their financial position could be
                improved by altering their investment strategy, by
                buying or selling of stocks, then, as outlined in the
                previous section, the stock price would fluctuate
                similar to a random walk, and this would be stable
                since it is a self-reinforcing situation.

            Extensibility Speculations

                Interestingly, the arguments presented in this section
                are possibly extensible into other areas. For example,
                the Stanford economist Kenneth Arrow has shown that
                the ranking of priorities in a group is
                intransitive [pp. 1, Lenstra] [pp. 327, Luce] [pp. 213,
                Hoffman]. What this means is that there exists no way
                to use deductive rationality to rank priorities in a
                society.  If it is assumed that it is necessary to do
                so, then inductive reasoning would have to be used. If
                it is further assumed that such a situation is
                self-referential, which seems reasonable by arguments
                similar to those presented in this section, then the
                same issues outlined in this section could be
                applicable to social welfare issues, etc. This would
                tend to imply that political issues were fractal in
                nature, and the political process justified-which is
                contrary to the thinking of many. The arguments
                presented in [Arthur:CIEAFM], and [Arthur:IRABR] may
                well be extensible into other fields of
                interest. Other speculations could involve theoretical
                interests in the dynamics of democratic process, legal
                process[10], and organizational process[11].  There
                are probably other applications[12].

                As another interesting aside, the arguments presented
                in this section side-stepped the issue of utility
                theory.

            Conclusion

                In this section, it was shown that markets would be
                expected to exhibit self-referential processes, which
                can not be analyzed by deductive rationality. However,
                when players rely on inductive reasoning to formulate
                strategies to execute their market agenda, the result
                is that the market will exhibit fractal
                dynamics. Previously, in this chapter, it was shown
                that the fractal dynamics can be exploited and
                optimized. Interestingly, in some sense, there appears
                to be a convergence of game-theoretic,
                information-theoretic, non-linear dynamical systems
                theory, and fractal/chaos-theoretic concepts.
                Further, there also appears to be a convergence of
                these concepts with the cognitive sciences.

    Footnotes:

[1] These methodologies are often called "operations research." The
algorithm of choice used to derive the optimal game play seems to be
the "simplex algorithm"-at least for games with a small payoff
matrix. The simplex algorithm is one of a class of algorithms that are
implemented using "linear algebra."

[2] In the game of Mora, the optimal strategy does not depend on the
strategy of the opposing player. In more sophisticated games, this is
not true.

[3] The prisoner's dilemma has generated much interest since it is a
game that is simple to understand, and has all of the intrigue and
strategy of many human social dilemmas-for example, John Von Neumann,
the inventor of game theory, once said that the reason we do not find
intelligent beings in the universe is that they probably existed, but
did not solve the prisoner's dilemma problem and destroyed
themselves. The prisoner's dilemma has been used to model such
scenarios as the nuclear arms race, the battle of the sexes, etc.

[4] The Penrose citation, referencing Russell's paradox, is a very
good example of logical contradiction in a self-referential
system. Consider a library of books. The librarian notes that some
books in the library contain their titles, and some do not, and wants
to add two index books to the library, labeled "A" and "B,"
respectively; the "A" book will contain the list of all of the titles
of books in the library that contain their titles; and the "B" book
will contain the list of all of the titles of the books in the library
that do not contain their titles. Now, clearly, every book's title
will go into either the "A" book or the "B" book, depending on
whether that book contains its title or not. Now, consider in which
book, the "A" book or the "B" book, the title of the "B" book is to
be placed-no matter which book it is placed in, the placement will
contradict the rules. And if it is left out, the two index books will
be incomplete.
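The contradiction can be rendered as a toy Python sketch (illustrative
only; the names are mine): whether or not the "B" book contains its
own title, the librarian's rule is violated.

```python
def placement_is_consistent(b_contains_its_title):
    """The librarian's rule applied to book "B" itself: the title
    "B" belongs in book "B" exactly when book "B" does not contain
    its own title.  Returns True if the given placement obeys the
    rule."""
    rule_says_it_should = not b_contains_its_title
    return b_contains_its_title == rule_says_it_should

# Neither placement satisfies the rule -- Russell's paradox in miniature.
print(placement_is_consistent(True))   # False
print(placement_is_consistent(False))  # False
```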

[5] The tit-for-tat strategy sounds like a human social strategy
between two people-as well it should. It is known to work well with
human subjects [pp. 239, Poundstone:PD]. It is also strict military
dogma, and has formed the strategy of arbitration of the complexity of
power in many marriages.

[6] Tit-for-tat is kind of a "do unto others as you would have them do
unto you-or else," strategy. The tit-for-tat strategy in human
relationships is very old. Another ancient proverb illustrating
tit-for-tat is "an eye for an eye, a tooth for a tooth."

[7] Assuming that one player, or the other, will, at least
occasionally, alter strategy in an attempt to gain an advantage-in
this case, for example, two players, each playing tit-for-tat, will
"lock in" to either a defection strategy or a cooperation
strategy. This is considered a degenerate case of Equation 1.
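The lock-in behavior is easy to demonstrate. The following Python
sketch is illustrative-the function names and the forced opening
moves are assumptions, not from the text:

```python
def tit_for_tat(their_history):
    """Cooperate ("C") on the first move, then copy the opponent's
    last move."""
    return their_history[-1] if their_history else "C"

def play(opening_a, opening_b, rounds=10):
    """Two tit-for-tat players with forced opening moves; returns
    both move sequences."""
    a, b = [opening_a], [opening_b]
    for _ in range(rounds - 1):
        a_next, b_next = tit_for_tat(b), tit_for_tat(a)
        a.append(a_next)
        b.append(b_next)
    return a, b

print(play("C", "C")[0])  # all "C": locked into cooperation
print(play("D", "D")[0])  # all "D": locked into defection
print(play("C", "D")[0])  # alternating: one defection echoes forever
```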

[8] Actually, plagiarize would be a more appropriate choice of
wording. This entire section is a condensed version of the text from
[Arthur:CIEAFM] and [Arthur:IRABR].

[9] Additionally, there are issues concerning causality. Cause and
effect may not be discernable from each other.

[10] Could the legal system be optimized?  Or is that an oxymoron?

[11] For example, [pp. 81, Senge] has a diagram of the sales
department process in an organization. It has the same schema as
represented in Equation 1. If it could be shown that organizational
complexity is an NP problem [pp. 313, Sommerhalder], [pp. 13, Garey],
then there could be some reasonable formalization of the
observations presented in [Brooks] and [Ulam].

[12] Others feel a bit more epistemological about the issue-see
[pp. 178, Rucker], the chapter entitled "Life is a Fractal in Hilbert
Space."

    Bibliography:

[Arthur:CIEAFM] "Complexity in Economic and Financial Markets,"
W. Brian Arthur, Complexity, 1, pp. 20-25, 1995. Also available from
http://www.santa.fe.edu/arthur, Feb. 1995

[Arthur:IRABR] "Inductive Reasoning and Bounded Rationality," W. Brian
Arthur, Amer. Econ. Rev., 84, pp. 406-411, 1994. Session: "Complexity
in Economic Theory," chaired by Paul Krugman. Available from
http://www.santa.fe.edu/arthur

[Bronowski] "The Ascent of Man," J. Bronowski, Boston, Massachusetts,
Little, Brown and Company, 1973

[Brooks] "The Mythical Man-Month," Frederick P. Brooks, Reading,
Massachusetts, Addison-Wesley, 1982

[Casti:AR] "Alternate Realities," John L. Casti, John Wiley & Sons,
New York, New York, 1989

[Casti:C] "Complexification," John L. Casti, New York, New York,
HarperCollins, 1994

[Casti:PL] "Paradigms Lost," John L. Casti, Avon Books, New York, New
York, 1989

[Casti:SFC] "Searching for Certainty," John L. Casti, New York, New
York, William Morrow, 1990

[Davis] "Handbook of Genetic Algorithms," Lawrence Davis, New York,
New York, Van Nostrand Reinhold, 1991

[Dorfman] "Linear Programming and Economic Analysis," Robert Dorfman
and Paul A. Samuelson and Robert M. Solow, New York, New York, Dover
Publications, 1958

[Feder] "Fractals," Jens Feder, Plenum Press, New York, New York, 1988

[Garey] "Computers and Intractability," Michael R. Garey and David
S. Johnson, W. H. Freeman and Company, New York, New York, 1979

[Hillier] "Introduction to Operations Research," Frederick S. Hillier,
McGraw-Hill, New York, New York, 1990

[Hodges] "Alan Turing: The Enigma," Andrew Hodges, Simon & Schuster,
New York, New York, 1983

[Hoffman] "Archimedes' Revenge," Paul Hoffman, Fawcett Crest, New
York, New York, 1993

[Hofstadter] "Gödel, Escher, Bach: An Eternal Golden Braid," Douglas
R. Hofstadter, Vintage Books, New York, New York, 1989

[Kaplan] "Mathematical Programming and Games," Edward L. Kaplan, John
Wiley & Sons, New York, New York, 1982

[Karloff] "Linear Programming," Howard Karloff, Birkhauser, Boston,
Massachusetts, 1991

[Lenstra] "History of Mathematical Programming," J.K. Lenstra and
A. H. G. Rinnooy Kan and A. Schrijver, CWI, Amsterdam, Holland, 1991

[Luce] "Games and Decisions," R. Duncan Luce and Howard Raiffa, John
Wiley & Sons, New York, New York, 1957

[Nering] "Linear Programs and Related Problems," Evar D. Nering and
Albert W. Tucker, Academic Press, Boston, Massachusetts, 1993

[Penrose] "The Emperor's New Mind," Roger Penrose, Oxford University
Press, New York, New York, 1989

[Poundstone:PD] "Prisoner's Dilemma," William Poundstone, Doubleday,
New York, New York, 1992

[Rucker] "Mind Tools," Rudy Rucker, Houghton Mifflin Company, Boston,
Massachusetts, 1993

[Saaty] "Mathematical Methods of Operations Research," Thomas
L. Saaty, Dover Publications, New York, New York, 1959

[Senge] "The Fifth Discipline: The Art and Practice of the Learning
Organization," Peter M. Senge, Doubleday, New York, New York, 1990

[Singh] "Great Ideas of Operations Research," Jagjit Singh, Dover
Publications, New York, New York, 1968

[Sommerhalder] "The Theory of Computability," R. Sommerhalder and
S. C. van Westrhenen, Addison-Wesley, Reading, Massachusetts, 1988

[Strang] "Linear Algebra and Its Applications," Gilbert Strang, Third
Edition, Harcourt Brace Jovanovich, San Diego, California, 1988

[Ulam] "Adventures of a Mathematician," S. M. Ulam, University of
California Press, Berkeley, California, 1991

[Waldrop] "Complexity," M. Mitchell Waldrop, Simon & Schuster, New
York, New York, 1992

--

John Conover, john@email.johncon.com, http://www.johncon.com/


Copyright © 1996 John Conover, john@email.johncon.com. All Rights Reserved.