From: John Conover <email@example.com>
Subject: RE: forwarded message from firstname.lastname@example.org.
Date: Fri, 14 Nov 1997 19:15:34 -0800
LAYARD KIRBY writes:

> John;
>
> This stuff really interests me and I have taken courses in Chaos
> (sounds like Taos) and I know you really don't care for questions
> ... but ... Is the mean over a 100 day sampling 60? How is the
> variable distributed about the mean? Gaussian? Are 100 samples
> large enough to bound the next prediction to, say, < 10% of the
> total peak to peak (100)? The market is constantly increasing in
> value so I have trouble associating this with market predictions.
> The market has lots of 1/f noise (random walks). Some popcorn
> noise. I don't see these types of noise in this restaurant example.

The El Farol bar problem is chaotic, (ie., a non-linear dynamical system,) with chance. As you know, there are different types of chaotic systems. The most popular is the deterministic system, as described in the last century by Poincare, and typified by the three body problem, for example, predicting the positions of the Sun, Moon, and Earth in the future. If we try to describe such a system mathematically, we find that the errors in our predictions increase exponentially into the future.

Non-deterministic chaotic systems are governed by a similar set of non-linear equations, but the incremental value, from one time unit to the next, is a random process. If we model such a process with linear equations, (as a mathematical expediency,) it is called a fractal analysis. Indeed, fractals are a subset of a more general class of problems which are called chaotic in the popular press. In fractals, the relationship of something to time is always the square root of time. In chaotic systems, it can be some other root, for example 0.8, instead of 0.5.

Obviously, the El Farol bar problem has to have a Gaussian distribution. I'll offer a heuristic proof. Instead of everyone deciding to go, or not to go, to the bar at once, suppose we serialize the process in a queue. First one person makes a yea/nay decision, then the next, and the next, and so on.
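This serialized queue can be sketched as a short simulation, (the code and names are mine, not part of the original argument; it assumes each patron decides yea independently, with probability 1/2, like pitching a penny):

```python
import random
import statistics

def bar_attendance(n_patrons, rng):
    """One serialized round: each patron in the queue independently
    decides yea (go) or nay (stay home) with probability 1/2."""
    return sum(1 for _ in range(n_patrons) if rng.random() < 0.5)

def simulate(trials=20000, n_patrons=100, seed=0):
    """Repeat the round many times and collect the attendance counts."""
    rng = random.Random(seed)
    return [bar_attendance(n_patrons, rng) for _ in range(trials)]

counts = simulate()
mu = statistics.fmean(counts)
sigma = statistics.stdev(counts)

# Theory: attendance is Binomial(100, 1/2), so the mean should be
# near 100 * 0.5 = 50 and the standard deviation near
# sqrt(100 * 0.5 * 0.5) = 5.
print(mu, sigma)
```

With 100 independent yea/nay decisions, attendance is binomially distributed, and for 100 patrons the binomial is already quite close to a Gaussian bell centered at 50 with a standard deviation of 5.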
Obviously, (see below,) this would be a random process, since some would decide yea, and others nay, and we could never decide what the next person is going to decide, based on what all the previous people have decided. (Assuming there is no talking in line.) Now, suppose that a hundred folks were involved in the serial process. I could model the statistics of our serial process by pitching a penny a hundred times, and counting the number of times it came up heads, which would give us a binomial distribution. And, in the limit, if we did it enough times, the binomial distribution would become a Gaussian distribution.

The remaining issue is why such a thing is a random process. Again, I'll offer a heuristic proof. As a simple intuitive example, suppose the bar is small, with only one table and one chair, and there are only two people who frequent the bar, you and me. I must now make a decision on whether to go to the bar today, or not. But that will depend on whether, or not, I think you are going to the bar. But you are using the same kind of reasoning, so I find myself having to make a decision based on what I think you think I am thinking, ie., my decision process is recursive, (circular logic,) or, in a nutshell, self-referential. (Economists call this kind of process you and I are doing "arbitrage." There is a whole science of game theory based on it.)

What happens is that we will try to second guess each other, (ie., use inductive logic,) and sometimes I will be successful, and others, not. Thus, the random process.

        John

--

John Conover, email@example.com, http://www.johncon.com/