Answers to questions on fractals ...

From: John Conover <>
Subject: Answers to questions on fractals ...
Date: Fri, 16 Jan 1998 00:55:31 -0800

I'm getting a lot of questions concerning intuitive constructions of a
random walk, (ie., fractional Brownian motion,) fractals.  Since most
of the questions come from executives with electrical engineering
backgrounds, I will approach things from that point of view. I will
use only one concept from engineering: noise voltages add root mean
square.

A fractal is a geometric interpretation of a chaotic system in much
the same way that a graph is a geometric interpretation of a
function. (For example, the weather is a chaotic system-a graph of the
amount of water in a dam, due to rainfall, is a fractal.)  Fractals
can also be made with random distributions that are not Gaussian,
(ie., not a random walk,) and these frequently appear in
physics-particularly in quantum mechanics and astrophysics. In point
of fact, the random walk fractal is but one of a large family of
fractals, and has interesting implications in finance and management.

The way the random walk fractal is made is to add numbers that have a
Gaussian distribution, (like noise in an electronic circuit-in point
of fact, integrating high frequency atmospheric noise makes a
fractal.) Or, as a simple model, suppose we have a black box that
generates random numbers with a Gaussian distribution. Starting a
cumulative sum at zero, we get a number from the black box, and add it
to the cumulative sum to get the value of the fractal in the first
time interval. Then we get another number from the black box and add
it to the cumulative sum for the value of the fractal in the second
time interval, and so on.
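
This cumulative sum construction is easy to sketch in code. Here's a
minimal example in Python, (the seed, the unit variance, and the walk
length are arbitrary choices of mine, not part of the construction):

```python
import random

def random_walk_fractal(n_steps, seed=None):
    """Build a random walk fractal as the cumulative sum of Gaussian
    draws: each draw plays the role of the black box generator, and
    the running sum is the value of the fractal in each time interval."""
    rng = random.Random(seed)
    walk = []
    total = 0.0
    for _ in range(n_steps):
        total += rng.gauss(0.0, 1.0)   # one Gaussian number per interval
        walk.append(total)
    return walk

walk = random_walk_fractal(10, seed=42)
print(walk)   # ten cumulative sums, ie., the fractal's first ten values
```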

Fractals with Gaussian distributions have interesting properties that
are applicable in finance and management.  Here's why. One of the
beautiful abstractions in mathematics is how the probabilities of
random events assemble themselves into bell curve distributions-a
system that is a collection of random events does not necessarily
produce random outcomes. For example, two six-sided dice have a much
greater probability of coming up with a total of seven than either
snake eyes or box cars. So the aggregate of the two dice, in some
sense, is less random than either die alone.
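
You can watch the bell curve assemble itself with a quick simulation,
(a sketch-the seed and roll count are arbitrary):

```python
import random
from collections import Counter

# Roll a pair of six-sided dice many times and tally the sums; the
# totals pile up into a bell-shaped distribution.
rng = random.Random(0)
n_rolls = 100_000
counts = Counter(rng.randint(1, 6) + rng.randint(1, 6) for _ in range(n_rolls))

for total in range(2, 13):
    print(total, counts[total] / n_rolls)
# seven comes up near 6/36, snake eyes and box cars near 1/36 each
```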

The reason we want to study the way random events assemble themselves
into distributions is to develop concepts that can be used to model
uncertainty. The reason we want to study cumulative sums of random
events is that the concepts can be used to model how systems with
uncertainty evolve over time. (For example, a P&L operating in
uncertain times is such a system. A P&L might contain the cumulative
sum of sales, in uncertain market conditions, over the period of, say,
a month-which would be a fractal.)

It is an important concept that the noise mechanisms in fractals can
be made up of smaller noise mechanisms, which in turn, can be made up
of even smaller noise mechanisms, ie., we can use fractal analysis to
study the evolution of systems that contain uncertainty and unlimited
complexity. (Meaning that it is "scalable," in the way it handles
complexity and uncertainties.)

In much the same way that many rolls of dice assemble their
probabilistic outcomes into bell curve distributions, fractals
assemble their outcomes into probabilistic phenomena.

As an intuitive development of the probabilistic nature of a random
walk fractal, if you look at a fractal, (for example, the value of an
equity over time, a company's P&L, an industrial market, or low
frequency noise on an oscilloscope,) it will go above average for
extended periods of time, then below average for extended periods, and
repeat the process in a probabilistic fashion. It would be of obvious
benefit to have some conceptual idea of the probabilities involved in
such things-for example, how far does it go above average, and how
long will it stay there, ie., the run length. We need to derive
probabilistic formulas for such things.

Let me develop an intuitive model of a way to do this. Instead of
using a black box Gaussian random number generator as above to make a
fractal, the box could be replaced with an electronic noise voltage
for each interval-the second noise voltage added to the first for the
second time interval, the third added to that sum for the third time
interval, and so on. Note that at each instant of time, we will see a
different fractal. (What I need is a lot of fractals to derive the
probabilities-this is one way to get them.)

If I plot each of these fractals, and then overlay them with all the
other fractals generated over a sufficient period of time, what would
I see?  (If there is an aggregate pattern, we have our probabilistic
formulas.) What we really want here is an "average" of all the
patterns. But that means the patterns add root mean square. So, I
would expect to see a value in the second interval that is the square
root of two times the value in the first interval, (since we added the
noise generators together,) and a value in the third interval that is
the square root of three times the value of the first interval, and so
on.

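Here's a sketch of that overlay experiment, replacing the noise
voltages with a Gaussian random number generator, (the seed, walk
length, and trial count are arbitrary choices of mine):

```python
import math
import random

# Overlay many independent random-walk fractals and measure the root
# mean square ("average") value in each time interval; the RMS value
# should grow as the square root of time.
rng = random.Random(1)
n_walks, n_steps = 20_000, 16
sq_sums = [0.0] * n_steps

for _ in range(n_walks):
    total = 0.0
    for t in range(n_steps):
        total += rng.gauss(0.0, 1.0)   # unit-RMS Gaussian noise source
        sq_sums[t] += total * total

for t in (0, 1, 2, 15):
    rms = math.sqrt(sq_sums[t] / n_walks)
    # rms should track sqrt(t + 1): 1, 1.414, 1.732, ..., 4
    print(t + 1, round(rms, 3), round(math.sqrt(t + 1), 3))
```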
So, on the average, after a fractal crosses zero, it will diverge from
zero on a trajectory that is proportional to the square root of time,
ie., letting d be the value of the fractal at time t, then on the
average, d = sqrt (t). (How much money you can make on an equity is
determined by this probability. Or, as a simple example, if you take a
pack of cards, shuffle it, and pull cards off the deck in sequence-if
a card is black, you move left; if red, you move right-then your
distance, from where you started, will on average be the square root
of the number of cards pulled from the deck. Kind of remarkable, when
you think about it, that a random process could generate such a nice
probability function.)
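
The card experiment is easy to check in simulation. Here's a sketch
that models the pulls as independent fair coin flips, (a
simplification on my part-a real deck is dealt without replacement,
which matters mostly late in the deck):

```python
import math
import random

# Random walk driven by coin flips standing in for red/black cards.
# After n steps the root mean square distance from the start should
# be close to sqrt(n).
rng = random.Random(7)
n_trials, n_steps = 50_000, 26   # about half a deck
sq_dist = 0.0

for _ in range(n_trials):
    pos = sum(1 if rng.random() < 0.5 else -1 for _ in range(n_steps))
    sq_dist += pos * pos

rms = math.sqrt(sq_dist / n_trials)
print(round(rms, 3), round(math.sqrt(n_steps), 3))   # both near 5.1
```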

But that wasn't the only question. How long will the fractal stay
above, (or below,) average? We can solve that problem, knowing the
average value in each interval for our model, (which we just derived,)
and knowing that the distribution in each interval is Gaussian, with a
root mean square value that is the square root of the number of noise
generators that made it.

The first interval is made up of a single noise voltage with a
Gaussian distribution whose root mean square value is unity.

The second interval is made by a Gaussian distribution, with a root
mean square value of the square root of two, (since it was made by two
equal noise sources, which add root mean square,) centered on the
value of the first interval, which is unity. The lower tail of the
Gaussian bell curve crosses zero at 0.707 standard deviations, giving,
(from the CRC Handbook of Mathematical Tables for the Gaussian
distribution,) a chance of 0.23978 of the second interval being less
than zero, (ie., the run length terminated,) or a chance of 0.76022 of
the run length continuing.

For the third interval, we have a Gaussian distribution centered on
1.4142, and with a root mean square value of the square root of three,
giving a bell curve that crosses zero at 0.81649 sigma, or a chance of
0.20711 of the third interval being less than zero, or a chance of
0.79288 that the run length will continue.

Likewise, for the fourth interval, we have a Gaussian distribution
centered on 1.73205, and with a root mean square value of the square
root of four, giving a bell curve that crosses zero at 0.86603, or a
chance of 0.19238 that the fourth interval will be less than zero, or a
chance of 0.80762 that the run length will continue.

To get the cumulative length of the average run length for a fractal,
we multiply the chances that the run length will continue together
until we get approximately one half. It turns out that this happens in
the fourth time interval, so we would expect, on the average, that run
lengths would be about four time intervals long.
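
The interval-by-interval arithmetic above can be reproduced with the
standard library's error function, (the CDF values will differ from
the CRC table figures only in the last digits):

```python
import math

def phi(z):
    """Standard normal CDF, built from the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Interval t has a bell curve centered on sqrt(t - 1), the average
# value so far, with a root mean square width of sqrt(t).  Multiply
# the chances of the run continuing until the product falls to one half.
cumulative = 1.0
t = 1
while cumulative > 0.5:
    t += 1
    z = math.sqrt(t - 1) / math.sqrt(t)
    p_continue = 1.0 - phi(-z)   # chance the run stays above zero
    cumulative *= p_continue
    print(t, round(p_continue, 5), round(cumulative, 5))

print("average run length ~", t, "intervals")   # terminates at t = 4
```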

The mathematically adept would have recognized that what I was doing
was computing the error function, (which, obviously, is related to the
Gaussian distribution.) In point of fact I was computing the erf (1 /
sqrt (t)), which is the probability that a run length will be longer
than t intervals. As an approximation, for t significantly greater than
1, the error function can be replaced by its argument, and the
probability of a run length being longer than t is p = 1 / sqrt (t).
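
A quick comparison of the exact expression against the approximation,
(the sample values of t are arbitrary; for small arguments erf (x) is
about 2 x / sqrt (pi), so the two differ by a constant factor near
1.13, with the absolute difference shrinking as t grows):

```python
import math

# Probability that a run length exceeds t intervals: the exact
# erf (1 / sqrt (t)) against the approximation 1 / sqrt (t).
for t in (4, 16, 64, 256):
    exact = math.erf(1.0 / math.sqrt(t))
    approx = 1.0 / math.sqrt(t)
    print(t, round(exact, 4), round(approx, 4))
```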

We now have the average "growth" of a run length, and a calculation
for the probability of how long the run length will last.

For example-back to the deck of cards-what are the chances that you
will spend all the time on one side, (or the other,) of where you
started for the entire deck of all 52 cards? It's about 1 / sqrt (52),
or about 13.9%. And this means that if you were to do the experiment a
hundred times, in about 14 of them you would never have returned to
where you started.
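
A sketch of that experiment, again modeling the card pulls as
independent fair coin flips, (my simplification; note also that the
discrete walk can only return to zero on even-numbered pulls, so the
simulated figure comes out somewhat below the continuous-time
1 / sqrt (52) estimate-but it is the same rough order of magnitude):

```python
import random

# Count how often a 52-step walk never returns to its starting point.
rng = random.Random(3)
n_trials, n_steps = 50_000, 52
never_returned = 0

for _ in range(n_trials):
    pos = 0
    returned = False
    for _ in range(n_steps):
        pos += 1 if rng.random() < 0.5 else -1
        if pos == 0:
            returned = True
            break
    if not returned:
        never_returned += 1

frac = never_returned / n_trials
print(frac)   # roughly one run in ten stays on one side the whole time
```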

Want a practical application? How about timing in the stock
market. There is compelling evidence, (both from a theoretical and
empirical point of view,) that equity prices can be modeled as a
random walk fractal-at least as a first order approximation.

The question is when to buy, and when to sell. Well, if the above is
true, (you will have to be your own judge,) then one should wait until
about 4 time units of down movements, and buy, (counter intuitive as
it sounds.) Then, after the purchase, one should hold for about 4 up
movements, and sell, (counter intuitive as that sounds.) And, remember
what I said about fractals being made up of fractals?  Well, I didn't
specify the time units. You can use any time unit you want. It works
on time units of seconds, minutes, hours, days, weeks, months, years,
or decades, (counter intuitive as that sounds.)

And how much would you make? Well, it would be about 2 times the root
mean square value of the stock's fluctuations, (there were 4 time
units of up movement while you held the stock-they add root mean
square,) in 8 time units, (of which you held the stock for 4 time
units,) or about one fourth of the root mean square value of the
stock's fluctuations per time unit. (If you look at the NYSE, you
would have made about 7% per year, worst case, measured between 1930
and 1940. Not a good time to be in the market. You would have done
better than any indices, or any mutual fund. It works well on both
simulated market data and the actual historical database, also.) Want
another counter intuitive example?  Well, the formula for stock gain
says that you should select your stocks based on the greatest
volatility, (as measured by the root mean square of the stock's
fluctuations,) ie., volatility is good.



John Conover,

Copyright © 1998 John Conover, All Rights Reserved.
Last modified: Fri Mar 26 18:54:35 PST 1999