Re: NtropiX+NNet idea

From: John Conover <>
Subject: Re: NtropiX+NNet idea
Date: 5 Jun 2001 23:11:44 -0000

There are some interesting numbers that are related to the attached.

What would be the expected range for a "typical" stock's value, (i.e.,
one with a standard deviation of fluctuations of about 4% per day; the
best stocks do about twice that,) a calendar year from now?

    The standard deviation would be about 0.04 * sqrt (253), for 253
    trading days in a calendar year, or about 0.636X. That means that
    84.13% of the stocks will be in a range of 0.318X gain to a 0.318X
    loss, (a range of 1.318X to 0.682X,) meaning that a "typical"
    stock's maximum value in a year, divided by its minimum would be
    about 1.318X / 0.682X = 1.93.

    The metrics for the decade of the 1990's, for all stocks listed
    on the US exchanges, over all possible 253 trading day intervals,
    are almost exactly 2.0.
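A minimal sketch of the arithmetic above, in Python, (the square root
of 253 annualization, and the resulting high/low ratio):

```python
import math

# Annualize a 4% per day standard deviation over 253 trading days.
daily_sigma = 0.04
annual_sigma = daily_sigma * math.sqrt(253)   # ~0.636X

# The half-range used above, and the resulting max/min ratio.
half = annual_sigma / 2                       # ~0.318X
ratio = (1 + half) / (1 - half)               # ~1.93, vs. ~2.0 measured

print(round(annual_sigma, 3), round(ratio, 2))
```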

What would be the expected value of the fluctuations at 2 minute
intra-day ticker intervals?

    The standard deviation would be about 0.04 / sqrt (240), for 240 2
    minute intervals in an 8 hour trading day, or about 0.258%.

    The metrics of the US exchange tickers for the decade of the
    1990's, for all stocks listed, are almost exactly a quarter of a
    percent.
It's intriguing because there are 262,800 2 minute trading intervals
in a calendar year, so the fractal dynamics hold with very reasonable
accuracy over a range of at least five and a half orders of magnitude!
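The same square root scaling, run the other direction down to the 2
minute ticks, is a one-liner, (again a sketch in Python):

```python
import math

daily_sigma = 0.04
tick_sigma = daily_sigma / math.sqrt(240)   # 240 two-minute intervals per 8 hour day
intervals_per_year = 365 * 24 * 60 // 2     # 262,800 two-minute intervals per calendar year
span = math.log10(intervals_per_year)       # ~5.4 orders of magnitude

print(round(tick_sigma * 100, 3), intervals_per_year, round(span, 1))
```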

Very few scientific applications have that kind of extensibility,
(including F = MA, which over a range of five orders of magnitude will
usually require the inclusion of relativistic effects.)


BTW, things do change. Two decades, or so, ago, the intra-day spreads
were closer to a percent-and most stock brokers made their money off
of the spread. Not so anymore, with quarter percent spreads. It's a
sign of the times-where information is accommodated almost
instantaneously by the market. The persistence has decreased, too, for
the same reason.

BTW, the above analysis is classic Black-Scholes-Merton
methodology. When you get a glimpse of an exchange floor on CNBC,
etc., the traders are running around carrying, (and are always
tinkering with,) a little box that looks like a calculator. That's a
Black-Scholes-Merton calculator. Suppose you could get a future,
(e.g., an option,) on a "typical" stock with a strike price of 0.682X
of its current price, a year, (i.e., the term,) from now. You know
from the above that 84.13% of the time you would make money on that
future. So, you would invest 2P - 1 = 0.6826 of your portfolio in
this stock, leaving the remainder in stable, low risk T bills, etc.,
and your portfolio would grow at about 29% per year, on average, in
the long run. (It's hypothetical; you don't get strike prices at 68%
of current value, and you don't get terms of years, either. But you
get the idea of how the BSM calculators work, how they are used, how
much of the exchanges run on them, and the mass of data that has
been compiled by them over the last 20 years, or so.)
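The hypothetical checks out with the same optimal-fraction wagering
used elsewhere in this post, G = (1 + f)^P * (1 - f)^(1 - P) with
f = 2P - 1, (a sketch; the numbers are the hypothetical ones above):

```python
# Hypothetical one-year future with a 0.682X strike on a "typical" stock.
P = 0.8413                 # chance the stock finishes above the strike
f = 2 * P - 1              # ~0.6826: fraction of the portfolio wagered
G = (1 + f) ** P * (1 - f) ** (1 - P)   # growth per one-year term

print(round(f, 4), round(G, 2))         # ~29% per year
```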

John Conover writes:
> TABLE I is kind of interesting, BTW.
> If you look at the "-d5 -m0 -p -P" (row 2, column 5,) characteristics,
> the portfolio gain was 1.00329 per day, or 1.00329^253 = 2.2956X per
> calendar year, for 253 trading days per year.
> The best stocks today have Hurst exponents, H, (i.e., daily
> persistence,) of about 0.54, when averaged over a year, or so. Based
> on that, the tsinvest program should have wagered 2P - 1 = 8% per day,
> (on average,) and it should have optimized the gain to be 0.08^2 =
> 0.0064, for a gain per day, G, of:
>     G = (1.08)^0.54 * (0.92)^(1 - 0.54) = 1.00320855854
> which means the theoretical and empirical values are very close,
> (about 1 part in 10,000,) over the 2.5 year period, (it stumbled
> into ASND, which ran about H = 0.55 for that time interval-but it had
> to work its way through the '93 recession, too.)
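The gain figure works out from the optimal-fraction formula
G = (1 + f)^P * (1 - f)^(1 - P), with f = 2P - 1 and P = H; a quick
check in Python:

```python
P = 0.54                   # Hurst exponent as the probability of persistence
f = 2 * P - 1              # 0.08: fraction of the portfolio wagered per day
G = (1 + f) ** P * (1 - f) ** (1 - P)

print(round(G, 8))         # ~1.00320856, vs. 1.00329 measured
```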
> Note that I do not suggest using tsinvest this way.
> Those arguments are very dangerous and risky, (the -m0 allows it to
> drop hedging the portfolio against individual stocks in the portfolio
> under certain conditions-allowing it to invest everything in only one
> stock.)  It's probably more risk than any sane person would want to
> take. (Its sole function is verification of pro forma against theory;
> use the -r to dump the internal data structures, and that will print
> the numbers it's using, which can be compared against theory, and other
> methodologies, like those proposed by Hurst, etc.)
> It's probably not very practical, either, since TABLE I does not
> include broker's transaction costs, nor does it include broker
> inefficiency, (which is an issue in day-trading.)
> But its nice to know the theoreticals and empiricals are in close
> agreement when exploiting persistence.
>         John
> BTW, I don't use the -d5 -m0 options together. I don't encourage
> anyone else to do it, either. However, if one must, then by all means use
> the -c option to force the program to clamp down on its data set size
> requirements, (which forces the program to compensate its wagering
> functions for fractal run length dynamics, in addition to standard
> statistical estimates.)
> John Conover writes:
> > Yea, that is exactly what the -d5 option to tsinvest does. It picks
> > its stocks based on "forecastability", i.e., H > 0.5, (among other
> > things,) and maintains a "history" of the patterns of ups/downs, then
> > bases its wager on 2P - 1, which is optimal, where P = H, (and "bets"
> > avg/rms amount of the portfolio on it, if the -a flag is used.)
> >
> > If there is persistence, then a short term pattern will emerge in the
> > ups and downs, by definition. That's what H > 0.5 means.
> >
> > For example, suppose H = 0.6, (and if you can find that, buy it-ASND
> > was the only stock that sustained that for more than a calendar
> > quarter in the last decade,) and, further, suppose the stock moved
> > down yesterday. Then there is a 60% chance that it will move down
> > today, too. So, one waits until it makes an up movement, and buys
> > since there is a 60% chance that it will move up the following day, too. Then
> > sell on the first down movement, and then start the whole scenario
> > again, (note that it is not a cyclic or periodic phenomenon-it's
> > stochastic-a probabilistic scenario, so one has to optimize wagering.)
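The trading rule sketched above is easy to simulate; a minimal sketch,
assuming a simple persistent coin, (the chance that today repeats
yesterday's direction is H = 0.6):

```python
import random

random.seed(1)
H = 0.6                                  # chance today repeats yesterday's direction
moves = [1]
for _ in range(100_000):
    repeat = random.random() < H
    moves.append(moves[-1] if repeat else -moves[-1])

# Rule: after an up move, hold for the next move; count how often it wins.
wins = total = 0
for yesterday, today in zip(moves, moves[1:]):
    if yesterday == 1:
        total += 1
        wins += today == 1

print(round(wins / total, 2))            # close to 0.6
```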
> >
> > On a day-to-day basis, H, (for a "typical" stock,) runs about 53%-56%
> > on the American exchanges, about 62% for currency exchanges, and is a
> > market inefficiency, (specifically, that not all holders of a stock
> > act instantaneously, as per EMH theoreticals, on market information.)
> > So, if something happens in the marketplace that moves a stock's
> > price, some stock holders will react virtually instantaneously. Others
> > will react at the end of the day, others the next day, others at the
> > end of the week, and so on. The H for intra-day trading is quite high,
> > (that is what the day traders attempt to exploit.) By about 3 days,
> > the stock's price has accommodated the information-all share holders
> > have reacted to the information, (meaning that the Lyapunov exponent
> > defines the horizon of visibility, or predictability, at about 1.5
> > days, or so; the reaction to information is cut in about half, every
> > day.)
> >
> > If information were accommodated by the market instantaneously, then,
> > as you say, the market dynamics would be strict Brownian motion, (H =
> > 0.5,) and the market would be perfectly efficient, and fair, (i.e., no
> > one could have an advantage-and investing would be a short term
> > zero-sum game, too.) H != 0.5 is a market inefficiency, and is
> > exploitable, (see TABLE I
> > for an example on real data where the -d5 option is used.) In 1965
> > Paul Samuelson, using the approximation that information is
> > accommodated by the market instantaneously, showed that market
> > dynamics would be Brownian. In 1989, Brian Arthur showed that it is
> > the only long term stable solution to a market place, (aggregate
> > arbitrage system,) and all inefficiencies will, eventually, be
> > arbitraged away, (i.e., H != 0.5 is not sustainable, in the long run.)
> >
> > See also:
> >
> >
> >
> >
> > which are graphs of the tsinvest internal data structures for the -d5
> > option, and the series at:
> >
> >
> >
> >
> >
> >
> >
> > (look at the date, and what happened in the last graph.)
> >
> > And, there is a very subtle caveat. A deterministic system does not
> > have to be a predictable system, (and is not, by definition, in
> > complex systems.) Although it is intellectually satisfying to discover
> > the mechanics of a deterministic system, one still has to address the
> > problem of how to use such knowledge to optimize the wagering
> > function. But that does not require knowledge of the underlying
> > mechanics-it can be determined directly, (by measuring, specifically,
> > the entropy of the system.)
> >
> > I mean, if one does discover the mechanics of a deterministic system,
> > and the predictability of the system is discovered to be, say, 60%,
> > then one would wager a fraction of (2 * 0.6) - 1 on that
> > predictability.  But one can measure the predictability by measuring
> > the entropy, without any knowledge of the underlying mechanics.
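Measuring the predictability directly, with no model of the underlying
mechanics, can be sketched the same way; counting repeats stands in
for measuring the entropy here:

```python
import random

random.seed(2)

# Hypothetical up/down record with 60% persistence (mechanics unknown to us).
moves = [1]
for _ in range(50_000):
    moves.append(moves[-1] if random.random() < 0.6 else -moves[-1])

# Measured probability that the next move repeats the last one.
repeats = sum(a == b for a, b in zip(moves, moves[1:]))
P = repeats / (len(moves) - 1)

wager = 2 * P - 1          # fraction of the portfolio to risk on it
print(round(P, 2), round(wager, 2))
```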
> >
> >         John
> >
> > BTW, the numbers we see today for the Hurst exponent, H, are less than
> > they were a quarter of a century ago. Daily H > 0.6 was not uncommon.
> > However, with the advent of modern computer technology and networks,
> > it has been decreasing toward 0.5, (as it theoretically should.) It
> > used to be much easier to exploit market inefficiency, (until the
> > mid 80's, when the information rate increased in the aggregate.)
> >
> > Jeff Haferman writes:
> > >
> > > I've been brainstorming a bit... the idea is to choose a
> > > pool of stocks with the Hurst coefficient "far" from 0.5,
> > > and then use these as candidates for neural net training.
> > >
> > > Underlying this is the notion that for H = 0.5, we have
> > > Brownian motion, and the time series is not predictable.
> > > H > 0.5 or H < 0.5 implies that we may be able to forecast
> > > the time-series (and, going further, I suppose we could compute
> > > the Lyapunov exponent to see how fast the time series decays,
> > > e.g., to get an idea of how far out we might be able to forecast).
> > >
> > > Any caveats before I start to undertake this exercise?
> > >


John Conover

Copyright © 2001 John Conover, All Rights Reserved.
Last modified: Tue Jun 5 17:31:30 PDT 2001