
 

Software For Business Intelligence Analytics:

Usage


Home | Installation | Usage | FAQs | Utilities | Architecture | QA | Tests | Links | Mailing List | License | Author | Download | Thanks




Paradigm of the Analysis

The assumption is that decision making in business has to address uncertainty. Sometimes decisions work out; other times they don't. In that sense, decision making in business is like gambling: if the decisions work out more often than not, they were a wise "wager", and the business will prosper.

To make wise decisions requires some understanding of the volatility of the marketplace. Volatility is proportional to risk in the marketplace.

The objective of the analysis is to optimize business operations under uncertain conditions: specifically, to maximize profitability, growth, or market share while at the same time minimizing decisional risk.

To do this, marketplace volatility has to be measured in a useful way and shown to be stationary (i.e., constant over long-term intervals), so that it can be accommodated by a decision process.

What this implies is that it is necessary to empirically verify that marketplaces are fractal. To do this requires a significant data set.

For extensibility and flexibility, the analysis uses Unix paradigms:

  • Many small programs, each doing a specific task in the analysis, orchestrated by shell scripts.

  • Consistent Unix flat file database structures (either tab- or comma-delimited files are acceptable) that are "standard" for the programs used in the analysis.
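As a toy illustration of the flat-file convention (the field values here are made up, not taken from the actual data sets), standard Unix tools compose naturally over such tab-delimited records:

```shell
# Write a small tab-delimited time series, one record per time value
# (hypothetical dates and values, for illustration only).
printf '19930104\t46.25\n19930105\t46.75\n19930106\t47.00\n' > data

# Count the records:
wc -l < data

# Sum the second field with awk:
awk -F'\t' '{ sum += $2 } END { printf "%.2f\n", sum }' data
```

Because every program in the analysis reads and writes this same record format, any of them can be chained in a pipeline or driven from a shell script.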

This paradigm allows extensibility; new data sets can be added with a minimum of effort. (In point of fact, the entire industrial market database at the US Department of Commerce and Federal Reserve was used in the analysis to empirically verify that the volatility was stationary.)

Additionally, graphs of the data and variables can be produced automatically (using "standard" Unix infrastructure in the shell scripts) and passed to TeX or LaTeX as typesetting definitions, complete with logical constructs, to produce a document of the analysis for appropriate scrutiny. (Appendices C and D of the manuscript were completely machine generated: about 600 of the total 700 pages in the document.)


Usage of the Software

Change directory to the markets directory in the source tree, which contains the data constructions for all markets as well as fabricated test cases. Three commands can be issued from the top level:

  1. make, which enters each of the directories and makes the data construct for each directory/market/data set

  2. make clean, which enters each of the directories and removes all deciduous files, leaving only those files required in the ../doc directory for inclusion in the LaTeX documentation

  3. make realclean, which enters each of the directories and removes all constructed files, leaving only the starting data and the link files into the Master directory
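The three-target pattern itself can be sketched with a toy Makefile (this is only an illustration of the all/clean/realclean convention; the real Master/Makefile performs the actual data construction):

```shell
# Build a throwaway directory with a minimal Makefile demonstrating
# the all / clean / realclean target convention.
mkdir -p toy && cd toy
printf 'all:\n\ttouch constructed.dat\nclean:\n\trm -f intermediate.tmp\nrealclean: clean\n\trm -f constructed.dat\n' > Makefile

make all                  # constructs the data product
test -f constructed.dat   # it now exists
make realclean            # removes everything constructed
```

After "make realclean" the directory is back to its pristine state, which is exactly the behavior the top-level targets provide across all market directories.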

Each directory's data can be constructed/reconstructed individually by "cd"'ing to the directory and using one of the commands above, since all Makefile files, plot files, etc., are common and "ln -s"'ed to the files in the Master directory.

The Master directory contains a Makefile, a plot file, and a gnuplot file. The Makefile is "ln -s"'ed to each market/data directory. The plot file is also linked into each market/data directory, and can be used to preview the constructed data with the command "gnuplot plot". The gnuplot file is run from the Makefile, and makes the encapsulated PostScript files that will be included in the ../doc LaTeX documentation.
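The actual contents of Master/plot are not reproduced here, but a hypothetical minimal preview file of the same shape would look like this (column numbers and titles are illustrative):

```shell
# Write a hypothetical minimal "plot" preview file; the real Master/plot
# may differ in detail.
cat > plot <<'EOF'
set title "market data preview"
plot "data" using 1:2 with lines
pause -1 "press return to exit"
EOF

# It would then be previewed interactively with:
#   gnuplot plot
```

The "pause -1" line keeps the plot window open until return is pressed, which is the usual idiom for an interactive preview.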

Additionally, maketex.awk is an awk(1) script, executed from Master/Makefile during construction of the data in each market/data directory. It makes a file, parameters.tex, that, when "\input{.../parameters.tex}" is used from LaTeX, loads the data constructed in each directory into LaTeX for formatting into the document in ../doc. The other awk scripts, 3divide.awk and cumulativeproduct, are used for calculations from the Makefile. See the scripts for documentation.
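The exact macros maketex.awk emits depend on the analysis, but the mechanism, an awk script turning tab-delimited results into LaTeX definitions consumed via \input, can be sketched (the field names and values below are hypothetical, not the actual parameters):

```shell
# Hypothetical summary output from a data construction step.
printf 'records\t252\nmean\t0.0004\n' > summary

# Sketch of the mechanism (not the actual maketex.awk): emit one
# \def macro per record so LaTeX can \input the results.
awk -F'\t' '{ printf "\\def\\%s{%s}\n", $1, $2 }' summary > parameters.tex

cat parameters.tex
```

A LaTeX document that does "\input{parameters.tex}" can then typeset \records or \mean anywhere in the text, which is how the machine-generated appendices stay consistent with the constructed data.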

To add a new market, make a new directory, establish links to ../Master/Makefile, ../Master/gnuplot, and ../Master/plot, copy the data for the market to a file named "data", and type make. The data file should be a time series, one record per time value, in temporal sequential order. See the documentation to the program "tsfraction" for other specifics on the data file syntax. To include the market data in the LaTeX documentation in ../doc, see appc.tex and appd.tex for templates. The directory should be added to this directory's Makefile, in the three sections all, clean, and realclean, if it is to be included in a "top down" compile.
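The steps above can be sketched as shell commands. The Master directory is faked here with a trivial Makefile so the sketch is self-contained; in the real tree the links point at the actual Master files, and the data values below are illustrative:

```shell
# Stand-in for the real Master directory (illustration only).
mkdir -p Master newmarket
printf 'all:\n\t@echo built\n' > Master/Makefile
: > Master/gnuplot
: > Master/plot

# The actual steps for a new market directory:
cd newmarket
ln -s ../Master/Makefile ../Master/gnuplot ../Master/plot .
printf '19930104\t46.25\n19930105\t46.75\n' > data   # time series, oldest first
make
```

Because the Makefile, plot, and gnuplot files are symlinks, any later change in Master takes effect in every market directory at once.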

In general, the directories with names beginning with us.* contain example data and constructions from various departments of the United States Government. The directories with names beginning with ts* are test cases, where the data files were constructed, in one form or another, by the programs in ../simulation; they are generally used as examples and/or for testing software or methodological constructs, and each contains a NOTES file, an executable shell script that will construct the data file. All other directories, excepting the Master directory, contain data files and constructions for various market segments; in these, the NOTES file is a dummy that links the market segment's data as the data file.


A license is hereby granted to reproduce this software source code and to create executable versions from this source code for personal, non-commercial use. The copyright notice included with the software must be maintained in all copies produced.

THIS PROGRAM IS PROVIDED "AS IS". THE AUTHOR PROVIDES NO WARRANTIES WHATSOEVER, EXPRESSED OR IMPLIED, INCLUDING WARRANTIES OF MERCHANTABILITY, TITLE, OR FITNESS FOR ANY PARTICULAR PURPOSE. THE AUTHOR DOES NOT WARRANT THAT USE OF THIS PROGRAM DOES NOT INFRINGE THE INTELLECTUAL PROPERTY RIGHTS OF ANY THIRD PARTY IN ANY COUNTRY.

So there.

Copyright © 1994-2011, John Conover, All Rights Reserved.


Comments and/or bug reports should be addressed to:

john@email.johncon.com

http://www.johncon.com/
http://www.johncon.com/ntropix/
http://www.johncon.com/ndustrix/
http://www.johncon.com/nformatix/
http://www.johncon.com/ndex/
John Conover
john@email.johncon.com
January 6, 2006





Last modified: Tue Mar 1 16:06:36 PST 2011