Article 12754 of alt.sources:
Path: nntpd.lkg.dec.com!pa.dec.com!crl.dec.com!crl.dec.com!bloom-beacon.mit.edu!spool.mu.edu!newspump.wustl.edu!crcnews.unl.edu!backbone!backbone!wayne
From: wayne@backbone.uucp (Wayne Schlitt)
Newsgroups: alt.sources
Subject: XStar-2.0.0  Orbiting star system simulator    Part02/06
Date: Sun, 27 Aug 1995 23:12:27 GMT
Organization: The Backbone Cabal
Lines: 1395
Sender: wayne@backbone (Wayne Schlitt)
Message-ID: <WAYNE.95Aug27171227@backbone.uucp>
Reply-To: wayne@cse.unl.edu
NNTP-Posting-Host: cse.unl.edu
Originator: wayne@cse.unl.edu


Submitted-by: wayne@backbone.UUCP
Archive-name: XStar-2.0.0/part02


--- cut here ---

# Continuation of Shell Archive, created by backbone!wayne

# This is part 2

# To unpack the enclosed files, please use this file as input to the
# Bourne (sh) shell.  This can be most easily done by the command;
#     sh < thisfilename


# ---------- file theory_of_op.ltr ----------

filename="theory_of_op.ltr"

if [ -f $filename ]
then
  echo File \"$filename\" already exists\!  Skipping...
  filename=/dev/null		# throw it away
else
  echo extracting file theory_of_op.ltr...
fi

cat << 'END-OF-FILE' > $filename


                              Theory Of Operation
                              -------------------

	"Any n-body program that doesn't even mention Greenspan's method
         isn't worth a squat."



          When I first saw XGrav, I really liked it.  I thought that the
     stars wandering around on the screen were really neat.  But, it didn't
     seem to be working quite right, so I started to dig into it to see if
     I could "fix" things.  I kept working on it because it seemed to touch
     on a bunch of areas that I am really interested in.  I like to know
     how the physical world works, I like understanding the math behind the
     physics, I like applying the math to a computer program and I like
     doing computer graphics.

          So, creating XStar has really been an exercise in learning about
     these areas.  I find XStar fascinating to watch, but my real interest
     is in what I have learned.  In this document, I am trying to write
     down what I have learned before I forget it.

          In this light, I would be very interested in hearing about any
     errors or misconceptions that I might have in either this document or
     in XStar.




                            Theory of Star Movement
                            -----------------------


     The Background on the N-body problem
     ------------------------------------

          People have known for thousands of years that the planets move
     around the sun in nearly circular orbits, although this fact has also
     been forgotten for long periods of time.  Newton proposed his laws of
     gravity in 1687 and showed that the two body problem could be solved
     analytically and that the bodies would move in the form of one of the
     conic sections.  The three body problem (e.g. earth, moon, and sun)
     however, stumped Newton.  It wasn't until 1912 that K. F. Sundman came
     up with a converging infinite series that solved the 3 body problem.
     This series converges too slowly to be of much use though.  To this
     day, there is no known method of solving the general n-body problem
     for n>3, although there are many important special cases with
     solutions for n<=5.

          When I first started to dink around with XStar, I decided to look
     at my physics and math books and see what they had to say about it.
     Unfortunately, they tended to handle only the simplest of cases.
     Often, they would only talk about the 1 dimensional case, and never
     for n>2, other than to mention that the problem has not been solved.
     So, I decided to check out the local university library.  Folks, the
     n-body problem has been _well_ studied over the years.  I found books
     on the n-body problem in the departmental libraries of physics, math,
     chemistry, astronomy and several engineering disciplines.  The physics
     department had almost a half of a book case dedicated to the n-body
     problem.

          There are specialized versions of the n-body problem for tracking
     the orbits of satellites in the earth-moon-sun system.
     There are versions that deal with charged particles.  There are ones
     that take relativity into account, and ones that take into account
     quantum mechanics.  There are specialized methods that implement a
     "discrete mechanics" system, so that even though you won't get an
     exact answer, the resulting system will still have all the constants
     of motion preserved.  (i.e., the system will still have the same
     total energy, angular momentum, etc.)

          I even found a book entitled _Numerical_Solutions_of_the_
     _N-body_Problem_ (by Andrzej Marciniak) that tells you, in gory
     detail, how to solve the n-body problem for celestial bodies on a
     computer.  Many of the star movement routines implemented in XStar are
     derived from this book.  I also read several numerical analysis books,
     such as _Numerical_ _Recipes_in_C_ (by Press, Flannery, Teukolsky and
     Vetterling), and _Elementary_Numerical_Analysis_ (by Conte and de
     Boor).

          I was quite surprised to find that the results that I found were
     not what these books told me I should find.  The method that I had
     originally created, which was a simple Taylor series expansion of
     order 2 with an approximation of the next term to get it to order 3,
     performed as well as, if not better than, many of the other methods.  It
     took a lot of work to find a better method, and it isn't even a
     "factor of 2" better.



     Theory of Star Movement
     -----------------------

          Newton laid out the formulas needed to solve the n-body problem
     some 300 years ago.  They are really quite simple.  They are:

          F = m*a
               Force equals the mass times the acceleration.  (Both F and a
               are vector quantities.)

          F = G*m1*m2/(r12^2)
               The force between two bodies (of mass m1 and m2) is equal to a
               constant times the product of the masses, divided by the
               square of the distance between the bodies.  Actually, the
               formula looks more like F=(G*m1*m2/(r12^2))*(r12/|r12|) where
               r12 is the vector between the two bodies and |r12| is the
               length of the vector.  That is, the force is projected along
               the vector connecting the two bodies.

          v = dx/dt = x'
               Velocity is the rate of change of the position.  (Both v and x
               are vector quantities.)

          a = dv/dt = v' = x"
               Acceleration is the rate of change of the velocity.


     A little bit of simple algebra shows that for a given star and one
     other star, d^2x/dt^2 = G*m2/(|r12|^3)*r12.  For our problem, you have
     to add together one right hand side term for each other star in the
     system.  The result is a vector equation in the form of:

          x" = f( t, x )      (Where f() is a gory function)

     This is known as a differential equation because it relates the second
     derivative of x (aka x double prime, aka x"), to the vector x and time.
     Sometimes there are ways of solving this type of equation, but often
     there isn't.
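
          To make the right hand side concrete, here is a minimal sketch
     of evaluating f() for a set of stars, i.e. summing the term
     G*m_j*r_ij/|r_ij|^3 over all the other stars.  (The function name
     and the data layout are mine, just for illustration; this is not
     XStar's actual code.)

```python
# Sketch of the n-body right hand side f(t, x): for each star, sum the
# acceleration G*m_j*r_ij/|r_ij|^3 contributed by every other star.
# (Illustrative only -- names and structure are not XStar's code.)

G = 1.0  # gravitational constant in simulation units

def accelerations(masses, positions):
    """positions is a list of (x, y) tuples; returns a list of (ax, ay)."""
    n = len(positions)
    acc = [(0.0, 0.0)] * n
    for i in range(n):
        ax = ay = 0.0
        xi, yi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            r3 = (dx * dx + dy * dy) ** 1.5  # |r_ij|^3
            ax += G * masses[j] * dx / r3
            ay += G * masses[j] * dy / r3
        acc[i] = (ax, ay)
    return acc
```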


          When you can't solve it through the standard methods of
     differential equations, you generally have to resort to numerical
     analysis.  Any good introductory numerical analysis book will explain
     the details, so I will just mention the important points and results.

          One way of working with this type of equation is to integrate out
     the derivatives, as follows, and then do numeric integration on the
     result.  In our case, we have to do the integration twice, once to get
     the velocity, once more to get the position.  For this reason, the
     methods used to solve this differential equation are often called
     "Integration of Ordinary Differential Equations."

          x" = f( t, x )
          ->  d^2x/dt^2 = f( t, x )
          ->  d^2x = f( t, x ) dt^2

                  /` /`
          ->  x = |  |  f( t, x ) dt dt
                 ./ ./

          We perform this integration by first breaking time down into a
     series of small, discrete steps.  This is known as discretization of
     the problem.  Then we calculate approximate solutions for each step in
     the series.  This, of course, leads to small "discretization errors"
     which accumulate.  You want these errors to be as small as possible,
     and one way of reducing the discretization error is to reduce the size
     of each step.  There are two problems with that solution though.
     First, it takes a lot longer to come up with the final solution.
     Secondly, there is another source of errors, known as round off
     errors, which are inherent in (almost) all floating point calculations
     on computers.  The smaller the step size, the more calculations will
     have to be done and the larger the accumulated rounding errors will
     be.  So, there is a limit to how small you can make the step size
     without starting to increase the size of the error again.


          The simplest method of solving the above differential equation is
     called Euler's method, and when applied to star movement with a time
     step size of h comes out as:

          v(t+h) = v(t) + a(t)*h
          x(t+h) = x(t) + v(t)*h
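
          In code, a single Euler step might look like the sketch below
     (not XGrav's actual source; `accel` is a stand-in for whatever
     routine evaluates the gravitational acceleration at a position):

```python
# One Euler step for a single body: the velocity is advanced using the
# current acceleration, and the position using the *old* velocity, just
# as in the formulas above.  (Illustrative sketch; `accel` is a stand-in.)

def euler_step(x, v, accel, h):
    """x and v are (x, y) tuples; accel(x) returns the acceleration at x."""
    ax, ay = accel(x)
    v_new = (v[0] + ax * h, v[1] + ay * h)
    x_new = (x[0] + v[0] * h, x[1] + v[1] * h)
    return x_new, v_new
```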

          This is the method used by XGrav and many other simple n-body
     programs that I have seen.  Most numerical analysis books use this
     method as their first example and then immediately say "Don't use this
     method.  It gives very large errors."  The reason why is that the
     exact solution will have the form:

          v(t+h) = v(t) + a(t)*h + 1/2 * h^2 * a'(z)

     That is, there is really some other, unknown, term that is on the
     order as the square of the step size h times the value of the rate of
     change of the acceleration at some time period z in the range of (t
     ... t+h).  Euler's formula is really just the "Taylor series
     expansion" of the exact solution truncated after the second term.
     (Discretization error is also known as truncation error for this
     reason.)

          We can use Taylor's theorem to derive a more accurate
     approximation by using more terms.  As an example, we could use:

          x(t+h) = x(t) + x'(t)*h + 1/2 x"(t)*h^2 + 1/6 x'"(t)*h^3
                   + 1/4! x""(t)*h^4 + ... 

     It is Taylor's theorem that gives us the error term (in this case
     E=1/5! h^5 x'""(z) ) which tells us that all the missing terms when
     added up will not exceed this term.

          The error term can be said to be O(h^5), that is, it is on the
     order of h^5.  Thus, the formula is accurate to O(h^4) and so it is
     known as a "fourth order" approximation of the exact solution.

          The problem with Taylor's method is that finding even the third
     derivative of x(t) can be very hard to do in our particular case.  The
     ad-hoc 3rd order Taylor series method that I developed estimates
     x'"(t) by using the last few acceleration values to guess at how
     quickly the acceleration is changing.

          All the other methods used to solve this differential equation
     end up having some sort of error term which (usually) can be expressed
     as some constant times some power of the step size times some nasty
     function evaluated during the time interval.  The constant is often
     hard to calculate, the nasty function is very hard to calculate, the
     time value that the nasty function is evaluated at is impossible to
     calculate, so the only easily known quantity of this error term is the
     power of the step size.



     How Do You Decide What the Best Method Is?
     ------------------------------------------

          Ok, I have said that I was able to create an ad hoc method of
     calculating the positions and velocities of stars that was better
     than all but a few of the other methods that I could find.  What do
     I mean by a "better method?"  Well, let's try
     this for a definition:

     Definition:
               One method is _better_ than another method if, in a fixed
               amount of time, it is able to calculate the positions and
               velocities with the greatest accuracy.

          Let's look at that definition a little closer.  The "in a fixed
     amount of time" part is important.  It is not any good to have a routine
     that calculates the movement more accurately if it also takes longer
     to do the calculations.  The quicker method could be made more
     accurate by just decreasing the step size a little.  For the same
     reason, it is meaningless to just compare "which is more accurate at a
     given step size", since some methods do much more work per step.

          Now, if you look at the definition a little closer, you might
     notice a slight problem with it.  How do we determine which method
     "has the greatest accuracy"?  Well, let's try this for a definition
     of accuracy:

     Definition:
               One method is more _accurate_ than another if the results
               more closely match what you would get in the real world.

          This really is what we want the definition of accuracy to be, but
     we have a problem:  How can we compare the computer results to the
     real world?  We would need a test environment with zero gravity (not
     just the microgravity of the space shuttle).  Moreover, setting up and
     measuring the experiment to the required accuracy would be a nightmare.
     The only way to check this against the real world would be to compare
     the results of the planet movements, or the movements of the stars in
     the galaxy, or a satellite.  Unfortunately, I don't have any of this
     data.

          In Andrzej's book, he uses a two body system that he can compute
     exact values for, but I don't think that is good enough to judge the
     interactions of 15 stars with.  With more than 2 stars, it is hard to
     tell which of several outcomes is really the "correct" one, or if any
     of them are correct.

          More fundamentally, there is the problem that there are many
     different types of deviations from the exact result, and it is hard to
     say which ones are worse than others.  I have based my judgements of
     which methods are "better" by looking at the types of errors that I
     see and comparing them with other methods that have a smaller step
     size.  If a bunch of different methods all agree that a particular
     star system should end up a certain way when a very small step size is
     used, then I can use that information to judge the methods when they
     use a larger step size.  So the definition of accuracy that I have had
     to use is:

     Definition:

          One method is more _accurate_ than another if the results seem to
          more closely match the results of several other methods when they
          used "substantially" smaller step sizes, or more cpu time or
          both.  Signs that a method has the types of errors that I don't
          like cause that method to be downgraded.

     Talk about a cop out of a definition....



     Types of Deviations In Star Movement
     ------------------------------------

          There are a bunch of different types of deviations from the ideal
     star movement that show up from the discretization error and the round
     off error.  Some of the more typical results that we will see are
     these:


     * Gaining or losing energy.

          Instead of a star making a perfectly circular orbit around a
     collapsar, the orbits decay and either spiral in or spiral out.  So,
     the star movement looks like this:

                                     +++
                                   +     +
                                 +         +
                                +   +++     +
                               +   +   +     +
                               +   + *  +    +
                                +   ++  +    +
                                  +    +
                                    ++

          This type of error tends to make a very uninteresting system
     because you will quickly lose a lot of stars.  If I remember
     correctly, Euler's method tends to lose energy, but ab4 tends to gain
     energy.


     * Distortion

          The taylor3 method both gains and loses energy, depending on
     where a star is.  When a star orbits a collapsar, it keeps its
     elliptical orbit, but the orbit is more elongated than it should be.
     It is hard to even notice this unless you monitor the total energy
     level as the system progresses.

          It is hard to say what this type of error will do to a system.
     With only two stars it is hard to tell that this type of error even
     exists, but with many stars it is clear that the taylor3 method does
     not give results as accurate as those of other methods.


     * Perihelion shift

          Some methods maintain the elliptical orbit around a collapsar, but
     the perihelion (the spot on the ellipse closest to the collapsar)
     rotates around the collapsar over time.  Energy is not gained or lost
     over time, but the angular momentum changes.

          The result of this error is actually kind of neat.  Stars that
     orbited close to a collapsar would form a solid disk of colors.  Does
     that make this error better than the rest?


     * Overstep phenomenon

          This error is directly caused by the discrete movement of the
     stars.  Say you have a star that is falling toward a collapsar.  As it
     gets closer to the collapsar it picks up speed and at each step, the
     force of gravity gets much larger.  When it gets to step 5 in the
     diagram below, it will be so close to the collapsar that the next step
     will take it well past the collapsar onto the other side, when in
     reality, it would have collided.  At point 6, it is now far enough
     away from the collapsar and moving at such a high speed that it will
     just keep on moving.  The net result is that the star has gained a
     great deal of energy out of thin air.

        .. .  .    .        . *              .  --->
         1 2  3    4        5                6

          It is important to note that this same error can (and usually
     does) occur when a star is moving around the collapsar or another star
     on a curved path.  Also, the star may not gain so much energy that it
     shoots off the screen.  Instead, it can end up mixing this added
     energy into the rest of the stars.

          Mathematically this is the result of trying to integrate over the
     singularity when the distance is zero, or a pole in the complex plane
     when the star is on a curved path.  This causes the interval of
     convergence for the Taylor series to be violated, or in methods that
     don't directly depend on Taylor's theorem, it causes similar
     violations of basic assumptions.

          There are several ways of fixing this problem.  You could try and
     check to see if a star moved through any other star after each step.
     This would be very hard to do and very expensive.  You could simply
     say that the stars collide at a fairly large distance, say around step
     4 in the above diagram.  This is what XStar currently does.  You could
     also use a movement method such as the Runge-Kutta method that handles
     singularities better, thus letting the collision distance be smaller.


     * Slingshot effect

          This isn't an error; this is a real part of physics, and it has
     actually been used by interplanetary space probes.  However, the
     slingshot effect looks so much like the overstep phenomenon, that it
     is very hard to tell the difference without actually watching the
     total energy levels.

          Basically, it is possible for stars to transfer energy among
     themselves by making close passes to each other.  I have thought about
     this quite a bit, but I still don't think I have a clear understanding
     of exactly how it works.  It goes something like this.  Say you have a
     small star (the one denoted by the dots) that is coming up from behind
     a more massive star.  While it is catching up, the gravitational
     forces cause the lighter star to speed up a lot and the heavier star
     to slow down a little bit.  Then, the small star makes a pass in front
     of the large star, causing the small star to change directions so that
     it is now moving quickly away from the large star.  Because the small
     star spends more time catching up to the large star than it does when
     it is moving away, the lighter star gets more energy out of the
     interaction than it had originally.  The larger star, of course, has
     less and the total energy is conserved.


         v start
         .....
          ->  ...
                 ..
                   ..
                     .
                      .
                       .
           *   *  * ***.**  ->
                       .

                     .

                   .



          This slingshot effect can never happen with collapsars because
     they don't move.  The overstep phenomenon happens most often with
     collapsars because they are so heavy.



     Types of Star Movement Methods
     ------------------------------

          There are many different methods that can be used to solve the
     differential equation x" = f( t, x ) and thus can be used to calculate
     the movement of the stars.  We have already mentioned two of them,
     Euler's and the Taylor series expansion, but there are many others.
     Here are a few of them, many of which are implemented by XStar.



     One Step Methods:

          Euler's method, the Taylor series and the Runge-Kutta methods
     are all classified as "one step methods" because they calculate x(t+h)
     using only the initial starting point x(t) and go from x(t) to x(t+h)
     in one step.  This is an important property because they are all self
     starting methods.  Other methods, such as the multi-step methods often
     use one-step methods to build up a "history" of x values.


     * Euler's method:  (not implemented)
          x(t+h) = x(t) + h*f(t,x)      E=1/2 * h^2 x"(z)

          As we have seen before, this is the simplest method, but the
     error term (E) is only O(h^2), so the method is only O(h) making this
     method only good for very quick and dirty approximations.


     * Taylor series:  (not implemented)

          x(t+h) = x(t) + h*f(t,x) + 1/2 h^2*f'(t,x) + ...
                   + 1/n! h^n*f^(n)(t,x)

          E = 1/(n+1)! h^(n+1)*f^(n+1)(t,x)

          We have seen this one before too, and while in theory, it can be
     very accurate, it is not very practical because it can be very hard to
     calculate the higher derivatives of x.   It should be noted that
     Euler's formula is simply the Taylor series truncated after the second
     term. 


     * Runge-Kutta's method:  (-m rk4)

          Runge and Kutta developed their method by trying to create a
     formula that could match the terms of the Taylor's series, but without
     the use of higher derivatives.  That is, something in the form of:

          x(t+h) = x(t) + a*k1 + b*k2 + ...

          Where k1 = h*f(t,x(t))
                k2 = h*f(t+A*h,x(t)+B*k1)
                ...
          and the constants a, b, ...  A, B, ... need to be determined.

          By using some auxiliary equations, an underdetermined system
     of equations can be found.  Then, you can choose the values of a few
     of the constants and solve for the others.  Some values of the
     constants have better properties than others; other values of the
     constants yield alternate derivations of methods such as
     Euler's method.

          The error term for Runge-Kutta's method can be hard, but not
     impossible to determine.  It can be shown that the error term is
     always going to be of an order less than or equal to the number of k
     terms, and if there are more than 4 k terms, the order must be strictly
     less than the number of k terms.  So, using five k terms does not
     increase the order of the method, although it can reduce the constant
     in the error term.  Thus, using four k terms is the "most efficient"
     of the Runge-Kutta methods.

          The most popular constants for the 4th order Runge-Kutta method
     yield:

          x(t+h) = x(t) + 1/6 ( k1 + 2*k2 + 2*k3 + k4 )

          where k1 = h*f(t,x(t))
                k2 = h*f( t+h/2, x(t) + 1/2 k1 )
                k3 = h*f( t+h/2, x(t) + 1/2 k2 )
                k4 = h*f( t+h, x(t) + k3 )

          E = O(h^5)


          The down side to the Runge-Kutta methods is that they require
     several evaluations of f(), which in our case is very expensive.



     Multi-Step Methods:

          Multi-step methods use the previous values of x to help determine
     the future values.  For XStar in particular, it seems a waste to
     just ignore the incredibly long history of star locations.

          The down side to all of the multi-step methods is that they
     require some sort of one-step method to get them started.  Worse, the
     "getting started" phase is not limited to just the first time that the
     stars are created, but also when stars collide or bounce.  So,
     bouncing star systems that use multi-step methods are often using the
     one-step "starting" method for a significant percentage of the time.


     * Modified Taylor Series method:  (-m taylor3)

          Before I had even really looked into the books on solving this
     kind of differential equation, I knew that I could use the previous
     values to help out.  XStar v1.1.0 used the formulas:

          x(t+h) = x(t) + h*x'(t) + 1/2 h^2 x"(t) + 1/6 h^3 x'"(t)

          But the x'"(t) term (the rate of change of the acceleration, or
     Dt a) is very hard to calculate, so I used the approximation:

          Dt a = ( a(t) - a(t-h) ) / h

          Simply adding this approximation made a dramatic improvement in
     the overstep problem when stars passed close to collapsars.  The
     reason for this should be fairly apparent.  When an overstep occurs,
     the acceleration goes from being a large positive number to being a
     fairly large negative number.  The difference is going to be a very
     large negative number.  This dramatically reduces the amount of energy
     that is gained.  The result looks like the star 'just missed' the
     collapsar. 
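
          A sketch of the resulting update (my reconstruction from the
     description above, not the actual taylor3 code; in particular the
     velocity update and the division by h in the jerk estimate are
     assumptions on my part):

```python
# Third order Taylor step with x'"(t) (the "jerk") estimated from the
# change in acceleration over the previous step.
# (Reconstruction for illustration; not XStar's actual taylor3 routine.)

def taylor3_step(x, v, a, a_prev, h):
    jerk = (a - a_prev) / h                   # estimate of x'"(t)
    x_new = x + h * v + 0.5 * h**2 * a + (h**3 / 6.0) * jerk
    v_new = v + h * a + 0.5 * h**2 * jerk     # assumed matching update
    return x_new, v_new
```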

          One of my next thoughts was to try and approximate x'"(t) even
     closer by using more terms.  This is done, as any Numerical Analysis
     book will tell you, by taking the derivative of the Lagrange
     polynomial.  The Lagrange polynomial, L(t), is the fairly straight
     forward polynomial that matches a given set of output values for a
     given set of values of t.  Between those selected times, however, the
     values of the Lagrange polynomial can fluctuate widely.  The more
     terms that the Lagrange polynomial has, the more it can fluctuate.
     Thus, when you take the derivative of the Lagrange polynomial, the
     result can get worse with additional terms.

          I tried using the Lagrange polynomial of order 2 (i.e. the above
     formula), 3 and 4.  The results were clearly the best for the 3rd
     order derivative.


     * Adams-Bashforth's method:  (-m ab7 and -m ab4)

          The Adams-Bashforth method uses any number of previous values to
     help calculate the next value of x"=f(t,x).  There are many different
     orders to this method, but they are all in the form of:

          x(t+h) = x(t) + h/r ( a1*f(t,x(t)) - a2*f(t-h,x(t-h)) + ... )

          Where  r, a1, ... , an  are constants

          When I first looked at these formulas, I thought that they would
     suffer the same problem that adding more terms to the derivative of
     the Lagrange polynomial suffered from, namely, they would get worse
     with additional terms.  After all, how much is a value from a long
     time ago really going to help?  Some of the results in Andrzej's book
     seemed to confirm this suspicion, but it turns out that they do not
     get worse.  There does get to be a point where the rounding error
     starts to get worse, so for practical reasons we are limited to a 7th
     order Adams-Bashforth formula.

          In fact, the 7th order Adams-Bashforth formula is the default for
     XStar, for reasons that will be explained later.
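
          For concreteness, the standard 4th order coefficients are 55,
     -59, 37 and -9 over r=24, which gives the following sketch for a
     generic first order equation y' = f(t,y) (textbook constants, not
     copied from XStar's ab4 routine):

```python
# One 4th order Adams-Bashforth step for y' = f(t, y).  fs holds the last
# four evaluations of f, newest first: [f(t), f(t-h), f(t-2h), f(t-3h)].
# (Generic textbook formula, shown for illustration.)

def ab4_step(y, fs, h):
    f0, f1, f2, f3 = fs
    return y + (h / 24.0) * (55 * f0 - 59 * f1 + 37 * f2 - 9 * f3)
```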




     Predictor-Corrector Methods:

          Another technique that can be used is that once you have an
     approximate value at a given time, there might be ways of improving
     this estimate.  Indeed, there are several methods of this form, some
     of which are quite old.

          Predictor-Corrector methods seem to have a lot of folklore
     associated with them.  It is possible to call the corrector method
     several times, but folklore has it that this is usually not
     worthwhile.  Also, it is possible to have a predictor and a corrector with
     different orders.  Folklore says that the corrector should be equal to
     or only one order higher than the predictor.  It is also said that
     predictor and corrector formulas usually have "the same form",
     although it is not clear exactly what is meant by that.

          According to most of the books, Predictor-Corrector formulas are
     passé.  Nowadays people use either Runge-Kutta's or
     Gragg-Bulirsch-Stoer's methods.  On the other hand, the books imply
     that the straight multi-step formulas were made obsolete by
     predictor-corrector methods, and I didn't find this to be true.



     * Modified Euler's method or Heun's method:  (not implemented)

          Since we know that the acceleration is not constant, just using
     the value of the acceleration at time t for the entire interval
     from t to t+h doesn't seem to be very reasonable.  A better
     approximation would be the average of the acceleration at time t and
     at time t+h.  We can do this via:

          y(t+h) = x(t) + h*f(t,x)
          x(t+h) = x(t) + h*[ ( f(t,x) + f(t+h,y(t+h)) )/2 ]

          E = O(h^3)

          This method improves Euler's method from a first order to a second
     order formula at the cost of one additional evaluation of f().
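
          As a sketch, for a generic first order equation y' = f(t,y)
     this works out to (illustrative code, not an XStar routine):

```python
# Heun's method (modified Euler): predict with an Euler step, then average
# the slopes at the two ends of the interval.  (Illustrative sketch.)

def heun_step(f, t, y, h):
    slope0 = f(t, y)
    y_pred = y + h * slope0                  # Euler predictor
    slope1 = f(t + h, y_pred)                # slope at the far end
    return y + h * (slope0 + slope1) / 2.0   # corrected, second order
```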


     * Adams-Moulton's method:  (-m am7)

          The Adams-Moulton method is similar to the Adams-Bashforth method
     except the formula uses the future value f(t+h,...), instead of just
     previous values of f().  To get this initial estimate of f(t+h,...),
     we use the Adams-Bashforth formula as a predictor, and then use the
     Adams-Moulton formula.  So the method has this form:

          y(t+h) = x(t) + h/r ( a1*f(t,x(t)) - a2*f(t-h,x(t-h)) + ... )
          x(t+h) = x(t) + h/s ( b1*f(t+h,y(t+h)) + b2*f(t,x(t)) - ... )

          Where r, s,  a1, ... , an,  b1, ..., bn  are constants
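
          A low order sketch of this predictor-corrector pattern uses
     the 2-step Adams-Bashforth formula as the predictor and the
     trapezoidal Adams-Moulton formula as the corrector (low order is
     chosen here for brevity; this is not the am7 pair itself):

```python
# One predictor-corrector step for y' = f(t, y):
#   predictor (2-step Adams-Bashforth):  y* = y + h/2 (3*f(t,y) - f(t-h,.))
#   corrector (Adams-Moulton/trapezoid): y  = y + h/2 (f(t+h,y*) + f(t,y))
# (Illustrative sketch only.)

def abm2_step(f, t, y, f_prev, h):
    f_now = f(t, y)
    y_pred = y + (h / 2.0) * (3.0 * f_now - f_prev)      # predict
    y_corr = y + (h / 2.0) * (f(t + h, y_pred) + f_now)  # correct
    return y_corr, f_now         # f_now becomes f_prev on the next step
```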


     * Other formulas  (Mid-point method)   (none are implemented)

          There turn out to be many other formulas of the form:

          x(t+h) = x(t+n1) + x(t+n2) + ...
                   + h/q ( c1*f(t+m1,x(t+m1)) + c2*(f(t+m2,x(t+m2)) + ... )

          where the n's and m's may reference either into the future or
     into the past.  Some of these have interesting properties, such as the
     formula:

          x(t+h) = x(t-h) + 2*h*f(t,x(t))    E = 1/3 h^3 x'"(z)

          This formula is no more expensive to calculate than Euler's
     method, but it is of O(h^2) instead of O(h).  This is called the
     mid-point method because it is saying that the best estimate for the
     average acceleration is at the middle of the time period.

          It has the down side that it is not very stable.  If you look at
     it, you will notice that the even time periods (t, t+2h, t+4h) depend
     only on the other even time periods and the same is true for the odd
     time periods.  So, it is easy for the time periods to get out of sync
     and for the movement to bop back and forth between two extremes.
     (Hey, that might be a neat error effect...  Maybe I should implement
     it... :-)



     The Gragg-Bulirsch-Stoer Method:  (-m gpemce8)

          This method is literally in a class by itself.  It seems to be
     the current favorite for solving this type of differential equation.

          The basic idea behind this method is that when you decrease the
     step size, you should get a more accurate answer.  If you take a
     fairly large interval H, and use a variety of smaller step sizes over
     that large interval, you will get a variety of different answers, but
     the answers should be converging as the step size gets smaller.  Well,
     why not take these converging answers and create a polynomial
     approximation of the answers as a function of the step size?  Then we
     can plug the magic step size of h=0 into this polynomial, and we
     should get a very good approximation of the answer, as if we had never
     discretized the operation at all.

          This same basic idea is also used in Romberg's integration, and
     the whole idea was pioneered by Richardson with his idea of
     "extrapolating to the limit."

          There are several pieces to making this method work:

          First, as I have already mentioned, we need Richardson's idea
     that the final answer could be thought of as a function of the step
     size and you can extrapolate what the answer would be if you put in
     the limiting case of a zero step size.

          Secondly, Gragg figured out that the mid-point method mentioned
     above, slightly reworked, has an error expansion that contains only
     even powers of h.  By careful manipulation, you can then knock off
     two orders of the error term at a time by cancelling the error terms.

          Thirdly, Bulirsch and Stoer figured out that using rational
     approximation instead of Gragg's polynomial approximation allowed you
     to break the limits of convergence of the Taylor series around
     singularities and poles in the complex plane.  They also figured out a
     good method of decreasing the step size (the Bulirsch sequence).


          Combining all these things together should yield a very accurate,
     very fast and _VERY_ complicated method.  It is so complicated in
     fact, that this is the only method that I didn't write the code for
     myself.  Instead I used the example code out of Andrzej's book, fixed
     a few bugs and made a few tweaks.  The only problem is that Andrzej
     only implemented Gragg's method of polynomial extrapolation instead of
     using rational extrapolation.  So, when XStar uses this method and two
     stars pass close together, it can give very wrong results.  (But the
     results are often spectacular.  Try -m gpemce8 -a .25 sometime...)



     Speed Hacks
     -----------

          There are a couple of important speed hacks that I have used in
     XStar.  First, I use a time step of 1, which in most methods causes
     all the h^n's to drop out of the calculations.  Secondly, I set the
     gravitational constant G to 1.  This saves many multiplications when
     evaluating f().

          On the surface, these changes just seem to make XStar work in
     some non-real universe where gravity has a different strength, but
     under closer inspection, this is not true.  Even the acceleration of
     gravity at the Earth's surface has a different numeric value in
     imperial units (32 ft/s^2) than in metric (9.8 m/s^2).  I have
     simply chosen units of measure and time such that G turns out to
     be one.  You can still model the same physical systems this way, you
     just have to convert everything from normal units to these weird
     units.

          Now, in the above descriptions of the movement methods, I have
     talked about increasing and decreasing the step size h.  With h being
     defined as 1, how can you change the accuracy of the system?  Well, if
     you work through the units of G and such, you find that to make the
     star movements twice as accurate, you need to cut the speed of all the
     stars in half and decrease the masses by a factor of 4.  The smaller
     masses mean less acceleration, but the lower speeds keep the stars
     from flying off.



     The Error Term
     --------------

          We have mentioned the error term quite a bit, but it is time to
     examine it a little more closely.  You may recall that there are three
     basic parts to the error term:

          * some constant

          * some power of h

          * the evaluation of a derivative of f() at some point in the time
            interval from t to t+h.

          i.e.  E = c * h^n * f^(n)(z)

          Now, I just said that I am using a step size of h=1.  Suddenly,
     it appears that the only part of the error term that is known for most
     methods has become meaningless.  Guess what.  It is.  Sort of.


          Let's look at the Taylor series expansion again.  Recall that it
     was:

          x(t+h) = x(t) + h*f(t,x) + 1/2 h^2*f'(t,x) + ...
                   + 1/n! h^n*f^(n-1)(t,x)

          E = 1/(n+1)! h^(n+1)*f^(n)(z),  t < z < t+h

          Now, when 0<h<1, the terms of the Taylor series get smaller and
     smaller and eventually converge toward zero.  When h>1, the terms
     still converge toward zero because 1/n! will decrease faster than h^n
     will increase.  (Consider the case where n=2*h.  h^(2*h) will have 2*h
     h's multiplied together, but (2*h)! will also have 2*h numbers
     multiplied together and half of them will be larger than h.)  Anyway,
     when h>1, it would appear that many more terms of the Taylor series
     are required to get the same accuracy.  However, for our problem,
     this is not true.

          A star moving at 1/2 mile per minute (h=1/2) could also be
     considered to be moving at 30 miles per hour (h=30).  Simply changing
     the units of measure doesn't change the physical system.  The nature
     of the f() function cancels out any changes in the choice of h (as
     long as the velocities and masses are changed accordingly).

          So does this mean that the order of the method is meaningless?
     That it is just as good to use a third order Taylor series expansion
     as it is to use a 7th order Adam-Bashford method?  Well, sometimes it
     is better, sometimes it isn't.  The critical parts of the error term
     are the constant (which is unknown in some methods) and the value of
     the unknown time z which the derivative of f() is evaluated at.  So,
     we can't say that one method is always going to be better than another
     method.

          We can, however, look at how the error term changes as we change
     the step size.  Say we have two methods, one of O(h^2) and one of
     O(h^3).  If we double the step size, then the first method's error
     term will change to be E1 = c1*(2h)^3*x'"(z) = 8*c1*h^3*x'"(z), while
     the other method will have E2 = c2*(2h)^4*x""(z) = 16*c2*h^4*x""(z).
     Comparing the growth factors, 16 versus 8, the higher order method's
     error grew twice as much.  So, the higher order method
     _improves_faster_ than the lower order method as you increase the
     accuracy, but it also gets _worse_faster_ as you decrease the
     accuracy.

          The Taylor series's constant for the error term tends to be
     smaller than those of the other methods, and the Runge-Kutta method
     tends to have smaller error term constants than the Adam-Bashford or
     Adam-Moulton methods.  This is why, at low accuracy levels, the
     taylor3 method is better than the ab7 method.

          The last part of the error term, the nth derivative of f(), is
     also important.  Sometimes this value will be very small, sometimes
     it will be large.  Sometimes it will cancel out the error from a
     previous step, sometimes it will make things worse.  In Andrzej's
     book, his 7th order Adam-Bashford method turned out to give much worse
     results than his 4th order Adam-Bashford method.  In my testing I
     found a case where the 4th order Adam-Bashford method with -a 1 gave
     much better results than the 7th order Adam-Bashford method with -a 4.
     It also gave better results than -m ab4 -a 2.  Obviously, sometimes a
     method just gets lucky (or unlucky).

          One more point.  If I have done my math correctly, each time you
     take the derivative of f(), a constant is pulled out.  This is due to
     the inverse square property of f(), so the first derivative of f()
     causes the constant -2 to be pulled out, the second derivative pulls
     out the constant -3, and so on.  So the constant of the error term of
     a 7th order method has to be multiplied by (-2)*(-3)* ... *(-9),
     which is 362880 in magnitude.  Yow.  Talk about a handicap.



     Variable Step Size Methods
     --------------------------

          There is no reason that you have to use a constant step size
     during the entire run of the program.  The system needs the most
     accuracy when a star is making a sharp curve around another star, and
     it needs the least accuracy when it is slowly moving off in a fairly
     straight line.

          Many of the methods, the Runge-Kutta and the Gragg-Bulirsch-
     Stoer methods in particular, can estimate the accuracy of the results
     that they are returning and it is not too hard to have them adjust the
     step size accordingly.

          The problem with variable step size methods is that they tend to
     have the smallest step size, and thus run slower, when the stars are
     moving the fastest.  If you implemented a variable step size method
     and didn't do anything to counteract this, you would see stars "slow
     down" as they approached collapsars and "speed up" when they are far
     away from them.  For this reason, no variable step size methods were
     considered for XStar.  (sort of.  See the section on future
     possibilities.)



     Final Analysis of the Methods
     -----------------------------

          According to the books on numerical analysis that I have read,
     the multi-step Adam-Bashford method shouldn't even be in the running
     for the best method, and my taylor3 method should be a joke.  After
     all, the Adam-Bashford method isn't even as sophisticated as a
     predictor-corrector method and the Taylor series is rarely talked
     about except for its use as a fundamental theory.

     The highly regarded _Numerical_Recipes_in_C_ has these comments on the
     subject:

               Runge-Kutta succeeds virtually always; but it is not usually
          fastest.  Predictor-corrector methods, since they use past
          information, are somewhat more difficult to start up, but, for
          many smooth problems, they are computationally more efficient than
          Runge-Kutta.  In recent years Bulirsch-Stoer has been replacing
          predictor-corrector in many applications, ...  However, it
          appears that only rather sophisticated predictor-corrector
          routines are competitive.

          [ The straight Adam-Bashford method can hardly be considered a
            "sophisticated" method... ]
          ...

               The techniques described in this section [Bulirsch-Stoer]
          are not for differential equations containing nonsmooth functions.
          ... A second warning is that the techniques in this section are
          not particularly good for differential equations which have
          singular points _inside_ the interval of integration.

          ...

               We suspect that predictor-corrector integrators have had
          their day, and that they are no longer the method of choice for
          most problems in ODEs. [ODE=Ordinary differential equations, i.e.
          what we have]  For high-precision applications, or applications
          where evaluations of the right hand sides are expensive,
          Bulirsch-Stoer dominates.  For convenience, or for low-precision,
          adaptive-stepsize Runge-Kutta dominates.  Predictor-corrector
          methods have been, we think, squeezed out in the middle.  There
          is possibly only one exceptional case:  high-precision solution
          of very smooth equations with very complicated right-hand sides,
          as we will describe later.

          ...

               Our prediction is that, as extrapolation methods like
          Bulirsch-Stoer continue to gain sophistication, they will
          eventually beat out PC methods in all applications.  We are
          willing, however, to be corrected.

          [ I don't think I trust my analysis enough to tell them that
            they need to be corrected... :-]


          Let's look at the properties of our particular problem.  First,
     the evaluation of the right-hand side, i.e. f(), is _very_ expensive.
     It requires calculating distances from one star to every other star in
     the system and that involves the slow operations of division and
     sqrt().  The star system is normally very smooth, but it can also have
     singularities or poles in the complex plane when stars pass close to
     each other.  The movement routines need to be accurate, but since the
     results are just used to draw pictures, very high accuracy is not
     really required.  It just has to be accurate enough to avoid the worst
     of the movement errors.

          Another weird property of our star movement problem is that the
     smaller the step size is, the less a 'near collision' looks like a
     singularity and the more it looks like a smooth path.  This is because
     the stars tend to move around each other, so the point of the
     singularity (i.e., the other star) tends to move out of the way.  So,
     by using a simpler method such as the Adam-Bashford method, which
     requires a very smooth function, you can make the step size very small
     and thus get a smooth function.  But if you use a more expensive
     method, such as the Gragg-Bulirsch-Stoer, you have to use a much
     larger step size and thus you get the singularities that it can't
     handle.


               The following are two graphs of the x-axis component of the
          acceleration vector as a star makes a close pass to a collapsar.
          The time t=0 is arbitrarily chosen as the time when the star is
          the closest to the collapsar.
     
				    |
				  * |
				    |
				    |
				    |
				    |
			     *      |
			            |
		        *           |
              *    *                |
           -------------------------+-----------------------
			            |	   	     * 	  * 
			            |	       	*	
			            |
			            |      *
				    |
				    |
				    |
				    |
				    | *
				    |
          Graph of f() with a large step size.  Note the apparent
          singularity at t=0.


				    |
				    |
				    |
				**  |
			       *  * |
			      *     |
			    **      |
			 ***       *|
		    *****           |
              ******                |
           -------------------------*-----------------------
			            |	   	     ****** 
			            |	       	*****
			            |*	     ***	
			            |      **
				    |     *
				    | *	 *
				    |  **
				    |
				    |
				    |
          Graph of f() with a small step size.  Note that f() now looks
          like a continuous function and has smaller peaks.


          The predictor-corrector methods seemed to be slightly worse than
     the Adam-Bashford method.  The fact that you have to evaluate f()
     twice per step, and thus the predictor-corrector methods need twice
     the step size, seems to cancel out the improved error term.
     Apparently it is more important to have a smaller step size, and thus
     make the function appear smoother, than it is to have a more accurate
     but larger step.

          The Runge-Kutta method is hampered because it has to take trial
     points into the future, but the movement of the stars changes how the
     gravitational force field will look in the future.  So, the rk4 method
     not only has to calculate f() 4 times, it also has to do 4 trial star
     movements, which are not going to be completely accurate.

          The taylor3 method that I developed is good at low accuracy
     levels because its error term has a very small constant.  It also, by
     its very structure, tends to cancel out the effects of the overstep
     phenomenon, which is one of the worst of the error types.  It is not
     just a coincidence that the taylor3 method is right at the borderline
     of being the most accurate method.  It was the method that I used to
     develop most of XStar, so if I had noticed an excess amount of
     accuracy, I would have either increased the display rate or increased
     the number of stars.  So, all the other methods are having to compete
     with the taylor3 method at the very peak of its efficiency.




                         Theory of Star System Creation
                         ------------------------------


          I have put a lot of work into the code that creates the initial
     star system configuration.  The results have been somewhat
     disappointing.  XGrav just created a bunch of stars with random
     locations and velocities.  There was a whole 9 lines of code dedicated
     to the task.  In contrast, XStar has some 1500 lines of star
     initialization code (more than XGrav in its entirety), yet the star
     systems that XStar creates last only about 2-3 times as long.  Unlike
     the movement routine, I wasn't even able to learn about a well
     researched subject, nor was I able to discover a whole lot on my own.

          The obvious goal of the star system creation routines is to
     create "interesting" star systems.  One way this can be done is by
     creating one of a large number of "special" star configurations.
     XStar does this, but to me, the challenge was to figure out how to
     create a large number of stars that would move around a lot and make
     close passes, but would not collide.  Many of the "special" star
     systems are actually test patterns that I created to help solve the
     "main" problem.

          For a "normal" star system, the obvious goals for making an
     interesting star system are: first, keep the stars on the screen;
     second, don't let them collide; and third, make it general so that any
     number of stars can be handled well.  There are a variety of things
     that interfere with these goals.



     Keeping the Stars On the Screen
     -------------------------------

     * The whole system drifts off to one side

          This is caused by the star system having a non-zero total linear
     momentum.  It is fairly easy to adjust the star system so that it has
     zero linear momentum, and this adjustment does not affect how the rest
     of the system interacts.

     * The whole star system is off center

          This is caused by having the center of gravity of the star
     system off center.  Like the linear momentum, this is easily fixed and
     doesn't affect other things.

     * The star system gains energy over time

          This can be caused by several things.  First, the movement
     routine might not preserve the total energy very well.  This is easily
     solved by using a better movement routine.  Euler's method has this
     problem, most of the rest do not.  Secondly, the overstep phenomenon
     can cause stars to gain energy.  This effect can be reduced by using a
     larger collision distance (-l <num>).

     * Stars leave the system very early on

          This is caused by giving stars more kinetic energy than they
     should have.  For stars that are moving in a circular orbit, having a
     kinetic energy to binding energy ratio of around 1/2 seems to work
     well.  At exactly 1/2, the stars will make an exact circle, but you
     can deviate a little to get elliptical orbits.  Stars that are moving
     tangentially to the center of gravity can have a higher ratio than
     stars moving directly in line with the center of gravity.

     * After many collisions, the stars seem to drift apart

          I don't think there is anything that can be done about this.
     Usually when stars collide, it is because they have transferred energy
     and angular momentum to other stars so that the colliding stars can
     fall together.  Because energy and angular momentum are two of the
     conserved constants of motion, it means that the other stars must now
     have a larger orbit.  [Does this mean that in order for galaxies,
     stars and planets to form, the universe must expand?]



     Keeping the Stars From Colliding
     --------------------------------

     * The whole star system collapses toward the center

          This is caused by not giving the stars enough kinetic energy and
     thus having the kinetic energy to binding energy ratio too low.


     * Stars collide at too large a distance

          This is adjustable with the -l parameter and the default is a
     fairly reasonable 1 pixel.  If you adjust this to be less than one
     pixel, make sure that you also greatly increase the accuracy (via the
     -a parameter).  Otherwise, the overstep phenomenon will cause other
     undesirable effects.

     * Stars collide early on

          This is hard to fix, especially if one considers any collision to
     be too early.  In the star creation code, I do add in an adjustment to
     the velocity to try to avoid nearby stars, but this does not always
     work very well.

     * Stars collide after a while

          I don't think there is anything that can prevent stars from
     eventually colliding.  I do put a counter-clockwise spin on things
     because it seems to help quite a bit with the long term stability of
     the system.  On the other hand, this may be part of what causes
     the star system to expand after many collisions.  This spin causes the
     system to have a fairly large angular momentum, which is preserved
     throughout the life of the star system.  When there are only a few
     stars left, they have to have a large separation.

          Another thing to think about on the subject of long lasting star
     systems is how large you should create the initial star system.  The
     star system can be scaled up or down by changing STARAREA in xstar.h
     and the resulting star systems will act just like you zoomed in or
     out on the star system.  However, if you make the star system smaller,
     then stars that would have made near passes to each other will now
     collide.  On the other hand, since all star systems expand over time,
     if you make the star system larger, then you will reach the point where
     the stars are all off the screen sooner.  Well, sort of.  Since the
     star system tends to expand as stars collide and larger star systems
     have fewer collisions, this expansion effect is somewhat mitigated.
     XStar is configured so that the initial star system configurations
     just barely fit on the default 512x512 window.

          A clue to how difficult it is to create a truly long lasting
     star system is that changing just a few of the magic numbers by 5-15%
     can change the length of time that a particular star system lasts by
     a factor of 2-10.  While it is
     possible to tweak the constants to make a particular configuration run
     for a long time, you must run a large number of star systems to see if
     those tweaks make an overall improvement.



     Making the Star Creation General
     --------------------------------

          One of the more successful aspects of the star system creation
     code is that it is fully generalized.  It works well over a very wide
     range of stars, from 2 to at least 50 stars.  There are few magic
     constants, and most of those can be traced back to the fundamental
     physics of the problem.  Now, the fact that the current algorithm can
     handle any number of stars fairly well doesn't mean I think it is an
     elegant algorithm.  It is a flexible and general hack, but it is still
     an ad-hoc system that cannot be fully derived from a theory.




                           Theory of the Display Code
                           --------------------------

          The display code is, of course, one of the major parts of XStar.
     It is, after all, an X screen saver.  While I have not put as much
     work into the display code as I have into the star movement code or
     the star system creation code, there was still a lot of thought put
     into it.  Figuring out how to do everything that XStar does in a quick
     and memory efficient manner took some thinking.

          The display code has to be able to add points as they are
     created, delete points when they get too old, replot the screen on
     expose events or mode switches, and correctly handle the case where a
     star trail overwrites an old trail.  To do all this, I have set up two
     major data structures.

          First, there is a ring buffer that is used to store the displayed
     points in.  In this structure is stored the x,y location of the point,
     what star created this point, what color of the rainbow it should be
     and a time stamp of what step this point was created on.  A pointer is
     kept to the beginning of the list so we know where to add new points,
     and to the end of the list so we know which points to erase.  The
     timestamp is used to determine how quickly a point should be erased.
     The points are always kept in timestamp order.

          The second structure is a hash table that is used to locate a
     specific x,y point in the ring buffer.  If the x,y location is found
     in the ring buffer when a new point is being added to the ring, then
     the old location is marked as being unused.  This keeps the erase code
     from clearing a point that was overwritten later on.  I had considered
     other options such as doing a linear search, but that would be way too
     slow.  A binary tree was also considered, but since the points tend to
     be added in a very non-random order, I would have had to use some sort
     of self-balancing tree (such as a red-black tree).  Also, for the
     amount of storage taken up by the two tree links, it is possible to
     create a hash table that is so large that you will rarely have to
     search more than a few table entries.  So, the hash table is probably
     the quickest, most memory efficient and simplest method.

          The hash table uses linear probing on a hash collision because I
     need to both add and delete items out of the hash table.  Linked lists
     of collision chains would require too much memory and using a
     secondary hash function to resolve collisions would make it very hard
     to delete items.  According to Knuth, you can estimate the number of
     items that you must check to find a match, or verify that an entry
     isn't in the hash table, by the formulas:

                                   Unsuccessful        Successful
          Separate Chaining:       1 + a/2             (a+1)/2
          Linear Probing:          .5 + 1/(2*(1-a)^2)  .5 + 1/(2*(1-a))
          Double Hashing:          1/(1-a)             - ln(1-a)/a

          Where a = the loading of the table.  i.e., the ratio of used
                    entries to total entries.

          Experiments with XStar seemed to show that the linear probe
     estimate is fairly close.  XStar does not use a truly random hash
     function, and it also has to keep placeholder tags that increase the
     number of entries that need to be searched by a little bit.  Still,
     with the default settings XStar will normally locate a duplicate point
     in one or two checks.  It can verify that a point has not been displayed
     in only 1-4 checks.  This is fewer compares than a binary search would
     take, although you have to include the cost of the hash function in
     deciding which method is the fastest.

          The hash function is a home grown method.  I simply took two
     large random numbers and found the next prime after them.  Then, I
     take the x and y coordinates, multiply them by these primes and add
     the result together.  I chose large numbers so that a single bit
     change in one of the coordinates would cause, on average, half of the
     bits in the hash to change.  I chose prime numbers so that common
     factors could not cause the same hash to be generated by just flipping
     the x and y coordinates around.  The hash function needed to be fairly
     random, but it also needed to be quick.  My testing seems to show that
     it is random enough and quick enough for XStar.




                                  Future Stuff
                                  ------------

          The following are notes on how I might implement some future
     enhancements.  Don't expect them to appear any time soon though...



     Variable Step Methods
     ---------------------

          Implementing a variable step method could make a very large
     improvement in the speed of XStar.  The problem has always been that I
     think that XStar should give a feel for how fast something is moving,
     and a variable step method would actually make stars appear to move
     slowest when they are really moving the fastest.

          With the recent creation of the display points list, it is now
     possible to create star positions before they need to be displayed.
     We can fill up the buffer with new star positions, and display them at
     a constant rate.  There are a lot of messy details that have to be
     worried about, but variable step sized methods are now a very real
     possibility and my estimate is that they could make XStar 2-10 times as
     fast.

          The easiest way to do this is, whenever you want to change the
     step size of the system, to simply rescale the velocities and
     masses.  This would be much quicker than actually keeping a step size
     variable around that has to be taken into account during the
     movements.  The display list code would need some work because the
     timestamps that are stored with each point are really "step numbers"
     and they must remain integers.  Knowing how quickly stars will need to
     be displayed, however, should help in determining how quickly we need
     to erase stars and still keep the buffer from overflowing.

          A more sophisticated variable step size method would be to have
     each star have its own step size.  There is no reason why a star that
     is moving slowly off by itself should have a smaller step size just
     because two other stars are making a close pass to each other.



     Speed Hacks for Calculating Star Attractions
     --------------------------------------------

          For large numbers of stars, calculating the total acceleration of
     a given star dominates almost everything else.  This calculation
     involves finding lots of distances and angles between various stars.
     Now, most of the time these distances and angles are not changing
     rapidly.  It is only the nearby stars that rapidly change their
     contribution to the total acceleration.  You should be able to do some
     sort of caching of the old values from the distant stars and make
     things run a whole lot faster.



     Going to 3-D
     ------------

          Going to a three dimensional system instead of just a two
     dimensional system would probably make the star systems last much
     longer.  After all, a one dimensional star system will either expand
     forever or collapse to a single point.  With two dimensions, stable
     orbits can be set up, although there are still a lot of collisions.
     Adding a third dimension should greatly reduce the number of
     collisions and make XStar behave more like the real galaxy.

          Maybe we could use color to denote the location in the third
     dimension or something.  I am also curious about what kind of looping
     patterns would be created in such a system.
END-OF-FILE

if [ "$filename" != "/dev/null" ]
then
  size=`wc -c < $filename`

  if [ $size != 63816 ]
  then
    echo $filename changed - should be 63816 bytes, not $size bytes
  fi

  chmod 660 $filename
fi

echo end of this archive file....
exit 0
--- cut here ---
-- 
Wayne Schlitt can not assert the truth of all statements in this
article and still be consistent.


