Figure 2 shows a classification of the optimization
methods included in the preliminary release of the library.
Except for the linear solvers, all methods can communicate with
user-supplied objective functions.
For algorithms such as conjugate gradient, the required gradient
information can be supplied by the user if it is available; otherwise
*COOOL* will compute it numerically by finite differences.
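
As an illustration of this numerical fallback, the following is a
minimal sketch of a forward-difference gradient; the step-size
heuristic and the function signature are illustrative choices of ours,
not *COOOL*'s internals.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of a forward-difference gradient approximation:
//   df/dx_i  ~  ( f(x + h e_i) - f(x) ) / h.
// The relative step size below is a common heuristic, not COOOL's choice.
std::vector<double> fd_gradient(double (*f)(const std::vector<double>&),
                                std::vector<double> x)
{
    const double f0 = f(x);
    std::vector<double> grad(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) {
        const double h  = 1e-8 * (std::fabs(x[i]) + 1.0);
        const double xi = x[i];
        x[i] = xi + h;                 // perturb one coordinate
        grad[i] = (f(x) - f0) / h;     // one-sided difference quotient
        x[i] = xi;                     // restore for the next coordinate
    }
    return grad;
}
```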
Using the freely available package *EXPECT*, we are able to adopt an
especially simple model of communication between the optimization
methods and the objective function for obtaining both function
values and their derivatives.
This is illustrated in Figure 3.
The model consists of a list of numbers, one for each model parameter.
The optimization routine first writes to its standard output a flag
telling the objective function whether a function value or a gradient
vector is requested, followed by these numbers; the objective function
then reads the flag and the numbers from its standard input, computes
the function value or gradient vector according to the request, and
writes the result to its standard output; finally, the optimization
routine captures the objective function value or gradient via its own
standard input and then decides how to update the model.
This simple model allows us to consider algorithms as diverse as
conjugate gradient and simulated annealing within a unified framework.
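
To make the exchange concrete, here is a minimal sketch of the
optimizer's half of the dialogue. The flag encoding (0 for a function
value, 1 for a gradient) and the number formatting are assumptions for
illustration only, not the conventions actually used by *COOOL*.

```cpp
#include <cstdio>
#include <vector>

// Sketch of the optimizer's half of the exchange.  It writes a
// request flag and the model to standard output, then reads the
// reply from standard input.  The flag convention (0 = function
// value, 1 = gradient) is assumed for illustration.
double request_value(const std::vector<double>& model)
{
    std::printf("0");                  // flag: a function value is wanted
    for (double m : model)
        std::printf(" %.17g", m);      // the model parameters
    std::printf("\n");
    std::fflush(stdout);               // push the request down the pipe

    double value = 0.0;
    std::scanf("%lf", &value);         // the objective function's reply
    return value;
}
```

The objective function's half of the dialogue is the mirror image; a
complete example is sketched near the end of this section.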

**Figure 2:**
An overview of the optimization
algorithms contained in *COOOL* (see text below). Included are
linear solvers for rectangular linear systems, using both
least-squares and general norms, and local direct-search and
quasi-Newton methods. Monte Carlo global optimization
methods will be part of a future release of *COOOL*.

**Figure 3:** A model of communication between the objective function
and the optimization routine.
Each
objective function is a stand-alone program which reads models
from its standard input and writes numbers to its standard output.

A key point is that the communication
between the optimization routine and the objective function is
transparent to the user.
If you want to use one of the optimization methods on your own
objective function, all you have to do is write the objective function
as a stand-alone program which reads a flag followed by a model from
standard input and writes a number (or a gradient vector) to standard
output, until an end-of-file is reached. On the other hand, if you
want to add your own optimization algorithm to the library, you do not
have to know how this communication mechanism works. Your
application can still take advantage of this communication capability
by declaring an instance of an objective function class within the
optimization code.
(*Instantiation* is the Object-Oriented
Programming (OOP) jargon for this. The closest C analog would be
declaring a variable of a given type, but in C++,
instantiating an object may also trigger the execution of complex
instructions.) The objective function class inherits the ability to
perform this communication.
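
For concreteness, a complete user-supplied objective function might
look like the following sketch, here a simple two-parameter quadratic;
the flag convention (0 for a value, 1 for a gradient) mirrors the one
assumed above and is not *COOOL*'s documented protocol.

```cpp
#include <cstdio>

// Sketch of a stand-alone objective function: the simple quadratic
// f(x, y) = x^2 + y^2 in two model parameters.  The program loops,
// reading a flag and a model from standard input and writing the
// reply to standard output, until end-of-file is reached.  The flag
// convention (0 = function value, 1 = gradient) is assumed.
int main()
{
    int flag;
    double x, y;
    while (std::scanf("%d %lf %lf", &flag, &x, &y) == 3) {
        if (flag == 0)
            std::printf("%.17g\n", x * x + y * y);       // function value
        else
            std::printf("%.17g %.17g\n", 2 * x, 2 * y);  // gradient vector
        std::fflush(stdout);  // the reply must reach the optimizer promptly
    }
    return 0;
}
```

Because such a program talks only to its standard streams, it can be
tested by hand from a terminal before being handed to any of the
optimization routines.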

The linear solvers are really a separate issue. The codes are written in the same OOP style as the rest of the optimization methods, and therefore benefit from the same ideas of encapsulation, inheritance, etc. (see below). However, they require the specification of a matrix, which is often huge and sparse.
