

Neural Network Package

jahuwaldt.tools.NeuralNets

 

This library is provided as free software under the terms of the GNU Library General Public License (LGPL).

I have written a Java class package for experimenting with simulated Artificial Neural Networks (ANNs).

For those of you who don't know, an ANN is a system of interconnected units (nodes, or neurons). Signals are sent along the connections between nodes, and each node typically decides whether to send a signal on to the nodes connected to it based on the signals it receives. An important characteristic of ANNs is their ability to learn from examples rather than having to be explicitly programmed. Given a pattern of inputs, the strengths of the connections between nodes can be adjusted (trained) to produce a desired pattern of outputs. This matters because there are often problems where you know what the correct result should be, but it is not possible to lay down a precise procedure for finding it. In such cases, providing examples enables an ANN to develop its own implicit rules, encoded in the strengths of the connections between nodes.

In addition, ANNs often exhibit association and classification. When presented with patterns that the network was not trained on, the network will attempt to classify them based on what it does know. Thus an ANN can deal with unexpected situations in ways that traditionally programmed computers cannot.

My simulated neural network package is designed so that the type of neuron used is quickly and easily changeable. You can create a network architecture (or use one of the architectures that I provide) and simply plug in whatever neuron type you want to use. Everything else can remain unchanged. I provide two different neuron types: a basic neuron that uses the standard sigmoid function and a similar neuron that uses a hyperbolic tangent function instead.
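
To give a rough idea of what "plugging in" a neuron type means, here is a minimal sketch in the same spirit. The names below (Activation, SimpleNeuron, and so on) are invented for this illustration; they are not the package's actual classes:

    // Illustrative sketch only -- these names are invented for this
    // example and are NOT the package's actual API.
    interface Activation {
        double activate(double netInput);
    }

    // Standard sigmoid (logistic) activation: output in (0, 1).
    class SigmoidActivation implements Activation {
        public double activate(double net) {
            return 1.0 / (1.0 + Math.exp(-net));
        }
    }

    // Hyperbolic tangent activation: output in (-1, 1).
    class TanhActivation implements Activation {
        public double activate(double net) {
            return Math.tanh(net);
        }
    }

    // A neuron computes a weighted sum of its inputs and passes it
    // through whichever activation function was plugged in.
    class SimpleNeuron {
        private final double[] weights;   // one weight per input connection
        private final double bias;
        private final Activation activation;

        SimpleNeuron(double[] weights, double bias, Activation activation) {
            this.weights = weights;
            this.bias = bias;
            this.activation = activation;
        }

        double output(double[] inputs) {
            double net = bias;
            for (int i = 0; i < weights.length; i++)
                net += weights[i] * inputs[i];
            return activation.activate(net);
        }
    }

Swapping the sigmoid neuron for the tanh one is then just a matter of passing a different Activation object; nothing else in the network has to change.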

I provide an overall FeedForward class that implements a standard multi-layered feed forward network: given a set of input patterns, it can be trained to produce a set of output patterns by feeding a signal forward through the network from one side to the other, with no recursion. The basic feed forward network does not specify a learning method, so you can easily extend this class to implement whatever learning method (on whatever neuron type) you wish to use.
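
As a sketch of what the forward pass itself does (again with invented names, not the package's actual FeedForward API), each layer's outputs simply become the next layer's inputs:

    // Minimal feed-forward pass sketch; the class and field layout here
    // are invented for illustration, not taken from the package.
    class FeedForwardSketch {
        private final double[][][] weights; // weights[layer][node][input]
        private final double[][] biases;    // biases[layer][node]

        FeedForwardSketch(double[][][] weights, double[][] biases) {
            this.weights = weights;
            this.biases = biases;
        }

        // Propagate an input pattern through every layer, front to back.
        // Assumes each node has one weight per incoming signal.
        double[] feedForward(double[] inputs) {
            double[] signal = inputs;
            for (int layer = 0; layer < weights.length; layer++) {
                double[] next = new double[weights[layer].length];
                for (int node = 0; node < next.length; node++) {
                    double net = biases[layer][node];
                    for (int i = 0; i < signal.length; i++)
                        net += weights[layer][node][i] * signal[i];
                    next[node] = 1.0 / (1.0 + Math.exp(-net)); // sigmoid
                }
                signal = next; // outputs become the next layer's inputs
            }
            return signal;
        }
    }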

I provide two extensions to the FeedForward class that implement different training algorithms.

The first, and simplest, is called "FeedForwardBP". This class provides a simple implementation of back propagation learning (backprop). Back propagation is a standard (though VERY slow and troublesome) method of training feed forward networks: the error at the output nodes is propagated back to the nodes in the internal hidden layers, and the error at each node is used to adjust the strengths of the connections between it and the nodes that feed into it. Back propagation is essentially a steepest descent numerical optimization algorithm: it follows the first-order gradient of the error and uses no curvature (second-order) information.
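
In outline, one back propagation step for a single sigmoid output node looks like the following. This is a sketch of the general delta rule, not the package's implementation:

    // One gradient-descent weight update for a single sigmoid output node.
    // A sketch of the general backprop delta rule, not the package's code.
    class BackpropSketch {
        static void backpropStep(double[] weights, double[] inputs,
                                 double target, double learningRate) {
            // Forward pass: weighted sum through the sigmoid.
            double net = 0.0;
            for (int i = 0; i < weights.length; i++)
                net += weights[i] * inputs[i];
            double out = 1.0 / (1.0 + Math.exp(-net));

            // Error term: (target - output) times the sigmoid's derivative.
            double delta = (target - out) * out * (1.0 - out);

            // Nudge each weight down the error gradient.
            for (int i = 0; i < weights.length; i++)
                weights[i] += learningRate * delta * inputs[i];
        }
    }

For hidden nodes the error term is instead accumulated from the deltas of the nodes downstream, which is the "propagating back" part of the name.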

The second is called "FeedForwardSCG". This class uses a more advanced training algorithm known as the Scaled Conjugate Gradient (SCG) method. This algorithm exploits second-order (curvature) information and avoids an expensive line search by using a Levenberg-Marquardt approach to scale the step size. It requires orders of magnitude fewer iterations for a given network than the back propagation algorithm; however, each iteration is vastly more complex computationally, and training requires about six times as much memory.
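
To hint at the difference, here is a generic conjugate gradient direction update (in the Polak-Ribiere form). This is not the package's SCG code; Moller's SCG additionally scales the step size with a Levenberg-Marquardt parameter rather than performing a line search:

    // Generic conjugate-gradient direction update, Polak-Ribiere form.
    // Sketches the family of methods SCG belongs to, not the package's
    // actual SCG implementation.
    class ConjugateGradientSketch {
        static double[] nextDirection(double[] grad, double[] prevGrad,
                                      double[] prevDir) {
            // Coefficient deciding how much of the old direction to keep.
            double num = 0.0, den = 0.0;
            for (int i = 0; i < grad.length; i++) {
                num += grad[i] * (grad[i] - prevGrad[i]);
                den += prevGrad[i] * prevGrad[i];
            }
            double beta = Math.max(0.0, num / den);

            // New search direction: steepest descent plus a memory of
            // the previous direction (this is the "conjugate" part).
            double[] dir = new double[grad.length];
            for (int i = 0; i < grad.length; i++)
                dir[i] = -grad[i] + beta * prevDir[i];
            return dir;
        }
    }

Note that steepest descent is the special case beta = 0: the extra vectors this method carries around are part of why SCG needs more memory per iteration than plain backprop.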

If you know of a better way to approach this design, please let me know. Also, if you happen to extend my NeuralNets package, I'd love to hear about it: what techniques are you working with, what process are you using, and so on.

If you download my package and use it, let me know what you think.

My NeuralNets package can be downloaded by clicking here: NeuralNets.zip, Version: August 11, 2002.

Version history.

Need help?


Created: June 14, 1998
Modified: August 11, 2002