Neural Networks Source Code 
This page compiles a suite of neural network source codes for hobbyists and researchers to tweak and have fun with.
Source code for 1–8 is from Karsten Kutza. More source code is available within this directory.
Network (Application)  Description 

1. Adaline Network Pattern Recognition

The Adaline is essentially a single-layer backpropagation network.
It is trained on a pattern recognition task, where the aim is to classify
a bitmap representation of the digits 0–9 into the corresponding classes. Due to the
limited capabilities of the Adaline, the network only recognizes the exact
training patterns. When the application is ported to the multi-layer
backpropagation network, a remarkable degree of fault tolerance can be achieved.
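The single-layer learning rule described above can be sketched in C as a delta-rule update. This is a minimal illustration, not Kutza's actual code; the function names and the input size ADA_IN are assumptions made for the example.

```c
/* Minimal sketch of an Adaline unit trained with the delta rule; names
   and the input size ADA_IN are illustrative, not from Kutza's source. */
#define ADA_IN 4

/* Linear output: bias + w . x (the caller thresholds it to classify). */
double adaline_output(const double w[ADA_IN], double bias, const double x[ADA_IN])
{
    double s = bias;
    for (int i = 0; i < ADA_IN; i++)
        s += w[i] * x[i];
    return s;
}

/* One delta-rule update: w_i += eta * (target - output) * x_i. */
void adaline_train(double w[ADA_IN], double *bias, const double x[ADA_IN],
                   double target, double eta)
{
    double err = target - adaline_output(w, *bias, x);
    for (int i = 0; i < ADA_IN; i++)
        w[i] += eta * err * x[i];
    *bias += eta * err;
}
```

Because the unit is linear, it can only separate classes with a hyperplane, which is why it memorizes exact training patterns rather than generalizing to distorted ones.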

2. Backpropagation Network Time-Series Forecasting

This program implements the now-classic multi-layer
backpropagation network with bias terms and momentum. It is used to detect
structure in a time-series, which is presented to the network using a simple tapped
delayline memory. The program learns to predict future sunspot activity from
historical data collected over the past three centuries. To avoid overfitting, the
termination of the learning procedure is controlled by the so-called stopped
training method.
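The momentum term mentioned above can be sketched as a two-line update rule: each weight keeps a velocity that accumulates a fraction of its previous change. The parameter names (eta, alpha) are illustrative, and the test below applies the rule to a toy quadratic rather than the full network.

```c
/* Sketch of the momentum update used in backpropagation training:
   velocity = alpha * velocity - eta * gradient;  weight += velocity.
   Parameter names are illustrative, not taken from Kutza's code. */
void momentum_step(double *w, double *vel, double grad, double eta, double alpha)
{
    *vel = alpha * (*vel) - eta * grad;
    *w  += *vel;
}
```

The velocity smooths successive gradient steps, which helps the network cross flat regions and damps oscillation in steep ones.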

3. Hopfield Model Autoassociative Memory

The Hopfield model is used as an
autoassociative memory to store and recall a set of bitmap images. Images
are stored by calculating a corresponding weight matrix. Thereafter, starting from an
arbitrary configuration, the memory will settle on exactly the stored image that is
nearest to the starting configuration in terms of Hamming distance. Thus, given an incomplete
or corrupted version of a stored image, the network is able to recall the
corresponding original image.
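The storage and recall steps described above can be sketched as Hebbian outer-product learning (with a zero diagonal) followed by repeated sign updates until the state stops changing. The size HOP_N and the function names are assumptions for this illustration, not Kutza's actual code.

```c
/* Sketch of a Hopfield autoassociative memory over +/-1 states; the
   size HOP_N and the function names are illustrative. */
#define HOP_N 8

/* Hebbian storage: W[i][j] = sum over patterns of p_i * p_j, zero diagonal. */
void hopfield_store(double W[HOP_N][HOP_N], const int pat[][HOP_N], int n_pat)
{
    for (int i = 0; i < HOP_N; i++)
        for (int j = 0; j < HOP_N; j++) {
            W[i][j] = 0.0;
            if (i != j)
                for (int p = 0; p < n_pat; p++)
                    W[i][j] += pat[p][i] * pat[p][j];
        }
}

/* Update units until none changes; the state settles in an energy minimum. */
void hopfield_recall(const double W[HOP_N][HOP_N], int s[HOP_N])
{
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int i = 0; i < HOP_N; i++) {
            double h = 0.0;
            for (int j = 0; j < HOP_N; j++)
                h += W[i][j] * s[j];
            int ns = (h >= 0.0) ? 1 : -1;
            if (ns != s[i]) { s[i] = ns; changed = 1; }
        }
    }
}
```

Starting from a corrupted pattern, each update can only lower the network's energy, so the state slides into the nearest stored attractor.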

4. Bidirectional Associative Memory Heteroassociative Memory

The bidirectional associative memory can be viewed as a generalization
of the Hopfield model, to allow for a heteroassociative memory to be implemented.
In this case, the association is between names and corresponding phone
numbers. After coding the set of exemplars, the network, when presented with a
name, is able to recall the corresponding phone number and vice versa. The memory even
shows a limited degree of fault tolerance in case of corrupted
input patterns.
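The heteroassociative pairing described above can be sketched with a single correlation matrix W built as the sum of outer products of the paired patterns; recall in one direction is a sign-thresholded matrix product. The sizes BAM_X/BAM_Y and names are assumptions for this example, and real name/phone-number data would first be coded as bipolar vectors.

```c
/* Sketch of a bidirectional associative memory: W = sum_p x_p y_p^T.
   Sizes and names are illustrative, not from Kutza's source. */
#define BAM_X 6
#define BAM_Y 4

void bam_store(int W[BAM_X][BAM_Y], const int x[][BAM_X],
               const int y[][BAM_Y], int n)
{
    for (int i = 0; i < BAM_X; i++)
        for (int j = 0; j < BAM_Y; j++) {
            W[i][j] = 0;
            for (int p = 0; p < n; p++)
                W[i][j] += x[p][i] * y[p][j];
        }
}

/* Forward recall, x -> y:  y_j = sign(sum_i W[i][j] * x_i).
   The reverse direction would use W transposed in the same way. */
void bam_recall_y(const int W[BAM_X][BAM_Y], const int x[BAM_X], int y[BAM_Y])
{
    for (int j = 0; j < BAM_Y; j++) {
        int h = 0;
        for (int i = 0; i < BAM_X; i++)
            h += W[i][j] * x[i];
        y[j] = (h >= 0) ? 1 : -1;
    }
}
```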

5. Boltzmann Machine Optimization

The Boltzmann machine is a stochastic version of
the Hopfield model, whose network dynamics incorporate a random component
corresponding to a given finite temperature. Starting at a high temperature and
gradually cooling down, while allowing the network to reach equilibrium at each step, chances are
good that the network will settle into a global minimum of the corresponding energy
function. This process is called simulated annealing. The network is then
used to solve a well-known optimization problem: the weight matrix is chosen such that the
global minimum of the energy function corresponds to a solution of a particular instance
of the traveling salesman problem.

6. Counterpropagation Network Vision

The counterpropagation network is a competitive
network, designed to function as a self-programming lookup table
with the additional ability to interpolate between entries. The
application is to determine the angular rotation of a rocket-shaped object, images of
which are presented to the network as a bitmap pattern. The performance of the network
is a little limited due to the low resolution of the bitmap.
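The lookup-table behaviour of a trained counterpropagation network can be sketched as follows: the competitive (Kohonen) layer picks the stored key nearest the input, and the output (Grossberg) layer emits the value attached to that winner. This is a heavy simplification that omits training and interpolation between entries; the table contents, sizes, and names are assumptions for the example.

```c
/* Sketch of the winner-take-all lookup performed by a trained
   counterpropagation network; sizes and contents are illustrative. */
#define CPN_DIM  2
#define CPN_ROWS 3

/* Kohonen layer: index of the stored key nearest the input (squared distance). */
int cpn_winner(const double keys[CPN_ROWS][CPN_DIM], const double x[CPN_DIM])
{
    int win = 0;
    double best = -1.0;
    for (int r = 0; r < CPN_ROWS; r++) {
        double d = 0.0;
        for (int i = 0; i < CPN_DIM; i++) {
            double diff = keys[r][i] - x[i];
            d += diff * diff;
        }
        if (best < 0.0 || d < best) { best = d; win = r; }
    }
    return win;
}

/* Grossberg layer: emit the value associated with the winning unit. */
double cpn_lookup(const double keys[CPN_ROWS][CPN_DIM],
                  const double vals[CPN_ROWS], const double x[CPN_DIM])
{
    return vals[cpn_winner(keys, x)];
}
```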

7. Self-Organizing Map Control

The self-organizing map (SOM) is a competitive
network with the ability to form topology-preserving mappings
between its input and output spaces. In
this program the network learns to balance a pole by applying forces at the base of the
pole. The behavior of the pole is simulated by numerically integrating the differential
equations for its law of motion using Euler's method. The task of the network is to establish
a mapping between the state variables of the pole and the optimal force to keep it
balanced. This is done using a reinforcement learning approach:
For any given state of the pole, the network tries a slight variation of the mapped force.
If the new force results in better control, the map is modified, using the pole's current
state variables and the new force as a training vector.
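The numerical integration mentioned above can be sketched with a single Euler step, y(t + dt) ≈ y(t) + dt · f(t, y). The pole's actual equations of motion are more involved; the test below demonstrates the method on the simple equation dy/dt = -y, whose exact solution is e^(-t).

```c
/* One step of Euler's method: advance y by dt along the slope dydt.
   The pole simulation applies this to each of its state variables. */
double euler_step(double y, double dydt, double dt)
{
    return y + dt * dydt;
}
```

Euler's method is only first-order accurate, so the simulation must use a small dt to keep the simulated pole faithful to the true dynamics.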

8. Adaptive Resonance Theory Brain Modeling

This program is mainly a demonstration of the basic
features of the adaptive resonance theory network, namely the ability to
plastically adapt when presented with new input patterns while remaining stable at
previously seen input patterns.
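The stability/plasticity behaviour described above hinges on the ART-1 vigilance test, which can be sketched for binary patterns: a category accepts an input when |input AND weights| / |input| meets the vigilance threshold, shrinking its weights to the intersection on acceptance; otherwise the search moves on and a fresh category is eventually recruited. The sizes and names below are assumptions for this illustration.

```c
/* Sketch of the ART-1 vigilance test on binary patterns; the dimension
   ART_DIM and the function names are illustrative. */
#define ART_DIM 4

static int art_norm(const int v[ART_DIM])
{
    int n = 0;
    for (int i = 0; i < ART_DIM; i++) n += v[i];
    return n;
}

/* Returns 1 and updates w in place if the pattern resonates, else 0
   (a reset, after which the search would try another category). */
int art_try_category(int w[ART_DIM], const int in[ART_DIM], double vigilance)
{
    int match[ART_DIM];
    for (int i = 0; i < ART_DIM; i++)
        match[i] = w[i] & in[i];
    if ((double)art_norm(match) < vigilance * art_norm(in))
        return 0;                    /* mismatch: reject, keep w intact */
    for (int i = 0; i < ART_DIM; i++)
        w[i] = match[i];             /* resonance: learn the intersection */
    return 1;
}
```

Because a mismatched input leaves the category's weights untouched, old categories stay stable while novel inputs are still learned plastically in fresh categories.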

Zip archive of Source Code for 1–8
Created on 31 Jul 1998. Last revised on 31 Jul 1998.
Tralvex Yeap