Neural Network Applications

 


1.0 Introduction



An Artificial Neural Network is a network of many very simple processors ("units"), each possibly having a (small amount of) local memory. The units are connected by unidirectional communication channels ("connections"), which carry numeric (as opposed to symbolic) data. The units operate only on their local data and on the inputs they receive via the connections.
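Here is a minimal sketch of the unit described above (the weights, inputs and sigmoid choice are illustrative assumptions, not taken from any particular network): each unit combines the numeric data arriving on its connections with its own local weights and squashes the result.

    # Minimal sketch of one "unit": it operates only on its local weights/bias and
    # on the numeric inputs arriving over its incoming connections.
    import numpy as np

    def unit_output(inputs, weights, bias):
        """Weighted sum of incoming signals followed by a sigmoid squashing function."""
        activation = np.dot(weights, inputs) + bias
        return 1.0 / (1.0 + np.exp(-activation))

    x = np.array([0.5, -1.0, 2.0])     # data arriving on three connections
    w = np.array([0.1, 0.4, -0.3])     # connection weights (the unit's local memory)
    print(unit_output(x, w, bias=0.2))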

The design motivation is what distinguishes neural networks from other mathematical techniques: a neural network is a processing device, either an algorithm or actual hardware, whose design was motivated by the design and functioning of the human brain and its components.

There are many different types of Neural Networks, each with strengths suited to particular applications. The abilities of different networks can be related to their structure, dynamics and learning methods.

Neural Networks offer improved performance over conventional technologies in areas that include: Machine Vision, Robust Pattern Detection, Signal Filtering, Virtual Reality, Data Segmentation, Data Compression, Data Mining, Text Mining, Artificial Life, Adaptive Control, Optimisation and Scheduling, Complex Mapping and more.



2.0 Applications

There are abundant materials, tutorials, references and disparate lists of demos on the net. This work attempts to compile a list of applications and demos, specifically those that come with video clips.

The applications featured here are listed in Sections 2.1 to 2.16.

PS: For those who are interested only in source code for Neural Networks, see the Software page.





2.1 CoEvolution of Neural Networks for Control of Pursuit & Evasion
Dave Cliff & Geoffrey Miller, University of Sussex
The following MPEG movie sequences illustrate behaviour generated by dynamical recurrent neural network controllers co-evolved for pursuit and evasion capabilities. From an initial population of random network designs, successful designs in each generation are selected for reproduction with recombination, mutation, and gene duplication. Selection is based on measures of how well each controller performs in a number of pursuit-evasion contests. In each contest a pursuer controller and an evader controller are pitted against each other, controlling simple "visually guided" 2-dimensional autonomous virtual agents. Both the pursuer and the evader have limited amounts of energy, which is used up in movement, so they have to evolve to move economically. Each contest results in a time-series of position and orientation data for the two agents.

These time-series are then fed into a custom 3-D movie generator. It is important to note that, although the chase behaviors are genuine data, the 3D structures, surface physics, and shading are all purely for illustrative effect.
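The generational loop behind this co-evolution can be sketched as follows. This is only a schematic reconstruction under my own assumptions: each genome is reduced to a plain parameter vector, the contest() function is a stand-in fitness measure, and recombination and gene duplication are omitted; it is not the authors' simulator.

    # Schematic co-evolutionary loop: two populations, scored against each other,
    # with the better half of each selected and mutated every generation.
    import numpy as np
    rng = np.random.default_rng(0)

    def contest(pursuer, evader):
        """Placeholder pursuit-evasion contest returning (pursuer_score, evader_score)."""
        d = np.linalg.norm(pursuer - evader)
        return 1.0 / (1.0 + d), d / (1.0 + d)

    def evolve(pop, scores):
        """Keep the better half, refill with mutated copies (recombination omitted)."""
        order = np.argsort(scores)[::-1]
        parents = pop[order[: len(pop) // 2]]
        children = parents + 0.1 * rng.standard_normal(parents.shape)   # mutation
        return np.vstack([parents, children])

    pursuers = rng.standard_normal((20, 8))   # generation 0: random "controller" genomes
    evaders = rng.standard_normal((20, 8))
    for generation in range(200):
        p_scores = np.zeros(len(pursuers))
        e_scores = np.zeros(len(evaders))
        for i, p in enumerate(pursuers):      # every pursuer meets every evader
            for j, e in enumerate(evaders):
                sp, se = contest(p, e)
                p_scores[i] += sp
                e_scores[j] += se
        pursuers = evolve(pursuers, p_scores)
        evaders = evolve(evaders, e_scores)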


1. P0 vs E0 (2.33Mb Color Mpeg). Best pursuer from generation 0 versus best evader from generation 0. Generation 0 is the initial random generation, so the behaviors shown here are produced by randomly generated recurrent dynamical neural network architectures. As can be seen, the pursuer is not very good at pursuing, and the evader is not very good at evading.

2. P200 vs E200 (2.31Mb Color Mpeg). Best pursuer from generation 200 versus best evader from generation 200. Pursuer chases evader, but soon runs out of energy, allowing the evader to escape.

3. P999 vs E999 (1.53Mb Color Mpeg). Best pursuer from generation 999 versus best evader from generation 999. Pursuer chases evader, but uses up all its energy just before the evader runs out of energy.

4. P999 vs E200 (1.34Mb Color Mpeg). Best pursuer from generation 999 versus best evader from generation 200. After a couple of close shaves, the pursuer finally catches the evader.




 

2.2 Learning the Distribution of Object Trajectories for Event Recognition
Neil Johnson & David Hogg, University of Leeds
This research work is about the modelling of object behaviours using detailed, learnt statistical models. The techniques being developed will allow models of characteristic object behaviours to be learnt from the continuous observation of long image sequences. It is hoped that these models of characteristic behaviours will have a number of uses, particularly in automated surveillance and event recognition, allowing the surveillance problem to be approached from a lower level, without the need for high-level scene/behavioural knowledge. Other possible uses include the random generation of realistic looking object behaviour for use in Virtual Reality (see Radiosity for Virtual Reality Systems at Section 2.3), and long-term prediction of object behaviours to aid occlusion reasoning in object tracking.

 

1. The model is learnt in an unsupervised manner by tracking objects over long image sequences, and is based on a combination of a neural network implementing Vector Quantisation and a type of neuron with short-term memory capabilities (a toy sketch of this combination follows after item 2).

Learning mode (8 MB, MPEG)

2. Models of the trajectories of pedestrians have been generated and used to assess the typicality of new trajectories (allowing the identification of 'incidents of interest' within the scene), predict future object trajectories, and randomly generate new trajectories.

Predict mode (8.6 MB, MPEG)
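The combination mentioned in item 1 can be sketched roughly as follows; the codebook size, learning rate and leak factor are arbitrary assumptions of mine, not values from the Leeds system.

    # Toy sketch: online Vector Quantisation of trajectory observations, with a
    # "leaky" memory trace that summarises the recent past of the trajectory.
    import numpy as np
    rng = np.random.default_rng(1)

    prototypes = rng.uniform(size=(16, 4))   # codebook over (x, y, dx, dy) observations
    memory = np.zeros(16)                    # leaky activation, one value per prototype
    LR, LEAK = 0.05, 0.9

    def observe(flow_vector):
        """Quantise one observation and update the short-term memory trace."""
        global memory
        winner = int(np.argmin(np.linalg.norm(prototypes - flow_vector, axis=1)))
        prototypes[winner] += LR * (flow_vector - prototypes[winner])   # move winner closer
        memory = LEAK * memory                                          # decay old evidence
        memory[winner] += 1.0                                           # accumulate new evidence
        return winner, memory.copy()

    for t in range(100):                     # feed a synthetic trajectory
        observe(rng.uniform(size=4))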




 

2.3 Radiosity for Virtual Reality Systems (ROVER)
Tralvex Yeap & Graham Birtwistle, University of Leeds
The synthesis of actual and computer-generated photo-realistic images has been the aim of artists and graphic designers for many decades. Some of the most realistic images (see Graphics Gallery - simulated steel mill) were generated using radiosity techniques. Unlike ray tracing, radiosity models the actual interaction between the lights and the environment. In photo-realistic Virtual Reality (VR) environments, the need for quick feedback based on user actions is crucial. It is generally recognised that traditional implementations of radiosity are computationally very expensive and therefore not feasible for use in VR systems, where practical data sets are of huge complexity. In the original thesis, we introduce two new methods and several hybrid techniques for using radiosity in VR applications.
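To see why classical radiosity is so expensive, recall that the radiosity B_i of every patch depends on every other patch through the form factors F_ij, as B_i = E_i + rho_i * sum_j F_ij * B_j. The sketch below (with random stand-in form factors, not real geometry) iterates this system directly; the cost grows with the square of the number of patches, which is what rules out naive use in interactive VR.

    # Illustrative Jacobi-style "gathering" solution of the radiosity equation.
    # The form factors here are random placeholders, not computed from a scene.
    import numpy as np
    rng = np.random.default_rng(2)

    n = 200                                   # number of surface patches (tiny by VR standards)
    F = rng.uniform(size=(n, n))
    F /= F.sum(axis=1, keepdims=True)         # fake form factors; each row sums to 1
    rho = np.full(n, 0.5)                     # diffuse reflectances
    E = np.zeros(n)
    E[0] = 10.0                               # a single emitting patch (the light source)

    B = E.copy()
    for _ in range(50):                       # gather light until the solution settles
        B = E + rho * (F @ B)
    print(B[:5])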

Flyby, walkthrough and virtual-space examples are introduced first, followed by a showcase of one of the two novel methods, which was proposed using Neural Network technology (download the PDF thesis, 123 pages / 2.6 MB).

 

Introduction to Flyby, Walkthrough and Virtual Space

Flyby (1.1 MB, MPEG)
3D Walkthrough (6.6 MB, AVI)
Virtual Space (163 KB, QTVR)

(A) ROVER Learning from Examples

Sequence 1 (935 KB, QuickTime)
Sequence 5 (2 MB, QuickTime)
Sequence 8 (2.2 MB, QuickTime)


(B) ROVER Modelling

[Figure: ROVER modelling]


(C) ROVER Prediction

[Figure: ROVER prediction]




 

2.4 Autonomous Walker & Swimming Eel
Anders Lansner, Studies of Artificial Neural Systems
(A) The research in this area involves combining biology, mechanical engineering and information technology in order to develop the techniques necessary to build a dynamically stable legged vehicle controlled by a neural network. This would incorporate command signals, sensory feedback and reflex circuitry in order to produce the desired movement.

Walker (484 KB, MPEG)


(B) Simulation of the swimming lamprey (eel-like sea creature), driven by a neural network.

Swimming Lamprey (261 KB, MPEG)




 

2.5 Robocup: Robot World Cup
Neo Say Poh & Tralvex Yeap, Japan-Singapore AI Centre / Kent Ridge Digital Labs
The RoboCup Competition pits robots (real and virtual) against each other in a simulated soccer tournament. The aim of the RoboCup competition is to foster an interdisciplinary approach to robotics and agent-based AI by presenting a domain that requires large-scale cooperation and coordination in a dynamic, noisy, complex environment.

RoboCup has three different leagues to date. The Small-Size and Middle-Size Leagues involve physical robots; the Simulation League is for virtual, synthetic teams. This work focuses on building softbots for the Simulation League.

Machine Learning for RoboCup involves:

  1. Training a player to decide whether (a) to dribble the ball, (b) to pass it to a team-mate, or (c) to shoot at the net (a toy sketch of such a decision network follows below).
  2. Training the goalkeeper to anticipate how the ball is going to be kicked by the opponents. Complexities arise when one opponent decides to pass the ball to another player instead of attempting to score.
  3. Evolving a co-operative, and perhaps unpredictable, team.

Common AI methods used are variants of Neural Networks and Genetic Algorithms.
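As an illustration of item 1 above, the decision can be framed as a small feedforward network that maps a player's sensed state to a score per action. Everything below (the input features, layer sizes and random weights) is a hypothetical sketch, not the KRDL softbot code, and the training step that the NN/GA methods would supply is omitted.

    # Toy action-selection network: 5 state features in, 3 action scores out.
    import numpy as np
    rng = np.random.default_rng(3)

    ACTIONS = ["dribble", "pass", "shoot"]
    W1 = rng.standard_normal((8, 5)) * 0.1    # inputs: ball distance, goal distance,
    b1 = np.zeros(8)                          # nearest opponent, nearest teammate, goal angle
    W2 = rng.standard_normal((3, 8)) * 0.1
    b2 = np.zeros(3)

    def choose_action(state):
        hidden = np.tanh(W1 @ state + b1)     # hidden layer
        scores = W2 @ hidden + b2             # one score per possible action
        return ACTIONS[int(np.argmax(scores))]

    print(choose_action(np.array([0.2, 0.7, 0.4, 0.3, 0.1])))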

 

KRDL Soccer Softbots (3.1 MB, AVI)




 

2.6 Using HMMs for Audio-to-Visual Conversion
Ram Rao, Russell Mersereau & Tsuhan Chen, Georgia Institute of Technology / AT&T Research

One emerging application which exploits the correlation between audio and video is speech-driven facial animation. The goal of speech-driven facial animation is to synthesize realistic video sequences from acoustic speech. Much of the previous research has implemented this audio-to-visual conversion with existing techniques such as vector quantization and neural networks. Here, the authors examine how the conversion can be accomplished with hidden Markov models (HMMs).
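The conversion idea can be sketched with a tiny discrete HMM: decode the audio into a state sequence, then replay the visual parameters associated with each state. The two states, three audio symbols and the parameter values below are invented for illustration and have nothing to do with the Georgia Tech models.

    # Toy audio-to-visual conversion: Viterbi-decode quantised audio symbols into HMM
    # states, then map each state to mean visual parameters (mouth height, mouth width).
    import numpy as np

    A = np.array([[0.8, 0.2],                 # state-transition probabilities (2 toy states)
                  [0.3, 0.7]])
    B = np.array([[0.6, 0.3, 0.1],            # emission probabilities over 3 audio symbols
                  [0.1, 0.3, 0.6]])
    pi = np.array([0.5, 0.5])
    state_to_visual = np.array([[2.0, 6.0],   # mean (height, width) associated with each state
                                [8.0, 4.0]])

    def viterbi(obs):
        T, N = len(obs), len(pi)
        delta = np.zeros((T, N))
        psi = np.zeros((T, N), dtype=int)
        delta[0] = pi * B[:, obs[0]]
        for t in range(1, T):
            for j in range(N):
                scores = delta[t - 1] * A[:, j]
                psi[t, j] = int(np.argmax(scores))
                delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
        path = [int(np.argmax(delta[-1]))]
        for t in range(T - 1, 0, -1):
            path.insert(0, int(psi[t, path[0]]))
        return path

    audio = [0, 0, 1, 2, 2, 2, 1]             # toy quantised acoustic sequence
    states = viterbi(audio)
    visual_track = state_to_visual[states]    # one (height, width) pair per audio frame
    print(visual_track)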

 

(A) Tracking Demo: The parabolic contour is fit to each frame of the video sequence using a modified deformable template algorithm. The height between the two contours and the width between the corners of the mouth can be extracted from the templates to form the visual parameter sets.

Tracking (314 KB, AVI)

(B) Morphing Demo: Another important piece of the speech-driven facial animation system is a visual synthesis module. Here we are attempting to synthesize the word "wow" from a single image. Each frame in the video sequence is morphed from the first frame shown below. The parameters used to morph these images were obtained by hand.

Morphing (164 KB, AVI)




 

2.7 Artificial Life: Galapagos
Anark Corporation
Galapagos is a fantastic and dangerous place where up and down have no meaning, where rivers of iridescent acid and high-energy laser mines are beautiful but deadly artifacts of some other time. Through spatially twisted puzzles and bewildering cyber-landscapes, the artificial creature called Mendel struggles to survive, and you must help him.

Mendel is a synthetic organism that can sense infrared radiation and tactile stimulus. His mind is an advanced adaptive controller featuring Non-stationary Entropic Reduction Mapping -- a new form of artificial life technology developed by Anark. He can learn like your dog, he can adapt to hostile environments like a cockroach, but he can't solve the puzzles that prevent his escape from Galapagos.

 

Galapagos features rich, 3D texture-mapped worlds, with continuous-motion graphics and 6 degrees of freedom. Dramatic camera movement and incredible lighting effects make your passage through Galapagos breathtaking. Explosions and other chilling effects will make you fear for your synthetic friend. Active panning 3D stereo sound will draw you into the exotic worlds of Galapagos.

Galapagos (2.5 MB, QuickTime)




 

2.8 Speechreading (Lipreading)
Günter Mamier, Marco Sommerau & Michael Vogt, Universität Stuttgart
As part of the research program Neuroinformatik, the IPVR is developing a neural speechreading system as a component of a user interface for a workstation. The three main parts of the system are a face tracker (Marco Sommerau), lip modelling and speech processing (Michael Vogt), and the development and application of SNNS for neural network training (Günter Mamier).

Automatic speechreading is based on a robust lip image analysis. In this approach, no special illumination or lip make-up is used. The analysis is based on true color video images. The system allows for realtime tracking and storage of the lip region and robust off-line lip model matching. The proposed model is based on cubic outline curves. A neural classifier detects visibility of teeth edges and other attributes. At this stage of the approach the edge between the closed lips is automatically modeled if applicable, based on a neural network's decision.

To achieve high flexibility during lip-model development, a model description language has been defined and implemented. The language allows the definition of edge models (in general) based on knots and edge functions. Inner model forces stabilize the overall model shape. User defined image processing functions may be applied along the model edges. These functions and the inner forces contribute to an overall energy function. Adaptation of the model is done by gradient descent or simulated annealing like algorithms. The figure shows one configuration of the lip model, consisting of an upper lip edge and a lower lip edge. The model edges are defined by Bezier-functions. Outer control knots stabilize the position of the corners of the mouth.
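The adaptation step can be pictured with the loose sketch below. The energy terms are stand-ins (not the IPVR image functions, and the Bezier edges are omitted entirely): knot positions are perturbed and kept whenever the combined energy decreases, in the spirit of the simulated-annealing-like schemes mentioned above.

    # Loose sketch of fitting lip-model knots by energy minimisation.
    import numpy as np
    rng = np.random.default_rng(4)

    knots = rng.uniform(0, 100, size=(8, 2))        # control knots of the lip edges

    def inner_energy(k):
        """Inner forces: penalise uneven spacing between neighbouring knots."""
        seg = np.diff(k, axis=0)
        return np.var(np.linalg.norm(seg, axis=1))

    def image_energy(k):
        """Stand-in for the user-defined image functions sampled along the edges."""
        target = np.array([50.0, 50.0])             # pretend the mouth is centred here
        return np.mean(np.linalg.norm(k - target, axis=1))

    def energy(k):
        return inner_energy(k) + 0.1 * image_energy(k)

    best = energy(knots)
    for step in range(500):                         # accept only improving perturbations
        candidate = knots + rng.normal(scale=1.0, size=knots.shape)
        e = energy(candidate)
        if e < best:
            knots, best = candidate, e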

 


Fig 2.8.1 The model interpreter enables a permanent measurement of model knot positions and color blends along model edges during adaptation to an utterance. The resulting parameters may be used for speech recognition tasks in further steps.

 

Lipread (190 KB, MPEG)




 

2.9 Detection and Tracking of Moving Targets
Defense Group Incorporated
The moving target detection and track methods here are "track before detect" methods. They correlate sensor data versus time and location, based on the nature of actual tracks. The track statistics are "learned" based on artificial neural network (ANN) training with prior real or simulated data. Effects of different clutter backgrounds are partially compensated based on space-time-adaptive processing of the sensor inputs, and further compensated based on the ANN training. Specific processing structures are adapted to the target track statistics and sensor characteristics of interest. Fusion of data over multiple wavelengths and sensors is also supported.

Compared to conventional fixed matched filter techniques, these methods have been shown to reduce false alarm rates by up to a factor of 1000 based on simulated SBIRS data for very weak ICBM targets against cloud and nuclear backgrounds, with photon, quantization, and thermal noise, and sensor jitter included. Examples of the backgrounds, and processing results, are given below.

The methods are designed to overcome the weaknesses of other advanced track-before-detect methods, such as 3+-D (space, time, etc.) matched filtering, dynamic programming (DP), and multi-hypothesis tracking (MHT). Loosely speaking, 3+-D matched filtering requires too many filters in practice for long-term track correlation; DP cannot realistically exploit the non-Markovian nature of real tracks, and strong targets mask out weak targets; and MHT cannot support the low pre-detection thresholds required for very weak targets in high clutter. They have developed and tested versions of the above (and other) methods in their research, as well as Kalman-filter probabilistic data association (KF/PDA) methods, which they use for post-detection tracking.

Space-time-adaptive methods are used to deal with correlated, non-stationary, non-Gaussian clutter, followed by a multi-stage filter sequence and soft-thresholding units that combine current and prior sensor data, plus feed back of prior outputs, to estimate the probability of target presence. The details are optimized by adaptive "training" over very large data sets, and special methods are used to maximize the efficiency of this training.
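The flavour of combining current and prior frames through soft-thresholding units, with output feedback, is sketched below. This is a deliberately crude illustration: the clutter "whitening" is a mean subtraction, the weights are fixed by hand rather than trained, and nothing here reproduces the adaptive processing described above.

    # Crude track-before-detect flavour: accumulate evidence over frames, soft-threshold
    # it so random clutter dies out while a persistent weak target builds up.
    import numpy as np
    rng = np.random.default_rng(5)

    def soft_threshold(x, t):
        """Shrink values toward zero; weak clutter is suppressed, persistent targets survive."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    frames = rng.normal(size=(30, 64, 64))          # simulated noisy sensor frames
    frames[:, 32, 32] += 0.8                        # a weak, persistent point target

    state = np.zeros((64, 64))                      # fed-back estimate of target presence
    for frame in frames:
        whitened = frame - frame.mean()             # crude stand-in for clutter compensation
        state = soft_threshold(0.9 * state + 0.1 * whitened, t=0.05)

    print(np.unravel_index(np.argmax(state), state.shape))   # strongest detection location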

 

Figure 2.9: (a) Raw input backgrounds with weak targets included; (b) detected target sequence at the ANN processing output, post-detection tracking not included. Video Clip (274 KB, MPEG)




 

2.10 Real-time Target Identification for Security Applications
Stephen McKenna, Queen Mary & Westfield College
The system localises and tracks people's faces as they move through a scene. It integrates the following techniques:
  • Motion detection
  • Tracking people based upon motion
  • Tracking faces using an appearance model

Faces are tracked robustly by integrating motion and model-based tracking.
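One hypothetical way to integrate the two cues (this is my own illustration, not the Queen Mary & Westfield implementation or its appearance model) is to let motion propose candidate regions and let a simple template-based appearance score decide which candidate is the face:

    # Hypothetical cue integration: motion proposes candidates, appearance keeps the lock.
    import numpy as np

    def motion_candidates(prev, curr, thresh=25):
        """Pixels that changed between frames are candidate person/face locations."""
        return np.argwhere(np.abs(curr.astype(int) - prev.astype(int)) > thresh)

    def appearance_score(frame, corner, template):
        """Negative squared difference between a face template and the patch at `corner`."""
        h, w = template.shape
        y, x = corner
        patch = frame[y:y + h, x:x + w]
        if patch.shape != template.shape:
            return -np.inf
        return -np.mean((patch.astype(float) - template.astype(float)) ** 2)

    def track_face(prev, curr, template):
        """Pick the motion candidate whose surrounding patch best matches the face template."""
        candidates = motion_candidates(prev, curr)
        if len(candidates) == 0:
            return None                              # no motion: fall back to appearance alone
        scores = [appearance_score(curr, tuple(c), template) for c in candidates]
        return tuple(candidates[int(np.argmax(scores))])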

 

(A) Tracking in low resolution and poor lighting conditions

Jon (133 KB, MPEG)

(B) Tracking two people simultaneously: lock is maintained on the faces despite unreliable motion-based body tracking.

Double Tracking (664 KB, MPEG)




 

2.11 Facial Animation
David Forsey, University of British Columbia
Facial animations created using hierarchical B-splines as the underlying surface representation. Neural networks could be used to learn the variations in facial expression across animated sequences.

The (mask) model was created in SoftImage, and is an early prototype for the character "Mouse" in the YTV/ABC television series "ReBoot" (they do not use hierarchical splines for ReBoot!). The original standard bicubic B-spline surface was imported into the "Dragon" editor and a hierarchy automatically constructed. The surface was attached to a jaw to allow it to open and close the mouth. Groups of control vertices were then moved around to create various facial expressions. Three of these expressions were chosen as key shapes, the spline surface was exported back to SoftImage, and the key shapes were interpolated to create the final animation.

Mask (193 KB, MPEG)
Haida (209 KB, MPEG)




 

2.12 Behavioral Animation and Evolution of Behavior
Craig Reynolds, Silicon Graphics
This is a classic experiment (showcased at SIGGRAPH '95) on the flocking of "boids", which convincingly bridged the gap between artificial life and computer animation.

Each boid has direct access to the whole scene's geometric description, but reacts only to flockmates within a certain small radius of itself. The basic flocking model consists of three simple steering behaviors:

1. Separation: steer to avoid crowding local flockmates.
2. Alignment: steer towards the average heading of local flockmates.
3. Cohesion: steer to move toward the average position of local flockmates.

In addition, the more elaborate behavioral model included predictive obstacle avoidance and goal seeking. Obstacle avoidance allowed the boids to fly through simulated environments while dodging static objects. For applications in computer animation, a low priority goal seeking behavior caused the flock to follow a scripted path.
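The three rules listed above are simple enough to write down directly. The sketch below is a generic rendering with made-up rule weights, assuming the caller has already restricted `neighbours` to the flockmates inside the boid's small perception radius; it is not Reynolds' original code.

    # One boid's steering update from the three basic flocking rules.
    import numpy as np

    def steer(position, velocity, neighbours):
        """neighbours: list of (position, velocity) pairs of nearby flockmates."""
        if not neighbours:
            return np.zeros(2)
        pos = np.array([p for p, _ in neighbours])
        vel = np.array([v for _, v in neighbours])
        separation = np.sum(position - pos, axis=0) / len(neighbours)   # avoid crowding
        alignment = vel.mean(axis=0) - velocity                         # match average heading
        cohesion = pos.mean(axis=0) - position                          # move toward the centre
        return 1.5 * separation + 1.0 * alignment + 0.8 * cohesion      # weighted combination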

 

Siggraph Video (10 MB, QuickTime)
Boids Applet (625 KB, AVI)




 

2.13 A Three Layer Feedforward Neural Network
Feng Yutao, Duke University
A three layer feedforward neural network with two input nodes and one output node is trained with backpropagation using some sample points inside a circle in the 2D plane. The evolution of decision regions formed during the training are shown in the following MPEG movies.
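The experiment is simple enough to reconstruct in a few lines. The sketch below follows the description above (two inputs, one output, backpropagation on points labelled by a circle), but the hidden-layer size, learning rate and sampling are my own assumptions rather than the original settings.

    # 2-8-1 feedforward network trained with batch backpropagation to classify
    # points as inside (1) or outside (0) a circle in the 2D plane.
    import numpy as np
    rng = np.random.default_rng(7)

    X = rng.uniform(-1, 1, size=(500, 2))                 # sample points in the plane
    y = (np.linalg.norm(X, axis=1) < 0.6).astype(float)   # 1 inside the circle, 0 outside

    W1 = rng.standard_normal((2, 8)) * 0.5; b1 = np.zeros(8)
    W2 = rng.standard_normal((8, 1)) * 0.5; b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for epoch in range(2000):
        H = sigmoid(X @ W1 + b1)                          # hidden layer
        out = sigmoid(H @ W2 + b2).ravel()                # single output node
        err = out - y
        d_out = (err * out * (1 - out))[:, None]          # output-layer delta
        d_hid = (d_out @ W2.T) * H * (1 - H)              # hidden-layer delta
        W2 -= 0.5 * H.T @ d_out / len(X); b2 -= 0.5 * d_out.mean(axis=0)
        W1 -= 0.5 * X.T @ d_hid / len(X); b1 -= 0.5 * d_hid.mean(axis=0)

    print("training accuracy:", np.mean((out > 0.5) == (y > 0.5)))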

Sequence 3 (237 KB, MPEG)
Sequence 10 (264 KB, MPEG)




 

2.14 Artificial Life for Graphics, Animation, Multimedia, and Virtual Reality: Siggraph '95 Showcase
University of Toronto
Some graphics researchers have begun to explore a new frontier--a world of objects of enormously greater complexity than is typically accessible through physical modeling alone--objects that are alive. The modeling and simulation of living systems for computer graphics resonates with the burgeoning field of scientific inquiry called Artificial Life. Conceptually, artificial life transcends the traditional boundaries of computer science and biological science. The natural synergy between computer graphics and artificial life can be potentially beneficial to both disciplines. As some of the demos here demonstrate, potential is becoming fulfillment.

The demos demonstrate and elucidate new models that realistically emulate a broad variety of living things--both plants and animals--from lower animals all the way up the evolutionary ladder to humans. Typically, these models inhabit virtual worlds in which they are subject to physical laws. Consequently, they often make use of physics-based modeling techniques. More significantly, however, they must also simulate many of the natural processes that uniquely characterize living systems--such as birth and death, growth, natural selection, evolution, perception, locomotion, manipulation, adaptive behavior, intelligence, and learning. The challenge is to develop sophisticated graphics models that are self-creating, self-evolving, self-controlling, and/or self-animating by simulating the natural mechanisms fundamental to life.

 

A.Dog (14 MB, QuickTime)
Evolved Virtual Creatures (10 MB, MPEG)
Sensor-Based Autonomous Creatures (14 MB, QuickTime)
A.Fish (15 MB, QuickTime)




2.15 Creatures: The World's Most Advanced Artificial Life!
Cyberlife
Creatures is billed as the most entertaining computer game you'll ever play, even though it offers nothing to shoot, no puzzles to solve and no difficult controls to master. And yet it is mesmerising entertainment.

You have to raise, teach, breed and love computer pets that are really alive, so alive that if they are not taken care of, they will die. Creatures features the most advanced, genuine Artificial Life software ever developed in a commercial product, technology that has captured the imaginations of scientists world-wide. This is a look into the future, where new species of life emerge from ordinary home and office PCs.

Creatures (760 KB, AVI)




2.16 Framsticks Artificial Life
By Maciej Komosinski and Szymon Ulatowski (FAL Website)
Framsticks is a three-dimensional life-simulation project. Both the physical structure of creatures and their control systems are evolved. Evolutionary algorithms are used, with selection, crossovers and mutations. A finite element method is used for the simulation. Both spontaneous and directed evolution are possible.

"Antelope" attacks "Spider"; the broken "Spider" becomes an energy/food source. (MPEG movie, 778 KB, 450 frames at 320x240, about 18 seconds; a higher-quality MPEG, 1.71 MB, is also available.)


KU-Fram (800 KB, MPEG)




 

3.0 Conclusion

3.1 Past and Present

The development of true Neural Networks is a fairly recent event, and it has been met with success. Two of the many different systems that have been developed are the basic feedforward network and the Hopfield net (sketched below).
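As a reminder of how the second of these works, here is a small Hopfield-net sketch (the patterns and update count are illustrative only): patterns are stored with a Hebbian rule and recalled by repeated asynchronous updates.

    # Tiny Hopfield net: Hebbian storage of two patterns, asynchronous recall.
    import numpy as np
    rng = np.random.default_rng(8)

    patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                         [1, 1, 1, 1, -1, -1, -1, -1]])
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)                          # Hebbian weights, no self-connections

    state = patterns[0].copy()
    state[:2] *= -1                                 # corrupt two bits of the stored pattern
    for _ in range(50):                             # asynchronous updates until it settles
        i = rng.integers(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1

    print(np.array_equal(state, patterns[0]))       # typically recovers the original pattern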

In addition to the applications featured here, Neural Networks have been applied in many other areas.

3.2 The Future

The future of Neural Networks is wide open, and may lead to many answers and/or questions. Is it possible to create a conscious machine? What rights do these computers have? How does the human mind work? What does it mean to be human?




 

References

  • CoEvolution of Neural Networks for Control of Pursuit & Evasion
    http://www.cogs.susx.ac.uk/users/davec/pe.html

  • Learning the Distribution of Object Trajectories for Event Recognition
    http://www.scs.leeds.ac.uk/neilj/research.html

  • Radiosity for Virtual Reality Systems
    http://tralvex.com/rover

  • Autonomous Walker & Swimming Eel
    http://best.nada.kth.se:8080/sans/jml.cgi/res_demo.jml

  • Robocup: Robot World Cup
    http://tralvex.com/robodemo

  • Using HMMs for Audio-to-Visual Conversion
    http://www.ece.gatech.edu/users/rr/papers/mmsp/paper.html

  • Artificial Life: Galapagos
    http://www.anark.com/Galapagos/info.shtml

  • Speechreading (Lipreading)
    http://www.informatik.uni-stuttgart.de/ipvr/bv/projekte/Neuroinformatik/model.html

  • Detection and Tracking of Moving Targets
    http://www.ca.defgrp.com/detect.html

  • Real-time Target Identification for Security Applications
    http://www.dcs.qmw.ac.uk/research/vision/track_people_face.html

  • Facial Animation
    http://www.cs.ubc.ca/nest/imager/contributions/forsey/dragon/anim.html

  • Behavioral Animation and Evolution of Behavior
    http://hmt.com/cwr/boids.html

  • A Three Layer Feedforward Neural Network
    http://www.ee.duke.edu/~yf/bptr.html

  • Artificial Life for Graphics, Animation, Multimedia, and Virtual Reality: Siggraph '95 Showcase
    http://www.cs.utoronto.ca/~dt/siggraph96-course/

  • Creatures: The World's Most Advanced Artificial Life!
    http://creatures.wikia.com/wiki/Cyberlife

  • Framsticks Artificial Life
    http://www.frams.poznan.pl/




 

Useful Links

  • Introduction to Neural Networks (PDF lecture notes) - University of Minnesota
  • The Data Analysis BriefBook
  • StatSoft Statistics & Neural Networks (including a 4.7 MB e-textbook) and STATISTICA Neural Networks
  • Yahoo NN
  • Statistics Resource Portal
  • HTML Text Book
  • HyperStat Online: Software
  • NN Using GA
  • N Computing App Forum
  • NN Around the World
  • A tutorial on HMM
  • Hidden Markov Models for Interactive Learning of Hand Gestures
  • Tutorial on HMMs (LU)
  • CMU AI: Neural Networks
  • NN @ Yahoo
  • Imagination Engines Papers
  • AI Bibliographies, NN Bibliography, COGANN (GA+NN)



 


Created on 3 Mar 1998. Last revised on 22 Dec 2006
Tralvex Yeap