Alan Blair's Research Interests
Hierarchical Evolution and Computer-Generated Art
- HERCL: I have developed a novel genetic programming language called HERCL,
specifically designed for Hierarchical Evolutionary Computation,
and applied it to ciphers [13.B],
nonlinear control [14.B] and
classification tasks [15.B].
- Art: Jacob Soderlund, Darwin Vickers and I have used HERCL
to evolve string processing functions
and to produce evolutionary art.
Our most recent work involves
adversarial training between a HERCL generator
and a GAN-style LeNet critic.
The resulting artworks do not attempt to mimic the style of human artists,
but instead emerge from a tradeoff between
low algorithmic complexity and the need to fool the critic
into thinking they are real photographs.
Reinforcement Learning and Games
- Chess: In 2009 Joel Veness, David Silver, Will Uther and I
developed a novel algorithm for bootstrapping from game tree search,
and used it to demonstrate for the first time that
a neural network Chess player could be trained to Master Level,
entirely by self-play.
This technique has since been applied to several other games, including Duchess.
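The core of the bootstrapping idea can be sketched as follows, assuming a toy linear evaluator (the function name, data layout and learning rate are illustrative, not taken from the paper): after a depth-limited search, the evaluation function at each visited state is nudged toward the minimax value backed up through the tree.

```python
def update_evaluator(weights, visited, alpha=0.01):
    """Move a linear evaluator toward the search values of visited states.

    visited: list of (features, search_value) pairs collected during search,
    where features is a list of floats describing a state and search_value
    is the minimax value backed up from the subtree below that state.
    """
    for features, search_value in visited:
        estimate = sum(w * f for w, f in zip(weights, features))
        delta = search_value - estimate
        for i, f in enumerate(features):
            weights[i] += alpha * delta * f   # gradient step toward search value
    return weights
```

In the full algorithm this update is repeated over the search trees generated during many games of self-play.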
- Go: In 2008/9 I pioneered the use of parallel convolutional neural networks for the game of Go [B08], including tree search with separate networks for move selection and board evaluation [B09].
GPUs were not very powerful in those days, and my players were only on a par with older programs such as AmiGo from the 1980s.
There was considerable hostility to convolutional neural networks within the Go community at that time.
The techniques I developed include what would today be called SymNets and Pixel Recurrent Convolutional Networks.
- Backgammon: In 1997 Jordan Pollack and I trained a neural network to play Backgammon using a self-play evolution strategy
[98.PB], and applied the same
algorithm to Tron
and Simulated Hockey.
Our technique of only fractionally updating the weights of the champ in the direction of the mutant
has recently gained renewed interest from the RL community,
with massive parallelization and application to OpenAI Gym environments.
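The fractional champion update can be sketched in a few lines (the one-dimensional fitness function, parameter values and loop structure here are purely illustrative, standing in for actual game play between champ and mutant):

```python
import random

def evolve(champ, fitness, sigma=0.3, frac=0.1, generations=1000):
    """Hillclimbing self-play sketch: when a mutant beats the champ,
    move the champ only a fraction of the way toward the mutant,
    rather than replacing it outright."""
    for _ in range(generations):
        mutant = [w + random.gauss(0.0, sigma) for w in champ]
        if fitness(mutant) > fitness(champ):   # mutant wins the contest
            champ = [c + frac * (m - c) for c, m in zip(champ, mutant)]
    return champ
```

The fractional update smooths out the noise in any single contest: a lucky mutant shifts the champ only slightly, so the champ's weights approximate an average over many recent winners.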
Neuroevolution, Deep Learning and Language Processing
- (with Alexander Hadjiivanov) improved neuroevolution with complexity-based speciation [16.HB]
- (with Anthony Knittel) developed Abstract Deep Networks -
a combination of deep neural networks and learning classifier systems
- introduced Internal Symmetry Networks - convolutional neural networks
with a novel weight sharing scheme based on representations of symmetry groups
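As a toy illustration of symmetry-based weight sharing (not the actual Internal Symmetry Network construction, which works with group representations rather than simple invariant averaging), a square filter can be tied under the board symmetries by projecting it onto the subspace invariant under the dihedral group D4, i.e. averaging it over the eight symmetries of the square:

```python
def d4_symmetrize(filt):
    """Average a square filter (list of lists) over the 8 symmetries
    of the square: 4 rotations, each with and without a reflection."""
    def rot(m):                                  # rotate 90 degrees clockwise
        return [list(row) for row in zip(*m[::-1])]
    def flip(m):                                 # mirror left-right
        return [row[::-1] for row in m]
    variants = []
    m = filt
    for _ in range(4):
        variants.append(m)
        variants.append(flip(m))
        m = rot(m)
    n = len(filt)
    return [[sum(v[i][j] for v in variants) / 8.0 for j in range(n)]
            for i in range(n)]
```

A filter tied this way responds identically to a position and to any rotation or reflection of it, which is exactly the invariance the rules of Go possess.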
- (with Oliver Coleman and Jeff Clune) worked on evolving plastic neural networks for online learning
- (with Robin Harper)
explored Dynamically Defined Functions
and novel crossover operators
for Grammatical Evolution.
- (with Stephan Chalup) showed for the first time
that a simple recurrent neural network could be
trained to predict a context-sensitive language
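Assuming the standard benchmark from this literature, the context-sensitive language in question is a^n b^n c^n; a minimal generator and membership test make the task concrete:

```python
def anbncn(n):
    """Generate the n-th string of the context-sensitive language a^n b^n c^n."""
    return 'a' * n + 'b' * n + 'c' * n

def in_language(s):
    """Membership test: s must be n a's, then n b's, then n c's, for some n > 0."""
    n = s.count('a')
    return n > 0 and s == anbncn(n)
```

In the prediction version of the task, once the first 'b' arrives every remaining symbol is fully determined, so a network that predicts correctly must implicitly count the a's — which is what makes the language context-sensitive rather than regular or context-free.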
- developed a new method for analysing the behaviour of
dynamical recognizers trained to recognize or predict formal languages
- have also studied loanword formation
and evolution of communication
- (with Brad Tonkes) introduced a new paradigm for decentralized data fusion
based on exponentials of polynomials
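The appeal of that representation can be sketched in a few lines: if each node holds its belief as exp(p(x)) for a polynomial p, then fusing independent beliefs (multiplying the densities) reduces to adding coefficient vectors, with a Gaussian arising as the quadratic special case (the coefficient-list encoding below is illustrative):

```python
def fuse(p, q):
    """Fuse two log-densities given as polynomial coefficient lists
    (lowest degree first): since exp(p(x)) * exp(q(x)) = exp(p(x) + q(x)),
    fusion is just coefficient-wise addition."""
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0.0) + (q[i] if i < len(q) else 0.0)
            for i in range(n)]
```

Higher-degree polynomials allow each node to encode non-Gaussian (e.g. multimodal) beliefs while keeping the fusion step as cheap as in a Gaussian information filter.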
- (with Jason Thomas and Nick Barnes)
developed an efficient optimal trajectory planner for multiple mobile robots, using parametric cubic splines, which was deployed in the
F180 League of RoboCup
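A single parametric cubic segment of the kind such planners are built from can be sketched as a cubic Hermite curve; the interface below is illustrative, not the planner's actual API. The segment starts at point p0 with velocity v0 and ends at p1 with velocity v1 as the parameter t runs from 0 to 1:

```python
def hermite(p0, p1, v0, v1, t):
    """Evaluate a cubic Hermite segment at parameter t in [0, 1].
    Points and velocities are tuples of coordinates (e.g. 2-D)."""
    h00 = 2*t**3 - 3*t**2 + 1     # weight on start point
    h10 = t**3 - 2*t**2 + t       # weight on start velocity
    h01 = -2*t**3 + 3*t**2        # weight on end point
    h11 = t**3 - t**2             # weight on end velocity
    return tuple(h00*a + h10*c + h01*b + h11*d
                 for a, b, c, d in zip(p0, p1, v0, v1))
```

Matching positions and velocities at the shared endpoints of consecutive segments yields a smooth composite trajectory, which is what makes such splines attractive for fast-moving wheeled robots.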
Return to Alan Blair's home page