Research Summary Table

| Research Theme | Office | RoboCup Rescue | AUV | UAV | Humanoid | AIBO/RoboCup |
| Navigation | Malcolm, Cameron | Malcolm | Alan | Brad | ??? | |
| Planning | Malcolm, Maurice | Malcolm | | | | |
| Multi-Agent Systems | ??? | | | | | |
| Human-Robot Interaction | Claude, Waleed, Claire | Waleed | | Waleed | | Claire |
| Learning | Claude | Waleed | | | | Solly |
| Manipulation | ??? | | | | | |
| Vision | Waleed | Alan, KMK | Brad | ??? | | Claude |
| Locomotion | | | | | | Malcolm |
| Entertainment | | | | | | |

Research Projects «all details»

Office «details»

A brief description of the office demonstrator/project.

A description of the office demonstrator/project.

Personnel:

References:

RoboCup Rescue «details»

A brief description of the RoboCup Rescue project.

A description of the RoboCup Rescue project.

Personnel:

References:

Autonomous Underwater Vehicle (AUV) «details»

A brief description of the AUV project.

A description of the AUV project.

Personnel:

References:

Unmanned Aerial Vehicle (UAV) «details»

The UAV projects concern the control and visual navigation of two indoor μUAVs.

Behavioural Cloning for μUAV Control: DraganFlyer/DraganEye

Most helicopter platforms are inherently unstable and are notoriously difficult to control. Shifting wind currents and power fluctuations act to upset the balance of a helicopter, forcing constant corrections from the pilot/controller. The traditional engineering approach to this problem is to measure the changing state (velocities, rotations, etc.) of the platform and correct it using what is known as a PID (Proportional-Integral-Derivative) controller. In fact, such a controller was designed and implemented by a pair of students in 2004. An alternative solution is to observe how human operators act in different circumstances and to copy them. This technique is known as behavioural cloning.
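The PID idea above can be sketched in a few lines. The gains and the toy one-axis hover model below are illustrative assumptions only, not the controller the students built in 2004.

```python
# Minimal sketch of a PID (Proportional-Integral-Derivative) controller.
# The gains and the one-axis "hover" model are illustrative assumptions.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0       # accumulates past error (I term)
        self.prev_error = 0.0     # remembered for the derivative (D term)

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy demonstration: hold a 1-D "helicopter" at 1 m altitude while gravity
# constantly acts to upset it, forcing continual corrections.
pid = PID(kp=2.0, ki=0.5, kd=1.0, dt=0.02)
altitude, velocity = 0.0, 0.0
for _ in range(2000):                      # 40 s of simulated flight
    thrust = pid.update(setpoint=1.0, measurement=altitude)
    velocity += (thrust - 9.81) * 0.02     # net acceleration * dt
    altitude += velocity * 0.02
```

In practice the three gains would be tuned for the real platform; note that it is the integral term that eventually cancels the constant pull of gravity.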

Because the DraganFlyer presents such a difficult control problem, investigating behavioural cloning on this platform represents a challenge for the state of the art. The DraganFlyer has now been fitted with a camera and video transmitter (and redubbed the DraganEye), and the intention is to perform behavioural cloning using data from the camera (perhaps in the form of a feature-point tracking algorithm) together with the gyroscopes and accelerometers currently used for the PID controller.
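In outline, behavioural cloning treats logged (sensor state, pilot action) pairs as training data for a controller that imitates the pilot. The synthetic proportional "pilot" and one-dimensional tilt state below are illustrative assumptions; the real data would come from the camera, gyroscopes and accelerometers.

```python
# Sketch of behavioural cloning: record what a human pilot does in each
# observed state, then act as the pilot did in the most similar state.
# The "pilot" here is a synthetic proportional rule over a single tilt
# value; a real clone would learn from multi-channel flight logs.

def log_demonstrations(pilot, states):
    """Record (state, action) pairs while the pilot flies."""
    return [(s, pilot(s)) for s in states]

def clone(demos):
    """A 1-nearest-neighbour clone of the demonstrated behaviour."""
    def policy(state):
        nearest_state, action = min(demos, key=lambda d: abs(d[0] - state))
        return action
    return policy

def pilot(tilt):
    """Stand-in human pilot: push the stick against the tilt."""
    return -0.8 * tilt

demos = log_demonstrations(pilot, [i / 10.0 for i in range(-20, 21)])
cloned = clone(demos)
```

A nearest-neighbour lookup is only the simplest possible learner; decision trees or regression models are the more usual choices in the behavioural cloning literature.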

In terms of demonstrations, the goal of this project is to have a "behaviourally cloned" controller for the DraganEye that is capable of:

  • maintaining the platform in a steady hover,
  • moving the platform between defined targets,
  • implementing joystick-like control for the platform (?),
  • others?

Insect-Inspired Visual Navigation: Proxflyer Mosquito

Nature provides an abundance of examples of autonomous systems capable of rapid, robust and reliable navigation of unpredictable terrains. Insects are one such case: despite incredibly limited neural resources, they regularly perform amazing feats of navigation. Consider foraging ants that can always find the direct path back to their nest, honeybees that navigate back to their hive and then tell other honeybees where a food source is, flies that can land on a moving surface in windy conditions, or dragonflies that hunt on the wing.

Study into how insects perform these feats has led to the development of many inspired-by-nature approaches to algorithm design. One such technique is based on optic flow. Roughly, optic flow is what is observed when three-dimensional motion is projected onto a two-dimensional surface. For example, consider mounting a video camera on the front of a car and then driving it towards a brick wall. The optic flow is how each part of the image moves from one frame to the next. (In this case every part of the image moves away from the centre.) This pattern of expansion can be used to deduce that the camera is heading toward an obstacle (or that the obstacle is heading toward the camera). Moreover, the particular rates of expansion can be used to deduce the amount of time before the camera and wall collide. This project aims to investigate vision-based obstacle avoidance using optic flow on both ground (Pioneer, Argo) and aerial (Mosquito) platforms.
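The time-to-contact deduction described above can be sketched as follows. The key relation is that, under pure forward motion, a feature at radial distance r from the focus of expansion expands at rate dr/dt = r / tau, where tau is the time remaining before collision. The camera constants and feature positions below are synthetic illustrations; a real system would obtain them from a feature tracker running on video frames.

```python
# Sketch of time-to-contact from optic flow: a feature at radial distance
# r from the focus of expansion expands at rate dr/dt = r / tau, so
# tau = r / (dr/dt). All numbers below are synthetic illustrations.

def time_to_contact(r_now, r_prev, dt):
    """Estimate seconds until collision from one feature's expansion."""
    dr_dt = (r_now - r_prev) / dt
    return r_now / dr_dt

# Synthetic approach to a wall: distance Z shrinks at constant speed and
# a feature's image radius grows as r = c / Z (pinhole camera model).
speed, Z0, c, dt = 2.0, 10.0, 50.0, 0.1    # m/s, m, focal constant, s
r_prev = c / Z0
r_now = c / (Z0 - speed * dt)
tau = time_to_contact(r_now, r_prev, dt)   # roughly Z0 / speed = 5 s
```

Note that tau is recovered without knowing the camera constant, the speed, or the actual distance to the wall, which is what makes the cue attractive for cheap aerial platforms.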

Weighing just 100 grams, the Proxflyer Mosquito is an indoor μUAV whose unique design produces a stable aerial platform (unlike traditional helicopters, or the DraganFlyer) that is also capable of hovering (unlike a winged platform). As such, it is an ideal platform for testing insect-inspired navigation strategies, allowing the research focus to be on navigation rather than control. The Mosquito has been fitted with a wide-angle (190°) lens and video transmitter so that images can be processed off-board and a control signal relayed back to the vehicle via the standard radio controller. Other platforms for this project include the ground-based, indoor Pioneer robots and, hopefully at some point, the outdoor Argo.

Current and Completed Projects:

  • PID controller for DraganFlyer.
  • DraganEye: Vision processing for the DraganFlyer.
  • Real-time software implementation of optic flow.
  • Real-time hardware (FPGA) implementation of optic flow.

Proposed Projects:

  • Behavioural cloning for DraganFlyer control.
  • Fusing laser rangefinder and optic flow information (AGV only).
  • Optic-flow based control strategies for μUAV (Mosquito).
  • Optic-flow based control strategies for AGV (Pioneer, Argo).

Techniques:

  • behavioural cloning
  • optic flow
  • feature-point tracking

Personnel:

References:

Humanoid «details»

A brief description of the Humanoid project.

Personnel:

References:

AIBO/RoboCup «details»

The Sony AIBO robots are well known for their role in the RoboCup competition, but they also make a versatile platform for a variety of research problems. Some possible projects are described below. All of these projects could be considered for entry into the "Open Challenge" at the next RoboCup.

Walking on uneven terrain

Can an AIBO walk uphill? Down stairs? Over bumpy terrain? The AIBOs have a large collection of sensors (vision, inclination, distance, foot fall, etc.), and yet the current walking behaviours operate without any feedback. Can these sensors be used to allow the AIBO to traverse more difficult terrain? Could an AIBO learn to do this by its own experimentation?

Human recognition

Can an AIBO learn to recognise people and interact with them? Could you get it to look you in the eye when you talk to it? To learn to associate your voice with your face? To come when you call and follow you around?

Speech interface

Can an AIBO recognise its master's voice? Can it tell if you are pleased or angry with it based on your tone of voice and react accordingly? Can it be made to obey verbal commands, like "sit", "stay", "fetch", "heel"?

Cooperation

Sure they can play soccer, but can AIBOs play netball? You can't dribble the ball in netball, so passing is key. This involves solving a lot of interesting cooperation problems. When to pass? Who to pass to? Where to position your receivers? How to block?

Personnel:

References:

Additional Projects

Additional, non-core projects are listed separately.

Research Themes «all details»

Planning «details»

Planning is control at the level of sequences of operations defining a complete task; this is sometimes called "high-level" or even "intelligent" control. The challenge in planning is to evolve truly intelligent behaviour from lower-level control functions. Research in planning will focus on: a) the development of algorithmic models for planning under uncertainty, especially methods based on hybrid systems and modal logic; b) the development of methods for learning or adapting behaviour through interaction with the environment; and c) the development of cognitive languages for high-level agent control and for human-machine interaction. Overall, the aim is to develop robust and adaptive high-level control methods.

Personnel:

Multi-Agent Systems «details»

Multi-agent systems research addresses how teams of robots or software agents can coordinate their behaviour to accomplish tasks beyond the reach of any single agent, raising problems of communication, task allocation and cooperative planning.

Personnel:

Human-Robot Interaction (HRI) «details»

As robots become more commonplace, there will be an increasing number of interactions between humans and robots.

Personnel:

Learning «details»

Learning is the process of adapting behaviour in response to events in the environment. Without learning, a system is limited by the ability of its designer to foresee all situations that might occur. The aim of the learning research programme is to massively increase the range of operation and robustness of autonomous systems.

Personnel:

Manipulation «details»

A brief description of the Manipulation theme.

Personnel:

Vision «details»

Vision is an information channel with good sampling characteristics (a significant amount of the environment can be sampled from a substantial distance with a cheap sensor) at a high rate (images can be taken tens of times per second). The downside is that computer interpretation of images remains problematic. The vision research programme aims to increase autonomous systems' ability to exploit vision for tasks such as navigation, object recognition and interfacing with humans.

Personnel:

Locomotion «details»

How does a robot get around? In most cases we choose the simple solution: wheels, but wheels are limited to appropriately engineered terrains (flat floors, roads, etc). When we go "off-road" wheels often get stuck, or slip, or damage the terrain. Nature, on the other hand, demonstrates a wide variety of different kinds of locomotion: crawling, slithering, walking, running, jumping, pronking (that's what antelopes sometimes do) and more. Can we build robots that do these things?

Personnel:

Entertainment «details»

There is a growing market for entertainment robots both as toys or "pets" in the home and as animatronic actors in television and film. The entertainment field presents many interesting challenges for robot research. How do we make robots that are expressive, which can convey emotion, which not only take commands and perform tasks but which can actually relate to people and entertain?

Personnel:

Personnel

CAS Staff

Prof Claude Sammut

Home Page

Dr Alan Blair

Home Page

Dr Maurice Pagnucco

Home Page

Dr Waleed Kadous

Home Page

Dr Malcolm Ryan

Home Page

Dr Brad Tonkes

Home Page

CAS Postgraduate Students

Cameron Stone

Home Page

Supervisor: Prof Claude Sammut

Claire D'Este

Home Page

Supervisor: Prof Claude Sammut

Solly Brown

Home Page

Supervisor: Prof Claude Sammut

Robot Resources

| Resource | Class | Quantity | Description | Location | Projects |
| ActivMedia Pioneer 3 | AGV | 2 | The Pioneers are indoor robots fitted with sonar rings, a scanning laser rangefinder, wireless networking, a Pan/Tilt/Zoom (PTZ) camera and an on-board Pentium-3 class computer. (ActivMedia::Pioneer 3) | CSE | Office |
| ActivMedia PeopleBot | AGV | 1 | The PeopleBot extends the Pioneer with an LCD screen and PTZ camera at heights designed for human interaction. It also has a speaker and microphone for vocal interaction. (ActivMedia::PeopleBot) | CSE | Office |
| Argo | AGV | 2 | The Argo is an eight-wheeled, skid-steered(?) amphibious vehicle large enough to seat two people. One Argo has been retro-fitted for autonomous control, while the controls on the other have been instrumented to record driver behaviour. (ACFR::Argo) | ACFR | - |
| Truxar(?) | AGV | 1 | An RC monster truck retro-fitted with stereo vision and an on-board computer for autonomous control(?). This project has been discontinued. | CSE | Rescue |
| RobHaz DT-3 | AGV | 1 | An autonomous ground vehicle designed for hazardous environments, intended for use in the RoboCup Rescue competition. To arrive December 2004. (RobHaz::DT3, Korean) | CSE | Rescue |
| Sony AIBO ERS-7 | Legged | 2 | The ERS-7 is the latest generation of Sony's robot dogs used in RoboCup competition. (Sony::AIBO) | CSE | RoboCup/AIBO |
| DraganFlyer IV | UAV | 1 | The DraganFlyers are a range of quad-rotor, differential-powered helicopters with rotors positioned on the ends of two cross-arms. The school's DraganFlyer, now dubbed the DraganEye, has been fitted with a camera and wireless transmitter. (RCToys::DraganFlyer IV) | CSE | UAV |
| Proxflyer Mosquito | UAV | 1 | A mechanically-stable helicopter-like μUAV designed for indoor use, fitted with an extra-wide-angle (190°) lens camera and wireless transmitter. (Proxflyer::Mosquito) | ACFR | UAV |
| Blimp | UAV | 1 | A helium-filled RC blimp. This platform has proven too susceptible to air-conditioning currents for serious research use. | CSE | UAV |
| Oberon | AUV | 1 | The ACFR's autonomous submarine, designed for monitoring coral and starfish populations on Queensland's Great Barrier Reef. (ACFR::Oberon) | ACFR | AUV |
| Cycloid III | Humanoid | 2 | A small humanoid robot. To arrive December 2004. (Tribotix::Cycloid) | CSE | Humanoid |
| Arms/Hands | Humanoid | 2 | To arrive December 2004. | CSE | Humanoid |
| Snake Robots | Snake | 8(?) | (SnakeRobots.com) | CSE | - |

Glossary

ACFR
The Australian Centre for Field Robotics, one of CAS's member institutions.
AGV
Autonomous Ground Vehicle.
AIBO
Sony's robotic dog, see http://www.sony.com/aibo.
AUV
Autonomous Underwater Vehicle; robotic submarine.
CAS
The Centre for Autonomous Systems, an ARC-funded research group with members from UNSW's School of Computer Science and Engineering and School of Mechanical Engineering, UTS's School of Mechatronics, and The Australian Centre for Field Robotics at the University of Sydney.
PTZ
Pan-Tilt-Zoom - describing a camera control that can be panned (moved side-to-side), tilted (moved up-and-down) and zoomed (focus moved in and out).
RoboCup
RoboCup is an international competition held annually in which teams of robots play one another in a modified game of soccer. RoboCup comprises several leagues, including a legged league contested by teams of Sony's AIBOs. RoboCup has now expanded to non-soccer competitions, most notably a Rescue competition. See http://www.robocup.org.
RC
Radio-Controlled.
UAV
Unmanned Aerial Vehicle, such as a helicopter, blimp, aeroplane or ornithopter.
μUAV
A micro-UAV; small, typically indoor unmanned aerial vehicle.