Researchers from the University of Bristol and the University of the West of England (UWE) have used artificial evolution to enable robots to automatically learn swarm behaviours that are understandable to humans.
The new advance has been published in Advanced Intelligent Systems.
Until now, artificial evolution has typically been run on a computer, which is external to the swarm, with the best strategy then copied to the robots. However, this approach is limiting as it requires external infrastructure and a laboratory setting.
By using a custom-made swarm of robots with high processing power embedded within the swarm itself, the Bristol team were able to discover which rules give rise to desired swarm behaviours. This, the team adds, could lead to robotic swarms that continuously and independently adapt in the wild to suit the environment and task at hand. Because the evolved controllers are understandable to humans, they can also be queried, explained and improved.
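The idea of evolving a human-readable controller onboard can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not the team's actual code: the candidate controller is a small rule table mapping a sensed situation to an action, mutation perturbs that table, and a toy fitness function stands in for an in-swarm trial.

```python
import random

# Hypothetical situations a robot might sense, and readable actions it
# might take. These names are illustrative assumptions, not the paper's.
SITUATIONS = ["neighbour_left", "neighbour_right", "no_neighbour"]
ACTIONS = ["turn_left", "turn_right", "go_straight"]


def random_controller(rng):
    """A controller is a human-readable mapping: situation -> action."""
    return {s: rng.choice(ACTIONS) for s in SITUATIONS}


def mutate(controller, rng, rate=0.3):
    """Randomly reassign some rules to explore nearby controllers."""
    child = dict(controller)
    for s in SITUATIONS:
        if rng.random() < rate:
            child[s] = rng.choice(ACTIONS)
    return child


def fitness(controller):
    """Toy stand-in for an onboard trial: reward turning towards a
    sensed neighbour and driving straight otherwise."""
    score = 0
    if controller["neighbour_left"] == "turn_left":
        score += 1
    if controller["neighbour_right"] == "turn_right":
        score += 1
    if controller["no_neighbour"] == "go_straight":
        score += 1
    return score


def evolve(generations=50, pop_size=20, seed=0):
    """Simple elitist evolutionary loop: keep the best half, refill
    the rest with mutated copies of the elite."""
    rng = random.Random(seed)
    population = [random_controller(rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 2]
        population = elite + [mutate(rng.choice(elite), rng) for _ in elite]
    return max(population, key=fitness)


best = evolve()
# The evolved result is a plain rule table a person can read, query
# and verify directly -- the transparency property discussed above.
print(best)
```

The point of the sketch is the last line: unlike an opaque neural controller, the evolved artefact is a set of rules that can be inspected and explained, which is what makes verification for real-world deployment feasible.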
Lead author Simon Jones, from the University of Bristol’s Robotics Lab, says: “Human-understandable controllers allow us to analyse and verify automatic designs, to ensure safety for deployment in real-world applications.”
Co-led by Dr Sabine Hauert, the engineers took advantage of recent advances in high-performance mobile computing to build a swarm of robots inspired by those in nature. The 'Teraflop Swarm' can run the computationally intensive automatic design process entirely within the swarm, freeing it from the constraint of offline resources. The swarm reaches a high level of performance within just 15 minutes, much faster than previous embodied evolution methods, and with no reliance on external infrastructure.
Dr Hauert, senior lecturer in Robotics at the Department of Engineering Mathematics and Bristol Robotics Laboratory (BRL), says: “This is the first step towards robot swarms that automatically discover suitable swarm strategies in the wild. The next step will be to get these robot swarms out of the lab and demonstrate our proposed approach in real-world applications.”
Professor Alan Winfield, BRL and Science Communication Unit, UWE, adds: "In many modern AI systems, especially those that employ Deep Learning, it is almost impossible to understand why the system made a particular decision. This lack of transparency can be a real problem if the system makes a bad decision and causes harm. An important advantage of the system described in this paper is that it is transparent: its decision-making process is understandable by humans."