Learning to Locomote
As robots evolve they are becoming increasingly "smart," but they still cannot learn the way humans learn. For example, humans have evolved to move by walking or running, yet we can also learn to move by other means (e.g., swimming, skateboarding, riding bikes). I don't know of any current robot that can learn to move by a means different from what it was designed for.
A robot might be able to learn to swim if it could map how well it moves through the water for every position it can be in and every action it can take from each of those positions. The problem with building this mapping is that there are far too many possible positions (or states) and far too many actions available in each one. The robot would never have enough swimming lessons to build an accurate map. In machine learning this is known as the "Curse of Dimensionality".
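To get a feel for why this blows up, here is a minimal sketch (the function name and the specific robot numbers are illustrative, not from the paper) of how the size of a tabular state-action map grows with the number of state dimensions:

```python
# Illustrative only: count the entries needed to store a value for
# every (state, action) pair when each state dimension is discretized.

def table_size(num_dims, bins_per_dim, num_actions):
    """Entries in a full state-action table: (bins^dims) * actions."""
    return (bins_per_dim ** num_dims) * num_actions

# A 2-dimensional state with a coarse 10-bin grid is manageable...
print(table_size(num_dims=2, bins_per_dim=10, num_actions=5))   # 500

# ...but a 10-dimensional state (e.g., joint angles plus velocities)
# explodes to tens of billions of entries at the same resolution.
print(table_size(num_dims=10, bins_per_dim=10, num_actions=5))  # 50000000000
```

Every extra state dimension multiplies the table by another factor of `bins_per_dim`, which is why naive exhaustive exploration is hopeless for a swimming robot.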
To mitigate the curse of dimensionality and allow robots to learn how to move on their own, we have proposed a novel learning algorithm that learns actions based on boundary conditions rather than system states. The details of this work will be available soon through conference and journal publications. Some preliminary results can be seen in the video below.
Rowland O’Flaherty and Magnus Egerstedt. Learning to Locomote: Action Sequences and Switching Boundaries. 9th IEEE International Conference on Automation Science and Engineering, August 2013. Best paper finalist.