Google’s artificial intelligence bot has taught itself how to run and jump, without any human help.
The computer program, from DeepMind, did not have the grace of Usain Bolt or anything – but it was still impressive.
The AI had not been given any information on how to move, and instead managed to figure it all out by itself.
A video showed an avatar created by the program navigating obstacles that had been placed in its way.
All it needed was an incentive to reach each point.
A report on the research, which was posted to the arXiv preprint server hosted by Cornell University Library, explained how the AI used a reinforcement learning paradigm.
This allowed it to perform ‘complex behaviours’ that were learned from ‘simple reward signals’.
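The core idea of reinforcement learning is that the agent is never told how to move; it only receives a reward when it does something useful, and learns from that. A minimal sketch of the idea (a toy illustration, not DeepMind's actual setup) is tabular Q-learning on a one-dimensional corridor, where the only reward signal is +1 for reaching the far end:

```python
import random

# Toy example: a 1-D corridor of N cells. The agent starts at cell 0 and the
# only reward is +1 for reaching the last cell -- a "simple reward signal".
N = 8
ACTIONS = [-1, +1]  # step left, step right

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N)]  # Q-value per (cell, action)
    for _ in range(episodes):
        s = 0
        while s != N - 1:
            # Explore randomly sometimes, or when both actions look equal.
            if rng.random() < epsilon or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = min(max(s + ACTIONS[a], 0), N - 1)
            r = 1.0 if s2 == N - 1 else 0.0
            # Standard Q-learning update toward reward + discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

def greedy_path(q):
    """Follow the learned policy greedily from the start cell."""
    s, path = 0, [0]
    for _ in range(2 * N):  # cap the step count in case the policy is poor
        if s == N - 1:
            break
        a = 0 if q[s][0] > q[s][1] else 1
        s = min(max(s + ACTIONS[a], 0), N - 1)
        path.append(s)
    return path

q = train()
path = greedy_path(q)
```

After training, the greedy policy walks straight to the goal even though the agent was never told that "right" is the correct direction. DeepMind's agents work on the same principle, only with far richer physics, bodies, and terrain.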
The researchers examined whether training amid obstacles and difficult terrain made it easier for the agent to learn to move.
The paper added: ‘Our experiments suggest that training on diverse terrain can indeed lead to the development of non-trivial locomotion skills such as jumping, crouching, and turning for which designing a sensible reward is not easy.’
‘We believe that training agents in richer environments and on a broader spectrum of tasks than is commonly done today is likely to improve the quality and robustness of the learned behaviours – and also the ease with which they can be learned.
‘In that sense, choosing a seemingly more complex environment may actually make learning easier.’