Revealing principles of autonomous thermal soaring in windy conditions using vulture-inspired deep reinforcement-learning

Yoav Flato, Roi Harel, Aviv Tamar, Ran Nathan, Tsevi Beatus*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Thermal soaring, a technique used by birds and gliders to exploit updrafts of hot air, is an appealing model-problem for studying motion control and how it is learned by animals and engineered autonomous systems. Thermal soaring has rich dynamics and nontrivial constraints, yet it uses few control parameters and is becoming experimentally accessible. Following recent developments in applying reinforcement learning methods for training deep neural-network (deep-RL) models to soar autonomously, both in simulation and in real gliders, here we develop a simulation-based deep-RL system to study the learning process of thermal soaring. We find that this learning process has distinct bottlenecks; we define a new efficiency metric and use it to characterize learning robustness; we compare the learned policy with data from soaring vultures; and we find that the neurons of the trained network divide into functional clusters that evolve during learning. These results pose thermal soaring as a rich yet tractable model-problem for the learning of motion control.
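To make the abstract's setup concrete, the following is a minimal, self-contained sketch (not the authors' code) of a toy thermal-soaring task: a glider adjusts its bank angle to stay inside a Gaussian updraft, and a simple policy is trained with the cross-entropy method. The thermal model, glider dynamics, feature choices, and training algorithm here are illustrative assumptions, chosen only to show the structure of such a simulation-based RL system.

```python
# Illustrative sketch (assumptions, not the paper's implementation): a glider at
# constant airspeed chooses bank-angle changes to maximize altitude gained in a
# Gaussian thermal; a linear softmax policy is trained with the cross-entropy method.
import numpy as np

G, V, DT = 9.81, 12.0, 0.5            # gravity [m/s^2], airspeed [m/s], time step [s]
SINK, W0, SIGMA = 1.0, 4.0, 40.0      # glider sink rate [m/s], thermal strength [m/s], thermal radius [m]
ACTIONS = np.array([-5.0, 0.0, 5.0])  # bank-angle change per step [deg]

def episode(theta, steps=200, rng=None):
    """Run one episode with policy parameters theta; return total altitude gained."""
    if rng is None:
        rng = np.random.default_rng()
    x, y = rng.uniform(-60.0, 60.0, size=2)      # start somewhere near the thermal core
    psi, bank = rng.uniform(0.0, 2 * np.pi), 20.0
    prev_climb, gained = 0.0, 0.0
    W = theta.reshape(3, 4)                      # 3 actions x 4 features
    for _ in range(steps):
        r2 = x * x + y * y
        climb = W0 * np.exp(-r2 / (2 * SIGMA**2)) - SINK   # net vertical speed
        feats = np.array([1.0, climb, climb - prev_climb, bank / 45.0])
        logits = W @ feats
        p = np.exp(logits - logits.max())
        p /= p.sum()
        a = rng.choice(3, p=p)                   # sample bank-angle adjustment
        bank = np.clip(bank + ACTIONS[a], 5.0, 45.0)
        psi += G * np.tan(np.radians(bank)) / V * DT       # coordinated-turn rate
        x += V * np.cos(psi) * DT
        y += V * np.sin(psi) * DT
        gained += climb * DT
        prev_climb = climb
    return gained

def train_cem(iters=30, pop=64, elite=8, seed=0):
    """Cross-entropy method: sample policies, keep the best, refit the Gaussian."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(12), np.ones(12)
    for it in range(iters):
        thetas = mu + sigma * rng.standard_normal((pop, 12))
        returns = np.array([episode(t, rng=rng) for t in thetas])
        top = thetas[np.argsort(returns)[-elite:]]
        mu, sigma = top.mean(axis=0), top.std(axis=0) + 0.05
        print(f"iter {it:2d}  mean altitude gain: {returns.mean():7.1f} m")
    return mu

if __name__ == "__main__":
    train_cem()
```

Running the script prints the population-mean altitude gain per iteration; a rising curve indicates the policy is learning to circle within the updraft. The paper itself studies a richer deep-RL setting (windy conditions, neural-network policies, comparison with vulture data), which this toy example does not attempt to reproduce.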

Original language: English
Article number: 4942
Journal: Nature Communications
Volume: 15
Issue number: 1
DOIs
State: Published - Dec 2024

Bibliographical note

Publisher Copyright:
© The Author(s) 2024.
