Research (under construction :) )

Introduction

One of the peculiarities about space is that you just can’t gather all the data you want. Nowadays it is relatively cheap (no one said easy) to grab a camera and a bunch of sensors and build a cool dataset. However, this doesn’t apply to space: sending a kilogram into low Earth orbit typically costs between $2,000 and $10,000. You then need to make sure that your payload survives the launch and the space environment, not to mention transmitting the data back to the ground. Things get even more complicated if you want to gather images of other spacecraft or go into deep space.

To avoid founding a new space company every time a new dataset is needed, the community has created amazing computer-based simulators that can render images of planets and spacecraft. But as we know, “there ain’t no such thing as a free lunch”, and algorithms trained on synthetic data don’t transfer very well to the domain of real images.

In my research I’m trying to deal with this issue of sim2real transfer for spaceborne vision-based navigation. One could be tempted to overcome the problem by designing an algorithm that is robust to any situation by construction, but this seems a bit optimistic. Another way of approaching the problem is test-time adaptation via self-supervision: first train the algorithm in a supervised manner on synthetic data, then use a self-supervised objective to adapt it to the new domain, where no labels are available.
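
The sketch below illustrates this two-stage idea in PyTorch, assuming a generic dense-prediction network; the tiny model, the losses, and the dummy tensors are purely illustrative and are not taken from the papers linked below. In particular, the edge-aware smoothness term is only a simple stand-in for a real self-supervised adaptation objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in network; a real depth or pose network would go here.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def supervised_loss(pred, target):
    # Stage 1: labels (e.g. ground-truth depth) are available in the synthetic domain.
    return F.l1_loss(pred, target)

def self_supervised_loss(pred, image):
    # Stage 2: a label-free objective computed from the test image alone.
    # An edge-aware smoothness term is used here purely as an example.
    d_dx = (pred[:, :, :, 1:] - pred[:, :, :, :-1]).abs()
    d_dy = (pred[:, :, 1:, :] - pred[:, :, :-1, :]).abs()
    i_dx = (image[:, :, :, 1:] - image[:, :, :, :-1]).abs().mean(1, keepdim=True)
    i_dy = (image[:, :, 1:, :] - image[:, :, :-1, :]).abs().mean(1, keepdim=True)
    return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stage 1: supervised pre-training on the synthetic (source) domain.
synthetic_image = torch.rand(1, 3, 64, 64)
synthetic_depth = torch.rand(1, 1, 64, 64)
for _ in range(10):
    loss = supervised_loss(model(synthetic_image), synthetic_depth)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Stage 2: test-time adaptation on the real (target) domain, no labels used.
real_image = torch.rand(1, 3, 64, 64)
for _ in range(10):
    loss = self_supervised_loss(model(real_image), real_image)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```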

Exploring self-supervision for monocular depth estimation and visual odometry in rover scenarios

In this part of the work we explored self-supervision for monocular depth estimation and visual odometry in rover scenarios.
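
Self-supervised depth and visual odometry methods are commonly built around a photometric view-synthesis loss: a predicted depth map and a relative camera pose are used to warp one frame into a neighbouring view, and the reconstruction error provides the training signal. The sketch below shows this core ingredient under a pinhole-camera assumption; the exact formulation used in the papers may differ, and all tensor names, shapes, and the dummy inputs are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift each pixel of a depth map (B,1,H,W) to a 3D point in the camera frame."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=depth.dtype),
                            torch.arange(w, dtype=depth.dtype), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)  # homogeneous pixels
    rays = K_inv @ pix                                                       # viewing rays (3, H*W)
    return depth.reshape(b, 1, -1) * rays.unsqueeze(0)                       # 3D points (B, 3, H*W)

def photometric_loss(target_img, source_img, depth, T_src_tgt, K, K_inv):
    """Warp `source_img` into the target view using depth and relative pose,
    then compare it with `target_img` (L1 photometric error)."""
    b, _, h, w = target_img.shape
    points = backproject(depth, K_inv)                 # points in the target camera frame
    R, t = T_src_tgt[:, :3, :3], T_src_tgt[:, :3, 3:]  # relative pose, target -> source
    points_src = R @ points + t                        # points in the source camera frame
    proj = K.unsqueeze(0) @ points_src                 # project with the intrinsics
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)     # perspective division
    # Convert pixel coordinates to the [-1, 1] grid expected by grid_sample.
    u = 2.0 * uv[:, 0] / (w - 1) - 1.0
    v = 2.0 * uv[:, 1] / (h - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).reshape(b, h, w, 2)
    warped = F.grid_sample(source_img, grid, align_corners=True, padding_mode="border")
    return (warped - target_img).abs().mean()

# Example with dummy tensors; in practice the depth comes from a depth network
# and the relative pose from a pose network or the rover's odometry.
K = torch.tensor([[100.0, 0.0, 32.0], [0.0, 100.0, 32.0], [0.0, 0.0, 1.0]])
tgt, src = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
depth = torch.rand(1, 1, 64, 64) + 0.5
T = torch.eye(4).unsqueeze(0)
loss = photometric_loss(tgt, src, depth, T, K, torch.linalg.inv(K))
```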

Find the paper here and here.

Exploring the domain-gap in spacecraft pose estimation