My research activities focus on environment-perception algorithms for robots. My contributions in this domain concern the development of signal and image processing methods to (i) detect and recognize objects or obstacles, (ii) localize a robot in its environment, and (iii) map its surroundings. All of these algorithms have been implemented on real robots (terrestrial or underwater) operating in autonomous navigation conditions. The following sections give more detail on my research contributions.

Light-field vision for industrial robots and autonomous vehicles

A light-field camera, also known as a plenoptic camera, captures information about the light field emanating from a scene, that is to say, the direction and intensity of the light rays coming from the scene. This contrasts with conventional cameras, which record only light intensity. Light-field cameras come in different forms: multi-camera arrays, or conventional cameras equipped with a micro-lens array (MLA) placed between the sensor and the main lens. The idea behind MLA-based light-field cameras is to acquire several points of view of the observed scene (several images) on a single camera sensor. The number of these images, called sub-images, varies with the camera model. Using these cameras, we develop machine vision and perception algorithms for industrial robots and autonomous vehicles. In the first case, we exploit the images to automatically detect defects and to reconstruct reflective objects in 3D using the concept of deflectometry. In the second case, we develop more precise monocular visual odometry methods to localize autonomous vehicles in conditions where GPS is unavailable.

Electric sense perception for underwater robots

This work was part of the European H2020 FET project subCULTron (2015-2020). This project consisted in developing a swarm of underwater robots capable of collaborating and collecting environmental data for several weeks: for example, temperature, turbidity, salinity, direction and strength of the current, or information on fauna and flora. The particularity of this robot swarm was that it had to be deployed in shallow and turbid waters such as the Venice lagoon. In this specific context, vision and even sonar are very difficult to exploit: the former because the lighting conditions and turbidity degrade image quality, the latter because the shallow and cluttered environment makes acoustic data impossible to interpret. To enable the robots to perceive their environment, we drew inspiration from the electric sense used by some species of fish living in murky, shallow waters. Perception through the electric sense is very short-range. It consists in generating a weak dipolar electric field in the environment and then inferring information about potential obstacles from the detected perturbations. The information obtained on the environment is less rich than visual or acoustic information, but it allows fish to navigate while avoiding obstacles, to localize and recognize shapes, and finally, to communicate with their fellow fish. In this context, we were interested in two main problems: localization and pattern recognition, and reactive navigation of underwater robots.
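The perturbation principle can be sketched with a classic small-sphere approximation (in the spirit of Rasnow's model): the emitted dipole field polarizes a nearby object, which re-radiates as an induced dipole whose signature is measured at a receptor. The code below is a simplified, unit-free illustration of that idea, not the estimation algorithms developed in the project; the contrast factor `chi` is positive for conductors and negative for insulators.

```python
import numpy as np

def dipole_potential(p, src, r):
    """Potential at r of a point dipole with moment p located at src
    (medium constants absorbed into p for simplicity)."""
    d = r - src
    n = np.linalg.norm(d)
    return p @ d / n**3

def dipole_field(p, src, r):
    """Electric field at r of a point dipole p located at src."""
    d = r - src
    n = np.linalg.norm(d)
    return (3 * (p @ d) * d / n**2 - p) / n**3

def sphere_perturbation(p_emit, emitter, receptor, sphere_pos, radius, chi):
    """Perturbation of the measured potential caused by a small sphere.

    The local field polarizes the sphere, which re-radiates as an induced
    dipole p_ind = chi * radius**3 * E_local (small-sphere approximation)."""
    e_local = dipole_field(p_emit, emitter, sphere_pos)
    p_ind = chi * radius**3 * e_local
    return dipole_potential(p_ind, sphere_pos, receptor)
```

The model captures two properties that matter for perception: the perturbation decays very quickly with distance (hence the short range), and its sign flips between conductive and insulating objects, which is one cue for recognition.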

Multi-sensors SLAM on quadruped robots

This teamwork focused on improving the robustness and capabilities of the HyQ (Hydraulic Quadruped) robot, designed to navigate autonomously over rough terrain. HyQ is a prototype hydraulic robot weighing 80 kg. It is a complex machine whose operation requires expertise in many different fields: mechanics, electronics, control, trajectory computation, and perception. Initially, this robot was equipped only with proprioceptive sensors (encoders, force sensors, IMU). To improve its autonomy in terms of navigation, I worked on developing its cognitive capacities by integrating exteroceptive sensors (stereo camera, Kinect, Velodyne) and a pan-tilt unit. In particular, I worked on its simultaneous localization and mapping (SLAM) capabilities, developing real-time onboard localization and mapping algorithms that combine passive and active vision with the IMU. We then used these perception algorithms to develop new footstep-planning and trajectory-planning methods.
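As a toy illustration of why combining the IMU with exteroceptive estimates helps, the sketch below fuses gyro yaw-rate integration (smooth but drifting) with intermittent vision-based yaw estimates (noisy but drift-free) in a complementary filter. This is a didactic sketch of the sensor-fusion principle, not the actual HyQ state estimator; the parameter names and the scalar-yaw simplification are mine.

```python
def complementary_yaw(gyro_rates, visual_yaw, dt, alpha=0.98):
    """Fuse integrated gyro yaw rate with visual yaw estimates.

    gyro_rates : list of yaw rates [rad/s], one per time step
    visual_yaw : list of visual yaw estimates [rad], or None at steps
                 where the vision pipeline produced no pose
    dt         : time step [s]
    alpha      : weight given to the gyro prediction (0 < alpha < 1)
    """
    yaw = 0.0
    history = []
    for rate, vis in zip(gyro_rates, visual_yaw):
        yaw += rate * dt                # propagate with the IMU
        if vis is not None:            # correct with vision when available
            yaw = alpha * yaw + (1 - alpha) * vis
        history.append(yaw)
    return history
```

With vision corrections, a constant gyro bias produces a bounded error instead of an unbounded drift, which is exactly the property the exteroceptive sensors bring to the proprioceptive-only platform.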

Visual SLAM for mobile robots

The objective of this research work was to propose an efficient alternative to existing SLAM-based navigation methods (simultaneous localization and mapping), in order to offer a simple, adaptable localization and mapping system for light platforms without active sensors. Using image processing algorithms on data from passive monocular sensors, I first addressed the problems of loop-closure detection and topological mapping. I then developed a topo-metric SLAM approach by integrating metric data from odometers into the topological maps. Topo-metric mapping represents the environment as a graph of images whose metric positions can be estimated. Nevertheless, even though odometry provides metric data that allow richer mapping and more robust navigation, it also generates a cumulative error (drift) that must be compensated. This compensation was applied by relaxing the graph whenever the robot recognized a known location (loop closure). Loop-closure detection relied on bags of visual words and a Bayesian filter taking odometry data into account.
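The core of appearance-based loop-closure detection can be sketched as follows: each image is described by a histogram of quantized visual words, the current image is compared with past ones by cosine similarity, and a match above a threshold is declared a loop closure. This is a deliberately stripped-down illustration (raw counts instead of tf-idf weighting, a fixed threshold instead of the Bayesian filter used in the actual work); the function names and parameters are mine.

```python
import math
from collections import Counter

def bow_similarity(words_a, words_b):
    """Cosine similarity between two images described as lists of
    quantized visual-word ids (bag of visual words)."""
    a, b = Counter(words_a), Counter(words_b)
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def detect_loop_closure(current_words, past_images, threshold=0.6, skip_last=5):
    """Return the index of the most similar past image if its score
    exceeds `threshold`; None otherwise.  The `skip_last` most recent
    frames are ignored, as they are trivially similar to the current one."""
    candidates = past_images[:-skip_last] if skip_last else past_images
    if not candidates:
        return None
    scores = [bow_similarity(current_words, past) for past in candidates]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if scores[best] >= threshold else None
```

In the full system, the similarity scores feed a Bayesian filter whose prior also uses odometry, which suppresses spurious single-frame matches.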
From this method, two types of robot behavior were developed: mapping and surveillance. In the first case, the robot was remotely controlled and the algorithm produced a coherent topo-metric map of the environment. In the surveillance case, the robot navigated on a known, previously constructed topo-metric map, using odometry for navigation. The robot’s trajectory was corrected at each recognized image to compensate for odometry drift.
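The drift-compensation step can be illustrated with a minimal trajectory relaxation: when a loop closure reveals the accumulated error at a re-recognized node, that error is redistributed along the chain of poses. This linear redistribution is only a sketch of the principle; the actual work relaxes a full graph of constraints rather than a single chain.

```python
def relax_trajectory(poses, loop_index, loop_pose):
    """Distribute the drift observed at a loop closure along the chain.

    poses      : list of (x, y) odometric positions
    loop_index : index of the re-recognized node
    loop_pose  : (x, y) position that node is known to have

    The correction at node i grows linearly from zero at the start of
    the trajectory to the full error at loop_index."""
    ex = loop_pose[0] - poses[loop_index][0]
    ey = loop_pose[1] - poses[loop_index][1]
    relaxed = []
    for i, (x, y) in enumerate(poses):
        w = min(i / loop_index, 1.0) if loop_index else 1.0
        relaxed.append((x + w * ex, y + w * ey))
    return relaxed
```

Early poses, which accumulated little odometric error, are barely moved, while the pose at the loop closure is snapped back to the recognized location.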

In 2010 and 2011, I was also involved in the robotics competition CAROTTE (CArtographie par ROboT d’un TErritoire), whose goal was the autonomous exploration and mapping of an indoor environment. The robot used in this competition was equipped with a laser scanner, sonar, and two cameras. Within the team, I worked on integrating the different modules on the PACOM robotic platform, implementing the global behavior of the robot, and capturing the visual data.

Underwater image processing

In the underwater context, and unlike sonar, which remains the most widely used sensor for long-range detection and classification, the video camera is effective at short range during the approach, object recognition, and intervention phases. It has notable advantages such as high resolution, ease of interpretation, and low cost. Today, almost all scientific, industrial, and military underwater vehicles are equipped with one. These vehicles are currently mostly remotely operated, and automatic visual processing is rarely used. Such automatic processing is nevertheless an essential technology for the emerging development of autonomous underwater robots, which are in high demand today in the expanding markets related to security and the exploitation of maritime resources. This work aims to bring the necessary innovations and to promote the use of the video sensor in the underwater domain. The proposed study concerns the development of automatic object recognition in underwater video. The underwater scenes observed are classically simpler and more limited in depth of observation than urban scenes or building interiors. However, this context presents specific difficulties: turbidity, poor lighting conditions, and diminished colors. These make processing difficult and therefore require new robotic vision algorithms. I first developed an automatic enhancement method based on wavelet denoising, homomorphic filtering, and color correction. Then, from these pre-processed images, I developed a generic, prior-free object recognition algorithm based on statistical edge characterization and regular shape detection.
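The homomorphic filtering stage of such an enhancement pipeline can be sketched as follows: the image is modeled as illumination times reflectance, the log turns that product into a sum, and a high-emphasis filter in the Fourier domain compresses the slowly varying illumination while boosting detail. This is a generic textbook formulation of homomorphic filtering, assuming a Gaussian transfer function and illustrative parameter values, not the exact filter developed in this work.

```python
import numpy as np

def homomorphic_filter(img, gamma_low=0.5, gamma_high=1.5, cutoff=0.05):
    """Attenuate slowly varying illumination and boost reflectance detail.

    img        : 2-D float array with values in (0, 1]
    gamma_low  : gain applied to low frequencies (illumination)
    gamma_high : gain applied to high frequencies (detail)
    cutoff     : normalized radius of the Gaussian transfer function
    """
    log_img = np.log1p(img)                       # product -> sum
    spec = np.fft.fftshift(np.fft.fft2(log_img))
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    d2 = ((y - h / 2) / h) ** 2 + ((x - w / 2) / w) ** 2
    # High-emphasis transfer function: gamma_low at DC, gamma_high far away.
    H = gamma_low + (gamma_high - gamma_low) * (1 - np.exp(-d2 / (2 * cutoff**2)))
    out = np.fft.ifft2(np.fft.ifftshift(spec * H)).real
    return np.expm1(out)                          # back from log domain
```

In a complete pipeline this sits between wavelet denoising and color correction, since evening out the illumination first makes the subsequent edge statistics far more stable.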

a. Original image, b. Pre-processing, c. Edge detection, d. Edge selection and labeling.

This method was validated in situ in the framework of a collaboration with THALES Underwater Systems SAS (TOPVISION project: Operational Test of Underwater Video for the Identification of Harmful Objects). This project was part of the Techno-Vision program launched in 2005 by the French Ministry of Research and Ministry of Defense.

I also worked on a second object recognition method based on color, which took into account the deformation of color by the water medium. Initiated in the context of the international robotics competition SAUC-E (Student Autonomous Underwater Challenge – Europe), this method was validated during one of the challenge’s events, in which an autonomous robot had to detect objects, move towards them, and touch them.
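The water-induced color deformation is often approximated with a per-channel Beer-Lambert attenuation model: red light is absorbed much faster than green and blue, so an observed color can be compensated if the object distance is known. The sketch below illustrates that compensation principle; the coefficient values are purely indicative and the function is my illustration, not the method validated at SAUC-E.

```python
import numpy as np

# Indicative per-metre attenuation coefficients for clear water;
# red is absorbed much faster than green and blue (illustrative values).
ATTENUATION = np.array([0.60, 0.10, 0.07])   # R, G, B

def correct_water_color(rgb, distance, coeffs=ATTENUATION):
    """Undo per-channel exponential attenuation (Beer-Lambert model).

    rgb      : (..., 3) float array of observed colors in [0, 1]
    distance : camera-to-object distance in metres
    Returns the compensated colors, clipped back to [0, 1]."""
    restored = rgb * np.exp(coeffs * distance)
    return np.clip(restored, 0.0, 1.0)
```

A color-based recognizer can then either restore colors before matching, as above, or equivalently deform its reference colors by the same model before comparing them with the raw image.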