RESULTS: Change Detection
Detecting changes between images of the same scene taken at different times is of great interest for monitoring and understanding the environment. Change detection is widely used in terrestrial applications, but it requires highly accurate geometric and photometric registration, a constraint that has so far precluded its use in underwater imagery.

In this work, we propose a method that detects changes between underwater image sequences by using a 3D model of the scene, together with the camera positions, to remove the effect of the relief. The relief of the ocean floor and the camera positions and orientations (camera poses) of the first image sequence are estimated using robust computer vision methods that recover accurate camera poses and a large set of 3D points. This step requires high overlap among the images and a good calibration of the intrinsic parameters of the camera.
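The geometry recovery described above rests on triangulating matched image points once the camera poses are known. As a rough numpy-only illustration (a sketch of the standard linear DLT triangulation, not the actual pipeline used in this work), one matched point pair can be lifted to 3D as follows:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (intrinsics + pose).
    x1, x2 : the point's image coordinates in each view.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X (x * P[2]@X - P[0]@X = 0, and likewise for y).
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares null vector of A is the homogeneous 3D point.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy setup: identity camera, and the same camera shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
# Observations of the 3D point (0, 0, 5) in the two views.
X = triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0))
```

In practice this is repeated over thousands of feature matches and refined by bundle adjustment to produce the dense set of 3D points mentioned above.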

[Figure: Image to be matched with the 3D model]

Then, the camera poses of the second sequence are estimated by matching its images with their overlapping counterparts in the first sequence. Next, a set of synthetic images is generated for each image of the second sequence. These synthetic images are renderings of the textured 3D model warped so that the scene is seen from the same camera pose as the corresponding image of the second sequence. This reduces the effect of the 3D relief and provides better registration. Photometric matching techniques are then used to correct differences in illumination and/or differences caused by the image sensors between the two datasets.
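A common photometric matching technique is histogram matching, which remaps the intensities of one image so that their distribution follows that of the other. The text does not specify the exact technique used here, so the following is only an illustrative numpy sketch:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source intensities so their CDF follows the reference CDF."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source intensity, pick the reference intensity whose
    # cumulative frequency is closest.
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(source.shape)

# A dark synthetic patch remapped toward a brighter real-image patch.
synthetic = np.array([[0.0, 1.0], [2.0, 3.0]])
reference = np.array([[10.0, 11.0], [12.0, 13.0]])
corrected = match_histogram(synthetic, reference)
```

After this correction, a pixelwise comparison between the synthetic and real images reflects scene changes rather than lighting or sensor differences.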

[Figure: Synthetic images from the 3D model textures]
[Figure: Synthetic images photometrically matched]
Finally, once a synthetic image viewing the same area from the same location as an image of the second sequence has been generated, standard change detection algorithms can be applied. This process is repeated for every image of the second sequence and its corresponding set of synthetic images. As a result, we obtain a binary change mask for each pair of images indicating where the two images differ. These change masks can then be merged into a single mask representing the changes over the whole surveyed area.
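As an illustration of this final step, the simplest standard change detector is per-pixel differencing with a threshold, and the per-pair masks can be merged with a logical OR. The text does not commit to a specific detector, so this is only a minimal sketch:

```python
import numpy as np

def change_mask(img_a, img_b, thresh=0.2):
    """Binary change mask: True where intensities differ by more than thresh."""
    return np.abs(img_a.astype(float) - img_b.astype(float)) > thresh

def merge_masks(masks):
    """Union of per-pair masks: a pixel is 'changed' if any pair flags it."""
    merged = np.zeros_like(masks[0], dtype=bool)
    for m in masks:
        merged |= m
    return merged

# Two registered pairs flag changes at different pixels; the merged mask
# collects both into one map of the surveyed area.
a = np.zeros((2, 2))
b = a.copy(); b[0, 0] = 1.0   # change in one pair
c = a.copy(); c[1, 1] = 1.0   # change in another pair
overall = merge_masks([change_mask(a, b), change_mask(a, c)])
```

In a real deployment the differencing would typically be preceded by noise filtering and followed by morphological cleanup of the binary masks.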