Published on March 11, 2026

A method to fuse time series of images from different optical satellites


New artificial intelligence techniques make it possible to fuse satellite time series from two satellites in order to increase spatial or temporal resolution while removing clouds from the data.

© CESBIO/CNES

An increasing number of space missions now regularly and systematically observe the Earth's surface. For optical missions with decametric resolution, we benefit from two programmes, Landsat and Sentinel-2, whose characteristics are summarised in the table below. These missions provide free, high-quality data that is relatively easy to use.

However, the simultaneous use of multiple sources of space data is still rare. For instance, searching all scientific literature published in 2025 for 'Sentinel-2' yields 6462 references, 'Landsat' yields 5569, and 'Landsat and Sentinel-2' yields just 987. This is probably due to the difficulty of combining data with different resolutions, spectral bands and acquisition dates.

Main features of Sentinel-2 and Landsat

| Mission | Sentinel-2 | Landsat |
| --- | --- | --- |
| Resolution | 10 to 20 m depending on the band | 30 to 100 m, plus one band at 15 m |
| Spectral bands | 13 | 10 |
| Revisit period | 5 days at the equator | 9 days at the equator |
Example of Landsat (left) and Sentinel-2 (right) time series used as input to the data fusion method: thumbnails acquired over a whole year (25 for Landsat, 40 for Sentinel-2). Fully cloudy images are not displayed. © CESBIO/CNES

A new method for spatio-temporal fusion

In their new article, Julien Michel and Jordi Inglada have proposed a solution to this problem. Imagine you have a year's worth of Sentinel-2 and Landsat-8 data with clouds, irregular acquisitions, different resolutions and spectral bands. Their proposed learning method makes it possible to provide all the data at the same resolution (10 metres), free of clouds, on any desired date throughout the year and for any available band in either sensor.

We will not go into detail here about the architecture of the AI method (Temporal Attention Multi-Resolution Fusion, or TAMRF), but a brief description can be found in this blog post, with more detail provided in the scientific article. The self-supervised learning strategy is inspired by that used in language models: a certain amount of the data provided by the two satellites is hidden from the tool, which is then trained to estimate the content of this hidden data from the other images in the time series. The model therefore learns to interpolate the data temporally, spatially and spectrally. The authors have developed an original cost function with two innovative terms: one that promotes high spatial resolution details in the predicted images, and another that teaches the model to ignore uninformative pixels, such as cloudy pixels or areas of missing data. All of this is explained in detail in Julien Michel's thesis, which was successfully defended on the 2nd of February.
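To make the two cost-function ideas concrete, here is a minimal NumPy sketch of a masked reconstruction loss with a spatial-detail term. All names, weights and formulas are illustrative assumptions for exposition; they are not the actual TAMRF loss from the paper.

```python
import numpy as np

def masked_l1(pred, target, valid):
    """Mean absolute error over valid (observed, cloud-free) pixels only."""
    return float(np.abs((pred - target) * valid).sum() / max(valid.sum(), 1))

def gradient_l1(pred, target, valid):
    """L1 error on vertical and horizontal image gradients, restricted to
    pairs of adjacent valid pixels: one way to push a model towards sharp
    spatial detail rather than blurry averages."""
    v = valid[1:, :] * valid[:-1, :]    # both pixels of each vertical pair valid
    h = valid[:, 1:] * valid[:, :-1]    # both pixels of each horizontal pair valid
    return (masked_l1(np.diff(pred, axis=0), np.diff(target, axis=0), v)
            + masked_l1(np.diff(pred, axis=1), np.diff(target, axis=1), h))

def training_loss(pred, target, valid, w_grad=0.5):
    """Masked reconstruction term plus a weighted spatial-detail term."""
    return masked_l1(pred, target, valid) + w_grad * gradient_l1(pred, target, valid)

# Toy check: pixels flagged as invalid (e.g. cloudy) never contribute.
rng = np.random.default_rng(0)
target = rng.uniform(0.0, 0.4, size=(8, 8))   # surface reflectances
pred = target + 0.01                          # prediction off by 0.01 everywhere
valid = np.ones((8, 8))
valid[:4, :4] = 0.0                           # pretend this corner is cloudy
```

Because every term is multiplied by the validity mask, corrupting the cloudy corner of `pred` leaves the loss unchanged, which is exactly the behaviour the second cost-function term is meant to teach.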

Output time series from the data fusion method (zoom). Top: surface reflectances predicted for the 15th of each month for the blue, green and red bands; bottom: the same for the near- and short-wave infrared bands.

A well validated fusion

Data fusion not only produces beautiful images, but also well-validated data. The accuracy of the reflectances was assessed by withholding data from the TAMRF and comparing the predicted image with the actual image. For instance, when predicting a Sentinel-2 image, the following performance was achieved:

  • If a Landsat image was available on the same day, the accuracy was around 0.015 reflectance units. This accuracy is very close to that of Sentinel-2 surface reflectances.
  • If no image was available during the month, performance degrades slightly, with errors ranging from 0.025 to 0.045 reflectance units depending on the band. While the method cannot predict specific events, such as the exact date on which a farmer harvests a plot, it still provides an accurate estimate of the order of magnitude.
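The hold-out evaluation described above can be sketched as follows. The function and variable names are illustrative, not from the paper; the only assumption is that accuracy is scored in reflectance units over clear reference pixels.

```python
import numpy as np

def holdout_rmse(predicted, reference, cloud_mask):
    """Root-mean-square error, in reflectance units, between a predicted image
    and the withheld reference image, ignoring cloudy reference pixels."""
    clear = ~cloud_mask
    err = predicted[clear] - reference[clear]
    return float(np.sqrt(np.mean(err ** 2)))

# Toy example: a prediction whose error is around 0.015 reflectance units.
rng = np.random.default_rng(42)
reference = rng.uniform(0.0, 0.4, size=(100, 100))        # withheld band
predicted = reference + rng.normal(0.0, 0.015, size=(100, 100))
cloud_mask = np.zeros((100, 100), dtype=bool)
cloud_mask[:20, :20] = True                               # cloudy part of the reference

rmse = holdout_rmse(predicted, reference, cloud_mask)
```

Masking the reference clouds matters: without it, a perfectly good prediction would be penalised wherever the withheld image itself is unusable.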

Currently, no other method offers equivalent capabilities; however, various published methods can perform more specialised subtasks, such as predicting a Sentinel-2 image when a Landsat image is available or the temporal interpolation of a Sentinel-2 series.

Performance comparisons have been carried out for some of these subtasks. Experiments show that a single TAMRF model, trained once and for all, offers performance similar to or better than that of specialised models, with additional advantages such as robustness to clouds and the ability to predict unobserved dates and improve spatial resolution.

Comparison of different methods for predicting a Sentinel-2 image at 10 m resolution when a Landsat image is available on the same day. The top row corresponds to various methods from the literature; the second row shows TAMRF predictions using, from left to right, only Landsat images, only Sentinel-2 images, or both, followed by the Sentinel-2 reference. The two following rows are identical for the near- and short-wave infrared bands. The TAMRF method is the closest to the reference.

A useful fusion

This method should be very useful.

  • It will allow the temporal revisit times of two space missions to be combined for improved monitoring of agricultural events, seasonal changes and natural disasters.
  • It will provide a valuable alternative to monthly cloud-free syntheses.
  • It could even transform satellite design, enabling missions to be carried out with frequent observations at medium resolution (e.g. 10 m) combined with missions involving less frequent (but still regular) revisits at higher resolution (e.g. 2 m).
