The AI4EO Future Lab aims not only to be at the cutting edge of Earth observation but also to make key contributions to the interpretability of AI, its ethical implications, and the corresponding technology transfer. We encourage the scientific community and the general public interested in these topics to join our virtual seminars. If you are interested in attending a particular seminar, please send an email to ai4eo@tum.de with the title of the talk as the subject. Furthermore, if you would like to receive notifications about all our events, we invite you to subscribe to our distribution list: ai4eo-seminars-subscribe@lists.lrz.de

Looking forward to meeting you soon!


Data-driven Machine Vision for Fast and Reliable Predictions

Speaker: Prof. Dr. Rudolph Triebel, Researcher at DLR and Guest Professor at the AI4EO Future Lab - Technical University of Munich. March 11 2022. 09-10am (CET)

Recently, machine vision algorithms have shown a steadily increasing level of maturity for real-world applications such as robotics, autonomous driving, and remote sensing. Still, at least three major challenges remain to make vision systems more useful in these domains. First, while most current approaches rely on machine learning techniques, they often lack a sufficient amount of high-quality, semantically annotated training data. Second, the predictive uncertainty of current approaches does not correlate sufficiently with the actual probability of correct predictions. And finally, fast inference in time-critical domains is difficult to achieve. In my talk, I will present some recent work from my team at DLR that addresses all three of these challenges. In particular, I will show approaches to generate high-quality training data synthetically for effective semantic segmentation, an uncertainty-aware classifier based on a Bayesian Neural Network (BNN) architecture, and a fast and robust algorithm for object pose estimation and tracking. As applications for these methods, I will give examples from robotics and remote sensing.
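
As a minimal illustration of the uncertainty-aware classification idea, the sketch below uses Monte Carlo dropout, one common approximation to a Bayesian neural network. The actual BNN architecture used at DLR is not specified in this abstract; the model, dimensions, and sample count here are placeholders.

```python
# Sketch: predictive uncertainty via Monte Carlo dropout, one common
# approximation to a Bayesian neural network (not the talk's exact model).
import torch
import torch.nn as nn

class DropoutClassifier(nn.Module):
    def __init__(self, in_dim=16, n_classes=4, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_predict(model, x, n_samples=30):
    """Keep dropout active at test time and average softmax outputs."""
    model.train()                       # enables dropout during inference
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)            # approximate predictive distribution
    # predictive entropy as a simple per-sample uncertainty score
    entropy = -(mean * mean.clamp_min(1e-9).log()).sum(dim=-1)
    return mean, entropy

model = DropoutClassifier()
x = torch.randn(8, 16)                  # 8 toy feature vectors
mean_probs, uncertainty = mc_predict(model, x)
```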


The challenges of predicting future sea level rise and its potential consequences for society

Speaker: Prof. Dr. Jonathan Bamber, Professor at the University of Bristol and Guest Professor at the AI4EO Future Lab - Technical University of Munich. November 12 2021. 09-10am (CET)

Sea level rise (SLR) is one of the most serious and damaging consequences of climate change. Despite its importance and potential societal impacts, it has, to date, proved challenging to predict future SLR and the probabilities of different projections. This is because the largest uncertainty lies in what the ice sheets covering Antarctica and Greenland may do. Together they have the potential to raise global mean sea level by 65 m. The most recent assessment suggests that an SLR of 2 m would flood land occupied by 630 million people, roughly a tenth of the population of the planet.

What is the probability of an SLR of 2 m by 2100, or sooner? This question has proved difficult to answer because the ice sheets have the longest response time of any part of the climate system, while reliable observations span just a few decades. In addition, several potentially critical processes are poorly understood and/or contested in terms of their role in future trends. Planning for adaptation strategies, however, takes multiple decades. The Thames Barrier, for example, took almost forty years to implement after the devastating floods of 1953.

A key to reducing uncertainties in future SLR is improved understanding of how sea level has responded to climate forcing during the instrumental record, especially since the advent of continuous observations of sea surface height from altimetry in 1992. Here I will present the concept behind, and early results from, an ERC grant called GlobalMass (www.globalmass.eu), which aims to partition the sea level budget into its component parts using a combination of satellite and in-situ data sets and numerical model output within a Bayesian inference framework. To do this, it is necessary to identify, separately, the contributions from solid-Earth deformation, global land hydrology, melting ice and thermal expansion of the oceans.
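
To make the partitioning idea concrete, here is a toy sketch of Bayesian budget partitioning under Gaussian priors: an observed total rate of sea-level change is split among components that must jointly explain it. The four-component split and all numbers are illustrative assumptions, not GlobalMass results.

```python
# Toy sketch of the budget-partitioning idea: an observed total rate of
# sea-level change is split among components (melting ice, thermal expansion,
# land hydrology, solid-Earth effects) under Gaussian priors.
# All values are hypothetical.
import numpy as np

A = np.ones((1, 4))                            # total = sum of 4 components
prior_mean = np.array([1.2, 1.0, 0.2, -0.3])   # mm/yr, hypothetical priors
prior_cov = np.diag([0.4, 0.3, 0.2, 0.1]) ** 2
obs = np.array([3.4])                          # observed total rate (mm/yr), hypothetical
obs_var = 0.2 ** 2

# Standard Gaussian conditioning: posterior over components given the total.
S = A @ prior_cov @ A.T + obs_var
K = prior_cov @ A.T @ np.linalg.inv(S)         # "Kalman gain"
post_mean = prior_mean + (K @ (obs - A @ prior_mean)).ravel()
post_cov = prior_cov - K @ A @ prior_cov

print(post_mean)   # components adjusted so that they jointly explain the observed total
```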


Machine learning to approach sustainability using scarcely labeled to unlabeled earth observation data

Speaker: Prof. Dr. Dario Augusto Borges Oliveira, Guest Professor at the AI4EO Future Lab - Technical University of Munich. November 05 2021. 09-10am (CET)

In recent decades, a debate on a responsible, sustainable human presence on Earth has emerged strongly. With climate change and the overwhelming economic pressure on nature, empowering procedures for efficient resource use with the recent advances in artificial intelligence is vital to create adequate policies and to trigger warning alerts accordingly. Remotely sensing natural dynamic phenomena, like phenological crop cycles or deforestation processes, is challenging. Such phenomena are usually governed by continuous and smooth physical processes, but remote sensing involves sensors of very different nature, scale, and revisit rates, corrupted by stochastic events, resulting in highly complex multimodal, multitemporal, and multi-scale datasets. Moreover, data labeling in Earth Observation (EO) applications is usually scarce or unavailable due to the massive amount of data continuously acquired or to notorious fieldwork limitations. This presentation discusses machine learning approaches for Earth observation data with scarce labeling, aiming to develop environmental protection solutions, promote efficient tools for adapting to the effects of global warming, and support efficient agricultural practices with a lower data annotation burden.


Distinguished Lecturer Series: Interpretable Deep Learning from Earth Observation Data

Speaker: Prof. Dr. Plamen Angelov, Professor in Intelligent Systems, Lancaster University. September 24 2021. 11am-12pm (CET)

This talk will be in two parts. First, I will present LIRA – the Lancaster University (UK) centre focused on Intelligent, Robotic and Autonomous systems research. Then, I will talk about interpretable or explainable-by-design forms of deep learning and their application to Earth Observation data.

Machine Learning (ML) and AI justifiably attract the attention and interest not only of the wider scientific community and industry but also of society and policy makers. They are now widely used in remote sensing and Earth Observation as well. However, even the most powerful (in terms of accuracy) algorithms such as deep learning (DL) can give a wrong output, which may be fatal. Due to the opaque and cumbersome model structure used by DL, some authors have started to talk about a dystopian "black box" society. Despite the success in this area, the way computers learn is still principally different from the way people acquire new knowledge, recognise objects and make decisions. People do not need a huge amount of annotated data. They learn by example, using similarities to previously acquired prototypes, not by using parametric analytical models. Current ML approaches are focused primarily on accuracy and overlook explainability, the semantic meaning of the internal model representation, reasoning and its link with the problem domain. They also overlook the effort needed to collect and label training data and rely on assumptions about the data distribution that are often not satisfied. The ability to detect the unseen and unexpected and to start learning these new classes in real time with no or very little supervision is critically important and is something that no currently existing classifier can offer. The challenge is to fill this gap between a high level of accuracy and semantically meaningful solutions.

The most efficient algorithms that have recently fuelled interest in ML and AI are also computationally very hungry: they require specific hardware accelerators such as GPUs, huge amounts of labelled data, and time. They produce parameterised models with hundreds of millions of coefficients, which are impossible for a human to interpret or manipulate. All these challenges and identified gaps require a dramatic paradigm shift and a radically new approach. In this talk I will sketch (since the time is too short to go into much detail) the method called xDNN (explainable-by-design deep neural network) and its application to remote sensing and flood detection using Sentinel-2 multispectral images.
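
The prototype-based reasoning described above can be illustrated with a minimal sketch: classes are represented by a handful of prototypes in a deep feature space, and a new sample is explained by its most similar prototype. This is only a schematic of the general idea; xDNN's density-based formulation and training procedure differ in detail, and the data and feature dimension below are synthetic.

```python
# Minimal sketch of prototype-based, explainable-by-design classification:
# a sample is assigned to the class of its most similar prototype, and the
# prototype itself serves as the explanation. Not xDNN's exact formulation.
import numpy as np

def fit_prototypes(features, labels, per_class=3, seed=0):
    """Pick a few training samples per class as prototypes (simplest choice)."""
    rng = np.random.default_rng(seed)
    protos, proto_labels = [], []
    for c in np.unique(labels):
        idx = rng.choice(np.flatnonzero(labels == c), per_class, replace=False)
        protos.append(features[idx])
        proto_labels.extend([c] * per_class)
    return np.vstack(protos), np.array(proto_labels)

def predict(protos, proto_labels, x):
    """Classify by the nearest prototype; return the class and the prototype index."""
    d = np.linalg.norm(protos - x, axis=1)
    nearest = d.argmin()
    return proto_labels[nearest], nearest

# toy data: 2 classes in a 128-d "deep feature" space
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 1, (50, 128)), rng.normal(3, 1, (50, 128))])
labels = np.array([0] * 50 + [1] * 50)
P, PL = fit_prototypes(feats, labels)
cls, proto_id = predict(P, PL, feats[0])
```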


Semantic Segmentation & Detection Using Multi-Sensory Data: From Methods to Applications

Speaker: Dr. Muhammad Shahzad. Guest Professor at the AI4EO Future Lab - Technical University of Munich. August 20 2021. 9-10am (CET)

Recent advancements in sensing and data acquisition technologies have enabled us to gather multi-sensory data from various sensors. To process such data smartly, segmentation and detection are key tasks that play a pivotal role, especially in semantic scene understanding and high-level cognition, which in turn have a wide range of diverse applications in different fields, including remote sensing (e.g., urban modeling, vegetation monitoring, surveying, surveillance), robotics (e.g., autonomous navigation, self-driving, terrestrial mapping, housekeeping, old-age assistance, agriculture), augmented/virtual reality, medical imaging, and many others. This talk will briefly present the work I have done with the help of my colleagues on semantic segmentation, object detection, and their applications to data acquired from various sensors, including radar, optical, inertial, and 3D point clouds.


Explainable machine learning for the environmental sciences

Speaker: Prof. Dr.-Ing. Ribana Roscher, Assistant Professor of Remote Sensing at the Institute of Geodesy and Geoinformation, University of Bonn, Germany, and Guest Professor at the Technical University of Munich in the AI4EO Future Lab. August 6 2021. 9-10am (CET)

Machine learning methods have been an integral part of many application areas for some time. Especially with the recent development of neural networks, these methods are increasingly used in the sciences to obtain scientific results from observational or simulation data. Besides high accuracy, a desired goal is to learn explainable models and to understand how a specific decision was made. To achieve this goal and obtain explanations, knowledge from the domain is needed, which can be integrated into the model or applied post-hoc. This presentation addresses explainable machine learning approaches in the environmental sciences and shows that machine learning can not only be used to learn models that should be consistent with our existing knowledge but can also lead to new scientific insights.
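
As one simple example of the post-hoc route mentioned above, the sketch below computes a gradient saliency map, i.e. the sensitivity of the predicted class score to each input pixel. The model and input are placeholders, and this is just one of many attribution methods rather than the specific approaches covered in the talk.

```python
# Post-hoc explanation sketch: gradient saliency w.r.t. the input image.
# The network below is a stand-in for any trained classifier.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 5),
)
model.eval()

x = torch.rand(1, 3, 64, 64, requires_grad=True)    # placeholder image
scores = model(x)
top = scores.argmax(dim=1).item()                   # predicted class
scores[0, top].backward()                           # d(score)/d(input)
saliency = x.grad.abs().max(dim=1).values           # per-pixel importance map (1, 64, 64)
```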

The Active Learning Paradigm: Reduce labeling effort with efficient labeling

Speaker: Dr. Matthias Kahl, postdoctoral researcher at the AI4EO Future Lab. July 23 2021. 9-10am (CET)

The purpose of many ML approaches is to automate complex decision processes or to predict outcomes in multivariate environments. For any supervised ML approach, a certain amount of ground truth (GT) data is necessary, either for the training process or as the data base for a lazy learner. Ground truth data is generally gathered through human involvement in the form of agreement, annotation, categorization, or supervision of instances. Randomly selecting the instances to be labeled by humans is a good baseline strategy for approximating the underlying distribution of a given set of instances. The field of Active Learning (AL) examines strategies beyond this random sampling that either achieve better performance with the same number of GT instances or reduce the number of GT instances needed to reach the same performance. In this talk I will give an overview of common strategies, when to use AL, and which strategies might be useful in the EO context.
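
A minimal sketch of one such strategy beyond random sampling is uncertainty (margin) sampling: repeatedly train on the labeled pool and query the instances the current model is least sure about. The classifier, batch size, and budget below are illustrative choices, not a prescription from the talk.

```python
# Sketch of an active-learning loop with margin-based uncertainty sampling.
# Assumes the initial random batch already covers all classes.
import numpy as np
from sklearn.linear_model import LogisticRegression

def margin_uncertainty(proba):
    """Small margin between the two most probable classes = high uncertainty."""
    part = np.sort(proba, axis=1)
    return part[:, -1] - part[:, -2]

def active_learning_loop(X_pool, y_pool, n_init=20, n_rounds=10, batch=10, seed=0):
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), n_init, replace=False))
    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
        unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
        margins = margin_uncertainty(clf.predict_proba(X_pool[unlabeled]))
        # query the instances the current model is least sure about
        query = unlabeled[np.argsort(margins)[:batch]]
        labeled.extend(query.tolist())      # here a human oracle would provide the labels
    return clf, labeled
```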


Beyond Fellowship Retrospective: Geospatial Machine Learning for Earth Observation and Climate Modeling

Speaker: Mr. Konstantin Klemmer, Beyond Fellow at the AI4EO Lab and PhD candidate at the University of Warwick (UK) & New York University.
July 2nd 2021. 9-10am (CET)

This talk will summarize my research activities throughout my stay as Beyond Fellow, and how they fit into the greater scope of my PhD research on geospatial machine learning. I will initially revisit the aim of my PhD work: embedding information on spatial context and dependencies into neural network models to improve their performance when working with geographic data. I will explore existing parametric and non-parametric embeddings capturing spatial and spatio-temporal dynamics and will discuss how these may be integrated into predictive and generative neural network models. Moving from methods to applications, I will highlight some of the research on geospatial machine learning conducted in collaboration with the AI4EO lab, focusing on applications in climate modeling and biomass estimation.
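
As a small illustration of a parametric spatial embedding, the sketch below encodes longitude/latitude with sinusoids at several frequencies, producing features that can be concatenated to the other inputs of a neural network. It illustrates the general idea of feeding spatial context to a model, not the specific embeddings studied in this work.

```python
# Sketch of a simple parametric spatial embedding: sinusoidal encodings of
# coordinates at multiple frequencies. Frequencies and scaling are arbitrary.
import numpy as np

def sinusoidal_coord_embedding(lon, lat, n_freqs=4):
    """Map (lon, lat) in degrees to a 4*n_freqs-dimensional feature vector."""
    lon_r, lat_r = np.radians(lon), np.radians(lat)
    feats = []
    for k in range(n_freqs):
        f = 2.0 ** k
        feats += [np.sin(f * lon_r), np.cos(f * lon_r),
                  np.sin(f * lat_r), np.cos(f * lat_r)]
    return np.stack(feats, axis=-1)

emb = sinusoidal_coord_embedding(np.array([11.57]), np.array([48.14]))  # Munich
print(emb.shape)   # (1, 16) -> concatenate with the other model inputs
```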


Large-scale spatio-temporal indexing & analytics with IBM PAIRS

Speaker: Dr. Conrad Albrecht. Principal Investigator of the Large-Scale Data Mining in Earth Observation group at the German Aerospace Center (DLR).
11th June 2021. 9-10am (CET)

Over the past six years, researchers at the IBM T.J. Watson Research Center, together with IBM software engineers, have designed and implemented a petabyte-scale geospatial data platform, IBM PAIRS: https://pairs.res.ibm.com/tutorial/ [1,2]. PAIRS curates geospatial data of various resolutions in space and time under a unified, nested index, ready for consumption by scientists to extract valuable insights [3-5]. The presentation will introduce the technology behind the system, highlight scientific applications based on it, and give a hands-on demonstration: https://github.com/ibm/ibmpairs.

[1] https://doi.org/10.1109/BigData.2015.7363884

[2] https://doi.org/10.1109/BigData.2016.7840910

[3] https://doi.org/10.1109/BigData47090.2019.9006600

[4] https://doi.org/10.1109/BigData47090.2019.9005548

[5] https://doi.org/10.1145/3394486.3403301

The corresponding slides of the presentation can be found here.
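
For readers unfamiliar with nested spatio-temporal indexing, the sketch below shows the general idea behind such a key scheme: interleaving latitude/longitude decisions into a quadtree-style prefix and appending a coarse time bucket, so that nearby locations and times share key prefixes and region/time queries become prefix scans. This is a purely conceptual illustration and not the actual PAIRS index or its client API.

```python
# Conceptual sketch only: a quadtree-style spatial key plus a coarse time
# bucket, where the prefix length controls spatial resolution.
# NOT the actual IBM PAIRS indexing scheme.
from datetime import datetime

def nested_key(lat, lon, timestamp, levels=16, time_bucket_hours=24):
    """Return a string key whose prefix encodes a nested spatial cell."""
    lat_min, lat_max, lon_min, lon_max = -90.0, 90.0, -180.0, 180.0
    digits = []
    for _ in range(levels):
        lat_mid, lon_mid = (lat_min + lat_max) / 2, (lon_min + lon_max) / 2
        quad = (2 if lat >= lat_mid else 0) + (1 if lon >= lon_mid else 0)
        digits.append(str(quad))
        lat_min, lat_max = (lat_mid, lat_max) if lat >= lat_mid else (lat_min, lat_mid)
        lon_min, lon_max = (lon_mid, lon_max) if lon >= lon_mid else (lon_min, lon_mid)
    bucket = int(timestamp.timestamp() // (time_bucket_hours * 3600))
    return "".join(digits) + f"-{bucket}"

key = nested_key(48.14, 11.57, datetime(2021, 6, 11, 9, 0))
# queries over a region and time window then become prefix-range scans over such keys
```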


Due to the success of deep learning approaches in many applications of computer science, these ideas are now becoming more and more important in other research fields as well, including the life sciences, medicine, and remote sensing. Even though the applications are rather different, in all of these areas we have to deal with the same problems: the limited amount of (labeled) data and heterogeneous or ambiguous data. In this talk, we will discuss both problems and demonstrate how they can be analyzed and avoided in practice. To this end, both the theoretical foundations and the practical aspects of circumventing these problems in different application areas will be tackled.


From Compressed Sensing to Neurally Augmented Algorithms

Speaker: Dr. Peter Jung, Senior Scientist at the Technical University of Berlin and Visiting Professor at the AI4EO Lab.
22nd January 2021. 9am (CET)

Recovering data from indirect and incoherent observations is a core task in fields like computational imaging, communications and information processing, group testing and others.  Such inverse problems are ill-posed and therefore prior structural assumptions are necessary to restrict solutions.

As prototypical examples, compressed sensing and low-rank recovery deal with the problem of recovering a sparse vector or a low-rank matrix from very few compressive observations, far less than its ambient dimension. Fundamental works show that in many cases this can be provably achieved in a robust and stable manner with computationally tractable algorithms.
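
To make the sparse-recovery setting concrete, here is a minimal sketch of ISTA (iterative soft-thresholding), one of the classical tractable algorithms for this problem; the regularisation weight lam is exactly the kind of tuning parameter discussed below. The toy dimensions are arbitrary.

```python
# ISTA sketch for sparse recovery: min_x 0.5*||Ax - y||^2 + lam*||x||_1
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

# toy example: recover a 10-sparse vector in R^200 from 60 random measurements
rng = np.random.default_rng(0)
n, m, k = 200, 60, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = ista(A, y, lam=0.01, n_iter=500)   # lam is the hand-tuned parameter
```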

However, sparsity and low-rankness are simple priors, and recovery algorithms often require tuning. It is difficult and often impossible to treat detailed structure and optimal tuning in real-world problems analytically. Recovery approaches that are well understood in theory often perform sub-optimally in practice. Algorithms converge slowly, and increased acquisition time and sampling rates are necessary to achieve a given target resolution.

On the other hand, in many cases neural networks can be trained to empirically achieve high expressivity, and the question is how to make these ideas accessible to the inverse problem setting.

In this talk I will discuss potential links between the compressed sensing methodology, data-driven approaches for inverse problems, and the tuning of algorithms. I will first present some recent tuning-free compressed sensing results with applications in communication and group testing, showing that strict guarantees can be obtained in non-standard settings. Then I will focus on how structure and tuning can be incorporated into recovery algorithms in a data-driven manner. I will discuss some ideas and recent results for compressed sensing and phase retrieval which show that substantial improvements in terms of recovery quality and run-time are possible.


Unsupervised deep learning for multi-temporal analysis

Speaker: Dr. Sudipan Saha, Postdoctoral Researcher at the AI4EO Lab.
10th December 2020. 9am (CET)

Deep learning based methods depend on the availability of labeled training data. Such labeled data is often unavailable in remote sensing, especially in the context of high-resolution (HR) multi-temporal image analysis. In addition to the lack of labeled data, HR multi-temporal image analysis needs to deal with spatio-temporal complexity and differences related to the acquisition conditions, e.g., those induced by the use of different sensors. This talk will provide an overview of the methods devised to address the aforementioned challenges, especially using transfer learning, self-supervised learning, and domain adaptation. Considering that remote sensing data are evolving fast in terms of spatial, spectral, and temporal resolution, the talk will also provide an overview of possible future developments that can take advantage of the next-generation remote sensing data.
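
A minimal sketch of the unsupervised flavour of this problem: map two co-registered acquisitions through a (pretrained or transferred) feature extractor, take the per-pixel feature distance as a change magnitude, and threshold it. The extractor and the mean-plus-k-sigma threshold below are placeholder choices, not the exact methods presented in the talk.

```python
# Sketch of unsupervised change detection from deep feature differences.
import torch
import torch.nn as nn

def change_map(feature_extractor: nn.Module, img_t1, img_t2, k=1.5):
    """img_t1, img_t2: (1, C, H, W) tensors of the same scene at two dates."""
    with torch.no_grad():
        f1, f2 = feature_extractor(img_t1), feature_extractor(img_t2)
    mag = (f1 - f2).pow(2).sum(dim=1).sqrt().squeeze(0)   # per-pixel change magnitude
    thr = mag.mean() + k * mag.std()                      # crude automatic threshold
    return mag, mag > thr

# placeholder extractor; in practice this would be a network pretrained or
# self-supervised/transferred on remote sensing imagery
extractor = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 16, 3, padding=1))
t1, t2 = torch.rand(1, 4, 128, 128), torch.rand(1, 4, 128, 128)
magnitude, binary_change = change_map(extractor, t1, t2)
```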


Artificial Neural Networks and AI in High-Assurance Applications: Gaps and Techniques

Speaker: Dr. Johann Schumann, Researcher at KBR/NASA Ames Research Center
18th November 2020. 9am (CET)

In recent years, capabilities of Deep Neural Networks (DNN) and Artificial Intelligence (AI) systems have grown tremendously. They are now applied in many areas ranging from game playing, social media, science, to robotics, automotive, and aerospace applications. Based upon requirements for safety of DNN and AI in high assurance automotive and aerospace applications, I will discuss the necessity to ensure that AI techniques for the analysis of Earth observation data and reasoning are working correctly and reliably. In this talk I will present modern techniques for the verification and validation (V&V) of DNN and other AI components as well as approaches for interpretable AI. I will discuss how these techniques can help to ensure quality of the AI results, improve confidence in their application, and facilitate human-AI interaction and collaboration.

Literature Review of Ethical issues in AI4EO: Current Understanding and Scope for Improvement

Speaker: Prof. Dr. Mrinalini Kochupillai, Guest Professor at AI4EO Future Lab
13th November 2020. 9am (CET)

This talk provides an overview of findings from an extensive literature review conducted in the field of ethical issues and opportunities at the interface of Artificial Intelligence and Earth Observation. It will highlight: (i) the ethical issues and opportunities already well known/identified in the literature, (ii) the strengths and shortcomings linked with the present approach to and understanding of these issues, and (iii) a roadmap for tackling the shortcomings and creating a more comprehensive, user-friendly set of guidelines and approaches for researchers engaged in the field of AI4EO. The talk will feature some live surveys and will offer ample time for Q&A and discussion.


Towards Geographically-Aware Machine Learning

Speaker: Konstantin Klemmer, PhD student at the University of Warwick & New York University and Beyond Fellow at the AI4EO Future Lab
4th November 2020. 9am (CET)

Machine learning methods have shown great promise for modelling complex, high-dimensional data environments. However, they still struggle with inherently non-i.i.d. data such as geographical data. On the other hand, the academic fields of geographic information science and spatial statistics have long recognized this issue and have developed approaches to identify and embed spatial dependencies. This opens up the opportunity to combine approaches from both areas to enable geographically-aware machine learning with high-dimensional, non-linear data. This talk will highlight why these methods are needed, looking at real-world examples. Further, we will explore some useful spatial metrics and how they can be applied in generative and predictive machine learning models.
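
One classic spatial metric from this literature is Moran's I, a measure of spatial autocorrelation; the sketch below computes the global statistic with a simple k-nearest-neighbour weight matrix on synthetic point data. The neighbourhood definition and the data are illustrative assumptions.

```python
# Global Moran's I with a k-nearest-neighbour weight matrix.
# Clearly positive values indicate spatial clustering of similar values.
import numpy as np
from scipy.spatial import cKDTree

def morans_i(coords, values, k=5):
    n = len(values)
    tree = cKDTree(coords)
    _, idx = tree.query(coords, k=k + 1)        # first neighbour is the point itself
    W = np.zeros((n, n))
    for i, neigh in enumerate(idx[:, 1:]):
        W[i, neigh] = 1.0                       # binary kNN weights
    z = values - values.mean()
    s0 = W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

coords = np.random.default_rng(0).uniform(0, 10, (200, 2))
values = coords[:, 0] + np.random.default_rng(1).normal(0, 0.5, 200)  # east-west trend
print(morans_i(coords, values))   # positive: values are spatially autocorrelated
```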


Beyond Perception Towards Reasoning: Visual Reasoning in Remote Sensing

Speaker: Prof. Dr. Lichao Mou, researcher at the EO Data Science department and Guest Professor at the AI4EO Future Lab
29th October 2020. 9am (CET)

Over the past years, deep learning has brought a real revolution in artificial intelligence for Earth observation (AI4EO), producing stunning results in a variety of different applications. For instance, deep learning-based remote sensing image classification and object detection systems can now be trained to recognize hundreds of different land cover, land use, and object categories, which are sometimes difficult to distinguish even for humans. Although these are indeed impressive advancements, there is no doubt that many problems that are really at the core of AI4EO are far from being solved. This is particularly true for those tasks that involve reasoning, such as induction, deduction, and spatial and temporal reasoning. In this seminar, I will present several exploratory works on visual reasoning in remote sensing done by my colleagues and me.

Shallow learners are dead – Long live shallow learners! Random Forests in the age of Deep Learning

Speaker: Dr. Ronny Hänsch, researcher at the DLR SAR Technology Department.
15th October 2020. 10am (CET)

The rise of deep neural networks has caused essential changes well beyond the machine learning (ML) and computer vision (CV) communities. One of the consequences is that the previous zoo of ML methods in use (e.g., Naive Bayes, MLPs, SVMs, Random Forests) has now been replaced by a monoculture of (deep) neural networks. Deep Learning (DL) approaches have also been successfully used (and sometimes abused) in Remote Sensing (RS) and Earth Observation (EO). Nevertheless, in contrast to other CV applications, shallow learners seem to prevail in RS/EO and coexist with DL (although somewhat in its shadow). This talk aims to shed some light on possible reasons, discusses modern Random Forest variations, and positions them within the context of Deep Learning.
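
For completeness, here is the kind of shallow-learner workflow the talk refers to: a Random Forest trained on per-pixel feature vectors, with per-feature importances available for free. The synthetic data and hyperparameters below are placeholders.

```python
# Tiny Random Forest illustration on synthetic per-pixel features
# (stand-ins for band values and simple spectral indices).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))                  # placeholder per-pixel features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # placeholder land-cover label

rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X[:4000], y[:4000])
print(rf.score(X[4000:], y[4000:]))              # held-out accuracy
print(rf.feature_importances_[:3])               # built-in per-feature importance
```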

Village Data Analytics (VIDA): Use of machine learning and Earth Observation to identify remote villages for electrification

Speaker: Mr. Nabin Raj Gaihre. Researcher at TFE Energy.
26th August 2020. 10am (CET)

More than a billion people do not have access to electricity. Most of them live in very remote regions, and very little is known about these villages. This data void is one of the most important barriers to electrification, if not the most important one. We are developing a solution for it: "Village Data Analytics", or VIDA. VIDA is a machine-learning-based software that analyses satellite imagery and ground data to provide insights into remote villages. It identifies and extracts insights about rural villages anywhere in the world and assesses their suitability for off-grid electrification using mini-grids or individual solar home systems. VIDA points governments and donors, electrification companies, and investors to villages that require immediate electrification. VIDA has already been used by governments and large donors in sub-Saharan Africa.
