Keynote speakers

We are happy to announce Tom Schaul, Gabriele Gramelsberger, and Joost Batenburg as keynote speakers.

Tom Schaul

Dr. Tom Schaul has been a senior researcher at DeepMind since 2013. His research focuses on reinforcement learning with deep neural networks, and also spans modular and continual learning, black-box optimisation, off-policy learning about many goals simultaneously, and video-game benchmarks, most recently StarCraft II. Tom grew up in Luxembourg and studied computer science in Switzerland (with exchanges at Waterloo and Columbia), obtaining an MSc from EPFL in 2005. He holds a PhD from TU Munich (2011), completed under the supervision of Jürgen Schmidhuber at the Swiss AI Lab IDSIA. From 2011 to 2013, he completed a postdoc with Yann LeCun at the Courant Institute of NYU.

About the talk

The allure and the challenges of deep reinforcement learning

In a nutshell, reinforcement learning (RL) means making good decisions by learning from experience. The generality of this problem formulation, and the prospect of minimal supervision, make RL a plausible candidate for solving tomorrow’s problems. RL has been demonstrated to work at scale, in particular when combined with deep learning. In this talk I’ll expand on the key challenges that remain, and illustrate them using the case-study of AlphaStar, a deep RL agent that learned to play the game of StarCraft II at grandmaster level.

Gabriele Gramelsberger

Gabriele Gramelsberger is Professor of Theory of Science and Technology at RWTH Aachen University. Her research is devoted to the philosophy of the computational sciences. She has conducted studies on the influence of computer-based modeling and simulation on meteorology, biology, and chemistry, and is also investigating the connection between computer-based simulation and machine learning strategies. In 2018 she founded the Computational Science Studies Lab (CSS-Lab) at RWTH Aachen University to develop software tools for studying scientific code from a philosophy of science perspective (see, e.g., Maschmann et al. 2020).

About the talk

Machine learning-based research strategies – a game changer for science?

Research is increasingly driven by machine learning strategies, not only for data analysis but also for discovering new hypotheses and findings. Materials science, chemistry, astrophysics, meteorology, and many other disciplines are experimenting with machine learning setups of various types and for various purposes. The keynote lecture will present examples from ongoing research projects based on machine learning strategies and will discuss the impact of AI on scientific research in general.

Joost Batenburg

Prof. dr. Joost Batenburg is Professor of Imaging and Visualization at the Leiden Institute of Advanced Computer Science (LIACS) and also leads the Computational Imaging group at CWI, the national research center for mathematics and computer science in the Netherlands. His research focuses on 3D image reconstruction, and in particular on tomographic imaging. From 2013 to 2017 he chaired the EU COST Action EXTREMA on advanced X-ray tomography. His current research, funded by an NWO Vici grant, focuses on creating a real-time tomography pipeline. He is responsible for the FleX-Ray lab, where a custom-designed CT system is linked to advanced data processing and reconstruction algorithms. Machine learning and other fields of AI play a key role in his research, improving image quality and reconstruction speed and enabling novel interactive workflows in 3D imaging.

About the talk

Challenges in real-time 3D imaging, and how machine learning comes to the rescue

Tomography is a powerful technique for visualizing the interior of an object in 3D from a series of its projections, acquired at different angles during a tomographic scan. At the heart of the technique is a mathematical inverse problem (known as “reconstruction”). At present, the steps of image acquisition, reconstruction, and analysis are usually carried out sequentially, often analyzing the data after the scan has long finished.
In this lecture I will present the research of my team, together with many collaborators, to develop algorithms and software that make the entire tomographic imaging pipeline work in real time, while the scan is taking place. Machine learning can address many of the algorithmic challenges involved, but certainly not in a plug-and-play manner. When scanning objects in 3D, we typically do not have access to a large database of training examples, the circumstances of each scan are different, and so are the internal properties of the objects. In addition, training times and memory requirements tend to increase drastically when scaling up from 2D to fully 3D imaging. I will go over each of these problems and present various strategies for making machine learning feasible and efficient in real-time 3D imaging environments. I will illustrate the results with examples from scientific research (real-time synchrotron tomography and real-time electron tomography), industry (real-time quality control) and cultural heritage (interactive technical art investigation).

FACt speakers

We are happy to announce Nico Roos, Yingqian Zhang, and Luc de Raedt as FACt speakers.

Nico Roos

Nico Roos is an associate professor in the Department of Data Science and Knowledge Engineering (DKE) of the Faculty of Science and Engineering (FSE) at Maastricht University (UM). His main area of expertise is Knowledge Representation and Reasoning. He has also done research on Robotics and Computer Vision, and he is one of the two coordinators of the research theme Explainable and Reliable AI (ERAI) at Maastricht University.

About the talk

We aren’t doing AI research

There is a large diversity of excellent research in the Benelux addressing different aspects of artificial intelligence. The Dutch AI Manifesto distinguishes seven of these aspects, called AI foundations. The creation of artificial intelligence is a multidisciplinary challenge that requires the integration of several of these AI foundations. The Dutch AI Manifesto mentions three multidisciplinary challenges: Socially-Aware AI, Explainable AI, and Responsible AI. Although these are important challenges, they do not address the creation of artificial intelligence. If we claim to do AI research, rather than only creating smart solutions, then we should also address the multidisciplinary challenge of creating artificial intelligence. In this talk, I will address several aspects of this multidisciplinary challenge.

Yingqian Zhang

Yingqian Zhang is an associate professor in the Department of Industrial Engineering at Eindhoven University of Technology. Her current research focuses on developing data-driven decision-making techniques for optimizing operational decisions and system design, supported by EU- and nationally funded projects. She has received several best paper awards for this line of research, including the 2017 Best Paper Award from Omega: The International Journal of Management Science and the Best Industrial Paper Award at ICAART 2020. She works closely with industrial partners on applications in e-commerce, logistics, and manufacturing. She is active in the AI and OR communities, serving on the boards of BNVKI (the Benelux Association for Artificial Intelligence) and of the Data Science meets Optimisation working group of EURO (the Association of European Operational Research Societies).

About the talk

AI for industrial decision-making

Operational decision-making problems are everywhere, from train maintenance scheduling and ambulance dispatching to order batching in warehousing. These problems are typically studied in the Operations Research community, where expert and domain knowledge is used to find good solutions. In this talk, I will introduce new research directions my team has been working on in recent years that investigate how data can augment traditional decision-making methods. I will use examples from practice to illustrate our approaches and the emerging research challenges.

Luc de Raedt

Luc De Raedt is a full professor at the Department of Computer Science, KU Leuven, and director of Leuven.AI, the newly founded KU Leuven Institute for AI. He is a guest professor at Örebro University in the Wallenberg AI, Autonomous Systems and Software Program. He received his PhD in Computer Science from KU Leuven (1991) and was full professor (C4) and Chair of Machine Learning at the Albert-Ludwigs-University Freiburg, Germany (1999-2006). His research interests are in Artificial Intelligence, Machine Learning and Data Mining, as well as their applications. He is well known for his contributions in the areas of learning and reasoning, in particular for his work on probabilistic and inductive programming. He co-chaired major conferences such as ECML PKDD 2001 and ICML 2005 (the European and international conferences on machine learning) and ECAI 2012, and will chair IJCAI 2022 (the European and international AI conferences). He is on the editorial boards of Artificial Intelligence, Machine Learning and the Journal of Machine Learning Research. He is a EurAI and AAAI fellow, and received an ERC Advanced Grant in 2015.

About the talk

Neuro-Symbolic = Neural + Logical + Probabilistic

The overall goal of neuro-symbolic computation is to integrate high-level reasoning with low-level perception, or, phrased differently, System 1 and System 2. We argue 1) that neuro-symbolic computation should integrate neural networks with the two most prominent methods for reasoning, that is, logic and probability, and 2) that neuro-symbolic integrated methods should have the pure neural, logical and probabilistic methods as special cases. We make some observations on the state of the art with respect to these claims.

This is joint work with Robin Manhaeve, Sebastijan Dumancic, Giuseppe Marra, Thomas Demeester and Angelika Kimmig.