We are happy to announce Tom Schaul, Gabriele Gramelsberger, and Joost Batenburg as keynote speakers.

Tom Schaul

Dr. Tom Schaul has been a senior researcher at DeepMind since 2013. His research focuses on reinforcement learning with deep neural networks, and also spans modular and continual learning, black-box optimisation, off-policy learning about many goals simultaneously, and video-game benchmarks, most recently StarCraft II. Tom grew up in Luxembourg and studied computer science in Switzerland (with exchanges at Waterloo and Columbia), obtaining an MSc from EPFL in 2005. He holds a PhD from TU Munich (2011), completed under the supervision of Jürgen Schmidhuber at the Swiss AI Lab IDSIA. From 2011 to 2013, he completed a postdoc with Yann LeCun at the Courant Institute of NYU.

Gabriele Gramelsberger

Prof. Dr. Gabriele Gramelsberger holds the Chair for Theory of Science and Technology at RWTH Aachen University. Together with Prof. Dr. Stefan Böschen (Chair for Technology and Society) she is responsible for the Master’s program in Governance of Technology and Innovation. In 2018 she founded the CSS Lab, supported by the NRW Digital Fellowship 2017. Her aim is to develop a conceptual framework for the Philosophy of Computational Sciences as well as an open science infrastructure for Computational Science Studies. She is a member of the RWTH Human Technology Center and serves as Vice Dean for Research of the Faculty of Arts and Humanities at RWTH Aachen University. In 2018 the DFG appointed her as a member of the Allianz Initiative “Digitale Information”. In 2019 she became a regular member of the North Rhine-Westphalian Academy of Sciences, Humanities and the Arts.

She received her PhD in philosophy from Freie Universität Berlin in 2002, where she taught from 2004 to 2014. In 2015 she became a Privatdozentin at TU Darmstadt, where she taught from 2014 to 2016. She was a guest researcher at the Max Planck Institute for Meteorology in Hamburg (2007) and a research fellow at the DFG Institute for Advanced Study “Media Cultures of Computer Simulation” at Leuphana University Lüneburg (2014, and 2015 to 2016). In 2016 she became Chair for Philosophy of Digital Media at Witten/Herdecke University, and in 2017 Chair for Theory of Science and Technology at RWTH Aachen University.

Joost Batenburg

Prof. Dr. Joost Batenburg is Professor of Imaging and Visualization at the Leiden Institute of Advanced Computer Science (LIACS) and also leads the Computational Imaging group at CWI, the national research center for mathematics and computer science in the Netherlands. His research focuses on 3D image reconstruction, and in particular on tomographic imaging. From 2013 to 2017 he chaired the EU COST Action EXTREMA on advanced X-ray tomography. His current research focuses on creating a real-time tomography pipeline, funded by an NWO Vici grant. He is responsible for the FleX-Ray lab, where a custom-designed CT system is linked to advanced data processing and reconstruction algorithms. Machine learning and other fields of AI play a key role in his research, improving image quality, speeding up image reconstruction, and enabling novel interactive workflows in 3D imaging.

Title: Challenges in real-time 3D imaging, and how machine learning comes to the rescue.

Abstract: Tomography is a powerful technique for visualizing the interior of an object in 3D from a series of its projections, acquired at different angles during a tomographic scan. At the heart of the technique is a mathematical inverse problem, known as “reconstruction”. At present, the steps of image acquisition, reconstruction, and analysis are usually carried out sequentially, with the data often analyzed long after the scan has finished.
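
To make the inverse problem concrete, here is a minimal 2D sketch using the radon/iradon transforms from scikit-image: the forward problem simulates a scan of a standard test phantom, and filtered back-projection solves the reconstruction problem. This is an illustrative toy, not the real-time pipeline discussed in the talk.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Test object: the classic Shepp-Logan phantom, downscaled for speed.
phantom = rescale(shepp_logan_phantom(), scale=0.5)

# Forward problem: acquire projections at evenly spaced angles (the "scan").
theta = np.linspace(0.0, 180.0, max(phantom.shape), endpoint=False)
sinogram = radon(phantom, theta=theta)

# Inverse problem ("reconstruction"): filtered back-projection recovers
# the interior image from its projections.
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")

rmse = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"FBP reconstruction RMSE: {rmse:.4f}")
```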
In this lecture I will present the research of my team, along with many collaborators, to develop algorithms and software that make the entire tomographic imaging pipeline work in “real time”, while the scan is taking place. Machine learning can address many of the algorithmic challenges involved, but certainly not in a plug-and-play manner. When scanning objects in 3D, we typically do not have access to a large database of training examples, the circumstances of each scan are different, and so are the internal properties of the objects. In addition, training times and memory requirements tend to increase drastically when scaling up from 2D to fully 3D imaging. I will go over each of these problems and present various strategies for making machine learning feasible and efficient in real-time 3D imaging environments. I will illustrate the results with examples from scientific research (real-time synchrotron tomography and real-time electron tomography), industry (real-time quality control) and cultural heritage (interactive technical art investigation).
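
As one hedged illustration of the 2D-to-3D scaling issue mentioned above, a widely used strategy is to train a lightweight 2D network and apply it slice by slice to a 3D volume, instead of training a memory-hungry fully 3D network. The sketch below assumes a trained 2D model; the tiny network here is a hypothetical stand-in, not the method used in the speaker's lab.

```python
import torch
import torch.nn as nn

# Stand-in for a trained 2D denoising/post-processing network
# (hypothetical; a real model would be trained on scan data).
denoiser2d = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

# Toy reconstructed volume: (slices, height, width).
volume = torch.randn(128, 256, 256)

with torch.no_grad():
    processed = torch.stack([
        denoiser2d(sl[None, None])[0, 0]  # add/strip batch+channel dims
        for sl in volume                  # one 2D slice at a time
    ])

print(processed.shape)  # torch.Size([128, 256, 256])
```

Slice-wise processing keeps memory use proportional to a single 2D slice rather than the full volume, at the cost of ignoring context between neighbouring slices.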