This year we have extended the industry track to an industry & society afternoon, not just to discuss business applications of AI and machine learning, but also the impact of AI on society: the good, the bad and the ugly. The program consists of an opening panel, as well as two invited talks followed by discussion.
The AI rat race: if we are all trying to be first, who will win?
The pursuit of AI is often portrayed as a race: who will get there first? The US, Europe or China? Industry or government? Academic research or industry? But the question is: will anyone really win if it is approached like this? Shouldn't we rather collaborate? And how can this be achieved, especially when collaborating across many different partners, from funded research programs to looser collaboration networks? How do we ensure that society, consumers and citizens also benefit, and that AI is applied responsibly? And how can (early-stage) researchers take part and benefit?
- Sabine Demey, Director Flanders Artificial Intelligence Program & IMEC
- Holger Hoos, Leiden University & CLAIRE
- Stefan Leijnen, University of Applied Sciences Utrecht & The Netherlands AI Coalition
- Hilde Weerts, Eindhoven University of Technology
The panel members will be interviewed by Peter van der Putten, Leiden University & Pegasystems.
Bursting bubbles, busting bias and beating bullies
Guy De Pauw: Social Media Monitoring of Hate Speech and Disinformation
Abstract: In a recent longitudinal hate speech study, Textgain reported unsettling increases in the proliferation of online toxicity and disinformation on social media. Contrary to the hopes of many sociologists that people would come together again during the coronavirus pandemic, this volume is still rapidly increasing, with newly emerging conspiracy theories, disinformation, and propaganda aimed at fuelling civic unrest. Such large data streams can no longer be processed manually, which makes it difficult for policy makers to develop effective, up-to-date mitigation strategies. In this presentation, we will discuss ongoing work at Textgain on deploying social media monitoring tools to detect hate speech and disinformation, as well as novel strategies to counter them.
Bio: Guy De Pauw is a language engineer, developing Artificial Intelligence technology for the automated processing of user-generated content. He is the co-founder and CEO of the University of Antwerp spin-off Textgain, a company that leverages such AI technology for societal good. With a background in linguistics and machine learning, he is continuously exploring how self-learning technology can find meaningful patterns in massive content streams and how these patterns can be transformed into actionable insights.
David Graus: AI for bursting filter bubbles and addressing bias in hiring
Lead Data Scientist & Data Science Chapter Lead, Randstad Groep Nederland
Abstract: In the current polarized debate around AI in society, algorithmic matching, recommender systems, and personalization algorithms are often equated with toxic and manipulative systems that society is at the mercy of. In computer science research, however, these supposed detrimental effects of recommender systems turn out to be hard to prove, and are often disputed.
In his talk, David will show how these AI systems can be employed for societal good, showcasing examples of how news personalization can result in more diverse, serendipitous, and dynamic news consumption, and how, in the recruitment domain, algorithms can yield fairer hiring by effectively addressing human bias.
Bio: David Graus is lead data scientist at Randstad Groep Nederland, the global leader in the HR services industry. He works on algorithmic matching, fairness, natural language processing and knowledge graphs. David obtained his PhD in information retrieval in 2017 at the University of Amsterdam. Prior to joining Randstad, David worked on the award-winning BNR SMART Radio app and on news personalization at Het Financieele Dagblad.