Keynote Talks

Automated-Reasoning Revolution: From Theory to Practice and Back

For the past 40 years, computer scientists have generally believed that NP-complete problems are intractable. In particular, Boolean satisfiability (SAT), as a paradigmatic automated-reasoning problem, has been considered intractable. Over the past 20 years, however, there has been a quiet but dramatic revolution, and very large SAT instances are now solved routinely as part of software and hardware design. In this talk I will review this amazing development and show how automated reasoning is now an industrial reality.

I will then describe how we can leverage SAT solving to accomplish other automated-reasoning tasks. Sampling satisfying truth assignments of a given Boolean formula uniformly at random, and counting the number of such assignments, are both fundamental computational problems in computer science, with applications in software testing, software synthesis, machine learning, personalized learning, and more. While the theory of these problems has been thoroughly investigated since the 1980s, the approximation algorithms developed by theoreticians do not scale to industrial-sized instances; the algorithms used in industry scale better, but sacrifice correctness guarantees to do so. I will describe a novel approach, based on universal hashing and Satisfiability Modulo Theories (SMT), that scales to formulas with hundreds of thousands of variables without giving up correctness guarantees.
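
The hashing idea can be sketched in a few lines (a brute-force illustration only; the names are made up, and real tools such as ApproxMC delegate the search within each cell to a SAT/SMT solver rather than enumerating assignments): random XOR constraints split the solution space into roughly equal cells, so counting the solutions in one cell and scaling up estimates the total count.

    import itertools
    import random

    def formula(a):
        # Toy formula over six variables:
        # (x0 or x1) and (not x2 or x3) and (x4 or not x5)
        return (a[0] or a[1]) and ((not a[2]) or a[3]) and (a[4] or (not a[5]))

    def estimate_count(n_vars, m, trials=51):
        estimates = []
        for _ in range(trials):
            # Each hash constraint fixes the XOR (parity) of a random
            # subset of variables to a random bit.
            constraints = [
                (random.sample(range(n_vars), random.randint(1, n_vars)),
                 random.getrandbits(1))
                for _ in range(m)
            ]
            survivors = 0
            for a in itertools.product([False, True], repeat=n_vars):
                if formula(a) and all(
                    sum(a[i] for i in subset) % 2 == bit
                    for subset, bit in constraints
                ):
                    survivors += 1
            # One cell holds roughly (total / 2^m) solutions, so scale up.
            estimates.append(survivors * 2 ** m)
        estimates.sort()
        return estimates[len(estimates) // 2]  # median stabilises the estimate

    exact = sum(formula(a) for a in itertools.product([False, True], repeat=6))
    print("exact:", exact, "estimate:", estimate_count(6, m=2))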

Fintech and Its Scientific Drivers

FinTech can be seen as a meeting of minds between economics and computer science in the digital age. Among its major intellectual foundations are reliable distributed computing and cryptography on the computer-science side, and efficient mechanism design for financial activities on the economics side. In this talk we discuss some recent work on auctions and blockchains from this perspective. For example, is it true that more revenue can always be extracted from an auction whose bidders are more willing to pay? Can more revenue be extracted when the bidders are more risk-tolerant? We also present some new results on blockchain fees. These results help shed light on structural questions in economics whose answers are non-obvious.
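
As a point of reference for such revenue questions (a toy simulation, not taken from the talk): in the textbook sealed-bid second-price auction with n i.i.d. Uniform(0,1) bidders, truthful bidding is a dominant strategy and the winner pays the second-highest bid, so the expected revenue is (n-1)/(n+1).

    import random

    def second_price_revenue(n_bidders, rounds=100_000):
        # With truthful bidding, bids equal valuations; the winner
        # pays the second-highest bid.
        total = 0.0
        for _ in range(rounds):
            bids = sorted(random.random() for _ in range(n_bidders))
            total += bids[-2]
        return total / rounds

    for n in (2, 3, 5, 10):
        print(n, round(second_price_revenue(n), 3),
              "theory:", round((n - 1) / (n + 1), 3))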

Towards Knowledge Graph Based Representation, Augmentation and Exploration of Scholarly Communication

Despite improved digital access to scientific publications in recent decades, the fundamental principles of scholarly communication remain unchanged and continue to be largely document-based. The document-oriented workflows in science have reached the limits of adequacy, as highlighted by recent discussions on the increasing proliferation of scientific literature, the deficiencies of peer review, and the reproducibility crisis. We need to represent, analyse, augment and exploit scholarly communication in a knowledge-based way, by expressing and linking scientific contributions and related artefacts through semantically rich, interlinked knowledge graphs. This should be based on a deep semantic representation of scientific contributions; their manual, crowd-sourced and automatic augmentation; and, finally, intuitive exploration of and interaction with the resulting scientific knowledge base, for example through question answering. We need to synergistically combine automated extraction and augmentation techniques with large-scale collaboration to reach an unprecedented level of knowledge-graph breadth and depth. As a result, knowledge-based information flows can facilitate completely new ways of search and exploration. The efficiency and effectiveness of scholarly communication will significantly increase, since ambiguities are reduced, reproducibility is facilitated, redundancy is avoided, provenance and contributions can be better traced, and the interconnections of research contributions are made more explicit and transparent. In this talk we will present first steps in this direction in the context of our Open Research Knowledge Graph initiative and the ScienceGRAPH project.
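
As a small illustration of the representation idea (the URIs and property names below are invented for the example and are not the actual ORKG schema; the third-party rdflib package is assumed), a scientific contribution can be expressed as knowledge-graph triples and then queried structurally rather than by keyword search over documents.

    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/scholarly/")

    g = Graph()
    paper = EX["paper42"]
    contribution = EX["paper42-contribution1"]

    # Express the paper's contribution as explicit, linkable statements.
    g.add((paper, RDF.type, EX.Paper))
    g.add((paper, EX.hasContribution, contribution))
    g.add((contribution, EX.addressesProblem, EX["reproducibility"]))
    g.add((contribution, EX.usesMethod, EX["crowdsourced-curation"]))
    g.add((contribution, EX.hasResult, Literal("knowledge-based search")))

    # Structured retrieval with SPARQL instead of keyword search:
    q = """
        SELECT ?c ?problem WHERE {
            ?p <http://example.org/scholarly/hasContribution> ?c .
            ?c <http://example.org/scholarly/addressesProblem> ?problem .
        }
    """
    for row in g.query(q):
        print(row.c, "addresses", row.problem)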

Safety Verification for Deep Neural Networks with Provable Guarantees

Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. This lecture will describe progress in developing automated verification and testing techniques for deep neural networks that ensure the safety and security of their classification decisions with respect to input manipulations. The techniques exploit the Lipschitz continuity of the networks and aim to approximate, for a given set of inputs, the reachable set of network outputs in terms of lower and upper bounds, in an anytime manner and with provable guarantees. We develop novel algorithms based on feature-guided search, games and global optimisation, and evaluate them on state-of-the-art networks. We also develop foundations for probabilistic safety verification of Gaussian processes, with application to neural networks.
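
As a simplified illustration of bounding the reachable outputs (the talk's own techniques are Lipschitz-based search, games and global optimisation; the sketch below instead uses plain interval bound propagation, a cruder but easily stated method), one can push a box of inputs through a small ReLU network and obtain sound lower and upper bounds on every output.

    import numpy as np

    def affine_bounds(lo, hi, W, b):
        # Positive weights propagate the same bound, negative weights the
        # opposite one; the result soundly brackets W @ x + b for x in [lo, hi].
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def reachable_bounds(lo, hi, layers):
        for i, (W, b) in enumerate(layers):
            lo, hi = affine_bounds(lo, hi, W, b)
            if i < len(layers) - 1:
                # ReLU is monotone, so applying it to the bounds is sound.
                lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
        return lo, hi

    rng = np.random.default_rng(0)
    layers = [(rng.normal(size=(4, 2)), rng.normal(size=4)),
              (rng.normal(size=(3, 4)), rng.normal(size=3))]
    x = np.array([0.5, -0.2])
    eps = 0.05  # L-infinity perturbation radius around the input
    lo, hi = reachable_bounds(x - eps, x + eps, layers)
    print("output lower bounds:", lo)
    print("output upper bounds:", hi)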

On Termination of Probabilistic Programs

Program termination is a key question in program verification. This talk considers the termination of probabilistic programs: programs that can describe randomised algorithms and have more recently received attention in machine learning. Termination of probabilistic programs has some unexpected effects. Such programs may diverge, but only with probability zero; they then terminate almost surely (AST). Running in sequence two AST programs that both have a finite expected termination time (so-called positive AST) may yield an AST program whose expected termination time is infinite. Thus positive AST is not compositional with respect to sequential program composition. This talk discusses why proving positive AST (and AST) is harder than the halting problem, presents a powerful proof rule for establishing AST, and sketches a Dijkstra-like weakest-precondition calculus for proving positive AST in a fully compositional manner.
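
The non-compositionality phenomenon can be made concrete with a classic two-loop example, sketched below as a simulation (illustrative code, not from the talk): a geometric loop that doubles x has expected runtime 1, and a countdown loop certainly terminates in x steps for any fixed x, yet their sequential composition has infinite expected runtime.

    import random

    def composed_program():
        steps = 0
        x = 1
        # Loop 1: positively AST; the expected number of iterations is 1.
        while random.random() < 0.5:
            x *= 2
            steps += 1
        # Loop 2: certainly terminates, in x steps for any fixed x.
        while x > 0:
            x -= 1
            steps += 1
        return steps

    # Every run terminates (the composition is still AST), but the sample
    # mean of the runtime keeps drifting upward instead of converging:
    # E[runtime] >= E[x after loop 1] = sum_k 2^-(k+1) * 2^k, which diverges.
    for n in (10**3, 10**4, 10**5):
        runs = [composed_program() for _ in range(n)]
        print(n, "runs, mean runtime:", sum(runs) / n)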

The Power of Symbolic Automata and Transducers

Symbolic automata and transducers extend finite automata and transducers by allowing transitions to carry predicates and functions over rich alphabet theories, such as linear arithmetic. These models therefore extend their classic counterparts to operate over infinite alphabets, such as the set of rational numbers. Due to their expressiveness, symbolic automata and transducers have been used to verify functional programs operating over lists and trees, to prove the correctness of complex implementations of BASE64 and UTF encoders, and to expose data parallelism in computations that may otherwise seem inherently sequential. In this talk, I give an overview of what is currently known about symbolic automata and transducers, as well as their variants. I discuss what makes these models different from their finite-alphabet counterparts, what kinds of applications symbolic models can enable, and what challenges arise when reasoning about these formalisms. Finally, I present a list of open problems and research directions that relate to both the theory and practice of symbolic automata and transducers.
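
A toy sketch of the core idea (illustrative code, not any particular library): transitions are guarded by predicates rather than concrete letters, so a handful of transitions suffices to define a language over an infinite alphabet such as the integers.

    # Predicates over an infinite alphabet (arbitrary integers) replace
    # concrete letters on the transitions.
    NEG = lambda c: c < 0
    NONNEG = lambda c: c >= 0

    # state -> list of (guard predicate, successor state)
    TRANSITIONS = {
        "q0": [(NEG, "q1")],
        "q1": [(NONNEG, "q0")],
    }
    ACCEPTING = {"q0"}

    def accepts(word, state="q0"):
        # Accepts integer sequences that alternate between negative and
        # non-negative values, starting with a negative one.
        for c in word:
            for guard, successor in TRANSITIONS.get(state, []):
                if guard(c):
                    state = successor
                    break
            else:
                return False  # no guard holds: the run is stuck
        return state in ACCEPTING

    print(accepts([-3, 7, -1, 0]))  # True
    print(accepts([-3, -2]))        # False: -2 violates the NONNEG guard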