30.04.2026
Formal Verification of Neural Networks: Guarantees Beyond Testing

Laura Nenzi | Start: 12:00 | CCG seminar room (CCGEG002), Sandgasse 38a
Abstract
Neural networks are increasingly deployed in safety-critical domains, where failures can have severe consequences. However, standard evaluation based on testing and validation datasets cannot provide formal guarantees about model behavior under all possible inputs. This talk introduces neural network verification, focusing on methods that provide provable guarantees of safety and robustness. We present the problem as checking whether a model satisfies a given property over a set of inputs, such as robustness to bounded perturbations or compliance with safety constraints. We then survey key approaches, including exact methods based on mixed-integer linear programming (MILP) and SMT solving, as well as more scalable but approximate techniques based on over-approximation and bound propagation. Simple examples illustrate how these methods work and highlight the trade-off between precision and scalability. The talk concludes with a brief overview of current limitations and emerging research directions, including certified robustness and integration with training procedures.
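The over-approximation idea mentioned in the abstract can be illustrated with interval bound propagation (IBP), one of the simplest bound-propagation techniques: an input box is pushed through the network layer by layer, yielding sound (but possibly loose) output bounds. The sketch below is illustrative only; the network weights and the property checked are assumptions for the example, not material from the talk.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate the box [l, u] through the affine map x -> W @ x + b."""
    c, r = (l + u) / 2, (u - l) / 2      # center and radius of the box
    c2 = W @ c + b                        # center maps exactly
    r2 = np.abs(W) @ r                    # radius grows by |W| (worst case per row)
    return c2 - r2, c2 + r2

def ibp_relu(l, u):
    """ReLU is monotone, so it can be applied to the bounds elementwise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def output_bounds(layers, l, u):
    """Sound output bounds for a ReLU network given as a list of (W, b) pairs."""
    for i, (W, b) in enumerate(layers):
        l, u = ibp_affine(l, u, W, b)
        if i < len(layers) - 1:           # no activation after the last layer
            l, u = ibp_relu(l, u)
    return l, u

# Hypothetical 2-3-1 network and an L-infinity ball of radius eps around x0.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=3)
W2, b2 = rng.normal(size=(1, 3)), rng.normal(size=1)
x0, eps = np.array([0.5, -0.2]), 0.1

lo, hi = output_bounds([(W1, b1), (W2, b2)], x0 - eps, x0 + eps)
# If hi < 0 (or lo > 0) on an output, the sign of that output is certified
# for *every* input in the ball -- a guarantee no finite test set can give.
```

Because each step over-approximates, the returned interval always contains every reachable output, which is what makes the check sound; the price, as the abstract notes, is precision: the bounds can be much looser than the true reachable set, and exact methods such as MILP or SMT solving trade scalability for tightness.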

Bio
Laura Nenzi is an Associate Professor in Computer Science at the University of Trieste. Her research lies in formal methods for complex systems, with a focus on runtime verification, spatio-temporal logics, and the formal analysis of cyber-physical systems. She is also interested in verified and explainable artificial intelligence, aiming to provide rigorous guarantees for modern data-driven models.