Past the Perimeter: Low-Cost Memory Interposer Attacks on Confidential Computing

Abstract

As cloud computing adoption grows, so do concerns about trust and data privacy. Confidential computing, powered by innovative hardware technologies like Intel SGX and AMD SEV, promises strong isolation and transparent memory encryption to protect against privileged attackers and physical threats such as bus snooping and cold boot attacks.

This talk overviews our recent work on BadRAM and BatteringRAM, showing that state-of-the-art memory encryption can be reliably bypassed with limited physical access and roughly $50 of custom hardware. By introducing a novel form of runtime memory aliasing, we defeat even the firmware defenses deployed in response to our earlier findings, ultimately exposing fundamental limitations in today’s scalable confidential computing designs.
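The aliasing primitive can be illustrated with a minimal toy model (a hypothetical Python sketch, not the actual BadRAM tooling): if a DIMM’s SPD chip is rewritten to report twice the real capacity, the memory controller drives an address bit the module never decodes, so every cell becomes reachable under two physical addresses, and writes through the unchecked alias land on the protected original.

```python
# Toy model of SPD-induced memory aliasing (illustrative only).
class AliasedDimm:
    def __init__(self, real_bits=16):
        self.real_bits = real_bits
        self.cells = bytearray(1 << real_bits)      # actual capacity
        self.reported_size = 1 << (real_bits + 1)   # the lie told by the SPD

    def _decode(self, addr):
        # The module ignores the top (unwired) address bit.
        return addr & ((1 << self.real_bits) - 1)

    def read(self, addr):
        return self.cells[self._decode(addr)]

    def write(self, addr, value):
        self.cells[self._decode(addr)] = value

dimm = AliasedDimm()
victim = 0x1234                          # address guarded by access control
alias = victim | (1 << dimm.real_bits)   # unguarded alias of the same cell

dimm.write(alias, 0x41)
assert dimm.read(victim) == 0x41         # the aliased write hits the victim cell
```

The real attack, of course, operates on encrypted DRAM traffic rather than raw bytes; the sketch only captures the address-decoding collision.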



Bio

Jo Van Bulck is a professor in the DistriNet lab at the Department of Computer Science of KU Leuven, Belgium. His research explores attacks and defenses at the hardware-software boundary, with particular attention to privileged side channels in trusted execution environments.

Jo’s research has uncovered several innovative attack vectors in commodity Intel x86 processors that have led to microcode and silicon mitigations in hardware, as well as software patches in major operating systems and compilers.



Photo provided by speaker

Rowhammer bit flips a decade later

Abstract

The first Rowhammer exploit was published a little more than a decade ago on a DDR3-based system. Since then, two generations of DRAM technology with proprietary mitigations have appeared. In this talk, I present our journey in understanding the security guarantees of these mitigations in DDR4 and DDR5 devices through significant platform-building efforts, painstaking reverse engineering, and creative system-level techniques. The results are not encouraging: DRAM is as insecure as it was a decade ago, while the cost of independent security analysis is growing beyond what academia can afford. I finish with a brief discussion of possible paths forward.



Bio

Kaveh is an associate professor at ETH Zurich, where he leads the COMSEC group. Alongside defensive work, he has been involved in the discovery of many high-profile security vulnerabilities in commodity DRAM and CPU chips. He is a proud owner of five Pwnies and many best/distinguished paper awards, including at Oakland, USENIX Security, and MICRO.

Photo © Giulia Marthaler / ETH Zurich

Polynomial-time minimizable automata for omega-regular languages

Abstract

For languages over finite words, automata types that permit polynomial-time minimization are well-known. For languages over infinite words, as used when specifying the behavior of reactive systems, finding an automaton class that has a polynomial-time minimization algorithm proved to be substantially more difficult.
While some such representations exist for so-called lasso languages, their use in applications is limited and tends to be restricted to language learning.

In this talk, we present recent progress towards solving this problem. We start by showing how arbitrary omega-regular languages can be canonically decomposed into a chain of co-Büchi languages, each of which can in turn be made canonical and minimized by representing it as a history-deterministic co-Büchi automaton with transition-based acceptance. We show how to translate such a chain-of-co-Büchi-automata (COCOA) representation into a fixpoint formula for performing reactive synthesis over a game graph.

Afterwards, we consider the question of whether the main ideas of the COCOA representation can be lifted to an automaton model in which the language to be represented is encoded as a single automaton, as is usual in automata theory. We show that a reinterpretation of how history-deterministic co-Büchi automata accept words can be combined with parity acceptance to obtain a polynomial-time minimizable automaton model for arbitrary omega-regular languages. Finally, we show that this new automaton model is useful both for reactive synthesis and for probabilistic verification.



Bio

Rüdiger Ehlers received his doctorate from Saarland University in 2012 and held researcher positions at UC Berkeley and Cornell University before becoming a junior research group leader at the University of Bremen. Since 2019, he has been a professor of embedded systems at Clausthal University of Technology.

Photo provided by speaker

Bachelor@ISEC & Awards 2025

At the event, we present our new open bachelor’s thesis (and master’s thesis) topics and award prizes to excellent students.  

If you’re interested in joining us for your bachelor’s thesis in security, this is the best way to get an impression of our topics and of how a bachelor’s thesis at ISEC works: you’ll hear about our research areas and current hot topics, learn about our Bachelor@ISEC program, where you can work on your thesis alongside fellow students in one of our offices if you like, and maybe you’ll even get to know your future supervisor while chatting.

THE AWARDS:

ISEC Student Research Excellence Award: This award goes to ISEC students who became co-authors of a scientific publication in the context of a thesis or project.

ISEC Bachelor Excellence Award: This award is for students majoring in “Information Security” at TU Graz who have completed their bachelor’s degree with distinction. Application deadline: Oct 9th 2025!

The event will also be the kick-off lecture in Introduction to Scientific Working (ISW), where you will be able to choose your preferred topic!   

We are looking forward to meeting you!

 

Master@ISEC

We’ll give an overview of the Major Information Security and all you need to know about it (including our new Master@ISEC network and its benefits), introduce the newly updated curriculum with several exciting new courses, and provide an opportunity to meet fellow students and lecturers while enjoying some pizza. No matter whether you have just started or are nearing the end of your studies, anyone interested is welcome!

A modular interpretation of the Hessian of elliptic curves

Abstract

In this talk, we will discuss the modular interpretation of the Hessian transformation on elliptic curves. We begin by recalling some classical results concerning the action of the Hessian transformation on the j-invariants and on the Hesse pencil, and we rephrase them in the context of the modular curves X(1) and X(3). Building on the work of Mula, Pintore, and Taufer in the recent preprint (arXiv:2407.17042), we lift the Hessian transformation to maps on 16 different modular curves, including X(6), and analyse their effects on the associated moduli spaces. Finally, we give a numerical representation of the Hessian map on the extended complex upper half-plane H∗. This talk is based on the speaker’s Master’s thesis, supervised by Pintore, Taufer, and Mula.
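As a small illustration of the classical facts recalled above, the following SymPy sketch (assuming the normalization F = x³ + y³ + z³ − 3λxyz for the Hesse pencil; the parameter map shown is derived for this normalization only and is not taken from the talk) checks that the Hessian of a pencil member is again a pencil member, up to a scalar:

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lambda')

# A member of the Hesse pencil, normalized as x^3 + y^3 + z^3 - 3*lambda*x*y*z.
F = x**3 + y**3 + z**3 - 3*lam*x*y*z

# Hessian covariant: determinant of the matrix of second partial derivatives.
H = sp.det(sp.hessian(F, (x, y, z)))

# Under this normalization the Hessian lands back in the pencil (up to the
# scalar -54*lambda^2), with new parameter lambda' = (4 - lambda^3)/(3*lambda^2).
lam2 = (4 - lam**3) / (3 * lam**2)
G = x**3 + y**3 + z**3 - 3*lam2*x*y*z

assert sp.expand(H + 54*lam**2 * G) == 0   # i.e. H = -54*lambda^2 * G
```

This is exactly the kind of action on the pencil parameter that the talk reinterprets modularly on X(3) and its covers.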



Bio

Riccardo Lolato holds a Master’s degree in Mathematics (Cryptography) from the Università degli Studi di Trento.



Photo provided by speaker

25th International Conference on Runtime Verification

From September 15 to 19, 2025, we’re hosting the 25th International Conference on Runtime Verification (RV 2025) at TU Graz!

The RV series is an annual event that brings together researchers and practitioners from academia and industry who are interested in novel, lightweight formal methods for monitoring, analyzing, and guiding the runtime behavior of software and hardware systems. Runtime verification techniques play a vital role in ensuring system correctness, reliability, and robustness. They offer an additional layer of rigor and effectiveness compared to conventional testing, while remaining more practical than exhaustive formal verification.

This year’s edition features three co-located workshops – RVmeetsMBD, RVCase, and VASSAL – which will take place on September 15. We are also delighted to welcome an outstanding lineup of keynote speakers: Thomas Henzinger, Nils Jansen, Ankush Desai, and Daniela Micucci.

Memory-Centric Computing: Enabling Fundamentally Efficient & Intelligent Machines

Abstract
Computing is bottlenecked by data. Large amounts of application data overwhelm the storage, communication, and computation capabilities of the modern machines we design today. As a result, the performance, efficiency, and scalability of many key applications are bottlenecked by data movement. In this talk, we describe three major shortcomings of modern computers in terms of 1) dealing with data, 2) taking advantage of vast amounts of data, and 3) exploiting different semantic properties of application data. We argue that an intelligent computing architecture should be designed to handle data well, and we posit that handling data well requires designing architectures based on three key principles: being 1) data-centric, 2) data-driven, and 3) data-aware.

We give examples of how to exploit these principles to design much more efficient and higher-performance computing systems. We especially discuss recent research that aims to fundamentally reduce memory latency and energy and to practically enable computation close to data, with at least two promising directions: 1) processing using memory, which exploits the fundamental operational properties of memory chips to perform massively parallel computation in memory with low-cost changes, and 2) processing near memory, which integrates sophisticated additional processing capability into memory chips, the logic layer of 3D-stacked technologies, or memory controllers to enable near-memory computation with high memory bandwidth and low memory latency.

We show that both types of architectures can enable orders-of-magnitude improvements in the performance and energy consumption of many important workloads, such as artificial intelligence, machine learning, graph analytics, database systems, video processing, climate modeling, and genome analysis. We discuss how to enable the adoption of such fundamentally more intelligent architectures, which are key to efficiency, performance, and sustainability.
We conclude with some research opportunities in and guiding principles for future computing architecture and system designs.
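The processing-using-memory direction can be illustrated with a toy sketch in the spirit of published triple-row-activation designs such as Ambit (this Python model and its names are illustrative, not code from the talk): activating three DRAM rows simultaneously makes each bitline settle to the bitwise majority of the three stored values, and AND/OR then fall out by pinning the third row to all-zeros or all-ones.

```python
# Toy functional model of triple-row activation (illustrative only).
def triple_row_activate(row_a, row_b, row_c):
    """Bitwise majority of three equally sized DRAM rows, one word at a time."""
    return [(a & b) | (b & c) | (a & c) for a, b, c in zip(row_a, row_b, row_c)]

A = [0b1100, 0b1010]
B = [0b1010, 0b0110]
ZEROS = [0b0000, 0b0000]   # control row pinned to 0 -> majority computes AND
ONES  = [0b1111, 0b1111]   # control row pinned to 1 -> majority computes OR

assert triple_row_activate(A, B, ZEROS) == [a & b for a, b in zip(A, B)]
assert triple_row_activate(A, B, ONES)  == [a | b for a, b in zip(A, B)]
```

The appeal is that a single activation operates on an entire multi-kilobyte row at once, which is where the massive parallelism of in-memory computation comes from.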

An accompanying overview of modern memory-centric computing ideas & systems can be found at arxiv.org/pdf/2012.03112 (“A Modern Primer on Processing in Memory”, updated February 2025).

A shorter invited paper from IMW 2025 is at arxiv.org/pdf/2505.00458 (“Memory-Centric Computing: Solving Computing’s Memory Problem”, May 2025).



Bio

Onur Mutlu is a Professor of Computer Science at ETH Zurich. He previously held the William D. and Nancy W. Strecker Early Career Professorship at Carnegie Mellon University. His current research interests are in computer architecture, computing systems, hardware security, memory & storage systems, and bioinformatics, with a major focus on designing fundamentally energy-efficient, high-performance, and robust computing systems. Many techniques he, with his group and collaborators, has invented over the years have largely influenced industry and have been widely employed in commercial microprocessors and memory & storage systems used daily by hundreds of millions of people. He obtained his PhD and MS in ECE from the University of Texas at Austin and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. He started the Computer Architecture Group at Microsoft Research (2006-2009), and held product, research and visiting positions at Intel Corporation, Advanced Micro Devices, VMware, Google, and Stanford University. He received various honors for his research, including the 2025 IEEE Computer Society Harry H. Goode Memorial Award “for seminal contributions to computer architecture research and practice, especially in memory systems,” 2024 IFIP WG10.4 Jean-Claude Laprie Award in Dependable Computing (for the original RowHammer work), 2022 Persistent Impact Prize of the Non-Volatile Memory Systems Workshop (for original architectural work on Phase Change Memory), 2021 IEEE High Performance Computer Architecture Conference Test of Time Award (for the Runahead Execution work), 2020 IEEE Computer Society Edward J. McCluskey Technical Achievement Award, 2019 ACM SIGARCH Maurice Wilkes Award and more than thirty best paper, “Top Pick” paper, or test-of-time recognitions at various leading computer systems, architecture, and security venues. He is an ACM Fellow, IEEE Fellow, and an elected member of the Academy of Europe. 
He enjoys teaching, mentoring, and enabling & democratizing access to high-quality research and education. He has supervised 24 PhD graduates, many of whom received major dissertation awards, 15 postdoctoral trainees, and more than 60 Master’s and Bachelor’s students. His computer architecture and digital logic design course lectures and materials are freely available on YouTube (OnurMutluLectures@CMUCompArch), and his research group makes a wide variety of open-source artifacts freely available online. For more information, please see his webpage.



Photo provided by speaker

Green Intelligent & Connected Systems with Sensory Intelligence on Chip: Pushing AI Out of the Cloud and into the Physical World

Abstract 

Recent semiconductor scaling trends continue to support the evolution of intelligent and connected silicon systems. This evolution vastly exceeds the scale of any application ever deployed by human beings, and its sustained growth is now fundamentally impeded by the ludicrously high levels of power consumption that next-generation datacenters are expected to require. At the same time, moving intelligence into trillion-scale distributed edge devices is fundamentally impeded by batteries, which threaten the economic and environmental sustainability of the underlying scaling trend, and hence its feasibility.

This talk introduces key ideas and silicon demonstrations to enable a new breed of always-on silicon systems with sensory intelligence and no battery inside (or any other energy storage). Highly power-scalable systems that adapt to the highly fluctuating power profile of energy harvesters are shown to enable next-generation pervasive integrated systems with cost well below $1, size of a few millimeters, and lifetime well beyond the traditional shelf life of batteries, yet at near-100% up-time.

Sensor interfaces, processors and wireless transceivers fitting existing infrastructure (e.g., WiFi, Bluetooth) with power reductions by orders of magnitude and down to sub-leakage are exemplified by numerous silicon demonstrations from our Green IC research group, along with their system integration. Ultimately, the technological pathway discussed in this talk supports sustainable growth of applications leveraging large-scale deployments of silicon systems, making our planet smarter. And greener too.



Bio

Massimo Alioto is Provost’s Chair Professor at the ECE Department of the National University of Singapore, where he leads the Green IC group and the Integrated Circuits and Embedded Systems area. Previously, he held positions at the University of Siena, Intel Labs – CRL (2013), University of Michigan – Ann Arbor (2011-2012), University of California – Berkeley (2009-2011), and EPFL – Lausanne.

He is (co)author of 400 publications in journals and conference proceedings, and of four books with Springer (with two more forthcoming). His primary research interests include ultra-low-power and self-powered systems, green computing, circuits for machine intelligence, hardware security, and emerging technologies.

He was the Editor in Chief of the IEEE Transactions on VLSI Systems and Deputy Editor in Chief of the IEEE Journal on Emerging and Selected Topics in Circuits and Systems. He was the Chair of the Distinguished Lecturer Program for the IEEE CAS Society and a Distinguished Lecturer for the SSC and CAS Societies. Previously, Prof. Alioto was the Chair of the “VLSI Systems and Applications” Technical Committee of the IEEE Circuits and Systems Society (2010-2012). He served as Guest Editor of numerous journal special issues (JSSC, TCAS-I, JETCAS…), Technical Program Chair of several IEEE conferences (ISCAS, SOCC, PRIME, ICECS), and TPC member (ISSCC, ASSCC). His research group’s contributions have been recognized through various best paper awards (e.g., ISSCC) and in the ten technological highlights of the TSMC annual report, among others. Prof. Alioto is an IEEE Fellow.




Photo provided by speaker

Tweakable enciphering modes and their committing security

Abstract
A tweakable enciphering mode (TEM) is a cryptographic primitive that provides length-preserving encryption. In 2024, the National Institute of Standards and Technology (NIST) issued the Accordion call to standardize future-proof TEMs. TEMs serve as building blocks for various modes of operation, including authenticated encryption (AE), deterministic AE (DAE), and disk encryption. NIST has identified context commitment (CMT-4) as an important security objective for TEMs when used in AE/DAE.

We will start the talk by discussing the challenges of building CMT-4-secure TEMs. In particular, we show that many existing TEMs, such as HCTR2 and Adiantum, fail to achieve CMT-4 security. We discuss different approaches to remedy the situation and conclude by proposing novel TEM designs, which are the first to achieve provable CMT-4 security.
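Why length-preserving encryption struggles with commitment can be seen from a deliberately toy construction (a hypothetical Python sketch; `enc`/`dec` here are not a real TEM such as HCTR2 or Adiantum): when every ciphertext decrypts to some valid-looking plaintext under every key, no key is committed to by the ciphertext.

```python
import hashlib

# Toy length-preserving "cipher": XOR with a key-derived pad (illustration
# only -- insecure, and deliberately so, to make the commitment failure plain).
def pad(key, n):
    return hashlib.shake_256(key).digest(n)

def enc(key, msg):
    return bytes(m ^ p for m, p in zip(msg, pad(key, len(msg))))

def dec(key, ct):
    return enc(key, ct)  # XOR is its own inverse

k1, k2 = b'key-one', b'key-two'
m1 = b'attack at dawn!!'
c = enc(k1, m1)

# Under a *different* key, the same ciphertext decrypts to another perfectly
# "valid" plaintext: the ciphertext commits to neither key nor message.
m2 = dec(k2, c)
assert enc(k2, m2) == c and m2 != m1
```

A CMT-4-secure design must rule out exactly this: finding two distinct (key, tweak, message) contexts that explain the same ciphertext should be computationally infeasible.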