Master's Thesis: A Verified, Analysis-Focused Interpreter for WebAssembly

My master’s thesis about WebAssembly is now available under Publications.


WebAssembly is a standardized general-purpose bytecode language that primarily targets usage in web applications. Alongside its English prose specification, it also has an official formal specification that describes the language semantics unambiguously in a natural deduction style. This motivates the development of verified tools for static analysis and optimization, since the formal specification can serve as a direct basis to verify against. However, this specification of execution by means of reduction rules has certain inconvenient properties, such as mixing data and code and lacking any notion of addresses in a program. This makes it impossible to apply techniques such as Abstract Interpretation, which determine and store information for specific program points. We overcome these limitations by developing a new interpreter that uses an explicit call stack and value stack, in addition to explicitly tracking a program counter on top of immutable code. This interpreter is developed directly on top of the WebAssembly mechanization developed by Conrad Watt in Isabelle/HOL, a generic theorem proving and logic system. We then present a proof of completeness for our interpreter, showing that it can indeed represent any execution allowed by the WebAssembly specification, which makes it a suitable basis for static analysis. As a side effect, we also show that real-world interpreters taking a similar approach with explicit stacks and program counters can cover the entire WebAssembly language. In addition to the interpreter, we extend the mechanization by formalizing the specification of integer types and operations, as this part had still been missing. We also supply executable implementations of these integer operations and verify them against the specification.
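To give a rough idea of the interpreter style described above, here is a minimal, hypothetical Python sketch (not the thesis's Isabelle/HOL code) of an interpreter with an explicit value stack and a program counter over immutable code, rather than one that rewrites the program via reduction rules; the instruction names are loosely modeled on WebAssembly:

```python
# Minimal sketch of an explicit-stack interpreter: the code is never modified,
# only the program counter and the value stack change. (Illustrative only;
# the actual formalization in the thesis is in Isabelle/HOL.)

def run(code):
    stack = []   # explicit value stack
    pc = 0       # program counter into the immutable instruction sequence
    while pc < len(code):
        op, *args = code[pc]
        if op == "i32.const":
            stack.append(args[0])
        elif op == "i32.add":
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) % 2**32)  # wrap like a 32-bit integer
        elif op == "drop":
            stack.pop()
        pc += 1
    return stack

program = [("i32.const", 2), ("i32.const", 40), ("i32.add",)]
print(run(program))  # → [42]
```

The key property is that every piece of execution state (here `pc` and `stack`; in the full interpreter also the call stack) lives at an identifiable program point, which is exactly what address-based analyses such as Abstract Interpretation need.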

Introducing Projects in Rizin

Article on

When manually analyzing a complex binary, possibly over the course of days, weeks or even months, it is crucial to be able to keep track of the gained knowledge through annotations such as comments, function and variable names. As such, the tool one is working with also needs to provide a reliable and future-proof way to save and restore this information. One of the biggest additions in Rizin is surely the new projects feature, which provides exactly this functionality, both in rizin on the command line and in Cutter. In this article, we would like to give an overview of how it was designed, what exactly it promises to you, and the current limitations you should be aware of when using it right now.


  • Projects can be used in rizin using the Ps [<project.rzdb>] and Po <project.rzdb> commands and in Cutter through its regular user interface.

Continue reading

Chiaki - bringing PlayStation 4 Remote Play to Linux at MiniDebConf

I was invited to the MiniDebConf Online #2 “Gaming Edition” to give a talk about Chiaki. The recording can be viewed on the MDCO Website.


Chiaki is an open source implementation of Sony’s PlayStation 4 Remote Play, a game streaming protocol that allows the PlayStation 4 to be played remotely over the network. The talk contains a brief overview of the protocol and the reverse engineering process that was used to obtain all of this information, as well as a few short demos of Chiaki in action. It is intended for anyone interested in the technical details of low-latency game streaming or reverse engineering closed source applications.

SMT - Bit Vectors

My Master's seminar paper about SMT solving in the theory of bit vectors is now available under Publications.


Due to how real computers are designed, SMT formulas built on unbounded integers are generally unsuited for deciding properties of program code; instead, sequences of bits, called bit vectors, are used to represent machine integers. After establishing a calculus for SMT formulas on such data, this document elaborates on how these can be automatically decided by using the flattening approach to translate them into propositional logic, which can be solved by any SAT solver. This approach is extended to incremental flattening, which is able to decide certain types of formulas significantly faster. For these processes, an exemplary implementation in Haskell is developed, which is finally used to solve a small CrackMe challenge as an example application.
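As a toy illustration of the flattening idea (this is not the paper's Haskell implementation): a bit-vector constraint such as `a + b == s` can be reduced to purely propositional reasoning over individual bit variables by encoding a ripple-carry adder. Here a brute-force search stands in for the SAT solver, which is enough to show the principle:

```python
# Toy flattening sketch: a bit-vector addition constraint expressed purely
# over boolean bit values, decided by brute force instead of a real SAT solver.

from itertools import product

WIDTH = 4

def to_bits(n):
    # Little-endian list of WIDTH booleans.
    return [bool((n >> i) & 1) for i in range(WIDTH)]

def from_bits(bits):
    return sum(1 << i for i, b in enumerate(bits) if b)

def ripple_carry_equals(a_bits, b_bits, s_bits):
    # Propositional check of a + b == s (mod 2**WIDTH) via a ripple-carry
    # adder: sum_i = a_i XOR b_i XOR carry_i, with the usual carry chain.
    carry = False
    for a, b, s in zip(a_bits, b_bits, s_bits):
        if (a ^ b ^ carry) != s:
            return False
        carry = (a and b) or (carry and (a ^ b))
    return True

# Decide: is there some 4-bit b with 5 + b == 12?
a_bits, s_bits = to_bits(5), to_bits(12)
for b_bits in product([False, True], repeat=WIDTH):
    if ripple_carry_equals(a_bits, list(b_bits), s_bits):
        print(from_bits(list(b_bits)))  # → 7
        break
```

In a real flattening procedure, each bit becomes a fresh propositional variable and the adder structure becomes clauses handed to a SAT solver; the brute-force loop here merely plays that solver's role for four bits.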

Background Tasks in radare2

Article on

Recently, I have been working on improving performance in Cutter, the radare2 GUI, especially when working with larger binaries. One major issue was that almost everything that accessed r2, such as updating the list of functions, strings, etc., was running on the main thread, thus freezing the UI. While this is barely noticeable with smaller binaries, it can have a severe impact on usability for larger ones. The obvious solution is to somehow run the work in the background and update the UI when it is done. Unfortunately, r2 was never designed to be multi-threaded. This may intuitively seem like a poor design choice; however, there are clear reasons behind it: First, apart from a potential speed-up, multi-threading would have almost no practical advantages when accessing r2 from its classic command line interface. And second, ensuring all necessary parts of the code are thread-safe would introduce enormous additional overhead, possibly even lowering the overall performance.
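One common way around a non-thread-safe core, sketched here in Python for illustration (this is not Cutter's actual C++/Qt code), is to funnel all access to it through a single background worker thread: the core itself then never needs locking, the UI thread stays responsive, and results are handed back via a callback once a task finishes:

```python
# Illustrative pattern: serialize every call into a non-thread-safe core
# through one worker thread, keeping the UI (main) thread free.

import queue
import threading

class CoreWorker:
    def __init__(self):
        self.tasks = queue.Queue()
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        while True:
            task, on_done = self.tasks.get()
            if task is None:      # shutdown sentinel
                break
            result = task()       # only this thread ever touches the core
            on_done(result)       # a real GUI would post this to the UI thread

    def submit(self, task, on_done):
        self.tasks.put((task, on_done))

    def shutdown(self):
        self.tasks.put((None, None))
        self.thread.join()

worker = CoreWorker()
done = threading.Event()
results = []
# Stand-in for an expensive core call, e.g. listing all functions.
worker.submit(lambda: ["fcn.main", "fcn.entry0"],
              lambda r: (results.append(r), done.set()))
done.wait()
worker.shutdown()
print(results[0])  # → ['fcn.main', 'fcn.entry0']
```

Because only one thread ever calls into the core, no part of the core code itself needs to be made thread-safe, which matches the design constraints described above.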

Continue reading

Bachelor's Thesis: Volume Rendering of Meteorological Simulation Data

I have recently finished my Bachelor’s Thesis on integrating two different volume lighting methods in Met.3D, a free and open source application for visualizing meteorological simulation data. The thesis is now available under Publications.


This thesis presents the integration of two different volume lighting methods to be used for visualization of meteorological simulation data, in particular cloud data, in Met.3D, an open-source visualization application for meteorological uses. The first method is primarily a visuals-based approach, which simulates single scattering and imitates certain visual phenomena appearing in real-world clouds. The second, while generally requiring more computational effort, is a specific implementation of the photon mapping algorithm and directly simulates multiple scattering by tracing photons through the volume. It takes advantage of the Henyey-Greenstein function in order to be able to only use 3D textures as the data structure of the photon map. We use our implementations of these methods to produce a number of images and analyze them with respect to visual outcome, physical accuracy and performance. In addition, we compare images created with the photon mapping method to results that were generated using MFASIS, a method to simulate radiative transfer of clouds.
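For reference, the Henyey-Greenstein function mentioned above is the standard phase function with a single asymmetry parameter g in (-1, 1) that controls how strongly light scatters forward or backward; the sketch below (illustrative, not code from the thesis) implements it and numerically checks its normalization over the sphere:

```python
# Henyey-Greenstein phase function: probability density over solid angle
# for scattering by angle theta, given asymmetry parameter g.

import math

def henyey_greenstein(cos_theta, g):
    return (1.0 - g * g) / (
        4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    )

def integrate_over_sphere(g, steps=100_000):
    # 2*pi * integral over mu = cos(theta) in [-1, 1], midpoint rule.
    # A valid phase function must integrate to 1 over the full sphere.
    d_mu = 2.0 / steps
    total = sum(
        henyey_greenstein(-1.0 + (i + 0.5) * d_mu, g) * d_mu
        for i in range(steps)
    )
    return 2.0 * math.pi * total

print(round(integrate_over_sphere(0.85), 4))  # ≈ 1.0 for any valid g
```

With g = 0 the function reduces to isotropic scattering (a constant 1/4π); clouds are strongly forward-scattering, so values of g close to 1 are typical there.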