News - Research Group Visual Computing

PhD defence of Engel, D.: Deep Learning in Volume Rendering

Ulm University

Friday, 13 September 2024, 12:00 pm | O28/2203


Dominik Engel, external PhD candidate of the research group Visual Computing, will defend his PhD project titled Deep Learning in Volume Rendering.

His jury will consist of Prof. Dr. Timo Ropinski (Ulm University), Prof. Dr. Stefan Bruckner (University of Rostock), and Assoc.-Prof. Dr. Pere-Pau Vázquez (UPC Barcelona), with Prof. Dr. Birte Glimm as additional member and Prof. Dr. Hans Kestler as head of the jury and keeper of the minutes (both Ulm University).

Abstract: A variety of scientific fields commonly acquire volumetric data, for example using computed tomography (CT) or magnetic resonance imaging (MRI) in medicine. Such volumetric data is often complex and requires visualization to gain understanding. However, rendering volumetric data entails several challenges, such as filtering the data to reveal the structures of interest. Meaningful filtering of 3D data is difficult to implement in 2D graphical user interfaces. Furthermore, rendering of volumetric data is generally compute-intensive, especially when considering volumetric shading. Lastly, volume rendering needs to be interactive and achieve reasonable frame rates in order to fully explore the 3D data from different views while adapting the filtering. To address these challenges, this work explores how volume rendering and its individual aspects can be assisted by means of deep neural networks. Deep neural networks have recently proven very competent in many disciplines, such as natural language processing, computer vision, and computer graphics. They excel in the approximation of complex functions and can learn relevant features and representations when trained with sufficient data. In the scope of this dissertation, we show how these capabilities can be used throughout the volume rendering pipeline. This pipeline consists of the filtering of structures of interest, the shading of those structures, and the composition of the light transmitted from the volume. In the filtering step, we leverage the strong representations learned by self-supervised neural networks to enable an interactive click-to-select workflow that segments structures annotated by users within slice views. For shading, we propose a volume-to-volume network to predict volumetric ambient occlusion that respects how the volume is filtered. Lastly, we employ a neural network to invert the composition step, separating the different structures in an already composited semi-transparent volume rendered image into a modifiable layered representation.
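The composition step mentioned in the abstract refers to standard emission-absorption volume rendering, which accumulates color and opacity along each view ray; inverting this accumulation is what the final network learns. As a rough illustration only (not taken from the dissertation; the function name and array shapes are hypothetical), front-to-back alpha compositing along a single ray can be sketched as:

```python
import numpy as np

def composite_front_to_back(colors, alphas):
    """Front-to-back alpha compositing of samples along one view ray.

    colors: (n, 3) array of RGB samples, ordered front to back.
    alphas: (n,) array of per-sample opacities in [0, 1].
    Returns the composited RGB color and the accumulated opacity.
    """
    out_color = np.zeros(3)
    out_alpha = 0.0
    for c, a in zip(colors, alphas):
        # Each sample contributes in proportion to the remaining transmittance.
        out_color += (1.0 - out_alpha) * a * c
        out_alpha += (1.0 - out_alpha) * a
        if out_alpha >= 0.999:
            # Early ray termination: the ray is effectively opaque.
            break
    return out_color, out_alpha
```

Because every sample is folded into a single color in this way, recovering per-structure layers from the finished image is an ill-posed inverse problem, which motivates learning it from data.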

We wish him all the best for a successful completion of this major step in his academic career.