News from the Institute of Media Informatics

Doctoral defense of D. Engel: Deep Learning in Volume Rendering

Universität Ulm

Friday, September 13, 2024, 12:00 noon | Room O27/2203

Dominik Engel, external doctoral candidate in the Visual Computing research group, will defend his dissertation entitled Deep Learning in Volume Rendering.

The dissertation is reviewed by Prof. Dr. Timo Ropinski (Ulm University), Prof. Dr. Stefan Bruckner (University of Rostock), and Assoc.-Prof. Dr. Pere-Pau Vázquez (UPC Barcelona), together with the appointed elected member Prof. Dr. Birte Glimm (Ulm University); Prof. Dr. Hans Kestler (Ulm University) will chair the defense and keep the minutes.

Abstract: A variety of scientific fields commonly acquire volumetric data, for example through computed tomography (CT) or magnetic resonance imaging (MRI) in medicine. Such volumetric data is often complex and requires visualization to be understood. However, rendering volumetric data entails several challenges, such as filtering the data to reveal the structures of interest. Meaningful filtering of 3D data is difficult to implement in 2D graphical user interfaces. Furthermore, rendering volumetric data is generally compute-intensive, especially when volumetric shading is considered. Lastly, volume rendering needs to be interactive and achieve reasonable frame rates so that the 3D data can be fully explored from different views while the filtering is adapted. To address these challenges, this work explores how volume rendering and its individual aspects can be assisted by deep neural networks. Deep neural networks have recently proven very capable in many disciplines, such as natural language processing, computer vision, and computer graphics. They excel at approximating complex functions and can learn relevant features and representations when trained with sufficient data. In the scope of this dissertation, we show how these capabilities can be used throughout the volume rendering pipeline. This pipeline consists of filtering the structures of interest, shading those structures, and compositing the light transmitted through the volume. In the filtering step, we leverage the strong representations learned by self-supervised neural networks to enable an interactive click-to-select workflow that segments structures annotated by users within slice views. For shading, we propose a volume-to-volume network that predicts volumetric ambient occlusion while respecting how the volume is filtered.
Lastly, we employ a neural network to invert the compositing step, separating the structures in an already composited, semi-transparent volume rendering into a modifiable layered representation.
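For readers unfamiliar with the compositing step mentioned in the abstract, the following is a minimal sketch of standard front-to-back alpha compositing along a single ray, the classical formulation that the dissertation's last contribution seeks to invert. The function name and array shapes here are illustrative assumptions, not code from the thesis:

```python
import numpy as np

def composite_front_to_back(colors, alphas):
    """Standard front-to-back alpha compositing of samples along one ray.

    colors: (N, 3) array of RGB emission per sample (front first).
    alphas: (N,) array of per-sample opacity in [0, 1].
    Returns the composited RGB color and the accumulated opacity.
    """
    acc_color = np.zeros(3)
    acc_alpha = 0.0
    for c, a in zip(colors, alphas):
        # Weight each sample by the remaining transmittance times its opacity.
        weight = (1.0 - acc_alpha) * a
        acc_color += weight * np.asarray(c)
        acc_alpha += weight
        if acc_alpha >= 0.999:  # early ray termination: ray is effectively opaque
            break
    return acc_color, acc_alpha
```

Because every sample's contribution is folded into a single color, the per-structure information is lost after this step; recovering a layered, editable representation from the composited image is what makes the inversion problem non-trivial.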

The institute wishes him every success!