Machine Learning

Artificial intelligence is the driving force behind many recent technological advances, such as autonomous driving and ChatGPT. Yet the hardware that runs these algorithms is still under active development, especially when it sits in an edge device, e.g. a smartwatch.

Numerous application scenarios (keyword spotting, camera enhancement, health-data monitoring, …) benefit from running the algorithm close to the user and the data source: less bandwidth is needed to transmit raw data to a server, data stays private, and the latency of sending data to remote AI hardware vanishes.

Our work focuses on the needs of the two parties involved in deploying an AI solution at the edge: users expect fast, power-efficient hardware that yields results instantly without draining the battery of their smartphone or wearable, while developers of AI algorithms rely on effective protection of their elaborately designed and trained AI models against intellectual-property theft.


Projects

Secure Mixed-Signal Neural Networks

S. Wilhelmstätter: Artificial intelligence (AI) functions, and specifically neural network (NN) inference, are increasingly found in resource-constrained devices that cannot outsource complex computations to external servers. This "edge AI" paradigm creates new security challenges: in addition to known attacks, attackers now have physical access to the devices and can mount side-channel and fault-injection attacks ...
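
To make the threat concrete, here is a minimal NumPy sketch of a fault-injection attack on NN inference: a single stored weight bit is flipped and the inference result changes. The toy layer and all values are made up for illustration; this is not the project's actual threat model or method.

import numpy as np

rng = np.random.default_rng(0)

# Toy fully-connected layer with signed 8-bit weights, as an edge
# accelerator might store them (all values here are illustrative).
weights = rng.integers(-128, 128, size=(4, 8), dtype=np.int8)
x = rng.integers(0, 128, size=8, dtype=np.int8)

clean = weights.astype(np.int32) @ x.astype(np.int32)

# Simulate a single-bit hardware fault: reinterpret the int8 storage
# as raw bytes and XOR the sign bit of one weight.
faulty_weights = weights.copy()
faulty_weights.view(np.uint8)[0, 0] ^= 1 << 7

faulty = faulty_weights.astype(np.int32) @ x.astype(np.int32)

print("clean pre-activations: ", clean)
print("faulty pre-activations:", faulty)
print("top class changed:     ", clean.argmax() != faulty.argmax())

Even this single flipped bit can change which output neuron wins, which is why physical access to the device widens the attack surface so drastically.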

Power-Efficient Deep Neural Networks based on Co-Optimization with Mixed-Signal Integrated Circuits

J. Conrad: EdgeAI is the distributed-computing paradigm of executing machine-learning algorithms close to the sensor. Compared to centralized, e.g. cloud-based, solutions, it improves data security, lowers latency, and reduces bandwidth. At the same time, the power consumption of today's deep neural networks (the most common kind of machine-learning algorithm) is far too high for such applications ...
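
One common lever in such hardware/algorithm co-optimization is lowering the arithmetic precision, since low-bit (or analog) multiply-accumulate operations cost far less energy than full-precision ones. The following minimal sketch, using uniform symmetric quantization on a made-up layer, illustrates the general accuracy-vs-bit-width trade-off; it is not the project's actual method.

import numpy as np

def quantize_uniform(w: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantization to the given bit-width,
    dequantized back to float so the output error can be measured."""
    levels = 2 ** (bits - 1) - 1          # e.g. 7 levels for 4-bit signed
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(64, 128))   # toy layer weights
x = rng.normal(0.0, 1.0, size=128)         # toy input activations

y_ref = w @ x
for bits in (8, 6, 4, 2):
    y_q = quantize_uniform(w, bits) @ x
    err = np.linalg.norm(y_q - y_ref) / np.linalg.norm(y_ref)
    print(f"{bits}-bit weights: relative output error {err:.3%}")

The error grows slowly down to a few bits and then sharply, which is the window that co-designed mixed-signal circuits aim to exploit for power savings.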