CS5900.162/3

Usable Security and Privacy

Seminar Usable Security and Privacy (Bachelor)
Research trends in Usable Security and Privacy (Master)

Usable Security and Privacy is an interdisciplinary field of research that aims to bridge the gap between security requirements and human behaviour. The technical robustness of security mechanisms is often emphasised while human factors are disregarded, which can lead to barriers to use, misconfigurations or unsafe behaviour. This seminar therefore gives students the opportunity to engage with solutions that are both effective and user-friendly.

As the seminar is offered by the Institute of Information Resource Management (OMI), the topic of Internet censorship will be explored alongside more general topics in the area of usable security and privacy, with a focus on the balance between security, performance and usability. Psychological factors also play a decisive role in security behaviour and, accordingly, in the design of secure tools.

The following list describes the topic areas. The actual topics to be worked on are drawn from these areas and are assigned individually via Moodle.

Themes

This research area investigates the ways state or private actors control information flows (censorship) and monitor user activities (surveillance), as well as the design, usability and effectiveness of the tools individuals use to bypass these restrictions (e.g., VPNs, Tor, proxy services). Governments worldwide implement various censorship strategies, often justified as protecting national security or cultural values. However, these practices can limit freedom of expression and lead to self-censorship, while surveillance methods weaken privacy through excessive data collection. On the flip side, researchers and activists develop circumvention technologies that help users restore access to blocked content or preserve anonymity. Yet the usability of such tools often hinders adoption: even the most robust tool can falter if non-tech-savvy users struggle with setup or configuration. By looking at both censorship and surveillance methods, as well as the real-world usability of countermeasures, this research field aims to understand how people can protect their access to information and their personal privacy under various types of restrictions.
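
As a minimal illustration of how such measurement and circumvention questions are studied in practice, the following sketch fetches a URL once directly and once through a local Tor SOCKS proxy and compares the outcomes. It assumes the Python requests package with SOCKS support (requests[socks]) and a Tor client listening on 127.0.0.1:9050; both the setup and the example URL are assumptions for illustration, not part of the seminar material.

```python
# Minimal reachability check in the style of censorship measurement:
# fetch a URL directly and via a local Tor SOCKS proxy, then compare outcomes.
# Assumes `requests` with SOCKS support and a Tor client on 127.0.0.1:9050.
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h also resolves DNS through Tor
    "https": "socks5h://127.0.0.1:9050",
}

def check(url: str, proxies=None) -> str:
    """Return a coarse outcome label for one fetch attempt."""
    try:
        r = requests.get(url, proxies=proxies, timeout=10)
        return f"HTTP {r.status_code}"
    except requests.RequestException as exc:
        return f"failed ({exc.__class__.__name__})"

if __name__ == "__main__":
    url = "https://example.org/"          # placeholder URL, purely illustrative
    print("direct :", check(url))
    print("via Tor:", check(url, proxies=TOR_PROXIES))
```

Differences between the two outcomes (e.g., a direct failure but a successful proxied fetch) are the kind of signal censorship-measurement studies start from; whether end users can set up the proxied path at all is the usability question.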

Technology adoption research focuses on the factors that lead individuals, communities or organizations to embrace or reject new technologies, in this context new security or privacy tools. Classic models like the Technology Acceptance Model (TAM) (Davis, 1989) or the Diffusion of Innovations (DOI) (Rogers, 2003) highlight how factors such as perceived usefulness, ease of use, social influence, and communication channels shape the pace and depth of adoption. In security contexts, these elements intersect with additional concerns: the cognitive load of configuring security tools, the trustworthiness of the technology provider, and the real or perceived threats that motivate users to protect themselves. Studies in this area also draw from organizational behavior, showing how leadership, voluntariness, and (workplace) culture can either promote or hinder the adoption of secure systems. From the user's point of view, adopting (and continuing to use) security measures often depends on intuitive interfaces, minimal effort, social influences and clear demonstrations of benefits over risks (cost-benefit evaluation).
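
To make the TAM constructs mentioned above concrete, the following sketch averages 7-point Likert items into construct scores for a hypothetical security tool. The item texts, groupings, and number of items are illustrative assumptions, not a validated instrument.

```python
# Minimal sketch of how TAM constructs are often operationalised in studies:
# average Likert items (1-7) into construct scores and compare them with a
# self-reported intention to use a security tool. All items are made up.
from statistics import mean

responses = {
    "perceived_usefulness": [6, 5, 6, 7],   # e.g. "Using the tool improves my security"
    "perceived_ease_of_use": [3, 4, 2, 3],  # e.g. "The tool is easy to configure"
    "intention_to_use": [4, 3],             # e.g. "I intend to keep using the tool"
}

scores = {construct: mean(items) for construct, items in responses.items()}

for construct, score in scores.items():
    print(f"{construct:>22}: {score:.2f} / 7")

# A low ease-of-use score despite high perceived usefulness is a typical
# pattern behind abandoned security tools.
```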

This research area examines how individuals perceive security and privacy risks, how they respond to them, and what drives them to adopt or ignore protective measures, by looking at users' mental models, cognitive biases, and emotional reactions. A core framework is Protection Motivation Theory (PMT), originally developed in health psychology and since adapted to cybersecurity contexts. PMT suggests that users evaluate both the severity of threats (e.g., the risk of identity theft) and their ability to cope (e.g., by installing security software or using strong passwords). Risk perception is also shaped by fear appeals, social and cultural factors, and personal experiences with attacks (e.g., phishing or malware). Decision-making becomes more complex in organizational settings, where employees balance convenience, time pressure, and security policies. Understanding these human factors helps to design effective interventions that encourage safer online habits.
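
A hedged sketch of how the two PMT appraisals are often scored in questionnaire studies follows; the item groupings and the simple additive combination are illustrative assumptions rather than a fixed formula from the theory.

```python
# Illustrative scoring of PMT's threat and coping appraisals from Likert
# items (1-7). Item texts, groupings, and the combination rule are made up.
from statistics import mean

threat_appraisal = {
    "perceived_severity": [6, 7],       # "Identity theft would harm me severely"
    "perceived_vulnerability": [4, 5],  # "I am likely to be targeted by phishing"
}
coping_appraisal = {
    "response_efficacy": [6, 6],        # "A password manager protects my accounts"
    "self_efficacy": [3, 2],            # "I am able to set up a password manager"
    "response_cost": [5, 6],            # higher = more perceived effort or cost
}

threat = mean(mean(items) for items in threat_appraisal.values())
coping = (mean(coping_appraisal["response_efficacy"])
          + mean(coping_appraisal["self_efficacy"])
          - mean(coping_appraisal["response_cost"])) / 3

print(f"threat appraisal: {threat:.2f}")
print(f"coping appraisal: {coping:.2f}")
# High threat but low coping appraisal is the classic case where fear appeals
# alone fail and users fall back on avoidance instead of protection.
```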

This area investigates how users manage and perceive their personal data online. The "privacy paradox" refers to the discrepancy between users' proclaimed concerns about their digital privacy and their actual behavior: often oversharing personal information on platforms like Facebook, Twitter, Instagram or TikTok. In social media contexts, this phenomenon is especially visible, as platform design (e.g., default public settings) and social influences (e.g., peer pressure, the desire for visibility) can incentivize users to post extensive personal details. Despite expressing high privacy concerns in surveys, many users rarely change default settings or read privacy policies, raising questions about how well stated concerns actually translate into concrete protective actions. Studies explore how design features like nudges or data-sharing reminders can reduce unintentional oversharing, and how peer influences (e.g., wanting to fit in, fear of missing out) or platform-specific cultures encourage it.
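
As a small illustration of the nudges mentioned above, the following sketch asks for an explicit confirmation before a post with a broad audience is published. The Post structure and audience labels are hypothetical and do not correspond to any platform's API.

```python
# Minimal sketch of a data-sharing nudge: before a post goes out to a broad
# audience, show a short reminder and require explicit confirmation.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    audience: str  # "public", "friends", or "only_me" (illustrative labels)

def confirm_before_posting(post: Post, ask) -> bool:
    """Return True if the post should be published."""
    if post.audience == "public":
        return ask("This post will be visible to everyone. Publish anyway? [y/N] ")
    return True

if __name__ == "__main__":
    draft = Post("On holiday for two weeks, house is empty!", audience="public")
    publish = confirm_before_posting(
        draft, ask=lambda prompt: input(prompt).strip().lower() == "y"
    )
    print("published" if publish else "held back by the nudge")
```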

Trusted Compute Environments (TCEs) or Trusted Execution Environments (TEEs), such as Intel SGX, ARM TrustZone, and AMD SEV, promise enhanced security and privacy by creating isolated execution environments in which sensitive code and data are protected from the rest of the system. However, complex programming models, performance overhead, and a lack of user-friendly interfaces often hinder their adoption in real-world applications. This seminar topic explores the usability challenges associated with TCEs, focusing on the tension between strong security and privacy guarantees and a positive user experience.
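
The following conceptual sketch hints at why these programming models are demanding: sensitive code and data sit behind a narrow call interface, and only explicitly marshalled results cross the trust boundary. It is plain Python that merely mimics the host/enclave split; the Enclave class and its method are purely illustrative and not a real SGX, TrustZone, or SEV API.

```python
# Conceptual host/enclave partitioning (NOT a real TEE API): the "enclave"
# keeps a secret key and only hands results across the trust boundary.
import hmac
import hashlib

class Enclave:
    """Stand-in for the trusted part; in a real TEE this runs in isolated memory."""
    def __init__(self) -> None:
        self._key = b"sealed-secret-key"  # would be provisioned/sealed, never exposed

    def ecall_sign(self, message: bytes) -> bytes:
        # Only the MAC leaves the trusted boundary, never the key itself.
        return hmac.new(self._key, message, hashlib.sha256).digest()

def untrusted_app() -> None:
    enclave = Enclave()                    # real SDKs: load and attest the enclave
    tag = enclave.ecall_sign(b"transaction: pay 10 EUR")
    print("MAC from enclave:", tag.hex())  # the host sees the tag, not the key

if __name__ == "__main__":
    untrusted_app()
```

Even in this toy form, every value crossing the boundary has to be defined up front; real SDKs add attestation, sealing, and marshalling code on top, which is a large part of the usability burden discussed in this topic.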

Prof. Dr. Steffen Wendzel

  • Room: 5209 (O27)

M.Sc. Julia Lenz

  • Room: 5406 (O26)

Organisational Information

Next course start: Summer semester 2025
Frequency: every 2nd semester

Location: O28 - 1002

Time: Thursday, 2:00 PM – 4:00 PM

ECTS: 4

Seminar (2 contact hours per week); written seminar paper, presentation materials, and a presentation as part of a seminar talk

Registration via the central seminar allocation tool on Moodle by Saturday, 15 March 2025.

The actual topic assignment takes place in our internal Moodle course.

Language (Bachelor): preferably English

Language (Master): English

Topics are worked on individually. To obtain the credits, students must write a seminar paper and give a presentation followed by a discussion.

Degree programs: B.Sc. and M.Sc. in Computer Science, Media Informatics, Software Engineering

Quick Info

The seminar materials can be found in our Moodle course.