Personal data protection has surged to the centre of policy debate, culminating in the introduction of the EU General Data Protection Regulation (GDPR). A growing body of literature shows that consumers' actions regarding their personal data are often not aligned with their stated preferences, the so-called "privacy paradox". We strengthen this line of research by investigating the gap between behavior and stated privacy preferences in the laboratory. We further consider the design of IT security systems that aim to preserve privacy and anonymity, and their impact on consumer behavior.

The increasing use of digital technologies, such as algorithms in online marketplaces, is also redirecting attention to issues of fairness and discrimination. There are growing fears that such technologies can lead to discrimination or be used to exploit consumers' behavioral biases, and experimental evidence lends support to these concerns.
In this work package, we explore the issue of algorithmic fairness in markets for consumer goods and ask how fairness procedures grounded in contract, data protection, and anti-discrimination law can be implemented to address algorithmic discrimination and exploitation.

Finally, digital technologies pose new cybersecurity challenges. As with many public goods, individual incentives to invest in cybersecurity are weak, since the costs of security failures are largely externalized to the public. Insufficient awareness and consumers' lack of information about cyber threats and the security properties of ICT products amplify the associated risks. Our aim is to evaluate the role of quality infrastructure in increasing cybersecurity, focusing on the proposed EU Cybersecurity Act.