In today’s internet commerce, the protection of consumers’ personal data plays an increasingly central role. On the one hand, a growing number of business models rely on monetizing big personal data to provide high-value online services free of charge, including web search, social networking, and personalized product recommendations. On the other hand, the commercial exploitation of personal data raises concerns among consumers, owing both to the loss of privacy and to the feeling that one does not receive a fair share of the data’s worth. This is confirmed by surveys showing that consumers are becoming worried about their personal data and are demanding better protection. While surveys can be informative, little is known about consumers’ real actions regarding their personal data, as stated and revealed preferences are often not aligned (a phenomenon called the “privacy paradox”).

The fairness of a data request has already been found to be a significant driver of disclosure. The monetary value of big personal data and of its sources is apparent: Facebook’s market capitalization currently exceeds $200 per user. However, this money is not readily available to the individual consumer, and market prices for isolated profile data are low, often well below one US cent. Companies create and extract the data rent through sophisticated data mining, combining expertise, IT infrastructure, and complementary data sets. Users typically receive non-monetary benefits in return, although privacy activists (e.g., Commodify.us) and start-ups (such as Datacoup) have explored paying users a share of the data rent. Beyond basic practical issues, a deterrent for companies to distribute the data profits may be the suspicion that users would be less willing to provide their details if they understood the data’s value for the company. The value of personal data to the firm can thus be understood as an opportunity cost of privacy. A large part of the experimental literature on privacy has focused on this willingness to pay for privacy. Studies have shown that a sizeable proportion of Web users pay extra to keep certain data private. Yet some studies have demonstrated the privacy paradox in the lab, where consumers freely provide data for little or nothing in return while claiming to be very concerned about protecting their privacy. Moreover, users are prone to biased decisions and framing effects.

It is sometimes argued that consumers would reveal their personal data less freely if they knew the data’s value for the company. This assumption has not been investigated systematically. In this task, we propose to investigate, with the help of lab and field experiments, what share of the data’s worth consumers consider fair. Furthermore, the experiments aim to investigate how consumers react to information from the company about this value. The game can be understood as an ultimatum game in which the company proposes a split of the value created from the data. The consumer can accept this split and provide the data, or reject the offer and withhold the personal data. If the consumer rejects, both the company and the consumer receive a payoff of zero (see the sketch following this paragraph). The proposed experiments will shed light on the question of how privacy regimes should be designed to achieve an outcome that is optimal, in terms of data disclosure and privacy, for both consumers and firms. It has been shown that identification, once introduced into mainstream experiments such as the ultimatum game, is a powerful variable that changes economic actions. Yet there is currently little interaction between academic work and applied policy design. We intend to strengthen this knowledge transfer by investigating the gap between behavior and stated privacy preferences, and the impact of data protection seals on real data disclosure behavior in the laboratory.
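To make the incentive structure concrete, the following minimal sketch encodes the payoffs of the data-disclosure game described above. The data value v, the fairness threshold alpha, and all function names are illustrative assumptions introduced here for exposition; the experiments themselves elicit consumers' actual acceptance behavior rather than assuming a threshold rule.

```python
# Minimal sketch of the data-disclosure ultimatum game (illustrative only).
# Assumptions not taken from the experimental design: the data's value v is
# known and normalized, and the consumer follows a simple fairness rule,
# accepting any offer of at least a fraction `alpha` of that value.

def consumer_accepts(offer: float, data_value: float, alpha: float) -> bool:
    """Hypothetical consumer rule: accept if the offered share of the
    data's value meets a personal fairness threshold alpha."""
    return offer >= alpha * data_value


def play_round(offer: float, data_value: float, alpha: float):
    """Return (company_payoff, consumer_payoff, data_disclosed)."""
    if consumer_accepts(offer, data_value, alpha):
        # Split realized: company keeps the remainder, data is disclosed.
        return data_value - offer, offer, True
    # Rejection: no disclosure, and both parties earn zero.
    return 0.0, 0.0, False


if __name__ == "__main__":
    v = 1.0  # value of the consumer's data to the company, normalized to 1
    for offer in (0.1, 0.3, 0.5):
        print(f"offer={offer}:", play_round(offer, data_value=v, alpha=0.3))
```

In this stylized form, a purely money-maximizing consumer would accept any positive offer; the experimental interest lies in how far observed rejection behavior departs from that benchmark once the company discloses the data's value.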