Privacy attitudes and behaviours — a paradox?
There has been a lot of discussion in recent weeks around the optimal approach to a Covid-19 contact tracing app (as discussed by Henrik Nordmark here). While the details of the debate may have passed you by, at its heart are strongly held views on data security and personal privacy.
Having spent some time reviewing the literature surrounding consumer attitudes and behaviours in relation to the data economy, it's very hard to escape the idea that there is a disconnect between our stated attitudes and our actual behaviours.
On the one hand, we profess significant anxiety or concern about digital practices and the data economy; on the other, we continue to embrace the ease and convenience afforded by digital services while doing very little to protect our privacy.
In fact, Consumer Association (Which?) research found that between Q1 2018 and Q1 2019 — a period covering the Cambridge Analytica revelations and the enforcement of the EU GDPR — UK consumers reported a significant increase in anxiety and concern with regard to contemporary data practices.
Yet the very same research shows a contemporaneous increase in what Which? calls the maximiser behavioural group — i.e. people willing to embrace friction-free digital shortcuts (e.g. social sign-ins) in order to access platforms and services as quickly and easily as possible.
If there is such a thing as a privacy paradox, it is arguably found in the growth of this Anxious Maximiser consumer group.
So what are brands and marketers to do when considering this landscape? Where is the urgency to reform the data economy coming from? How can consumer anxiety be addressed? What practices have to change, which are likely to continue? And how should we prepare for the “end of cookies”?
For many marketers over recent years, privacy has been little more than a distraction. Consumer behaviour is more important than attitude. We are embracing digital services as never before. There is little evidence of any correlation between privacy scandals and platform usage or advertising revenues. The value exchange is everything and we have made our choices.
The value exchange is connected to the concept of consumer surplus: essentially, the difference between what we would be willing to pay for a product or service and what we actually pay. This is particularly interesting in the context of free digital services.
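The idea can be captured in a few lines of code. This is an illustrative sketch only; the willingness-to-pay figure below is hypothetical and not drawn from the research cited later.

```python
# Consumer surplus: the gap between what a user would be willing to pay
# for a product or service and what they actually pay.

def consumer_surplus(willingness_to_pay: float, price_paid: float) -> float:
    """Return willingness to pay minus the price actually paid."""
    return willingness_to_pay - price_paid

# For a "free" digital service the monetary price is zero, so the user's
# entire willingness to pay becomes surplus (figure here is hypothetical).
surplus = consumer_surplus(willingness_to_pay=500.0, price_paid=0.0)
print(surplus)  # 500.0
```

This is why "free" services can still deliver large measured surpluses: when the price paid is zero, the whole of a user's willingness to pay shows up as surplus.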
How much is your data worth?
Using this approach, MIT Sloan research published in the Harvard Business Review suggests that, for example, Facebook users in the USA and Europe are benefiting from a median consumer surplus of around $500 per annum. This is a modest figure compared to the value given to search and email — with search estimated to deliver a median consumer surplus of around $17,000 per annum and email around 50% of that figure.
Other research from the Technology Policy Institute has tried to put a price on the perceived value of sharing different types of personal data — the price people are willing to accept (WTA) in exchange for access to their data.
Here we see location data as least valued by consumers ($1.82 pcm), with financial/bank balance ($8.44 pcm) and biometric/fingerprint data ($7.56 pcm) as most valued. Access to your texts carries a price of $6.05 pcm.
Clearly these are not large amounts of money — around $100 per annum to share your bank balance, for example — certainly when compared to the estimated consumer surplus discussed above. But therein lies the question: where is the value exchange in sharing your bank balance? How does that impact the quality of goods, services and experience that you receive? What about sharing your texts?
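The annual figure above is simple arithmetic on the per-month (pcm) WTA prices: the roughly $100 per annum for bank-balance data is just $8.44 multiplied by twelve. A quick sketch using the figures quoted:

```python
# Monthly willingness-to-accept (WTA) figures as quoted in the text,
# in US dollars per calendar month (pcm).
wta_pcm = {
    "location": 1.82,
    "texts": 6.05,
    "biometric/fingerprint": 7.56,
    "financial/bank balance": 8.44,
}

# Annualise each figure by multiplying by 12 months.
wta_per_annum = {k: round(v * 12, 2) for k, v in wta_pcm.items()}
print(wta_per_annum["financial/bank balance"])  # 101.28
```

Even the most valued category annualises to only around $100, which is what makes the comparison with the estimated consumer surplus figures so striking.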
As fascinating as this type of economic analysis is, and as vital a tool as it is in evaluating potential public policy interventions, it surely isn’t sufficient. And it certainly isn’t transparent to most users, beyond a sense that the loss of control of our personal data is the price of entry to the digital economy. Which of course it doesn’t have to be.
A new approach to personal data
Brands should look more closely at reported consumer attitudes. The terms used in qualitative research are extraordinary and include acquiescence, confusion, cynicism, helplessness, impotence and nihilism. We just don’t know what is happening with our data.
Shoshana Zuboff refers to a process of psychic numbing that masks epistemic inequality. Others refer to the perceived futility of privacy self-management, which is connected to the opacity of many privacy policies. These are then presented as the vehicle for informed consent, the legal underpinning for much of the data economy.
But what also shines through from the research, especially recent work by the RSA and ODI, is consumer frustration with the nature of inference. People talk about their frustration with brands making guesses about them; they believe they are being misrepresented, misread, misinterpreted and misunderstood by (blunt) algorithmic decision making. They declare that “we are not robots”.
Brands, therefore, need to think carefully about how they navigate this territory, not least as we move forward with greater automation, machine learning and algorithmic decision making. The GDPR has already shone a light on profiling and automated decision making, a practice that continues to expand and intensify — with potentially significant repercussions for individuals.
As such, we have spoken before about the value and importance of transparently collected, willingly shared zero-party data in a post-cookie world. This is certainly an important aspect of a new cultural approach, but it should be combined with a wider set of principles in relation to personal data management, perhaps starting with an explicit commitment never to sell customer data to third parties.