Privacy@Scale & The Data Driven Economy Project

Exploring a New Paradigm for Personal Data

A year ago, Facebook commissioned a program to explore how the data-driven economy is evolving and to pose the question: how can we sustainably maximize the contribution personal data makes to the economy, to society, and, crucially, to individuals?

We’re transitioning into an era in which people’s data will turbocharge the creation of value for the economy, for society, and, increasingly, for individuals themselves. We have the opportunity to generate mutually reinforcing benefits for all stakeholders while still working to minimize and mitigate risks and harms. You can download the resulting report and see earlier versions at thedatadriveneconomy.com.

I was fortunate to be one of the 175 experts invited to take part in one of a series of roundtables discussing these issues, and was honored to be quoted in the final report. The roundtables were organized, and the reports written, by Ctrl-Shift, a specialist consultancy that helps organizations create new services and strategic market positions based on trust and control around data. And in May I sat down with Emily Sharpe, Manager of Privacy and Public Policy at Facebook, for a Q&A at their Privacy@Scale event in Washington, D.C., on how attitudes around data privacy are evolving and why it makes sense for organizations to transition to “consent strategies” rather than maintaining today’s more coarse-grained and adversarial approach to personal data handling.

What follows are some highlights of the discussion between Emily and me.


Privacy@Scale Q&A

Emily Sharpe: During your recent keynote at the European Identity and Cloud Conference, you spoke about how, in view of new consent regulations, standards, and tools on the scene, we need to think strategically about solutions that don’t force “awkward compromises” among privacy, business growth, and consumer trust. Could you share examples of what you see as “awkward compromises”? And what would you do differently with respect to regulations and standards to arrive at an outcome that’s not just less “awkward,” but hopefully not even a compromise, since a compromise implies trade-offs that may be unnecessary and even harmful?

Eve Maler: There are a lot of examples of awkward compromises when it comes to for-profit companies and their data collection and sharing policies – retail grocery chains with loyalty programs come immediately to mind. But one of the more enlightening instances of individual customers balking at data policies laid down by a business came when Spotify rolled out a new privacy policy in the summer of 2015. When it first appeared, the terms asserted that Spotify had essentially blanket access to all the data on your smartphone:

“With your permission, we may collect information stored on your mobile device, such as contacts, photos, or media files.”

The policy also asserted the right to collect location data and to access third-party app data, meaning Spotify would have access to your Facebook account, for instance. As these things so typically play out nowadays, the outcry in the media was swift and harsh, and within a few days the CEO was apologizing online. But here’s the thing: companies like Spotify haven’t had a lot of choices in negotiating access to and managing data from their customers. When your core competency is heavily commoditized and switching costs are low (as with music services!), loyalty programs relying on customer data take on outsized importance. The more a Spotify can use personal data to tailor its offering to your personal tastes and preferences, the more likely it is to keep you locked in as a customer. Hence the motivation to have a single (typically nonnegotiable) “privacy deal” (Terms & Conditions, or Terms of Service).

What I’ve been working on at ForgeRock and with the User-Managed Access (UMA) Work Group at the Kantara Initiative is an alternative to this kind of all-or-nothing, one-size-fits-all approach. The idea is to enable a “consent strategy” that defers data-sharing options to end-user choice, leveraging new “privacy/consent tech” to make sharing finer-grained and more meaningful. UMA builds on OAuth to let users work with any API and app ecosystem to get delegation and consent functionality like you see in Google Docs: Share buttons, dropdowns with choices of what access to grant (Read Only, Download, Edit, etc.), and revocation of specific permissions, plus a central console where you can manage all of this. That kind of capability wasn’t available last year, but it is now, and any organization participating in the digital economy can take part and interoperate with the others.
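To make that concrete, here is a minimal sketch of what such a share-and-revoke flow could look like from the resource owner’s side. The authorization-server URL, endpoints, and field names below are hypothetical placeholders for illustration, not the UMA protocol itself; they are only meant to show per-resource, per-scope consent with later revocation.

    import requests

    # Hypothetical authorization-server API for illustration only; a real
    # UMA deployment defines its own resource-sharing and policy endpoints.
    AS_BASE = "https://as.example.com"
    OWNER_TOKEN = "owner-access-token"  # obtained via a normal OAuth login

    def share_resource(resource_id, requesting_party, scopes):
        # Grant a named party specific scopes (e.g. "read", "download") on
        # one resource, instead of an all-or-nothing T&C click-through.
        resp = requests.post(
            f"{AS_BASE}/policies",
            headers={"Authorization": f"Bearer {OWNER_TOKEN}"},
            json={
                "resource_id": resource_id,
                "requesting_party": requesting_party,
                "scopes": scopes,  # fine-grained, per-permission
            },
        )
        return resp.json()

    def revoke_share(policy_id):
        # Withdraw a previously granted permission from the owner's
        # central sharing console.
        requests.delete(
            f"{AS_BASE}/policies/{policy_id}",
            headers={"Authorization": f"Bearer {OWNER_TOKEN}"},
        )

    # Example: share read-only access with one person, then change your mind.
    policy = share_resource("photo-album-42", "alice@example.com", ["read"])
    revoke_share(policy["id"])

The point is that each grant becomes a discrete, revocable object the user can see and manage in one place, rather than a clause buried in a terms-of-service document.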

Emily Sharpe: During the Data Driven Economy roundtable you participated in, in California, you said the following: “Regarding UMA architecture and OAuth, it was Facebook that made it popular. UMA makes the end user the intermediary to the flow of data between applications, to ask people if they want to connect to those applications for data flows for their own benefit. It innovates a consent flow, and the ability to revoke this consent at a later date. As far as I know no regulator has ever talked about this.” Question: Practices like these seem to actually enhance core privacy principles like control/consent and data accuracy and quality. Why aren’t regulators promoting examples like these? Should regulators and policymakers focus more on the privacy rewards of innovative practices like these?

Eve Maler: Regulation doesn’t know what’s coming next – regulators aren’t in charge of innovation, and they can’t be expected to anticipate what’s next in the marketplace. If you look at the arc of tech innovation, there are examples of regulation where it’s pretty clear what the rules were designed to address. But generally it’s hard to anticipate new practices. For example, regulators tend to talk about personal data disclosure and limited collection and use, meaning they assume data will flow from a service to an application – but these APIs generally allow for putting data back into the service (read and write functions), which is really not “collection” or “disclosure” at all as they’d probably understand it. And OAuth was built as a response to top-line business needs and growth strategies. It works by enabling a user to permission an app to connect to a service on the user’s behalf, in some cases even when the user isn’t online. The user can withdraw that permission at any time. This is a real, live example of consent innovation in the market.
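For readers who haven’t seen the mechanics, here is a minimal sketch of that standard OAuth 2.0 token lifecycle: exchanging a one-time authorization code (issued after the user consents) for tokens per RFC 6749, refreshing access while the user is offline, and withdrawing consent via token revocation per RFC 7009. The URLs and credentials are placeholders; every provider publishes its own endpoints.

    import requests

    # Placeholder endpoints and credentials for illustration; real providers
    # publish their own token and revocation URLs.
    TOKEN_URL = "https://auth.example.com/oauth/token"
    REVOKE_URL = "https://auth.example.com/oauth/revoke"
    CLIENT = {"client_id": "my-app", "client_secret": "s3cret"}

    # 1. After the user consents, the app swaps the one-time authorization
    #    code for an access token plus a long-lived refresh token (RFC 6749).
    grant = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": "code-from-redirect",
        "redirect_uri": "https://my-app.example.com/callback",
        **CLIENT,
    }).json()

    # 2. Later, even while the user is offline, the app can refresh its
    #    access token and keep acting within the scopes the user granted.
    refreshed = requests.post(TOKEN_URL, data={
        "grant_type": "refresh_token",
        "refresh_token": grant["refresh_token"],
        **CLIENT,
    }).json()

    # 3. The user withdraws consent: revoking the refresh token cuts off
    #    the app's standing access (RFC 7009).
    requests.post(REVOKE_URL, data={"token": grant["refresh_token"], **CLIENT})

Notice that the user never hands over a password, and step 3 gives them a unilateral off switch – exactly the consent-and-revocation pattern the regulators’ “collection and disclosure” framing doesn’t quite capture.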

Emily Sharpe: We heard the following from a participant in the APAC roundtable: “What we’re finding is that where the trust and transparency are established, customers are opening up and sharing more information that’s contextual – providing they understand the purpose for which it will be used and for a specific period of time.” Question: Do you agree? How do we measure these benefits?

Eve Maler: It’s not a slam dunk that people are harmed by more data flows – there are risks and rewards in sharing and in not sharing, and they can depend on the industry vertical and the individual use case. But we’re seeing impressive results in healthcare settings, for instance. We’re working with Philips on their HealthSuite Digital Platform, an open, cloud-based platform that collects, compiles, and analyzes clinical and other data from a wide range of devices and sources. Securely managing the identities of patients, caregivers, devices, and even family members is critical in this kind of cloud-based system. Philips did a study with Banner Health on monitoring the vital signs of patients with chronic health conditions through devices and mobile apps. They found that this monitoring can save an average of 10 days per patient per year in the emergency room, and can also save $27,000 per patient per year. That means shorter, less ER-intensive hospital stays, with more care happening outside clinical settings, because monitoring can happen in the home and data can be securely shared through the cloud. And yes, establishing trust and transparency is key to making this work.


Just a few closing thoughts: Please note that the Q&A above was recreated from my notes – it’s not an exact transcription. Many thanks to Emily and the Facebook and Ctrl-Shift teams for including me in the Data Driven Economy project. Facebook is at the center of many of the debates circling around personal data privacy today. Indeed, the E.U. Safe Harbor argument arose initially over how Facebook transfers data between data centers in the U.S. and Europe. So it is encouraging to see the Facebook team working so diligently to advance the data privacy dialog and seeking solutions that are beneficial to all parties. As Stephen Deadman, Facebook’s Global Deputy Chief Privacy Officer, put it in his foreword to the final report, “when people have more control over their own data, more growth, innovation and value can be created than when they don’t.” I couldn’t agree more.