
The true blue ocean

“Blue ocean strategy challenges companies to break out of the red ocean of bloody competition by creating uncontested market space that makes the competition irrelevant.”

That’s what W. Chan Kim and Renée Mauborgne say in the original preface to Blue Ocean Strategy: How to Create Uncontested Market Space and Make the Competition Irrelevant, published by Harvard Business Review Press in 2005. Since then the red/blue ocean metaphor has become business canon.

The problem with that canon is that it looks at customers the way a trawler looks at fish.

To understand the problem here, it helps to hear marketing talk to itself. Customers, it says, are targets to herd on a journey into a funnel through which they are acquired, managed, controlled and locked in.

This is the language of ranching and slavery. Not a way to talk about human beings.

Worse, every business is a separate trawler, and handles customers in its hold differently, even if they’re using the same CRM, CX and other systems to do all the stuff listed two paragraphs up. (Along with other mundanities: keeping records, following leads, forecasting sales, crunching numbers, producing analytics, and other stuff customers don’t care about until they’re forced to deal with it, usually when a problem shows up.)

In fact, these systems can’t help holding customers captive, because the way they are sold and deployed means there are as many different ways for customers to “relate” to companies as there are companies.

And, as long as companies are the only parties able to (as the GDPR puts it) operate as a “data controller” or “data processor,” the (literally) damned customer remains nothing more than a “data subject” in countless separate databases and name spaces, each with separate logins and passwords.

This is why, from the customer’s perspective, the whole ocean of CRM and CX is opaque with rutilance.

Worse, all CRM and CX systems operate on the assumption that it is up to them to know everything about a customer, a prospect, or a user. And most of that knowledge these days is obtained early in the (literally) damned “journey” through exactly the kind of tracking that has caused—

  1. Ad blocking, which (though it had been around since 2004) hockey-sticked in 2013, when the adtech fecosystem gave the middle finger to Do Not Track, and which by 2015 was the biggest boycott in world history
  2. Regulation, most notably the GDPR and the CCPA, which never would have happened had marketing not wanted to track everyone like marked animals
  3. Tracking protection, now getting built into browsers (e.g. Safari, Firefox, Brave, Edge) because the market (that big blue ocean) demands it

Stop and think for a minute how much the market actually knows—meaning how much customers actually know about what they own, use, want, wish for, regret, and the rest of it.

The simple fact is that companies’ customers and users know far more about the products and services they own and use than the companies do. Those people are also in a far better position to share that knowledge than any CRM, CX or other system for “relating” to customers can begin to guess at, much less comprehend. Especially when every company has its own separate and isolated ways of doing both.

But customers today still mostly lack ways of their own to share that knowledge, and do it selectively and safely. Those ways are in the category we call VRM (when it shakes hands with CRM), or Me2B (when it’s dealing broadly across everything a company does with customers and users).

VRM and Me2B are what make customers as free as can be, outside any company’s nets, funnels and teeming holds in trawlers’ hulls.

That open space is also much bigger than the red ocean of CRM/CX by itself, because it’s where customers can share far more—and better—information than they can inside existing CRM/CX systems. Or will, once VRM and Me2B tools and services stand up.

For example, there’s—

  • What customers actually want to buy (rather than what companies can at best only guess at)
  • What customers already own, and how they’re actually using it (meaning what’s their Internet of their things)
  • What companies, products and services customers are actually loyal to, and why
  • How customers would like to share their experiences
  • What relevant credentials they carry, for identity and other purposes. And who their preferred agents or intermediaries might be
  • What their terms, conditions and privacy policies are, and how compliance with those can be assured and audited
  • What their tools are, for making all those things work, across the board, with all the companies and other organizations they engage

The list is endless, because there is no limit to what customers can say to companies (or how they relate to companies) if companies are willing to deal with customers who have as much scale across corporate systems as those systems wish to have across all of their customers.

Being “customer centric” won’t cut it. That’s just a gloss on the same old thing. If companies wish to be truly customer-driven, they need to be dealing with free-range human beings. Not captives.

So: how?

There is already code for doing much of what’s listed in the seven bullets above. Services too. (Examples.) There could be a lot more.

There are also nonprofits working to foster development in that big blue ocean. Customer Commons is ProjectVRM’s own spin-off. The Me2B Alliance is a companion effort. So are MyData and the Sovrin Foundation. All of them could use some funding.

What matters for business is that all of them empower free-range customers and give them scale: real leverage across companies and markets, for the good of all.

That’s the real blue ocean.

Without VRM and Me2B working there, the most a company can do with its CRM or CX system is look at it.

Bonus link. Pull quote: “People must own root authority, before a system transmutes your personal life into a consumer. Before you need the system to exist, you are whole.”


Where VRM fits

VRM is the hand CRM shakes.

That’s the simplest way of putting it. That’s what we wanted it to be when we started ProjectVRM in 2006, and that’s how we described it in 2011, when I gave this talk at SugarCRM’s SugarCon conference:

Those “ways” are tools that belong to each customer and give them global scale: meaning they should work the same way for every company’s CRM system. Just like the customer’s phone, email and browser shake hands with every company already.

This is, as the marketers say, positioning. And it’s important, now that a number of significant .orgs have stepped up to take care of other work we helped start with ProjectVRM. Most notable are Customer Commons (a ProjectVRM spin-off), the Me2B Alliance, and MyData Global. There are others, but those are foremost on the ProjectVRM list.

The space we’re building out here is immense, so there is not only room for everybody, but more work than even everybody can do. Meanwhile it is essential that we clarify what all the roles are. Hence this post.

What if we called cookies “worms”?

While you ponder that, read Exclusive: New York Times phasing out all 3rd-party advertising data, by Sara Fischer in Axios.

The cynic in me translates the headline as “Leading publishers cut out the middle creep to go direct with tracking-based advertising.” In other words, same can, nicer worms.

But maybe that’s wrong. Maybe we’ll only be tracked enough to get put into one of those “45 new proprietary first-party audience segments” or “at least 30 more interest segments.” And maybe only tracked on site.

But we will be tracked, presumably. Something needs to put readers into segments. What else will do that?

So, here’s another question: Will these publishers track readers off-site to spy on their “interests” elsewhere? Or will tracking be confined to just what the reader does while using the site?

Anyone know?

In a post on the ProjectVRM list, Adrian Gropper says this about the GDPR (in response to what I posted here): “GDPR, like HIPAA before it, fails because it allows an unlimited number of dossiers of our personal data to be made by unlimited number of entities. Whether these copies were made with consent or without consent through re-identification, the effect is the same, a lack of transparency and of agency.”

So perhaps it’s progress that these publishers (the Axios story mentions The Washington Post and Vox as well as the NYTimes) are only keeping limited dossiers on their readers alone.

But that’s not progress enough.

We need global ways to say to every publisher how little we wish them to know about us. Also ways to keep track of what they actually do with the information they have. (And we’re working on those.)

Being able to have one’s data back (e.g. via the CCPA) is a kind of progress (as is the law’s discouragement of collection in the first place), but we need technical as well as legal mechanisms for projecting personal agency online. (Models for this are Archimedes and Marvel heroes.) Not just more ways to opt out of being observed more than we’d like—especially when we still lack ways to audit what others do with the permissions we give them.

That’s the only way we’ll get rid of the worms.

Bonus link.

Markets as conversations with robots

From the Google AI blog, Towards a Conversational Agent that Can Chat About…Anything:

In “Towards a Human-like Open-Domain Chatbot”, we present Meena, a 2.6 billion parameter end-to-end trained neural conversational model. We show that Meena can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots. Such improvements are reflected through a new human evaluation metric that we propose for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic, but important attributes for human conversation. Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational models, highly correlates with SSA.

A chat between Meena (left) and a person (right).

Meena
Meena is an end-to-end, neural conversational model that learns to respond sensibly to a given conversational context. The training objective is to minimize perplexity, the uncertainty of predicting the next token (in this case, the next word in a conversation). At its heart lies the Evolved Transformer seq2seq architecture, a Transformer architecture discovered by evolutionary neural architecture search to improve perplexity.
 
Concretely, Meena has a single Evolved Transformer encoder block and 13 Evolved Transformer decoder blocks as illustrated below. The encoder is responsible for processing the conversation context to help Meena understand what has already been said in the conversation. The decoder then uses that information to formulate an actual response. Through tuning the hyper-parameters, we discovered that a more powerful decoder was the key to higher conversational quality.
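Since that quote leans so heavily on perplexity, here’s a minimal sketch (mine, not Google’s) of how perplexity falls out of a model’s next-token probabilities:

```python
import math

def perplexity(token_log_probs):
    """Perplexity is exp(average negative log-likelihood) of the tokens
    a model predicted; lower means the model was less 'surprised'."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Hypothetical probabilities a model assigned to the actual next words
# in a short reply; the values are made up for illustration.
log_probs = [math.log(p) for p in (0.4, 0.25, 0.6, 0.1, 0.5)]
print(round(perplexity(log_probs), 1))  # 3.2: on average, about as uncertain
                                        # as choosing among ~3 equally likely words
```

Driving that number down across billions of conversational turns is, per the quote above, the whole of Meena’s training objective.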
So how about turning this around?

What if Google sold or gave a Meena model to people—a model Google wouldn’t be able to spy on—so people could use it to chat sensibly with robots or people at companies?

Possible?

If, in the future (which is now—it’s freaking 2020 already), people will have robots of their own, why not one for dealing with companies, which themselves are turning their sales and customer service systems over to robots anyway?

People are the real edge

You Need to Move from Cloud Computing to Edge Computing Now!, writes Sabina Pokhrel in Towards Data Science. The reason, says her subhead, is that “Edge Computing market size is expected to reach USD 29 billion by 2025.” (Source: Grand View Research.) The second person “You” in the headline is business. Not the people at the edge. At least not yet.

We need to fix that.

By we, I mean each of us—as independent individuals and as collected groups—and with full agency in both roles. The edge is both.

The article illustrates the move to Edge Computing this way:

The four items at the bottom (taxi, surveillance camera, traffic light, and smartphone) are at the edges of corporate systems. That’s what the Edge Computing talk is about. But one of those—the phone—is also yours. In fact it is primarily yours. And you are the true edge, because you are an independent actor.

More than any device in the world, that phone is the people’s edge, because no connected device is more personal. Our phones are, almost literally, extensions of ourselves—to a degree that being without one in the connected world is a real disability.

Given phones’ importance to us, we need to be in charge of whatever edge computing happens there. Simple as that. We cannot be puppets at the ends of corporate strings.

I am sure that this is not a consideration for most of those working on cloud computing, edge computing, or moving computation from one to the other.

So we need to make clear that our agency over the computation in our personal devices is a primary design consideration. We need to do that with tech, with policy, and with advocacy.

This is not a matter of asking companies and governments to please give us some agency. We need to create that agency for ourselves, much as we’ve learned to walk, talk and act on our own. We don’t have “Walking as a Service” or “Talking as a Service.” Because those are only things an individual human being can do. Likewise there should be things only an individual human with a phone can do. On their own. At scale. Across all companies and governments.

Pretty much everything written here and tagged VRM describes that work and ways to approach that challenge.

Recently some of us (me included) have been working to establish Me2B as a better name than VRM for what VRM does. It occurs to me, in reading this piece, that the e in Me2B could stand for edge. Just a thought.

If we succeed, there is no way edge computing gets talked about, or worked on, without respecting the Me’s of the world, and their essential roles in operating, controlling, managing and otherwise making the most of those edges—for the good of the businesses they deal with as well as themselves.


We’re not data. We’re digital. Let’s research that.

The University of Chicago Press’ summary of How We Became Our Data says author Colin Koopman

excavates early moments of our rapidly accelerating data-tracking technologies and their consequences for how we think of and express our selfhood today. Koopman explores the emergence of mass-scale record keeping systems like birth certificates and social security numbers, as well as new data techniques for categorizing personality traits, measuring intelligence, and even racializing subjects. This all culminates in what Koopman calls the “informational person” and the “informational power” we are now subject to. The recent explosion of digital technologies that are turning us into a series of algorithmic data points is shown to have a deeper and more turbulent past than we commonly think.

Got that? Good.

Now go over to the book’s Amazon page, do the “look inside” thing and then go to the chapter titled “Redesign: Data’s Turbulent Pasts and Future Paths” (p. 173) and read forward through the next two pages (which is all it allows). In that chapter, Koopman begins to develop “the argument that information politics is separate from communicative politics.” My point with this is that politics are his frames (or what he calls “embankments”) in both cases.

Now take three minutes for A Smart Home Neighborhood: Residents Find It Enjoyably Convenient Or A Bit Creepy, which ran on NPR one recent morning. It’s about a neighborhood of Amazon “smart homes” in a Seattle suburb. Both the homes and the neighborhood are thick with convenience, absent of privacy, and reliant on surveillance—both by Amazon and by smart homes’ residents.  In the segment, a guy with the investment arm of the National Association of Realtors says, “There’s a new narrative when it comes to what a home means.” The reporter enlarges on this: “It means a personalized environment where technology responds to your every need. Maybe it means giving up some privacy. These families are trying out that compromise.” In one case the teenage daughter relies on Amazon as her “butler,” while her mother walks home on the side of the street without Amazon doorbells, which have cameras and microphones, so she can escape near-ubiquitous surveillance in her smart ‘hood.

Let’s visit three additional pieces. (And stay with me. There’s a call to action here, and I’m making a case for it.)

First, About face, a blog post of mine that visits the issue of facial recognition by computers. Like the smart home, facial recognition is a technology that is useful both for powerful forces outside of ourselves—and for ourselves. (As, for example, in the Amazon smart home.) To limit the former (surveillance by companies), it typically seems we need to rely on what academics and bureaucrats blandly call policy (meaning public policy: principally lawmaking and regulation).

As this case goes, the only way to halt or slow surveillance of individuals by companies is to rely on governments that are also incentivized (to speed up passport lines, solve crimes, fight terrorism, protect children, etc.) to know as completely as possible what makes each of us unique human beings: our faces, our fingerprints, our voices, the veins in our hands, the irises of our eyes. It’s hard to find a bigger hairball of conflicting interests and surely awful outcomes.

Second, What does the Internet make of us, where I conclude with this:

My wife likens the experience of being “on” the Internet to one of weightlessness. Because the Internet is not a thing, and has no gravity. There’s no “there” there. In adjusting to this, our species has around two decades of experience so far, and only about one decade of doing it on smartphones, most of which we will have replaced two years from now. (Some because the new ones will do 5G, which looks to be yet another way we’ll be captured by phone companies that never liked or understood the Internet in the first place.)

But meanwhile we are not the same. We are digital beings now, and we are being made by digital technology and the Internet. No less human, but a lot more connected to each other—and to things that not only augment and expand our capacities in the world, but replace and undermine them as well, in ways we are only beginning to learn.

Third, Mark Stahlman’s The End of Memes or McLuhan 101, in which he suggests figure/ground and formal cause as bigger and deeper ways to frame what’s going on here.  As Mark sees it (via those two frames), the Big Issues we tend to focus on—data, surveillance, politics, memes, stories—are figures on a ground that formally causes all of their forms. (The form in formal cause is the verb to form.) And that ground is digital technology itself. Without digital tech, we would have little or none of the issues so vexing us today.

The powers of digital tech are like those of speech, tool-making, writing, printing, mass production, electricity, railroads, automobiles, radio and television. As Marshall McLuhan put it (in The Medium is the Massage), each new technology is a cause that “works us over completely” while it’s busy forming and re-forming us and our world.

McLuhan also teaches that each new technology retrieves what remains useful about the technologies it obsolesces. Thus writing retrieved speech, printing retrieved writing, radio retrieved both, and TV retrieved radio. Each new form was again a formal cause of the good and bad stuff that worked over people and their changed worlds. (In modern tech parlance, we’d call the actions of formal cause “disruptive.”)

Digital tech, however, is less disruptive and world-changing than it is world-making. In other words, it is about as massively formal (as in formative) as tech can get. And it’s as hard to make sense of this virtual world as it is to sense roundness in the flat horizons of our physical one. It’s also too easy to fall for the misdirections inherent in all effects of formal causes. For example, it’s much easier to talk about Trump than about what made him possible. Think about it: absent digital tech, would we have had Trump? Or even Obama? McLuhan’s blunt perspective may help. “People,” he said, “do not want to know why radio caused Hitler and Gandhi alike.”

So here’s where I am now on all this:

  1. We have not become data. We have become digital, while remaining no less physical. And we can’t understand what that means if we focus only on data. Data is more effect than cause.
  2. Politics in digital conditions is almost pure effect, and those effects misdirect our attention away from digital as a formal cause. To be fair, it is as hard for us to get distance on digital as it is for a fish to get distance on water. (David Foster Wallace to the Kenyon College graduating class of 2005: Greetings parents and congratulations to Kenyon’s graduating class of 2005. There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”)
  3. Looking to policy for cures to digital ills is both unavoidable and sure to produce unintended consequences. For an example of both, look no farther than the GDPR. In effect (so far), it has demoted human beings to mere “data subjects,” located nearly all agency with “data controllers” and “data processors,” done little to thwart unwelcome surveillance, and caused boundlessly numerous, insincere and misleading “cookie notices”—almost all of which are designed to obtain “consent” to what the regulation was meant to stop. In the process it has also called into being monstrous new legal and technical enterprises, both satisfying business market demand for ways to obey the letter of the GDPR while violating its spirit. (Note: there is still hope for applying the GDPR well. But let’s get real: demand in the world of sites and services for violating the GDPR’s spirit, and for persisting in the practice of surveillance capitalism, far exceeds demand for compliance and true privacy-respecting behavior. Again, so far.)
  4. Power is moving to the edge. That’s us. Yes, there is massive concentration of power and money in the hands of giant companies on which we have become terribly dependent. But there are operative failure modes in all those companies, and digital tech remains ours no less than theirs.

I could make that list a lot longer, but that’s enough for my main purpose here, which is to raise the topic of research.

ProjectVRM was conceived in the first place as a development and research effort. As a Berkman Klein Center project, in fact, it has something of an obligation either to do research or to participate in it.

We’ve encouraged development for thirteen years. Now some of that work is drifting over to the Me2B Alliance, which has good leadership, funding and participation. There is also good energy in the IEEE 7012 working group and Customer Commons, both of which owe much to ProjectVRM.

So perhaps now is a good time to at least start talking about research. Two possible topics: facial recognition and smart homes. Anyone game?


What turns out to be a draft version of this post ran on the ProjectVRM list. If you’d like to help, please subscribe and join in on that link. Thanks.

What law might clear the way for VRM/Me2B development?

VRM/Me2B developers shouldn’t have to wait for laws to pave the way through a wall-like status quo. (And we say that in our Privacy Manifesto.) But a good law or two should help.

That is what I had hoped—even expected—the GDPR to do. Specifically, I called it “the world’s most heavily weaponized law protecting personal privacy,” said it was “aimed at companies that track people without asking” and that it would “blow away the (mostly US-based) surveillance economy, especially tracking-based ‘adtech,’ which supports most commercial publishing online.”

That hasn’t happened.

It has been sixteen months since the GDPR went into effect (May 2018), and violation of personal privacy online today remains as pervasive as ever. Worse, violators take advantage of a loophole* in the GDPR that allows them to continue tracking people by requiring (or appearing to require) “consent” to cookies and other means of tracking (so you can get “personalized,” “interest-based” or “relevant” advertising, the perpetrators say). As long as various EU countries’ Data Protection Authorities (who enforce the GDPR) fail to focus on the simple fact that nearly every website and its third parties are doing the same bad things Google and Facebook are accused of doing, the practice will continue, and the GDPR will remain a failure at stopping widespread spying-based adtech.

Meanwhile, many privacy advocates in the U.S. (including me) have invested hope in the California Consumer Privacy Act (CCPA), which will go into effect on January 1, 2020. I invite you to visit the operative language in that law, starting here. As legalese goes, it’s remarkably readable. Wikipedia compresses the law’s intentions rather well under the heading Intentions of the Act:

The intentions of the Act are to provide California residents with the right to:

  1. Know what personal data is being collected about them.
  2. Know whether their personal data is sold or disclosed and to whom.
  3. Say no to the sale of personal data.
  4. Access their personal data.
  5. Request a business to delete any personal information about a consumer collected from that consumer.
  6. Not be discriminated against for exercising their privacy rights.

Note that this presumes that nearly all agency resides on the data collectors’ side, and that the only agency possible on the individual’s side is asking to know what has been collected, or saying no to what the collectors can do with it.

That’s not enough.

Making matters worse is that we are mere “consumers” to the CCPA, “data subjects” to the GDPR and “users” to the computer industry—in each case with no more freedom and agency than what potential violators of our privacy (e.g. the websites and services of the world) separately grant us, through their countless, lengthy and infinitely varied privacy policies, terms and “agreements.”

In other words, we’re still at Square Zero, and Square One is neither the CCPA nor the GDPR. Those are relevant in the ways that guard rails are relevant to a winding road; but we don’t have the road yet.

While I’ve made it clear elsewhere that we need tech more than policy (because tech of our own—VRM tech—gives us agency), it will sure help to have policy that guides the deployment of that tech.

So, what law might actually open the way for VRM development, preferably by simply giving individuals a new power they’ve been lacking, such as real control over just one aspect of their privacy: what Louis Brandeis and Samuel Warren called “the right to be let alone” when we’re online?

I like two.

First is the Do-Not-Track Act of 2019. It’s model legislation from DuckDuckGo, and explained this way:

When you turn on the setting in your browser that says “Do Not Track”, you probably expect to no longer be tracked on most websites you visit. Right? Well, you would be wrong. But don’t worry, you’re not alone.

Our recent study on the Do Not Track (DNT) browser setting indicated that about a quarter of people have turned on this setting, and most were unaware big sites do not respect it. That means approximately 75 million Americans, 115 million citizens of the European Union, and many more people worldwide are, right now, broadcasting a DNT signal.

All of these people are actively asking the sites they visit to not track them. Unfortunately, no law requires websites to respect your Do Not Track signals, and the vast majority of sites, including most all of the big tech companies, sadly choose to simply ignore them.

Let’s change that now. Let’s put teeth behind this widely used browser setting by making a law that would align with current consumer expectations and empower people to more easily regain control of their online privacy.

While DuckDuckGo actively supports the passing of strong, comprehensive privacy laws, we also recognize that it will take time for them to take effect worldwide. In the meantime, governments can provide immediate relief by enacting separate, simpler Do Not Track legislation.

It is extremely rare to have such an exciting legislative opportunity like this, where the hardest work — coordinated mainstream technical implementation and widespread consumer adoption — is already done.

That’s why we’re announcing draft legislation that can serve as a starting point for legislators in America and beyond. It’s entitled the “Do-Not-Track Act of 2019” and, if it were to be enacted, would require sites to respect the Do Not Track browser setting in this manner:

  1. No third-party tracking by default. Data brokers would no longer be legally able to use hidden trackers to slurp up your personal information from the sites you visit. And the companies that deploy the most trackers across the web — led by Google, Facebook, and Twitter — would no longer be able to collect and use your browsing history without your permission.
  2. No first-party tracking outside what the user expects. For example, if you use Whatsapp, its parent company (Facebook) wouldn’t be able to use your data from Whatsapp in unrelated situations (like for advertising on Instagram, also owned by Facebook). As another example, if you go to a weather site, it could give you the local forecast, but not share or sell your location history.

Under this proposed law, these restrictions would only come into play if a consumer has turned on the Do Not Track signal for their Internet traffic. To keep the Internet from breaking, these restrictions would have very narrowly tailored exceptions for debugging, auditing, security, non-commercial security research, and reporting, and further limited by mandated data-minimization requirements.

In particular, each of these narrow exceptions would only apply if a site adopts strict data-minimization practices, such as using the least amount of personal information needed, and anonymizing it whenever possible. And importantly, this draft legislation takes a more realistic view of what constitutes anonymous data vs. de-identified data. Legislators need to appreciate that users can be re-identified unless companies implement extra measures of protection.

Katherine Druckman and I also talked about this a bit with Gabriel Weinberg, CEO and founder of DuckDuckGo, in our Reality 2.0 podcast with him last month.

The other is Adrian Gropper’s Patient Privacy Rights Information Governance Label. It says,

Patient Privacy Rights Information Governance Label

August 19, 2019

Note: 0-to-5 of the boxes to be checked by the application, device, or service provider.

1. No sharing: The data is never shared with any external entities. It is not even shared in de-identified form.

2. No aggregation: The data is never aggregated with other types of input or data from external sources. This includes mixing the data gathered via The Service with other data, such as patient-reported outcomes.

3. Always voluntary self-identification: The user of The Service is able to choose their own identity. The user does not need to have their identity verified unless required by law.

4. Digital agent support: The user is able to specify a digital agent, trustee, or equivalent information manager, and this specified agent will not be subject to certification or censorship.

5. No vendor lock-in: The Service is easily and conveniently substitutable, so the user can easily move their data to another vendor providing a similar service. This prevents vendor lock-in and is often accomplished using Open Standards.

Indications for Use: The five separately self-asserted statements on the PPR Information Governance Label are subject to legal enforcement as would the privacy policy associated with The Service.

While not proposed as a law, it would be good to have a law that imposes those requirements, and leaves room for individuals to provide for exceptions, for example when they have working relationships with a service provider.
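To make “machine-readable” concrete, here is a hypothetical sketch of how a user’s agent might check such a label if services published it as structured data. The field names are my invention, not PPR’s or IEEE 7012’s:

```python
# A self-asserted governance label modeled on the five PPR statements
# above. The field names are hypothetical, not part of any standard.
SERVICE_LABEL = {
    "no_sharing": True,              # never shared, even de-identified
    "no_aggregation": True,          # never mixed with external data
    "voluntary_self_id": True,       # user chooses their own identity
    "digital_agent_support": False,  # no support yet for a user's agent
    "no_vendor_lockin": True,        # data portable via open standards
}

def meets_requirements(label: dict, required: set) -> bool:
    """A user's agent could refuse services whose label leaves
    unchecked any box that the user requires."""
    return all(label.get(box, False) for box in required)

print(meets_requirements(SERVICE_LABEL, {"no_sharing", "no_vendor_lockin"}))  # True
print(meets_requirements(SERVICE_LABEL, {"digital_agent_support"}))           # False
```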

Maciej Ceglowski also has some good suggestions.


*Part 1 under Article 6 of the GDPR, covering the “Lawfulness of processing,” says, “Processing shall be lawful only if and to the extent that at least one of the following applies: (a) the data subject has given consent to the processing of his or her personal data for one or more specific purposes.” Hence the consent notices with an “accept” button in front of websites. These are most often presented as “cookie notices.” (Which are actually required by earlier EU law that was to some degree ignored until the GDPR came along.)

Whether a notice on the front of a website talks cookies or not, it usually means the site is obtaining your consent to being tracked “to personalize content and advertising” (or whatever) by spying on you. I’ve been told by GDPR experts that this really isn’t a loophole, and that most of these consent notices actually violate the GDPR’s letter and not just its spirit. Still, while that might be true, violation of the GDPR’s spirit remains normative.

On privacy fundamentalism

This is a post about journalism, privacy, and the common assumption that we can’t have one without sacrificing at least some of the other, because (the assumption goes), the business model for journalism is tracking-based advertising, aka adtech.

I’ve been fighting that assumption for a long time. People vs. Adtech is a collection of 129 pieces I’ve written about it since 2008. At the top of that collection, I explain,

I have two purposes here:

  1. To replace tracking-based advertising (aka adtech) with advertising that sponsors journalism, doesn’t frack our heads for the oil of personal data, and respects personal freedom and agency.

  2. To encourage journalists to grab the third rail of their own publications’ participation in adtech.

I bring that up because Farhad Manjoo (@fmanjoo) of The New York Times grabbed that third rail, in a piece titled I Visited 47 Sites. Hundreds of Trackers Followed Me. He grabbed it right here:

News sites were the worst

Among all the sites I visited, news sites, including The New York Times and The Washington Post, had the most tracking resources. This is partly because the sites serve more ads, which load more resources and additional trackers. But news sites often engage in more tracking than other industries, according to a study from Princeton.

Bravo.

That piece is one in a series called the Privacy Project, which picks up where the What They Know series in The Wall Street Journal left off in 2013. (The Journal for years had a nice shortlink to that series: wsj.com/wtk. It’s gone now, but I hope they bring it back. Julia Angwin, who led the project, has her own list.)

Knowing how much I’ve been looking forward to that rail-grab, people have been pointing me both to Farhad’s piece and a critique of it by Ben Thompson in Stratechery, titled Privacy Fundamentalism. On Farhad’s side is the idealist’s outrage at all the tracking that’s going on, and on Ben’s side is the realist’s call for compromise. Or, in his words, trade-offs.

I’m one of those privacy fundamentalists (with a Manifesto, even), so you might say this post is my push-back on Ben’s push-back. But what I’m looking for here is not a volley of opinion. It’s to enlist help, including Ben’s, in the hard work of actually saving journalism, which requires defeating tracking-based adtech, without which we wouldn’t have most of the tracking that outrages Farhad. I explain why in Brands need to fire adtech:

Let’s be clear about all the differences between adtech and real advertising. It’s adtech that spies on people and violates their privacy. It’s adtech that’s full of fraud and a vector for malware. It’s adtech that incentivizes publications to prioritize “content generation” over journalism. It’s adtech that gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.

Real advertising doesn’t do any of those things, because it’s not personal. It is aimed at populations selected by the media they choose to watch, listen to or read. To reach those people with real ads, you buy space or time on those media. You sponsor those media because those media also have brand value.

With real advertising, you have brands supporting brands.

Brands can’t sponsor media through adtech because adtech isn’t built for that. On the contrary, adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media.

Adtech is magic in this literal sense: it’s all about misdirection. You think you’re getting one thing while you’re really getting another. It’s why brands think they’re placing ads in media, while the systems they hire chase eyeballs. Since adtech systems are automated and biased toward finding the cheapest ways to hit sought-after eyeballs with ads, some ads show up on unsavory sites. And, let’s face it, even good eyeballs go to bad places.

This is why the media, the UK government, the brands, and even Google are all shocked. They all think adtech is advertising. Which makes sense: it looks like advertising and gets called advertising. But it is profoundly different in almost every other respect. I explain those differences in Separating Advertising’s Wheat and Chaff.

To fight adtech, it’s natural to look for help in the form of policy. And we already have some of that, with the GDPR, and soon the CCPA as well. But really we need the tech first. I explain why here:

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control. For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

The tech horse is a collection of tools that provide each of us with ways both to protect our privacy and to signal to others what’s okay and what’s not okay, and to do both at scale. Browsers, for example, are a good model for that. They give each of us, as users, scale across all the websites of the world. We didn’t have that when the online world for ordinary folk was a choice of CompuServe, AOL, Prodigy and other private networks. And we don’t have it today in a networked world where providing “choices” about being tracked is the separate responsibility of every different site we visit, each with its own ways of recording our “consents,” none of which are remembered, much less controlled, by any tool we possess. You don’t need to be a privacy fundamentalist to know that’s just fucked.

But that’s all me, and what I’m after. Let’s go to Ben’s case:

…my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly…

Let’s pause there. Concern about privacy is not hysteria. It’s simple, legitimate, and personal. As Don Marti and I (he first) pointed out, way back in 2015, ad blocking didn’t become the biggest boycott in world history in a vacuum. Its rise correlated with the “interactive” advertising business giving the middle finger to Do Not Track, which was never anything more than a polite request not to be followed away from a website:

Retargeting, (aka behavioral retargeting) is the most obvious evidence that you’re being tracked. (The Onion: Woman Stalked Across Eight Websites By Obsessed Shoe Advertisement.)

Likewise, people wearing clothing or locking doors are not “hysterical” about privacy. That people don’t like their naked digital selves being taken advantage of is also not hysterical.

Back to Ben…

…is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other.

Right. So let’s do the work. We haven’t started yet.

This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.

Good point, but does this excuse awful manners in the online world? Does it take off the table all the ways manners work well in the offline world—ways that ought to inform developments in the online world? I say no.

That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one.

Consider it appreciated. (In my own case I’ve been reveling in the wonders of networked life since the 80s. Examples of that are this, this and this.)

…the popular imagination about the danger this data collection poses, though, too often seems derived from the former [Stasi collecting highly personal information about individuals for very icky purposes] instead of the fundamentally different assumptions of the latter [Google and Facebook compiling massive amounts of data to be read by machines, mostly for non- or barely-icky purposes]. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems.

Such as—

• Facebook’s crackdown on API access after Cambridge Analytica has severely hampered research into the effects of social media, the spread of disinformation, etc.

True.

• Privacy legislation like GDPR has strengthened incumbents like Facebook and Google, and made it more difficult for challengers to succeed.

True.

Another bad effect of the GDPR is urging the websites of the world to throw insincere and misleading cookie notices in front of visitors, usually to extract “consent” that isn’t consent, for exactly what the GDPR was meant to thwart.

• Criminal networks from terrorism to child abuse can flourish on social networks, but while content can be stamped out, private companies, particularly domestically, are often limited as to how proactively they can go to law enforcement; this is exacerbated once encryption enters the picture.

True.

Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the end-all be-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).

It can also lead to good tech, which in turn can prevent bad policy. Or encourage good policy.

Towards Trade-offs
The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.

Wearing pants so nobody can see your crotch is not gray. That an x-ray machine can see your crotch doesn’t make personal privacy gray. Wrong is wrong.

To that end, I believe the privacy debate needs to be reset around these three assumptions:
• Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.

No. We need to accept that simple and universally accepted personal and social assumptions about privacy offline (for example, having the ability to signal what’s acceptable and what is not) are a good model for online as well.

I’ll put it another way: people need pants online. This is not an absolutist position, or even a fundamentalist one. The ability to cover one’s private parts, and to signal what’s okay and what’s not okay for respecting personal privacy are simple assumptions people make in the physical world, and should be able to make in the connected one. That it hasn’t been done yet is no reason to say it can’t or shouldn’t be done. So let’s do it.

• Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet,

Likewise, the widespread creation and spread of gossip is inherent to life in the physical world. But that doesn’t mean we can’t have civilized ways of dealing with it.

and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.

Tracking people everywhere so their eyes can be stabbed with “relevant” and “interest-based” advertising, in oblivity to negative externalities, is not a good idea or a positive outcome (beyond the money that’s made from it). Let’s at least get that straight before we worry about what might be extinguished by full agency for ordinary human beings.

To be clear, I know Ben isn’t talking here about full agency for people. I’m sure he’s fine with that. He’s talking about policy in general and specifically about the GDPR. I agree with what he says about that, and roughly about this too:

• Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.

Still, that doesn’t mean we can’t use what’s offline to inform what’s online. We need to appreciate and harmonize the virtues of both—mindful that the online world is still very new, and that many of the civilized and civilizing graces of life offline are good to have online as well. Privacy among them.

Finally, before getting to the work that energizes us here at ProjectVRM (meaning all the developments we’ve been encouraging for thirteen years), I want to say one final thing about privacy: it’s a moral matter. From Oxford, via Google: “concerned with the principles of right and wrong behavior” and “holding or manifesting high principles for proper conduct.”

Tracking people without their clear invitation or a court order is simply wrong. And the fact that tracking people is normative online today doesn’t make it right.

Shoshana Zuboff’s new book, The Age of Surveillance Capitalism, does the best job I know of explaining why tracking people online became normative—and why it’s wrong. The book is thick as a brick and twice as large, but fortunately Shoshana offers an abbreviated reason in her three laws, authored more than two decades ago:

First, that everything that can be automated will be automated. Second, that everything that can be informated will be informated. And most important to us now, the third law: In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

I don’t believe government restrictions and sanctions are the only ways to countervail surveillance capitalism (though uncomplicated laws such as this one might help). We need tech that gives people agency and companies better customers and consumers. From our wiki, here’s what’s already going on. And, from our punch list, here are some exciting TBDs, many of them already in the works.

I’m hoping Farhad, Ben, and others in a position to help can get behind those too.

VRM is Me2B

Most of us weren’t at the latest VRM Day or IIW (both of which happened in the week before last), so I’ll fill you in on a cool development there: a working synonym for VRM that makes a helluva lot more sense and may have a lot more box office.

That synonym is Me2B.

And by “we” I mean Lisa LeVasseur, who, in addition to everything behind that link, runs the new Me2B Alliance, which features the graphic there on the right (suggesting an individual in a driver’s seat). She is also the Vice Chair of the IEEE 7012 Standard for Machine Readable Personal Privacy Terms, a new effort with which some of us are also involved.

Lisa led many sessions at IIW, mostly toward solidifying what the Me2B Alliance will do. If you stay tuned to me2b.us, you can see how that work grows and evolves.

The main thing for me, in the here and now, is to share how much I like Me2B as a synonym for VRM.

It is also a synonym for C2B, of course; but it’s more personal. I also think it may have what it takes to imply Archimedes-grade leverage for individuals in the marketplace. For more on what I mean by that, see any or all of these:

I’m also putting this up to help me prep for mentioning Me2B tomorrow during this talk at the 2019 European Identity & Cloud Conference. It was at this same conference in 2008 that ProjectVRM won its first award. That’s it there on the right.

It’s becoming clear now that we were way ahead of a time that finally seems to be arriving.

Personal scale

Way back in 1995, when our family was still new to the Web, my wife asked a question that is one of the big reasons I started ProjectVRM: Why can’t I take my own shopping cart from one site to another?

The bad but true answer is that every site wants you to use their shopping cart. The good but not-yet-true answer is that nobody has invented it yet. By that I mean: not a truly personal one, based on open standards that make it possible for lots of developers to compete at making the best personal shopping cart for you.

Think about what you might be able to do with a PSC (Personal Shopping Cart) online that you can’t do with a physical one offline:

  • Take it from store to store, just as you do with your browser. This should go without saying, but it’s still worth repeating, because it would be way cool.
  • Have a list of everything parked already in your carts within each store.
  • Know what prices have changed, or are about to change, for the products in your carts in each store.
  • Notify every retailer you trust that you intend to buy X, Y or Z, with restrictions (meaning your terms and conditions) on the use of that information, and in a way that will let you know if those restrictions are violated. This is called intentcasting, and there are a pile of companies already in that business.
  • Have a way to change your name and other contact information, for all the stores you deal with, in one move.
  • Control your subscriptions to each store’s emailings and promotional materials.
  • Have your own way to express genuine loyalty, rather than suffering with as many coercive and goofy “loyalty programs” as there are companies.
  • Have a standard way to share your experiences with the companies that make and sell the products you’ve bought, and to suggest improvements—and for those companies to share back updates and improvements you should know about.
  • Have wallets of your own, rather than only those provided by platforms.
  • Connect to your collection of receipts, instruction manuals and other relevant information for all the stuff you’ve already bought or currently rent. (Note that this collection is for the Internet of your things—one you control for yourself, and is not a set of suction cups on corporate tentacles.)
  • Have your own standard way to call for service or support, for stuff you’ve bought or rented, rather than suffering with as many different ways to do that as there are companies you’ve engaged.

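As a thought experiment (nothing here is a standard; the record shape and names are mine), a personal shopping cart might be little more than a customer-owned record that any store’s systems could read:

```python
from dataclasses import dataclass, field

@dataclass
class CartItem:
    sku: str           # the store's product identifier
    store: str         # which store the item is parked in
    price_seen: float  # price when added, so price changes can be flagged

@dataclass
class PersonalShoppingCart:
    """A hypothetical customer-owned cart: one record, many stores.
    A real standard would add signing and permissioning by the owner."""
    owner_terms_url: str  # the customer's own terms (intentcasting restrictions, etc.)
    items: list = field(default_factory=list)

    def items_in(self, store: str) -> list:
        return [i for i in self.items if i.store == store]

cart = PersonalShoppingCart(owner_terms_url="https://example.customer/terms")
cart.items.append(CartItem("shoes-42", "store-a.example", 59.00))
cart.items.append(CartItem("tent-2p", "store-b.example", 129.00))
print([i.sku for i in cart.items_in("store-a.example")])  # ['shoes-42']
```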
All of these things are Me2B, and will give each of us scale, much as the standards that make the Internet, browsers and email all give us scale. And that scale will be just as good for the companies we deal with as are the Internet, browsers and email.

If you think “none of the stores out there will want any of this, because they won’t control it,” think about what personal operating systems and browsers on every device have already done for stores by making the customer interface standard. What we’re talking about here is enlarging that interface.

I’d love to see if there is any economics research and/or scholarship on personal scale and its leverage in the digital world (such as what personal operating systems, devices and browsers give us). Because it’s a case that needs to be made.

Of course, there’s money to be made as well, because there will be so many more, better and standard ways for companies to deal with customers than current tools (including email, apps and browsers) can provide by themselves.
