Category: Adblocking

Markets vs. Marketing in the Age of AI

Maybe history will defeat itself.

Remember FreePC? It was a thing, briefly, at the end of the last millennium, right before Y2K pooped the biggest excuse for a party in a thousand years. The idea was to put ads in the corner of your PC’s screen. The market gave it zero stars, and it failed.

And now comes Telly, hawking free TVs with ads in a corner, and a promise to “optimize your ad experience.” As if anybody wants an ad experience other than no advertising at all.

Negative demand for advertising has been well advertised by both ad blocking (the biggest boycott in human history) and ad-free “prestige” TV (or SVOD, for subscription video on demand). With those we gladly pay—a lot—not to see advertising. (See numbers here.)

But the advertising business (in the mines of which I toiled for too much of my adult life) has always smoked its own exhaust, and excels at getting high with generous funders. (Yeah, some advertising works, but on the whole people still hate it on the receiving end.)

The fun will come when our own personal AI bots, working for our own asses, do battle with the robot Nazgûls of marketing — and win, because we’re on the Demand side of the marketplace, and we’ll do a better job of knowing what we want and don’t want to buy than marketing’s surveillant AI robots can guess at. Supply will survive, of course. But markets will defeat marketing by taking out the middle creep.

The end state will be the one Cluetrain forecast in 1999, Linux Journal named in 2006, the VRM community began working toward that same year, and The Intention Economy detailed in 2012. The only thing all of them missed was how customer intentions might be helped by personal AI.

Personal.* Not personalized.

Markets will become new and better dances between Demand and Supply, simply because Demand will have better ways to take the lead, and not just follow all the time. Simple as that.


*For more on how this will work, see Individual Empowerment and Agency on a Scale We’ve Never Seen Before.

Is being less tasty vegetables our best strategy?

We are now being farmed by business. The pretense of the “customer is king” is now more like “the customer is a vegetable” — Adrian Gropper

That’s a vivid way to put the problem.

There are many approaches to solutions as well. One is suggested today in the latest by @_KarenHao in MIT Technology Review, titled

How to poison the data that Big Tech uses to surveil you:
Algorithms are meaningless without good data. The public can exploit that to demand change.

An excerpt:

In a new paper being presented at the Association for Computing Machinery’s Fairness, Accountability, and Transparency conference next week, researchers including PhD students Nicholas Vincent and Hanlin Li propose three ways the public can exploit this to their advantage:
Data strikes, inspired by the idea of labor strikes, which involve withholding or deleting your data so a tech firm cannot use it—leaving a platform or installing privacy tools, for instance.
Data poisoning, which involves contributing meaningless or harmful data. AdNauseam, for example, is a browser extension that clicks on every single ad served to you, thus confusing Google’s ad-targeting algorithms.
Conscious data contribution, which involves giving meaningful data to the competitor of a platform you want to protest, such as by uploading your Facebook photos to Tumblr instead.
People already use many of these tactics to protect their own privacy. If you’ve ever used an ad blocker or another browser extension that modifies your search results to exclude certain websites, you’ve engaged in data striking and reclaimed some agency over the use of your data. But as Hill found, sporadic individual actions like these don’t do much to get tech giants to change their behaviors.
What if millions of people were to coordinate to poison a tech giant’s data well, though? That might just give them some leverage to assert their demands.
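To make the “data poisoning” tactic concrete, here is a minimal sketch in TypeScript of the kind of thing an AdNauseam-style extension does. The ad-detection selector is a crude stand-in of my own; real extensions identify ads with curated filter lists such as EasyList.

```typescript
// A toy sketch of data poisoning: quietly "visit" every ad link on a
// page so the resulting click stream no longer reflects real interest.
// The selector below is an invented heuristic, not AdNauseam's code.

function collectAdUrls(): string[] {
  // Assumption: ad links point at a known ad-server domain.
  const links = document.querySelectorAll<HTMLAnchorElement>(
    'a[href*="doubleclick.net"], a[href*="googleadservices.com"]'
  );
  return Array.from(links, (a) => a.href);
}

async function poisonClickStream(): Promise<void> {
  for (const url of collectAdUrls()) {
    // Register a "click" with the tracker without leaving the page.
    // mode: "no-cors" because we only need the hit recorded, not the response.
    await fetch(url, { mode: "no-cors" }).catch(() => {
      // A failed fake click costs nothing; ignore errors.
    });
  }
}

poisonClickStream();
```

The point of the sketch is the asymmetry: a few lines of noise on the demand side degrades a profile that cost the tracker real money to build.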

The sourced paper* is titled Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies, and concludes,

In this paper, we presented a framework for using “data leverage” to give the public more influence over technology company behavior. Drawing on a variety of research areas, we described and assessed the “data levers” available to the public. We highlighted key areas where researchers and policymakers can amplify data leverage and work to ensure data leverage distributes power more broadly than is the case in the status quo.

I am all for screwing with overlords, and the authors suggest some fun approaches. Hell, we should all be doing whatever it takes, lawfully (and there is a lot of easement around that) to stop rampant violation of our privacy—and not just by technology companies. The customers of those companies, which include every website that puts up a cookie notice that nudges visitors into agreeing to be tracked all over the Web (in observance of the letter of the GDPR, while screwing its spirit), are also deserving of corrective measures. Same goes for governments who harvest private data themselves, or gather it from others without our knowledge or permission.

My problem with the framing of the paper and the story is that both start with the assumption that we are all so weak and disadvantaged that our only choices are: 1) to screw with the status quo to reduce its harms; and 2) to seek relief from policymakers.  While those choices are good, they are hardly the only ones.

Some context: wanton privacy violation in our digital world has been going on for only a little more than a decade, and that world is itself barely more than a couple dozen years old (dating from the appearance of e-commerce in 1995). We will also remain digital as well as physical beings for the next few decades or centuries.

So we need more than these kinds of prescriptive solutions. For example, real privacy tech of our own that starts with giving us the digital versions of the privacy protections we have enjoyed in the physical world for millennia: clothing, shelter, doors with locks, and windows with curtains or shutters.

We have been on that case with ProjectVRM since 2006, and there are many developments in progress. Some even comport with our Privacy Manifesto (a work in progress that welcomes improvement).

As we work on those, and think about throwing spanners into the works of overlords, it may also help to bear in mind one of Craig Burton‘s aphorisms: “Resistance creates existence.” What he means is that you can give strength to an opponent by fighting it directly. He applied that advice in the ’80s at Novell by embracing 3Com, Microsoft and other market opponents, inventing approaches that marginalized or obsolesced their businesses.

I doubt that will happen in this case. Resisting privacy violations has already had lots of positive results. But we do have a looong way to go.

Personally, I welcome throwing a Theia.


* The full list of authors is Nicholas Vincent, Hanlin Li (@hanlinliii), Nicole Tilly and Brent Hecht (@bhecht) of Northwestern University, and Stevie Chancellor (@snchencellor) of the University of Minnesota.

What law might clear the way for VRM/Me2B development?

VRM/Me2B developers shouldn’t have to wait for laws to pave the way through a wall-like status quo.  (And we say that in our Privacy Manifesto.) But a good law or two should help.

That was what I had hoped—even expected—the GDPR to do. Specifically, I called it “the world’s most heavily weaponized law protecting personal privacy,” said it was “aimed at companies that track people without asking,” and said it would “blow away the (mostly US-based) surveillance economy, especially tracking-based ‘adtech,’ which supports most commercial publishing online.”

That hasn’t happened.

It has been sixteen months since the GDPR went into effect (May 2018), and violation of personal privacy online today remains as pervasive as ever. Worse, violators take advantage of a loophole* in the GDPR that allows them to continue tracking people by requiring (or appearing to require) “consent” to cookies and other means of tracking (so you can get “personalized,” “interest-based” or “relevant” advertising, the perpetrators say). As long as the various EU countries’ Data Protection Authorities (who enforce the GDPR) fail to focus on the simple fact that nearly every website and its third parties are doing the same bad things Google and Facebook are accused of doing, the practice will continue, and the GDPR will remain a failure at stopping widespread spying-based adtech.

Meanwhile, many privacy advocates in the U.S. (including me) have invested hope in the California Consumer Privacy Act (CCPA), which will go into effect on January 1, 2020.  I invite you to visit the operative language in that law, starting  here. As legalese goes, it’s remarkably readable. Meanwhile, Wikipedia compresses these rather well under the heading Intentions of the Act:

The intentions of the Act are to provide California residents with the right to:

  1. Know what personal data is being collected about them.
  2. Know whether their personal data is sold or disclosed and to whom.
  3. Say no to the sale of personal data.
  4. Access their personal data.
  5. Request a business to delete any personal information about a consumer collected from that consumer.
  6. Not be discriminated against for exercising their privacy rights.

Note that this presumes that nearly all agency resides on the data collectors’ side, and that the only agency possible on the individual’s side is asking to know, or saying no to, what those who collected personal data can do with it.

That’s not enough.

Making matters worse is that we are mere “consumers” to the CCPA, “data subjects” to the GDPR and “users” to the computer industry—in each case with no more freedom and agency than what potential violators of our privacy (e.g. the websites and services of the world) separately grant us, through their countless, lengthy and infinitely varied privacy policies, terms and “agreements.”

In other words, we’re still at Square Zero, and Square One is neither the CCPA nor the GDPR. Those are relevant in the ways that guard rails are relevant to a winding road; but we don’t have the road yet.

While I’ve made it clear elsewhere that we need tech more than policy (because tech of our own—VRM tech—gives us agency), it will sure help to have policy that guides the deployment of that tech.

So,  what law might actually open the way for VRM development, preferably by simply giving individuals a new power they’ve been lacking, such as real control over just one aspect of their privacy: what Louis Brandeis and Samuel Warren called “the right to be let alone” when we’re online?

I like two.

First is the Do-Not-Track Act of 2019. It’s model legislation from DuckDuckGo, and explained this way:

When you turn on the setting in your browser that says “Do Not Track”, you probably expect to no longer be tracked on most websites you visit. Right? Well, you would be wrong. But don’t worry, you’re not alone.

Our recent study on the Do Not Track (DNT) browser setting indicated that about a quarter of people have turned on this setting, and most were unaware big sites do not respect it. That means approximately 75 million Americans, 115 million citizens of the European Union, and many more people worldwide are, right now, broadcasting a DNT signal.

All of these people are actively asking the sites they visit to not track them. Unfortunately, no law requires websites to respect your Do Not Track signals, and the vast majority of sites, including most all of the big tech companies, sadly choose to simply ignore them.

Let’s change that now. Let’s put teeth behind this widely used browser setting by making a law that would align with current consumer expectations and empower people to more easily regain control of their online privacy.

While DuckDuckGo actively supports the passing of strong, comprehensive privacy laws, we also recognize that it will take time for them to take effect worldwide. In the meantime, governments can provide immediate relief by enacting separate, simpler Do Not Track legislation.

It is extremely rare to have such an exciting legislative opportunity like this, where the hardest work — coordinated mainstream technical implementation and widespread consumer adoption — is already done.

That’s why we’re announcing draft legislation that can serve as a starting point for legislators in America and beyond. It’s entitled the “Do-Not-Track Act of 2019” and, if it were to be enacted, would require sites to respect the Do Not Track browser setting in this manner:

  1. No third-party tracking by default. Data brokers would no longer be legally able to use hidden trackers to slurp up your personal information from the sites you visit. And the companies that deploy the most trackers across the web — led by Google, Facebook, and Twitter — would no longer be able to collect and use your browsing history without your permission.
  2. No first-party tracking outside what the user expects. For example, if you use Whatsapp, its parent company (Facebook) wouldn’t be able to use your data from Whatsapp in unrelated situations (like for advertising on Instagram, also owned by Facebook). As another example, if you go to a weather site, it could give you the local forecast, but not share or sell your location history.

Under this proposed law, these restrictions would only come into play if a consumer has turned on the Do Not Track signal for their Internet traffic. To keep the Internet from breaking, these restrictions would have very narrowly tailored exceptions for debugging, auditing, security, non-commercial security research, and reporting, and further limited by mandated data-minimization requirements.

In particular, each of these narrow exceptions would only apply if a site adopts strict data-minimization practices, such as using the least amount of personal information needed, and anonymizing it whenever possible. And importantly, this draft legislation takes a more realistic view of what constitutes anonymous data vs. de-identified data. Legislators need to appreciate that users can be re-identified unless companies implement extra measures of protection.
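Technically, the signal all those people are broadcasting is just an HTTP request header: `DNT: 1`. A site that wanted to honor it, voluntarily or under a law like this one, needs very little code. Here’s a minimal sketch in TypeScript on Node; the `Tk: N` response header is the W3C Tracking Preference Expression convention for “not tracking.”

```typescript
// Minimal sketch: a server that honors the "DNT: 1" request header by
// serving pages with no cookies and no trackers.
import http from "node:http";

http.createServer((req, res) => {
  // Node lowercases header names; browsers send "DNT: 1" when the
  // Do Not Track setting is on.
  const doNotTrack = req.headers["dnt"] === "1";

  if (doNotTrack) {
    // Respect the signal: no Set-Cookie, no tracking scripts, and a
    // "Tk: N" response header ("not tracking").
    res.writeHead(200, { "Content-Type": "text/html", "Tk": "N" });
    res.end("<p>Welcome. You are not being tracked.</p>");
  } else {
    res.writeHead(200, {
      "Content-Type": "text/html",
      "Set-Cookie": "session=abc123", // placeholder session cookie
    });
    res.end("<p>Welcome.</p>");
  }
}).listen(8080);
```

That’s the whole “coordinated mainstream technical implementation” DuckDuckGo is talking about: the signal is already sent; the law would just require the branch above to be taken.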

Katherine Druckman and I also talked about this a bit with Gabriel Weinberg, CEO and founder of DuckDuckGo, in our Reality 2.0 podcast with him last month.

The other is Adrian Gropper‘s Patient Privacy Rights Information Governance Label. It says,

Patient Privacy Rights Information Governance Label

August 19, 2019

Note: 0-to-5 of the boxes to be checked by the application, device, or service provider.

1. No sharing: The data is never shared with any external entities. It is not even shared in de-identified form.

2. No aggregation: The data is never aggregated with other types of input or data from external sources. This includes mixing the data gathered via The Service with other data, such as patient-reported outcomes.

3. Always voluntary self-identification: The user of The Service is able to choose their own identity. The user does not need to have their identity verified unless required by law.

4. Digital agent support: The user is able to specify a digital agent, trustee, or equivalent information manager, and this specified agent will not be subject to certification or censorship.

5. No vendor lock-in: The Service is easily and conveniently substitutable, so the user can easily move their data to another vendor providing a similar service. This prevents vendor lock-in and is often accomplished using Open Standards.

Indications for Use: The five separately self-asserted statements on the PPR Information Governance Label are subject to legal enforcement as would the privacy policy associated with The Service.

While the label is not proposed as a law, it would be good to have a law that imposes those requirements and leaves room for individuals to provide for exceptions, for example when they have working relationships with a service provider.
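Because the label is just five self-asserted, checkable statements, it maps naturally onto a machine-readable form that a law could reference. Here’s a sketch, using field names I made up for illustration; PPR has not defined any such format.

```typescript
// Sketch of the PPR Information Governance Label as data. Field names
// are invented; PPR specifies the five statements, not a wire format.
interface GovernanceLabel {
  noSharing: boolean;                   // 1. never shared, even de-identified
  noAggregation: boolean;               // 2. never mixed with external data
  voluntarySelfIdentification: boolean; // 3. user chooses their own identity
  digitalAgentSupport: boolean;         // 4. user may appoint an uncensored agent
  noVendorLockIn: boolean;              // 5. data is portable to a similar service
}

// Example: a service self-asserting four of the label's five boxes.
// Per the label's Indications for Use, each true value would be as
// legally enforceable as the service's privacy policy.
const exampleService: GovernanceLabel = {
  noSharing: true,
  noAggregation: true,
  voluntarySelfIdentification: true,
  digitalAgentSupport: false,
  noVendorLockIn: true,
};
```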

Maciej Ceglowski also has some good suggestions.


*Part 1 under Article 6 of the GDPR, covering the “Lawfulness of processing,” says, “Processing shall be lawful only if and to the extent that at least one of the following applies: (a) the data subject has given consent to the processing of his or her personal data for one or more specific purposes.” Hence the consent notices with an “accept” button in front of websites.  These are most often presented as “cookie notices.” (Which are actually required by earlier EU law that was to some degree ignored until the GDPR came along.)

Whether a notice on the front of a website talks cookies or not, it usually means the site is obtaining your consent to being tracked “to personalize content and advertising” (or whatever) by spying on you. I’ve been told by GDPR experts that this really isn’t a loophole, and that most of these consent notices actually violate the GDPR’s letter and not just its spirit. Still, while that might be true, violation of the GDPR’s spirit remains normative.

The Wurst of the Web

Don’t think about what’s wrong on the Web. Think about what pays for it. Better yet, look at it.

Start by installing Privacy Badger in your browser. Then look at what it tells you about every site you visit. With very few exceptions (e.g. Internet Archive and Wikipedia), all are putting tracking beacons (the wurst cookie flavor) in your browser. These then announce your presence to many third parties, mostly unknown and all unseen, at nearly every subsequent site you visit, so you can be followed and profiled and advertised at. And your profile might be used for purposes other than advertising. There’s no way to tell.
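To see how those beacons work, consider how little code a tracker needs. Here’s a toy sketch in TypeScript on Node (not any real tracker’s code, and the domain is hypothetical): any site that embeds an image from this server lets it recognize the same browser everywhere, because the browser sends back the tracker’s cookie and a Referer header with every request.

```typescript
// A toy tracking beacon. Any page that embeds
//   <img src="http://tracker.example:8081/pixel.gif">
// sends this server its cookie and a Referer header, which is enough
// to follow one browser across every site that embeds the pixel.
import http from "node:http";
import crypto from "node:crypto";

http.createServer((req, res) => {
  // Reuse the browser's id cookie if present; otherwise mint one.
  const uid =
    /uid=([a-f0-9]+)/.exec(req.headers.cookie ?? "")?.[1] ??
    crypto.randomBytes(8).toString("hex");

  // The Referer header names the page (and therefore the site) visited.
  console.log(`browser ${uid} seen on ${req.headers.referer ?? "unknown page"}`);

  res.writeHead(200, {
    "Content-Type": "image/gif",
    "Set-Cookie": `uid=${uid}; Max-Age=31536000`, // recognize it for a year
  });
  // A 1x1 transparent GIF: the classic tracking pixel.
  res.end(
    Buffer.from("R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", "base64")
  );
}).listen(8081);
```

Privacy Badger’s job, in essence, is noticing third-party domains like that one showing up across multiple sites you visit, and blocking them.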

This practice—tracking people without their invitation or knowledge—is at the dark heart and sold soul of what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity. (The italicized links go to books on the topic, both of which came out in the last year. Buy them.)

While that system’s business is innocuously and misleadingly called advertising, the surveilling part of it is called adtech. The most direct ancestor of adtech is not old-fashioned brand advertising. It’s direct marketing, best known as junk mail. (I explain the difference in Separating Advertising’s Wheat and Chaff.)

In the online world, brand advertising and adtech look the same, but underneath they are as different as bread and dirt. While brand advertising is aimed at broad populations and sponsors media it considers worthwhile, adtech does neither. Like junk mail, adtech wants to be personal, wants a direct response, and ignores massive negative externalities. It also uses media to mark, track and advertise at eyeballs, wherever those eyeballs might show up. (This is how, for example, a Wall Street Journal reader’s eyeballs get shot with an ad for, say, Warby Parker, on Breitbart.) So adtech follows people, profiles them, and adjusts its offerings to maximize engagement, meaning getting a click. It also works constantly to put better crosshairs on the brains of its human targets; and it does this for both advertisers and other entities interested in influencing people. (For example, to swing an election.)

For most reporters covering this, the main objects of interest are the two biggest advertising intermediaries in the world: Facebook and Google. That’s understandable, but they’re just the tip of the wurstberg.  Also, in the case of Facebook, it’s quite possible that it can’t fix itself. See here:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is composed of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting and amplify tribal prejudices (including genocidal ones)—besides whatever good it does for users and advertisers.

The hard work here is solving the problems that corrupted Facebook so thoroughly, and that are doing the same to all the media that depend on surveillance capitalism to re-engineer us all.

Meanwhile, because lawmaking is moving apace in any case, we should also come up with model laws and regulations that insist on respect for private spaces online. The browser is a private space, so let’s start there.

Here’s one constructive suggestion: get the browser makers to meet next month at IIW, an unconference that convenes twice a year at the Computer History Museum in Silicon Valley, and work this out.

Ann Cavoukian (@AnnCavoukian) got things going on the organizational side with Privacy By Design, which is now also embodied in the GDPR. She has also made clear that the same principles should apply on the individual’s side.  So let’s call the challenge there Privacy By Default. And let’s have it work the same in all browsers.

I think it’s really pretty simple: the default is no. If we want to be tracked for targeted advertising or other marketing purposes, we should have ways to opt into that. But not some modification of the ways we have now, where every @#$%& website has its own methods, policies and terms, none of which we can track or audit. That is broken beyond repair and needs to be pushed off a cliff.

Among the capabilities we need on our side are 1) knowing what we have opted into, and 2) ways to audit what is done with information we have given to organizations, or has been gleaned about us in the course of our actions in the digital world. Until we have ways of doing both,  we need to zero-base the way targeted advertising and marketing is done in the digital world. Because spying on people without an invitation or a court order is just as wrong in the digital world as it is in the natural one. And you don’t need spying to target.
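As a sketch of what capability 1 might look like in code, here’s a user-side opt-in ledger. All names are invented; the point is that the individual, not each site, holds the canonical record of what was agreed to.

```typescript
// Sketch of a user-side opt-in ledger: the individual keeps the
// canonical record of consent. All names here are invented.
interface OptInRecord {
  site: string;                     // who received the permission
  purpose: "targeted-ads" | "analytics" | "personalization";
  grantedAt: Date;
  expiresAt: Date;                  // grants should lapse, not live forever
}

class ConsentLedger {
  private records: OptInRecord[] = [];

  grant(record: OptInRecord): void {
    this.records.push(record);
  }

  // Capability 1: know exactly what we have opted into, right now.
  active(now: Date = new Date()): OptInRecord[] {
    return this.records.filter((r) => r.expiresAt > now);
  }

  // The raw trail a future audit (capability 2) would start from.
  auditTrail(): readonly OptInRecord[] {
    return this.records;
  }
}
```

Note the inversion of the status quo: today every site keeps its own record of our “consent,” in its own format, on its own servers. A ledger like this puts the record where the agency should be.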

And don’t worry about lost business. There are many larger markets to be made on the other side of that line in the sand than we have right now in a world where more than 2 billion people block ads, and among the reasons they give are “Ads might compromise my online privacy,” and “Stop ads being personalized.”

Those markets will be larger because incentives will be aligned around customer agency. And customers will want a lot more from the market’s supply side than surveillance-based sausage looking for clicks.

Good news for publishers and advertisers fearing the GDPR

The GDPR (General Data Protection Regulation) is the world’s most heavily weaponized law protecting personal privacy. It is aimed at companies that track people without asking, and its ordnance includes fines of up to 4% of worldwide revenues over the prior year.

The law’s purpose is to blow away the (mostly US-based) surveillance economy, especially tracking-based “adtech,” which supports most commercial publishing online.

The deadline for compliance is 25 May 2018, just a couple hundred days from now.

There is no shortage of compliance advice online, much of it coming from the same suppliers that talked companies into harvesting lots of the “big data” that security guru Bruce Schneier calls a toxic asset. (Go to https://www.google.com/search?q=GDPR and see whose ads come up.)

There is, however, an easy and 100% GDPR-compliant way for publishers to continue running ads and for companies to continue advertising. All the publisher needs to do is agree with this request from readers:

That request, along with its legal and machine-readable expressions, will live here:

The agreements themselves can be recorded anywhere.

There is no easier way for publishers and advertisers to avoid getting fined by the EU for violating the GDPR. Agreeing to exactly what readers request puts both in full compliance.
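To give a feel for what the machine-readable expression of such a request might look like, here’s a sketch in TypeScript. The field names and values are my own invention for illustration, not the Customer Commons format.

```typescript
// Sketch of a reader-offered #NoStalking-style term and a recorded
// agreement. All field names are invented.
const noStalkingTerm = {
  term: "#NoStalking",
  offeredBy: "reader",
  request: "Show me ads not based on tracking me",
  allows: ["contextual-ads"], // ads based on the page, not the person
  forbids: ["third-party-tracking", "profile-based-targeting"],
};

// "The agreements themselves can be recorded anywhere." One simple way
// is a timestamped receipt kept by both parties.
const agreementReceipt = {
  term: noStalkingTerm.term,
  publisher: "publisher.example",  // hypothetical
  reader: "anonymous-reader-7f3a", // no identity needed to agree
  acceptedAt: new Date().toISOString(),
};

console.log(JSON.stringify(agreementReceipt, null, 2));
```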

Some added PR for advertisers will come from running what I suggest they call #SafeAds. If markets are conversations (as marketers have been yakking about since The Cluetrain Manifesto), #SafeAds will be a great GDPR conversation for everyone to have:

Here are some #SafeAds benefits that will make great talking points, especially for publishers and advertisers:

  1. Unlike adtech, which tracks eyeballs off a publisher’s site and then shoots ads at those eyeballs anywhere they can be found (including the Web’s cheapest and shittiest sites), #SafeAds actually sponsor the publisher. They say “we value this publication and the readers it brings to us.”
  2. Unlike adtech, #SafeAds carry no operational overhead for the publisher and no cognitive overhead for readers—because there are no worries for either party about where an ad comes from or what it’s doing behind the scenes. There’s nothing tricky about it.
  3. Unlike adtech, #SafeAds carry no fraud or malware, because they can’t. They go straight from the publisher or its agency to the publication, avoiding the corrupt four-dimensional shell game adtech has become.
  4. #SafeAds carry full-power creative and economic signals, which adtech can’t do at all, for the reasons just listed. It’s no coincidence that nearly every major brand you can name was made by #SafeAds, while adtech has not produced a single one. In fact adtech has an ugly history of hurting brands by annoying people with advertising that is unwelcome, icky, or both.
  5. Perhaps best of all for publishers, advertisers will pay more for #SafeAds, because those ads are worth more.

#NoStalking and #SafeAds can also benefit social media platforms now in a world of wonder and hurt (example: this Zuckerberg hostage video). The easiest thing for them to do is go freemium, with few or no ads (and only safe ones) on the paid side, and nothing but #SafeAds on the free side, in obedience to #NoStalking requests, whether expressed or not.

If you’re a publisher, an advertiser, a developer, an exile from the adtech world, or anybody else who wants to help out, talk to us. That deadline is a hard one, and it’s coming fast.

VRM Day: Starting Phase Two

VRM Day is today, 24 October, at the Computer History Museum. IIW follows, over the next three days at the same place. (The original version of this post appeared on October 17.)

We’ve been doing VRM Days since (let’s see…) this one in 2013, and VRM events since this one in 2007. Coming on our tenth anniversary, this is our last in Phase One.

The difference between Phase One and Phase Two is the difference between rocks and snowballs. In Phase One we played Sisyphus, pushing a rock uphill. In Phase Two we roll snowballs downhill.

Phase One was about getting us to the point where VRM was accepted by many as a thing bound to happen. This has taken ten years, but we are there.

Phase Two is about making it happen, by betting our energies on ideas and work that start rolling downhill, gaining size and momentum.

Some of that work is already rolling. Some is poised to start. Both kinds will be on the table at VRM Day. Here are ones currently on the agenda:

  • VRM + CRM via JLINC. See At last: a protocol to link VRM and CRM and The new frontier for CRM is CDL: customer driven leads. This is one form of intentcasting that should be enormously appealing to CRM companies and their B2B corporate customers. Speaking of which, we also have—
  • Big companies welcoming VRM.  Leading this is Fing, a French think tank that brings together many of the country’s largest companies, both to welcome VRM and to research (e.g. through Mesinfos) how the future might play out. Sarah Medjek of Fing will present that work, and lead discussion of where it will head next. We will also get a chance to participate in that research by providing her with our own use cases for VRM. (We’ll take out a few minutes to each fill out an online form.)
  • Terms individuals assert in dealings with companies. These are required for countless purposes. Mary Hodder will lead discussion of terms currently being developed at Customer Commons and the CISWG / Kantara User Submitted Terms working group (Consent and Information Sharing Working Group). Among other things, this leads to—
  • Next steps in tracking protection and ad blocking. At the last VRM Day and IIW, we discussed CHEDDAR on the server side and #NoStalking on the individual’s side. There are now huge opportunities with both, especially if we can normalize #NoStalking terms for all tracking protection and ad blocking tools. To prep for this, see Why #NoStalking is a good deal for publishers, which includes an image copied from the whiteboard on VRM Day.
  • Blockchain, Identity and VRM. Read what Phil Windley has been writing lately about distributed ledgers (e.g. blockchain) and what they bring to the identity discussions that have been happening for 22 IIWs, so far. There are many relevancies to VRM.
  • Personal data. This was the main topic at two recent big events in Europe: MyData2016 in Helsinki and PIE (personal information economy) 2016 in London. The long-standing anchor for discussions and work on the topic at VRM Day and IIW is PDEC (Personal Data Ecosystem Consortium). Dean Landsman of PDEC will keep that conversational ball rolling. Adrian Gropper will also brief us on recent developments around personal health data.
  • Hacks on the financial system. Kevin Cox can’t make it, but wants me to share what he would have presented. Three links: 1) a one-minute video that shows why the financial system is so expensive, 2) part of a blog post respecting his local Water Authority and newly elected government, and 3) an explanation of how we can build low-cost systems of interacting agents. He adds, “Note the progression from location, to address, to identity, to money, to housing. They are all ‘the same.’” We will also look at how small businesses and individuals have more in common with each other than either has with big business. With a hint toward that, see what Xero (the very hot small-business accounting software company) says here.
  • What ProjectVRM becomes. We’ve been a Berkman-Klein Center project from the start. We’ve already spun off Customer Commons. Inevitably, ProjectVRM will itself be spun off, or evolve in some TBD way. We need to co-think and co-plan how that will go. It will certainly live on in the DNA of VRM and VRooMy work of many kinds. How and where it lives on organizationally is an open question we’ll need to answer.

Here is a straw man context for all of those and more.

  • Top Level: Tools for people. These are ones which, in legal terms, give individuals power as first parties. In mathematical terms, they make us independent variables, rather than dependent ones. Our focus from the start has been independence and engagement.
    • VRM in the literal sense: whatever engages companies’ CRM or equivalent systems.
    • Intentcasting.
    • PIMS—Personal Information Management Systems. Goes by many names: personal clouds, personal data stores, life management platforms and so on. Ctrl-Shift has done a good job of branding PIMS, however. We should all just go with that.
    • Privacy tools. Such as those provided by tracking protection (and tracking-protective ad blocking).
    • Legal tools. Such as the terms Customer Commons and the CISWG are working on.
    • UI elements. Such as the r-button.
    • Transaction & payment systems. Such as EmanciPay.

Those overlap to some degree. For example, a PIMS app and data store can do all that stuff. But we do need to pull the concerns and categories apart as much as we can, just so we can talk about them.

Kaliya will facilitate VRM Day. She and I are still working on the agenda. Let us know what you’d like to add to the list above, and we’ll do what we can. (At IIW, you’ll do it, because it’s an unconference. That’s where all the topics are provided by participants.)

Again, register here. And see you there.

 


It’s People vs. Advertising, not Publishers vs. Adblockers

By now hundreds of millions of people have gone to the privacy aisles of the pharmacy departments  in their local app stores and chosen a brand of sunblock to protect themselves from unwanted exposure to the harmful rays of advertising online.

There are many choices among potions on those shelves, but basically they do one, two or three of these things: block ads, let “acceptable” ads through, and protect against tracking.

The most popular ad blocker, Adblock Plus, is configurable to do all three, but defaults to allow “acceptable”* ads and not to block tracking.

Tracking protection products, such as Baycloud Bouncer, Ghostery, Privacy Badger and RedMorph, are not ad blockers, but can be mistaken for them. (That’s what happens for me when I’m looking at Wired through Privacy Badger on Firefox.)

It is important to recognize these distinctions, for two reasons:

  1. Ad blocking, allowing “acceptable” ads, and tracking protection are different things.
  2. All three of those things answer market demand. They are clear evidence of the marketplace at work.
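One way to see the first distinction is that the three functions are independent toggles, not one thing. A sketch (the settings shapes are invented for illustration; the defaults match the descriptions above):

```typescript
// The three protections as independent toggles. Shapes are invented;
// the defaults below reflect the tools as described in this post.
interface ProtectionSettings {
  blockAds: boolean;
  allowAcceptableAds: boolean; // only meaningful when blockAds is true
  blockTracking: boolean;
}

// Adblock Plus defaults: blocks ads, lets "acceptable" ones through,
// does not block tracking.
const adblockPlusDefaults: ProtectionSettings = {
  blockAds: true,
  allowAcceptableAds: true,
  blockTracking: false,
};

// Privacy Badger: tracking protection, often mistaken for an ad blocker.
const privacyBadger: ProtectionSettings = {
  blockAds: false,
  allowAcceptableAds: false,
  blockTracking: true,
};
```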

Meanwhile, nearly all press coverage of what’s going on here defaults to “(name of publisher or website here) vs. ad blockers.”

This misdirects attention away from what is actually going on: people making choices in the open market to protect themselves from intrusions they do not want.

Ad blocking and tracking protection are effects, not causes. Blame for them should not go to the people protecting themselves, or to those providing them with means for protection, but to the sources and agents of harm. Those are:

  1. Companies producing ads (aka brands)
  2. Companies distributing the ads
  3. Companies publishing the ads
  4. All producers of unwanted tracking

That’s it.

Until we shift discussion to the simple causes and effects of supply and demand, with full respect for individual human beings and the legitimate choices they make in the open marketplace to protect the sovereign personal spaces in their lives online, we’ll be stuck in war and sports coverage that misses the simple facts underlying the whole damn thing.

Until we get straight what’s going on here, we won’t be able to save those who pay for and benefit from advertising online.

Which I am convinced we can do. I’ve written plenty about that already here.

* These are controversial. I don’t go into that here, however, because I want to shift attention from spin to facts.

 

 

