Category: tracking protection

How yours is your car?

Peugeot

I’ve owned a lot of bad cars over the decades. But some I’ve loved, at least when they were on the road. One was the 1965 Peugeot 404 wagon whose interior you see above, occupied by family dog Christy, guarding the infant seat next to her. You’ll note that the hood is open, because I was working on it at the time, which was constantly while I owned it.

I shot that photo in early 1974, not long after arriving at our new home in Graham, North Carolina. The trip down from our old home in far northern New Jersey was one of the most arduous I’ve ever taken, with frequent stops to fix whatever went wrong along the way, which was plenty.

Trouble started when a big hunk of rusted floor fell away beneath my feet, so I could see the New Jersey Turnpike whizzing by down there, while worrying that the driver’s seat itself might fall to the moving pavement, and my ass with it.

The floor had rusted because rainwater would gather in the air vents between the far side of the windshield and the dashboard, then suddenly splat down on one’s feet, and the floor, as soon as the car began to move. (The floor was prepared for this with a drainage system of tubes laminated between layers of metal, meant to carry downward whatever water fell on top. Great foresight, I suppose. But less prepared was the metal itself, which was determined to rust.)

Later, a can attached to the exhaust manifold blew to pieces, so sound and exhaust straight from the engine rattled like a machine gun, could be heard to the horizons in all directions, and echoed into the cabin off the pavement through the new hole in the floor. I am sure that the hearing loss I have now began right then.

I replaced the lost metal with an emptied V8 juice can that I filled with steel wool for percussive exhaust damping, and fastened into place with baling wire that I carried just in case of, well, anything. I also always carried a large toolbox, because you never know. If you owned a cheap used car back in those days, you had to be ready for anything.

The car did have its appeals, some of which were detailed by coincidence a month ago by Raphael Orlove in Jalopnik, calling this very model the best wagon he’s ever driven. His reasons were correct—for a working car. The best feature was a cargo area so capacious that I once loaded a large office desk into it with room to spare. It also had double shocks on the rear axle, to help handle the load, plus other arcane graces meant for heavy use, such as a device in the brake fluid line to the rear axle that kept the brakes from locking up when both rear wheels were spinning but off the ground. This, I was told, was for drivers on rough dirt roads in Africa.

While the Peugeot 404 was not as weird in its time as the Citroën DS or 2CV (both of which my friend Julius called “triumphs of French genius over French engineering”), it was still weird as shit in some remarkably impractical ways.

For example, screw-on hubcaps. These meant no tire machine could handle changing a tire, and you had to do the job by hand with tire irons and a sledgehammer. I carried those too. For unknown reasons, Peugeot also hid the spark plugs way down inside the valve cover, and fed them electricity through springs inside bakelite sleeves that were easy to break and would malfunction even when they weren’t broken.

I could go on, but all that stuff is beside my point, which is that this car was, while I had it, mine. I could fix it myself, or take it to a mechanic friendly to the car’s oddities. While some design features were odd or crazy, there were no mysteries about how the car worked, or how to fix or replace its parts. More importantly, it contained no means for reporting its behavior or use back to Peugeot, or to anybody.

It’s very different today. That difference is nicely unpacked in A Fight Over the Right to Repair Cars Turns Ugly, by Aarian Marshall in Wired. At issue are right-to-repair laws, such as the one currently raising a fuss in Massachusetts.

See, all of us and our mechanics had a right to repair our own cars for most of the time since automobiles first hit the road. But cars in recent years have become digital as well as mechanical beings. One good thing about this is that lots of helpful diagnostics can be revealed. One bad thing is that many of those diagnostics are highly proprietary to the carmakers, as the cars themselves become so vertically integrated that only dealers can repair them.
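Those diagnostics, by the way, are not magic. The codes a scan tool pulls from a car’s OBD-II port arrive as pairs of raw bytes, which decode into familiar trouble codes like P0301 (cylinder 1 misfire) under the SAE J2012 scheme. Here’s a minimal sketch of that decoding—my own illustration, not any vendor’s actual tool code:

```python
# Sketch of how an OBD-II scan tool turns two raw bytes into a diagnostic
# trouble code (DTC) such as "P0301", per the SAE J2012 encoding.

def decode_dtc(byte1: int, byte2: int) -> str:
    systems = "PCBU"  # Powertrain, Chassis, Body, network (U)
    letter = systems[(byte1 >> 6) & 0x03]  # top two bits pick the system
    d1 = (byte1 >> 4) & 0x03               # next two bits: first digit
    d2 = byte1 & 0x0F                      # low nibble: second digit
    d3 = (byte2 >> 4) & 0x0F               # high nibble of byte 2: third digit
    d4 = byte2 & 0x0F                      # low nibble of byte 2: fourth digit
    return f"{letter}{d1}{d2:X}{d3:X}{d4:X}"

print(decode_dtc(0x03, 0x01))  # P0301 — cylinder 1 misfire
print(decode_dtc(0x41, 0x23))  # C0123 — a chassis-system code
```

The format itself is an open standard; what the carmakers keep proprietary is the richer data and documentation behind it.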

But there is hope. Reports Aarian,

…today anyone can buy a tool that will plug into a car’s port, accessing diagnostic codes that clue them in to what’s wrong. Mechanics are able to purchase tools and subscriptions to manuals that guide them through repairs.

So for years, the right-to-repair movement has held up the automotive industry as the rare place where things were going right. Independent mechanics remain competitive: 70 percent of auto repairs happen at independent shops, according to the US trade association that represents them. Backyard tinkerers abound.

But new vehicles are now computers on wheels, gathering an estimated 25 gigabytes per hour of driving data—the equivalent of five HD movies. Automakers say that lots of this information isn’t useful to them and is discarded. But some—a vehicle’s location, how specific components are operating at a given moment—is anonymized and sent to the manufacturers; sensitive, personally identifying information like vehicle identification numbers are handled, automakers say, according to strict privacy principles.

These days, much of the data is transmitted wirelessly. So independent mechanics and right-to-repair proponents worry that automakers will stop sending vital repair information to the diagnostic ports. That would hamper the independents and lock customers into relationships with dealerships. Independent mechanics fear that automakers could potentially “block what they want” when an independent repairer tries to access a car’s technified guts, Glenn Wilder, the owner of an auto and tire repair shop in Scituate, Massachusetts, told lawmakers in 2020.

The fight could have national implications for not only the automotive industry but any gadget that transmits data to its manufacturer after a customer has paid money and walked away from the sales desk. “I think of it as ‘right to repair 2.0,’” says Kyle Wiens, a longtime right-to-repair advocate and the founder of iFixit, a website that offers tools and repair guides. “The auto world is farther along than the rest of the world is,” Wiens says. Independents “already have access to information and parts. Now they’re talking about data streams. But that doesn’t make the fight any less important.”

As Cory Doctorow put it two days ago in Agricultural right to repair law is a no-brainer, this issue is an extremely broad one that basically puts Big Car and Big Tech on one side and all the world’s gear owners and fixers on the other:

Now, there’s a new federal agricultural Right to Repair bill, courtesy of Montana Senator Jon Tester, which will require Big Ag to supply manuals, spare parts and software access codes:

https://s3.documentcloud.org/documents/21194562/tester-bill.pdf

The legislation is very similar to the Massachusetts automotive Right to Repair ballot initiative that passed with a huge margin in 2020:

https://pluralistic.net/2020/09/03/rip-david-graeber/#rolling-surveillance-platforms

Both initiatives try to break the otherwise indomitable coalition of anti-repair companies, led by Apple, which destroyed dozens of R2R initiatives at the state level in 2018:

https://pluralistic.net/2021/02/02/euthanize-rentiers/#r2r

It’s a bet that there is more solidarity among tinkerers, fixers, makers and users of gadgets than there is among the different industries who depend on repair price-gouging. That is, it’s a bet that drivers will back farmers’ right to repair and vice-versa, but that Big Car won’t defend Big Ag.

The opposing side in the repair wars is on the ropes. Their position is getting harder and harder to maintain with a straight face. It helps that the Biden administration is incredibly hostile to that position:

https://pluralistic.net/2021/07/07/instrumentalism/#r2r

It’s no coincidence that this legislation dropped the same week as Aaron Perzanowski’s outstanding book “The Right to Repair” — R2R is an idea whose time has come to pass.

https://pluralistic.net/2022/01/29/planned-obsolescence/#r2r

[The next day…]

Cory just added this in a follow-up newsletter and post:

…remember computers are intrinsically universal. Even if manufacturers don’t cooperate with interop, we can still make new services and products that plug into their existing ones. We can do it with reverse-engineering, scraping, bots – a suite of tactics we call Adversarial Interoperability or Competitive Compatibility (AKA “comcom”):

https://www.eff.org/deeplinks/2019/10/adversarial-interoperability

These tactics have a long and honorable history, and have been a part of every tech giant’s own growth…

Read all three of those pieces. There is much to be optimistic about, especially once the fighting is mostly done, and companies have proof that free customers—and truly free markets—are more valuable than captive ones. That has been our position at ProjectVRM from the start. Perhaps, once #R2R and #comcom start paying off, we’ll finally have one of the proofs we’ve wanted all along.

Is being less tasty vegetables our best strategy?

We are now being farmed by business. The pretense of the “customer is king” is now more like “the customer is a vegetable” — Adrian Gropper

That’s a vivid way to put the problem.

There are many approaches to solutions as well. One is suggested today in the latest by Karen Hao (@_KarenHao) in MIT Technology Review, titled

How to poison the data that Big Tech uses to surveil you:
Algorithms are meaningless without good data. The public can exploit that to demand change.

An excerpt:

In a new paper being presented at the Association for Computing Machinery’s Fairness, Accountability, and Transparency conference next week, researchers including PhD students Nicholas Vincent and Hanlin Li propose three ways the public can exploit this to their advantage:

• Data strikes, inspired by the idea of labor strikes, which involve withholding or deleting your data so a tech firm cannot use it—leaving a platform or installing privacy tools, for instance.

• Data poisoning, which involves contributing meaningless or harmful data. AdNauseam, for example, is a browser extension that clicks on every single ad served to you, thus confusing Google’s ad-targeting algorithms.

• Conscious data contribution, which involves giving meaningful data to the competitor of a platform you want to protest, such as by uploading your Facebook photos to Tumblr instead.

People already use many of these tactics to protect their own privacy. If you’ve ever used an ad blocker or another browser extension that modifies your search results to exclude certain websites, you’ve engaged in data striking and reclaimed some agency over the use of your data. But as Hill found, sporadic individual actions like these don’t do much to get tech giants to change their behaviors.

What if millions of people were to coordinate to poison a tech giant’s data well, though? That might just give them some leverage to assert their demands.
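To make the “data poisoning” lever concrete, here is a toy sketch of my own—not from the paper, and nothing like AdNauseam’s actual code—showing that flipped labels in a training set are enough to make a simple classifier reliably wrong:

```python
# Toy illustration of data poisoning: train a tiny nearest-centroid
# classifier on clean labels, then on attacker-flipped labels.

def centroid_classifier(points, labels):
    """Build a classify(x) function from per-class means of 1-D points."""
    means = {}
    for lab in set(labels):
        vals = [p for p, l in zip(points, labels) if l == lab]
        means[lab] = sum(vals) / len(vals)
    return lambda x: min(means, key=lambda lab: abs(x - means[lab]))

# Clean data: class 0 clusters near 0.0, class 1 near 10.0.
points = [0.1, 0.2, 0.3, 9.8, 9.9, 10.0]
clean_labels = [0, 0, 0, 1, 1, 1]
poisoned_labels = [1, 1, 1, 0, 0, 0]  # every label flipped by the "striker"

clean = centroid_classifier(points, clean_labels)
poisoned = centroid_classifier(points, poisoned_labels)

print(clean(0.2))     # 0 — the clean model classifies correctly
print(poisoned(0.2))  # 1 — the poisoned model is consistently wrong
```

The point of the exercise: algorithms fed bad data produce bad answers, which is exactly the leverage the authors describe.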

The sourced paper* is titled Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies, and concludes,

In this paper, we presented a framework for using “data leverage” to give the public more influence over technology company behavior. Drawing on a variety of research areas, we described and assessed the “data levers” available to the public. We highlighted key areas where researchers and policymakers can amplify data leverage and work to ensure data leverage distributes power more broadly than is the case in the status quo.

I am all for screwing with overlords, and the authors suggest some fun approaches. Hell, we should all be doing whatever it takes, lawfully (and there is a lot of easement around that) to stop rampant violation of our privacy—and not just by technology companies. The customers of those companies, which include every website that puts up a cookie notice that nudges visitors into agreeing to be tracked all over the Web (in observance of the letter of the GDPR, while screwing its spirit), are also deserving of corrective measures. Same goes for governments who harvest private data themselves, or gather it from others without our knowledge or permission.

My problem with the framing of the paper and the story is that both start with the assumption that we are all so weak and disadvantaged that our only choices are: 1) to screw with the status quo to reduce its harms; and 2) to seek relief from policymakers. While those choices are good, they are hardly the only ones.

Some context: wanton privacy violation in our digital world has only been going on for a little more than a decade, and that world is itself barely more than a couple dozen years old (dating from the appearance of e-commerce in 1995). We will also remain digital as well as physical beings for the next few decades or centuries.

So we need more than these kinds of prescriptive solutions. For example, real privacy tech of our own, that starts with giving us the digital versions of the privacy protections we have enjoyed in the physical world for millennia: clothing, shelter, doors with locks, and windows with curtains or shutters.

We have been on that case with ProjectVRM since 2006, and there are many developments in progress. Some even comport with our Privacy Manifesto (a work in progress that welcomes improvement).

As we work on those, and think about throwing spanners into the works of overlords, it may also help to bear in mind one of Craig Burton‘s aphorisms: “Resistance creates existence.” What he means is that you can give strength to an opponent by fighting it directly. He applied that advice in the ’80s at Novell by embracing 3Com, Microsoft and other market opponents, inventing approaches that marginalized or obsolesced their businesses.

I doubt that will happen in this case. Resisting privacy violations has already had lots of positive results. But we do have a looong way to go.

Personally, I welcome throwing a Theia.


* The full list of authors is Nicholas Vincent, Hanlin Li (@hanlinliii), Nicole Tilly and Brent Hecht (@bhecht) of Northwestern University, and Stevie Chancellor (@snchencellor) of the University of Minnesota.

What if we called cookies “worms”?

While you ponder that, read Exclusive: New York Times phasing out all 3rd-party advertising data, by Sara Fischer in Axios.

The cynic in me translates the headline as “Leading publishers cut out the middle creep to go direct with tracking-based advertising.” In other words, same can, nicer worms.

But maybe that’s wrong. Maybe we’ll only be tracked enough to get put into one of those “45 new proprietary first-party audience segments” or “at least 30 more interest segments.” And maybe only tracked on site.

But we will be tracked, presumably. Something needs to put readers into segments. What else will do that?

So, here’s another question: Will these publishers track readers off-site to spy on their “interests” elsewhere? Or will tracking be confined to just what the reader does while using the site?

Anyone know?

In a post on the ProjectVRM list, Adrian Gropper says this about the GDPR (in response to what I posted here): “GDPR, like HIPAA before it, fails because it allows an unlimited number of dossiers of our personal data to be made by unlimited number of entities. Whether these copies were made with consent or without consent through re-identification, the effect is the same, a lack of transparency and of agency.”

So perhaps it’s progress that these publishers (the Axios story mentions The Washington Post and Vox as well as the NYTimes) are only keeping limited dossiers on their readers alone.

But that’s not progress enough.

We need global ways to say to every publisher how little we wish them to know about us. Also ways to keep track of what they actually do with the information they have. (And we’re working on those.)

Being able to have one’s data back (e.g. via the CCPA) is a kind of progress (as is the law’s discouragement of collection in the first place), but we need technical as well as legal mechanisms for projecting personal agency online. (Models for this are Archimedes and Marvel heroes.)  Not just more ways to opt out of being observed more than we’d like—especially when we still lack ways to audit what others do with the permissions we give them.

That’s the only way we’ll get rid of the worms.

Bonus link.

On privacy fundamentalism

This is a post about journalism, privacy, and the common assumption that we can’t have one without sacrificing at least some of the other, because (the assumption goes), the business model for journalism is tracking-based advertising, aka adtech.

I’ve been fighting that assumption for a long time. People vs. Adtech is a collection of 129 pieces I’ve written about it since 2008. At the top of that collection, I explain,

I have two purposes here:

  1. To replace tracking-based advertising (aka adtech) with advertising that sponsors journalism, doesn’t frack our heads for the oil of personal data, and respects personal freedom and agency.

  2. To encourage journalists to grab the third rail of their own publications’ participation in adtech.

I bring that up because Farhad Manjoo (@fmanjoo) of The New York Times grabbed that third rail, in a piece titled I Visited 47 Sites. Hundreds of Trackers Followed Me. He grabbed it right here:

News sites were the worst

Among all the sites I visited, news sites, including The New York Times and The Washington Post, had the most tracking resources. This is partly because the sites serve more ads, which load more resources and additional trackers. But news sites often engage in more tracking than other industries, according to a study from Princeton.

Bravo.

That piece is one in a series called the Privacy Project, which picks up where the What They Know series in The Wall Street Journal left off in 2013. (The Journal for years had a nice shortlink to that series: wsj.com/wtk. It’s gone now, but I hope they bring it back. Julia Angwin, who led the project, has her own list.)

Knowing how much I’ve been looking forward to that rail-grab, people have been pointing me both to Farhad’s piece and to a critique of it by Ben Thompson in Stratechery, titled Privacy Fundamentalism. On Farhad’s side is the idealist’s outrage at all the tracking that’s going on, and on Ben’s side is the realist’s call for compromise. Or, in his words, trade-offs.

I’m one of those privacy fundamentalists (with a Manifesto, even), so you might say this post is my push-back on Ben’s push-back. But what I’m looking for here is not a volley of opinion. It’s to enlist help, including Ben’s, in the hard work of actually saving journalism, which requires defeating tracking-based adtech, without which we wouldn’t have most of the tracking that outrages Farhad. I explain why in Brands need to fire adtech:

Let’s be clear about all the differences between adtech and real advertising. It’s adtech that spies on people and violates their privacy. It’s adtech that’s full of fraud and a vector for malware. It’s adtech that incentivizes publications to prioritize “content generation” over journalism. It’s adtech that gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.

Real advertising doesn’t do any of those things, because it’s not personal. It is aimed at populations selected by the media they choose to watch, listen to or read. To reach those people with real ads, you buy space or time on those media. You sponsor those media because those media also have brand value.

With real advertising, you have brands supporting brands.

Brands can’t sponsor media through adtech because adtech isn’t built for that. On the contrary, adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media.

Adtech is magic in this literal sense: it’s all about misdirection. You think you’re getting one thing while you’re really getting another. It’s why brands think they’re placing ads in media, while the systems they hire chase eyeballs. Since adtech systems are automated and biased toward finding the cheapest ways to hit sought-after eyeballs with ads, some ads show up on unsavory sites. And, let’s face it, even good eyeballs go to bad places.

This is why the media, the UK government, the brands, and even Google are all shocked. They all think adtech is advertising. Which makes sense: it looks like advertising and gets called advertising. But it is profoundly different in almost every other respect. I explain those differences in Separating Advertising’s Wheat and Chaff.

To fight adtech, it’s natural to look for help in the form of policy. And we already have some of that, with the GDPR, and soon the CCPA as well. But really we need the tech first. I explain why here:

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control. For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

The tech horse is a collection of tools that provide each of us with ways both to protect our privacy and to signal to others what’s okay and what’s not okay, and to do both at scale. Browsers, for example, are a good model for that. They give each of us, as users, scale across all the websites of the world. We didn’t have that when the online world for ordinary folk was a choice of Compuserve, AOL, Prodigy and other private networks. And we don’t have it today in a networked world where providing “choices” about being tracked is the separate responsibility of every different site we visit, each with its own ways of recording our “consents,” none of which are remembered, much less controlled, by any tool we possess. You don’t need to be a privacy fundamentalist to know that’s just fucked.

But that’s all me, and what I’m after. Let’s go to Ben’s case:

…my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly…

Let’s pause there. Concern about privacy is not hysteria. It’s simple, legitimate, and personal. As Don Marti and I (he first) pointed out, way back in 2015, ad blocking didn’t become the biggest boycott in world history in a vacuum. Its rise correlated with the “interactive” advertising business giving the middle finger to Do Not Track, which was never anything more than a polite request not to be followed away from a website:

Retargeting (aka behavioral retargeting) is the most obvious evidence that you’re being tracked. (The Onion: Woman Stalked Across Eight Websites By Obsessed Shoe Advertisement.)

Likewise, people wearing clothing or locking doors are not “hysterical” about privacy. That people don’t like their naked digital selves being taken advantage of is also not hysterical.

Back to Ben…

…is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other.

Right. So let’s do the work. We haven’t started yet.

This is the exact opposite of how things work in the physical world: there, data collection is an explicit positive action, and anonymity the default.

Good point, but does this excuse awful manners in the online world? Does it take off the table all the ways manners work well in the offline world—ways that ought to inform developments in the online world? I say no.

That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one.

Consider it appreciated. (In my own case I’ve been reveling in the wonders of networked life since the 80s. Examples of that are this, this and this.)

…the popular imagination about the danger this data collection poses, though, too often seems derived from the former [Stasi collecting highly personal information about individuals for very icky purposes] instead of the fundamentally different assumptions of the latter [Google and Facebook compiling massive amounts of data to be read by machines, mostly for non- or barely-icky purposes]. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems.

Such as—

• Facebook’s crackdown on API access after Cambridge Analytica has severely hampered research into the effects of social media, the spread of disinformation, etc.

True.

• Privacy legislation like GDPR has strengthened incumbents like Facebook and Google, and made it more difficult for challengers to succeed.

True.

Another bad effect of the GDPR is that it urges the websites of the world to throw insincere and misleading cookie notices in front of visitors, usually to extract “consent” that isn’t consent at all, to exactly what the GDPR was meant to thwart.

• Criminal networks from terrorism to child abuse can flourish on social networks, but while content can be stamped out, private companies, particularly domestically, are often limited as to how proactively they can go to law enforcement; this is exacerbated once encryption enters the picture.

True.

Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the end-all be-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).

It can also lead to good tech, which in turn can prevent bad policy. Or encourage good policy.

Towards Trade-offs
The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.

Wearing pants so nobody can see your crotch is not gray. That an x-ray machine can see your crotch doesn’t make personal privacy gray. Wrong is wrong.

To that end, I believe the privacy debate needs to be reset around these three assumptions:
• Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.

No. We need to accept that the simple and universally accepted personal and social assumptions about privacy offline (for example, having the ability to signal what’s acceptable and what is not) are a good model for online as well.

I’ll put it another way: people need pants online. This is not an absolutist position, or even a fundamentalist one. The ability to cover one’s private parts, and to signal what’s okay and what’s not okay for respecting personal privacy are simple assumptions people make in the physical world, and should be able to make in the connected one. That it hasn’t been done yet is no reason to say it can’t or shouldn’t be done. So let’s do it.

• Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet,

Likewise, the widespread creation and spread of gossip is inherent to life in the physical world. But that doesn’t mean we can’t have civilized ways of dealing with it.

and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.

Tracking people everywhere so their eyes can be stabbed with “relevant” and “interest-based” advertising, in oblivity to negative externalities, is not a good idea or a positive outcome (beyond the money that’s made from it).  Let’s at least get that straight before we worry about what might be extinguished by full agency for ordinary human beings.

To be clear, I know Ben isn’t talking here about full agency for people. I’m sure he’s fine with that. He’s talking about policy in general and specifically about the GDPR. I agree with what he says about that, and roughly about this too:

• Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.

Still, that doesn’t mean we can’t use what’s offline to inform what’s online. We need to appreciate and harmonize the virtues of both—mindful that the online world is still very new, and that many of the civilized and civilizing graces of life offline are good to have online as well. Privacy among them.

Finally, before getting to the work that energizes us here at ProjectVRM (meaning all the developments we’ve been encouraging for thirteen years), I want to say one final thing about privacy: it’s a moral matter. From Oxford, via Google: “concerned with the principles of right and wrong behavior” and “holding or manifesting high principles for proper conduct.”

Tracking people without their clear invitation or a court order is simply wrong. And the fact that tracking people is normative online today doesn’t make it right.

Shoshana Zuboff’s new book, The Age of Surveillance Capitalism, does the best job I know of explaining why tracking people online became normative—and why it’s wrong. The book is thick as a brick and twice as large, but fortunately Shoshana offers an abbreviated reason in her three laws, authored more than two decades ago:

First, that everything that can be automated will be automated. Second, that everything that can be informated will be informated. And most important to us now, the third law: In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

I don’t believe government restrictions and sanctions are the only ways to countervail surveillance capitalism (though uncomplicated laws such as this one might help). We need tech that gives people agency and companies better customers and consumers. From our wiki, here’s what’s already going on. And, from our punch list, here are some exciting TBDs, many of them already in the works.

I’m hoping Farhad, Ben, and others in a position to help can get behind those too.

The Wurst of the Web

Don’t think about what’s wrong on the Web. Think about what pays for it. Better yet, look at it.

Start by installing Privacy Badger in your browser. Then look at what it tells you about every site you visit. With very few exceptions (e.g. Internet Archive and Wikipedia), all are putting tracking beacons (the wurst cookie flavor) in your browser. These then announce your presence to many third parties, mostly unknown and all unseen, at nearly every subsequent site you visit, so you can be followed and profiled and advertised at. And your profile might be used for purposes other than advertising. There’s no way to tell.
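For a sense of what a tracking beacon amounts to mechanically, here is a hypothetical sketch (hostnames and parameter names invented for illustration): the “beacon” is just a tiny request to a tracker’s domain that carries identifiers along, so the tracker learns which page a given browser visited.

```python
from urllib.parse import urlencode

def beacon_url(tracker_host, user_id, page_url):
    # On a real page this would be fetched as a 1x1 image or script; the
    # tracker reads the ID (often a third-party cookie) and logs which
    # page that browser was on. All names here are made up.
    params = urlencode({"uid": user_id, "url": page_url})
    return f"https://{tracker_host}/pixel.gif?{params}"

print(beacon_url("tracker.example", "abc123", "https://news.example/story"))
# https://tracker.example/pixel.gif?uid=abc123&url=https%3A%2F%2Fnews.example%2Fstory
```

Multiply that by dozens of trackers per page and hundreds of pages per day, and the profile builds itself.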

This practice—tracking people without their invitation or knowledge—is at the dark heart and sold soul of what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity. (The italicized links go to books on the topic, both of which came out in the last year. Buy them.)

While that system’s business is innocuously and misleadingly called advertising, the surveilling part of it is called adtech. The most direct ancestor of adtech is not old-fashioned brand advertising. It’s direct marketing, best known as junk mail. (I explain the difference in Separating Advertising’s Wheat and Chaff.)

In the online world, brand advertising and adtech look the same, but underneath they are as different as bread and dirt. While brand advertising is aimed at broad populations and sponsors media it considers worthwhile, adtech does neither. Like junk mail, adtech wants to be personal, wants a direct response, and ignores massive negative externalities. It also uses media to mark, track and advertise at eyeballs, wherever those eyeballs might show up. (This is how, for example, a Wall Street Journal reader’s eyeballs get shot with an ad for, say, Warby Parker, on Breitbart.) So adtech follows people, profiles them, and adjusts its offerings to maximize engagement, meaning getting a click. It also works constantly to put better crosshairs on the brains of its human targets; and it does this for both advertisers and other entities interested in influencing people. (For example, to swing an election.)

For most reporters covering this, the main objects of interest are the two biggest advertising intermediaries in the world: Facebook and Google. That’s understandable, but they’re just the tip of the wurstberg.  Also, in the case of Facebook, it’s quite possible that it can’t fix itself. See here:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is composed of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting and amplify tribal prejudices (including genocidal ones)—besides whatever good it does for users and advertisers.

The hard work here is solving the problems that corrupted Facebook so thoroughly, and that are doing the same to all the media that depend on surveillance capitalism to re-engineer us all.

Meanwhile, because lawmaking is moving apace in any case, we should also come up with model laws and regulations that insist on respect for private spaces online. The browser is a private space, so let’s start there.

Here’s one constructive suggestion: get the browser makers to meet next month at IIW, an unconference that convenes twice a year at the Computer History Museum in Silicon Valley, and work this out.

Ann Cavoukian (@AnnCavoukian) got things going on the organizational side with Privacy By Design, which is now also embodied in the GDPR. She has also made clear that the same principles should apply on the individual’s side.  So let’s call the challenge there Privacy By Default. And let’s have it work the same in all browsers.

I think it’s really pretty simple: the default is no. If we want to be tracked for targeted advertising or other marketing purposes, we should have ways to opt into that. But not some modification of the ways we have now, where every @#$%& website has its own methods, policies and terms, none of which we can track or audit. That is broken beyond repair and needs to be pushed off a cliff.
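As a sketch of what “the default is no” means in code terms (the storage key and consent shape here are invented for illustration, not any browser’s actual API): the absence of a consent record must mean no tracking, rather than silence being taken as consent.

```javascript
// Hypothetical site-side guard: only proceed with tracking if the visitor
// has affirmatively opted in. The consent-record shape is invented.
function mayTrack(storedConsent) {
  // No record at all means no: absence of consent is not consent.
  if (!storedConsent) return false;
  // Only an explicit opt-in for this specific purpose counts.
  return storedConsent.purpose === "targeted-advertising"
      && storedConsent.optedIn === true;
}
```

The point of the sketch is the first line: today’s adtech treats the missing record as a yes, and Privacy By Default inverts that.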

Among the capabilities we need on our side are 1) knowing what we have opted into, and 2) ways to audit what is done with information we have given to organizations, or has been gleaned about us in the course of our actions in the digital world. Until we have ways of doing both,  we need to zero-base the way targeted advertising and marketing is done in the digital world. Because spying on people without an invitation or a court order is just as wrong in the digital world as it is in the natural one. And you don’t need spying to target.

And don’t worry about lost business. There are many larger markets to be made on the other side of that line in the sand than we have right now in a world where more than 2 billion people block ads, and among the reasons they give are “Ads might compromise my online privacy,” and “Stop ads being personalized.”

Those markets will be larger because incentives will be aligned around customer agency. And they’ll want a lot more from the market’s supply side than surveillance-based sausage, looking for clicks.

GDPR Hack Day at MIT

Our challenge in the near term is to make the GDPR work for us “data subjects” as well as for the “data processors” and “data controllers” of the world—and to start making it work before the GDPR’s “sunrise” on May 25th. That’s when the EU can start laying fines—big ones—on those data processors and controllers, but not on us mere subjects. After all, we’re the ones the GDPR protects.

Ah, but we can also bring some relief to those processors and controllers, by automating, in a way, our own consent to good behavior on their part, using a consent cookie of our own baking. That’s what we started working on at IIW on April 5th. Here’s the whiteboard:

Here are the session notes. And we’ll continue at a GDPR Hack Day, next Thursday, April 26th, at MIT. Read more about it and sign up here. You don’t need to be a hacker to participate.
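For a rough idea of the kind of consent cookie discussed in that session, here is a sketch in plain JavaScript. The field names and cookie name are invented: the point is that the individual’s browser bakes a record stating their own terms, which a compliant site could read instead of presenting its own.

```javascript
// Hypothetical "consent cookie" baked on the individual's side.
// Field names (version, terms, issued) and the cookie name are invented.
function bakeConsentCookie(terms) {
  const record = {
    version: 1,
    terms,                 // e.g. "no-third-party-tracking"
    issued: "2018-04-05"   // fixed date here so the output is deterministic
  };
  // Cookie values must avoid reserved characters, so encode the JSON.
  return "vrm-consent=" + encodeURIComponent(JSON.stringify(record));
}
```

A site honoring such a cookie would decode it and treat the stated terms as the visitor’s proffered agreement, flipping the usual direction of consent.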

The most leveraged VRM Day yet

VRM Day is coming up soon: Monday, 2 April.

Register at that link. Or, if it fails, this one. (Not sure why, but we get reports of fails with the first link on Chrome, but not other browsers. Go refigure.)

Why this one is more leveraged than any other, so far:

Thanks to the GDPR, there is more need than ever for VRM, and more interest than ever in solutions to compliance problems that can only come from the personal side.

For example, the GDPR invites this question: What can we do as individuals that can put all the companies we deal with in compliance with the GDPR because they’re in compliance with our terms and our privacy policies? We have some answers, and we’ll talk about those.

We also have two topics we need to dive deeply into, starting at VRM Day and continuing over the following three days at IIW, also at the Computer History Museum. These too are impelled by the GDPR.

First is lexicon, or what the techies call ontology: “a formal naming and definition of the types, properties, and interrelationships of the entities that really exist in a particular domain of discourse.” In other words, What are we saying in VRM that CRM can understand—and vice versa? We’re at that point now—where VRM meets CRM. On the table will be not just the tools and services customers will use to make themselves understood by the corporate systems of the world, but the protocols, standard code bases, ontologies and other necessities that will intermediate between the two.

Second is cooperation. The ProjectVRM wiki now has a page called Cooperative Work that needs to be substantiated by actual cooperation, now that the GDPR is approaching. How can we support each other?

Bring your answers.

See you there.

VRM at MyData2016


As it happens I’m in Helsinki right now, for MyData2016, where I’ll be speaking on Thursday morning. My topic: The Power of the Individual. There is also a hackathon (led by DataBusiness.fi) going on during the show, starting at 4pm (local time) today. In no order of priority, here are just some of the subjects and players I’ll be dealing with,  talking to, and talking up (much as I can):

Please let me know what others belong on this list. And see you at the show.


It’s People vs. Advertising, not Publishers vs. Adblockers

By now hundreds of millions of people have gone to the privacy aisles of the pharmacy departments  in their local app stores and chosen a brand of sunblock to protect themselves from unwanted exposure to the harmful rays of advertising online.

There are many choices among potions on those shelves, but basically they do one, two or three of these things:

  1. Block ads
  2. Allow “acceptable” ads
  3. Protect against tracking

The most popular ad blocker, Adblock Plus, is configurable to do all three, but defaults to allow “acceptable”* ads and not to block tracking.

Tracking protection products, such as Baycloud Bouncer, Ghostery, Privacy Badger and RedMorph, are not ad blockers, but can be mistaken for them. (That’s what happens for me when I’m looking at Wired through Privacy Badger on Firefox.)

It is important to recognize these distinctions, for two reasons:

  1. Ad blocking, allowing “acceptable” ads, and tracking protection are different things.
  2. All three of those things answer market demand. They are clear evidence of the marketplace at work.

Meanwhile, nearly all press coverage of what’s going on here defaults to “(name of publisher or website here) vs. ad blockers.”

This misdirects attention away from what is actually going on: people making choices in the open market to protect themselves from intrusions they do not want.

Ad blocking and tracking protection are effects, not causes. Blame for them should not go to the people protecting themselves, or to those providing them with means for protection, but to the sources and agents of harm. Those are:

  1. Companies producing ads (aka brands)
  2. Companies distributing the ads
  3. Companies publishing the ads
  4. All producers of unwanted tracking

That’s it.

Until we shift discussion to the simple causes and effects of supply and demand, with full respect for individual human beings and the legitimate choices they make in the open marketplace, to protect the sovereign personal spaces in their lives online, we’ll be stuck in war and sports coverage that misses the simple facts underlying the whole damn thing.

Until we get straight what’s going on here, we won’t be able to save those who pay for and benefit from advertising online.

Which I am convinced we can do. I’ve written plenty about that already here.

* These are controversial. I don’t go into that here, however, because I want to shift attention from spin to facts.


© 2024 ProjectVRM
