Category: Internet

Homeless on the Web

Do you have a home on the Web?

I mean a page or a site that is yours. Not one that belongs to some .com, .org or .edu. One that’s truly yours, with a name you gave it, that nobody else has, and that you fully inhabit.

Some of us do. I’m one of those, but with nothing to brag about. Go to searls.com and you’ll find a placeholder I’ve been updating every couple of years since the mid-’90s.  Behind that façade is a garage full of files I keep stored online but blocked from search engines. That’s so I can find them from anywhere, or so I can point other people to them every once in a while.

Like the rest of us, I’ve done most of what I’ve done on the Web on sites that belong to others. The goods in those sites are mine in the sense that I’ve created them. But where they are is not mine. Not in the least.

Nearly all the pages called “home” are those of what in the trade we call enterprises. Mine here is in an enterprise called Harvard University. I thank it for that grace.

Still, in a literal sense, most of us are homeless here. In a literal way maybe all of us are, because we don’t own our domain names. We rent them. Searls.com will exist only so long as I, or my heirs, continue paying to keep it active.

This isn’t a bad thing. Hell, the benefits of the Web are enormous in the extreme. I’m not knocking those.

I am, however, saying we are homeless. Here.

Yet there is nothing about the Internet that says you can’t have a home there—which is a deeper here, underneath the Web.

This is important because we need to clearly and finally make a sharp distinction between the Web and the Internet. Because they are not the same. The Internet is what the Web sits on. And, big and broad as it is, the Web is not the only thing that can sit on the Internet. This was true for the Web as it was in the first place, for what we called Web 2 in the early ’00s, and for what we call Web 3 today.

The Internet is different. And there are few limits to what the Internet can support, much as there are few limits to what can be built on land or float on the ocean.

But there are limits to what we can build on the Web. One of those is a home for ourselves. A real home. One that does not require renting a domain name. One that lets us zero-base what we can do upon the infinite grace granted us by simply connecting to a worldwide network of networks that exists only to move packets of data from any end to any other end.

So let’s start thinking about that.

Some of us (present company included) are on the case already. We need more.

While we ponder that, here’s a thought: Maybe one reason VRM has been slow to happen is that we’ve been trying to do it on the Web.


The photo above is on Love Ranch Road, in the center of Wyoming. The story of the ranch, and the home now abandoned there, is central to John McPhee’s Rising from the Plains. I was there to shoot the solar eclipse of August 2017, which was total there. The darkness on the horizon is the shadow of the moon, approaching from the west.

Thinking outside the browser

Even if you’re on a phone, chances are you’re reading this in a browser.

Chances are also that most of what you do online is through a browser.

Hell, many—maybe even most—of the apps you use on your phone use the WebKit browser engine. Meaning they’re browsers too.

And, of course, I’m writing this in a browser.

Which, alas, is subordinate by design. That’s because, while the Internet at its base is a world-wide collection of peers, the Web that runs on it is a collection of servers to which we are mere clients. The model is an old mainframe one called client-server. This is actually more of a calf-cow arrangement than a peer-to-peer one:

The reason we don’t feel like cattle is that the base functions of a browser work fine, and misdirect us away from the actual subordination of personal agency and autonomy that’s also taking place.

See, the Web invented by Tim Berners-Lee was just a way for one person to look at another’s documents over the Internet. And that it still is. When you “go to” or “visit” a website, you don’t go anywhere. Instead, you request a file. Even when you’re watching or listening to an audio or video stream, what actually happens is that a file unfurls itself into your browser.

What you expect when you go to a website is typically a file called a page. You also expect that page will bring a payload of other files: ones providing graphics, video clips, or whatever. You might also expect the site to remember that you’ve been there before, or that you’re a subscriber to the site’s services.

You may also understand that the site remembers you because your browser carries a “cookie” the site put there, to help the site remember what’s called “state,” so the browser and the site can renew their acquaintance with every visit. It is for this simple purpose that Lou Montulli invented the cookie in the first place, back in 1994. Lou got that idea because the client-server model puts the most agency on the server’s side, and in the dial-up world of the time, that made the most sense.
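To make the mechanics concrete, here is a minimal sketch of that idea in TypeScript on Node.js. It is an illustration, not anyone’s production code, and the cookie name is arbitrary: the server mints an identifier on a first visit, asks the browser to store it, and recognizes it on later requests.

```typescript
// Minimal sketch: how a server uses a cookie to keep "state" across
// otherwise stateless HTTP requests. Illustrative only.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

const server = createServer((req, res) => {
  // Parse whatever cookies the browser sent back with this request.
  const cookies = Object.fromEntries(
    (req.headers.cookie ?? "")
      .split(";")
      .map((part) => part.trim().split("=") as [string, string])
      .filter(([name]) => name)
  );

  let sessionId = cookies["session_id"];
  if (!sessionId) {
    // First visit: mint an identifier and ask the browser to remember it.
    sessionId = randomUUID();
    res.setHeader("Set-Cookie", `session_id=${sessionId}; Path=/; HttpOnly`);
  }

  // On every later visit the browser presents the cookie, so the site can
  // "renew the acquaintance" -- the state Lou Montulli's invention keeps.
  res.end(`Hello again, session ${sessionId}\n`);
});

server.listen(8080);
```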

Alas, even though we now live in a world where there can be boundless intelligence on the individual’s side, and there is far more capacious communication bandwidth between network nodes, damn near everyone continues to presume a near-absolute power asymmetry between clients and servers, calves and cows, people and sites. It’s also why today when you go to a site and it asks you to accept its use of cookies, something unknown to you (presumably—you can’t tell) remembers that “agreement” and its settings, and you don’t—even though there is no reason why you shouldn’t or couldn’t. It doesn’t even occur to the inventors and maintainers of cookie acceptance systems that a mere “user” should have a way to record, revisit or audit the “agreement.” All they want is what the law now requires of them: your “consent.”
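For illustration only, here is the sort of record a browser or personal agent could keep of each such “agreement” on the user’s side. Nothing like this is standard today; every field name below is hypothetical.

```typescript
// Hypothetical sketch: a user-side record of a cookie-consent "agreement."
// No existing standard defines these fields; they are illustrative only.
interface ConsentRecord {
  site: string;               // who asked
  recordedAt: string;         // when you clicked "accept" (ISO 8601)
  purposesAllowed: string[];  // what you agreed to
  purposesRefused: string[];  // what you did not
  termsUrl?: string;          // where the site's copy of the terms lived
}

const example: ConsentRecord = {
  site: "example-publisher.com",
  recordedAt: new Date().toISOString(),
  purposesAllowed: ["strictly necessary"],
  purposesRefused: ["advertising", "cross-site tracking"],
};

// A personal tool could let you revisit, audit, or revoke this later.
console.log(JSON.stringify(example, null, 2));
```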

This near-absolute power asymmetry between the Web’s calves and cows is also why you typically get a vast payload of spyware when your browser simply asks to see whatever it is you actually want from the website.  To see how big that payload can be, I highly recommend a tool called PageXray, from Fou Analytics, run by Dr. Augustine Fou (aka @acfou). For a test run, try PageXray on the Daily Mail’s U.S. home page, and you’ll see that you’re also getting this huge payload of stuff you didn’t ask for:

Adserver Requests: 756
Tracking Requests: 492
Other Requests: 184

The visualization looks like this:

This is how, as Richard Whitt perfectly puts it, “the browser is actually browsing us.”

All those requests, most of which are for personal data of some kind, come in the form of cookies and similar files. The visual above shows how information about you spreads out to a nearly countless number of third parties, and to parties that depend on those. And, while these cookies are stored by your browser, they are meant to be readable only by the server or one or more of its third parties.

This is the icky heart of the e-commerce “ecosystem” today.

By the way, and to be fair, two of the browsers in the graphic above—Epic and Tor—by default disclose as little as possible about you and your equipment to the sites you visit. Others have privacy features and settings. But getting past the whole calf-cow system is the real problem we need to solve.


Cross-posted at the Customer Commons blog, here.

On privacy fundamentalism

This is a post about journalism, privacy, and the common assumption that we can’t have one without sacrificing at least some of the other, because (the assumption goes) the business model for journalism is tracking-based advertising, aka adtech.

I’ve been fighting that assumption for a long time. People vs. Adtech is a collection of 129 pieces I’ve written about it since 2008.  At the top of that collection, I explain,

I have two purposes here:

  1. To replace tracking-based advertising (aka adtech) with advertising that sponsors journalism, doesn’t frack our heads for the oil of personal data, and respects personal freedom and agency.

  2. To encourage journalists to grab the third rail of their own publications’ participation in adtech.

I bring that up because Farhad Manjoo (@fmanjoo) of The New York Times grabbed that third rail, in a piece titled I Visited 47 Sites. Hundreds of Trackers Followed Me. He grabbed it right here:

News sites were the worst

Among all the sites I visited, news sites, including The New York Times and The Washington Post, had the most tracking resources. This is partly because the sites serve more ads, which load more resources and additional trackers. But news sites often engage in more tracking than other industries, according to a study from Princeton.

Bravo.

That piece is one in a series called the  Privacy Project, which picks up where the What They Know series in The Wall Street Journal left off in 2013. (The Journal for years had a nice shortlink to that series: wsj.com/wtk. It’s gone now, but I hope they bring it back. Julia Angwin, who led the project, has her own list.)

Knowing how much I’ve been looking forward to that rail-grab, people  have been pointing me both to Farhad’s piece and a critique of it by  Ben Thompson in Stratechery, titled Privacy Fundamentalism. On Farhad’s side is the idealist’s outrage at all the tracking that’s going on, and on Ben’s side is the realist’s call for compromise. Or, in his words, trade-offs.

I’m one of those privacy fundamentalists (with a Manifesto, even), so you might say this post is my push-back on Ben’s push-back. But what I’m looking for here is not a volley of opinion. It’s to enlist help, including Ben’s, in the hard work of actually saving journalism, which requires defeating tracking-based adtech, without which we wouldn’t have most of the tracking that outrages Farhad. I explain why in Brands need to fire adtech:

Let’s be clear about all the differences between adtech and real advertising. It’s adtech that spies on people and violates their privacy. It’s adtech that’s full of fraud and a vector for malware. It’s adtech that incentivizes publications to prioritize “content generation” over journalism. It’s adtech that gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.

Real advertising doesn’t do any of those things, because it’s not personal. It is aimed at populations selected by the media they choose to watch, listen to or read. To reach those people with real ads, you buy space or time on those media. You sponsor those media because those media also have brand value.

With real advertising, you have brands supporting brands.

Brands can’t sponsor media through adtech because adtech isn’t built for that. On the contrary, adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media.

Adtech is magic in this literal sense: it’s all about misdirection. You think you’re getting one thing while you’re really getting another. It’s why brands think they’re placing ads in media, while the systems they hire chase eyeballs. Since adtech systems are automated and biased toward finding the cheapest ways to hit sought-after eyeballs with ads, some ads show up on unsavory sites. And, let’s face it, even good eyeballs go to bad places.

This is why the media, the UK government, the brands, and even Google are all shocked. They all think adtech is advertising. Which makes sense: it looks like advertising and gets called advertising. But it is profoundly different in almost every other respect. I explain those differences in Separating Advertising’s Wheat and Chaff.

To fight adtech, it’s natural to look for help in the form of policy. And we already have some of that, with the GDPR, and soon the CCPA as well. But really we need the tech first. I explain why here:

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. It’s the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. With the tech working, the market for personal data will be one we control. For real.

If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

The tech horse is a collection of tools that provide each of us with ways both to protect our privacy and to signal to others what’s okay and what’s not okay, and to do both at scale. Browsers, for example, are a good model for that. They give each of us, as users, scale across all the websites of the world. We didn’t have that when the online world for ordinary folk was a choice of Compuserve, AOL, Prodigy and other private networks. And we don’t have it today in a networked world where providing “choices” about being tracked is the separate responsibility of every different site we visit, each with its own ways of recording our “consents,” none of which are remembered, much less controlled, by any tool we possess. You don’t need to be a privacy fundamentalist to know that’s just fucked.

But that’s all me, and what I’m after. Let’s go to Ben’s case:

…my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly…

Let’s pause there. Concern about privacy is not hysteria. It’s simple, legitimate, and personal. As Don Marti and I (he first) pointed out, way back in 2015, ad blocking didn’t become the biggest boycott in world history in a vacuum. Its rise correlated with the “interactive” advertising business giving the middle finger to Do Not Track, which was never anything more than a polite request not to be followed away from a website:

Retargeting (aka behavioral retargeting) is the most obvious evidence that you’re being tracked. (The Onion: Woman Stalked Across Eight Websites By Obsessed Shoe Advertisement.)

Likewise, people wearing clothing or locking doors are not “hysterical” about privacy. That people don’t like their naked digital selves being taken advantage of is also not hysterical.

Back to Ben…

…is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other.

Right. So let’s do the work. We haven’t started yet.

This is the exact opposite of how things work in the physical world: there data collection is an explicit positive action, and anonymity the default.

Good point, but does this excuse awful manners in the online world? Does it take off the table all the ways manners work well in the offline world—ways that ought to inform developments in the online world? I say no.

That is not to say that there shouldn’t be a debate about this data collection, and how it is used. Even that latter question, though, requires an appreciation of just how different the digital world is from the analog one.

Consider it appreciated. (In my own case I’ve been reveling in the wonders of networked life since the ’80s. Examples of that are this, this and this.)

…the popular imagination about the danger this data collection poses, though, too often seems derived from the former [Stasi collecting highly personal information about individuals for very icky purposes] instead of the fundamentally different assumptions of the latter [Google and Facebook compiling massive amounts of data to be read by machines, mostly for non- or barely-icky purposes]. This, by extension, leads to privacy demands that exacerbate some of the Internet’s worst problems.

Such as—

• Facebook’s crackdown on API access after Cambridge Analytica has severely hampered research into the effects of social media, the spread of disinformation, etc.

True.

• Privacy legislation like GDPR has strengthened incumbents like Facebook and Google, and made it more difficult for challengers to succeed.

True.

Another bad effect of the GDPR is urging the websites of the world to throw insincere and misleading cookie notices in front of visitors, usually to extract “consent” that isn’t consent at all, and to exactly what the GDPR was meant to thwart.

• Criminal networks from terrorism to child abuse can flourish on social networks, but while content can be stamped out private companies, particularly domestically, are often limited as to how proactively they can go to law enforcement; this is exacerbated once encryption enters the picture.

True.

Again, this is not to say that privacy isn’t important: it is one of many things that are important. That, though, means that online privacy in particular should not be the end-all be-all but rather one part of a difficult set of trade-offs that need to be made when it comes to dealing with this new reality that is the Internet. Being an absolutist will lead to bad policy (although encryption may be the exception that proves the rule).

It can also lead to good tech, which in turn can prevent bad policy. Or encourage good policy.

Towards Trade-offs
The point of this article is not to argue that companies like Google and Facebook are in the right, and Apple in the wrong — or, for that matter, to argue my self-interest. The truth, as is so often the case, is somewhere in the middle, in the gray.

Wearing pants so nobody can see your crotch is not gray. That an x-ray machine can see your crotch doesn’t make personal privacy gray. Wrong is wrong.

To that end, I believe the privacy debate needs to be reset around these three assumptions:
• Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.

No. We need to accept that simple and universally accepted personal and social assumptions about privacy offline (for example, having the ability to signal what’s acceptable and what is not) are a good model for online as well.

I’ll put it another way: people need pants online. This is not an absolutist position, or even a fundamentalist one. The ability to cover one’s private parts, and to signal what’s okay and what’s not okay for respecting personal privacy, are simple assumptions people make in the physical world, and should be able to make in the connected one. That it hasn’t been done yet is no reason to say it can’t or shouldn’t be done. So let’s do it.

• Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet,

Likewise, the widespread creation and spread of gossip is inherent to life in the physical world. But that doesn’t mean we can’t have civilized ways of dealing with it.

and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.

Tracking people everywhere so their eyes can be stabbed with “relevant” and “interest-based” advertising, oblivious to negative externalities, is not a good idea or a positive outcome (beyond the money that’s made from it). Let’s at least get that straight before we worry about what might be extinguished by full agency for ordinary human beings.

To be clear, I know Ben isn’t talking here about full agency for people. I’m sure he’s fine with that. He’s talking about policy in general and specifically about the GDPR. I agree with what he says about that, and roughly about this too:

• Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.

Still, that doesn’t mean we can’t use what’s offline to inform what’s online. We need to appreciate and harmonize the virtues of both—mindful that the online world is still very new, and that many of the civilized and civilizing graces of life offline are good to have online as well. Privacy among them.

Finally, before getting to the work that energizes us here at ProjectVRM (meaning all the developments we’ve been encouraging for thirteen years), I want to say one final thing about privacy: it’s a moral matter. From Oxford, via Google: “concerned with the principles of right and wrong behavior” and “holding or manifesting high principles for proper conduct.”

Tracking people without their clear invitation or a court order is simply wrong. And the fact that tracking people is normative online today doesn’t make it right.

Shoshana Zuboff’s new book, The Age of Surveillance Capitalism, does the best job I know of explaining why tracking people online became normative—and why it’s wrong. The book is thick as a brick and twice as large, but fortunately Shoshana offers an abbreviated reason in her three laws, authored more than two decades ago:

First, that everything that can be automated will be automated. Second, that everything that can be informated will be informated. And most important to us now, the third law: In the absence of countervailing restrictions and sanctions, every digital application that can be used for surveillance and control will be used for surveillance and control, irrespective of its originating intention.

I don’t believe government restrictions and sanctions are the only ways to countervail surveillance capitalism (though uncomplicated laws such as this one might help). We need tech that gives people agency and companies better customers and consumers. From our wiki, here’s what’s already going on. And, from our punch list, here are some exciting TBDs, many of them already in the works.

I’m hoping Farhad, Ben, and others in a position to help can get behind those too.

Personal scale

Way back in 1995, when our family was still new to the Web, my wife asked a question that is one of the big reasons I started ProjectVRM: Why can’t I take my own shopping cart from one site to another?

The bad but true answer is that every site wants you to use their shopping cart. The good but not-yet-true answer is that nobody has invented it yet. By that I mean: not  a truly personal one, based on open standards that make it possible for lots of developers to compete at making the best personal shopping cart for you.

Think about what you might be able to do with a PSC (Personal Shopping Cart) online that you can’t do with a physical one offline:

  • Take it from store to store, just as you do with your browser. This should go without saying, but it’s still worth repeating, because it would be way cool.
  • Have a list of everything parked already in your carts within each store.
  • Know what prices have changed, or are about to change, for the products in your carts in each store.
  • Notify every retailer you trust that you intend to buy X, Y or Z, with restrictions (meaning your terms and conditions) on the use of that information, and in a way that will let you know if those restrictions are violated. This is called intentcasting, and there are a pile of companies already in that business.
  • Have a way to change your name and other contact information, for all the stores you deal with, in one move.
  • Control your subscriptions to each store’s emailings and promotional materials.
  • Have your own way to express genuine loyalty, rather than suffering with as many coercive and goofy “loyalty programs” as there are companies.
  • Have a standard way to share your experiences with the companies that make and sell the products you’ve bought, and to suggest improvements—and for those companies to share back updates and improvements you should know about.
  • Have wallets of your own, rather than only those provided by platforms.
  • Connect to your collection of receipts, instruction manuals and other relevant information for all the stuff you’ve already bought or currently rent. (Note that this collection is for the Internet of your things—one you control for yourself, and is not a set of suction cups on corporate tentacles.)
  • Have your own standard way to call for service or support, for stuff you’ve bought or rented, rather than suffering with as many different ways to do that as there are companies you’ve engaged.

All of these things are Me2B, and will give each of us scale, much as the standards that make the Internet, browsers and email all give us scale. And that scale will be just as good for the companies we deal with as are the Internet, browsers and email.
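To make that concrete, here is a purely illustrative sketch, in TypeScript, of the kind of data a personal shopping cart could hold on the customer’s side, independent of any one store. No such standard exists yet; every name below is hypothetical.

```typescript
// Hypothetical sketch of a customer-side shopping cart. Nothing here is a
// real standard; names and fields are illustrative only.
interface CartItem {
  store: string;        // e.g. "store.example.com"
  productId: string;    // the store's identifier for the product
  description: string;
  priceSeen: number;    // the price when you parked the item
  currency: string;
  parkedAt: string;     // ISO 8601 timestamp
}

interface PersonalShoppingCart {
  items: CartItem[];
  termsAsserted: string[]; // e.g. links to personal terms you ask stores to accept
}

const myCart: PersonalShoppingCart = {
  items: [
    {
      store: "store.example.com",
      productId: "sku-1234",
      description: "Espresso grinder",
      priceSeen: 129.0,
      currency: "USD",
      parkedAt: new Date().toISOString(),
    },
  ],
  termsAsserted: ["https://example.org/my-terms"],
};

// A personal agent could compare priceSeen against current prices across
// every store where items are parked, or intentcast the whole list.
console.log(`${myCart.items.length} item(s) parked across stores`);
```

The point is not these particular fields, but that the cart, like the browser, would be the customer’s own, and the same across every store.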

If you think “none of the stores out there will want any of this, because they won’t control it,” think about what personal operating systems and browsers on every device have already done for stores by making the customer interface standard. What we’re talking about here is enlarging that interface.

I’d love to see if there is any economics research and/or scholarship on personal scale and its leverage (such as personal operating systems, devices and browsers give us) in the digital world. Because it’s a case that needs to be made.

Of course, there’s money to be made as well, because there will be so many more, better and standard ways for companies to deal with customers than current tools (including email, apps and browsers) can offer by themselves.

The only path from subscription hell to subscription heaven

I subscribe to Vanity Fair. I also get one of its newsletters, replicated on a website called The Hive. At the top of the latest Hive is this come-on: “For all that and more, don’t forget to sign up for our metered paywall, the greatest innovation since Nitroglycerin, the Allman Brothers, and the Hangzhou Grand Canal.”

When I clicked on the metered paywall link, it took me to a plain old subscription page. So I thought, “Hey, since they have tracking cruft appended to that link, shouldn’t it take me to a page that says something like, ‘Hi, Doc! Thanks for clicking, but we know you’re already a paying subscriber, so don’t worry about the paywall’?”

So I clicked on the Customer Care link to make that suggestion. This took me to a login page, where my password manager filled in the blanks with one of my secondary email addresses. That got me to my account, which says my Condé Nast subscriptions look like this:

Oddly, the email address at the bottom there is my primary one, not the one I just logged in with.  (Also oddly, I still get Wired.)

So I went to the Vanity Fair home page, found myself logged in there, and clicked on “My Account.” This took me to a page that said my email address was my primary one, and provided a way to change my password, to subscribe or unsubscribe to four newsletters, and a way to “Receive a weekly digest of stories featuring the players you care about the most.” The link below said “Start following people.” No way to check my account itself.

So I logged out from the account page I reached through the Customer Care link, and logged in with my primary email address, again using my password manager. That got me to an account page with the same account information you see above.

It’s interesting that I have two logins for one account. But that’s beside more important points, one of which I made with this message I wrote for Customer Care in the box provided for that:

Curious to know where I stand with this new “metered paywall” thing mentioned in the latest Hive newsletter. When I go to the link there — https://subscribe.condenastdigital.com/subscribe/splits/vanityfair/ — I get an apparently standard subscription page. I’m guessing I’m covered, but I don’t know. Also, even as a subscriber I’m being followed online by 20 or more trackers (reports Privacy Badger), supposedly for personalized advertising purposes, but likely also for other purposes by Condé Nast’s third parties. (Meaning not just Google, Facebook and Amazon, but Parsely and indexww, which I’ve never heard of and don’t trust. And frankly I don’t trust those first three either.) As a subscriber I’d want to be followed only by Vanity Fair and Condé Nast for their own service-providing and analytic purposes, and not by who-knows-what by all those others. If you could pass that request along, I thank you. Cheers, Doc

When I clicked on the Submit button, I got this:

An error occurred while processing your request.An error occurred while processing your request.

Please call our Customer Care Department at 1-800-667-0015 for immediate assistance or visit Vanity Fair Customer Care online.

Invalid logging session ID (lsid) passed in on the URL. Unable to serve the servlet you’ve requested.

So there ya go: one among X zillion other examples of subscription hell, differing only in details.

Fortunately, there is a better way. Read on.

The Path

The only way to pave a path from subscription and customer service hell to the heaven we’ve never had is by  normalizing the ways both work, across all of business. And we can only do this from the customer’s side. There is no other way. We need standard VRM tools to deal with the CRM and CX systems that exist on the providers’ side.

We’ve done this before.

We fixed networking, publishing and mailing online with the simple and open standards that gave us the Internet, the Web and email. All those standards were easy for everyone to work with, supported boundless economic and social benefits, and began with the assumption that individuals are full-privilege agents in the world.

The standards we need here should make each individual subscriber the single point of integration for their own data, and the responsible party for changing that data across multiple entities. (That’s basically the heart of VRM.)

This will give each of us a single way to see and manage many subscriptions, see notifications of changes by providers, and make changes across the board with one move. VRM + CRM.
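As a sketch of what that single point of integration might look like (purely notional, since no such standard exists), consider a customer-side subscription record and a one-move change that propagates to every provider:

```typescript
// Notional sketch only: customer-side subscription records, normalized
// across providers. Field names are invented for illustration.
interface Subscription {
  provider: string;                          // e.g. "Condé Nast"
  service: string;                           // e.g. "Vanity Fair"
  status: "active" | "paused" | "canceled";
  contactEmail: string;                      // the one address of record
}

// One change on the customer's side; a VRM tool would then notify each
// provider's CRM through whatever standard interface they expose.
function changeEmailEverywhere(
  subs: Subscription[],
  newEmail: string
): Subscription[] {
  return subs.map((s) => ({ ...s, contactEmail: newEmail }));
}

const mySubs: Subscription[] = [
  { provider: "Condé Nast", service: "Vanity Fair", status: "active", contactEmail: "old@example.com" },
  { provider: "Condé Nast", service: "Wired", status: "active", contactEmail: "old@example.com" },
];

console.log(changeEmailEverywhere(mySubs, "new@example.com"));
```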

The same goes for customer care service requests. These should be normalized the same way.

In the absence of normalizing how people manage subscription and customer care relationships, all the companies in the world with customers will have as many different ways of doing both as there are companies. And we’ll languish in the login/password hell we’re in now.

The VRM+CRM cost savings to those companies will also be enormous. For a sense of that, just multiply what I went through above by as many people as there are in the world with subscriptions, then multiply that result by the number of subscriptions those people have — and then do the same for customer service.

We can’t fix this inside the separate CRM systems of the world. There are too many of them, competing in too many silo’d ways to provide similar services that work differently for every customer, even when they use the same back-ends from Oracle, Salesforce, SugarCRM or whomever.

Fortunately, CRM systems are programmable. So I challenge everybody who will be at Salesforce’s Dreamforce conference next week to think about how much easier it will be when individual customers’ VRM meets Salesforce B2B customers’ CRM. I know a number of VRM people  who will be there, including Iain Henderson, of the bonus link below. Let me know you’re interested and I’ll make the connection.

And come work with us on standards. Here’s one.

Bonus link: Me-commerce — from push to pull, by Iain Henderson (@iaianh1)

Why personal agency matters more than personal data

Lately a lot of thought, work and advocacy has been going into valuing personal data as a fungible commodity: one that can be made scarce, bought, sold, traded and so on. While there are good reasons to challenge whether or not data can be property (see Jefferson and Renieris), I want to focus on a different problem, the one best to solve first: the need for personal agency in the online world.

I see two reasons why personal agency matters more than personal data.

The first reason we have far too little agency in the networked world is that we settled, way back in 1995, on a model for websites called client-server, which should have been called calf-cow or slave-master, because we’re always the weaker party: dependent, subordinate, secondary. In defaulted regulatory terms, we clients are mere “data subjects,” and only server operators are privileged to be “data controllers,” “data processors,” or both.

Fortunately, the Net’s and the Web’s base protocols remain peer-to-peer, by design. We can still build on those. And it’s early.

A critical start in that direction is making each of us the first party rather than the second when we deal with the sites, services, companies and apps of the world—and doing that at scale across all of them.

Think about how much simpler and saner it is for websites to accept our terms and our privacy policies, rather than to force each of us, all the time, to accept their terms, all expressed in their own different ways. (Because they are advised by different lawyers, equipped by different third parties, and generally confused anyway.)

Getting sites to agree to our own personal terms and policies is not a stretch, because that’s exactly what we have in the way we deal with each other in the physical world.

For example, the clothes that we wear are privacy technologies. We also have  norms that discourage others from doing rude things, such as sticking their hands inside our clothes without permission.

We don’t yet have those norms online, because we have no clothing there. The browser should have been clothing, but instead it became an easy way for adtech and its dependents in digital publishing to plant tracking beacons on our naked digital selves, so they could track us like marked animals across the digital frontier. That this is normative is no excuse. Tracking people without their conscious and explicit invitation—or a court order—is morally wrong, massively rude, and now (at least hopefully) illegal under the GDPR and other privacy laws.

We can easily create privacy tech, personal terms and personal privacy policies that are normative and scale for each of us across all the entities that deal with us. (This is what ProjectVRM’s nonprofit spin-off, Customer Commons, is about.)
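As a sketch of the idea (not an actual Customer Commons or ProjectVRM specification), a personal term could be expressed as data a site reads and agrees to automatically, with the individual as first party and a record of each agreement kept on the individual’s side:

```typescript
// Illustrative sketch only, not a Customer Commons or ProjectVRM spec.
// A personal term asserted by the individual as first party.
interface PersonalTerm {
  firstParty: string;                 // an identifier for the person (hypothetical)
  termName: string;                   // a short label for the term
  allows: string[];                   // what the second party may do
  prohibits: string[];                // what it may not do
  agreements: { site: string; agreedAt: string }[]; // kept by the person
}

const myTerm: PersonalTerm = {
  firstParty: "alice@example.com",
  termName: "no-tracking",
  allows: ["ads not based on tracking me"],
  prohibits: ["third-party tracking", "resale of my data"],
  agreements: [],
};

// When a site agrees, the person's own tool records it -- the reverse of
// today's cookie notices, where only the site keeps the record.
myTerm.agreements.push({ site: "example-publisher.com", agreedAt: new Date().toISOString() });
console.log(myTerm);
```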

It is the height of fatuity for websites and services to say their cookie notice settings are “your privacy choices” when you have no power to offer, or to make, your own privacy choices, with records of those choices that you keep.

The simple fact of the matter is that businesses can’t give us privacy if we’re always the second parties clicking “agree.” It doesn’t matter how well-meaning and GDPR-compliant those businesses are. Making people second parties in all cases is a design flaw in every standing “agreement” we “accept.” And we need to correct that.

The second reason agency matters more than data is that nearly the entire market for personal data today is adtech, and adtech is too dysfunctional, too corrupt, too drunk on the data it already has, and absolutely awful at doing what it has harvested that data for, which is so machines can guess at what we might want before they shoot “relevant” and “interest-based” ads at our tracked eyeballs.

Not only do tracking-based ads fail to convince us to do a damn thing 99.xx+% of the time, but most of the time we’re not looking to buy anything anyway.

As incentive alignments go, adtech’s failure to serve the actual interests of its targets verges on absolute. (It’s no coincidence that more than a year ago, up to 1.7 billion people were already blocking ads online.)

And hell, what they do also isn’t really advertising, even though it’s called that. It’s direct marketing, which gives us junk mail and is the model for spam. (For more on this, see Separating Advertising’s Wheat and Chaff.)

Privacy is personal. That means privacy is an effect of personal agency, projected by personal tech and by personal expressions of intent that others can respect without working at it. We have that in the offline world. We can have it in the online world too.

Privacy is not something given to us by companies or governments, no matter how well they do Privacy by Design or craft their privacy policies. Top-down privacy simply can’t work.

In the physical world we got privacy tech and norms before we got privacy law. In the networked world we got the law first. That’s why the GDPR has caused so much confusion. Good and helpful though it may be, it is the regulatory cart in front of the technology horse. In the absence of privacy tech, we also failed to get the norms that would normally and naturally guide lawmaking.

So let’s get the tech horse back in front of the lawmaking cart. If we don’t do that first, adtech will stay in control. And we know how that movie goes, because it’s a horror show and we’re living in it now.

 

Actual chat with an Internet Disservice Provider

[Image: “Customer Disservice,” from Despair.com]

After failing to get a useful answer from Verizon about FiOS availability at a Manhattan address (via http://fios.verizon.com/fios-coverage.html), I engaged the site’s chat agent system, and had this dialog:

Jessica: Hi! I am a Verizon specialist, can I help you today?

You: I am trying to help a friend moving into ______ in New York City. The Web interface here gives a choice of three addresses, two of which are that address, but it doesn’t seem to work. She wants to know if the Gigabit deal — internet only (she doesn’t watch TV or want a phone) — is available there.
Jessica: By chatting with us, you grant us permission to review your services during the chat to offer the best value. Refusing to chat will not affect your current services. It is your right and our duty to protect your account information. For quality, we may monitor and/or review this chat.

You: sure.
Jessica: Hey there! My name is Jessica. Happy to help!

Jessica: Thank you for considering Verizon services. I would be glad to assist you with Verizon services.

You: Did you see my question?
Jessica: Thank you for sharing the address, please allow me a moment to check this for you.

Jessica: Yes, please allow me a moment to check this for you.

Jessica: I appreciate your patience.

Jessica: Do you live in the apartment?

You: No. I am looking for a friend who is moving into that building.
You: I had FiOS where I used to live near Boston and was pleased with it.
Jessica: Thank you for your consideration.

Jessica: The address where your friend will be moving require to enter the apartment number.

You: hang on
Jessica: Sure, take your time.

You: 5B
You: When we are done I
Jessica: Thank you, one more moment please.

You: would also like you to check my building as well.
Jessica: Sure, allow me a moment.

Jessica: I appreciate your patience.

Jessica: I’m extremely sorry to share this, currently at your friend’s location we don’t have Fios services.

You: Okay. How about _________ ?
You: Still there?
Jessica: Yes, I’m checking for this.

Jessica: Please stay connected.

Jessica has left the chat
You are being transferred, please hold…
You are now chatting with LOUIS
LOUIS: Good morning. I’ll be happy to assist you today. May I start by asking for your name, the phone number we are going to be working with today, and your account pin please?

You: I want to know if FiOS is available at _________.
You: __________. It is not a landline and I do not have an account.
LOUIS: Hello. You’ve reached our Verizon Wireless chat services. I don’t have an option to check on our Fios services for your area. You are able to contact our Fios sister company at the number 1-800-483-3000

You: this makes no sense. I was transfered to you by Jessica in FiOS.
LOUIS: Looks like Jessica is one of our chat agents, but we are with Verizon Wireless. Fios is our sister company, which is a different entity than us

You: Well, send some feedback to whoever or whatever is in charge. Not sure what the problem is, but it’s a fail in this round. Best to you. I now your job isn’t easy.
LOUIS: I do apologize about this, I will certainly relay this feedback on this matter. Here is a link to Verizon Communications for your residential services:https://www.verizon.com/support/residential/contact-us/index.htm

You: Thanks.
LOUIS: I want to thank you for chatting with me today. Hope you have a great day! You can find answers to additional questions at vzw.com/support. Please click on the “X” or “End Chat” button to end this chat.

You: Thanks agin.

The only way to fix this, as we’ve said here countless times, is from the customer’s side. Meanwhile, please dig Despair.com, source of the image above. For so many companies, it remains too true.

Our radical hack on the whole marketplace

In Disruption isn’t the whole VRM story, I visited the Tetrad of Media Effects, from Laws of Media: the New Science, by Marshall and Eric McLuhan. Every new medium (which can be anything from a stone arrowhead to a self-driving car), the McLuhans say, does four things, which they pose as questions that can have multiple answers, and they visualize this way:

[Figure: the tetrad of media effects]

The McLuhans also famously explained their work with this encompassing statement: We shape our tools and thereafter they shape us.

This can go for institutions, such as businesses, and whole marketplaces, as well as people. We saw that happen in a big way with contracts of adhesion: those one-sided non-agreements we click on every time we acquire a new login and password, so we can deal with yet another site or service online.

These were named in 1943 by the law professor Friedrich “Fritz” Kessler in his landmark paper, “Contracts of Adhesion: Some Thoughts about Freedom of Contract.” Here is pretty much his whole case, expressed in a tetrad:

[Figure: contracts of adhesion expressed as a tetrad]

Contracts of adhesion were tools that industry shaped, that in turn shaped industry, and that went on to shape the whole marketplace.

But now we have the Internet, which by design gives everyone on it a place to stand, and, like Archimedes with his lever, move the world.

We are now developing that lever, in the form of terms any one of us can assert, as a first party, and the other side—the businesses we deal with—can agree to, automatically. Which they’ll do because it’s good for them.

I describe our first two terms, both of which have potentials toward enormous changes, in two similar posts put up elsewhere: 

— What if businesses agreed to customers’ terms and conditions? 

— The only way customers come first

And we’ll work some of those terms this week, fittingly, at the Computer History Museum in Silicon Valley, starting tomorrow at VRM Day and then Tuesday through Thursday at the Internet Identity Workshop. I host the former and co-host the latter, our 24th. One is free and the other is cheap for a conference.

Here is what will come of our work:
[Figure: personal terms]

Trust me: nothing you can do is more leveraged than helping make this happen.

See you there.

 

We’re done with Phase One

Here’s a picture that’s worth more than a thousand words:

[Photo: a MAIF speaker at MyData 2016]

He’s with MAIF, the French insurance company, speaking at MyData 2016 in Helsinki, a little over a month ago. Here’s another:

[Photo: Sean Bohan speaking at MyData 2016]

That’s Sean Bohan, head of our steering committee, expanding on what many people at the conference already knew.

I was there too, giving the morning keynote on Day 2:

[Photo: giving the Day 2 keynote at MyData 2016]

It was an entirely new talk. Pretty good one too, especially since  I came up with it the night before.

See, by the end of Day 1, it was clear that pretty much everybody at the conference already knew how market power was shifting from centralized industries to distributed individuals and groups (including many inside centralized industries). It was also clear that most of the hundreds of people at the conference were also familiar with VRM as a market category. I didn’t need to talk about that stuff anymore. At least not in Europe, where most of the VRM action is.

So, after a very long journey, we’re finally getting started.

In my own case, the journey began when I saw the Internet coming, back in the ’80s.  It was clear to me that the Net would change the world radically, once it allowed commercial activity to flow over its pipes. That floodgate opened on April 30, 1995. Not long after that, I joined the fray as an editor for Linux Journal (where I still am, by the way, more than 20 years later). Then, in 1999, I co-wrote The Cluetrain Manifesto, which delivered this “one clue” above its list of 95 Theses:

[Image: Cluetrain’s “one clue”: we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.]

And then, one decade ago last month, I started ProjectVRM, because that clue wasn’t yet true. Our reach did not exceed the grasp of marketers in the world. If anything, the Net extended marketers’ grasp a lot more than it did ours. (Shoshana Zuboff says their grasp has metastasized into surveillance capitalism.) In respect to Gibson’s Law, Cluetrain proclaimed an arrived future that was not yet distributed. Our job was to distribute it.

Which we have. And we can start to see results such as those above. So let’s call Phase One a done thing. And start thinking about Phase Two, whatever it will be.

To get that work rolling, here are a few summary facts about ProjectVRM and related efforts.

First, the project itself could hardly be more lightweight, at least administratively. It consists of:

Second, we have a spin-off: Customer Commons, which will do for personal terms of engagement (ones each of us can assert online) what Creative Commons (another Berkman-Klein spinoff) did for copyright.

Third, we have a list of many dozens of developers, most of whom seem to be concentrated in Europe and Australia/New Zealand. Two reasons for that, both speculative:

  1. Privacy. The concept is far more sensitive, and more evolved, in Europe than in the U.S. The reason we most often get goes, “Some of our governments once kept detailed records of people, and those records were used to track down and kill many of them.” There are also more evolved laws respecting privacy. In Australia there have been privacy laws for several years requiring those collecting data about individuals to make it available to them, in forms the individual specifies. And in Europe there is the General Data Protection Regulation, which will impose severe penalties for unwelcome data gathering from individuals, starting in 2018.
  2. Enlightened investment. Meaning investors who want a startup to make a positive difference in the world, and not just give them a unicorn to ride out some exit. (Which seems to have become the default model in the U.S., especially Silicon Valley.)

What we lack is research. And by we I mean the world, and not just ProjectVRM.

Research is normally the first duty of a project at the Berkman Klein Center, which is chartered as a research organization. Research was ProjectVRM’s last duty, however, because we had nothing to research at first. Or, frankly, until now. That’s why we were defined as a development & research project rather than the reverse.

Where and how research on VRM and related efforts happens is a wide-open question. What matters is that it needs to be done, starting soon, while the “before” state still prevails in most of the world, and the future is still on its way in delivery trucks. Who does that research matters far less than the research itself.

So we are poised at a transitional point now. Let the conversations about Phase Two commence.

VRM at MyData2016


As it happens I’m in Helsinki right now, for MyData2016, where I’ll be speaking on Thursday morning. My topic: The Power of the Individual. There is also a hackathon (led by DataBusiness.fi) going on during the show, starting at 4pm (local time) today. In no order of priority, here are just some of the subjects and players I’ll be dealing with,  talking to, and talking up (much as I can):

Please let me know what others belong on this list. And see you at the show.
