
Toward a lexicon for advertising in both directions

We need a lexicon for the different ways buyers and sellers express their intentions to each other. Or, one might say, advertise.

On the demand side (⊂) we have what in ProjectVRM we’ve called intentcasting and (earlier) personal RFP. Scott Adams calls it broadcast shopping and John Hagel and David Siegel both (in books by that title) call it pull.

On the sell side (⊃) I can list at least six kinds of advertising alone that desperately need distinctive labels. To pull them apart, these are:

  1. Brand advertising. This kind is aimed at populations. All of it is contextual, meaning placed in media, TV or radio programs, or publications, that appeal broadly or narrowly to a categorized audience. None of it is tracking-based, and none of it is personal. Little of it wants a direct response. It simply means to impress. This is also the form of advertising that burned every brand you can name into your brain. In fact the word brand itself was borrowed from the cattle industry by Procter & Gamble in the 1930s, when it also funded the golden age of radio. Today it is also what sponsors all of sports broadcasting and pays most sports stars their massive salaries.
  2. Search advertising. This is what shows up with search results. There are two very different kinds here:
    1. Context-based. Not based on tracking. This is what DuckDuckGo does.
    2. Context+tracking based. This is what Google and Bing do.
  3. Tracking-based advertising. I’ve called this adtech. Cory Doctorow calls it ad-tech. Others call it ad tech. Some euphemize it as behavioral, relevant, interest-based, or personalized. Shoshana Zuboff says all of them are based on surveillance, which they are. So, many critics speak of it as surveillance-based advertising.
  4. Advertising that’s both contextual and personal—but only in the sense that a highly characterized individual falls within a group, or a collection of overlapping groups, chosen by the advertiser. These are Facebook’s Core, Custom and Look-Alike audiences. Talk to Facebook and they’ll tell you these ads are not meant to be personal, though you should not be surprised to see ads for shoes when you have made clear to Facebook’s trackers (on the site, the apps, and wherever the company’s tentacles reach) that you might be in the market for shoes. Still, since Facebook characterizes every face in its audience in almost countless ways, it’s easy to call this form of advertising tracking-based.
  5. Interactive advertising. Vaguely defined by Wikipedia here, and sometimes called conversational advertising, this kind aims to get an interactive response from people. The expression is not much used today, even though the Interactive Advertising Bureau (IAB) is the leading trade association in the tracking-based advertising field and its primary proponent.
  6. Native advertising, also called sponsored content, is advertising made to look like ordinary editorial material.

The list is actually much longer. But the distinction that matters is between advertising that is tracking-based and the advertising that is not. As I put it in Brands need to fire adtech,

Let’s be clear about all the differences between adtech and real advertising. It’s adtech that spies on people and violates their privacy. It’s adtech that’s full of fraud and a vector for malware. It’s adtech that incentivizes publications to prioritize “content generation” over journalism. It’s adtech that gives fake news a business model, because fake news is easier to produce than the real kind, and adtech will pay anybody a bounty for hauling in eyeballs.

Real advertising doesn’t do any of those things, because it’s not personal. It is aimed at populations selected by the media they choose to watch, listen to or read. To reach those people with real ads, you buy space or time on those media. You sponsor those media because those media also have brand value.

With real advertising, you have brands supporting brands.

Brands can’t sponsor media through adtech because adtech isn’t built for that. On the contrary, adtech is built to undermine the brand value of all the media it uses, because it cares about eyeballs more than media.

Adtech is magic in this literal sense: it’s all about misdirection. You think you’re getting one thing while you’re really getting another. It’s why brands think they’re placing ads in media, while the systems they hire chase eyeballs. Since adtech systems are automated and biased toward finding the cheapest ways to hit sought-after eyeballs with ads, some ads show up on unsavory sites. And, let’s face it, even good eyeballs go to bad places.

This is why the media, the UK government, the brands, and even Google are all shocked. They all think adtech is advertising. Which makes sense: it looks like advertising and gets called advertising. But it is profoundly different in almost every other respect. I explain those differences in Separating Advertising’s Wheat and Chaff:

…advertising today is also digital. That fact makes advertising much more data-driven, tracking-based and personal. Nearly all the buzz and science in advertising today flies around the data-driven, tracking-based stuff generally called adtech. This form of digital advertising has turned into a massive industry, driven by an assumption that the best advertising is also the most targeted, the most real-time, the most data-driven, the most personal — and that old-fashioned brand advertising is hopelessly retro.

In terms of actual value to the marketplace, however, the old-fashioned stuff is wheat and the new-fashioned stuff is chaff. In fact, the chaff was only grafted on recently.

See, adtech did not spring from the loins of Madison Avenue. Instead its direct ancestor is what’s called direct response marketing. Before that, it was called direct mail, or junk mail. In metrics, methods and manners, it is little different from its closest relative, spam.

Direct response marketing has always wanted to get personal, has always been data-driven, has never attracted the creative talent for which Madison Avenue has been rightly famous. Look up best ads of all time and you’ll find nothing but wheat. No direct response or adtech postings, mailings or ad placements on phones or websites.

Yes, brand advertising has always been data-driven too, but the data that mattered was how many people were exposed to an ad, not how many clicked on one — or whether you, personally, did anything.

And yes, a lot of brand advertising is annoying. But at least we know it pays for the TV programs we watch and the publications we read. Wheat-producing advertisers are called “sponsors” for a reason.

So how did direct response marketing get to be called advertising? By looking the same. Online it’s hard to tell the difference between a wheat ad and a chaff one.

Remember the movie “Invasion of the Body Snatchers?” (Or the remake by the same name?) Same thing here. Madison Avenue fell asleep, direct response marketing ate its brain, and it woke up as an alien replica of itself.

This whole problem wouldn’t exist if the alien replica wasn’t chasing spied-on eyeballs, and if advertisers still sponsored desirable media the old-fashioned way.

Bonus link.

I wrote that in 2017. The GDPR became enforceable in 2018 and the CCPA in 2020.  Today more laws and regulations are being instituted to fight tracking-based advertising, yet the whole advertising industry remains drunk on digital, deeply corrupt and delusional, and growing like a Stage IV cancer.

We live digital lives now, and most of the advertising we see and hear is on or through glowing digital rectangles. Most of those are personal as well. So, naturally, most advertising on those media is personal—or wishes it was. Regulations that require “consent” for the tracking that personalization requires do not make the practice less hostile to personal privacy. They just make the whole mess easier to rationalize.

So I’m trying to do two things here.

One is to make clearer the distinctions between real advertising and direct marketing.

The other is to suggest that better signaling from demand to supply, starting with intentcasting, may serve as chemo for the cancer that adtech has become. It will do that by simply making clear to sellers what buyers actually want and don’t want.

 

 

Salon with Robin Chase

Robin Chase, co-founder and original CEO of Zipcar and author of Peers Inc: How People and Platforms are Inventing the Collaborative Economy and Reinventing Capitalism, will speak at the Ostrom Workshop’s Beyond the Web Salon Series at Indiana University at 2:00 PM Eastern this coming Monday, February 7, 2022. The event link is here, where you’ll also find the Zoom link.

The full theme of the salon series is Beyond the Web: Making a platform-free online marketplace for goods, ideas and everything else, about which you can read more here.

Robin’s work with transportation and peer production has been VRooMy from the start, and especially consistent with our work with the Ostrom Workshop on the Intention Byway in Bloomington, Indiana.

Upcoming speakers in the Salon Series (mark your calendars) are Ethan Zuckerman and Shoshana Zuboff. Both are BKC veterans and, like Robin, devoted to moving beyond status quos that vex us all. Ethan will be with us on March 7 and Shoshana on April 11. Days and times for both are Mondays at 2:00 PM Eastern. Details at those links.

These events are all participatory, informative, challenging and fun. Please join us.

Beyond the Web

The Cluetrain Manifesto said this…

we are not seats or eyeballs or end users or consumers. we are human beings, and our reach exceeds your grasp. deal with it.

…in 1999.

And now, in 2021, it’s still not true—at least not on the Web.

If it were true, California’s CCPA wouldn’t call us mere “consumers” and Europe’s GDPR wouldn’t call us mere “data subjects,” whose privacy is entirely at the grace of corporate “data processors” and “data controllers.” (While the GDPR does say a “natural person” can be either of those, the prevailing assumption says no. Worse, it assumes that what privacies we enjoy on the Web should be valved by choices we make when confronted with “consent” notices that pop up when we first visit a website, and which are recorded somewhere we don’t know and can’t audit or dispute.)

Simply put, we are not free, and our reach does not exceed their grasp. Again, on the Web.

But (this is key), the Web is not the Internet. It’s a haystack of stuff on the Net. It’s a big one, and hugely good in many ways. And maybe we can be really free there eventually. But why not work outside of it? That’s the question.

And that’s what some of us are answering. You might call what we’re doing a blue ocean strategy:

For example, Joyce and I are now in Bloomington, Indiana, embedded as visiting scholars at Indiana University’s Ostrom Workshop, where we are rolling out a new project called the Byway, for Customer Commons, ProjectVRM’s nonprofit spin-off. We will also be working with local communities of interest here in Bloomington. Stay tuned for more on that.

To find out more about what we’re up to—or just to discuss whatever seems relevant—please come to our first Beyond the Web salon, by Zoom, on Monday at 3pm Eastern time. The full link: https://events.iu.edu/ostromworkshop/event/264653-ostrom-salon-series-beyond-the-web

How the Web sucks

This spectrum of emojis is a map of the Web’s main occupants (the middle three) and outliers (the two on the flanks). It provides a way of examining who is involved, where regulation fits, and where money gets invested and made. Yes, it’s overly broad, but I think it’s helpful in understanding where things went wrong and why. So let’s start.

Wizards are tech experts who likely run their own servers and keep private by isolating themselves and communicating with crypto. They enjoy the highest degrees of privacy possible on and around the Web, and their approach to evangelizing their methods is to say “do as I do” (which most of us, being Muggles, don’t). Relatively speaking, not much money gets made by or invested in Wizards, but much money gets made because of Wizards’ inventions. Those inventions include the Internet, the Web, free and open source software, and much more. Without Wizards, little of what we enjoy in the digital world today would be possible. However, it’s hard to migrate their methods into the muggle population.

Muggles are the non-Wizards who surf the Web and live much of their digital lives there, using Web-based services on mobile apps and browsers on computers. Most of the money flowing into the webbed economy comes from Muggles. Still, there is little investment in providing Muggles with tools for operating or engaging independently and at scale across the websites and services of the world. Browsers and email clients are about it, and the most popular of those (Chrome, Safari, Edge) are by the grace of corporate giants. Almost everything Muggles do on the Web and mobile devices is on apps and tools that are what the trade calls silos or walled gardens: private spaces run by the websites and services of the world.

Sites. This category also includes clouds and the machinery of e-commerce. These are at the heart of the Web: a client-server (aka calf-cow) top-down, master-slave environment where servers rule and clients obey. It is in this category that most of the money on the Web (and e-commerce in general) gets made, and into which most investment money flows. It is also here that nearly all development in the connected world today happens.

Ad-tech, aka adtech, is the home of surveillance capitalism, which relies on advertisers and their agents knowing all that can be known about every Muggle. This business also relies on absent Muggle agency, and uses that absence as an excuse for abusing the privilege of committing privacy violations that would be rude or criminal in the natural world. Also involved in this systematic compromise are adtech’s dependents in the websites and Web services of the world, which are typically employed by adtech to inject tracking beacons in Muggles’ browsers and apps. It is to the overlap between adtech and sites that all privacy regulation is addressed. This is why the GDPR sees Muggles as mere “data subjects,” and assigns responsibility for Muggles’ privacy to websites and services the regulation calls “data controllers” and “data processors.” The regulation barely imagines that Muggles could perform either of those roles, even though personal computing was invented so every person can do both. (By the way, the adtech business and many of its dependents in publishing like to say the Web is free because advertising pays for it. But the Web is as free by nature as are air and sunlight. And most of the money Google makes, for example, comes from plain old search advertising, which can get along fine without tracking. There is also nothing about advertising itself that requires tracking.)
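To make that mechanism concrete, here is a minimal sketch, in TypeScript, of the kind of beacon a third-party script embedded in a page might fire. The tracker address and parameter names are hypothetical, not any real vendor’s API; the common pattern is simply shipping an identifier plus page context off-site.

```typescript
// Hypothetical sketch of a third-party tracking beacon. The endpoint and
// parameter names are made up; real adtech scripts vary, but the pattern of
// sending an identifier plus page context to a remote server is the same.
function reportPageView(userId: string): void {
  const beacon = new URL("https://tracker.example.com/pixel");
  beacon.searchParams.set("uid", userId);                 // cross-site identifier, typically read from a cookie
  beacon.searchParams.set("page", window.location.href);  // what the person is reading
  beacon.searchParams.set("ref", document.referrer);      // where they came from
  navigator.sendBeacon(beacon.toString());                // fire-and-forget request to the tracker
}
```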

Crime happens on the Web, but its center of gravity is outside, on the dark web. This is home to botnets, illegal porn, terrorist activity, ransom attacks, cyber espionage, and so on. There is a lot of overlap between crime and adtech, however, given the moral compromises required for adtech to function, plus the countless ways that bots, malware and other types of fraud are endemic to the adtech business. (Of course, to be an expert criminal on the dark web requires a high degree of wizardry. So one could arrange these categories in a circle, with an overlap between Wizards and criminals.)

I offer this set of distinctions for several reasons. One is to invite conversation about how we have failed the Web and the Web has failed us—the Muggles of the world—even though we enjoy apparently infinite goodness from the Web and handy services there. Another is to explain why ProjectVRM has been more aspirational than productive in the fifteen years it has been working toward empowering people on the commercial Net. (Though there has been ample productivity.) But mostly it is to explain why I believe we will be far more productive if we start working outside the Web itself. This is why our spinoff, Customer Commons, is pushing forward with the Byway toward i-commerce. Check it out.

Finally, I owe the idea for this visualization to Iain Henderson, who has been with ProjectVRM since before it started. (His other current involvements are with JLINC and Customer Commons.) Hope it proves useful.

QR codes are becoming fishhooks

We’ve been very bullish on QR codes here, because they’re an excellent way for customers and vendors to shake hands, to start doing business, and to form constructive relationships.

Alas, they have become bait for tracking by marketers. In QR Codes Are Here to Stay. So Is the Tracking They Allow, Erin Woo (@erinkwoo) of the NY Times explains how:

Restaurants have adopted them en masse, retailers including CVS and Foot Locker have added them to checkout registers, and marketers have splashed them all over retail packaging, direct mail, billboards and TV advertisements.

But the spread of the codes has also let businesses integrate more tools for tracking, targeting and analytics, raising red flags for privacy experts. That’s because QR codes can store digital information such as when, where and how often a scan occurs. They can also open an app or a website that then tracks people’s personal information or requires them to input it.

As a result, QR codes have allowed some restaurants to build a database of their customers’ order histories and contact information. At retail chains, people may soon be confronted by personalized offers and incentives marketed within QR code payment systems.

“People don’t understand that when you use a QR code, it inserts the entire apparatus of online tracking between you and your meal,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union. “Suddenly your offline activity of sitting down for a meal has become part of the online advertising empire.”

So that’s one more thing to fix in our apps and browsers. But how?

Obviously, we can try to avoid QR codes; but there are a growing number of places where that’s not possible.

Providing ways to opt out is a giant non-starter, as we’ve learned at great pain on the Web. (Do you have any record at all of the separate privacy settings you’ve made at all the sites and services where those choices have been provided? Of course not.)

We need at least two things here, and fast.

One is some way, in our phones or browsers, to prevent QR code scanning on phones from turning into tracking. Are you listening, Apple and Google? Plus everybody else in the QR code business?
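To make that concrete, here is a minimal sketch of what a scanner or browser could do: strip known tracking parameters from the decoded URL before opening it. The parameter list is illustrative only, and no scanner I know of ships exactly this.

```typescript
// A minimal sketch of a privacy-respecting QR handler: strip common tracking
// parameters from the decoded URL before opening it. The list below is
// illustrative, not exhaustive.
const TRACKING_PARAMS = ["utm_source", "utm_medium", "utm_campaign", "gclid", "fbclid"];

function cleanScannedUrl(decoded: string): string {
  const url = new URL(decoded);
  for (const param of TRACKING_PARAMS) {
    url.searchParams.delete(param);
  }
  return url.toString();
}

// Example: a restaurant-menu code with campaign tags attached
cleanScannedUrl("https://menu.example.com/table7?utm_source=qr&utm_campaign=dinner");
// -> "https://menu.example.com/table7"
```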

The other is regulation. And I hate to say that, because too many regulations protect yesterday from last Thursday, and distort markets in ways seen and unseen for decades to come. But this is a case where we really need it.

[Two days later…]

There has been much follow-up to this piece. If you’re interested in that, start with this clip from Wednesday’s FLOSS Weekly podcast, where Jonathan Bennett (@JP_Bennett) provides some excellent answers to questions raised here and elsewhere.

On Twitter, @QRcodeART has some good follow-up under an @TWiT tweet pointing to that clip. In that thread I stand accused of “pure babbling,” to which I plead guilty (providing, as I do, an example of how, as Garrison Keillor once put it, “English is the preacher’s language because it allows you to talk until you think of what to say”).

The main point in the thread is that QR codes are essentially “innocent.” Also, “#Bluetooth is much worse! Creative names, unique IDs (!) and such and usually open and “seeable” for everybody. Similar to your #Wifi searching always for a #WLan in the perimeter. Unique funny names and identifiable MAC addresses. Think about that !”

Good advice. Clearly, there are concerns for all the tech we use, especially the networked kind. If we fail to take precautions such as those Jonathan recommends, we’re likely being tracked in ways we wouldn’t welcome if we knew about it. Returning to the metaphor, everything you carry, scan or click on can be a fishhook. And, to the hookers, you’re just a fish.

 

 

Toward e-commerce 2.0

Phil Windley explains e-commerce 1.0  in a single slide that says this:

One reason this happened is that client-server, aka calf-cow  (illustrated in Thinking outside the browser) has been the default format for all relationships on the Web, and cookies are required to maintain those relationships.  The result is a highly lopsided power asymmetry in which the calves have no more power than the cows give them. As a result,

  1. The calves have no easy way even to find  (much less to understand or create) the cookies in their browsers’ jars.
  2. The calves have no identity of their own, but instead have as many different identities as there are websites that know (via cookies) their visiting browsers. This gives them no independence, much less a place to stand like Archimedes, with a lever on the world. The browser may be a great tool, but it’s neither that place to stand, nor a sufficient lever. (Yes, it should have been, and maybe still could be; but meanwhile, it isn’t.)
  3. All the “agreements” the calves have with the websites’ cows leave no readable record on the calves’ side. This severely limits their capacity for dispute, which is required for a true relationship.
  4. There exists no independent way for the calves to signal their intentions—such as interests in purchase, conditions for engagement, or the need to be left alone (which is how Brandeis and Warren define privacy).

In other words, the best we can do in e-commerce 1.0 is what the calf-cow system provides: ways for calves to depend utterly on means the cows provide. And some of those cows are mighty huge.

Nearly all signaling between demand and supply remains trapped inside these silos and walled gardens. We search inside their systems, we are notified of product and service availability inside their systems, we make agreements inside their systems (to terms and conditions they provide and require), our privacy is dependent on their systems, and product and service delivery is handled either inside their systems or through allied and dependent systems.

Credit where due: an enormous amount of good has come out of these systems. But a far larger amount of good is MLOTT—money left on the table—because there is a boundless sum and variety of demand and supply that still cannot easily signal their interest, intentions, or presence to each other in the digital world.
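As a thought experiment, here is a rough sketch of what one such independent, buyer-side signal (an intentcast) might look like as plain data under the individual’s control. Every field name here is hypothetical; nothing below is an existing standard, only an illustration of buyers stating terms rather than filling in sellers’ forms.

```typescript
// A hypothetical intentcast record: a buyer-side signal of demand, carried
// outside any store's silo, with the buyer's own terms attached. Field names
// are illustrative; no such standard exists yet.
interface Intentcast {
  want: string;                          // what the buyer is looking for
  maxPrice?: number;                     // optional ceiling, in the buyer's currency
  neededBy?: string;                     // ISO date by which the purchase matters
  terms: {
    noTracking: boolean;                 // responses must not arrive with tracking attached
    contactVia: "email" | "agent";       // how sellers may respond
  };
}

const myIntent: Intentcast = {
  want: "stroller for twins, new or used",
  maxPrice: 300,
  neededBy: "2022-06-01",
  terms: { noTracking: true, contactVia: "agent" },
};
```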

Putting that money on the table is our job in e-commerce 2.0.

So here is a challenge: tell us how we can do that without using browsers.

Some of us here do have ideas. But we’d like to hear from you first.


Cross-posted at the ProjectVRM blog, here.

Is being less tasty vegetables our best strategy?

We are now being farmed by business. The pretense of the “customer is king” is now more like “the customer is a vegetable” — Adrian Gropper

That’s a vivid way to put the problem.

There are many approaches to solutions as well. One is suggested today in the latest by @_KarenHao in MIT Technology Review, titled

How to poison the data that Big Tech uses to surveil you:
Algorithms are meaningless without good data. The public can exploit that to demand change.

An  excerpt:

In a new paper being presented at the Association for Computing Machinery’s Fairness, Accountability, and Transparency conference next week, researchers including PhD students Nicholas Vincent and Hanlin Li propose three ways the public can exploit this to their advantage:
Data strikes, inspired by the idea of labor strikes, which involve withholding or deleting your data so a tech firm cannot use it—leaving a platform or installing privacy tools, for instance.
Data poisoning, which involves contributing meaningless or harmful data. AdNauseam, for example, is a browser extension that clicks on every single ad served to you, thus confusing Google’s ad-targeting algorithms.
Conscious data contribution, which involves giving meaningful data to the competitor of a platform you want to protest, such as by uploading your Facebook photos to Tumblr instead.
People already use many of these tactics to protect their own privacy. If you’ve ever used an ad blocker or another browser extension that modifies your search results to exclude certain websites, you’ve engaged in data striking and reclaimed some agency over the use of your data. But as Hill found, sporadic individual actions like these don’t do much to get tech giants to change their behaviors.
What if millions of people were to coordinate to poison a tech giant’s data well, though? That might just give them some leverage to assert their demands.

The sourced paper* is titled Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies, and concludes,

In this paper, we presented a framework for using “data leverage” to give the public more influence over technology company behavior. Drawing on a variety of research areas, we described and assessed the “data levers” available to the public. We highlighted key areas where researchers and policymakers can amplify data leverage and work to ensure data leverage distributes power more broadly than is the case in the status quo.

I am all for screwing with overlords, and the authors suggest some fun approaches. Hell, we should all be doing whatever it takes, lawfully (and there is a lot of easement around that) to stop rampant violation of our privacy—and not just by technology companies. The customers of those companies, which include every website that puts up a cookie notice that nudges visitors into agreeing to be tracked all over the Web (in observance of the letter of the GDPR, while screwing its spirit), are also deserving of corrective measures. Same goes for governments who harvest private data themselves, or gather it from others without our knowledge or permission.

My problem with the framing of the paper and the story is that both start with the assumption that we are all so weak and disadvantaged that our only choices are: 1) to screw with the status quo to reduce its harms; and 2) to seek relief from policymakers.  While those choices are good, they are hardly the only ones.

Some context: wanton privacy violations in our digital world have only been going on for a little more than a decade, and that world is itself barely more than a couple dozen years old (dating from the appearance of e-commerce in 1995). We will also remain digital as well as physical beings for the next few decades or centuries.

So we need more than these kinds of prescriptive solutions. For example, real privacy tech of our own, that starts with giving us the digital versions of the privacy protections we have enjoyed in the physical world for millennia: clothing, shelter, doors with locks, and windows with curtains or shutters.

We have been on that case with ProjectVRM since 2006, and there are many developments in progress. Some even comport with our Privacy Manifesto (a work in progress that welcomes improvement).

As we work on those, and think about throwing spanners into the works of overlords, it may also help to bear in mind one of Craig Burton‘s aphorisms: “Resistance creates existence.” What he means is that you can give strength to an opponent by fighting it directly. He applied that advice in the ’80s at Novell by embracing 3Com, Microsoft and other market opponents, inventing approaches that marginalized or obsolesced their businesses.

I doubt that will happen in this case. Resisting privacy violations has already had lots of positive results. But we do have a looong way to go.

Personally, I welcome throwing a Theia.


* The full list of authors is Nicholas Vincent, Hanlin Li (@hanlinliii), Nicole Tilly and Brent Hecht (@bhecht) of Northwestern University, and Stevie Chancellor (@snchencellor) of the University of Minnesota.

What SSI needs

[Image: a wallet]

Self-sovereign identity (SSI) is hot stuff.  Look it up and see how many results you get. As of today, I get 627,000 on Google.  By that measure alone, SSI is the biggest thing in the VRM development world. Nothing I know has more promise to give individuals leverage for dealing with the organizations of the world, especially in business.

Here’s how SSI works: rather than presenting your “ID” when some other party wants to know something about you, you present a verifiable credential that tells them no more than they need to know.

In other words, if someone wants to know if you are over 18, a member of Costco, a college graduate, or licensed to drive a car, you present a verifiable credential that tells the other party no more than that, but in a way they can trust. The interaction also leaves a trail, so you can both look back and remember what credentials you presented, and how the credential was accepted.
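As a rough illustration, loosely patterned on the W3C Verifiable Credentials data model and simplified well past what real SSI stacks do, a credential proving only “over 18” might look something like this. The issuer identifier and field values are hypothetical:

```typescript
// Simplified sketch of selective disclosure: the holder shares one claim
// ("over 18"), not a whole ID document. Loosely patterned on the W3C
// Verifiable Credentials model; real wallets add DIDs, revocation,
// zero-knowledge proofs, and more.
interface VerifiableCredential {
  type: string[];
  issuer: string;                              // who vouches for the claim
  credentialSubject: Record<string, unknown>;  // the claim itself, and nothing more
  proof: { signature: string };                // stand-in for a cryptographic proof
}

const over18: VerifiableCredential = {
  type: ["VerifiableCredential", "AgeOver18"],
  issuer: "did:example:dmv",                   // e.g., a motor-vehicle department
  credentialSubject: { over18: true },         // no birthdate, name, or address disclosed
  proof: { signature: "..." },                 // elided; produced with the issuer's key
};

// The wallet can also log each presentation, so both parties can look back on it.
const presentations = [
  { credential: "AgeOver18", presentedTo: "store.example", date: "2021-05-01", accepted: true },
];
```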

So, how do you do this? With a tool.

The easiest tool to imagine is a wallet, or a wallet app (here’s one) with some kind of dashboard. That’s what I try to illustrate with the image above: a way to present credentials and to keep track of how those play in the relevant parts of your life.

What matters is that you need to be in charge of your verifiable credentials, how they’re presented,  and how the history of interactions is recorded and auditable. You’re not just a “user,” or a pinball in some company’s machine. You’re the independent and sovereign self, selectively interacting with others who need some piece of “ID.”

There is no need for this to be complicated—at least not at the UI level. In fact, most of it can be automated, especially if the business ends of Me2B engagements are ready to work with verifiable credentials.

As it happens, almost all development in the SSI world is at the business end. This is very good, but it’s not enough.

To me it looks like SSI development today is where the Web was in the early ’90s, before graphical browsers caught on. Back then we knew the Web was there; but most of us couldn’t see or use it. We needed a graphical browser for that. (Mosaic, in 1993, was the first to catch on.)

For SSI to work, it needs the equivalent of a graphical browser. Maybe it’s a wallet, or maybe it’s something else. (I have an idea; but I want to see how SSI developers respond to this post first.)

The individual’s tool or tools (those equivalents of a browser) also don’t need to have a business model. In fact, it will be best if they don’t.

It should help to remember that Microsoft beat Netscape in the browser business by giving Internet Explorer away while Netscape charged for Navigator. Microsoft did that because they knew a free browser would be generative. It also helped that browsers were substitutable, meaning you could choose among many different ones.

What you look for here are because effects. That’s when you make money because of something rather than with it. Examples are the open protocols and standards beneath the Internet and the Web, free and open source code, and patents (such as Ethernet’s) that developers are left free to ignore.

If we don’t get that tool (whatever we call it), and SSI remains mostly a B2B thing, it’s doomed to niches at best.

I can’t begin to count how many times VRM developers have started out wanting to empower individuals and have ended up selling corporate services to companies, because that’s all they could imagine or sell—or that investors wanted. Let’s not let that happen here.

Let’s give people the equivalent of a browser, and then watch SSI truly succeed.

We’re not data. We’re digital. Let’s research that.

The University of Chicago Press’  summary  of How We Became Our Data says author Colin Koopman

excavates early moments of our rapidly accelerating data-tracking technologies and their consequences for how we think of and express our selfhood today. Koopman explores the emergence of mass-scale record keeping systems like birth certificates and social security numbers, as well as new data techniques for categorizing personality traits, measuring intelligence, and even racializing subjects. This all culminates in what Koopman calls the “informational person” and the “informational power” we are now subject to. The recent explosion of digital technologies that are turning us into a series of algorithmic data points is shown to have a deeper and more turbulent past than we commonly think.

Got that? Good.

Now go over to the book’s Amazon page, do the “look inside” thing and then go to the chapter titled “Redesign: Data’s Turbulent Pasts and Future Paths” (p. 173) and read forward through the next two pages (which is all it allows). In that chapter, Koopman begins to develop “the argument that information politics is separate from communicative politics.” My point with this is that politics are his frames (or what he calls “embankments”) in both cases.

Now take three minutes for A Smart Home Neighborhood: Residents Find It Enjoyably Convenient Or A Bit Creepy, which ran on NPR one recent morning. It’s about a neighborhood of Amazon “smart homes” in a Seattle suburb. Both the homes and the neighborhood are thick with convenience, absent of privacy, and reliant on surveillance—both by Amazon and by smart homes’ residents.  In the segment, a guy with the investment arm of the National Association of Realtors says, “There’s a new narrative when it comes to what a home means.” The reporter enlarges on this: “It means a personalized environment where technology responds to your every need. Maybe it means giving up some privacy. These families are trying out that compromise.” In one case the teenage daughter relies on Amazon as her “butler,” while her mother walks home on the side of the street without Amazon doorbells, which have cameras and microphones, so she can escape near-ubiquitous surveillance in her smart ‘hood.

Let’s visit three additional pieces. (And stay with me. There’s a call to action here, and I’m making a case for it.)

First, About face, a blog post of mine that visits the issue of facial recognition by computers. Like the smart home, facial recognition is a technology that is useful both for powerful forces outside of ourselves—and for ourselves. (As, for example, in the Amazon smart home.) To limit the former (surveillance by companies), it typically seems we need to rely on what academics and bureaucrats blandly call policy (meaning public policy: principally lawmaking and regulation).

As this case goes, the only way to halt or slow surveillance of individuals  by companies is to rely on governments that are also incentivized (to speed up passport lines, solve crimes, fight terrorism, protect children, etc.)  to know as completely as possible what makes each of us unique human beings: our faces, our fingerprints, our voices, the veins in our hands, the irises of our eyes. It’s hard to find a bigger hairball of conflicting interests and surely awful outcomes.

Second, What does the Internet make of us, where I conclude with this:

My wife likens the experience of being “on” the Internet to one of weightlessness. Because the Internet is not a thing, and has no gravity. There’s no “there” there. In adjusting to this, our species has around two decades of experience so far, and only about one decade of doing it on smartphones, most of which we will have replaced two years from now. (Some because the new ones will do 5G, which looks to be yet another way we’ll be captured by phone companies that never liked or understood the Internet in the first place.)

But meanwhile we are not the same. We are digital beings now, and we are being made by digital technology and the Internet. No less human, but a lot more connected to each other—and to things that not only augment and expand our capacities in the world, but replace and undermine them as well, in ways we are only beginning to learn.

Third, Mark Stahlman’s The End of Memes or McLuhan 101, in which he suggests figure/ground and formal cause as bigger and deeper ways to frame what’s going on here.  As Mark sees it (via those two frames), the Big Issues we tend to focus on—data, surveillance, politics, memes, stories—are figures on a ground that formally causes all of their forms. (The form in formal cause is the verb to form.) And that ground is digital technology itself. Without digital tech, we would have little or none of the issues so vexing us today.

The powers of digital tech are like those of speech, tool-making, writing, printing, rail transport, mass production, electricity, automobiles, radio and television. As Marshall McLuhan put it (in The Medium is the Massage), each new technology is a cause that “works us over completely” while it’s busy forming and re-forming us and our world.

McLuhan also teaches that each new technology retrieves what remains useful about the technologies it obsolesces. Thus writing retrieved speech, printing retrieved writing, radio retrieved both, and TV retrieved radio. Each new form was again a formal cause of the good and bad stuff that worked over people and their changed worlds. (In modern tech parlance, we’d call the actions of formal cause “disruptive.”)

Digital tech, however, is less disruptive and world-changing than it is world-making. In other words, it is about as massively formal (as in formative) as tech can get. And it’s as hard to make sense of this virtual world as it is to sense roundness in the flat horizons of our physical one. It’s also too easy to fall for the misdirections inherent in all effects of formal causes. For example, it’s much easier to talk about Trump than about what made him possible. Think about it: absent digital tech, would we have had Trump? Or even Obama? McLuhan’s blunt perspective may help. “People,” he said, “do not want to know why radio caused Hitler and Gandhi alike.”

So here’s where I am now on all this:

  1. We have not become data. We have become digital, while remaining no less physical. And we can’t understand what that means if we focus only on data. Data is more effect than cause.
  2. Politics in digital conditions is almost pure effect, and those effects misdirect our attention away from digital as a formal cause. To be fair, it is as hard for us to get distance on digital as it is for a fish to get distance on water. (David Foster Wallace to the Kenyon College graduating class of 2005: Greetings parents and congratulations to Kenyon’s graduating class of 2005. There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”)
  3. Looking to policy for cures to digital ills is both unavoidable and sure to produce unintended consequences. For an example of both, look no farther than the GDPR. In effect (so far), it has demoted human beings to mere “data subjects,” located nearly all agency with “data controllers” and “data processors,” has done little to thwart unwelcome surveillance, and has caused boundlessly numerous, insincere and misleading “cookie notices”—almost all of which are designed to obtain “consent” to what the regulation was meant to stop. In the process it has also called into being monstrous new legal and technical enterprises, both satisfying business market demand for ways to obey the letter of the GDPR while violating its spirit. (Note: there is still hope for applying the GDPR. But let’s get real: demand in the world of sites and services for violating the GDPR’s spirit, and for persisting in the practice of surveillance capitalism, far exceeds demand for compliance and true privacy-respecting behavior. Again, so far.)
  4. Power is moving to the edge. That’s us. Yes, there is massive concentration of power and money in the hands of giant companies on which we have become terribly dependent. But there are operative failure modes in all those companies, and digital tech remains ours no less than theirs.

I could make that list a lot longer, but that’s enough for my main purpose here, which is to raise the topic of research.

ProjectVRM was conceived in the first place as a development and research effort. As a Berkman Klein Center project, in fact, it has something of an obligation to either do research, or to participate in it.

We’ve encouraged development for thirteen years. Now some of that work is drifting over to the Me2B Alliance, which has good leadership, funding and participation. There is also good energy in the IEEE 7012 working group and Customer Commons, both of which owe much to ProjectVRM.

So perhaps now is a good time to at least start talking about research. Two possible topics: facial recognition and smart homes. Anyone game?


What turns out to be a draft version of this post ran on the ProjectVRM list. If you’d like to help, please subscribe and join in on that link. Thanks.

Personal scale

Way back in 1995, when our family was still new to the Web, my wife asked a question that is one of the big reasons I started ProjectVRM: Why can’t I take my own shopping cart from one site to another?

The bad but true answer is that every site wants you to use their shopping cart. The good but not-yet-true answer is that nobody has invented it yet. By that I mean: not  a truly personal one, based on open standards that make it possible for lots of developers to compete at making the best personal shopping cart for you.

Think about what you might be able to do with a PSC (Personal Shopping Cart) online that you can’t do with a physical one offline (a rough sketch of such a cart follows the list):

  • Take it from store to store, just as you do with your browser. This should go without saying, but it’s still worth repeating, because it would be way cool.
  • Have a list of everything parked already in your carts within each store.
  • Know what prices have changed, or are about to change, for the products in your carts in each store.
  • Notify every retailer you trust that you intend to buy X, Y or Z, with restrictions (meaning your terms and conditions) on the use of that information, and in a way that will let you know if those restrictions are violated. This is called intentcasting, and there are a pile of companies already in that business.
  • Have a way to change your name and other contact information, for all the stores you deal with, in one move.
  • Control your subscriptions to each store’s emailings and promotional materials.
  • Have your own way to express genuine loyalty, rather than suffering with as many coercive and goofy “loyalty programs” as there are companies.
  • Have a standard way to share your experiences with the companies that make and sell the products you’ve bought, and to suggest improvements—and for those companies to share back updates and improvements you should know about.
  • Have wallets of your own, rather than only those provided by platforms.
  • Connect to your collection of receipts, instruction manuals and other relevant information for all the stuff you’ve already bought or currently rent. (Note that this collection is for the Internet of your things—one you control for yourself, and is not a set of suction cups on corporate tentacles.)
  • Have your own standard way to call for service or support, for stuff you’ve bought or rented, rather than suffering with as many different ways to do that as there are companies you’ve engaged.
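Here is the rough sketch promised above: one way a personal shopping cart might be represented as data held on the customer’s side, with per-store sections and the person’s own terms attached. Every name and field here is hypothetical; the point is only that the record belongs to the individual, not to any one store.

```typescript
// A hypothetical personal shopping cart (PSC): held by the customer, spanning
// stores, and carrying the customer's own terms. No such standard exists yet;
// all names are illustrative.
interface PersonalShoppingCart {
  owner: string;                                        // the individual, not an account at any store
  terms: { noTracking: boolean; shareEmail: boolean };  // the customer's conditions for engagement
  stores: {
    [store: string]: {
      items: { sku: string; name: string; lastSeenPrice: number }[];
      emailSubscription: boolean;                       // one switch per store, controlled here
    };
  };
}

const myCart: PersonalShoppingCart = {
  owner: "me",
  terms: { noTracking: true, shareEmail: false },
  stores: {
    "hardware-store.example": {
      items: [{ sku: "D-1234", name: "cordless drill", lastSeenPrice: 129 }],
      emailSubscription: false,
    },
  },
};
```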

All of these things are Me2B, and will give each of us scale, much as the standards that make the Internet, browsers and email all give us scale. And that scale will be just as good for the companies we deal with as are the Internet, browsers and email.

If you think “none of the stores out there will want any of this, because they won’t control it,” think about what personal operating systems and browsers on every device have already done for stores by making the customer interface standard. What we’re talking about here is enlarging that interface.

I’d love to see if there is any economics research and/or scholarship on personal scale and the leverage it gives us in the digital world (such as personal operating systems, devices and browsers already provide). Because it’s a case that needs to be made.

Of course, there’s money to be made as well, because there will be so many more, better and standard ways for companies to deal with customers than current tools (including email, apps and browsers) can provide by themselves.
