
Markets vs. Marketing in the Age of AI

Maybe history will defeat itself.

Remember FreePC? It was a thing, briefly, at the end of the last millennium, right before Y2K pooped the biggest excuse for a party in a thousand years. This may help. The idea was to put ads in the corner of your PC’s screen. The market gave it zero stars, and it failed.

And now comes Telly, hawking free TVs with ads in a corner, and a promise to “optimize your ad experience.” As if anybody wants an ad experience other than no advertising at all.

Negative demand for advertising has been well advertised by both ad blocking (the biggest boycott in human history) and ad-free “prestige” TV (or SVOD, for subscription video on demand). With those we gladly pay—a lot—not to see advertising. (See numbers here.)

But the advertising business (in the mines of which I toiled for too much of my adult life) has always smoked its own exhaust and excels best at getting high with generous funders. (Yeah, some advertising works, but on the whole people still hate it on the receiving end.)

The fun will come when our own personal AI bots, working for our own asses, do battle with the robot Nazgûls of marketing — and win, because we’re on the Demand side of the marketplace, and we’ll do a better job of knowing what we want and don’t want to buy than marketing’s surveillant AI robots can guess at. Supply will survive, of course. But markets will defeat marketing by taking out the middle creep.

The end state will be one Cluetrain forecast in 1999, Linux Journal named in 2006, the VRM community started working on that same year, and The Intention Economy detailed in 2012. The only thing all of them missed was how customer intentions might be helped by personal AI.

Personal.* Not personalized.

Markets will become new and better dances between Demand and Supply, simply because Demand will have better ways to take the lead, and not just follow all the time. Simple as that.


*For more on how this will work, see Individual Empowerment and Agency on a Scale We’ve Never Seen Before.

Let’s zero-base zero-party data

Forrester Research has gifted marketing with a hot buzzphrase: zero-party data, which they define as “data that a customer intentionally and proactively shares with a brand, which can include preference center data, purchase intentions, personal context, and how the individual wants the brand to recognize her.”

Salesforce, the CRM giant (that’s now famously buying Slack), is ambitious about the topic, and how it can “fuel your personalized marketing efforts.” The second person you is Salesforce’s corporate customer.

It’s important to unpack what Salesforce says about that fuel, because Salesforce is a tech giant that fully matters. So here’s text from that last link. I’ll respond to it in chunks. (Note that zero-, first- and third-party data are all about you, no matter who they’re from.)

What is zero-party data?

Before we define zero-party data, let’s back up a little and look at some of the other types of data that drive personalized experiences.

First-party data: In the context of personalization, we’re often talking about first-party behavioral data, which encompasses an individual’s site-wide, app-wide, and on-page behaviors. This also includes the person’s clicks and in-depth behavior (such as hovering, scrolling, and active time spent), session context, and how that person engages with personalized experiences. With first-party data, you glean valuable indicators into an individual’s interests and intent. Transactional data, such as purchases and downloads, is considered first-party data, too.

Third-party data: Obtained or purchased from sites and sources that aren’t your own, third-party data used in personalization typically includes demographic information, firmographic data, buying signals (e.g., in the market for a new home or new software), and additional information from CRM, POS, and call center systems.

Zero-party data, a term coined by Forrester Research, is also referred to as explicit data.

They then go on to quote Forrester’s definition, substituting “[them]” for “her.”

The first party in that definition is the site harvesting “behavioral” data about the individual. (It doesn’t square with the legal profession’s understanding of the term, so if you know that one, try not to be confused.)

It continues,

Why is zero-party data important?

Forrester’s Fatemeh Khatibloo, VP principal analyst, notes in a video interview with Wayin (now Cheetah Digital) that zero-party data “is gold. … When a customer trusts a brand enough to provide this really meaningful data, it means that the brand doesn’t have to go off and infer what the customer wants or what [their] intentions are.”

Sure. But what if the customer has her own way to be a precious commodity to a brand—one she can use at scale with all the brands she deals with? I’ll unpack that question shortly.

There’s the privacy factor to keep in mind too, another reason why zero-party data – in enabling and encouraging individuals to willingly provide information and validate their intent – is becoming a more important part of the personalization data mix.

Two things here.

First, again, individuals need their own ways to protect their privacy and project their intentions about it.

Second, having as many ways for brands to “enable and encourage” disclosure of private information as there are brands to provide them is hugely inefficient and annoying. But that is what Salesforce is selling here.

As industry regulations such as GDPR and the CCPA put a heightened focus on safeguarding consumer privacy, and as more browsers move to phase out third-party cookies and allow users to easily opt out of being tracked, marketers are placing a greater premium and reliance on data that their audiences knowingly and voluntarily give them.

Not if the way they “knowingly and voluntarily” agree to be tracked is by clicking “AGREE” on website home page popovers. Those only give those sites ways to adhere to the letter of the GDPR and the CCPA while also violating those laws’ spirit.

Experts also agree that zero-party data is more definitive and trustworthy than other forms of data since it’s coming straight from the source. And while that’s not to say all people self-report accurately (web forms often show a large number of visitors are accountants, by profession, which is the first field in the drop-down menu), zero-party data is still considered a very timely and reliable basis for personalization.

Self-reporting will be a lot more accurate if people have real relationships with brands, rather (again) than ones that are “enabled and encouraged” in each brand’s own separate way.

Here is a framework by which that can be done. Phil Windley provides some cool detail for operationalizing the whole thing here, here, here and here.

Even if the countless separate ways are provided by one company (e.g. Salesforce),  every brand will use those ways differently, giving each brand scale across many customers, but giving those customers no scale across many companies. If we want that kind of scale, dig into the links in the paragraph above.

With great data comes great responsibility.

You’re not getting something for nothing with zero-party data. When customers and prospects give and entrust you with their data, you need to provide value right away in return. This could take the form of: “We’d love you to take this quick survey, so we can serve you with the right products and offers.”

But don’t let the data fall into the void. If you don’t listen and respond, it can be detrimental to your cause. It’s important to honor the implied promise to follow up. As a basic example, if you ask a site visitor: “Which color do you prefer – red or blue?” and they choose red, you don’t want to then say, “Ok, here’s a blue website.” Today, two weeks from now, and until they tell or show you differently, the website’s color scheme should be red for that person.

While this example is simplistic, the concept can be applied to personalizing content, product recommendations, and other aspects of digital experiences to map to individuals’ stated preferences.

This, and what follows in that Salesforce post, is a pitch for brands to play nice and use surveys and stuff like that to coax private information out of customers. It’s nice as far as it goes, but it gives no agency to customers—you and me—beyond what we can do inside each company’s CRM silo.

So here are some questions that might be helpful:

  • What if the customer shows up as somebody who already likes red and is ready to say so to trusted brands? Or, better yet, if the customer arrives with a verifiable claim that she is already a customer, or that she has good credit, or that she is ready to buy something?
  • What if she has her own way of expressing loyalty, and that way is far more genuine, interesting and valuable to the brand than the company’s current loyalty system, which is full of gimmicks, forms of coercion, and operational overhead?
  • What if the customer carries her own privacy policy and terms of engagement (ones that actually protect the privacy of both the customer and the brand, if the brand agrees to them)?

All those scenarios yield highly valuable zero-party data. Better yet, they yield real relationships with values far above zero.
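To make that less abstract, here is a rough sketch, in Python, of what such customer-carried zero-party data might look like: a small bundle of claims the customer asserts on her own terms, with an integrity check the brand can verify. Everything in it (the did:example identifier, the terms URL, the field names) is illustrative, not any existing standard, and a real system would use proper digital signatures rather than a bare hash.

```python
# Illustrative sketch only: a customer-asserted claim bundle, offered on the
# customer's own terms. Field names, identifiers and URLs are made up.

import json
import hashlib
from datetime import datetime, timezone

def make_claim(subject_id: str, claims: dict, terms_url: str) -> dict:
    """Bundle claims the customer chooses to share, under her own terms."""
    body = {
        "subject": subject_id,          # pseudonymous customer identifier
        "claims": claims,               # what she chooses to assert
        "terms": terms_url,             # her terms of engagement, not the brand's
        "issued": datetime.now(timezone.utc).isoformat(),
    }
    # Placeholder integrity check; a real system would use a digital signature.
    body["digest"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

offer = make_claim(
    "did:example:alice",
    {"existing_customer": True, "ready_to_buy": "winter boots", "preferred_color": "red"},
    "https://example.org/customer-terms",   # illustrative URL
)
print(json.dumps(offer, indent=2))
```

The point is the direction of travel: the customer brings the data and the terms, and the brand verifies rather than infers.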

Those questions suggest just a few of the places we can go if we zero-base customer relationships outside standing CRM systems: out in the open market where customers want to be free, independent, and able to deal with many brands with tools and services of their own, through their own CRM-friendly VRM—Vendor Relationship Management—tools.

VRM reaching out to CRM implies (and will create)  a much larger middle market space than the closed and private markets isolated inside every brand’s separate CRM system.

We’re working toward that. See here.

 

We’re not data. We’re digital. Let’s research that.

The University of Chicago Press’ summary of How We Became Our Data says author Colin Koopman

excavates early moments of our rapidly accelerating data-tracking technologies and their consequences for how we think of and express our selfhood today. Koopman explores the emergence of mass-scale record keeping systems like birth certificates and social security numbers, as well as new data techniques for categorizing personality traits, measuring intelligence, and even racializing subjects. This all culminates in what Koopman calls the “informational person” and the “informational power” we are now subject to. The recent explosion of digital technologies that are turning us into a series of algorithmic data points is shown to have a deeper and more turbulent past than we commonly think.

Got that? Good.

Now go over to the book’s Amazon page, do the “look inside” thing and then go to the chapter titled “Redesign: Data’s Turbulent Pasts and Future Paths” (p. 173) and read forward through the next two pages (which is all it allows). In that chapter, Koopman begins to develop “the argument that information politics is separate from communicative politics.” My point with this is that politics are his frames (or what he calls “embankments”) in both cases.

Now take three minutes for A Smart Home Neighborhood: Residents Find It Enjoyably Convenient Or A Bit Creepy, which ran on NPR one recent morning. It’s about a neighborhood of Amazon “smart homes” in a Seattle suburb. Both the homes and the neighborhood are thick with convenience, absent of privacy, and reliant on surveillance—both by Amazon and by smart homes’ residents.  In the segment, a guy with the investment arm of the National Association of Realtors says, “There’s a new narrative when it comes to what a home means.” The reporter enlarges on this: “It means a personalized environment where technology responds to your every need. Maybe it means giving up some privacy. These families are trying out that compromise.” In one case the teenage daughter relies on Amazon as her “butler,” while her mother walks home on the side of the street without Amazon doorbells, which have cameras and microphones, so she can escape near-ubiquitous surveillance in her smart ‘hood.

Let’s visit three additional pieces. (And stay with me. There’s a call to action here, and I’m making a case for it.)

First, About face, a blog post of mine that visits the issue of facial recognition by computers. Like the smart home, facial recognition is a technology that is useful both for powerful forces outside of ourselves—and for ourselves. (As, for example, in the Amazon smart home.) To limit the former (surveillance by companies), it typically seems we need to rely on what academics and bureaucrats blandly call policy (meaning public policy: principally lawmaking and regulation).

As this case goes, the only way to halt or slow surveillance of individuals  by companies is to rely on governments that are also incentivized (to speed up passport lines, solve crimes, fight terrorism, protect children, etc.)  to know as completely as possible what makes each of us unique human beings: our faces, our fingerprints, our voices, the veins in our hands, the irises of our eyes. It’s hard to find a bigger hairball of conflicting interests and surely awful outcomes.

Second, What does the Internet make of us, where I conclude with this:

My wife likens the experience of being “on” the Internet to one of weightlessness. Because the Internet is not a thing, and has no gravity. There’s no “there” there. In adjusting to this, our species has around two decades of experience so far, and only about one decade of doing it on smartphones, most of which we will have replaced two years from now. (Some because the new ones will do 5G, which looks to be yet another way we’ll be captured by phone companies that never liked or understood the Internet in the first place.)

But meanwhile we are not the same. We are digital beings now, and we are being made by digital technology and the Internet. No less human, but a lot more connected to each other—and to things that not only augment and expand our capacities in the world, but replace and undermine them as well, in ways we are only beginning to learn.

Third, Mark Stahlman’s The End of Memes or McLuhan 101, in which he suggests figure/ground and formal cause as bigger and deeper ways to frame what’s going on here.  As Mark sees it (via those two frames), the Big Issues we tend to focus on—data, surveillance, politics, memes, stories—are figures on a ground that formally causes all of their forms. (The form in formal cause is the verb to form.) And that ground is digital technology itself. Without digital tech, we would have little or none of the issues so vexing us today.

The powers of digital tech are like those of speech, tool-making, writing, printing, rail transport, mass production, electricity, automobiles, radio and television. As Marshall McLuhan put it (in The Medium is the Massage), each new technology is a cause that “works us over completely” while it’s busy forming and re-forming us and our world.

McLuhan also teaches that each new technology retrieves what remains useful about the technologies it obsolesces. Thus writing retrieved speech, printing retrieved writing, radio retrieved both, and TV retrieved radio. Each new form was again a formal cause of the good and bad stuff that worked over people and their changed worlds. (In modern tech parlance, we’d call the actions of formal cause “disruptive.”)

Digital tech, however, is less disruptive and world-changing than it is world-making. In other words, it is about as massively formal (as in formative) as tech can get. And it’s as hard to make sense of this virtual world as it is to sense roundness in the flat horizons of our physical one. It’s also too easy to fall for the misdirections inherent in all effects of formal causes. For example, it’s much easier to talk about Trump than about what made him possible. Think about it: absent digital tech, would we have had Trump? Or even Obama? McLuhan’s blunt perspective may help. “People,” he said, “do not want to know why radio caused Hitler and Gandhi alike.”

So here’s where I am now on all this:

  1. We have not become data. We have become digital, while remaining no less physical. And we can’t understand what that means if we focus only on data. Data is more effect than cause.
  2. Politics in digital conditions is almost pure effect, and those effects misdirect our attention away from digital as a formal cause. To be fair, it is as hard for us to get distance on digital as it is for a fish to get distance on water. (David Foster Wallace to the Kenyon College graduating class of 2005: Greetings parents and congratulations to Kenyon’s graduating class of 2005. There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”)
  3. Looking to policy for cures to digital ills is both unavoidable and sure to produce unintended consequences. For an example of both, look no farther than the GDPR. In effect (so far), it has demoted human beings to mere “data subjects,” located nearly all agency with “data controllers” and “data processors,” done little to thwart unwelcome surveillance, and caused boundlessly numerous, insincere and misleading “cookie notices”—almost all of which are designed to obtain “consent” to what the regulation was meant to stop. In the process it has also called into being monstrous new legal and technical enterprises, both satisfying business demand for ways to obey the letter of the GDPR while violating its spirit. (Note: there is still hope for applying the GDPR. But let’s get real: demand in the world of sites and services for violating the GDPR’s spirit, and for persisting in the practice of surveillance capitalism, far exceeds demand for compliance and true privacy-respecting behavior. Again, so far.)
  4. Power is moving to the edge. That’s us. Yes, there is massive concentration of power and money in the hands of giant companies on which we have become terribly dependent. But there are operative failure modes in all those companies, and digital tech remains ours no less than theirs.

I could make that list a lot longer, but that’s enough for my main purpose here, which is to raise the topic of research.

ProjectVRM was conceived in the first place as a development and research effort. As a Berkman Klein Center project, in fact, it has something of an obligation to either do research, or to participate in it.

We’ve encouraged development for thirteen years. Now some of that work is drifting over to the Me2B Alliance, which has good leadership, funding and participation. There is also good energy in the IEEE 7012 working group and Customer Commons, both of which owe much to ProjectVRM.

So perhaps now is a good time to at least start talking about research. Two possible topics: facial recognition and smart homes. Anyone game?


What turns out to be a draft version of this post ran on the ProjectVRM list. If you’d like to help, please subscribe and join in on that link. Thanks.

The Wurst of the Web

Don’t think about what’s wrong on the Web. Think about what pays for it. Better yet, look at it.

Start by installing Privacy Badger in your browser. Then look at what it tells you about every site you visit. With very few exceptions (e.g. Internet Archive and Wikipedia), all are putting tracking beacons (the wurst cookie flavor) in your browser. These then announce your presence to many third parties, mostly unknown and all unseen, at nearly every subsequent site you visit, so you can be followed and profiled and advertised at. And your profile might be used for purposes other than advertising. There’s no way to tell.
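If you want to see a slice of that machinery without a browser extension, here is a rough Python sketch that lists the third-party hosts a page asks your browser to call via its script, image and iframe tags. It misses plenty of what Privacy Badger sees (XHR and fetch calls, cookies, fingerprinting), so treat it as an understatement, not an audit:

```python
# Rough sketch: list third-party hosts referenced by a page's <script>, <img>
# and <iframe> tags. This misses XHR/fetch calls, cookies and fingerprinting,
# so it understates what Privacy Badger actually sees.

import sys
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urlparse, urljoin

class SrcCollector(HTMLParser):
    """Collects src attributes from tags that commonly carry trackers."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            for name, value in attrs:
                if name == "src" and value:
                    self.srcs.append(value)

def third_party_hosts(page_url: str) -> set:
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    collector = SrcCollector()
    collector.feed(html)
    first_party = urlparse(page_url).hostname
    hosts = {urlparse(urljoin(page_url, src)).hostname for src in collector.srcs}
    return {h for h in hosts if h and h != first_party}

if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "https://example.com/"
    for host in sorted(third_party_hosts(url)):
        print(host)
```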

This practice—tracking people without their invitation or knowledge—is at the dark heart and sold soul of what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity. (The italicized links go to books on the topic, both of which came out in the last year. Buy them.)

While that system’s business is innocuously and misleadingly called advertising, the surveilling part of it is called adtech. The most direct ancestor of adtech is not old-fashioned brand advertising. It’s direct marketing, best known as junk mail. (I explain the difference in Separating Advertising’s Wheat and Chaff.)

In the online world, brand advertising and adtech look the same, but underneath they are as different as bread and dirt. While brand advertising is aimed at broad populations and sponsors media it considers worthwhile, adtech does neither. Like junk mail, adtech wants to be personal, wants a direct response, and ignores massive negative externalities. It also uses media to mark, track and advertise at eyeballs, wherever those eyeballs might show up. (This is how, for example, a Wall Street Journal reader’s eyeballs get shot with an ad for, say, Warby Parker, on Breitbart.) So adtech follows people, profiles them, and adjusts its offerings to maximize engagement, meaning getting a click. It also works constantly to put better crosshairs on the brains of its human targets; and it does this for both advertisers and other entities interested in influencing people. (For example, to swing an election.)

For most reporters covering this, the main objects of interest are the two biggest advertising intermediaries in the world: Facebook and Google. That’s understandable, but they’re just the tip of the wurstberg.  Also, in the case of Facebook, it’s quite possible that it can’t fix itself. See here:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is comprised of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting and amplify tribal prejudices (including genocidal ones)—besides whatever good it does for users and advertisers.

The hard work here is solving the problems that corrupted Facebook so thoroughly, and are doing the same to all the media that depend on surveillance capitalism to re-engineer us all.

Meanwhile, because lawmaking is moving apace in any case, we should also come up with model laws and regulations that insist on respect for private spaces online. The browser is a private space, so let’s start there.

Here’s one constructive suggestion: get the browser makers to meet next month at IIW, an unconference that convenes twice a year at the Computer History Museum in Silicon Valley, and work this out.

Ann Cavoukian (@AnnCavoukian) got things going on the organizational side with Privacy By Design, which is now also embodied in the GDPR. She has also made clear that the same principles should apply on the individual’s side.  So let’s call the challenge there Privacy By Default. And let’s have it work the same in all browsers.

I think it’s really pretty simple: the default is no. If we want to be tracked for targeted advertising or other marketing purposes, we should have ways to opt into that. But not some modification of the ways we have now, where every @#$%& website has its own methods, policies and terms, none of which we can track or audit. That is broken beyond repair and needs to be pushed off a cliff.

Among the capabilities we need on our side are 1) knowing what we have opted into, and 2) ways to audit what is done with information we have given to organizations, or has been gleaned about us in the course of our actions in the digital world. Until we have ways of doing both,  we need to zero-base the way targeted advertising and marketing is done in the digital world. Because spying on people without an invitation or a court order is just as wrong in the digital world as it is in the natural one. And you don’t need spying to target.
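As a thumbnail of what “the default is no” could mean in code, here is a minimal sketch of a customer-side opt-in registry: anything not explicitly granted stays denied, and the record itself is the audit trail. The names are mine, not any browser’s or any standard’s:

```python
# Minimal sketch of a customer-side opt-in registry. Privacy by default:
# anything not explicitly granted here is denied. Names are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class OptIn:
    party: str      # who was given permission
    purpose: str    # what the data may be used for
    granted: str    # when the permission was given

@dataclass
class ConsentRegistry:
    opt_ins: list = field(default_factory=list)

    def grant(self, party: str, purpose: str) -> None:
        self.opt_ins.append(OptIn(party, purpose, datetime.now(timezone.utc).isoformat()))

    def allowed(self, party: str, purpose: str) -> bool:
        # The default answer is no; only an explicit, recorded grant says yes.
        return any(o.party == party and o.purpose == purpose for o in self.opt_ins)

registry = ConsentRegistry()
registry.grant("shoes.example", "order follow-up")
print(registry.allowed("shoes.example", "order follow-up"))   # True
print(registry.allowed("adtech.example", "behavioral ads"))   # False: never opted in
```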

And don’t worry about lost business. There are many larger markets to be made on the other side of that line in the sand than we have right now in a world where more than 2 billion people block ads, and among the reasons they give are “Ads might compromise my online privacy,” and “Stop ads being personalized.”

Those markets will be larger because incentives will be aligned around customer agency. And they’ll want a lot more from the market’s supply side than surveillance-based sausage, looking for clicks.

Weighings

A few years ago I got a Withings bathroom scale: one that knows it’s me, and records my weight, body mass index and fat percentage on a graph, updated over wi-fi. The graph was in a Withings cloud.

I got it because I liked the product (still do, even though it now just tells me my weight and BMI), and because I trusted Withings, a French company subject to French privacy law, meaning it would store my data in a safe place accessible only to me, and not look inside. Or so I thought.

Here’s the privacy policy, and here are the terms of use, both retrieved from Archive.org. (Same goes for the link in the last paragraph and the image above.)

Then, in 2016, the company was acquired by Nokia and morphed into Nokia Health. Sometime after that, I started to get these:

This told me Nokia Health was watching my weight, which I didn’t like or appreciate. But I wasn’t surprised, since Withings’ original privacy policy featured the lack of assurance long customary to one-sided contracts of adhesion that have been pro forma on the Web since commercial activity exploded there in 1995: “The Service Provider reserves the right to modify all or part of the Service’s Privacy Rules without notice. Use of the Service by the User constitutes full and complete acceptance of any changes made to these Privacy Rules.” (The exact same language appears in the original terms of use.)

Still, I was too busy with other stuff to care more about it until I got this from community@email.health.nokia two days ago:

Here’s the announcement at the “learn more” link. Sounded encouraging.

So I dug a bit and saw that Nokia in May planned to sell its Health division to Withings co-founder Éric Carreel (@ecaeca).

Thinking that perhaps Withings would welcome some feedback from a customer, I wrote this in a customer service form:

One big reason I bought my Withings scale was to monitor my own weight, by myself. As I recall the promise from Withings was that my data would remain known only to me (though Withings would store it). Since then I have received many robotic emailings telling me my weight and offering encouragements. This annoys me, and I would like my data to be exclusively my own again — and for that to be among Withings’ enticements to buy the company’s products. Thank you.

Here’s the response I got back, by email:

Hi,

Thank you for contacting Nokia Customer Support about monitoring your own weight. I’ll be glad to help.

Following your request to remove your email address from our mailing lists, and in accordance with data privacy laws, we have created an interface which allows our customers to manage their email preferences and easily opt-out from receiving emails from us. To access this interface, please follow the link below:

Obviously, the person there didn’t understand what I said.

So I’m saying it here. And on Twitter.

What I’m hoping isn’t for Withings to make a minor correction for one customer, but rather that Éric & Withings enter a dialog with the @VRM community and @CustomerCommons about a different approach to #GDPR compliance: one at the end of which Withings might pioneer agreeing to customers’ friendly terms and conditions, such as those starting to appear at Customer Commons.

GDPR Hack Day at MIT

Our challenge in the near term is to make the GDPR work for us “data subjects” as well as for the “data processors” and “data controllers” of the world—and to start making it work before the GDPR’s “sunrise” on May 25th. That’s when the EU can start laying fines—big ones—on those data processors and controllers, but not on us mere subjects. After all, we’re the ones the GDPR protects.

Ah, but we can also bring some relief to those processors and controllers, by automating, in a way, our own consent to good behavior on their part, using a consent cookie of our own baking. That’s what we started working on at IIW on April 5th. Here’s the whiteboard:

Here are the session notes. And we’ll continue at a GDPR Hack Day, next Thursday, April 26th, at MIT. Read more about it and sign up here. You don’t need to be a hacker to participate.
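For readers wondering what “a consent cookie of our own baking” might even look like, here is a toy sketch, not the design from the IIW session or the hack day: the individual produces a small signed token stating the terms she consents to, and the site verifies and records it instead of asking via a popover. A real scheme would use public-key signatures rather than a shared secret; everything here is illustrative.

```python
# Toy sketch of a customer-baked consent cookie: a small signed token stating
# the terms the individual agrees to. Not the IIW/hack-day design; names and
# the shared-secret signing are illustrative only.

import base64
import hashlib
import hmac
import json

SECRET = b"customer-held-key"   # illustrative; a real scheme would use public-key signatures

def bake_consent_cookie(terms_id: str, scope: list) -> str:
    payload = json.dumps({"terms": terms_id, "scope": scope}, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + signature

def verify_consent_cookie(cookie: str) -> dict:
    encoded, signature = cookie.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise ValueError("consent cookie has been tampered with")
    return json.loads(payload)

cookie = bake_consent_cookie("example-customer-terms-v1", ["session analytics"])
print(verify_consent_cookie(cookie))
```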

Customertech Will Turn the Online Marketplace Into a Marvel-Like Universe in Which All of Us are Enhanced


We’ve been thinking too small.

Specifically, we’ve been thinking about data as if it ought to be something big, when it’s just bits.

Your life in the networked world is no more about data than your body is about cells.

What matters most to us online is agency, not data. Agency is the capacity, condition, or state of acting or of exerting power (Merriam-Webster).

Nearly all the world’s martech and adtech assumes we have no more agency in the marketplace than marketing provides us, which is kind of the way ranchers look at cattle. That’s why bad marketers assume, without irony, that it’s their sole responsibility to provide us with an “experience” on our “journey” down what they call a “funnel.”

What can we do as humans online that isn’t a grace of Apple, Amazon, Facebook or Google?

Marshall McLuhan says every new technology is “an extension of ourselves.” Another of his tenets is “we shape our tools and thereafter our tools shape us.” Thus Customertech—tools for customers—will inevitably enlarge our agency and change us in the process.

For example, with customertech, we can—

Compared to what we have in the offline world, these are superpowers. When customertech gives us these superpowers, the marketplace will become a Marvel-like universe filled with enhanced individuals. Trust me: this will be just as good for business as it will be for each of us.

We can’t get there if all we’re thinking about is data.

By the way, I made this same case to Mozilla in December 2015, on the last day I consulted the company that year. I did it through a talk called Giving Users Superpowers at an all-hands event called Mozlando. I don’t normally use slides, but this time I did, leveraging the very slides Mozilla keynoters showed earlier, which I shot with my phone from the audience. Download the slide deck here, and be sure to view it with the speaker’s notes showing. The advice I give in it is still good.

BTW, a big HT to @SeanBohan for the Superpowers angle, starting with the title (which he gave me) for the Mozlando talk.

 

 

“Disruption” isn’t the whole VRM story


The vast oeuvre of Marshall McLuhan contains a wonderful approach to understanding media called the tetrad (i.e. foursome) of media effects.  You can apply it to anything, from stone tools to robots. McLuhan unpacks it with four questions:

  1. What does the medium enhance?
  2. What does the medium make obsolete?
  3. What does the medium retrieve that had been obsolesced earlier?
  4. What does the medium reverse or flip into when pushed to extremes?

I suggest that VRM—

  1. Enhances CRM
  2. Obsoletes marketing guesswork, especially adtech
  3. Retrieves conversation
  4. Reverses or flips into the bazaar

Note that many answers are possible. That’s why McLuhan poses the tetrad as questions. Very clever and useful.

I bring this up for three reasons:

  1. The tetrad is also helpful for understanding every topic that starts with “disruption.” Because a new medium (or technology) does much more than just disrupt or obsolete an old one—yet not so much more that it can’t be understood inside a framework.
  2. The idea from the start with VRM has never been to disrupt or obsolete CRM, but rather to give it a hand to shake—and a way customers can pull it out of the morass of market-makers (especially adtech) that waste its time, talents and energies.
  3. After ten years of ProjectVRM, we still don’t have a single standardized base VRM medium (e.g. a protocol), even though we have by now hundreds of developers we call VRM in one way or another. Think of this missing medium as a single way, or set of ways, that VRM demand can interact with CRM supply, and give every customer scale across all the companies they deal with. We’ve needed that from the start. But perhaps, with this handy pedagogical tool, we can look through one framework toward both the causes and effects of what we want to make happen.

I expect this framework to be useful at VRM Day (May 1 at the Computer History Museum) and at IIW on the three days that follow there.


Let’s give some @VRM help to the @CFPB

The Consumer Financial Protection Bureau (@CFPB) is looking to help you help them—plus everybody else who uses financial services.

They explain:

Many new financial innovations rely on people choosing to give a company access to their digital financial records held by another company. If you’re using these kinds of services, we’d love to hear from you…

Make your voice heard. Share your comments on Facebook or Twitter. If you want to give us more details, you can share your story with us through our website. To see and respond to the full range of questions we’re interested in learning about, visit our formal Request for Information.

For example,

Services that rely on consumers granting access to their financial records include:

  • Budgeting analysis and advice:  Some tools let people set budgets and analyze their spending activity.  The tools organize your purchases across multiple accounts into categories like food, health care, and entertainment so you can see trends. Some services send a text or email notification when a spending category is close to being over-budget.

  • Product recommendations: Some tools may make recommendations for new financial products based on your financial history. For example, if your records show that you have a lot of ATM fees, a tool might recommend other checking accounts with lower or no ATM fees.

  • Account verification: Many companies need you to verify your identity and bank account information. Access to your financial records can speed that process.

  • Loan applications: Some lenders may access your financial records to confirm your income and other information on your loan application.

  • Automatic or motivational savings: Some companies analyze your records to provide you with automatic savings programs and messages to keep you motivated to save.

  • Bill payment: Some services may collect your bills and help you organize your payments in a timely manner.

  • Fraud and identity theft protection: Some services analyze your records across various accounts to alert you about potentially fraudulent transactions.

  • Investment management: Some services use your account records to help you manage your investments.

A little more about the CFPB:

Our job is to put consumers first and help them take more control over their financial lives. We’re the one federal agency with the sole mission of protecting consumers in the financial marketplace. We want to make sure that consumer financial products and services are helping people rather than harming them.

A hat tip to @GeneKoo (an old Berkman Klein colleague) at the CFPB,  who sees our work with ProjectVRM as especially relevant to what they’re doing.  Of course, we agree. So let’s help them help us, and everybody else in the process.

Some additional links:

The new frontier for CRM is CDL: Customer Driven Leads

Imagine customers diving, on their own, straight down to the bottom of the sales funnel.

Actually, don’t imagine it. Welcome it, because it’s coming, in the form of leads that customers generate themselves, when they’re ready to buy something. Here in the VRM world we call this intentcasting. At the receiving end, in the  CRM world, they’re CDLs, or Customer Driven Leads.

Because CDLs come from fully interested customers with cash in hand, they’re worth more than MQLs (Marketing Qualified Leads) or SQLs (Sales Qualified Leads), both of which need to be baited with marketing into the sales funnel.

CDLs are also free.  When the customer is ready to buy, she signals the market with an intentcast that CRM systems can hear as a fresh CDL. When the CRM system replies, an exchange of data and permissions follows, with the customer taking the lead.

It’s a new dance, this one with the customer taking the lead. But it’s much more direct, efficient and friendly than the old dances in which customers were mere “targets” to be “acquired.”
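To show the shape of that dance, here is a small sketch of an intentcast arriving at a listening CRM and becoming a CDL. This is not JLINC (the protocol described next), just a generic illustration; every name and field in it is made up.

```python
# Generic sketch of intentcasting: a customer broadcasts what she wants to buy,
# and a listening CRM turns matching casts into Customer Driven Leads.
# This is not JLINC; all names and fields are illustrative.

from dataclasses import dataclass

@dataclass
class Intentcast:
    customer: str      # pseudonymous handle until the customer chooses to reveal more
    want: str          # what she is ready to buy
    terms: str         # her terms of engagement
    max_price: float

@dataclass
class CustomerDrivenLead:
    cast: Intentcast
    status: str = "new"   # lands at the bottom of the funnel, no baiting required

class ListeningCRM:
    def __init__(self, catalog: dict):
        self.catalog = catalog   # product -> price
        self.leads = []

    def hear(self, cast: Intentcast) -> None:
        price = self.catalog.get(cast.want)
        if price is not None and price <= cast.max_price:
            self.leads.append(CustomerDrivenLead(cast))

crm = ListeningCRM({"winter boots": 120.0})
crm.hear(Intentcast("did:example:alice", "winter boots", "example-terms-v1", 150.0))
print(len(crm.leads))   # 1
```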

The first protocol-based way to generate CDLs for CRM is described in At last, a protocol to connect VRM and CRM, posted here in August. It’s called JLINC. We’ll be demonstrating it working on a Salesforce system on VRM Day at the Computer History Museum in Silicon Valley, on Monday, October 24. VRM Day is free, but space is limited, so register soon, here.

We’ll also continue to work on CDL development  over the next three days in the same location, at the IIW, the Internet Identity Workshop. IIW is an unconference that’s entirely about getting stuff done. No keynotes, no panels. Just working sessions run by attendees. This next one will be our 23rd IIW since we started them in 2005. It remains, in my humble estimation, the most leveraged conference I know. (And I go to a lot of them, usually as a speaker.)

As an additional temptation, we’re offering a 25% discount on IIW to the next 20 people who register for VRM Day. (And if you’ve already registered, talk to me.)

Iain Henderson, who works with JLINC Labs, will demo CDLs on Salesforce. We also invite all the other CRM companies—IBM, Microsoft Dynamics, SAP, SugarCRM… you know who you are—to show up and participate as well. All CRM systems are programmable. And the level of programming required to hear intentcasts is simple and easy.

See you there!
