Category: Customer Experience (CE) (Page 1 of 3)

Survey Hell

On a scale of one to ten, how do you rate the Customer Experience Management (CEM) business?

I give it a zero.

Have you noticed that every service comes with a bonus survey—one you answer on a phone or fill out on a Web page? And that every one of those surveys is about rating the poor soul you spoke to or chatted with, rather than the company’s own crappy CEM system?

I always say yes to the question “Was your problem resolved?” because I know the human I spoke to will be punished if I say no. Saying yes to that question complies with Don Marti’s tweeted advice: “5 stars for everyone always—never betray a human to the machines.”

The main problem with CEM is that it’s all about getting service to scale across populations by faking interest in human contact. You can see it all through McKinsey’s The CEO Guide to Customer Experience. The customer is always on a “journey” through which a company has “touchpoints.”

Oh please.

IU Health, my primary provider of health services, does a good job on the whole, but one downside is the phone survey that follows up seemingly every interaction I have with a doctor or an assistant of some kind. The survey is always from a robot that says it “will only take a few minutes.” I haven’t counted, but I am sure some of those surveys last longer than the interaction I had with the human who provided the service: an annoyingly looooong touchpoint.

I wrote Why Surveys Suck here, way back in 2007. In it, I wrote, “One way we can gauge the success of VRM is by watching the number of surveys decline.”

Makes me cringe a bit, but I think it’s still true.


The image above was created by Bing Creator and depicts “A hellscape of unhappy people, some on phones and others filling out surveys.”

Coming soon to a radio near you: Personalized ads

And privacy be damned.

See, there is an iron law for every new technology: What can be done will be done. And a corollary that adds, “…until it’s clear what shouldn’t be done.” Let’s call those Stage One and Stage Two.

With respect to safety from surveillance in our cars, we’re at Stage One.

For Exhibit A, read what Ray Schultz says in Can Radio Time Be Bought With Real-Time Bidding? iHeartMedia is Working On It:

iHeartMedia hopes to offer real-time bidding for its 860+ radio stations in 160 markets, enabling media buyers to buy audio ads the way they now buy digital.

“We’re going to have the capabilities to do real-time bidding and programmatic on the broadcast side,” said Rich Bressler, president and COO of iHeart Media, during the Goldman Sachs Communacopia + Technology Conference, according to Radio Insider.

Bressler did not offer specifics or a timeline. He added: “If you look at broadcasters in general, whether they’re video or audio, I don’t think anyone else is going to have those capabilities out there.”

“The ability, whenever it comes, would include data-infused buying, programmatic trading and attribution,” the report adds.

The Trade Desk lists iHeart Media as one of its programmatic audio partners.

Audio advertising allows users to integrate their brands into their audiences’ “everyday routines in a distraction-free environment, creating a uniquely personalized ad experience around their interests,” the Trade Desk says.

The Trade Desk “specializes in real-time programmatic marketing automation technologies, products, and services, designed to personalize digital content delivery to users.” Translation: “We’re in the surveillance business.”

Never mind that there is negative demand for surveillance by the surveilled. Push-back has been going on for decades. Here are 154 pieces I’ve written on the topic since 2008.

One might think radio is ill-suited for surveillance because it’s an offline medium. People listen more to actual radios than to computers or phones. Yes, some listening is online, but not much, relatively speaking. For example, here is the bottom of the current radio ratings for the San Francisco market:

Those numbers are fractions of one percent of total listening in the country’s most streaming-oriented market.

So how are iHeart and The Trade Desk going to personalize radio ads?  Well, here is a meaningful excerpt from iHeart To Offer Real-Time Bidding For Its Broadcast Ad Inventory, which ran earlier this month at Inside Radio:

The biggest challenge at iHeartMedia isn’t attracting new listeners, it’s doing a better job monetizing the sprawling audience it already has. As part of ongoing efforts to sell advertising the way marketers want to transact, it now plans to bring real-time bidding to its 850 broadcast radio stations, top company management said Thursday.

“We’re going to have the capabilities to do real-time bidding and programmatic on the broadcast side,” President and COO Rich Bressler said during an appearance at the Goldman Sachs Communacopia + Technology Conference. “If you look at broadcasters in general, whether they’re video or audio, I don’t think anyone else is going to have those capabilities out there.”

Real-time bidding is a subcategory of programmatic media buying in which ads are bought and sold in real time on a per-impression basis in an instant auction. Pittman and Bressler didn’t offer specifics on how this would be accomplished other than to say the company is currently building out the technology as part of a multi-year effort to allow advertisers to buy iHeart inventory the way they buy digital media advertising. That involves data-infused buying and programmatic trading, along with ad targeting and campaign attribution.
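To make that definition concrete, here is a minimal sketch of a per-impression auction, assuming the second-price rule common in programmatic exchanges. Bidder names, prices, and the one-cent increment are all hypothetical; real exchanges add timeouts, price floors, and fees:

```python
# A minimal sketch of a real-time bidding (RTB) auction: each impression is
# sold in an instant auction, and (under the common second-price rule) the
# winner pays just above the runner-up's bid. All names and prices are
# illustrative, not any exchange's actual API.

def run_auction(bids, floor=0.0):
    """bids: dict of bidder -> CPM bid. Returns (winner, clearing_price) or None."""
    eligible = {b: p for b, p in bids.items() if p >= floor}
    if not eligible:
        return None  # impression goes unsold
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner, _top = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else floor
    return winner, round(runner_up + 0.01, 2)
```

The point of the sketch is only the shape of the transaction: one auction, one impression, milliseconds of deliberation, repeated millions of times.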

Radio’s largest group has also moved away from selling based on rating points to transacting on audience impressions, and migrated from traditional demographics to audiences or cohorts. It now offers advertisers 800 different prepopulated audience segments, ranging from auto intenders to moms that had a baby in the last six months…

Advertisers buy iHeart’s ad inventory “in pieces,” Pittman explained, leaving “holes in between” that go unsold. “Digital-like buying for broadcast radio is the key to filling in those holes,” he added…

…there has been no degradation in the reach of broadcast radio. The degradation has been in a lot of other media, but not radio. And the reason is because what we do is fundamentally more important than it’s ever been: we keep people company.

Buried in that rah-rah is a plan to spy on people in their cars. Because surveillance systems are built into every new car sold. In ‘Privacy Nightmare on Wheels’: Every Car Brand Reviewed By Mozilla — Including Ford, Volkswagen and Toyota — Flunks Privacy Test, Mozilla pulls together a mountain of findings about just how much modern cars spy on their drivers and passengers, and then pass personal information on to many other parties. Here is one relevant screen grab:


As for consent? When you’re using a browser or an app, you’re on the global Internet, where the GDPR, the CCPA, and other privacy laws apply, meaning that websites and apps have to make a show of requiring consent to what you don’t want. But cars have no UI for that. All their computing is behind the dashboard where you can’t see it and can’t control it. So the car makers can go nuts gathering fuck-all, while you’re almost completely in the dark about having your clueless ass sorted into one or more of Bob Pittman’s 800 target categories. Or worse, typified personally as a category of one.

Of course, the car makers won’t cop to any of this. On the contrary, they’ll pretend they are clean as can be. Here is how Mozilla describes the situation:

Many car brands engage in “privacy washing.” Privacy washing is the act of pretending to protect consumers’ privacy while not actually doing so — and many brands are guilty of this. For example, several have signed on to the automotive Consumer Privacy Protection Principles. But these principles are nonbinding and created by the automakers themselves. Further, signatories don’t even follow their own principles, like Data Minimization (i.e. collecting only the data that is needed).

Meaningful consent is nonexistent. Often, “consent” to collect personal data is presumed by simply being a passenger in the car. For example, Subaru states that by being a passenger, you are considered a user — and by being a user, you have consented to their privacy policy. Several car brands also note that it is a driver’s responsibility to tell passengers about the vehicle’s privacy policies.

Autos’ privacy policies and processes are especially bad. Legible privacy policies are uncommon, but they’re exceptionally rare in the automotive industry. Brands like Audi and Tesla feature policies that are confusing, lengthy, and vague. Some brands have more than five different privacy policy documents, an unreasonable number for consumers to engage with; Toyota has 12. Meanwhile, it’s difficult to find a contact with whom to discuss privacy concerns. Indeed, 12 companies representing 20 car brands didn’t even respond to emails from Mozilla researchers.

And, “Nineteen (76%) of the car companies we looked at say they can sell your personal data.”

To iHeart? Why not? They’re in the market.

And, of course, you are not.

Hell, you have access to none of that data. There’s what the dashboard tells you, and that’s it.

As for advice? For now, all I have is this: buy an old car.


The Rise of Robot Retail

From Here Comes the Full Amazonification of Whole Foods, by Cecilia Kang (@CeciliaKang) in The New York Times:

…In less than a minute, I scanned both hands on a kiosk and linked them to my Amazon account. Then I hovered my right palm over the turnstile reader to enter the nation’s most technologically sophisticated grocery store…

Amazon designed my local grocer to be almost completely run by tracking and robotic tools for the first time.

The technology, known as Just Walk Out, consists of hundreds of cameras with a god’s-eye view of customers. Sensors are placed under each apple, carton of oatmeal and boule of multigrain bread. Behind the scenes, deep-learning software analyzes the shopping activity to detect patterns and increase the accuracy of its charges.

The technology is comparable to what’s in driverless cars. It identifies when we lift a product from a shelf, freezer or produce bin; automatically itemizes the goods; and charges us when we leave the store. Anyone with an Amazon account, not just Prime members, can shop this way and skip a cash register since the bill shows up in our Amazon account.
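The checkout flow that excerpt describes (sensors detect each pick-up or put-back, a virtual cart accumulates, leaving the store triggers the charge) can be sketched roughly as follows. The event names and prices are invented for illustration; this is not Amazon's actual system:

```python
# A rough sketch of the "Just Walk Out" flow: sensor events build a virtual
# cart, and the exit event settles the bill. Event names and prices here are
# illustrative, not any real API.
from collections import Counter

def settle(events, prices):
    """events: list of (action, item) tuples; prices: item -> unit price."""
    cart = Counter()
    for action, item in events:
        if action == "take":
            cart[item] += 1
        elif action == "put_back" and cart[item] > 0:
            cart[item] -= 1
    # On exit, the accumulated cart becomes the charge.
    return sum(prices[i] * n for i, n in cart.items())
```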

And this is just Amazon. Soon it will be every major vendor of everything, most likely with Amazon as the alpha sphincter among all the chokepoints controlled by robotic intermediaries between first sources and final customers—with all of them customizing your choices, your prices, and whatever else it takes to engineer demand in the marketplace—algorithmically, robotically, and most of all, personally.

Some of us will like it, because it’ll be smooth, easy and relatively cheap. It will also subordinate us utterly to machines. Or perhaps udderly, because we will be calves raised to suckle on the teats of retail’s robot cows.

This system can’t be fixed from within. Nor can it be fixed by regulation, though some of that might help. It can only be obsolesced by customers who bring more to the market’s table than cash, credit, appetites and acquiescence to systematic training.

What more?

Start with information. What do we actually want (including, crucially, to not be bothered by hype or manipulated by surveillance systems)?

Add intelligence. What do we know about products, markets, needs, and how things actually work that roboticized systems can’t begin to guess at?

Then add values, such as freedom, choice, agency, care for others, and the ability to collectivize in constructive and helpful ways on our own.

Then add tech. But this has to be our tech: customertech that we bring to market as independent, sovereign and capable human beings. Not just as “users” of others’ systems, or consumers (which Jerry Michalski calls “gullets with wallets and eyeballs”) of whatever producers want to feed us.

Time for solutions. Here is a list of fourteen market problems that can only be solved from the customers’ side.

And yes, we do need help from the sellers’ side. But not with promises to make their systems more “customer centric.” (We’ve been flagging that as a fail since 2008.) We need CRM that welcomes VRM. B2C that welcomes Me2B.

And money. Our startups and nonprofits have done an amazing job of keeping the VRM and Me2B embers burning. But they could do a lot more with some gas on those things.

QR codes are becoming fishhooks

We’ve been very bullish on QR codes here, because they’re an excellent way for customers and vendors to shake hands, to start doing business, and to form constructive relationships.

Alas, they have become bait for tracking by marketers. In QR Codes Are Here to Stay. So Is the Tracking They Allow, Erin Woo (@erinkwoo) of the NY Times explains how:

Restaurants have adopted them en masse, retailers including CVS and Foot Locker have added them to checkout registers, and marketers have splashed them all over retail packaging, direct mail, billboards and TV advertisements.

But the spread of the codes has also let businesses integrate more tools for tracking, targeting and analytics, raising red flags for privacy experts. That’s because QR codes can store digital information such as when, where and how often a scan occurs. They can also open an app or a website that then tracks people’s personal information or requires them to input it.

As a result, QR codes have allowed some restaurants to build a database of their customers’ order histories and contact information. At retail chains, people may soon be confronted by personalized offers and incentives marketed within QR code payment systems.

“People don’t understand that when you use a QR code, it inserts the entire apparatus of online tracking between you and your meal,” said Jay Stanley, a senior policy analyst at the American Civil Liberties Union. “Suddenly your offline activity of sitting down for a meal has become part of the online advertising empire.”
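To see how the hook works: two QR codes can point diners at the same menu while differing only in identifiers baked into the URL. Stripping them takes nothing beyond the standard library. The parameter names below are common conventions (utm_* and a few click IDs), not an exhaustive list:

```python
# A sketch of removing common tracking parameters from a URL of the kind a
# QR code might carry. TRACKERS is an illustrative sample, not a complete
# inventory of tracking parameters in the wild.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKERS = {"fbclid", "gclid", "mc_eid"}

def strip_tracking(url):
    parts = urlsplit(url)
    clean = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if not k.startswith("utm_") and k not in TRACKERS]
    return urlunsplit(parts._replace(query=urlencode(clean)))
```

Of course, stripping parameters only helps against URL-borne identifiers; a QR code that opens an app, or a site that fingerprints the browser, tracks by other means.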

So that’s one more thing to fix in our apps and browsers. But how?

Obviously, we can try to avoid QR codes; but there are a growing number of places where that’s not possible.

Providing ways to opt out is a giant non-starter, as we’ve learned at great pain on the Web. (Do you have any record at all of the separate privacy settings you’ve made at all the sites and services where those choices have been provided? Of course not.)

We need at least two things here, and fast.

One is some way, in our phones or browsers, to prevent QR code scanning on phones from turning into tracking. Are you listening, Apple and Google? Plus everybody else in the QR code business?

The other is regulation. And I hate to say that, because too many regulations protect yesterday from last Thursday, and distort markets in ways seen and unseen for decades to come. But this is a case where we really need it.

[Two days later…]

There has been much follow-up to this piece. If you’re interested in that, start with this clip from Wednesday’s FLOSS Weekly podcast, where Jonathan Bennett (@JP_Bennett) provides some excellent answers to questions raised here and elsewhere.

On Twitter, @QRcodeART has some good follow-up under an @TWiT tweet pointing to that clip. In that thread I stand accused of “pure babbling,” to which I plead guilty (providing, as I do, an example of how, as Garrison Keillor once put it, “English is the preacher’s language because it allows you to talk until you think of what to say”).

The main point in the thread is that QR codes are essentially “innocent.” Also, “#Bluetooth is much worse! Creative names, unique IDs (!) and such and usually open and “seeable” for everybody. Similar to your #Wifi searching always for a #WLan in the perimeter. Unique funny names and identifiable MAC addresses. Think about that !”

Good advice. Clearly, there are concerns for all the tech we use, especially the networked kind. If we fail to take precautions such as those Jonathan recommends, we’re likely being tracked in ways we wouldn’t welcome if we knew about it. Returning to the metaphor, everything you carry, scan or click on can be a fishhook. And, to the hookers, you’re just a fish.


What makes a good customer?

For a while the subhead at Customer Commons (our nonprofit spin-off) was this:

How good customers work with good companies

It’s still a timely thing to say, since searches on Google for “good customer” are at an all-time high:


The year 2004, when Google began keeping track of search trends, was also the year “good customer” hit an all-time high in percentage of appearances in books Google scanned*:

So now might be the time to ask: what exactly is a “good customer”?

The answer depends on the size of the business, and how well people and systems in the business know a customer. Put simply, it’s this:

  1. For a small business, a good customer is a person known by face and name to people who work there, and who has earned a welcome.
  2. For a large business, it’s a customer known to spend more than other customers.

In both cases, the perspective is the company’s, not the customer’s.

Ever since industry won the industrial revolution, the assumption has been that business is about businesses, not about customers. It doesn’t matter how much business schools, business analysts, consultants and sellers of CRM systems say it’s about customers and their “experience.” It’s not.

To  see how much it’s not, do a Bing or a Google search for “good customer.” Most of the results will be for good customer + service. If you put quotes around “good customer” on either search engine and also The Markup’s Simple Search (which brings to the top “traditional” results not influenced by those engines’ promotional imperatives), your top result will be Paul Jun’s How to be a good customer post on Help Scout. That one offers “tips on how to be a customer that companies love.” Likewise with Are You a Good Customer? Or Not.: Are you Tippin’ or Trippin’? by Janet Vaughan, one of the top results in a search for “good customer” at Amazon. That one is as much a complaint about bad customers as it is advice for customers who aspire to be good. Again, the perspective is a corporate one: either “be nice” or “here’s how to be nice.”

But what if customers can be good in ways that don’t involve paying a lot, showing up frequently and being nice?

For example, what if customers were good sources of intelligence about how companies and their products work—outside current systems meant to minimize exposure to customer input and to restrict that input to the smallest number of variables? (The worst of which is the typical survey that wants to know only how the customer was treated by the agent, rather than by the system behind the agent.)

Consider the fact that a customer’s experience with a product or service is far more rich, persistent and informative than is the company’s experience selling those things, or learning about their use only through customer service calls (or even through pre-installed surveillance systems such as those which for years now have been coming in new cars).

The curb weight of customer intelligence (knowledge, knowhow, experience) with a company’s products and services far outweighs whatever the company can know or guess at.

So, what if that intelligence were to be made available by the customer, independently, and in standard ways that worked at scale across many or all of the companies the customer deals with?

At ProjectVRM, this has been a consideration from the start. Turning the customer journey into a virtuous cycle explores how much more the customer knows on the “own” side of what marketers call the “customer life journey”†:

Given how much more time a customer spends owning something than buying it, the right side of that graphic is actually huge.

I wrote that piece in July 2013, alongside another that asked, Which CRM companies are ready to dance with VRM? In the comments below, Ray Wang, the Founder, Chairman and Principal Analyst at Constellation Research, provided a simple answer: “They aren’t ready. They live in a world of transactions.”

Yet signals between computing systems are also transactional. The surveillance system in your new car is already transacting intelligence about your driving with the company that made the car, plus its third parties (e.g. insurance companies). Now, what if you could, when you wish, share notes or questions about your experience as a driver? For example—

  • How something pointed, left loose in the trunk, can easily puncture the rear bass speaker, which is screwed into the trunk’s roof and otherwise unprotected
  • How some of the dashboard readouts could be improved
  • How coins or pens dropped next to the console between the front seats risk disappearing to who-knows-where
  • How you really like the way your headlights angle to look down bends in the road

(Those are all things I’d like to tell Toyota about my wife’s very nice (but improvable) new 2020 Camry XLE Hybrid.)

We also visited what could be done in How a real customer relationship ought to work in 2014 and in Market intelligence that flows both ways in 2016. In that one we use the example of my experience with a pair of Lamo moccasins that gradually lost their soles, but not their souls (I still have and love them):

By giving these things a pico (a digital twin of itself, or what we might call internet-of-thing-ness without onboard smarts), it is not hard to conceive a conduit through which reports of experience might flow from customer to company, while words of advice, reassurance or whatever might flow back in the other direction:
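That two-way conduit might be sketched as follows. Every name here is invented for illustration; this is not a real pico API, just the shape of the idea: a product's digital twin relaying experience reports one way and advice the other:

```python
# A minimal sketch of a "pico" conduit: a digital twin of a product that
# carries experience reports from customer to company and advice back.
# Class and method names are hypothetical.
from collections import defaultdict

class ProductPico:
    def __init__(self, product_id):
        self.product_id = product_id
        self.channels = defaultdict(list)  # direction -> list of messages

    def report(self, text):
        """Customer -> company: a note about real-world experience."""
        self.channels["to_company"].append(text)

    def advise(self, text):
        """Company -> customer: advice, reassurance, or whatever fits."""
        self.channels["to_customer"].append(text)
```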

That’s transactional, but it also makes for a far better relationship than what today’s CRM systems alone can imagine.

It also enlarges what “good customer” means. It’s just one way in which, as it says at the top, good customers can work with good companies.

Something we’ve noticed in Pandemic Time is that both customers and companies are looking for better ways to get along, and throwing out old norms right and left. (Such as, on the corporate side, needing to work in an office when the work can also be done at home.)

We’ll be vetting some of those ways at VRM/CuCo Day, Monday 19 April. That’s the day before the Internet Identity Workshop, where many of us will be talking and working on bringing ideas like these to market. The first is free, and the second is cheap considering it’s three days long and the most leveraged conference of any kind I have ever known. See you there.


*Google continued scanning books after that time, but the methods differed, and some results are often odd. (For example, if your search goes to 2019, the last year they cover, the results start dropping in 2009, hit zero in 2012 and stay at zero after that—which is clearly wrong as well as odd.)

†This graphic, and the whole concept, are inventions of Esteban Kolsky, one of the world’s great marketing minds. By the way, Esteban introduced the concept here in 2010, calling it “the experience continuum.” The graphic above comes from a since-vanished page at Oracle.

Let’s zero-base zero-party data

Forrester Research has gifted marketing with a hot buzzphrase: zero-party data, which they define as “data that a customer intentionally and proactively shares with a brand, which can include preference center data, purchase intentions, personal context, and how the individual wants the brand to recognize her.”

Salesforce, the CRM giant (that’s now famously buying Slack), is ambitious about the topic, and how it can “fuel your personalized marketing efforts.” The second-person “you” there is Salesforce’s corporate customer.

It’s important to unpack what Salesforce says about that fuel, because Salesforce is a tech giant that fully matters. So here’s text from that last link. I’ll respond to it in chunks. (Note that zero-, first-, and third-party data are all about you, no matter who they come from.)

What is zero-party data?

Before we define zero-party data, let’s back up a little and look at some of the other types of data that drive personalized experiences.

First-party data: In the context of personalization, we’re often talking about first-party behavioral data, which encompasses an individual’s site-wide, app-wide, and on-page behaviors. This also includes the person’s clicks and in-depth behavior (such as hovering, scrolling, and active time spent), session context, and how that person engages with personalized experiences. With first-party data, you glean valuable indicators into an individual’s interests and intent. Transactional data, such as purchases and downloads, is considered first-party data, too.

Third-party data: Obtained or purchased from sites and sources that aren’t your own, third-party data used in personalization typically includes demographic information, firmographic data, buying signals (e.g., in the market for a new home or new software), and additional information from CRM, POS, and call center systems.

Zero-party data, a term coined by Forrester Research, is also referred to as explicit data.

They then go on to quote Forrester’s definition, substituting “[them]” for “her.”

The first party in that definition is the site harvesting “behavioral” data about the individual. (It doesn’t square with the legal profession’s understanding of the term, so if you know that one, try not to be confused.)

It continues,

Why is zero-party data important?

Forrester’s Fatemeh Khatibloo, VP principal analyst, notes in a video interview with Wayin (now Cheetah Digital) that zero-party data “is gold. … When a customer trusts a brand enough to provide this really meaningful data, it means that the brand doesn’t have to go off and infer what the customer wants or what [their] intentions are.”

Sure. But what if the customer has her own way to be a precious commodity to a brand—one she can use at scale with all the brands she deals with? I’ll unpack that question shortly.

There’s the privacy factor to keep in mind too, another reason why zero-party data – in enabling and encouraging individuals to willingly provide information and validate their intent – is becoming a more important part of the personalization data mix.

Two things here.

First, again, individuals need their own ways to protect their privacy and project their intentions about it.

Second, having as many ways for brands to “enable and encourage” disclosure of private information as there are brands to provide them is hugely inefficient and annoying. But that is what Salesforce is selling here.

As industry regulations such as GDPR and the CCPA put a heightened focus on safeguarding consumer privacy, and as more browsers move to phase out third-party cookies and allow users to easily opt out of being tracked, marketers are placing a greater premium and reliance on data that their audiences knowingly and voluntarily give them.

Not if the way they “knowingly and voluntarily” agree to be tracked is by clicking “AGREE” on website home page popovers. Those only give those sites ways to adhere to the letter of the GDPR and the CCPA while also violating those laws’ spirit.

Experts also agree that zero-party data is more definitive and trustworthy than other forms of data since it’s coming straight from the source. And while that’s not to say all people self-report accurately (web forms often show a large number of visitors are accountants, by profession, which is the first field in the drop-down menu), zero-party data is still considered a very timely and reliable basis for personalization.

Self-reporting will be a lot more accurate if people have real relationships with brands, rather (again) than ones that are “enabled and encouraged” in each brand’s own separate way.

Here is a framework by which that can be done. Phil Windley provides some cool detail for operationalizing the whole thing here, here, here and here.

Even if the countless separate ways are provided by one company (e.g. Salesforce),  every brand will use those ways differently, giving each brand scale across many customers, but giving those customers no scale across many companies. If we want that kind of scale, dig into the links in the paragraph above.

With great data comes great responsibility.

You’re not getting something for nothing with zero-party data. When customers and prospects give and entrust you with their data, you need to provide value right away in return. This could take the form of: “We’d love you to take this quick survey, so we can serve you with the right products and offers.”

But don’t let the data fall into the void. If you don’t listen and respond, it can be detrimental to your cause. It’s important to honor the implied promise to follow up. As a basic example, if you ask a site visitor: “Which color do you prefer – red or blue?” and they choose red, you don’t want to then say, “Ok, here’s a blue website.” Today, two weeks from now, and until they tell or show you differently, the website’s color scheme should be red for that person.

While this example is simplistic, the concept can be applied to personalizing content, product recommendations, and other aspects of digital experiences to map to individuals’ stated preferences.

This, and what follows in that Salesforce post, is a pitch for brands to play nice and use surveys and stuff like that to coax private information out of customers. It’s nice as far as it can go, but it gives no agency to customers—you and me—beyond what we can do inside each company’s CRM silo.
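As an aside, the red-or-blue persistence rule in that excerpt is trivial to implement, which is part of why it makes such an easy pitch. A sketch, with all names invented for illustration:

```python
# A sketch of the persistence rule in Salesforce's red-website example:
# a stated (zero-party) preference overrides the default until the person
# changes it. Visitor IDs, keys, and the default are hypothetical.
prefs = {}  # visitor_id -> {preference_key: value}

def set_preference(visitor, key, value):
    prefs.setdefault(visitor, {})[key] = value

def theme_for(visitor, default="blue"):
    """Return the visitor's stated color, else the site default."""
    return prefs.get(visitor, {}).get("color", default)
```

Note what the sketch also shows: the preference lives in the company's store, not the customer's, which is exactly the silo problem raised below.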

So here are some questions that might be helpful:

  • What if the customer shows up as somebody who already likes red and is ready to say so to trusted brands? Or, better yet, if the customer arrives with a verifiable claim that she is already a customer, or that she has good credit, or that she is ready to buy something?
  • What if she has her own way of expressing loyalty, and that way is far more genuine, interesting and valuable to the brand than the company’s current loyalty system, which is full of gimmicks, forms of coercion, and operational overhead?
  • What if the customer carries her own privacy policy and terms of engagement (ones that actually protect the privacy of both the customer and the brand, if the brand agrees to them)?

All those scenarios yield highly valuable zero-party data. Better yet, they yield real relationships with values far above zero.

Those questions suggest just a few of the places we can go if we zero-base customer relationships outside standing CRM systems: out in the open market where customers want to be free, independent, and able to deal with many brands with tools and services of their own, through their own CRM-friendly VRM—Vendor Relationship Management—tools.

VRM reaching out to CRM implies (and will create)  a much larger middle market space than the closed and private markets isolated inside every brand’s separate CRM system.

We’re working toward that. See here.


Markets as conversations with robots

From the Google AI blog, Towards a Conversational Agent that Can Chat About…Anything:

In “Towards a Human-like Open-Domain Chatbot”, we present Meena, a 2.6 billion parameter end-to-end trained neural conversational model. We show that Meena can conduct conversations that are more sensible and specific than existing state-of-the-art chatbots. Such improvements are reflected through a new human evaluation metric that we propose for open-domain chatbots, called Sensibleness and Specificity Average (SSA), which captures basic, but important attributes for human conversation. Remarkably, we demonstrate that perplexity, an automatic metric that is readily available to any neural conversational models, highly correlates with SSA.

A chat between Meena (left) and a person (right).

Meena
Meena is an end-to-end, neural conversational model that learns to respond sensibly to a given conversational context. The training objective is to minimize perplexity, the uncertainty of predicting the next token (in this case, the next word in a conversation). At its heart lies the Evolved Transformer seq2seq architecture, a Transformer architecture discovered by evolutionary neural architecture search to improve perplexity.
 
Concretely, Meena has a single Evolved Transformer encoder block and 13 Evolved Transformer decoder blocks as illustrated below. The encoder is responsible for processing the conversation context to help Meena understand what has already been said in the conversation. The decoder then uses that information to formulate an actual response. Through tuning the hyper-parameters, we discovered that a more powerful decoder was the key to higher conversational quality.
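For readers unfamiliar with the metric the quoted passage leans on: perplexity is the exponential of the average negative log-likelihood a model assigns to each actual next token. A tiny illustrative computation of the standard definition (not Meena's code):

```python
import math

def perplexity(token_probs):
    """Perplexity over a sequence, given the probability the model
    assigned to each actual next token: exp of the average negative
    log-likelihood. Lower means the model is less 'surprised'."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that gives each true next word probability 0.25 has
# perplexity 4: as uncertain as choosing among four equally likely words.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0
```

That equivalence between "low perplexity" and "sensible next words" is why the quoted result, that perplexity correlates highly with the human-judged SSA metric, matters.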

So how about turning this around?

What if Google sold or gave a Meena model to people—a model Google wouldn’t be able to spy on—so people could use it to chat sensibly with robots or people at companies?

Possible?

If, in the future (which is now—it’s freaking 2020 already), people have robots of their own, why not one for dealing with companies, which are turning their sales and customer service systems over to robots anyway?

The Wurst of the Web

Don’t think about what’s wrong on the Web. Think about what pays for it. Better yet, look at it.

Start by installing Privacy Badger in your browser. Then look at what it tells you about every site you visit. With very few exceptions (e.g. Internet Archive and Wikipedia), all are putting tracking beacons (the wurst cookie flavor) in your browser. These then announce your presence to many third parties, mostly unknown and all unseen, at nearly every subsequent site you visit, so you can be followed and profiled and advertised at. And your profile might be used for purposes other than advertising. There’s no way to tell.
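To make concrete how a single third-party beacon stitches visits to unrelated sites into one profile, here is a toy simulation. It is purely illustrative; real adtech uses cookies, pixels and fingerprinting rather than anything like this code:

```python
import itertools

# Many first-party sites embed the same third-party tracker. The
# tracker assigns each browser an ID once, then logs that ID at every
# embedding site it sees, building a cross-site profile.

class Tracker:
    def __init__(self):
        self._ids = itertools.count(1)
        self.log = []                      # (visitor_id, site) pairs

    def beacon(self, browser, site):
        # First sighting anywhere: set an ID "cookie" in the browser.
        browser.setdefault("tracker_id", next(self._ids))
        self.log.append((browser["tracker_id"], site))

tracker = Tracker()
browser = {}                               # one person's browser state

# The same third-party beacon fires on unrelated first-party sites:
for site in ["news.example", "shoes.example", "health.example"]:
    tracker.beacon(browser, site)

# The tracker now holds a cross-site history keyed to one ID.
profile = [site for vid, site in tracker.log if vid == browser["tracker_id"]]
print(profile)  # → ['news.example', 'shoes.example', 'health.example']
```

Note that no single site in the loop knows the whole profile; only the third party does, which is exactly what Privacy Badger makes visible.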

This practice—tracking people without their invitation or knowledge—is at the dark heart and sold soul of what Shoshana Zuboff calls Surveillance Capitalism and Brett Frischmann and Evan Selinger call Re-engineering Humanity. (The italicized links go to books on the topic, both of which came out in the last year. Buy them.)

While that system’s business is innocuously and misleadingly called advertising, the surveilling part of it is called adtech. The most direct ancestor of adtech is not old-fashioned brand advertising. It’s direct marketing, best known as junk mail. (I explain the difference in Separating Advertising’s Wheat and Chaff.)

In the online world, brand advertising and adtech look the same, but underneath they are as different as bread and dirt. While brand advertising is aimed at broad populations and sponsors media it considers worthwhile, adtech does neither. Like junk mail, adtech wants to be personal, wants a direct response, and ignores massive negative externalities. It also uses media to mark, track and advertise at eyeballs, wherever those eyeballs might show up. (This is how, for example, a Wall Street Journal reader’s eyeballs get shot with an ad for, say, Warby Parker, on Breitbart.) So adtech follows people, profiles them, and adjusts its offerings to maximize engagement, meaning getting a click. It also works constantly to put better crosshairs on the brains of its human targets; and it does this for both advertisers and other entities interested in influencing people. (For example, to swing an election.)

For most reporters covering this, the main objects of interest are the two biggest advertising intermediaries in the world: Facebook and Google. That’s understandable, but they’re just the tip of the wurstberg.  Also, in the case of Facebook, it’s quite possible that it can’t fix itself. See here:

How easy do you think it is for Facebook to change: to respond positively to market and regulatory pressures?

Consider this possibility: it can’t.

One reason is structural. Facebook is composed of many data centers, each the size of a Walmart or few, scattered around the world and costing many $billions to build and maintain. Those data centers maintain a vast and closed habitat where more than two billion human beings share all kinds of revealing personal shit about themselves and each other, while providing countless ways for anybody on Earth, at any budget level, to micro-target ads at highly characterized human targets, using up to millions of different combinations of targeting characteristics (including ones provided by parties outside Facebook, such as Cambridge Analytica, which have deep psychological profiles of millions of Facebook members). Hey, what could go wrong?

In three words, the whole thing.

The other reason is operational. We can see that in how Facebook has handed fixing what’s wrong with it over to thousands of human beings, all hired to do what The Wall Street Journal calls “The Worst Job in Technology: Staring at Human Depravity to Keep It Off Facebook.” Note that this is not the job of robots, AI, ML or any of the other forms of computing magic you’d like to think Facebook would be good at. Alas, even Facebook is still a long way from teaching machines to know what’s unconscionable. And can’t in the long run, because machines don’t have a conscience, much less an able one.

You know Goethe’s (or hell, Disney’s) story of The Sorcerer’s Apprentice? Look it up. It’ll help. Because Mark Zuckerberg is both the sorcerer and the apprentice in the Facebook version of the story. Worse, Zuck doesn’t have the mastery level of either one.

Nobody, not even Zuck, has enough power to control the evil spirits released by giant machines designed to violate personal privacy, produce echo chambers beyond counting and amplify tribal prejudices (including genocidal ones)—besides whatever good it does for users and advertisers.

The hard work here is solving the problems that corrupted Facebook so thoroughly, and that are doing the same to all the media that depend on surveillance capitalism to re-engineer us all.

Meanwhile, because lawmaking is moving apace in any case, we should also come up with model laws and regulations that insist on respect for private spaces online. The browser is a private space, so let’s start there.

Here’s one constructive suggestion: get the browser makers to meet next month at IIW, an unconference that convenes twice a year at the Computer History Museum in Silicon Valley, and work this out.

Ann Cavoukian (@AnnCavoukian) got things going on the organizational side with Privacy By Design, which is now also embodied in the GDPR. She has also made clear that the same principles should apply on the individual’s side.  So let’s call the challenge there Privacy By Default. And let’s have it work the same in all browsers.

I think it’s really pretty simple: the default is no. If we want to be tracked for targeted advertising or other marketing purposes, we should have ways to opt into that. But not some modification of the ways we have now, where every @#$%& website has its own methods, policies and terms, none of which we can track or audit. That is broken beyond repair and needs to be pushed off a cliff.

Among the capabilities we need on our side are 1) knowing what we have opted into, and 2) ways to audit what is done with information we have given to organizations, or has been gleaned about us in the course of our actions in the digital world. Until we have ways of doing both, we need to zero-base the way targeted advertising and marketing is done in the digital world. Because spying on people without an invitation or a court order is just as wrong in the digital world as it is in the natural one. And you don’t need spying to target.
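Capability (1) could be sketched as a customer-side ledger of opt-ins, so the question "what have I agreed to, and with whom?" is answerable without asking every company separately. This is only an illustrative sketch; the class and field names are assumptions, not any existing consent-receipt standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch of a personal, customer-side record of every
# opt-in. Names here are hypothetical, not a real standard.

@dataclass
class ConsentReceipt:
    party: str                  # who we opted in with
    purpose: str                # what we allowed
    granted_at: str             # when, in UTC
    expires_days: int = 90

@dataclass
class ConsentLedger:
    receipts: list = field(default_factory=list)

    def opt_in(self, party, purpose, expires_days=90):
        self.receipts.append(ConsentReceipt(
            party, purpose,
            datetime.now(timezone.utc).isoformat(), expires_days))

    def what_did_i_allow(self, party):
        return [r.purpose for r in self.receipts if r.party == party]

ledger = ConsentLedger()
ledger.opt_in("vanityfair.com", "first-party analytics")
ledger.opt_in("vanityfair.com", "subscription emails")
print(ledger.what_did_i_allow("vanityfair.com"))
# → ['first-party analytics', 'subscription emails']
```

Capability (2), auditing, would then mean checking each party's actual data use against the receipts in this ledger, which is exactly what today's per-site policies make impossible.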

And don’t worry about lost business. There are many larger markets to be made on the other side of that line in the sand than we have right now in a world where more than 2 billion people block ads, and among the reasons they give are “Ads might compromise my online privacy,” and “Stop ads being personalized.”

Those markets will be larger because incentives will be aligned around customer agency. And they’ll want a lot more from the market’s supply side than surveillance-based sausage looking for clicks.

The only path from subscription hell to subscription heaven

I subscribe to Vanity Fair. I also get one of its newsletters, replicated on a website called The Hive. At the top of the latest Hive is this come-on: “For all that and more, don’t forget to sign up for our metered paywall, the greatest innovation since Nitroglycerin, the Allman Brothers, and the Hangzhou Grand Canal.”

When I clicked on the metered paywall link, it took me to a plain old subscription page. So I thought, “Hey, since they have tracking cruft appended to that link, shouldn’t it take me to a page that says something like, “Hi, Doc! Thanks for clicking, but we know you’re already a paying subscriber, so don’t worry about the paywall”?

So I clicked on the Customer Care link to make that suggestion. This took me to a login page, where my password manager filled in the blanks with one of my secondary email addresses. That got me to my account, which says my Condé Nast subscriptions look like this:

Oddly, the email address at the bottom there is my primary one, not the one I just logged in with.  (Also oddly, I still get Wired.)

So I went to the Vanity Fair home page, found myself logged in there, and clicked on “My Account.” This took me to a page that said my email address was my primary one, and provided a way to change my password, to subscribe or unsubscribe to four newsletters, and a way to “Receive a weekly digest of stories featuring the players you care about the most.” The link below said “Start following people.” No way to check my account itself.

So I logged out from the account page I reached through the Customer Care link, and logged in with my primary email address, again using my password manager. That got me to an account page with the same account information you see above.

It’s interesting that I have two logins for one account. But that’s beside more important points, one of which I made with this message I wrote for Customer Care in the box provided for that:

Curious to know where I stand with this new “metered paywall” thing mentioned in the latest Hive newsletter. When I go to the link there — https://subscribe.condenastdigital.com/subscribe/splits/vanityfair/ — I get an apparently standard subscription page. I’m guessing I’m covered, but I don’t know. Also, even as a subscriber I’m being followed online by 20 or more trackers (reports Privacy Badger), supposedly for personalized advertising purposes, but likely also for other purposes by Condé Nast’s third parties. (Meaning not just Google, Facebook and Amazon, but Parsely and indexww, which I’ve never heard of and don’t trust. And frankly I don’t trust those first three either.) As a subscriber I’d want to be followed only by Vanity Fair and Condé Nast for their own service-providing and analytic purposes, and not by who-knows-what by all those others. If you could pass that request along, I thank you. Cheers, Doc

When I clicked on the Submit button, I got this:

An error occurred while processing your request.An error occurred while processing your request.

Please call our Customer Care Department at 1-800-667-0015 for immediate assistance or visit Vanity Fair Customer Care online.

Invalid logging session ID (lsid) passed in on the URL. Unable to serve the servlet you’ve requested.

So there ya go: one among a zillion other examples of subscription hell, differing only in details.

Fortunately, there is a better way. Read on.

The Path

The only way to pave a path from subscription and customer service hell to the heaven we’ve never had is by normalizing the ways both work, across all of business. And we can only do this from the customer’s side. There is no other way. We need standard VRM tools to deal with the CRM and CX systems that exist on the providers’ side.

We’ve done this before.

We fixed networking, publishing and mailing online with the simple and open standards that gave us the Internet, the Web and email. All those standards were easy for everyone to work with, supported boundless economic and social benefits, and began with the assumption that individuals are full-privilege agents in the world.

The standards we need here should make each individual subscriber the single point of integration for their own data, and the responsible party for changing that data across multiple entities. (That’s basically the heart of VRM.)

This will give each of us a single way to see and manage many subscriptions, see notifications of changes by providers, and make changes across the board with one move. VRM + CRM.
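Here is a minimal sketch of that single-point-of-integration idea: the subscriber holds her own record and pushes one change to every provider. Provider names and the update mechanism are hypothetical; a real system would rest on an agreed open standard rather than this toy code:

```python
# Toy sketch of VRM + CRM: the subscriber is the single point of
# integration for her own data, and one change propagates to every
# provider's CRM-side record. All names here are illustrative.

class ProviderCRM:
    def __init__(self, name):
        self.name = name
        self.records = {}              # keyed by subscriber email

class Subscriber:
    def __init__(self, email):
        self.email = email
        self.providers = []

    def subscribe(self, provider):
        self.providers.append(provider)
        provider.records[self.email] = {"email": self.email}

    def update(self, **changes):
        # One move, applied everywhere the subscriber has a relationship.
        for provider in self.providers:
            provider.records[self.email].update(changes)

me = Subscriber("doc@example.com")
for crm in (ProviderCRM("Vanity Fair"), ProviderCRM("Wired")):
    me.subscribe(crm)

me.update(address="123 New St")        # one change, all providers
print([p.records[me.email]["address"] for p in me.providers])
# → ['123 New St', '123 New St']
```

Contrast this with the status quo in the story above, where one subscriber ends up with two logins, mismatched email addresses, and no way to see the whole relationship at all.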

The same goes for customer care service requests. These should be normalized the same way.

In the absence of normalizing how people manage subscription and customer care relationships, all the companies in the world with customers will have as many different ways of doing both as there are companies. And we’ll languish in the login/password hell we’re in now.

The VRM+CRM cost savings to those companies will also be enormous. For a sense of that, just multiply what I went through above by as many people as there are in the world with subscriptions, and multiply that result by the number of subscriptions those people have — and then do the same for customer service.

We can’t fix this inside the separate CRM systems of the world. There are too many of them, competing in too many silo’d ways to provide similar services that work differently for every customer, even when they use the same back-ends from Oracle, Salesforce, SugarCRM or whomever.

Fortunately, CRM systems are programmable. So I challenge everybody who will be at Salesforce’s Dreamforce conference next week to think about how much easier it will be when individual customers’ VRM meets Salesforce B2B customers’ CRM. I know a number of VRM people who will be there, including Iain Henderson, of the bonus link below. Let me know you’re interested and I’ll make the connection.

And come work with us on standards. Here’s one.

Bonus link: Me-commerce — from push to pull, by Iain Henderson (@iaianh1)

Weighings

A few years ago I got a Withings bathroom scale: one that knows it’s me and records my weight, body mass index and fat percentage on a graph, updated over wi-fi. The graph was in a Withings cloud.

I got it because I liked the product (still do, even though it now just tells me my weight and BMI), and because I trusted Withings, a French company subject to French privacy law, meaning it would store my data in a safe place accessible only to me, and not look inside. Or so I thought.

Here’s the privacy policy, and here are the terms of use, both retrieved from Archive.org. (Same goes for the link in the last paragraph and the image above.)

Then, in 2016, the company was acquired by Nokia and morphed into Nokia Health. Sometime after that, I started to get these:

This told me Nokia Health was watching my weight, which I didn’t like or appreciate. But I wasn’t surprised, since Withings’ original privacy policy featured the lack of assurance long customary to one-sided contracts of adhesion that have been pro forma on the Web since commercial activity exploded there in 1995: “The Service Provider reserves the right to modify all or part of the Service’s Privacy Rules without notice. Use of the Service by the User constitutes full and complete acceptance of any changes made to these Privacy Rules.” (The exact same language appears in the original terms of use.)

Still, I was too busy with other stuff to care more about it until I got this from community@email.health.nokia two days ago:

Here’s the announcement at the “learn more” link. Sounded encouraging.

So I dug a bit and saw that Nokia in May planned to sell its Health division to Withings co-founder Éric Carreel (@ecaeca).

Thinking that perhaps Withings would welcome some feedback from a customer, I wrote this in a customer service form:

One big reason I bought my Withings scale was to monitor my own weight, by myself. As I recall the promise from Withings was that my data would remain known only to me (though Withings would store it). Since then I have received many robotic emailings telling me my weight and offering encouragements. This annoys me, and I would like my data to be exclusively my own again — and for that to be among Withings’ enticements to buy the company’s products. Thank you.

Here’s the response I got back, by email:

Hi,

Thank you for contacting Nokia Customer Support about monitoring your own weight. I’ll be glad to help.

Following your request to remove your email address from our mailing lists, and in accordance with data privacy laws, we have created an interface which allows our customers to manage their email preferences and easily opt-out from receiving emails from us. To access this interface, please follow the link below:

Obviously, the person there didn’t understand what I said.

So I’m saying it here. And on Twitter.

What I’m hoping isn’t for Withings to make a minor correction for one customer, but rather that Éric & Withings enter a dialog with the @VRM community and @CustomerCommons about a different approach to #GDPR compliance: one at the end of which Withings might pioneer agreeing to customers’ friendly terms and conditions, such as those starting to appear at Customer Commons.


© 2024 ProjectVRM
