The company Ashley Madison offers an audacious capability: extramarital affairs. “Ashley Madison is the most famous name in infidelity and married dating,” proclaimed the company’s marketing pitch in 2015. “Have an Affair today on Ashley Madison. Thousands of cheating wives and cheating husbands signup everyday [sic] looking for an affair . . . With Our affair guarantee package we guarantee you will find the perfect affair partner.”
A great value prop for those seeking such experiences – until July of that year, when hackers broke into the company’s data files. The thieves called themselves The Impact Team – a modest appellation, considering the extensive collateral damage their activities produced.
Mission accomplished. The Impact Team’s cyber-haul included 25 gigabytes of profiles describing the people who signed up for Ashley Madison’s services. Many records included email addresses ending with .gov and .mil (the domain extensions for the US government and Department of Defense, respectively), which stoked curiosity, to put it mildly. Had the hackers compromised the US nuclear launch codes, there would have been less panic in Washington.
But unlike most hackers, The Impact Team was motivated by more than extracting ransom. The group ostensibly wanted to preserve morality, citing Ashley Madison’s facilitation of marital infidelity as the reason for the hack. Another website, EstablishedMen, was also targeted. Both are owned by parent company Avid Life Media (ALM), which rebranded as Rubylife in July 2016. “Too bad for those men, they’re cheating dirtbags [sic] and deserve no such discretion,” the hackers wrote. The Impact Team threatened to expose the identities of Ashley Madison’s customers if ALM did not shut down the websites.
There was more. The hackers complained that although ALM charged users $19 to delete personal data from the Ashley Madison website, the company did not fulfill its promise – not fully, anyway. Instead, ALM simply relocated the “deleted” records to its backend servers. “Too bad for ALM, you promised secrecy but didn’t deliver,” the hackers said. Clearly, the hackers felt that philanderers deserve honest treatment from vendors.
Despite getting caught with their cyber-drawers down, “Avid Life Media defiantly ignored the warnings and kept both sites online [Ashley Madison and EstablishedMen] after the breach, promising customers that it had increased the security of its networks. That wouldn’t matter for the customers whose data had already been taken. Any increased security would be too little too late for them. Now [those customers] face the greatest fallout from the breach: public embarrassment, the wrath of angry partners who may have been victims of their cheating, possible blackmail and potential fraud from anyone who may now use the personal data and bank card information exposed in the data dump,” according to a story in Wired Magazine published shortly after the incident (Hackers Finally Post Stolen Ashley Madison Data, August 18, 2015).
The Ashley Madison hacking was not the first incident involving a vendor that failed to adequately protect customer information from hackers. There was TJX, parent company of retailer TJ Maxx, in 2007 (94 million stolen records), Sony PSN in 2011 (77 million), Target Stores in 2013 (70 million), Home Depot in 2014 (56 million), and eBay in 2014 (145 million). In fact, of The Nine Biggest Data Breaches of All Time (Huffington Post, August 20, 2015), Ashley Madison doesn’t even make the list.
But if someone maintained a list titled Most Awful, Ashley Madison would rise to the top. Ashley Madison scared the bleep out of everyone because the incident compromised not only financial information, but lifestyle preferences – the kind an individual would not likely share with friends or family. Purloined credit card numbers can be deactivated, but evidence of promiscuity and related information, well, once liberated, those horses aren’t heading back to the barn.
Should companies care about protecting personal customer information? The question is not rhetorical. By being opaque with customers about what it was doing with their sensitive data, Ashley Madison apparently didn’t care enough. Some would say it didn’t care at all. And its cyber-barriers weren’t insurmountable for the dedicated hackers on The Impact Team. Post-Ashley Madison, people began to think about their information in the IT cloud, and the associated risks to personal privacy. “Click to submit!” – software developers have made sharing personal data all too easy.
People worried about where their private information goes, where it’s stored, and who might have access to it. They began to imagine voyeurs who might crave such information, and they wondered what criminals could do with it. Consumers realized they couldn’t entrust their privacy to firewalls, encryption, secure data storage, and other jargony techno-obfuscations that marketers routinely use to sweeten their “privacy assurances.” Poignantly, after Ashley Madison, most consumers needed no imagination to understand the outcomes when vendors are lackadaisical about data governance.
Customer worry becomes a marketing worry. If customers can’t trust that their privacy will be respected, they won’t consent to the mechanisms that underpin online commerce – notably, allowing their primary information and data exhaust to be collected, stored, and analyzed. If – when – that happens, marketers will suffer a setback in solving a perennial problem: finding the likeliest buyers. Right now, marketers depend on both kinds of data to fuel their ravenous lead-generation engines and to close transactions. With every data hack, regulators raise their hackles, and customers become ever warier. “Hell hath no fury like a woman scorned!” The same goes for customers whose trust and privacy are abused.
Fury – aka The Do Not Call Registry. In the ’80s and ’90s, marketers got increasing blowback from agonized customers who felt their privacy had been violated, a development that directly contributed to the US Federal Trade Commission establishing the Do Not Call Registry in 2003. The registry’s intention was to curtail what had become a reviled business practice: marketers using telephone contact to prospect for new business. Many telemarketing calls were made to residences, and numbers-driven marketers didn’t care about customer experience, often scheduling calls at dinnertime, when prospects were more likely to be home.
Telemarketing began with the advent of the telephone, according to Wikipedia. It flourished in the 1970s, when marketers got savvier about effective tactics, which were widely shared as “best practices.” That was the beginning of its demise.
The primary customer data needed was culled from lists of residential phone numbers and ZIP Code directories, all available to the public. For marketers, the telemarketing sales channel became stupid-easy to switch on, and – this is crucial – wicked-hard for customers to avoid. Before caller ID and call blocking, the only choice for a customer when a telemarketer called was not to answer the phone and wonder whether they had just missed something important. Vendors became addicted to the low costs and revenue results. For senior executives, self-regulating one’s cash cow did not have wide appeal.
Yet Do Not Call was a bellwether in the customer fight for privacy, and it caught on like wildfire. While today it appears that Do Not Call doesn’t have sufficient penal claws to deter vendors from flouting its provisions (my home regularly receives numerous daily phone solicitations, despite being on the registry), its symbolic message is stunning. Today, there are 217 million numbers on the list. Over the 14 years since its inception, that averages out to 42,465 numbers added per day. I consider that an “opt-in” success story that should make any CMO drool with envy, albeit for the wrong reasons. The message to marketers: “Do not intrude on my privacy. Do not abuse my personal information. Because if you do, you’ll lose your privilege. Sincerely, Your Customers.”
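The back-of-the-envelope average above is easy to verify. A minimal check (ignoring leap days, and taking the 217 million and 14-year figures cited in the text as given):

```python
# Sanity-check the Do Not Call Registry growth rate cited above.
total_numbers = 217_000_000   # registered numbers on the list
years = 14
days = years * 365            # ignoring leap days for simplicity

per_day = total_numbers / days
print(round(per_day))         # prints 42466 -- close to the 42,465 figure in the text
```

The one-number difference from the article's 42,465 comes from rounding; either way, the order of magnitude is the point.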
When it comes to privacy, marketers have no scruples. None. COPPA, the Children’s Online Privacy Protection Act, was enacted to prohibit the collection and use of personal data from children under 13 years old. But there’s a problem: “More than 50 percent of Google Play apps targeted at children under 13 – we examined more than 5,000 of the most popular (many of which have been downloaded millions of times) – appear to be failing to protect data,” writes Serge Egelman, research director of the Usable Security & Privacy group at the International Computer Science Institute, in a Washington Post article, We tested apps for kids. Half failed to protect their data (August 7, 2017). For example, when parents download an app from Google’s Designed for Families section in the Google Play store, they assume data about their child (or children) remains safe. Turns out, that’s a bad assumption.
Which kid-generated data is compromised? Device serial numbers (which are often associated with location data), email addresses, and other “personally identifiable information,” according to Egelman, who wrote that his company found such data had been transmitted to third-party advertisers, and that the nature of the data meant that those companies could engage in long-term tracking of these children. Fortunately, Egelman has developed a website for parents to check the “privacy behaviors of the apps” his company has automatically tested. Just when we thought it was safe to allow our children to stay inside and play on the computer . . .
Personal privacy: why ongoing consumer trust isn’t assured. “Today your data can be of four kinds: data you share with everyone, data you share with friends and coworkers, data you share with various companies (wittingly or not), and data you don’t share,” writes Pedro Domingos in his book, The Master Algorithm. As consumers, we’re betting that as companies like Facebook, Amazon, and others gain more data, their learning algorithms improve, returning more value to us. But Domingos says that the “problem is that Facebook is also free to do things with the data and the models that are not in your interest, and you have no way to stop it.”
“When we say we’ll protect your data, you must believe us! . . or not.” Today’s marketers extol privacy in their customer messaging. After all, they smell money. “Onavo Protect for Android helps you take charge of how you use mobile data and protect your personal info. Get smart notifications when your apps use lots of data and secure your personal details,” the copy on Onavo’s website assures us. But Facebook, which spent $150 million to acquire Onavo four years ago, hasn’t been fully transparent about what it does with the data. One thing is certain: Facebook didn’t plunk down $150 million because it fancied the name Onavo. “Facebook is able to glean detailed insights about what consumers are doing when they are not using the social network’s family of apps, which includes Facebook, Messenger, WhatsApp and Instagram,” according to an article in The Washington Post, Facebook’s Affinity for Copying Seen as Stifling Innovation (August 11, 2017). How private is the data? Will Facebook use it for benign purposes? Will customers experience harm? I don’t know, and the answers aren’t provided in corporate fine print and written disclaimers.
In another example, this year Princess Cruises announced its Ocean Medallion bracelet that promises passengers a unique personalized travel experience:
“It’s cruise planner meets concierge — a guide that you can access everywhere — on touchscreens throughout the ship, your stateroom TV and your own mobile devices. Ocean Compass helps you navigate your ship and your cruise, like streamlining the boarding process, personalized shore excursions invitations, ordering your favorite drink and more . . . Upload your documentation and set your preferences ahead of time so you can swiftly walk on board and communicate everything your ship needs to know about you.”
“Customize your personal Ocean Tagalong™ by body shape, color, pattern and marks (like tattoos) to best reflect your “alter ego”. This responsive digital companion follows you from initial registration to the end of your cruise (as well as rejoin you on future cruises). You’ll find it online within your profile, during interactive PlayOcean games like Tagalong Sprint, as well as through Ocean Portals found onboard Medallion Class ships. Tagalongs even evolve throughout the cruise, reflecting your unique personality and interactions, and will collect ‘charms’ that show off your achievements.”
To me, Ocean Medallion is a marketing name for sophisticated surveillance technology, and there’s “Ewwwwwwwwww!” by the bucketload throughout this cheery write-up. Clearly, I’m not the type of customer Princess wants to reach, and I’m sure they’ve heard similar sentiments from others. They’re looking for a much different prospect: one who absolutely, positively cannot stand to separate from technology. Not even for a minute. I can distill Princess’s prose into a single sentence: “We know much about you even before you begin your vacation, and we track you from the time you come aboard until the time you disembark.”
In today’s digital era, batches of delicate personal customer information are produced, captured, selected, sub-selected, curated, sorted, stored, compiled, combined, listed, cut, “value-added,” repackaged, warehoused, transmitted, sold, and shared, like rail cars of soybeans. Your data, e-shot, helter-skelter to the world! A massive logistics system operating by subterfuge, trafficking the data minutiae of a human being’s existence, one individual at a time. Without industry self-enforcement, strong governance policies, and legal restrictions, tell me there’s not another Ashley Madison-type wreck about to happen, or already underway.
There’s an entrepreneurial opportunity here, in case anyone wants to step in. Pedro Domingos suggested one in how he envisions a new business model for privacy protection:
“The kind of company I’m envisaging would do several things in return for a subscription fee. It would anonymize your online interactions, routing them through its servers and aggregating them with its other users’. It would store all the data from your life in one place – down to your 24/7 Google Glass video stream, if you ever get one. It would learn a complete model of you and your world and continually update it. And it would use the model on your behalf, always doing exactly what you would, to the best of the model’s ability. The company’s basic commitment to you is that your data and your model will never be used against your interests. Such a guarantee can never be foolproof – you yourself are not guaranteed to never do anything against your interests, after all. But the company’s life would depend on it as much as a bank’s depends on the guarantee that it won’t lose your money, so you should be able to trust it as much as you would trust your bank.”
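Domingos's anonymize-and-aggregate idea can be sketched in a few lines of code. The example below is purely illustrative and entirely my own construction, not anything from The Master Algorithm: a hypothetical data broker replaces each user's real identity with a keyed pseudonym that only the broker can generate, and it releases a statistic only when enough distinct users contribute to it (a simple k-anonymity-style threshold).

```python
import hashlib
import hmac

# Assumptions for illustration: the broker holds a secret key that never
# leaves its servers, and it suppresses any statistic contributed by
# fewer than K_ANONYMITY distinct users.
SECRET_KEY = b"broker-only-secret"
K_ANONYMITY = 3

def pseudonym(user_id: str) -> str:
    """Replace a real identity with a keyed hash only the broker can produce."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:12]

def aggregate(events):
    """events: list of (user_id, site) pairs. Return per-site visitor counts,
    suppressing sites visited by fewer than K_ANONYMITY distinct users."""
    visitors = {}
    for user, site in events:
        visitors.setdefault(site, set()).add(pseudonym(user))
    return {site: len(users) for site, users in visitors.items()
            if len(users) >= K_ANONYMITY}

events = [("alice", "news.example"), ("bob", "news.example"),
          ("carol", "news.example"), ("dave", "shop.example")]
print(aggregate(events))   # {'news.example': 3} -- shop.example is suppressed
```

The design choice mirrors the quote: the raw data stays on the broker's side, identities are never released, and only sufficiently aggregated statistics leave the service. A real implementation would need far stronger guarantees (e.g., differential privacy), but the shape of the business model is the same.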
I wonder whether a company can honestly commit to never acting against a customer’s interests, when those interests inevitably change. Still, I like his entrepreneurial vision. In the meantime, Domingos asks, “Who should you share your data with? That’s perhaps the most important question of the twenty-first century.”
Author’s note: This article is the second in a series about consumer privacy. You can read the first article, In the Digital Revolution, Customers Have Nothing to Lose But Their Privacy by clicking here. In an upcoming article, I’ll outline important keys for corporate data governance.