Category Archives: Revenue Risk Management

Constructive Paranoia: Coming Soon to a Sales Organization Near You!

Admiral Hyman Rickover, who pioneered the US nuclear-powered navy, famously required job candidates to sit on a chair that he intentionally made awkwardly lopsided. He felt anyone who could deal with the aggravation was likelier to succeed on his team.

“Sure it’s uncomfortable working without a firm underpinning of stability. That’s why we need to be comfortable with being uncomfortable,” Thompson Morrison wrote in a blog post, The Uses of Discomfort.

Strange as this idea seems, it exposes a great truth. We all need to be comfy. We all like to be comfy. But we can’t exist without some discomfort, and the paranoia that comes with it. Turns out, just like caffeine and “bad” cholesterol, there’s a healthy side to paranoia, as Intel’s Andy Grove explained in his book, Only the Paranoid Survive.

In 2013, cultural anthropologist and author Jared Diamond wrote about paranoia and risk in an essay in The New York Times, That Daily Shower Can Be a Killer. In the article, Diamond reasoned that if he wanted to achieve his statistical quota of 15 more years of life (he was 75 at the time he wrote his essay), that meant taking 5,475 (15 x 365) more showers. “But if I were so careless that my risk of slipping in the shower each time were as high as 1 in 1,000, I’d die or become crippled about five times before reaching my life expectancy. I have to reduce my risk of shower accidents to much, much less than 1 in 5,475. This calculation illustrates the biggest single lesson that I’ve learned from 50 years of field work on the island of New Guinea: the importance of being attentive to hazards that carry a low risk each time but are encountered frequently [emphasis mine].” He coined a quirky term, “constructive paranoia,” to explain why New Guineans are effective at avoiding routine hazards, such as getting crushed under falling trees.
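Diamond’s arithmetic is easy to check. Here’s a minimal Python sketch (the 1-in-1,000 accident rate is his hypothetical, not a measured figure):

```python
# Diamond's shower math: a small per-event risk, repeated often,
# compounds into a near-certainty. The 1-in-1,000 rate is hypothetical.
showers = 15 * 365                  # 5,475 showers over 15 years
risk_per_shower = 1 / 1000

expected_accidents = showers * risk_per_shower
print(round(expected_accidents, 3))  # 5.475 -- "about five times"

# Probability of at least one accident across all 5,475 showers:
p_at_least_one = 1 - (1 - risk_per_shower) ** showers
print(round(p_at_least_one, 4))      # ~0.9958 -- nearly certain
```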

In the developed world, we don’t normally get all jittery when walking under foliage. But in the area of New Guinea where Diamond studied, medical clinics and 911 emergency call centers don’t exist. For Diamond, constructive paranoia, which he defines as a hyper-vigilant attitude toward repeated low risks, makes complete sense. By comparison, he warns that “Americans’ thinking about dangers is confused. We obsess about the wrong things, and we fail to watch for real dangers . . . Studies have compared Americans’ perceived ranking of dangers with the rankings of real dangers, measured either by actual accident figures or by estimated numbers of averted accidents. It turns out that we exaggerate the risks of events that are beyond our control, that cause many deaths at once or that kill in spectacular ways — crazy gunmen, terrorists, plane crashes, nuclear radiation, genetically modified crops. At the same time, we underestimate the risks of events that we can control [emphasis mine].”

In business development, risks from what we can’t control command great attention. In 2010, I conducted a sales risk perception study with CustomerThink, in which the two risks sales executives identified as most concerning were economic and competitive. Yet prosaic, everyday risks that could be considered controllable were not cited: the customer’s technical question that was answered incorrectly, the proposal that didn’t fully match the prospect’s stated needs, the problem resolution that took longer than promised, the price discount that was offered to a buyer but inadvertently not applied, the tweet or social media post that pushed just past the boundary of good taste.

All of these are discrete incidents that, individually, aren’t horribly risky or catastrophic. But when they spread into patterns – as they often do – they insidiously aggregate to huge risks that undermine positive outcomes, and erode value. Are we obsessing about the wrong things, and failing to be vigilant for visceral dangers that are closer to home? Are marketers and salespeople numb to constructive paranoia?

Emphatically, yes. At least, anecdotally. Little gaffes and service hiccups from situations that we can control go viral, spiraling into larger risks. Remember Dave Carroll’s “United Breaks Guitars”? One incident, and anyone in United corporate marketing can tell you how many people have heard the story on YouTube: 14.6 million, and counting. Sure, the economy is iffy. Go pump up your sales pipeline; 3X coverage seems pretty safe in this market. But also tell your claims department not to enforce your 24-hour incident-report policy so strenuously. Better still, change the policy.

In a recent CustomerThink column by Christine Crandell, What Causes B2B Customers to Churn? Three Things, and “Price” Isn’t One of Them, a commenter, John Ragsdale, wrote, “If customers are receiving value, i.e., the outcomes they anticipated, they will stick with you through missing features and occasional lapses in support levels. However, I think very few tech companies are capable of assessing customer outcomes, and are not sure what to do to improve them. Renaming your customer support organization as ‘customer success’ is not solving the problem.”

His statement exposes a curious paradox that infects customer service organizations around the globe: we expect customers to put up with ambient, low-level vendor misfires. But that tolerance insulates us from understanding the gravity of the outcomes. In effect, it insulates us from customers. And we don’t know the point at which those absent features and service lapses will create a former customer out of a current customer. What’s needed is an NSUS – Net Screw-up Score – so we can watch customer relationships melt down from a convenient dashboard. So, yes, I think constructive paranoia is a good thing. As Brian Tracy said about selling, “everything counts.” A difficult standard, but he makes a valuable point.

According to Crandell, “. . . large and small businesses often change the products [they use] . . . to the bewilderment of the vendor’s sales, marketing, and customer success teams.” Her use of “bewilderment” grabbed me, because it underscores the problem that Diamond pointed out: that we obsess about the wrong things and fail to watch for the real dangers.

Bewilderment? Really? Hazards will always occur, customers will still jump ship from time to time, but when you’re constructively paranoid about repeated low risks, bewilderment will be – should be – a very rare reaction.

Do You Mangle What You Measure? Nine Pitfalls to Avoid

“You can’t manage what you can’t measure.”

Grrrrrr! It sounds authoritative. Catchy, too. I like it!

I wanted to learn who originated this hallowed maxim, and my search led to none other than W. Edwards Deming, the quality guru. He must have known what he was talking about. Thanks to Deming’s pioneering work in statistical process control, the seals on my car doors don’t leak when it rains, and when I put the vehicle in drive, it goes forward, and not into reverse.

But it turns out that Ed never made that statement. In fact, his original thought got garbled over the years. What Deming did say was “The most important figures that one needs for management are unknown or unknowable . . . but successful management must nevertheless take account of them.” That’s too long to tweet, and it looks clunky on a PowerPoint slide. No wonder his verbose sentence mutated into those seven presentation-friendly words.

Deming’s circumspect voice has spawned a fount of finger-wagging spinoffs:

  • “You treasure what you measure.”
  • “You can expect what you inspect.”
  • “You can’t control it, unless you measure it or model it.”

Measuring and metrics are smoking hot right now, with nearly 63 million search results when I last checked a few seconds ago – more than double the number I got when I entered “crime prevention.” And for good reasons:

  • uncertainty and risk confront us every day
  • humans love explanations
  • we have boundless capabilities for keeping track of stuff

Not to mention, we have pragmatic business needs. As Deb Calvert wrote in a recent blog post, “With the emphasis on results, plus the accolades and rewards you’ve received for producing results, you may be singularly focused on the numbers, the volumes, the productivity, and the bottom line.” But even wonky Deming cautioned that one of the seven deadly diseases of management is running a company on visible figures alone. Danger, Will Robinson! You might mangle what you measure!

Steer clear of these pitfalls:

1) Measuring what’s not meaningful.
Number of outbound calls made. Number of demos given. Quantity of Facebook likes. Satisfaction indices. For managing revenue risk, do these measurements matter? For some companies, maybe. But for others, no. The problem is compounded when employee compensation is based on measurements unrelated to delivering value. At one company I worked with, the district manager announced to the sales force that he was going to track Windshield Time Ratio – or WTR. He described this metric as monthly revenue divided by total miles driven. The salesperson who covered suburban Philadelphia had a more efficient ratio than the rep who covered the entire state of Nebraska. Much more. But so what?

2) Succumbing to the flaw of averages.
“The Pollyanna way of forecasting the future is to take averages from the past and project them forward,” Steve Parrish wrote in Forbes (October 3, 2012). He added, “It is not necessarily wrong to use averages in making financial decisions, but it is dangerous to rely on this measuring tool alone. Computers are powerful tools; let’s put them to work. Why not look at various assumptions and scenarios to get a feeling for possible outcomes?”
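Parrish’s point is easy to demonstrate. Here’s a minimal sketch, using a hypothetical product whose fulfillment is capped by factory capacity: the plan evaluated at average demand looks rosier than the average over scenarios.

```python
# The flaw of averages: profit at the average demand is not the
# average profit once the model has any nonlinearity (here, a
# capacity cap). All figures are hypothetical.
import random

random.seed(42)
CAPACITY = 1000      # units the factory can actually fulfill
MARGIN = 5.0         # profit per unit sold

def profit(demand):
    return MARGIN * min(demand, CAPACITY)   # can't sell past capacity

# Demand scenarios: triangular between 400 and 1,600, most likely 1,000.
demands = [random.triangular(400, 1600, 1000) for _ in range(100_000)]
avg_demand = sum(demands) / len(demands)     # ~1,000 units

print(profit(avg_demand))                              # ~5000 -- the rosy plan
print(sum(profit(d) for d in demands) / len(demands))  # ~4500 -- the scenarios
```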

3) Putting too much credence in “what the numbers say.”
Otherwise known as Dashboard Love. “Monitor individual departments, multiple websites and anything else using dashboards,” one website proclaims. Anything else? Impressive, if it were possible.

4) Not questioning the accuracy of the measurements. Or their origin.
“66% of the buying process is complete by the time a salesperson gets involved.” (I’ve also read 60% and 67%.) “Two thirds of all marketing content is never used by salespeople.” A separate article squawks, “70% of sales enablement content is never used.” Hmmmmm. We bandy these numbers about as if they were de facto standards, without defining terms or clarifying the context in which these measurements were allegedly made.

In current public policy debates, such as racial profiling and law enforcement, we hew to metrics that support our points of view. But, as The Wall Street Journal reported in December 2014, “It is nearly impossible to determine how many people are killed by the police each year . . . Three sources of information about deaths caused by police – the FBI numbers, figures from the Centers for Disease Control and data at the Bureau of Justice Statistics – differ from one another widely in any given year or state.”

5) Injecting biases. Confirmation bias, anchoring, bandwagon effect.
Biases are always present when using metrics for decision making. According to Bob Hayes (The Hidden Bias in Customer Metrics), “generally speaking, better decisions will be made when the interpretation of results matches reality” – an outcome no one should ever take for granted. He illustrated with an example: “we saw that a mean of 8.5 really indicates that 64% of the customers are very satisfied (% of 9 and 10 ratings); yet, the [Customer Experience] professionals think that only 45% of the customers are very satisfied, painting a vastly different picture of how they interpret the data.”

6) Observer effect.
The measurement perturbs the results. The idea is often traced to 1927, when physicist Werner Heisenberg formulated his uncertainty principle. (Strictly speaking, the observer effect – a measurement disturbing the system being measured – is distinct from Heisenberg’s principle, but the two are commonly linked.) The same occurs in business development. It’s one reason opt-in and opt-out online surveys yield different results: with opt-in, respondents could be motivated by the prospect of a reward. That can create a fundamental misperception of reality, a point that James Mathewson made in an article, How to Measure Digital Marketing with Observer Effects. He wrote, “we have to accept that digital marketing management is not all science. It’s almost as much art as science. If we embrace the art, we can get better information on our users, and ultimately serve them better without violating their privacy.”

7) Compiling faulty indices.
Many indices are flawed and easy to manipulate, The Economist reported in an article, How to Lie with Indices (November 8, 2014). Its sardonic exposé reveals some commonly used tricks. “Above all, remember that you can choose what to put in your index – so you define the problem and dictate the solution.”

8) Embedding unneeded complexity.
Seth Goldman, co-founder of Honest Tea (now part of Coca-Cola), wrote in an article, Way Too Many Metrics, “At Honest Tea, we’ve pondered many different metrics when trying to quantify the impact of our mission-driven business, including:

  • The reduced calories for each bottle, can or drink pouch we sell
  • The increase in organic acreage fueled by our expanded demand, which helps support a less chemical-intensive approach to agriculture
  • The community investment dollars that we are able to generate with our Fair Trade premiums, such as support for schools, ambulances or eye care for villagers in a tea garden
  • The influencer/ripple effects of our success, when we create pressure for competitors to expand their low/no-calorie or organic options

But instead we keep it simple – we evaluate the impact of our mission by counting the number of bottles we sell.” At the time he wrote the article, he knew the number: 930,601,802 bottles since inception.

9) Creating the cobra effect.
The cobra effect refers to a policy in British-controlled India, where the colonial government placed a bounty on cobras to stem overpopulation. Because the snakes now had monetary value, enterprising people began to breed them, turning in the dead snakes to receive payment. When the government caught on to the practice and rescinded the bounty, breeders freed the now-worthless snakes, and the cobra population increased – exactly the opposite of the outcome the policy-makers intended. Today, there are many analogous situations in corporations where specific measurements are taken and people are rewarded, but the outcomes are not congruent with goals.

There’s unlimited room for mangling what’s measured. The faulty insight that yields the misdirected strategy or tactic. The hubris that results after statistical output is anointed as having predictive validity. “The threat is that we will let ourselves be mindlessly bound by the output of our analyses even when we have reasonable grounds for suspecting something is amiss,” wrote Viktor Mayer-Schönberger and Kenneth Cukier in their 2013 book, Big Data. Nicholas Carr, author of a recent book, The Glass Cage, offers a similar caution: “. . . templates and formulas are necessarily reductive and can too easily become straitjackets of the mind.”

Author’s note: this article originally appeared in my monthly column, Navigating Uncertainty, on CustomerThink.

Revenue Uncertainty – Part I: Known Unknowns, Unknown Unknowns, and Everything in Between

“. . . There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.”—Donald Rumsfeld

Rummy sure has a way with words, concealing some powerful insight within bureaucratic gobbledygook. For most of us, uncertainty appears to be one large, amorphous mass, and Rummy has tackled that problem with a distillation, albeit one that’s a tad verbose. We should applaud him for even taking this on.

Let’s put Rummy’s idea to work. Suppose your company has decided to sell an established product into a new market. You have knowledge about the past and assumptions about the future. You understand that there are many possible outcomes, some of which are likelier than others. You know that one outcome will prevail, and even though you are fixated on your goal, you don’t know exactly how things will turn out. Question: how do you ensure the outcome you get is the outcome you envisioned? (Hint: the answer is probably not “stay the course.” The people who coined the term agile would get upset.)

This describes a classic uncertainty problem, and one that is especially common in revenue creation. How do vendors sort through the universe of data, artifacts, anecdotes, and information to develop sufficient knowledge to place bets intelligently? Rummy’s taxonomy can help.

Three distinguishing characteristics of an intelligent bet: 1) the odds of winning are understood, 2) the bettor can sustain a failed outcome, and 3) the best possible result is one worth having. As I’ve learned, smart people can make dumb bets, and the converse is also true – it doesn’t require extraordinary brainpower to make wagers that are remarkably astute. Something to consider before forking over a hefty chunk of venture capital to a high-IQ adult. Want an extreme frinstance? Click here to see eight defunct dot-coms that purchased expensive ads during Super Bowl XXXIV. “Oops. The money was nice while it lasted.” Dealing successfully with uncertainty involves having at least a shred of common sense.

Imagine that Rummy has a seat at the table as part of your strategic team. Here’s how he might whiteboard your planned market entry:

The Known-Knowns. Pretty straightforward, but known-knowns are a small fraction of the needed information: names of target organizations and their executives. Regulatory restrictions and pending legislation. Major competitors. Revenue and other financial information for each prospect. Specific key performance indicators. Industry trends.

The Known-Unknowns. Typical stuff that marketers and salespeople ask about: Size of the market. Trends. Forces. Competitive strengths and weaknesses. Average length of the selling cycle. Pain points. Influencers, movers, and shakers. Level of buyer knowledge and understanding. Decision criteria. Buying processes. Internal politics. Competing projects. Motivation. Money and budgets. Biases. Perceived opportunities. Perceived risks. The list stretches from here to forever.

The Unknown-Unknowns. Everything else. Things that nobody ever thought to ask about or discover. Events that happened before, but went under the radar. Events that never happened before, but might. Customer backlash over who-knows-what that might have a measurable impact on revenue. Mistakes that will be made that no one even knew could be made. The metaphorical blindside tackle. What author Nassim Nicholas Taleb calls Black Swans.

Rummy’s taxonomy guides a useful and much-needed conversation about revenue uncertainty. In the last twenty years, we’ve made great strides in adding to the corpus of known-knowns, and we’ve come a long, long way in learning how to discover the known-unknowns. But we’re still left dangling, because categorization only takes us so far. We still must answer, “now what?” And for that, we need mathematical rigor. Eighty years before Rummy, economist Frank Knight, author of Risk, Uncertainty, and Profit, examined uncertainty through that lens, outlining three types: a priori probability, statistical probability, and estimates. I’ll stick to the high level, so hang in there with me.

A priori probability. You have a box with 12 blocks, and you know up front that six are green and six are red. Assuming you cannot see into the box, what is the probability of drawing a red block? Here, 6/12, or 50% – the probability distribution has been determined by definition. This is an iconic example in which an individual can place a bet based on straightforward calculation.

Statistical probability. Imagine the same box, but now you don’t know how many blocks are in it, or how many different colors there are. This uncertainty problem is more complicated, and therefore more difficult to cope with. The probability distribution of the result can be described by statistical analysis of empirically collected data. The way to manage the uncertainty in this scenario is to keep drawing and recording the results until you have sufficient information about the outcomes on which to base a future projection.

Estimates. Again, imagine the same box, but this time, you have no knowledge whatsoever about its contents. It could be holding anything. Any data that you might choose to collect don’t lend themselves to any statistical analysis.
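Here’s a minimal sketch of Knight’s first two categories (the contents of the “statistical” box are invented for illustration; the point is that the bettor can’t see them):

```python
# Knight's first two categories in miniature. A priori: the odds are
# known by construction. Statistical: they must be estimated by
# sampling. (His third category, estimates, resists both; there is
# nothing here to compute.)
import random

random.seed(7)

# A priori: 12 blocks, 6 red and 6 green, known up front.
p_red_a_priori = 6 / 12                        # 0.5, by definition

# Statistical: contents unknown to the bettor -- draw and record.
hidden_box = ["red"] * 9 + ["green"] * 4 + ["blue"] * 2  # we can't see this
draws = [random.choice(hidden_box) for _ in range(10_000)]
p_red_estimated = draws.count("red") / len(draws)

print(p_red_a_priori)             # 0.5
print(round(p_red_estimated, 2))  # ~0.6 (the hidden truth is 9/15)
```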

Knight was keenly aware of the dangers of conflating “the problem of intuitive estimation” with “the logic of probability,” whether a priori or statistical.

Here’s what he wrote: “The liability of opinion or estimate to error must be radically distinguished from probability or chance of either type, for there is no possibility of forming in any way groups of instances of sufficient homogeneity to make possible a quantitative determination of true probability. Business decisions, for example, deal with situations which are far too unique, generally speaking, for any sort of statistical tabulation to have any value for guidance. The conception of an objectively measurable probability or chance is simply inapplicable . . .”

Knight must be turning over in his grave today. I’d love to see his reaction watching sales executives discuss revenue forecasts, or listening to data wonks crow about the ‘predictive validity’ of their models for B2B decision-making. And I don’t see Knight endorsing any company’s policy of assigning increased purchase probability based on where a deal sits on a hypothetical sales process continuum. Yet many companies abdicate probability judgments to the “forecasting logic” embedded within their CRM applications, while their senior executives scratch their heads wondering why Sales can’t furnish a more accurate number. “If only our sales reps would populate the information we’re asking them for!” Hmmmm. Which unknown-unknowns might you be referring to?

I’m not arguing that forecasts have no value, or that companies should stop preparing them – only that we’re squandering opportunities to gain insight into what makes revenue uncertain, and failing to use the insights we do gain to reduce the volatility of revenue results.

We all want less uncertainty. I get that. But we expect people responsible for revenue generation to be prescient beyond their capacity – heck, beyond anyone’s capacity – and then we kick them in the rear when they’re wrong. Happily, there’s a way out of this frustrating cycle. In Part II, Putting Uncertainty to Work at Your Company, I’ll cover how to create a repeatable process for identifying and assessing revenue uncertainties, and in Part III, How to Model Revenue Risk, I’ll show how probability distributions can be applied to specific uncertainties, and how to interpret and use the results.

Revenue Uncertainty Part III: How to Model Revenue Risk

Note: this column was originally written for Navigating Uncertainty on CustomerThink. To read the original article, please click here.

 

Tension is high, and anticipation is thick as the annual sales kickoff for DisruptaCorp begins. Employees at the young tech startup settle in their seats. Mobile devices are hurriedly silenced and stowed. Chatter dissolves into quiet.

The CEO, Priya Neghandi, stands in front of the room alongside her VP of Business Development, Kelvin Wickersham. Without saying a word, Priya takes a black marker and scrawls a single number at an upward tilt on a whiteboard, and swiftly underlines it with a confident stroke.

$15,000,000

She stops for a moment, turns, and gazes across the noiseless room. “That is our sales goal for next year,” she states calmly. Her resolute demeanor is infectious. Kelvin looks directly at his team sitting in the front row, and exclaims, “We have our goal. Let’s go take that hill!” Cheers erupt. Hugs and fist bumps all around. Everyone feels confidence and love. Life is good. But nobody breaks out the bubbly. Not yet, anyway.

In sales, this is a common vignette, emblematic of a deterministic approach to goal-setting. A senior executive or committee establishes a single numerical target which is indelicately lowered onto the waiting shoulders of the business development team. Assumptions and logic about how the target was derived are not discussed. Shaky words such as might not, could, should, probably, maybe, and likely are notably absent. All conversation centers on how to achieve the goal. “We should do webinars!” “We need the right content, and we’ll get the sales force trained on how to use it!” Few, if any, talk about what could derail their efforts. “I want to know how you are going to make your number,” Kelvin growls in his team meeting immediately afterward, “not how you aren’t . . .”

But, what if DisruptaCorp moved away from determinism? What if Priya hedged a little, and candidly admitted that she’s, well . . . not completely sure about achieving the $15 million target? What if she confided that there are things she doesn’t know? For example, whether the expected number of customers will upgrade to the new software release, what happens if the operating platform that Development plans to use is late to market, what will occur if the company’s main competitor introduces a better product three months ahead of forecast, and what will happen to demand if the economic recession deepens?

How would that candor change conversations? Which new insights could be revealed? Which actions would be taken?

Replay the kickoff meeting scenario, except now, imagine that Priya approaches her company’s revenue challenge differently, and writes this on the whiteboard:

2015 revenue target –

Worst case: $7,000,000

Most likely: $10,000,000

Best case: $15,000,000

By uncloaking her reservations, Priya has initiated a key step for managing revenue risk. She has suggested that there are uncertain conditions – namely, forces and events – that can cause different outcomes. Her team begins to think about future situations, and the pressures they will exert on the company’s revenue strategy and tactics. They begin to think about their likelihoods and the ways they could converge.

Instead of instinctively running pell-mell toward The Hill, Priya and her team have undergone a paradigm shift in their worldview. No less tenacious and focused, they now have situational awareness. DisruptaCorp can begin to evaluate the future in terms of probability and risk, and the company can determine what matters. Most important, Priya’s team can anticipate trouble, and can take action before the revenue graph takes a southward turn. Similarly, they can also recognize positive forces and developments, and be in a better position to capitalize on the opportunities.

Priya’s openness helps the conversation grow and blossom in new, productive directions. Is the most likely revenue scenario of $10 million too conservative? Is it fair to assume that no particular revenue outcome is likelier than any other – making most likely simply the average between worst and best ($11 million)? And, what about the probability of hitting equally important revenue targets, such as break-even, which DisruptaCorp’s CFO has pegged at about $8.8 million? Without further analysis, it’s impossible to provide intelligent answers. But at least now the questions have been raised!

By considering worst case-most likely-best case, DisruptaCorp’s team has also discovered new questions to ask, including:

– How much to invest in lead generation to cover expected customer churn

– Whether hiring additional salespeople will reduce the risk of missing their revenue target

– What is the best price point when entering a new market, like healthcare

– Whether offering volume discounts will improve net revenue

– Whether investing in skills training and staff development for the sales force will reduce the probability of missing break-even revenue

For revenue planning, positive correlations between variables are not difficult to identify, explain, or understand. More lead generation effort generally leads to more revenue (though not always efficiently). Greater social media presence increases the opportunities to converse with customers online.

But other relationships are convoluted. Price increases don’t always result in greater total revenue. Reductions in defect rates don’t always improve customer loyalty metrics. Plus, situations can combine in millions of different ways, which makes revenue planning a Sisyphean task. How do executives align investment and effort to needed outcomes? With so many pieces and parts, it’s nearly impossible to manage day-to-day operations without the aid of statistical risk analysis.

What underlies revenue volatility are uncertainties and risks that have come home to roost. The hot product that a competitor just launched with a big media splash. The top sales producer who quits without warning, taking his biggest accounts with him. The customer complaint video that embarrassingly went viral. The corporate tweet that overstepped the boundary of good taste. These situations underscore why determinism – anchoring on revenue goals without accounting for risk – creates failure.

Fortunately, for business planning, many uncertainties can be accounted for because they can be modeled and analyzed probabilistically. In my previous article, Part II: Putting Uncertainty to Work at Your Company, I outlined five steps for exposing risks that jeopardize revenue. This article explains how to use statistical models and Monte Carlo analysis to develop a more realistic vision for revenue achievement under a set of assumptions or conditions.

Here are the next five steps:

1. Select a distribution model for the variables that are consequential for revenue. Examples include unit price, cost, demand, and currency valuation. These models will be used for statistical analysis to determine outcomes of interest for planning targets – for example, the expected cost of achieving different pipeline multipliers, how many new customers must be acquired to achieve 25% revenue growth, and how many new salespeople to hire. The top question to ask: based on the volatility inherent in the ranges developed in Step #5 of my previous article, which uncertainties carry low risk, and which carry high risk? The answer enables companies to prioritize what’s considered for statistical risk analysis.

2. Take a “reality check” on the expected distribution by asking “does this appear right?” If not, adjust the values.

3. Run scenarios that randomly combine key variables using Monte Carlo simulation. In the example that follows, we can examine how different probability distributions for cost, price, and demand might combine.

4. Develop quantities of interest that can be modeled by establishing a relationship between the variables. For example, gross revenue (demand times price), net profit (demand times the difference between price and cost), customer service staff level (total inbound contact incidents divided by the number of inquiries each agent can handle), etc.

5. Discover new questions to ask. Once the key variables have been identified and the best probability distributions have been selected, decision makers can ask many other planning questions. For example, based on future assumptions, what is the probability of achieving $200 million in revenue in year three? Or, if DisruptaCorp loses money in a given year, what will be the average expected loss?

Remember Priya’s original goal of $15 million in annual revenue next fiscal year? The statistical analysis provides a nuanced perspective. She will probably fall short by about $4.5 million, based on her team’s worst case-most likely-best case estimates for customer demand, cost, and selling price (see Table 1).

That’s what I discovered after running 10,000 stochastic (random) iterations of the probability distributions for the Table 1 variables (see Table 2). But there’s also some good news: 1) DisruptaCorp will likely be ahead of break-even by about $1.6 million, and 2) its projected profit margin is 34% – well above the CFO’s target of 25%. But best of all, DisruptaCorp can determine these likely results beforehand, and take action to manage the risks.

Next year at this time, Priya and Kelvin want to open a case of champagne for the DisruptaCorp team to celebrate a sales year in which they achieved (or over-achieved) their $15 million goal. Considering the estimates they have provided in Table 1, what is the probability of that happening? The Monte Carlo analysis tells us that 29% of the time, the variables will align to produce that outcome.

Priya must keep her investors happy, and she clearly wants to improve the odds. Through risk analysis, she can explore the most effective way to do that. Priya believes that if she can reduce demand volatility (now between 1.5 million and 3.7 million units) by beefing up outbound marketing, DisruptaCorp will mitigate some of the risk to its revenue goal. But will the risk reduction justify the cost? And if she can keep the selling price from dropping below $4.00 per unit, will that improve overall revenue, given the risk that a higher price might also reduce demand if the economy doesn’t improve? With risk models, Priya’s team can test different scenarios and develop the best strategies and tactics to meet DisruptaCorp’s objectives.

Table 1 – Ranges of key variables influencing revenue

                            Worst case    Most likely    Best case
Expected demand (units)      1,500,000      2,500,000    3,700,000
Price per unit                   $3.80          $4.00        $4.75
Cost per unit                    $3.15          $2.80        $2.15

Table 2 – Variables after 10,000 stochastic iterations:

Expected demand (units)      2,612,546
Price per unit                   $4.01
Cost per unit                    $2.65
Total revenue              $10,477,112
Profit margin                      34%
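For readers who want to reproduce this kind of analysis, here’s a minimal sketch in Python, assuming triangular distributions over the Table 1 ranges. (The column doesn’t specify which distributions were actually fitted in step 1, so the summary figures will land near, but not exactly on, Table 2 – and tail probabilities, such as the odds of hitting the $15 million goal, are especially sensitive to that choice.)

```python
# Monte Carlo over the Table 1 ranges, assuming triangular
# (worst, most likely, best) distributions for each variable.
import random

random.seed(1)
N = 10_000
BREAK_EVEN = 8_800_000      # the CFO's ~$8.8M figure
GOAL = 15_000_000           # Priya's whiteboard number

revenues, margins = [], []
for _ in range(N):
    # random.triangular(low, high, mode); note the best case for cost
    # is the LOW end of its range.
    demand = random.triangular(1_500_000, 3_700_000, 2_500_000)
    price = random.triangular(3.80, 4.75, 4.00)
    cost = random.triangular(2.15, 3.15, 2.80)
    revenues.append(demand * price)          # quantity of interest (step 4)
    margins.append((price - cost) / price)

print(f"mean revenue:     ${sum(revenues) / N:,.0f}")  # ~$10.7M with these assumptions
print(f"mean margin:      {sum(margins) / N:.0%}")     # ~35% with these assumptions
print(f"P(>= break-even): {sum(r >= BREAK_EVEN for r in revenues) / N:.1%}")
print(f"P(>= $15M goal):  {sum(r >= GOAL for r in revenues) / N:.1%}")
```

Swapping different distributions into step 1, then sanity-checking the outputs, is exactly where step 2’s “reality check” earns its keep.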

Compared to determinism, probability analysis adds many new complexities. After all, what could be easier at the outset than pointing toward a revenue hill and encouraging your team to go take it? But risk modeling will help you figure out whether there are any obstacles in between, and it will enable you to understand and estimate their magnitude. That insight will help you get past them, better ensuring your success.

Which champagne should DisruptaCorp buy before next year’s sales kickoff? No doubt, that’s a problem Priya and Kelvin will be delighted to have.

Revenue Uncertainty Part II: Putting Uncertainty to Work at Your Company

Last year I snapped a photo of a curious bumper sticker, and posted it on Facebook. It read, “If you’re prepared for flying irradiated zombies that can swim, then you’re prepared for anything.” I figured the car’s owner to be either a risk manager or an insurance agent. Who else would be moved to share this wisdom?

When dealing with uncertainty and risk, we humans follow a pattern. We collect an array of facts about things that matter. We relate these facts to other facts. Then, we assess that mass of information to glean understanding for how future outcomes might unfold. Ultimately, we must untangle this messy conglomeration of fact and feeling to answer a vague question: now what?

“Any decision relating to risk involves two distinct and yet inseparable elements: the objective facts and a subjective view about the desirability of what is to be gained, or lost, by the decision,” wrote Peter Bernstein in his book, Against the Gods. Here’s where things get interesting, because at this point the pattern begins to fray. The actions that we plan are based on our dynamic, individual mix of optimism, confidence, and loss aversion – a constant mental tug-of-war that has shaped our personalities since infancy. These emotions combine within us as uniquely as ice crystals in snowflakes.

Maybe, just maybe, that irradiated-zombie visage pushes a bright-red risk button for someone – especially someone who has learned about drone technology and recognizes its potential sinister uses. But what one person sincerely fears as a possible zombie infestation, another dismisses as an irrational worry. In business development, I still marvel that people unabashedly proselytize rules about buyer behavior.

Compared to human fickleness in risk assessments, software algorithms are coldly indifferent. Give a computer clean data along with a set of logical rules for analyzing it, and you’ll get consistent interpretations. Don’t like the results? Refine the algorithm! Alas, for now, we humans are stuck with pesky biases that interfere with the uniformity we often crave.

This yin-yang of risk seeking and risk aversion, between and within individuals, creates immense organizational challenges, because people – not algorithms – still make most of the high-level strategic decisions in an enterprise. And executives have a love-hate relationship with uncertainty: sometimes confronting it, sometimes sweeping it under the rug, and sometimes doing both. So here’s the problem: how do you bubble up the most relevant, consequential uncertainties, and put them into a collaborative space for people to consider, analyze, and use for strategic planning and decision making?

Not surprisingly, there’s a process for that! Here’s how to put uncertainty to work for your company:

1. Start with a deterministic statement. In most organizations, they’re easy to find. For example, “In the next five years, we will grow our annual revenue to seven times its current level,” or “our target operating margin for next fiscal year will be 20%.”

2. Identify areas of concern that might inhibit achievement of that goal or target. This requires people – preferably, many people – to raise a hand and say, “well, what about, what about, what about, and what about . . ?” Write those what-abouts on the whiteboard, and you’ll develop a picture of specific uncertainties within what was an opaque swirl of unknowns. Some thought starters: “customer demand,” “parts availability,” “meeting hiring targets,” “economic conditions,” “currency valuations,” “pending regulations,” “competitive product introductions.”

3. Prioritize those areas of concern by ranking them from most likely to apply pressure on revenue results, to least likely.

4. For each high-priority area of concern, take a view on a related process, and, over a specific planning time frame, forecast the minimum, most likely, and maximum values that could occur. Example 1: “in the next 12 months, revenue from service agreements will not be less than $10 million. The best result we could achieve will be $30 million, but we’ll probably be somewhere around $22 million.” Example 2: “next year, our worst case for customer churn will be 18,000, our best case will be 7,000, but we should anticipate churning about 14,000.”

Note: the most likely value is not necessarily the average of the minimum and maximum – and most often, it isn’t. For example, the most likely revenue produced by a new sales rep will skew toward the minimum value, while the opposite is typical for a more experienced rep (see the sketch after this list).

5. For every minimum value, explain why it’s not possible to achieve a result that is lower, and for every maximum value, explain why it’s not possible to achieve a result that’s higher. For example, there might be a ceiling on units sold because factory production might be unable to exceed a specific capacity, and outsourcing manufacturing isn’t feasible. Or, for planning quota achievement by sales rep, the minimum value could be derived if every territory generates run-rate revenue of $X million.
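Here’s a minimal sketch of the note above: encoding three-point estimates as triangular distributions shows how skew pulls the mean away from the most likely value (all figures are hypothetical annual revenue):

```python
# Step 4's three-point estimates as triangular distributions. The
# "most likely" value is the mode, not the mean: skew pulls the
# mean away from it. All figures are hypothetical.
import random

random.seed(3)
N = 100_000

# New rep: most likely outcome skews toward the minimum.
new_rep = [random.triangular(100_000, 900_000, 250_000) for _ in range(N)]
# Experienced rep: most likely outcome skews toward the maximum.
veteran = [random.triangular(100_000, 900_000, 750_000) for _ in range(N)]

print(f"new rep:  mode $250,000, mean ~${sum(new_rep) / N:,.0f}")  # ~$417,000
print(f"veteran:  mode $750,000, mean ~${sum(veteran) / N:,.0f}")  # ~$583,000
```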

As daunting as uncertainties might be, they serve a vital function: they open conversations, and they enable people to develop an understanding of the probabilities of business outcomes. By now, you will have recognized that for every target or goal, there are identifiable circumstances that are consequential for achieving it. The variables have many possible values, and they can combine in thousands – or millions – of different ways. If unemployment increases by 6%, AND the average sales price is $145.00, AND salesperson productivity remains static, AND the development team delays the new software release by six months . . . will the company meet its financial goal? You get the picture. When forecasting an outcome of interest – revenue, net profit, new customer acquisition, average revenue per transaction – the sheer magnitude of number crunching requires software for simulating the results.

Analytic tools can reveal insight into very complex uncertainty problems. Managers can ask which outcomes are most likely given a particular condition, or set of conditions. How might price increases affect demand? Which projects will likely achieve the biggest increases in revenue? Probability modeling makes the answers accessible.

Next month’s column, How to Model Revenue Risk, adds five steps to the five provided in this article; using an example, I’ll illustrate how to solve common revenue-uncertainty problems through Monte Carlo simulation.

Put uncertainty to work for your company. What begins as a whirlwind of uncertainty can be used to gain clarity on how to achieve your most important, mission-critical goals. An infestation of flying irradiated zombies won’t be on everyone’s list of worries, but without first having a conversation about what is, we can never know for sure.

To read the first article in this series, please click here: Revenue Uncertainty – Part I: Known Unknowns, Unknown Unknowns, and Everything in Between.
