“Bring me sales, not excuses!” A notorious demand from a VP of Sales whose identity I’ll protect. He unwittingly aided his competitors because his sales-not-excuses mantra doomed him to repeat the same stupid selling mistakes.
A real-world example:
Carmella (not her real name), his sales representative, lost a software opportunity to a rival vendor. She explained to the sales-not-excuses VP that she lost for two reasons. First, her sales proposal included a large custom modification that her competitor provided as a standard capability. Second, her sales support team was unable to hold a final meeting with her prospect on the day the prospect preferred.
Carmella’s boss felt she had mismanaged the sale and was only making excuses. He assumed she hadn’t emphasized the strengths of her packaged software product, and that she hadn’t worked hard enough at “overcoming objections” about the customized feature. Her inability to meet on the date most convenient for the prospect? He believed she didn’t try hard enough to overcome that problem, either. Carmella’s points hit a dead end. They were never heard outside her department.
Did Carmella mismanage the opportunity? The answer ranges from maybe to definitely. But put that question aside for a moment. Did the problems she identified exist when she talked with her boss? Yes. Did her competitor know the reasons he won? Definitely. Will his company exploit that knowledge? Absa-freaking-tively! Should the risks Carmella identified be reduced or eliminated? That is a management question Carmella’s boss should have considered—but didn’t.
What’s the takeaway? When observations about performance gaps are routinely dismissed, the risk of recurring problems is high. Why, then, would any sales manager stifle dialog that reduces the risk of lost revenue opportunities? I wish I had an easy answer. Ego? Myopia? Impatience? There’s a long list . . .
Besides toning down his arrogance, what should Carmella’s boss have done? He could have begun by taking an honest, open-minded look at why her opportunity didn’t succeed. In methodical terms, he should have performed an After Event Review.
Effective reviews:
- Capture observations from multiple viewpoints
- Identify risks and opportunities
- Promote organizational learning and facilitate continuous improvement
- Enable more accurate sales forecasts
- Improve attention to detail when recognizing and reflecting on success factors and problems
Reviews ask and answer six questions:
1. What was the intended outcome?
2. What actually happened?
3. What was learned?
4. What do we do now?
5. Who should we tell?
6. How do we tell them?
Here’s what the VP of Sales would have learned:
1. The custom modification Carmella proposed was just one of several product shortcomings the demo team encountered.
2. The sales team was poorly prepared for two client meetings.
3. The proposed custom modification was a clunky five-step process spread across three screens. The competitor’s product accomplished the same capability in a single step.
4. For unknown reasons, the essential custom modification was never considered for the development queue.
5. Carmella had to postpone her key sales meeting because company policy required that all pre-sales staff obtain a manager’s approval before committing time. The department manager was on vacation (and unavailable) at the time she made her request.
An opportunity for performance improvement crushed by bravado. It happens every minute of the selling day.
If you’re conducting After Event Reviews, here are pitfalls to avoid:
1. Not integrating them into ongoing sales improvement. In a sales risk survey conducted with CustomerThink in February 2010, we found that of approximately 100 salespeople and managers, just under half reported doing post-sale reviews as part of risk management.
2. Formulating a conclusion, then backing it up with facts. Reviews are not witch hunts: “I think we’ll find that Tim really mismanaged our sales process.” If you draw conclusions first, you can always find supporting data. But it won’t bring you closer to solving the problem.
3. Protecting a specific employee or department. If your prospect shared that one reason you lost was that your VP of Marketing made an off-color joke at a presentation, that fact goes out on the table! When it comes to uncovering the truth, nothing should be concealed or considered too sacrosanct to bring into the spotlight.
4. Calling them “Win/Loss” reviews. Don’t. “Win/Loss” distorts the analysis. Wins lull people into examining only what was done well. Losses shunt the analysis into what went wrong. Typically, it’s some of both for each. Further, how do you classify a Pyrrhic “win,” where the cost of sales exceeds the revenue? Referring to the exercise as an After Event Review helps ensure objectivity.
5. Reviewing only wins or only losses. There’s a wealth of vital information contained in both. Patterns emerge. By looking at what went wrong, you can infer what’s right. Same for the reverse.
6. Focusing on just improving performance of individual salespeople. As the example illustrates, multiple issues can be revealed. And all opportunities for improvement should be considered.
7. Confusing After Event Reviews with social media. Unlike social media posts, After Event Review findings aren’t public knowledge. They require closely governed boundaries and privacy protection. The last thing a new customer wants to see is an accolade she provided in confidence posted on a Facebook fan page.
As Larry Bossidy and Ram Charan wrote in their book, Confronting Reality, “the tools, practices, and behaviors that will distinguish success from failure can be summed up in one phrase: relentless realism.” That can only happen when key issues are out on the table, and when people aren’t threatened by knowing what they are.