February 28, 2011
Matchmaking With Math: How Analytics Beats Intuition to Win Customers
ASSURANT SOLUTIONS sells credit insurance and debt protection products. Maybe you’ve bought a product like theirs. If you lose your job or have medical problems and are unable to make a credit card payment, Assurant Solutions will help you cover it.
Like a lot of insurance products, payment protection is a discretionary add-on often made at the point of purchase. But when customers get the bill and see the additional fee of, say, $10.95 per month for payment protection, maybe they think, “Well, I’ll take my chances” and decide to cancel.
When those customers call, they reach Assurant Solutions customer service representatives, because the company manages insurance activation, claims, underwriting and customer retention for many industry-leading banks and lending institutions.
It’s in that last piece — that attempt to retain customers, beat the churn and stem a high exit rate — that Assurant Solutions faced a now-universal management challenge. As a call center positioned as the pivot point of all customer interaction for its clients, Assurant had access to hoards of data as well as the ability to create the kinds of rules and systems that any operationally optimized call center would deploy. With skills-based routing, customized desktops with screen pops, and high-end voice recording and quality assurance tools, its efforts were state-of-the-art.
THE LEADING QUESTION
If analytics are brought to bear on a call center, how are operations and results affected?
FINDINGS
- Many conventional beliefs about call centers prove to be wrong. For instance, customers will wait longer than expected.
- Evidence trumps intuition when predicting outcomes.
- Conflicting goals can be reconciled in real time by analytically driven models.
But it wanted to do better. Its 16% retention rate was consistent with the best industry standards, but that still meant that 5 out of 6 customers weren’t convinced to keep their coverage, let alone consider other products. That’s a lot of room for opportunity.
So Assurant Solutions tried something new: deep analytics. And it invented an operations system that capitalized on what the analytics prescribed.
The result? The success rate of its call center nearly tripled.
What Assurant Solutions found was that all the conventional tenets about contact centers “are not necessarily wrong, but they’re obsolete,” says Cameron Hurst, vice president of Targeted Solutions at Assurant. Hurst previously headed up development for HSBC’s Indian offshore Global Technology group and served as HSBC’s group head of contact center technology after HSBC acquired the call center software company he founded in 1992, so he was already expert in getting the most out of data to run call centers. Or so he thought.
But, he says, “we operated under the fallacy — and I believe it’s fallacious reasoning — that if we improve the operational experience to the nth degree, squeeze every operational improvement we can out of the business, our customers will reflect these improvements by their satisfaction, and that satisfaction will be reflected in retention. And that was fundamentally wrong. We learned that operational efficiency and those traditional metrics of customer experience like abandon rate, service levels and average speed to answer are not the things that keep a customer on the books.” Assurant Solutions was looking for the key to customer retention — but was looking in the wrong place.
So management attacked the challenge from a different angle. They brought in people like mathematicians and actuaries — people who didn’t know anything about running call centers — and they asked different kinds of questions, using analytics to answer them. “We’re an insurance company,” Hurst says, “so it’s in our DNA to be very data-driven. We are able to look at large volumes of historical data and find ways to mine for gold nuggets or needles in haystacks. But this use of analytics was fresh for us.”
What they found surprised them. In a sense, it was simple: They found that technology could assist the company in retaining customers by leveraging the fact that some customer service reps are extremely successful at dealing with certain types of customers. Matching each specific in-calling customer to a specific CSR made a difference. Not just an incremental difference. A huge difference. Science and analytics couldn’t quite establish why a particular rapport would be likely to happen, but they could look at past experience and predict with a lot of accuracy that a rapport would be likely to happen.
In the interview that follows, Hurst explains how Assurant Solutions figured out the right questions to ask, used analytics to focus on new ways to match customers with reps and figured out the best ways to solve the problem of conflicting goals. He spoke to MIT Sloan Management Review editor-in-chief Michael S. Hopkins.
Different Questions, Different Results
Most organizations already mine their data for insights. How can they apply analytics in new ways that will discover untapped opportunities for value creation?
One of the first questions anyone would have, reading about your experience, is how did you get answers to questions you didn’t even know you should be asking? What triggered the epiphany that caused you to start looking at things differently?
The epiphany occurred because we knew we wanted more. We wanted to retain more customers, and we wanted to get more wallet share by up-selling them.
And so we put the problem to a different group. We went to the decision sciences group, to the actuaries and the mathematicians, and we asked them, “Is there anything you can see that we can do better or that we can optimize more?” They weren’t looking at it from the perspective of “How do I run a contact center?” In fact, these people don’t know anything about contact centers. So I think the first important step was to have a different set of eyes looking at the problem, and looking at it from a completely different discipline.
If they didn’t know how a contact center runs, or what things have been effective, where did they start?
The first thing that was interesting about their approach was that rather than thinking about the average speed of answering phone calls, or the average “handle time,” or service level metrics, or individual customer experiences or using QA tools to find out what we did right and what we did wrong — all the things we usually consider when looking at customer and representative interaction — they started thinking of it purely from the perspective of, “We’ve got success and we’ve got failure.”
Success and failure are very easy things to establish in our business. You either retained a customer calling in to cancel or you didn’t. If you retained them, you did it by either a cross-sell, up-sell or down-sell.
So this is what they started asking: What was true when we retained a customer? What was true when we lost a customer? What was false when we retained a customer? And what was false when we lost a customer? For example, we learned that certain CSRs generally performed better with customers in higher premium categories while others did not. These are a few of the discoveries we made, but there were more. Putting these many independent variables together into scoring models gave us the basis for our affinity-based routing.
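To make the idea concrete, here is a minimal sketch of what success-based scoring might look like: tally retention outcomes per CSR and customer cluster, and let those observed rates stand in for “expertise.” The CSR IDs, cluster labels and toy records are illustrative assumptions, not Assurant’s actual model.

```python
# Hypothetical sketch of evidence-based affinity scoring.
from collections import defaultdict

# Each historical record: (csr_id, customer_cluster, was_retained)
history = [
    ("csr_7", "high_premium", True),
    ("csr_7", "high_premium", True),
    ("csr_7", "low_premium", False),
    ("csr_3", "low_premium", True),
    ("csr_3", "high_premium", False),
]

def affinity_scores(records):
    """Retention rate per (CSR, cluster) pair, learned purely from outcomes."""
    wins = defaultdict(int)
    totals = defaultdict(int)
    for csr, cluster, retained in records:
        totals[(csr, cluster)] += 1
        wins[(csr, cluster)] += retained  # True counts as 1
    return {pair: wins[pair] / totals[pair] for pair in totals}

print(affinity_scores(history))
# csr_7 scores 1.0 with high-premium callers but 0.0 with low-premium ones.
```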
That broadens the information they were looking for, right?
Definitely. These are data-oriented people, so they simply said, “Give us everything — all the data you’ve got.” And we had a lot, because we’ve been running this business for years. We had information about our customers that seemed, from the perspective of call center routing, totally irrelevant. We had a lot of data in the contact center about agents’ performance, the time they spend on calls and the like. They took the whole data set and started crunching it through our statistical modeling tools.
The approach they took was to break our customers down into very discrete groups, to see what’s true about them. Any bank or insurance company or financial services company that sells products to customers is tempted to cluster its customers into discrete groups. Almost everyone does.
The thing is, it’s not 10 clusters that define your unique customer groups, it’s usually hundreds of clusters. That was the first process, to find out all the different kinds of customers that we have: customers with high balances who tend to pay off early, customers who have high credit-to-balance ratios, customers who have low credit scores. The more variables that go into the creation of a cluster, obviously the more clusters you can have; so, not just customers with high balances who tend to pay off early, but customers with those characteristics who also have low credit scores.
When you’ve got it down to that granular level, you can then look at all the different customer interactions that we had with people in that cluster and say, “How did we do in this particular case? How did we do in that one?”
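As a rough illustration of how adding variables multiplies clusters, here is a toy sketch; the attribute names and cutoffs are invented for demonstration, not taken from Assurant.

```python
# Toy customer clustering: each binned attribute added to the key
# multiplies the number of possible clusters (2 x 2 x 2 = 8 here).
# Attribute names and thresholds are illustrative assumptions.
def cluster_key(customer):
    return (
        "high_balance" if customer["balance"] > 5000 else "low_balance",
        "early_payer" if customer["payoff_months"] < 12 else "slow_payer",
        "low_score" if customer["credit_score"] < 620 else "ok_score",
    )

customer = {"balance": 8200, "payoff_months": 7, "credit_score": 584}
print(cluster_key(customer))  # ('high_balance', 'early_payer', 'low_score')
```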
Wait — are you looking at every single interaction?
Yes. It wasn’t on an aggregate, macro basis but on an individual basis: every single interaction that we recorded over the last four or five years. Looking at all of these interactions let the team see patterns that establish that this CSR tends to do well, historically and evidentially, with customers in these specific sets of clusters.
What they also discovered was that the results were completely different from the existing paradigms in the contact center.
Sales as Matchmaking (Because “Variability” Means There’s Someone for Everyone)
What it means to understand — and act on — the critical difference between theoretically inferring why something might be likely to happen and evidentially knowing that it is likely to happen.
Let me stop you. As you’ve said, call centers tend to be pretty statistically driven places from the start. You named a bunch of the metrics that you would be looking at from the customer service side, and I’m sure you would have known when a customer called in what his products were and what his history was, and potentially matched him up with CSRs who had expertise in those particular product lines, yes?
That’s what everyone does in the call center world. When they sit down to write and build their routing strategies for how they’re going to move their macro clusters of customers around to CSR groups, they do it almost 100% on anecdote. We say that CSRs have expertise in an area. The problem is that expertise is a subjective term. When you deal with what we’ll call carbon-based intelligence — that is, inferential judgments made by us humans — versus silicon-based intelligence, or computerized judgments based on analytics, the carbon-based intelligence will say that this rep goes into this segment because they have expertise. They took a test. Or they grade out well in the QA tools.
What the evidence showed us is that the carbon-based intelligence tends to judge incorrectly. The silicon never does. If the model is set up properly and it has the ability to detect performance through whatever way you tell it to detect performance — by noting cross-sell, down-sell, up-sell, whatever — it will always measure a CSR’s performance correctly and in an unbiased way.
So for the first time you’re looking at both ends of the equation in some different ways. You’ve just described the CSR end, where you have this incredible database that reveals patterns about performance with different groups of customers, in spite of what you may or may not have inferred. What happens on the customer side? Are you looking at them in a different way?
Yes. There are obvious characteristics that we can study in our core systems. Think about what a bank or an insurance company would collect about its customers. Credit score, demographics, maybe some psychographics. We might know how many children they have.
You can predict what you think they’re going to do in the future, as long as you have a large enough customer base with enough interactions and enough variability to look at. Because what this whole thing is based on is variability. There’s a high degree of variability in your customer base, and there’s a high degree of variability in your CSR base. We learned to exploit that variability.
It’s the old adage in business: People do business with people they want to do business with. If you are successful at first establishing rapport with your customer, you have a higher probability of selling them, because there’s a trust relationship versus just taking orders.
We drive rapport and affinity in conversations by finding attributes that we can exploit to match, that create likeness across the CSR-and-customer synapse. It scales to potentially dozens of variables that operate dependently and independently of each other to drive this affinity/rapport relationship.
Having said all this, probably the most significant aspect of our use of analytics to drive conversational affinity was the persistency factor. That is, the length of time that customers remain on the books. We established almost right away that we could save a larger number of customers, as well as more profitable ones, through our new routing engine. But what we wouldn’t learn until later was the fact that we were keeping these customers longer than ever before. This was really exciting to us! As the months went by and we watched the new system operate, we observed an overall higher persistency rate for our saved customers compared to the old system. And since we’re talking about subscription-style products in our business, the longer the customers keep the product, the more revenue we generate. This turned out to be a much more important factor than a pure save or saved fee rate.
Some of this affinity matching is like a version of online dating.
That’s a beautiful metaphor, although there’s one breakdown in it. I would suppose that online dating sites work in a somewhat anecdotal way. It’s somewhat fact-based, but it’s also very psychographic.
We also go down to a deep level of granularity. Not body type and hair color like online sites might ask, but we do know that, for instance, certain CSRs perform well with customers that have $80 premium fees, but they don’t do so well with customers that have $10 premium fees. We don’t necessarily know the reason why. Nor do we need to.
And therein lies the difference. In our system there isn’t a lot of science behind why these differences exist, why a rep might be good with $80 versus $10. It’s just evident that that person is good with a certain customer type. So we operate off the fact that it’s true, based on the body of data that we have about the customer base and our past CSRs’ interactions with those customers. On the other hand, matchmaking sites wouldn’t have a lot of historical data about a particular individual’s interactions with their service (unless, of course, they use it frequently), so they operate off a body of data about people’s general characteristics and what makes them interesting to each other.
So do you see the difference? We’ve become purely evidence-driven: “This CSR always does well with this particular customer type because we’ve seen it happen.”
I would describe it like this: The science does not explain why an affinity will be likely to exist, but it does show that an affinity will be likely to exist.
Exactly.
How Analytics Solves the Problem of Conflicting Goals
What do you do when models predicting things such as best CSR match, willingness of a customer to wait and value of a customer to the company all recommend actions that are in conflict?
It sounds like the kind of information you have about customers is not that different from the kind of information you might have had before this whole process began, and that it’s really on the CSR side that you have all this new data, plus the data about what happens in each specific interaction between a customer and a CSR. Is that what drives your models?
That’s right. There’s one other element that goes into the solution that drives revenue: the predicted economic value of a particular customer. Now, there’s not a lot of new science in that, and we have models that tell us how to calculate that. But it’s important to the solution, because in a call center we sometimes have to decide which customer to focus on. We like the idea that there’s a CSR for everyone, but that’s not always true because of call volumes and agent availability. So if your goal is long-term revenue, you can use these economic predictors to determine which customers we should be focusing on.
There was a problem we didn’t quite know how to solve right out of the gate, and that was the fact that the best matches are almost always not available. In other words, if we have 50 callers in queue and 1,000 CSRs on the floor, we can create 50,000 different solutions, and we make those calculations 10, 15 times a second. One of the 1,000 CSRs is the best match, so that’s the score to beat — the number that shows how often we make that perfect match.
The vast majority of the time, though, those matches weren’t immediately possible because that CSR was on the phone, so we had to factor in another predictive model, and that was “time to available.” That’s not a massively complex model, because the industry has been solving that kind of problem for a long time.
But when you layer “time to available” into the actual scoring engine, you get some interesting results. If an agent’s average handle time is three minutes, 30 seconds, and he or she has been on the phone three minutes, 15 seconds, then we can predict they’re about 15 seconds away from being available. Then we can weigh in our prediction of customer tolerance or customer survivability — how long they’re willing to wait in the queue before just hanging up.
We know how long we keep customers in queue. We know what the outcomes are when they’ve been in queue, and we can find out where the curve starts to steepen in terms of abandon rates or bad outcome rates. We connect that information with our CSR’s predictive availability curve. If the optimal match is too far away, maybe 45 seconds or three minutes away, then the score for that optimal match becomes dampened and someone else might look more attractive to us. Because while they may not have perfect affinity, the fact that they’re going to become available sooner certainly makes them look more attractive to us.
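A minimal sketch of that trade-off, assuming a simple linear dampening curve and invented numbers (the production RAMP models are not public):

```python
# Affinity routing with "time to available" dampening. The dampening
# curve, tolerance and CSR figures are illustrative assumptions.

def predicted_wait(avg_handle_time, time_on_call):
    """Seconds until a busy CSR is expected to free up."""
    return max(0.0, avg_handle_time - time_on_call)

def dampened_score(affinity, wait, tolerance):
    """Shrink affinity as the predicted wait approaches the caller's
    tolerance; past tolerance the match is worth nothing."""
    if wait >= tolerance:
        return 0.0
    return affinity * (1.0 - wait / tolerance)

# (name, affinity with this caller, avg handle time s, time on call s)
csrs = [
    ("best_affinity", 0.90, 210, 15),   # great match, but ~195 s away
    ("good_affinity", 0.70, 210, 195),  # weaker match, free in ~15 s
]
tolerance = 60.0  # seconds this caller will wait before hanging up

best = max(
    csrs,
    key=lambda c: dampened_score(c[1], predicted_wait(c[2], c[3]), tolerance),
)
print(best[0])  # -> good_affinity: sooner availability beats raw affinity
```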
When you became more rigorously evidence-based, what did you discover about what might have been wrong in your old assumptions?
The conventional wisdom in the contact center is 80/20 — 80% of calls answered in 20 seconds or less. That’s a promise that most businesses make, because they believe that drives satisfaction.
What we learned is that satisfaction has almost nothing to do with that. Obviously the faster you answer, the better, over a larger body of interactions. But we found most customers are willing to wait much, much longer, on the order of 39 to 49 seconds, before annoyance affects outcome.
So our observation was, if customers are willing to wait, why are we trying so hard to force them into that 80/20 or 80/25 window? The longer we’re willing to wait, the better the match is, the better the outcomes, the more revenue generated.
We’ve done tests that push all the way out to 60/60 — 60% of calls answered in 60 seconds or less. At some point there is a negative effect on abandon rates. But what we were surprised to learn is that there is no negative effect on abandon rates until you start approaching 60 seconds. Which obviously means we’ve got that time to work with in order to find the most ideal customer/CSR match. It leads to a very, very direct impact on revenue. A direct correlation between time and revenue.
To see this work so obviously is amazing, because to go from 80/20 and then jump it to 80/40, and then within a few days to see immediate results in terms of save rates and saved fee rates, it’s stunning. It makes you wonder why the rest of the world doesn’t get this.
Summarize the results you’ve seen. The problem was that you were at a 15% to 16% retention rate despite operating in a fairly optimized state-of-the-art way. What’s happened since?
We’ve seen our retention rates, our actual save rates, go as high as 30% to 33%. But that’s not the end of the story. For us, we’re more focused on saved fee rate. Save rate: if two people call in and you save one and lose one, that’s 50%. But if two people call in and one is worth $80 to you and the other is worth $20, and you save the $80 one, you’ve got an 80% saved fee rate, because you saved $80 out of a total $100 eligible.
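That arithmetic is easy to express directly; here is the example above as a small calculation:

```python
# Save rate counts customers; saved fee rate weights them by the
# revenue at risk. The numbers match the example in the text.
callers = [
    {"fee": 80, "saved": True},
    {"fee": 20, "saved": False},
]

save_rate = sum(c["saved"] for c in callers) / len(callers)
saved_fee_rate = sum(c["fee"] for c in callers if c["saved"]) / sum(
    c["fee"] for c in callers
)
print(f"save rate: {save_rate:.0%}")            # 50%
print(f"saved fee rate: {saved_fee_rate:.0%}")  # 80%
```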
This relates back to what you said earlier about having to make choices about which customer to serve during busy periods?
Yeah. We use those predicted economic value models to help us focus on the more valuable customers. That’s not to say we discard the less valuable ones, because diversity in our customer base matches the diversity in our CSR force, so if a $20 customer calls in, we’ve got a $20 CSR to match him to. But our focus is on revenue, so saved fee rate is more important to us.
So while our save rates went into the 33-ish range, even as high as 35%, our saved fee rates went into the 47% to 49% ranges. We’ve seen days where we’ve been in the 58% range. Effectively that means that 58 cents of every dollar that was at risk has been saved. Those are very substantial numbers for us in our business.
Just so we can do the apples-to-apples comparison, what was the saved fee rate before?
The same as the overall save rate, 15% to 16%. And that’s actually a very exciting point to us, that our saved fee rates went up so much more than save rates, because we were focusing on saved fee as opposed to just saved customers alone.
If It Makes So Much Money, Why Doesn’t Everyone Do It?
What are the impediments to adopting evidence-based analytics? What can organizations do to overcome them?
Why don’t more people see the bottom-line impact of this sort of analytics?
In my own space, in the contact center world, I still am amazed when I come across very, very large Fortune 50 organizations that are still running very, very old technology. They don’t have the appetite to adopt it yet. Their current system is basically working; it’s been doing fine.
You scratch your head, saying, “Yeah, but don’t you want all the benefits that you can get from analytics?” And the answer is sort of subjectively, yes, we want those benefits, but next year.
There are early adopters and there are adopters. I wouldn’t call what we do an early adoption of a technology; it’s using very state-of-the-art tools just in a little bit of a different way. I think our creativity is in how we deployed it.
The program you developed at Assurant — called “RAMP,” for “Real-time Analytics Matching Platform” — is now available to other organizations that have to manage inbound calls. What do you see in organizations that makes it hard to apply analytics in this kind of an effective way?
The first one is, “I don’t have the IT resource to go do this right now.” You have to go compile the evidence, and that’s not a trivial task for most IT departments. It’s all data that they have, but these days everyone’s stressed and pushed for projects and IT time.
Another objection is the perception that this is just a skills-based routing solution and that we already have skills-based routing. That’s an interesting one to overcome because, first off, this use of analytics is not skills-based routing. It’s evidence-based or success-based routing. We don’t really care about a CSR’s skills as defined by a skills-based routing system, and in fact we tell you that the skills that you assign a CSR are practically irrelevant.
Those are legitimate objections. What do you say to get someone started down the path that could enable them to get results like yours?
Well, we have proof that it works. But hearing about 187% improvement over baseline at Assurant is hard to believe at times. So we say, let us prove it to you: give us some teaser data.
We can show you, based on your data, that you are not fully optimized and that you are relatively randomized in your routing — because effectively that’s the premise statement here. We are taking randomness and chaos and making order out of it.
Are You Giving Globalization the Right Amount of Attention?
At an aggregate level, global attention consists of the time and mental effort that a group of senior executives directs to a company’s international activities or its global environment. We measured executives’ global attention by looking at three areas:
- executives’ efforts to scan the global environment for opportunities and threats;
- the degree to which the executive group immersed themselves in global issues — as measured by communications with overseas managers, CEO travel and the extent to which senior management meetings were held in other countries;
- the degree to which the company executives discussed major globalization issues together.
More details about this research can be found in an article we published with Julian Birkinshaw of London Business School in the Journal of International Business Studies, as well as in Cyril Bouquet’s book Building Global Mindsets.
RELATED RESEARCH
C. Bouquet, A. Morrison and J. Birkinshaw, “International Attention and Multinational Enterprise Performance,” Journal of International Business Studies 40, no. 1 (2009): 108-131.
C. Bouquet, Building Global Mindsets: An Attention-Based Perspective (New York: Palgrave Macmillan, 2005).
Through our research, it became clear that the management of executive attention can have a significant impact on the performance of global companies. However, relatively few companies seemed to optimize global attention. Most seemed to spend either too little or too much time and mental effort on global issues. And both too little and too much attention to global issues were correlated with lower company performance — a phenomenon we call the “Goldilocks problem.”
The Problem of Too Little Global Attention. Too little attention to foreign markets seems to result in missed opportunities for sales growth, operational inefficiencies and the risk of being blindsided by fast-moving foreign competitors. In companies with large international operations, senior executives who gave too little attention to global issues tended to follow one of two approaches. In the first instance, they forced head office solutions on overseas subsidiaries using a kind of “my way or the highway” mentality. In the second approach, they delegated primary strategic and operating decision-making authority to foreign subsidiary managers, employing a sort of “it’s not my job to worry about these things” mindset.
While both approaches may be efficient from a head office perspective, neither maximizes company performance. Both fail to tap the rich resources of the global company. Both also fail to achieve the learning and best-practice benefits that come through global knowledge-sharing and skill generation.
Too Much Global Attention. On the other hand, an excessive level of attention to opportunities and threats abroad seems to create even bigger problems. It leads senior executives to take their minds off potentially more critical issues at home or interferes with the smooth functioning of foreign operations that don’t need intense scrutiny from head office managers. A related problem we frequently witnessed was mental overload and exhaustion on the part of members of the top management team. Given the complexity of global markets, staying abreast of and interpreting world events is taxing.
Keys to Optimizing Attention
Given the uniqueness of each company’s history, organization and resources, there isn’t a particular global attention level that is advisable for all companies. So, what is optimal for your company? We discovered that three factors determine whether a company’s situation warrants more or less global attention.
1. How Overseas Subsidiaries Are Organized. In some of the companies we studied, international subsidiaries operated with relatively high levels of independence; in others, the activities of overseas subsidiaries were closely integrated through the efforts of strong head offices. The greater the independence of overseas subsidiaries, the higher the performance benefits that came through increasing the global attention of head office managers.
Relatively high levels of subsidiary independence encourage country managers to take actions that maximize their local effectiveness but are much more difficult to track and make sense of by executives at the head office. In such cases, the global attention of people at the top is crucial to identify the pockets of knowledge and expertise that can be shared worldwide. Conversely, when the company is already functioning as a fully integrated operation, high levels of global attention can easily interfere with the smooth functioning of the overall organization.
2. The Dynamism of the Industry. Dynamism refers to the rate of change in the industry. In our research we found that the greater the rate of industry change, the more performance “kick” companies get when their executives focus their attention globally. This is because the factors driving industry dynamism have a large global component that companies ignore at their peril.
3. The International Experience Levels of Executives. We found that the greater the international experience levels of managers, the greater the benefits that come from global attention. Managers with more international experience generally had greater ability not only to make sense of rather complex international stimuli but also to process them quickly and in ways that improved the quality of their decision making. Bottom line: If you want to get the most benefit from global attention, put people in place with lots of international experience.
How to Increase Global Attention Levels in Your Company
Executives are not powerless when it comes to influencing the amount of attention their subordinates place on global issues. What works best if executives want to increase the level of attention that their subordinates give to global issues? We found that three commonly used practices — having a highly global corporate strategy, giving people global job titles and responsibilities and having executives tell people to pay more attention to global issues — had only limited effectiveness. However, our research did identify three important practices that senior executives can influence.
Economic Incentives. We found a significant and positive relationship between the global attention of senior executives and the degree to which their compensation was linked to their company’s global performance. In other words, most companies could raise the global attention levels of their head office executives — if that’s desirable — by tying individual compensation to the company’s global performance.
Global Leadership Development Activities. Our research found that company-specific leadership development programs that focused on developing or strengthening global leadership competencies had a powerful impact on the overall global attention levels of participants. The influence of global leadership development programs was statistically significant and, interestingly, existed even for people who had never attended the program. Apparently, the fact that their company supported such a program provided legitimacy to the globalization topic and encouraged managers to pay more attention to global issues. Nevertheless, the greatest impact of global leadership programs was witnessed in the actual participants of past programs or in those who were enrolled to take part in programs in the future.
The Power of Symbols. One of the most powerful global attention tools we came across was symbolism. It is a subtle, indirect tool that produced surprisingly powerful results. In fact, of all the tools we examined, the effective use of symbols, from how the company rewards and celebrates people, to the specific look and feel of company logos and other visual images, had the most profound impact on global attention levels. Companies often use different types of symbolic tools to engage people’s attention in ways that allow them to more effectively embrace objectives critical to the organization. For example, in the 1990s, British Airways replaced the British flag on its jets with art or writings from around the world to encourage people — the public and employees — to think of BA as a global airline. While BA’s strategy of de-emphasizing its British heritage was short-lived, the company’s artwork was an effective attention-grabbing tool. Other companies are finding great success by using international themes in their office décor or incorporating an international theme into the corporate logo or presentation templates.
Visual images signal important issues to the brain through reminders and reinforcements. In a similar manner, the rules of career advancement in organizations can also reinforce the message that attention to specific strategic themes is a good thing. In organizations with high levels of global attention, we observed very clear signals that paying attention to global issues was career-enhancing over time. Deliberate and publicized promotions of staffers who got global attention right had a profound and lasting impact on the attention levels of others in the office. If you want people to pay attention to global issues, then the single most important thing you can do is to promote employees who have excelled on global projects or initiatives. Give them power and raise their profiles. And make sure everyone else knows why you are doing it.
Why Companies Have To Trade “Perfect Data” For “Fast Info”
DOES THIS SOUND FAMILIAR?
Your company collects data. You want to act on it. First, though, you really, really want to make sure that data is accurate. So you focus on getting it right. Better to wait on a decision until you have the absolutely correct information than act based on partial information.
That might make sense, but it’s the wrong way to go, say the top two executives at Attivio, a privately held enterprise software company that focuses on unified information access to help its customers find and understand vast amounts of content and data. The problem with concentrating on getting the numbers too right is that most companies sacrifice speed for accuracy.
Ali Riaz, Attivio CEO, and Sid Probstein, CTO, are “practically relatives” at this point, according to Riaz. “I think I saw his first child being born, the second child and the third child,” he says. They met when Probstein interviewed for and then initially “refused to work with” Riaz at FAST Search & Transfer, a company Riaz was President and COO of (it’s now owned by Microsoft).
Probstein “understood something I didn’t understand right away, that FAST, at the time, didn’t have its strategy quite right — something I didn’t understand because I’m kind of a hopeless romantic,” says Riaz. “When I realized that he actually got it, he got that this company was not yet on the right path, I thought, ‘That’s a smart guy.’ I called him personally and begged him. He was a big contributor to FAST’s success, and we’ve been together since.”
Riaz and Probstein spoke with MIT Sloan Management Review editor-in-chief Michael S. Hopkins about the stifling downside of the quest for perfect data, why “eventually consistent” is a concept every company should take to heart, and how to deal with the need for speed.
Where do you think tech-driven information and data trends stand in terms of how companies understand them? How has the capture and use of information changed most in recent years?
Ali Riaz: Let me go back in history. I used to work at Novartis Pharmaceuticals, and one of the things that was really bothersome for me at the time was that we could never agree on the data. We got to the management team meetings and one system would say we have 17,500 employees and another would say we have 17,300 employees. Or one system would say we have 400 patients enrolled on this trial and another would say 800. These might not seem like big issues, but they ended up consuming a lot of our leadership time and frustration.
We never got to really be an intelligent company, in the sense that we were seeing the right things and being able to act and collaborate based on them. But this isn’t unique – I’m not throwing Novartis under the bus. I would say that this is a problem that most companies have had, and still have.
Sid Probstein: I think that’s exactly right. I worked in financial services, and 20 years ago the issues were all around the things Ali’s talking about. We couldn’t agree on how many units were sold, because there were 12 different products and 12 different systems storing the information on them. How could we get a unified view of our customers?
One of the first projects I worked on at a big financial services firm was to do the traditional Pareto breakdown, looking for the 20 percent of customers who were providing 80 percent of the revenue, to figure out if we could eliminate focus on some unprofitable customers. Classic modern business theory, right? It was a decade-long project just to unify the data.
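For reference, the Pareto breakdown itself is a few lines of code once the data is unified; the decade-long part was the unification. A toy version, with invented revenue figures:

```python
# Toy Pareto breakdown: smallest set of customers covering 80% of
# revenue. The figures are made up for illustration.
revenues = {"cust_a": 500, "cust_b": 300, "cust_c": 120, "cust_d": 50, "cust_e": 30}

target = 0.80 * sum(revenues.values())
running, top_customers = 0, []
for cust, rev in sorted(revenues.items(), key=lambda kv: kv[1], reverse=True):
    if running >= target:
        break
    top_customers.append(cust)
    running += rev

print(top_customers)  # ['cust_a', 'cust_b']: 2 of 5 customers, 80% of revenue
```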
But then the company grew. And part of the challenge of what makes it so difficult to achieve an intelligent enterprise is change. That financial services firm bought another company and they had yet another ERP and another CRM system. Resolving all that becomes a huge challenge.
So what I think we’ve seen developing over the last ten years is the value of what I’ll call interim steps. The idea is, look, let’s not try to move all the data together, let’s not worry too much about putting it together in one coherent way. Instead, let’s figure out what lives where as an interim first step, so that when we perform an analysis we know the provenance of the data.
Let me make sure I understand the concerns about where the data came from, what you call the provenance of data.
Sid Probstein: Well, that’s definitely another thing I’d say has changed. People today are very concerned with provenance. Before, you used to argue about which report is right. Now, you want to know where that piece of data comes from.
People are focused on understanding if data is trustworthy. What assumptions might this source have made? For instance, it’s very common for a company that has newly acquired another company to trust the acquired company’s reporting less than its own. That’s a very natural, human effect. You think, “Well, that seems interesting, but I don’t know how they calculated revenue.”
The thing is, if two companies, before they even get into discussions about how their pieces fit together, start asking how the data was collected and what led to it, they’re probably going to convince themselves very quickly that it’s going to be hard to put this all into one view.
That’s why an interim set of representations is so appealing. It’s all about how the intelligent enterprise responds to the need to move faster. It’s important to integrate and understand the data, but managers are accepting that they can start to do all that without necessarily having to push all the data into the same technology stack.
I think companies thought 10 or 15 years ago that the systems they were putting in place would deliver uniform, universally accessible, trustworthy, analyzable data. And yet, here they are all these years later, after significant investment, often feeling no better off.
What’s your sense of what people expected back then and what they’re most or least frustrated about now?
Ali Riaz: First, I think we can’t ignore human nature in corporations. If I get data that says, “Ali, you did a really good job this month,” then I trust it. If the data says, “Ali, you did a bad job this month,” I may not trust it. I may question it; I may want to know more about it. People only select the information that supports their beliefs, so using dispassionate analytics is the only way to dispel this problem. The early transaction systems didn’t contain the “why” of information, just the “what”. It’s the more recent ability to merge all the sources that makes for better information and better decisions. Triangulating on a fact or an event validates it and also lets you discover what you might never have known by looking at all your data sources separately. Where we are now is that if I get information that says, “Ali, you did a good job on A, B and C, but you could have done a better job on X, Y and Z,” and if that information is complete, analyzed, and presented in a timely fashion, there’s not a lot of places I can hide.
And none of us expected two things: the amount of scale that we need, and the speed that we need. Companies like Comcast and Verizon have millions of clients, and every day, hundreds of clients move from them to competitors. There’s no point in finding out tomorrow why my customers left me yesterday, but it would be great to know who is about to leave me a week or two from now.
Sid Probstein: That’s a really key point. Early reporting was backward looking. We need to use reporting to predict what is going to happen, and how to act on it. That means massive amounts of data so that we get a good sampling of what’s going on, and it also requires speed. People thought they were going to fix reporting: before, maybe it would take a week to run a report but we didn’t know if the data was correct or not, so our focus was on getting the data to be accurate. Today, managers don’t just want the report to be accurate; they want it accurate and they want it every ten minutes or in a dashboard that updates continuously. Or they want it plus a report analyzing the hundreds of millions of emails inside the company. The systems that have to address that kind of performance are not changing fast enough to keep up.
Even if I’m an old brick-and-mortar company, I start up a website where I’m making sales; all of a sudden the tempo of my business has changed dramatically. I have a store that’s open 24/7. I collect information about what these people are doing on my site, but if I don’t crunch it and analyze it and come up with the best offer for people each time they arrive at the site, they’ll go to another website that does a better job, and they’ll do it for free since there’s no switching cost.
These are things that we didn’t even know to ask about 10 years ago.
So let’s look at where we are now. It sounds like you’re saying that even if you solve the challenge of making your data perfect, you might be doing it too slowly to act on. Should we be asking different questions of our data, and therefore of the tools we use to parse and analyze it?
Sid Probstein: Yes, you’re exactly right. One of the most important questions is whether we should even worry about whether this report is exactly right or not.
There’s a term called “eventually consistent” that grew up around a whole fleet of open-source-type technologies for crunching the huge amounts of data generated by website click-throughs. If you’re an e-commerce site, you want to understand the convergence of what the user is looking at and why he is clicking on it. Amazon, of course, is really good at this, asking, “For this customer at this very moment, what’s the best thing to show them?” They have high, high rates of success on recommendations, on product bundles, on follow-on advertising.
Amazon is good at this because they don’t worry about everybody. They develop a model where they’re eventually going to get a consistent model of the world, but at the moment they need to do it, they don’t care that they can’t roll it out for everyone. They’ve got hundreds of millions of clicks a day, and they figure, why don’t we just look at 20 percent of them? The key thing is to do it quickly and to make sure that whatever we conclude is backed by many observations.
This is when the term “analytics” becomes interesting. Analytics doesn’t have to be based on super-precise data. Again, that doesn’t mean wrong data, but it might mean some outcome that wins for the customer. If you profile a jazz CD that people didn’t know they wanted, and some people buy it, great. The fact that some of the 100,000 people that you showed it to didn’t buy it is irrelevant.
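As a rough sketch of the sampling idea (synthetic clicks and an invented conversion rate; nothing here is Amazon’s actual pipeline), estimating from 20 percent of the stream lands close to the full-data answer:

```python
# Estimate a conversion rate from a 20% sample of a simulated click
# stream. The point is that a fast, partial answer is close to the
# slow, complete one.
import random

random.seed(42)
clicks = [{"bought": random.random() < 0.03} for _ in range(100_000)]

sample = random.sample(clicks, k=len(clicks) // 5)  # only 20% of the stream
estimate = sum(c["bought"] for c in sample) / len(sample)
full = sum(c["bought"] for c in clicks) / len(clicks)

print(f"sampled estimate: {estimate:.4f}  full data: {full:.4f}")
```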
I think of that as an incredible innovation, to be able to say that the report doesn’t have to be perfect. It needs to capture the behavior, not the totality of it.
Ali Riaz: For this to actually work, we need a whole new philosophy around leadership and decision-making and performance management. People spend a lot of time worrying, “Hey, did I earn my bonus? Was I at 103 percent of the target, or 97 percent?” That worrying takes a lot of energy. Those conversations take a lot of time.
Now, really, as a CEO of the company, should I really be focusing on whether a valued employee’s bonus is 97 or 103 percent? Don’t I just want the employee to be happy? Personally, I don’t want a disgruntled employee, I want them to get the benefit of the doubt and go out and be happy and meet clients and be productive. But we are trained; we have this in our DNA, that we fight about 103 versus 97. Our boards want to know if it’s 103 or 97. Our management wants to know. Our line managers want to know. That’s just the way this tail is curled.
But for us to live with the realities of information growing more and more and speed getting faster and faster, we need a new way of thinking about not having precision but having a good understanding. And being able to live with that.
This is really interesting. I guess the obvious question at this point is how to bridge these gaps?
Sid Probstein: One thing I’m hopeful about is that I think managers get that they need to understand the frame a lot better. Say you’re a brand manager and one item is selling well and then slows down. You need to consider if that’s because you stopped promoting it, or because a competitor has a better product, or because users’ social media comments and blog entries that cover this stuff are negative.
Yes, the sales figures are relevant. Yes, a breakdown of features is relevant. But understanding the outside context is huge, too. What do the customers think? What’s the trend in the marketplace? What’s the buzz?
You’re saying there is an understanding among the executives you have contact with of those distinctions?
Sid Probstein: Absolutely. Ten years ago, it would be perfectly normal to participate in a meeting where nobody had done any — and I’ll use the term directly — “Googling” of the larger environment. They wouldn’t have looked up news stories, or looked up trends, or tracked down what people were saying about it. Now I think it’s rare to have a get-together where people haven’t educated themselves on the larger frame. And that’s a significant change.
Ali, can you say more about the leadership challenge that you began to describe, which is at odds with the very metrically driven way that people evaluate performance and lead organizations. The 97 percent versus the 103 percent is driven by trying to parse distinctions which, in the end, don’t really matter to a company’s overall thriving and success.
Ali Riaz: Generally speaking, most corporations are inefficient in a lot of ways, given the human factor, the data factor, the change factor. There’s a lot of factors involved. But in order to abandon that, managers have to come to believe that having a range of information is better than having one piece of information.
Attivio’s chairman and main investor, Per-Olof Söderberg, is an instigator for this type of dialogue. There have been times when one person or one team didn’t reach their goals, and we talked to him and we said, “They didn’t reach that quantitative goal, but qualitatively speaking, they’ve done a tremendous job.” They got their full bonus. So we are living it and breathing it today. But it has to come from the top.
I would love to see MBA programs that talk about what to do when two people come with two different sets of data for the same issue. I don’t think we have baked the reality into our education programs that this is going to happen, that you may never get the exact number right, and that as a leader, as a manager, as somebody who has to actually deal with this process, you have to figure out how to still move forward.
I want to play devil’s advocate for a minute. If I’m an executive, maybe I know how to capture customer feedback and external information about the competitive landscape. I get all this stuff. But it was hard enough for me to take my apples-to-apples report and make meaningful choices based on it. How the heck am I going to take all this other stuff and actually put it to meaningful use?
Sid Probstein: Right. That is exactly the role of leadership, which is to deal with uncertainty in this information age. Maybe the question is whether you should be so concerned with comparing apples to apples. Maybe the better question should be, “What is the result?” If I can produce an analytic that ignores 10 percent of my customer data but increases my conversion rate 2 percent, should I focus on fixing the problem so that I include that extra 10 percent of customer data, or should I just try to get that extra 2 percent?
At the end of the day, and you cite this in the intelligent enterprise survey, innovation is a key driver. Dealing with uncertainty in innovative ways. You don’t throw out the analytic that’s producing a 2 percent improvement just because it’s not 100% thorough.
And what happens — if I can just follow up on that — when you talk to a company about the kind of approach you’re describing, and they have previously been focused on trying to get their data right. What’s their response?
Ali Riaz: There are enough stories today about companies that are focusing on trends, not perfection, and winning against their competition as a result, so that we can demonstrate the value of this approach. We haven’t told our customers not to compare apples to apples. What we have done is to say, “Sure, look at how many apples you have, but what does that tell you about the market for apples?” So you may have grown 10 percent, but every competitor grew 30 percent. That’s important context. Or your apples have a shelf life of five days, but everybody else’s have a shelf life of 15 days. Or the clients you’re acquiring have a drop-off rate of 30 percent, while other companies have 2 percent.
Setting quantitative goals and measuring quantitative goals is human nature. I don’t think the capitalistic world would function without goals. I couldn’t function without them; I have personal goals that are quantitative, and I monitor them. That’s just the way we work. But providing more context, providing more sources of intelligence so that you are not only looking at the apples, is better. An Atlanta team delivering 97 without any local support offices may be fantastic compared to a Boston team delivering 103 with headquarters right behind it.
You’ve got to get the data right, and not just data, but the range of data, and then you have to have context for what the data means. Then you have to have leadership and business processes that allow for a dialogue. Do all that and you’ll actually be making intelligent decisions and not political CYA, all those things that happen every day in organizations and governments. Having a wider set of data and content, structured and unstructured, will allow you to learn to paint with colors that are new and old.
February 19, 2011
Tata Motors reworks its Fiat distribution strategy
Admitting that its five-year-old tie-up with Italian carmaker Fiat has not yielded the desired results, auto major Tata Motors on Tuesday said it was redrawing its distribution plans to prop up sales of its Turin-based partner.
As part of the plan, Fiat will now have its own independent brand showroom to showcase products, even as its cars would continue to be sold through Tata Motors showrooms. The brand showrooms would give Fiat a distinct identity.
“We are looking at ways to improve the sales numbers. We are not where we want to be (of Fiat-branded cars). We are redrawing plans to boost its sales,” said R Ramakrishnan, vice-president, commercial-passenger car business unit, Tata Motors. “To improve the image, we plan to open image brand centres (for Fiat). To begin with, two centres will be opened in Delhi and Pune.”
For the April-January period of the current fiscal year, Fiat India clocked a 14.6% decline in sales, at 17,405 units, against the year-ago period. In January, Fiat sold 2,174 units, against 2,215 units a year ago. Tata Motors, on the other hand, sold 283,100 units in April-January 2010-11, a 28% jump over 2009-10, while the firm sold 34,688 units in January, up 11.6% over last year.