Balancing Act
July 1, 2004
Contact centers today are all about empowering agents, investing in their skill development and reducing escalating attrition. Yet most companies still rely too heavily on “the old ways”: hard technical metrics that emphasize speed and quantity to track and manage agent performance.
It has become all too easy to judge the agents with the highest contacts per hour (CPH), lowest minutes per incident (MPI) or highest sales per hour (SPH) as the “best” performers, based solely on speed measures such as average handle time (AHT). Figure 1 illustrates the typical stack-ranking of agents based on productivity. Unfortunately, productivity is not all that matters to customers. These speed metrics need to be counterbalanced with “new ways”: quality measurements that are much harder to collect and even harder to associate with specific contacts.
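To make the “old way” concrete, here is a minimal sketch (in Python) of a speed-only stack ranking. The agent names and figures are hypothetical, and CPH and AHT are computed with their usual definitions: contacts divided by hours worked, and total handle time divided by contacts.

```python
# "Old way" sketch: stack-ranking agents purely on speed metrics.
# Agent names and numbers below are hypothetical examples.
agents = [
    {"name": "Agent 1", "contacts": 95, "hours": 8, "handle_minutes": 420},
    {"name": "Agent 5", "contacts": 70, "hours": 8, "handle_minutes": 455},
    {"name": "Agent 11", "contacts": 48, "hours": 8, "handle_minutes": 470},
]

for a in agents:
    a["cph"] = a["contacts"] / a["hours"]           # contacts per hour
    a["aht"] = a["handle_minutes"] / a["contacts"]  # average handle time (minutes)

# Rank fastest first; notice that quality never enters this view.
for rank, a in enumerate(sorted(agents, key=lambda x: x["cph"], reverse=True), start=1):
    print(f"{rank}. {a['name']}: {a['cph']:.1f} CPH, {a['aht']:.1f} min AHT")
```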
One of these is real-time customer satisfaction. Right now, you can measure quality and assign it to each agent and each contact in three ways:
Most contact center quality assurance (QA) teams currently use standard score sheets to assess agent call/e-mail quality. These are calibrated by supervisors and cover five to 12 contacts per agent per month; this provides some sampling, but it can be hit or miss and doesn’t necessarily reflect the customer’s perception of the interaction.
Very few contact centers measure post-contact customer satisfaction via an e-mail survey launched within minutes of a phone or e-mail interaction. This method, our favorite, asks a maximum of four questions about the customer’s reactions; the sample is much broader than in the previous method, and the feedback is very timely (a simple sketch of this approach follows below).
Virtually no contact centers track actual customer actions after the interaction to find out whether the customer placed the order, remained a customer or purchased more, which is the ideal scenario.
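Here is a minimal sketch of the second method: rolling a short post-contact e-mail survey up into a per-agent quality score. The agents, the four questions and the 1-to-5 scale are illustrative assumptions, not a prescribed survey design.

```python
# Sketch: aggregate post-contact survey answers (up to four questions, on a
# hypothetical 1-5 scale) into an average satisfaction score per agent.
from collections import defaultdict
from statistics import mean

# Each response: (agent who handled the contact, answers to up to four questions)
responses = [
    ("Agent 2", [5, 5, 4, 5]),
    ("Agent 3", [3, 2, 4, 3]),
    ("Agent 11", [5, 4, 5, 5]),
    ("Agent 3", [2, 3, 3, 2]),
]

by_agent = defaultdict(list)
for agent, answers in responses:
    by_agent[agent].append(mean(answers))   # one satisfaction score per surveyed contact

for agent, scores in sorted(by_agent.items()):
    print(f"{agent}: {mean(scores):.2f} average satisfaction over {len(scores)} surveys")
```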
The next step in the process of evaluating quality is to answer these questions: How good a job did the agent do? Did they solve the problem or complete a successful sale? Not until we collect and balance the “soft” quality side can we get a true picture of agent performance.
Viewed in this “new way,” scary facts of support center life are often unveiled, as Figure 2 illustrates. In this matrix, we see that some of the “fastest” agents do a poor job, causing repeat contacts or even losing customers (see agents 1 and 3), while some of the “slowest” performers, who might lose their jobs because they can’t keep up with CPH or SPH standards, are doing a great job with customers and delivering outstanding customer experiences (see agent 11).
The new balanced scorecard produces significantly better results on both axes, productivity (the speed element) and quality (the soft side), helping agents better understand where to improve. Once companies collect and report balanced performance data like that shown in Figure 3, they can identify “best agents” (blue), those “at risk” (red), those “miscast” (brown) and those “on the bubble” (green).
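A minimal sketch of that matrix view follows: each agent is placed in one of the four groups from a speed score and a quality score. The cut-offs, agent names and numbers are hypothetical assumptions for illustration, not the authors’ actual thresholds.

```python
# Sketch of the balanced-scorecard matrix: classify each agent from a speed
# score (CPH) and a quality score (customer satisfaction). Cut-offs and data
# are hypothetical examples.
FAST = 10.0   # example cut-off in contacts per hour
GOOD = 4.0    # example cut-off on a 1-5 customer satisfaction scale

def classify(cph: float, csat: float) -> str:
    if cph >= FAST and csat >= GOOD:
        return "best agent (blue)"
    if cph >= FAST:
        return "at risk (red)"        # fast, but quality is suffering
    if csat >= GOOD:
        return "miscast (brown)"      # slow, but customers love them
    return "on the bubble (green)"    # slow and low quality

scorecard = {"Agent 2": (12.1, 4.6), "Agent 3": (11.5, 3.1),
             "Agent 11": (6.4, 4.8), "Agent 12": (6.0, 2.9)}

for name, (cph, csat) in scorecard.items():
    print(f"{name}: {classify(cph, csat)}")
```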
So what action plans can you next implement to drive all agents to the ideal, blue upper left-hand corner of the agent performance management matrix?
“Best agents” (2): recognize and reward them, but also find out how they manage to be both fast and good. They probably don’t follow the norms set in training; figure out what they do differently and apply it to new and ongoing training.
“At-risk agents” (1 and 3): slow them down and help them build quality responses (adding capacity to cover the work they are no longer handling), targeting them first to match agent 5 and then to arc toward agent 4.
“Miscast agents” (11): find something else for them to do, since you won’t be able to get them to work faster. These people typically make great trainers, QA staff or level-3 mentors who help junior agents.
“On-the-bubble agents” (12, and even 10): ask what happened and find the right path for them, since they can be a drain on the team’s spirit. They may have been assigned to the wrong team leader, or their experience may point to the need for better hiring criteria.
In the end, managing agent performance is all about finding the right balance between technical and quality metrics to generate a roadmap that helps your support organization achieve greater levels of productivity and quality. Old technical metrics may be easy to measure, but ultimately it is the customer who holds the key to your success. Measuring customer satisfaction in real time is what will give you a true, accurate picture of your support team’s performance.
Bill Price is founder, president and CEO of Driva Solutions LLC, a strategic consulting and operational implementation services firm serving the global customer contact industry. He can be reached at [email protected]. Villette Nolon is president and CEO of NetReflector Inc., a Seattle-based provider of customer satisfaction measurement solutions that use online survey technology. She can be reached at [email protected].
Links: Driva Solutions LLC, www.drivasolutions.com; NetReflector Inc., www.netreflector.com