Hey @Heidi Geilke41!
Mike Shannon here from Vend by Lightspeed :)
We went fairly granular with our rubric setup and have already seen some great insights in the 3 months we've been using Klaus. I was afraid that if we scored at too high a level, we wouldn't be able to see at a glance what was driving those scores.
Luckily, between the way we set up our rubric and how easy Klaus makes reporting, we were able to gain great granular insights with little effort!
Our main categories are as follows: Resolution, Communication, Knowledge, Efficiency, and Contact Hygiene.
The main focus of the categories is the customer experience, but we wanted to make sure to include the day-to-day operational bits too, which is why we also include Contact Hygiene. While Contact Hygiene doesn't always translate directly into a better customer experience, it's important that we monitor our processes, such as the accuracy of our tagging for the purpose of feeding reports into Product and Engineering.
Each category has a number of criteria within it that run on a 4-point scale, more or less being 1 = FAIL, 2 = POOR, 3 = GOOD, 4 = EXCELLENT. We avoided 3- and 5-point scales to prevent fence-sitting; with no dead-middle option for people to plop scores into, the results are a little less ambiguous!
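For anyone curious how a 4-point scale turns into the percentages I mention below: a common mapping is (rating - 1) / (scale_max - 1), which puts a 1 at 0% and a 4 at 100%. Here's a minimal sketch of that arithmetic, purely for illustration; the exact formula Klaus uses under the hood may differ:

```python
def rating_to_pct(rating: int, scale_max: int = 4) -> float:
    """Map a 1..scale_max rating onto a 0-100% score: on a 4-point
    scale, 1 -> 0%, 2 -> ~33%, 3 -> ~67%, 4 -> 100%.
    (Assumed mapping for illustration; Klaus's formula may differ.)"""
    return (rating - 1) / (scale_max - 1) * 100

def category_score(ratings: list[int]) -> float:
    """Average the criterion ratings within one category."""
    return sum(rating_to_pct(r) for r in ratings) / len(ratings)

# e.g. Communication rated Etiquette=4, Tone=3, Personalization=2,
# Spelling & Grammar=4, Formatting=3 averages out to ~73%
print(round(category_score([4, 3, 2, 4, 3])))  # 73
```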
For example, within the Communication category, we have the following criteria:
- Etiquette
- Tone
- Personalization
- Spelling & Grammar
- Formatting
By setting it up like this, we can see at a glance how we're scoring in each sub-section.
These detailed snapshots have given us insight into things we may not have been able to get out of traditional metrics.
For example, in our Contact Hygiene category, we have criteria for case documentation / internal notes. Upon launching our QA program, we quickly discovered we were doing very poorly in this area, scoring 48-51% across a few consecutive months. This was increasing our AHT and causing friction with our customers, as other agents hopping in had to retread the same information before they could assist.
By discovering this via QA, we were able to do a crash course/refresher on the importance of proper case documentation. As a result, we saw the score increase from 51% to 84% in a month's time!
Here's a simplified look at our rubric (with a rough sketch of the same structure in code after the list):
Resolution
- Solution (quality of the solution given)
- Root Cause (was the root cause identified)
Communication
- Etiquette (how professional is the response)
- Tone (how well does the agent match the tone/personality of the customer)
- Personalization (how well does the agent personalize the experience)
- Spelling and Grammar
- Formatting (i.e., rather than a wall of text, does the agent use bold fonts, bullet points, embedded images, and hyperlinks?)
Knowledge
- Product Knowledge
- Customer Education (Did the agent capitalize on the chance to teach the customer something new about the product and leverage our help centre)
Efficiency
- Troubleshooting (How well did the agent troubleshoot the issue)
- Conversation Guiding (Did the agent retain control of the conversation or lose it to the customer and let it go off topic?)
- Wait Times (Did the agent avoid long holds, periods of silence and stalling)
Contact Hygiene
- Internal Notes / Case Documentation
- Contact Subjects
- Contact Tagging Structure
- Security Compliance
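If it helps to see the shape of it in one place, here's the same rubric expressed as a simple data structure, plus a small helper for sanity-checking exported review data. This is purely illustrative; Klaus manages the rubric in-product, and the helper function is a hypothetical example:

```python
# Our rubric as category -> criteria. (Illustrative only; Klaus stores
# and manages the rubric in-product, this is just its shape.)
RUBRIC: dict[str, list[str]] = {
    "Resolution": ["Solution", "Root Cause"],
    "Communication": ["Etiquette", "Tone", "Personalization",
                      "Spelling and Grammar", "Formatting"],
    "Knowledge": ["Product Knowledge", "Customer Education"],
    "Efficiency": ["Troubleshooting", "Conversation Guiding", "Wait Times"],
    "Contact Hygiene": ["Internal Notes / Case Documentation",
                        "Contact Subjects", "Contact Tagging Structure",
                        "Security Compliance"],
}

def missing_criteria(ratings: dict[str, int]) -> list[str]:
    """Return any criteria absent from a review's ratings, so no
    category gets silently skipped when analysing exported scores."""
    expected = {c for criteria in RUBRIC.values() for c in criteria}
    return sorted(expected - ratings.keys())
```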
This is the base rubric we piloted for our first quarter. As mentioned above, we've already gained a great amount of insight that our standard performance metrics might not have surfaced. A number of projects have kicked off as a result of starting our QA program.
We're excited to be fine-tuning our rubrics for Q2. We offer support on Live Chat, Phones, and Tickets, and we want to make sure our rubric really captures what great support looks like on each dedicated channel!
While this approach might not work for everyone, it's proven really useful for us!
I'm happy to discuss further or clear up any questions you might have on the above. I hope it gives a bit of food for thought :)
-- Mike Shannon