Many contact centres feel their speech application is something they “caught”. Channel owners often feel a lack of control over their own tools, and management can’t clearly articulate the performance of their system to key stakeholders. Worst case, anecdotal stories become as powerful as the truth, because there are no facts to dispel them.
But we’re changing that, putting the business on the front foot with a new customer experience (CX) methodology.
In the past two months, two of our clients have embarked on this new CX methodology, gaining more business insight and control over their respective open speech systems. That reflects our goal: up-skilling clients and sharing insights into how to get the most from speech, so they can develop their systems over time and drive the program of work themselves.
In support of our clients, we’re now running these two initiatives, placing a significant number of real customer utterances (using Cyara automated testing) through each open speech system to establish a baseline for its performance. This methodology is completely different from typical manual testing and from “tuning” based on transcription analysis. Manual testing, for example, can rarely, if ever, emulate common customer behaviours, and what our clients wanted to understand was the customer experience when the system was confronted with real customer behaviour. Tuning, on the other hand, is statistically volume-based and doesn’t necessarily reflect the call-handling priorities the business may have. Tuning is also rarely used for benchmark comparison, except for reporting an average recognition percentage.
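For the technically minded, here’s a minimal sketch of what such a benchmark run boils down to. This is an illustration under stated assumptions, not Cyara’s actual API: the Utterance fields, the recognise() stand-in and the helper names are all hypothetical.

```python
# Minimal sketch of a benchmark run over a corpus of real customer
# utterances, each labelled with the destination the business expects.
from dataclasses import dataclass

@dataclass
class Utterance:
    audio_path: str       # recorded customer audio
    expected_tag: str     # destination the business expects
    enquiry_type: str     # business category, e.g. "billing", "faults"

def recognise(audio_path: str) -> str:
    """Stand-in: play the audio into the live speech system (e.g. via an
    automated testing tool) and return the tag it assigns."""
    raise NotImplementedError  # hypothetical; replace with real playback

def benchmark(corpus: list[Utterance]) -> dict[str, str]:
    """Run every utterance through the system; record the tag returned."""
    return {u.audio_path: recognise(u.audio_path) for u in corpus}

def benchmark_rate(corpus: list[Utterance], results: dict[str, str]) -> float:
    """Share of utterances routed to the destination the business expects."""
    hits = sum(1 for u in corpus if results[u.audio_path] == u.expected_tag)
    return hits / len(corpus)
```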
While the reported “benchmark rate/score” is typically much lower than a tuning “recognition rate”, it shows our client:
a) First and foremost, from a regression perspective, whether any enhancements have had a detrimental effect elsewhere in the grammars. The test we just concluded showed a 33% improvement in requests that “didn’t recognise” previously – great! – but a larger decline in requests that DID recognise before – not so good. This demonstrated that the vendor was able to improve the targeted utterances but, in this instance, broke some working aspects of the statistical language model (SLM). The result is a clear accept-or-decline decision the business can make for this and any future SLM candidate (see the sketch after this list).
b) The impact from a business unit perspective on detailed enquiry types, not weighted by volume (on the assumption that business value follows enquiry value, not total volume). The latest test showed the key areas of weakness, and these could immediately be linked to business impact, i.e. which areas of the contact centre would be dealing with the recognition errors, and what those errors would “look like”, so a handling strategy could be developed (the sketch after this list breaks errors down this way).
c) Performance: enabling focus on more detailed KPIs and on where the SLM/application needs more tuning attention, which helps direct the effort that needs to be applied. Remember that your solution provider has to go through in-depth analysis themselves, so these insights also help them spend their time more wisely. At the start of the tuning cycle the business can communicate the key areas impacting business performance, giving the vendor critical input on where the speech system needs the most attention. In the past this conversation rarely took place at all, making it almost impossible for suppliers to meet the expectations of the business – a very tricky position to move forward from.
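To make points (a) and (b) concrete, here’s a minimal sketch of the comparison itself, building on the hypothetical helpers above. Again, the names are illustrative stand-ins, not a real tool.

```python
from collections import Counter

def compare_runs(corpus, baseline, candidate):
    """Per-utterance regression check between two benchmark runs."""
    fixed, broken = [], []
    for u in corpus:
        was_right = baseline[u.audio_path] == u.expected_tag
        now_right = candidate[u.audio_path] == u.expected_tag
        if not was_right and now_right:
            fixed.append(u)       # improvement the vendor targeted
        elif was_right and not now_right:
            broken.append(u)      # regression elsewhere in the grammars
    return fixed, broken

def impact_by_enquiry_type(errors):
    """Unweighted error counts per business category: every enquiry
    type counts equally, regardless of its call volume."""
    return Counter(u.enquiry_type for u in errors)

# Accept-or-decline decision: a candidate SLM that fixes a third of the
# previous misses but breaks MORE previously working requests fails.
# fixed, broken = compare_runs(corpus, baseline_results, candidate_results)
# accept = len(fixed) > len(broken)
# print(impact_by_enquiry_type(broken))  # which business units feel it
```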
BUT, please don’t think this is all about “tuning” the SLM better. That is part of the way forward, but placing this knowledge in the hands of the business allows other innovative solutions to be applied:
1. Rethink destinations and tags, avoiding conflicts based on business needs, with possible tweaks or redesign of the operating model to remove ambiguity (for the caller and the business). We’ve talked in previous blogs and case studies about “new” as a term that sometimes means something different to customers than to the business: a replacement credit card is a simple example of something a customer will call a new card. The solution we came to was simply to handle these very simple updates within the new applications team – getting over the illogical barrier that made this appear to be a problem in the first place. Problem solved.
2. Get smarter with data intelligence. Again, we’ve presented case studies where ambiguous or outright “wrong” requests can be put back on track if other information is used to fully understand the customer’s spoken request. For example, “I need faults, my phone isn’t working” would be re-interpreted as a Credit Department issue if the caller has been disconnected – something very obvious in their data. We’ve seen transfer savings in the vicinity of 10% from building these associations into the logic (a minimal sketch follows this list).
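Here’s a minimal sketch of that kind of re-interpretation logic. The tag names, the Account record and the disconnected flag are hypothetical stand-ins for whatever the real CRM or billing data exposes.

```python
from dataclasses import dataclass

@dataclass
class Account:
    disconnected: bool    # hypothetical flag from billing/CRM data

def route(spoken_tag: str, account: Account) -> str:
    """Re-interpret the recognised tag using customer data.

    A "faults" request from a caller whose service has been disconnected
    is almost certainly a credit issue, not a technical one.
    """
    if spoken_tag == "faults" and account.disconnected:
        return "credit_department"
    # The "new card" case from point 1 works the same way: send the
    # replacement-card request to the team that can actually handle it.
    if spoken_tag == "new_card":
        return "new_applications_team"
    return spoken_tag

# route("faults", Account(disconnected=True))  ->  "credit_department"
```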
The key is clarity – and that’s exactly what this benchmarking delivers.
We’re hoping to see many more businesses take control of their speech systems this way – it’s in everyone’s interest, and it will give the business the confidence to get its vendors even more engaged in developing the system and to extract greater leverage from the asset.
This is a whole new approach to gaining insight into the performance of your speech application and placing more control in the hands of the business to drive innovation. If you’d like to know more, drop us a note.