Measure the success of your help knowledge base content
Let me start by talking a bit about scope and strategy. Discussions about measuring your help knowledge base often get caught up in measuring call deflection on your website and in the strategy of your knowledge base -- whether the knowledge base is primarily for your agents or for your customer help website. Regarding call deflection, it’s extremely likely that not every customer on your help website would contact you if they fail to self-resolve, which is why how (and what) to measure with call deflection is frequently debated.
Your knowledge base must serve both audiences, and therein lies the fundamental issue: agents and customers have different needs.
- In most cases, your agents know exactly what the issue is and what to do to solve it; they just need to find the right article to share with the customer. This often leads them to search based on a solution.
- Your customers, on the other hand, visit your help website because they have a symptom and they don’t know the solution. The customer is looking for an answer based on a symptom.
I’ll say it again, your knowledge base must serve both audiences. Also, throughout this article, I will periodically reference processes that should be a part of your support or knowledge strategy in order for you to measure the success of your help knowledge base content.
I have one more related area worth mentioning before getting started: the ability to discover and find your help content. No matter how effective your help knowledge base articles are at solving problems and issues, if your customers and agents can’t find what they need, your knowledge base initiative has still failed.
With those related topics out of the way, I’m going to focus specifically on knowledge base article measurements in this article, with a concentration on effectiveness. I define effectiveness as whether the help article achieves its intended purpose. You may be asking, “Aren’t all articles intended to solve a problem?” Ultimately yes, but not necessarily directly. The intent of an article could be any of the following.
- Solve a problem, i.e. provide a solution. This article type is typically clear on the symptom, and there is one way to resolve it. It could also be an article that solves many issues, such as clearing your browser cache.
- Troubleshoot an issue, which then leads to the steps to resolve it. A good troubleshooting article has the customer try something and, based on the results, click through to an article that contains the solution.
- Direct the customer to your support staff because the issue can’t be resolved by the customer.
- Others such as training and reference materials.
Measure articles used by agents
Let’s dig right into measuring effectiveness when articles are used by agents. In most cases, I expect an agent to send an article containing the solution to the customer’s issue, but with email support the customer may not have provided enough detail, so your agent may send a troubleshooting article to get the customer down the right path. This leads to some questions: How do you measure an article’s effectiveness when an agent sends the customer an article, but the customer is still unable to resolve their issue?
Was the article too difficult to understand, or did the agent send the wrong article? I’ll get to this in a minute, but first consider the processes you’ve defined for your agents when it comes to knowledge base article usage. Your processes will be key to getting the measurements you want. If you force your agents to search for an article on every case, you will very likely introduce inaccurate data. If your phone agent knows the answer to an issue off the top of her head, once the call is wrapped up she may just select the easiest article to associate with the call instead of the correct one. An agent who feels pressure to close as many cases as possible, saddled with a process that adds little value, will take shortcuts on cases just to meet the required criteria.
So this is your first step in measuring article effectiveness: make it super-easy for agents to associate and send an article. For every article they’ve already used 20 times today and hundreds of times this month, let the agent retrieve the article using a simple ID that they will quickly memorize. Also make it easy for them to reference highly used/shared articles. (Don’t worry -- as long as you always update the same article when something changes, the agent will always have the most current information.) Only have your agents search for issues they’re unfamiliar with. Trust me, they’ll love you for this. (While you’re at it, make sure you teach them how to search using natural language AND provide familiar, easy-to-use filtering.)
Once we have the right processes and we know our data is clean, how do we measure? Simply report against the last article referenced (shared) in the support case -- that last article represents the primary customer issue and resolution. What if the customer returns for more help after an article was sent? That’s the measure. If your agent sends the customer article X and the customer comes back for additional help, either the wrong article was sent or the article failed. (It really is black and white -- the customer’s problem was solved or it wasn’t.) Now you just need to figure out: was it the article or the agent?
On the follow-up customer interaction (for those first interactions that failed), if no additional article was shared, i.e. your agent only had to provide more detail or clarity, then the article failed -- it was ineffective. (Don’t get fooled on this. If your agents routinely provide extra context for articles sent on the first interaction, the article is still failing, but your data won’t show it. Instead, your agents should have submitted a correction for the article. [Sounds like another process you must have.]) If a subsequent article is shared with the customer, then we assume the original article was shared in error, and you can still report against the last article shared -- the correct article for the issue.
Yes, there will always be exceptions, but they should be minor. For example, if you frequently send multiple articles and/or handle multiple issues with a single case, add a checkbox in your system where your agent can specify the primary issue/article for the case.
To bring this together for measuring help article effectiveness based on agent usage, the measure works as follows. Evaluate all articles that were the last article sent/shared on a case by an agent. For articles associated with a case where there was a subsequent interaction with your customer, the article failed to live up to its intended purpose, i.e. the article needs some work. For all other cases, the article was successful. (For the articles that failed, you can use the case verbatims to figure out what needs improving/correcting.)
You will end up with a percentage of success (or failure) for each article. At the end of the discussion on measuring articles on your help website, I’ll talk about different ways to slice this data.
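To make the roll-up concrete, here is a minimal Python sketch, assuming your CRM can export, per case, the last article shared and whether the customer came back. The field names (last_article_id, had_followup) are hypothetical stand-ins for whatever your system exports.

```python
from collections import defaultdict

def article_effectiveness(cases):
    """Roll up per-article success rates from closed support cases.

    Each case is a dict with hypothetical fields:
      - 'last_article_id': the last article shared on the case
      - 'had_followup': True if the customer came back after the
        article was sent (i.e. the article failed its purpose)
    """
    sent = defaultdict(int)
    failed = defaultdict(int)
    for case in cases:
        article = case.get("last_article_id")
        if article is None:  # no article shared; not part of this measure
            continue
        sent[article] += 1
        if case["had_followup"]:
            failed[article] += 1
    return {article: 1 - failed[article] / sent[article] for article in sent}

cases = [
    {"case_id": 1, "last_article_id": "KB-101", "had_followup": False},
    {"case_id": 2, "last_article_id": "KB-101", "had_followup": True},
    {"case_id": 3, "last_article_id": "KB-202", "had_followup": False},
]
print(article_effectiveness(cases))  # {'KB-101': 0.5, 'KB-202': 1.0}
```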
Pro Tip: Instead of separate close codes, you can use article usage data to identify your top call drivers.

There are a few more issues you might have thought about when measuring articles used by your agents.
- Did any of your agents send an article whose intention is to direct the customer to support to get the resolution? If so, was it agent error or something about the article? In my experience, articles that direct a customer to contact you are ONLY created when there are NO “things to try” for customer self-solving, and therefore there wouldn’t be any information in the article that an agent would need to convey to a customer -- they’ve already contacted you. So if any of these articles show up in your data, you have some additional digging to do to determine why it’s happening -- is it the article or the agent?
[This goes hand-in-hand with your contact strategy, which is beyond the scope of this article. Briefly: at no other time should you include contact-specific details within an article. Your help website should have easy-to-find information on how to contact you. That “contact us” article should be for a unique exception and very brief: (1) enough info for the customer to confirm the specific issue and (2) instructions to contact you to get assistance in resolving it.]
- Articles sent by agents that your customer could have found on your help website. Until we look at the help website data, we don’t know whether the customer even tried to self-resolve before contacting you, but when we measure articles based on agent usage, it really doesn’t matter. We just want to see whether the article works (was it effective?) in solving the customer’s problem or issue.
- Measuring the knowledge base improvement process your agents use. I hope you are already using the KCS (Knowledge Centered Support) methodology or your own processes and methods to improve your knowledge base. As experts on your products and services, your agents should be identifying article shortcomings for every topic that crosses their desk. In short, you need to measure that:
- Agents are flagging or reporting articles that need improvement or the need for a new article.
- Those submissions are legitimate. For example, a change submitted because of a difference in opinion on article style wouldn’t be a legitimate submission.
- Article corrections are made in a timely fashion.
Everything I’ve covered so far works for articles that agents are also using to support your customers. But what happens for articles with no agent involvement? For knowledge base articles that are never used by an agent, you absolutely need a good effectiveness measurement. (I’m not suggesting you only measure articles on your help website if you don’t have agents, but rather that this measurement is critical when you don’t have agents.)
Measure articles on your help website
A huge portion of your customer support strategy should be focused on customer self-resolution, and your help website will likely be the largest component of that strategy -- it’s cost-effective and very likely the preferred support model for a large portion of your customers. (This varies by several factors, including the product, its complexity, and your customer demographics.) I’ll leave measuring the entire help website to another article and continue looking specifically at measuring help content effectiveness on your help website.

Before getting into the measurement details, consider for a moment why a customer might come to your help website for assistance (or even call you). What were they doing or trying to accomplish just prior to visiting? I’m willing to bet that a significant majority of your customers didn’t wake up this morning wanting to read your help content the way they might read their Facebook feed or Twitter posts each morning. While a few may visit for educational purposes, most paid you a visit because something went wrong while they were trying to use your product or service and they were unable to resolve it on their own. Maybe they were trying to plan a trip to visit old friends and your email software quit sending emails. Perhaps they were trying to book a reservation at their favorite restaurant and your app crashed again and again. Or maybe they have a deadline to finish a drawing for one of their customers and they’re unable to save the file.
My point is, your customer is using your products and services to live their life or do their job. While they may have loyalty to your company (for now), ideally they would never have any friction or interruption from their intended goal, i.e. ideally they would never have a reason to seek your help. Why is this point so important? Because as soon as their issue or problem is resolved, your customer is going to go back to their regularly scheduled life. They might make a note to reevaluate you as a provider or to look for a discount or refund from you, but their intention is to go back to what they were doing before something happened with your product or service that interrupted their personal or professional life.
With that in mind, let’s start by defining a conversion. When measuring a website, conversion is the key metric. If you’re selling products (think Amazon), a conversion is the successful completion of the purchase step(s) to buy the products in your virtual shopping cart (i.e. your credit card was charged and a fulfillment process was triggered). For other companies, a conversion could be the point when you provide your contact information in exchange for a whitepaper or other product details. (The company likely has one or more marketing initiatives that use the collection of your contact info, shared with their sales team, as a critical measure of campaign success.) These conversions are the measure of success for the site’s intended goal. Here’s one more: content sites such as Medium or CNET want you to read article after article so they can serve you ads, which generates revenue for them. They have multiple conversions -- each time you click through to another page, another story loads with more ads. (The companies behind the ads also have a conversion -- when you click on the ad.)
For your help website, you probably have just 2 conversions:
- [Preferred] Your customer resolves their issue through reading 1 or more support articles. Unlike Medium or CNET, the goal is to solve the customer issue with the fewest number of articles (but not at the expense of having huge articles covering multiple topics). Remember, the customer is on your site because they need to be to solve an issue, not because they want to be on the website.
- Your customer successfully submits a ticket for assisted (agent) support. This is similar to the first two conversion examples I shared: an exchange of some info, and in return you provide a support service (free or for a fee).
Conversion rate = (all visitors that left your help website (site exit) immediately after reading a help article) ÷ (all visitors to your help website)
When you look at your web analytics data, this is the exit rate on your site from knowledge base help articles. If you had 100 visitors enter (visits or sessions) your help website and 75 of those visitors left your website immediately after reading a help article, you would have a 75% conversion rate or a 75% article effectiveness score.
The first argument I usually hear is “The visitor left the website to try what the article suggested. How do you know it solved their issue?”
To which I ask in return, “Did they come back?” While there are a number of potential corner cases as to why a visitor didn’t return, the most likely explanation is that they actually resolved their issue and then continued with their goal or task, i.e. got on with their life. If the visitor returns before their session times out (usually 30 or 60 minutes), then you can conclude that the issue wasn’t resolved. Of course, even if they come back before the time-out, they will still eventually leave. For each visitor session on your website, evaluate the last page viewed, i.e. the exit page, when they finally left your site.
Pro Tip: If visitors go away for a long period of time but still return within the session timeout limit, you will see long page view times. If you’re evaluating how to correct an article that otherwise seems accurate, it’s likely the article is just too long or too complex.
Let’s dig a bit deeper on our measure -- I don’t think we chose the best denominator. Have you looked closely at all the visitors to your help website? How many of those visitors never even try to self-resolve? How many tried searching but never clicked into an article? While each of these visitor types suggests other opportunities to improve your website, they dilute the real help article effectiveness measurement. Therefore I recommend changing the formula to exclude any visitors who never viewed at least one article.
Conversion rate = (all visitors that left your help website (site exit) immediately after reading a help article) ÷ (all visitors who viewed 1 or more articles)
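As a rough sketch, here is how both versions of the score could be computed from raw session data. The session fields (viewed_article, exit_was_article) are hypothetical stand-ins for dimensions you would pull from your web analytics tool.

```python
def effectiveness_score(sessions, require_article_view=True):
    """Exit-rate-based article effectiveness.

    Each session is a dict with hypothetical fields:
      - 'viewed_article': visitor opened at least one help article
      - 'exit_was_article': the last page viewed (the exit page)
        was a help article
    """
    if require_article_view:
        # Refined denominator: only visitors who viewed >= 1 article
        population = [s for s in sessions if s["viewed_article"]]
    else:
        population = sessions  # original denominator: all visitors
    if not population:
        return 0.0
    converted = sum(1 for s in population if s["exit_was_article"])
    return converted / len(population)

sessions = [
    {"viewed_article": True, "exit_was_article": True},
    {"viewed_article": True, "exit_was_article": False},
    {"viewed_article": False, "exit_was_article": False},  # bounced from search
]
print(effectiveness_score(sessions, require_article_view=False))  # 0.33...
print(effectiveness_score(sessions))                              # 0.5
```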
Now you can use this to evaluate overall success and also slice it by a few things. (Likely you’re going to want to combine these together to gain the most insight.)
- Articles for a given product. Different products likely have different support volumes. This lets you evaluate each individual product, so your most frequently visited products don’t overshadow the performance of other, less popular products and services. Use the formula above, but limit it to help website visitors for a particular product or service.
- Per-article basis. By looking at the exit rate on a per-article basis in your web analytics software, it’s easy to find which articles have a low exit rate. Even better, add a weighting factor so the articles that get the most views, even if not the worst scoring, get improved first. (See the sketch following this discussion.)
- By article type. Remember the different article types I described at the beginning of this article? This is where you’re really going to find where improvement is needed, by also considering what the visitor should be doing AFTER reading the article. In some cases, clicking into another article or onto another page IS the correct behavior.
- Solver. This will be the bulk of your articles. These articles should have been written specifically to resolve issues without requiring additional reading, so the exit rate for this article type should be very high.
- Troubleshoot and redirect. While some customers may resolve their issue by reading this article type, a large proportion of the traffic should lead to another article. So the exit rate for these articles will likely be low -- the opposite of the “Solver” articles above.
- Contact support. Like the “troubleshoot and redirect” articles, there is really no expectation that customers exit from these article types. You want to verify these visitors actually clicked through to your contact flow.
- Others. Remember the other possible article types I previously mentioned? They should also be measured based on their intended purpose. Take training documentation: unless you have a single, large training document, you would expect customers to move through all your training documents. While the overall roll-up score of exiting from an article should still score well, at an individual article level those documents may not appear to perform well when article exit is the benchmark of good performance. If nothing else, exclude those documents from your overall article performance number.
Based on this discussion of measuring per article type, you should adjust your formula accordingly. Remember, if you’re looking at the overall performance of your help website, the more customers who leave after reading an article, the more effective your help knowledge base content is. But if you’re measuring on a per-article basis, you need to consider the type of article (the article’s intention), as some articles are expected to route customers onward rather than have them exit the website from that article.
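To illustrate, here is a minimal Python sketch of type-aware scoring plus the view-weighted prioritization mentioned above. The article fields and the expected-outcome mapping are hypothetical; adapt them to your own analytics export and article taxonomy.

```python
# Success depends on the article's intent. This mapping is
# illustrative -- tune it to your own article types.
EXPECTED_OUTCOME = {
    "solver": "exits",                       # customer should leave the site
    "troubleshoot": "next_article_clicks",   # customer should click through
    "contact": "contact_clicks",             # customer should reach the contact flow
}

def type_aware_score(article):
    """Score one article against its intended outcome.

    `article` is a dict with hypothetical fields: 'type', 'views',
    and outcome counts ('exits', 'next_article_clicks', 'contact_clicks').
    """
    outcome_field = EXPECTED_OUTCOME[article["type"]]
    if not article["views"]:
        return 0.0
    return article.get(outcome_field, 0) / article["views"]

def improvement_priority(articles):
    """Rank articles to fix first: weight a poor score by view volume
    so heavily trafficked, underperforming articles float to the top."""
    return sorted(
        articles,
        key=lambda a: (1 - type_aware_score(a)) * a["views"],
        reverse=True,
    )

articles = [
    {"id": "KB-1", "type": "solver", "views": 5000, "exits": 3000},
    {"id": "KB-2", "type": "solver", "views": 200, "exits": 40},
    {"id": "KB-3", "type": "troubleshoot", "views": 800, "next_article_clicks": 600},
]
for a in improvement_priority(articles):
    print(a["id"], round(type_aware_score(a), 2))
# KB-1 0.6 -- mediocre score but huge volume, so fix it first
# KB-3 0.75
# KB-2 0.2
```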
To this point, I’ve discussed article effectiveness measures and some related processes for your agents. As part of the greater effectiveness analysis, we should briefly discuss some measures of related topics.
Article changes
Are you looking at how often articles are being updated? Do you know why they’re being updated? I previously mentioned you should measure and make sure you’re only getting appropriate change submissions. Assuming the submitted changes are appropriate, consider these 2 additional measurements.
- Updates within the first 30 or 45 days of a new release. Articles for a product release may require one or more updates. This could be as simple as a patch or minor change after a release, but it could also very much indicate the knowledge base articles were incomplete or inaccurate at the time of release. This measure is a great way to flag potential problems, specifically:
- A product was released too early. In this situation, there may be frequent product updates, which therefore require frequent knowledge base article updates. The cost for many frequent changes is often not considered or not fully understood. If you’re also paying for translation, this can be a big financial hit.
- Poor work was done by your article writer. This might be the best method to identify an issue with a writer who hasn’t been effective in authoring new content.
Remember, while you will likely always have some changes -- from swapping in a screenshot of the final released version to updating a late-changing error message -- the total number of changes should be small.
- Frequent changes to an article during any time period. In my experience, this is usually a case where an argument is happening over the accuracy of an article. (Though it could also be a new KCS program where agents are looking for any excuse to update an article.) The point is that several changes to an article in a short time period likely indicate a problem that requires further investigation.
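Here is a small sketch of how you might flag these, assuming you can export article revision history with dates; the tuple format and thresholds are illustrative.

```python
from datetime import date, timedelta

def flag_unstable_articles(revisions, release_date, window_days=45, threshold=2):
    """Flag articles revised more than `threshold` times within
    `window_days` of a release -- a hint of an early release, an
    ineffective writer, or an accuracy dispute.

    `revisions` is a hypothetical list of (article_id, revised_on) tuples.
    """
    cutoff = release_date + timedelta(days=window_days)
    counts = {}
    for article_id, revised_on in revisions:
        if release_date <= revised_on <= cutoff:
            counts[article_id] = counts.get(article_id, 0) + 1
    return [article for article, n in counts.items() if n > threshold]

revisions = [
    ("KB-301", date(2023, 3, 2)),
    ("KB-301", date(2023, 3, 9)),
    ("KB-301", date(2023, 3, 20)),
    ("KB-302", date(2023, 4, 1)),
]
print(flag_unstable_articles(revisions, date(2023, 3, 1)))  # ['KB-301']
```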
New article creation for a release
There are 3 points worth measuring, and therefore discussing, regarding article creation.
- Changes after a release. This is the same measure and argument I made above.
- Articles that are never used. Think about the time and effort it takes to create each article. After a new product is released or an update is made to an existing product, were there articles created that have never been viewed? Why is that? (A short sketch for finding these follows this list.)
- The article covered a subject that isn’t needed by or of interest to your agents and customers. This should be captured so it doesn’t continue for future product releases.
- Your agents and customers were unable to find the article. This could be due to a number of reasons from the article not getting published to a poorly written title to an issue with SEO and/or search indexing.
- Creation of new articles. It can be difficult to predict all possible issues, and therefore all necessary knowledge base content, for a new product. If you have to create too much new content after a product launch, there’s likely something wrong with your process for identifying content needs. There are definitely differing opinions on the right approach to help content development as it relates to a launch: some companies want very minimal content and let agents identify what’s needed, while others may not even have agents and want to get this as close to perfect as possible at launch. No matter where you fit, watch this measure to make sure it doesn’t differ from what you expect.
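As promised above, a short sketch for surfacing never-viewed articles, assuming you can join the list of articles published for a release against view counts from your analytics; both inputs are hypothetical.

```python
def never_viewed(published_ids, view_counts):
    """Return articles published for a release that have zero views.

    `published_ids` is the set of article IDs created for the release;
    `view_counts` maps article ID -> views since publication. Both are
    hypothetical stand-ins for your CMS and analytics exports.
    """
    return sorted(a for a in published_ids if view_counts.get(a, 0) == 0)

print(never_viewed({"KB-410", "KB-411", "KB-412"}, {"KB-410": 57, "KB-412": 3}))
# ['KB-411'] -- never found, or never needed?
```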
Find what’s missing
I already talked about the expectation that your agents should be identifying both factual problems with your content and gaps in your content, but that’s likely not enough. Who’s making sure your customers are making the most of your help content? Perhaps your content is already extremely well written, but you’re still only achieving a 75% effectiveness score? There are (at least) 2 more areas to look.

Internal article audit
When was the last time you audited articles that are only available to your agents? For every knowledge base article that is viewable only by an agent, when your customer has that issue, they MUST talk to one of your agents to resolve it. Examine how often internal articles are used, and make sure you don’t have topics you would be OK sharing with your customers but currently aren’t sharing publicly. It’s time to rewrite those so customers can self-resolve.
[Related to this, the percentage of issues solved by your agents that require an internal article or access to an internal tool is a great (inverse) indicator of how well your help website is working. The more customers are able to help themselves, the fewer agents you need to employ.]
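If you want to track that indicator over time, a trivial sketch might look like this; the case fields are hypothetical.

```python
def internal_reliance_rate(cases):
    """Share of agent-resolved cases that needed an internal-only
    article or internal tool. Fields ('resolved_by_agent',
    'used_internal_resource') are hypothetical CRM exports.
    """
    resolved = [c for c in cases if c["resolved_by_agent"]]
    if not resolved:
        return 0.0
    internal = sum(1 for c in resolved if c["used_internal_resource"])
    return internal / len(resolved)
```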
Article ratings and feedback
If you aren’t already, start collecting a rating (thumbs up / thumbs down) and feedback on your knowledge base help articles from your customers. In my experience, assuming you have statistically relevant data, i.e. enough ratings, the ratings will closely mirror the exit rate for articles intended to solve issues. But what’s really important is that the feedback will leave clues as to how the content isn’t meeting the needs of your customers. While I’ve also seen a lot of irrelevant or inapplicable feedback, if you’re able to filter through the noise, it’s a great place to identify gaps in your help content.
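One way to act on the “ratings mirror exit rate” observation: flag the articles where the two diverge, since that is usually where the written feedback is worth reading first. A hedged sketch, with illustrative fields and thresholds:

```python
def rating_vs_exit_gaps(articles, min_ratings=30, gap=0.25):
    """Surface articles where the thumbs-up ratio and the exit rate
    strongly disagree -- a cue to read the written feedback. Fields
    ('id', 'ups', 'downs', 'exit_rate') are hypothetical.
    """
    flagged = []
    for a in articles:
        total = a["ups"] + a["downs"]
        if total < min_ratings:
            continue  # not enough ratings to be meaningful yet
        up_ratio = a["ups"] / total
        if abs(up_ratio - a["exit_rate"]) > gap:
            flagged.append((a["id"], round(up_ratio, 2), a["exit_rate"]))
    return flagged
```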
Conclusion
By now you should have “the big rocks” to measure the success of your help knowledge base content. I realize there is much more to talk about, including where to focus next now that you know how your content is performing. Between your agents and customer feedback, plus measuring the processes I’ve shared, you should be able to make quite a bit of positive progress.

So what’s left? Here’s a list of related topics and measures.
- Measuring the success of the discoverability or findability of your content (and how to fix it). At the time of this writing, I’ve been working on another article that discusses 5 things that are more important to finding your help content than your site’s search engine.
- Measuring your help website. This is very much inter-related to your content and findability success, but there are a few more things such as navigation, taxonomy, and your contact flow to look at, too.
- Effectiveness of images and videos. You might be surprised here. Remember, these can impact page load times and force unnecessary page scrolling.
- Regional and cultural differences. Hint: no matter where your customer is located, no matter what language they speak, they didn’t wake up this morning thinking they wanted to read all your new knowledge base article posts.
- Analyzing your internal knowledge processes, whether KCS, Scrum, or something else.
There are also very related, relevant topics that I’d like to address at some point.
- Help website strategy and knowledge base content strategy. These are inter-related but each deserves its own discussion.
- Knowledge base content maintenance. I actually covered much of this already, and the rest I would likely include in a knowledge base content strategy discussion.
- Chatbot, community, and social support strategy fit within your strategy discussions, too. For example, a chatbot can be very helpful in content discoverability while a community is a great resource to help you identify gaps in your help content.
- Writing for the web, writing for translation, writing for mobile, and SEO. I would consider all of these subsets of your knowledge base content strategy, including an ongoing effort to manage your SEO performance.
After this long list, I feel like I’ve likely left a few things out. Maybe abuse and safety, for example. Oh, and dealing with trolls and scammers. What else?
Let me know your thoughts and feedback on this and potential future topics.