Understanding Success by Channel [White Paper]
KCS v6 Knowledge Domain Analysis Reference Guide by the Consortium for Service Innovation is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. (Please follow the link for more information and definitions.) For information about commercial use or any permissions beyond the scope of this license please email info@serviceinnovation.org.
Background and Context: Why and how we seek to measure knowledge success.
Executive Overview: How can we measure the return on investment in knowledge?
Assumptions and Limitations: Assessing the customer self-service experience is difficult and there is no one measure that indicates success.
Prerequisites: Meaningful channel measurement requires definitions tailored to your organization, access to data, and time.
Service Engagement Measures Spreadsheet: The spreadsheet is divided by types of measures and offers good, better, and best methods to gather data.
Glossary of Terms: Definitions relevant to self-service measures in the context of customer support.
Project Acknowledgements: History and participants.
Understanding Success by Channel
Measuring the value that knowledge provides to the organization has been a topic of conversation for Consortium for Service Innovation members for a number of years.
Specifically, Consortium members have been concerned with measuring the success, and justifying the investment, of a self-service implementation. Experience has shown that a robust self-service mechanism provides more value than just "case deflection," but quantifying this is a complex endeavor.
Our ultimate goal is to provide a comprehensive view of the value of serving up knowledge across all relevant channels within a company.
This is Phase 1: we are looking at activity in company-provided self-service mechanisms and the assisted support center (where knowledge workers are using KCS).
Future phases will explore community and social channels as well as bots, automation, and proactive engagement.
We propose tracking a short list of measures over time (the Service Engagement Measures Spreadsheet) to provide visibility into your organization's total demand for knowledge and how that demand is being fulfilled. We've done our best to provide context and supporting materials, and we look forward to hearing from you about what you learn on this journey.
This resource reflects ongoing work done by a group of about 35 Consortium members, representing 18 member companies of various sizes and markets, and is offered as a supplement to the Measurement Matters v6 paper.
Background and Context
Why and how we seek to measure knowledge success.
Why Measure Channel Success?
The goal of the service or support organization is to improve customer success and productivity with our products and services. To do this successfully, we must understand the customer's activity and experience.
Traditionally, we in support have focused solely on managing the cost of incoming cases, but the scope of customer engagement has expanded well beyond "the case." In environments where Knowledge-Centered Service (KCS®) is being used, the value created for the organization by removing roadblocks between knowledge and requestors (often by publishing knowledge through a self-service mechanism) must be looked at in the context of the entirety of customer demand - not just engagement with an agent/knowledge worker through an assisted channel.
The benefits to the overall business of successfully delivering knowledge through self-service have led to industry-wide changes to customer engagement strategies and a wide array of new tools and technology to enable self-service content creation, delivery, and measurement.
Objective
Our objective is to provide a measurement framework that enables us to better articulate the value of delivering knowledge through various channels. While we have referred to this as "measuring self-service success" over the course of this project, it has become clear that what we aim to do is measure the organization's success with delivering knowledge through self-service in the context of total demand. To do this, we need to have input on what success means from both the requestor’s and the company’s point of view.
Prior to this work, the support and service industry did not have a common vocabulary or a standard set of measures for assessing customer success with knowledge, nor did we have standard measures to assess the value of self-service mechanisms within our organizations.
Without a standard, companies had to create - and defend - their own measurement model. With different measurements and terminology, it is difficult to have meaningful conversations about customer engagement channels and their success.
Audience
This paper was developed with two audiences in mind:
The introduction, background and context, executive overview, and assumptions and limitations sections are intended to provide context for practitioners and leadership around why this is important.
The remainder of the paper aims to provide a "how-to" for the folks who develop and manage the organization's measurement model.
The Service Engagement Measures Spreadsheet enables us to articulate the value of an effective customer engagement model beyond the current mindset of focusing only on cost savings or the "give me one number to track" mentality. The spreadsheet is designed for an organization to benchmark against itself.
Understanding Demand
The Customer Demand Model is one of the foundational ideas that identifies the value of knowledge sharing.
Traditionally, the support industry has focused solely on managing the cases that come into the support center, and has understood cases as representing the total demand for support. As Consortium Members have gained more visibility into the activity of their customers, it has become clear that for nearly all organizations, the volume of questions asked or issues raised in a self-service mechanism is ten times the demand coming into the assisted model. In communities and social media spaces, the demand is thirty times what we see in the support center. If we do the math on this, that means we are serving less than 3% of the total demand our customers have through the assisted model.
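To make that arithmetic concrete with an illustrative (not measured) example: if the assisted model handles 10,000 cases in a month, these ratios suggest roughly 100,000 issues pursued through self-service and 300,000 in communities and social spaces - a total demand of about 410,000, of which the 10,000 assisted cases are 10,000 / 410,000, or roughly 2.4%.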
This means we have a huge opportunity to improve customer success and productivity with our products and services. If we can leverage what we are learning in the assisted model by publishing that knowledge through other, more easily-accessible channels, we can reduce time and level of effort spent on finding answers to known issues, while making sure our knowledge workers are available in the assisted channel to work on new issues.
Greg Oxton describes the Customer Demand Model in this 10-minute video.
Cost Avoidance is a Limiting Perspective
Cost avoidance is also known as contact avoidance or case deflection, and assessing the cost savings associated with issues resolved through self-service engagement involves a number of pitfalls.
Looking holistically at customer demand, customers pursue resolution to their issues in a number of ways. Some of these issues are resolved with self-service, but only a percentage of failed self-service attempts become a case or incident in the assisted model.
The Demand Flow example helps visualize this. While there is some small percentage of issues that a customer will start researching in self-service, move to a chat bot, and then open a case in pursuit of an answer, there are many others where a customer will poke around in Google, maybe land on a self-service article, then get distracted, and then decide it wasn't that big of an issue to begin with.
Therefore, approaching self-service success with only cost avoidance in mind is a limiting perspective. Our focus is improved customer success and reduced customer effort; leveraging our internal investment in KCS by providing knowledge through self-service is a key factor in meeting that goal.
The Customer Experience
Two key concepts that are fundamental to the Consortium’s work are the Value Erosion and Value Add Models. These models address the dynamics of maximizing customer value realization from our products and services, and provide additional context around our efforts to measure channel success.
Very briefly: value erosion happens when a customer encounters an issue. We want to minimize value erosion by helping the customer find a resolution as early in their pursuit as possible. Customer service and support has focused on the value erosion model for years, but we have an opportunity to look at how we add value. Can we increase the customer's capability, reduce their effort, and create a pleasant experience without them experiencing an issue? These can be important components of a successful self-service interaction.
Summary
Self-service is an important customer engagement method and, in many segments, it is the requestor’s preferred way to get information about our products, services, processes, and policies. Self-service activity is many times greater than assisted support activity. Requestors seek information via company-provided self-service mechanisms, communities, and social networks.
A requestor (customer, partner, or user) doesn’t necessarily distinguish an organization’s self-service mechanisms from what is available through communities and social networks.
Our goal is to provide a view of the health and value of knowledge offerings across all relevant channels. As a starting point, we are looking at the activity via self-service mechanisms and the assisted support center.
Executive Overview
How can we measure the return on investment in knowledge?
Our goal is to provide a view of the health and value of knowledge offerings across all relevant channels, or put another way: How well are we doing at connecting customers to content?
Why Measure Success With Self-Service?
We generally refer to the benefits of KCS in three stages:
Operational efficiency
Customer success with self-service
Improved products & services
Operational efficiency is relatively easy to measure. After implementing KCS, it doesn't take long to see improved resolution times, improved first call resolution, and/or reduced escalations, for example. But once those gains in efficiency are achieved, they simply become how you run your business; the incremental value has already been realized.
Success with self-service is a mid-term benefit of KCS. It takes some time to publish and improve articles that are findable and usable by an external audience. For years, Consortium members have struggled with quantifying the benefits realized by improved success with self-service. We have lots of anecdotal evidence: customers are happier when they find answers without opening a case, and knowledge workers are happier when they get to solve new problems instead of repeating known answers. However, these things often do not help justify an investment in knowledge and self-service.
Justify Your Investment in Knowledge
The Service Engagement Measures Spreadsheet is intended to help justify your transformation investment. It demonstrates ROI for your KCS program, your staff, and your tools. It also offers a way to communicate to the company at large the health and engagement of your install base. This may require an investment in analytics, or at least some connecting of folks who have the data you need.
Delivering knowledge through self-service is about multiplying reach! We can leverage knowledge captured during assisted interactions at a VERY low cost. How do we know we're getting full value out of our knowledge implementation?
Benchmark Against Yourself
Measuring success with self-service becomes very complex, very quickly. Because of the variables involved, the only meaningful baseline is against your own performance. This means you need to define your scope in a way that makes sense in your environment, and you cannot report out on it without context.
Judgment is required!
This work was developed and tested over the course of more than 20 meetings, with a group of Consortium members representing 18 widely varied companies - including small, medium, and large enterprises, both publicly traded and privately held, focused on both business and consumer audiences, and at multiple stages of KCS maturity. The one constant in our initial findings was: these numbers are only meaningful as part of the larger environment of an organization, and there is very little value in attempting to benchmark against other organizations. Our recommendation is to gather data monthly and report out quarterly. Once you have a completed spreadsheet - once you start to get a sense of the total demand for support outside the assisted model - can you imagine what would happen if you turned off self-service?
Please see the Measurement Matters v6 paper for more benefits offered by a mature KCS implementation.
Assumptions and Limitations
Assessing the customer self-service experience is difficult and there is no one measure that indicates success.
Assumptions
Definition of Self-Service
For our purposes, self-service is defined as:
Information that improves customers' success and productivity with our products and services
Usage and "how to" (learning)
Frequently asked questions
Basic configuration and/or interoperability information
Information about fixes: troubleshooting, patches, drivers, and workarounds
Product documentation: user manuals, guides
User intent is about solving an issue as opposed to purchase, design, or value-added services
See Glossary of Terms for more definitions used in this project.
Scope: A Phased Approach
Phase I: defining self-service measures
Assessing the requestor's experience and success with self-service
Assessing the value of self-service for the organization
Phase II: measures for communities and social networks
Given the varying maturity of self-service deployments and resources available, we provide "good," "better," and "best" measurement options.
Limitations
Every Consortium Member company that worked on this project brought their own handful of indicators - and everyone felt their indicators could be improved. Different business models need different metrics. For example, when looking at clickstream analysis, everyone has a different level of sophistication or a different journey map as to when/where clickstream analysis starts.
Not all unsuccessful self-service attempts result in case creation, and not all successful self-service engagements represent a case deflected. While we attempt to distinguish self-service attempts from assisted interactions, another perspective to consider is parallel solving while a case is open. We cannot accurately measure all scenarios at scale.
Due to the complexity, the most useful strategy is to trend against yourself. Consider how you can establish a baseline for your organization and measure your progress against it.
Questions That Require Assumptions to Answer
How many customers with support demand never made it somewhere that we can measure their engagement? (e.g. they started in Google and stayed there)
What does a successful engagement look like? There are numerous possibilities for a successful pattern of engagement for various personas (break/fix vs. goal-oriented tasks vs. long-form learning) and from different origins (Google, Direct, Click Navigation vs. Search, In-Product Help).
Are anonymous users customers? Members experimenting with this report that, after adjusting for bots and spiders, approximately 90% of hits to their public self-service content come from external search engines (like Google). Based on clickstream data and surveys, it appears a very large percentage of those hits are from customers who did not make the effort to log into the support portal.
Metrics and Measurement Challenges
Sessions: We measure by sessions, but not all sessions are equal. We aim to measure sessions that provide value, not just the total quantity of sessions. This may mean figuring out how to remove non-valuable or ineligible sessions from your count (a minimal filtering sketch follows this list).
Data sources: Measurement tools (Google Analytics vs. SEMrush vs. Internal measures) may have visibility or measurement parameters that differ.
Effort: What is a high effort vs. low effort visit? It's a subjective measurement that constantly evolves.
Goals: Measurement goals change based on user persona or use case.
Bounce Rate: Bounce is not a good qualitative measure because a single-page session can be successful. Bounce calculations can be impacted by event tracking. Better calculations are time on page, scroll percentage, and other contextual measures.
Article Surveys: Limited value due to low participation (~1.5%). Article feedback surveys lack user context. Are customers rating the article quality or their overall experience regarding their issue?
Context: Not all metrics are actionable or have a clear 'why'. When self-service fails, did customers not find what they were looking for, or did they not understand the article they read? Often we have to drill down on supporting or related metrics to understand the context of the data.
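For the session challenge above, here is a minimal filtering sketch in Python. The file name, column names, bot patterns, and thresholds are illustrative assumptions; adjust them to whatever your own logs and analytics tools actually capture.

import pandas as pd

# Hypothetical export: one row per session, with a user agent and a duration
sessions = pd.read_csv("sessions.csv")

# Exclude obvious bots/spiders and near-zero-engagement sessions before counting
BOT_PATTERN = r"bot|spider|crawl"  # common user-agent fragments (illustrative)
is_bot = sessions["user_agent"].str.contains(BOT_PATTERN, case=False, na=False)
too_short = sessions["duration_seconds"] < 5  # placeholder threshold

eligible = sessions[~is_bot & ~too_short]
print(f"{len(eligible)} of {len(sessions)} sessions counted as eligible")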
Value of the Knowledge Base
The knowledge base does not possess its own value but is an enabler of many other things that have great value, including:
The assisted model (improve accuracy of resolution, reduce time to relief, reduce time to proficiency/upskilling)
Evolve Loop activities: mining of trends and patterns to improve products and services
Self-service engagements (improving customer success and productivity by empowering customers to research and self-solve)
Lead-gen, cross-sell, up-sell
Dispatch avoidance (field service, desk side service)
Improve time to resolve and accuracy of repair actions (field service, desk side support)
Improve field service capability and efficiency
Reduce customer effort
Prerequisites
Meaningful channel measurement requires definitions tailored to your organization, access to data, and time.
Definitions
It is important to define, for your environment, what will be included in your Service Engagement Measures Spreadsheet. We have provided good, better, and best formulas for calculations in the spreadsheet, and an extensive glossary of terms, but judgment is required in terms of what makes sense for your environment! You may start out with whatever set of data you can get your hands on: define it and trend it over time, but the intent should be to widen your scope to more thoroughly and accurately reflect the full customer experience.
For example, what you do or do not count as a self-service engagement will depend on your business and may change over time.
Data
Recommended formulas in the Service Engagement Measures Spreadsheet include multiple data sources. While some of these things can be mined from tools directly, you may need to do some detective work to find out where they live. Some members discovered that they needed to enable the capturing of the data first - and all members recommend using your organization's existing data models wherever possible. Don't reinvent the wheel!
Engagements
Self-Service: user sessions for self-service mechanisms (usually web-based)
Assisted: cases, tickets, or service requests
Cost Per Engagement
Self-Service: Total associated costs (may include salary, systems, and overhead - based on how your company organizes its cost structure)
Assisted: Time spent on tasks and the volume of demand (again, based on how your company calculates cost per case)
Survey Data
Session surveys: self-service success
Customer Satisfaction
Customer Effort
Net Promoter Score (NPS)
Time
Due to data limitations, the most useful strategy is to trend against yourself. Filling out the measures spreadsheet one time will provide limited insight. Track the data over time (weekly, monthly, quarterly) against optimization efforts to assess which changes lead to meaningful improvement. We propose measuring monthly and presenting quarterly. Year-over-year analysis can account for seasonal behavior.
Service Engagement Measures Spreadsheet
The spreadsheet is divided by types of measures and offers good, better, and best methods to gather data.
There are three areas in which our measures are focused: Traffic & Success, Length & Cost, and Customer Experience. You don't have to tackle these all at once!
For each measurement, the goal is to use the most sophisticated approach you can from the good, better, and best guidance below - but don't let that stop you from using what you have available now to get started. Remember to get clear on your definitions first!
Guidance on the approaches is mostly offered for self-service engagements. (Use your existing measures for agent-assisted engagements.)
Enter data in the yellow cells and the spreadsheet will do the math. Copy or download the template in Google Sheets here.
Traffic & Success
Number of Engagements
Volume of issues for which requestors pursue a resolution that we have visibility into. Often measured through attempts, sessions, sign-ons, searches, and content views.
Good = number of sessions or views per month
Better = number of sessions per month that include at least one content view
Best = same as better and includes criteria for a meaningful content view (e.g., time on page)
Consortium Members: log in to see an example of how to use Google Analytics to define sessions containing specific pages.
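To illustrate the "better" and "best" counts, here is a minimal sketch in Python, assuming a flat clickstream export with one row per content view. The file name and column names (session_id, page_type, seconds_on_page, month) are hypothetical; map them to whatever your analytics tool actually provides.

import pandas as pd

# Hypothetical export: one row per page view, with the session it belongs to
views = pd.read_csv("clickstream_export.csv")

# "Better": sessions per month that include at least one knowledge article view
article_views = views[views["page_type"] == "article"]
better = article_views.groupby("month")["session_id"].nunique()

# "Best": count only views that meet a meaningful-view criterion, here a
# placeholder threshold of 30+ seconds on page (calibrate for your content)
meaningful = article_views[article_views["seconds_on_page"] >= 30]
best = meaningful.groupby("month")["session_id"].nunique()

print(better)
print(best)

The 30-second threshold and the "article" page type are placeholders; calibrate both against your own content and audience before trending the numbers.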
% of total engagement demand (volume)
Calculated percentage of the total demand, across all channels we can assess, that comes through this channel.
% channel success
Approximation of issues resolved in channel. Often measured by surveys (low response rate and response bias), an estimated percent of sessions/visits without a case being opened in some period of time (24 hours to 7 days), or number of views divided by an average number of views per issue. The intent is to measure how often a requestor issue is resolved via self-service (the requestor is done, with an indication of success) and via assisted support (closed case with a resolution offered).
Good = percent of sessions/visits without a request for assistance OR number of views divided by an assumed average number of views per issue. Note: this does not account for the significant percentage of issues that are abandoned (without resolution or an assist request). Some apply a percent abandon rate.
Better = survey "were you successful?" percent "yes"; apply a confidence interval to ensure you have a large enough sample size. Correlate your results with "good".
Best = sophisticated clickstream analysis (percent of patterns that represent success or failure). This is an area of exploration and we are looking for examples of what this might look like!
Consortium Members: log in to see video example of how to measure self-service success using surveys.
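Here is a minimal sketch of the "good" formula in Python, assuming self-service sessions and cases can be joined on a shared account or contact identifier. All file and column names are hypothetical, and, as noted above, this approach does not account for abandoned issues.

import pandas as pd

# Hypothetical exports: one row per self-service session and one row per case
sessions = pd.read_csv("sessions.csv", parse_dates=["session_end"])
cases = pd.read_csv("cases.csv", parse_dates=["case_opened"])

WINDOW = pd.Timedelta(days=7)  # "no case within 7 days" - pick your own window

def followed_by_case(row):
    # True if this account opened a case within WINDOW after the session ended
    later = cases[(cases["account_id"] == row["account_id"]) &
                  (cases["case_opened"] >= row["session_end"]) &
                  (cases["case_opened"] <= row["session_end"] + WINDOW)]
    return not later.empty

sessions["escalated"] = sessions.apply(followed_by_case, axis=1)
channel_success = 1 - sessions["escalated"].mean()
print(f"Estimated self-service channel success: {channel_success:.1%}")

Correlating this estimate with the "better" survey result, as recommended above, helps validate the time window you choose.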
Number of successful engagements
Calculated volume of successful engagements: the total number of engagements multiplied by % channel success.
% of total successful engagements
Calculated ratio of successful engagements in channel as a percent of total demand.
This calculation is a reflection of Customer Experience and indicates opportunities to improve content and mechanisms.
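As an illustration with made-up numbers: 100,000 self-service engagements in a month with a 40% channel success rate yields 40,000 successful self-service engagements; if total assessable demand across all channels is 110,000 engagements, those successful self-service engagements represent 40,000 / 110,000, or roughly 36%, of total demand.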
Length & Cost
Average Length of Successful Engagement
Customer time to resolve; requires assumptions based on your products and customers. Authentication can help with measuring length of engagement.
Average time of a sample of successful sessions (using clickstream analysis). Note: as you improve the accuracy of identifying successful sessions, this will improve the accuracy of your measures.
Consortium Members: log in to see video example for how to calculate time to mitigation for agent assisted and self-service.
Cost Per Successful Engagement
Total costs (may include salary, systems, overhead) associated with self-service (not easy to isolate), allocated based on time spent on tasks and the volume being served (cost per minute and cost per engagement).
This is a great spot to use your organization's existing calculation for costs. The Consortium's position is that all KCS-related costs should be counted under Assisted Engagements since even without fueling self-service, KCS provides ongoing benefit to the organization.
Consortium Members: log in to see video example for how to calculate cost per engagement for agent assisted and self-service channels.
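As a purely illustrative example: if $60,000 per month of platform, content, and overhead costs is allocated to self-service and the channel produces 40,000 successful engagements, the cost per successful engagement is $60,000 / 40,000 = $1.50; the same arithmetic applied to an assisted channel (say, $500,000 in monthly support costs across 10,000 resolved cases) yields $50 per successful engagement.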
Customer Experience
Customer Satisfaction
Use existing standards: a survey asking "How satisfied were you with your experience?"
Consortium Members: log in to see video example for transactional Customer Satisfaction surveys on closed cases.
Customer Effort
Survey results depend largely on customer expectations.
Good = Survey.
Better = Sophisticated clickstream analysis.
Best = Journey mapping and tracking improvements in the journey over time.
Customer Loyalty
Use Net Promoter Score (NPS) or other existing standards.
Glossary of Terms
Definitions relevant to self-service measures in the context of customer support.
See also: KCS Glossary of terms.
[A]
Abandoned
When people give up the pursuit of a resolution to an issue.
Analytics
Indicators of behavior, activities, performance (outcomes): search, Google, clickstream.
Article feedback
Article based - when a person views an article they are asked if the article was helpful.
Assisted
Requestors pursuing a resolution to an issue or seeking information (learning) from a person (human being), usually requesting help or information from a vendor about their products and/or services.
Assisted eligibility
An issue's eligibility for agent assistance. Some requests are not applicable for agent assistance.
Attempt
An action taken to pursue a resolution to an issue, counted by channel (self-service, forum, social, assisted). There may be many attempts for a given issue, each in a different channel, and there may be multiple attempts per session/visit. Each company will have to set criteria, based on average requestor behavior, to assess attempts per session.
Automated alerts
Notification of potential issues based on automated detection (could be machine-to-machine or machine-to-human).
Automated interactions
Use of bots or other digital capabilities to provide requestors with possible resolutions or information.
Automated recommendation
Use of digital capabilities to recommend articles or information based on pattern recognition (for anonymous users) or data about the requestor (for authenticated users).
Automatic detection
Automated detection of potential issues based on data from sensors (hardware) or programmatic analysis (software), or a combination of the two.
Automatic repair
Corrective action(s) taken by robotic or programmatic capabilities that resolve a requestor's issue based on information provided by the requestor or the system. Examples: password reset, profile updates.
Average work time to resolve
See KCS Glossary of terms.
Avoidance
In the context of interactions (incidents/cases) - when a request for agent assistance has been started and the requestor has interacted with/used a suggested resolution and does not complete the request for assistance.
[B]
Browse
Navigating information using facets, a table of contents, a list of frequently asked questions, site map, or knowledge maps (not a search).
[C]
Case close without assistance
Requestor submitted a case and then solved it without agent assistance.
Case/incident/ticket
The record of a customer request for assistance that may include things like the requestor's name, company, contact information, entitlement information, issue description, severity of the issue and status of the request.
Channels
The different interaction modes, including but not limited to: online communities or forums, social media, chat, self-service, in product help, or assisted.
Clickstream
The activity and navigation pattern of people interacting with a web site or in a community.
Click through
The requestor viewed an article (in full) or piece of content from a list of possible articles and/or content.
Click through rate (CTR)
The percentage of searches that resulted in a user clicking on one or more of the results.
Community/forums
People with common interests asking questions, providing responses, and sharing experiences, usually facilitated by a digital platform like Jive or Lithium.
Confidence interval
In the context of statistics and assessing survey responses, a confidence interval is a type of interval estimate, computed from the statistics of the observed data, that might contain the true value of an unknown population parameter (Wikipedia). For example, suppose we asked customers in a self-service survey, "Did you find what you were looking for during your visit?" and 60% of all respondents said yes. Depending on the statistics associated with the responses and the population for whom the question is relevant, we can calculate a margin of error. If 60% of survey respondents said yes and we have a 95% confidence interval of plus or minus 2%, we are 95% confident that customers feel they find what they are looking for 58%-62% of the time.
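For a simple proportion, the 95% margin of error can be approximated as 1.96 × sqrt(p × (1 - p) / n), where p is the observed proportion and n is the number of responses. In the example above (p = 0.6), a plus or minus 2% margin requires roughly 1.96² × 0.6 × 0.4 / 0.02², or about 2,300 responses, which illustrates why low survey participation rates limit the confidence of self-service success estimates.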
Cost avoidance/saving
See KCS Glossary of terms.
Cost per incident
See KCS Glossary of terms.
Cross-functional measures
See KCS Glossary of terms.
Crowd-sourced content
Content created by the users, peers of customers, typically in a forum or community. See https://en.wikipedia.org/wiki/Crowdsourcing.
Customer engagement network
All the ways customers get information about us, all the ways we interact with customers. A network of content and people. We want to connect people to content for known issues and people to people for new issues with ever-increasing relevance.
Customer loyalty
See KCS Glossary of terms.
Customer satisfaction
See KCS Glossary of terms.
[D]
Deflection
In the context of interactions (incidents/cases) - when a request for agent assistance has been started and the requestor has interacted with/used a suggested resolution and does not complete the request for assistance. Synonymous with avoidance.
Download
In the context of the web - retrieving digital content (article, document or executable file) by saving a copy of the content on the requestor's system.
[E]
Effort
In the context of people - the work and/or time it takes to accomplish a task. “Customer Effort” was made popular by the book “The Effortless Experience”, which explains that reducing customer effort results in increased customer loyalty (and renewals) more so than delightful experiences.
Elapsed time to resolution (customer view)
Elapsed time (minutes or days) from when the customer starts pursuing a resolution until a resolution is found. Not work minutes - see "Average work time to resolve".
Elapsed time to resolution (vendor view)
Elapsed time (minutes or days) from when the customer issue is known until a resolution is suggested.
Embedded help
In the context of self-service - enabling requestors to access content in the user interface of the product or service.
Engagement
A customer touch point or interaction. Reactive is when the customer initiates the engagement, proactive is when the vendor initiates the engagement. Engagement is meant to capture all touch points, assisted, self-service, etc.
Entitlement (to view, read)
Levels of access to view or read content: no authentication required (open, anonymous); gated (authentication/login required); entitled by contract.
Entitlement to contribute (create a case, respond to post)
People authorized to create requests or replies to requests.
[I]
Incident volume
See KCS Glossary of terms.
Interaction
Proactive or reactive pursuit of a resolution to an issue or information for learning in any of a number of different channels or mechanisms.
Issue
Anything that disrupts the customer’s ability to be successful, including but not limited to questions, needs, or problems with a product, service, policy or process. Issue drivers can be functional, social or emotional.
[J]
Journey mapping
A detailed description and/or images of the steps a person does to accomplish a task. Often used to assess the customer experience. See https://en.wikipedia.org/wiki/Custom...ourney_mapping
[K]
Knowledge use
People accessing and making use of knowledge articles. Although "knowledge consumption" is a commonly used phrase, we discourage its use as it is a misnomer because things that are consumed cannot be used again or by others.
Knowledge gap
People are looking for knowledge that does not exist or is not available in the channel in which they are looking (searching/browsing)
[M]
Meaningful view
Content viewed that meets specific criteria that implies use of the content (time on page, scroll to view significant portion of the content, bookmark, download)
[N]
NPS - Net Promoter Score
An indicator of customer loyalty.
[O]
Omnichannel
Enabling requestors to use many different channels for support (phone, web submit, social networks, web forums) and a strategy to provide a consistent and integrated experience across the channels. Different from multichannel, which is multiple channels available without a unified or integrated experience.
[P]
Peer-to-peer
Customers interacting with customers, partners interacting with partners, or employees interacting with employees in pursuit of a resolution to an issue.
Percentage first contact resolution
See KCS Glossary of terms.
Persona
In the context of journey mapping, communities, and self-service: an identity that labels a collection of people with common attributes. Sometimes the role a person is in defines their persona (accounts receivable administrator, IT professional, HR administrator, lawyer); sometimes their skill level within a domain (novice, experienced, expert) defines it. See https://en.wikipedia.org/wiki/Persona
[R]
Rank
The position at which a result or selected article appears within search results. A common measure of search performance combined with content health is average click-through rank.
Resolution capacity
See KCS Glossary of terms.
[S]
Satisfaction
In the context of customer satisfaction - an assessment of a requestor’s experience with a transaction(s) or interaction(s).
Search
In the context of self-service - an activity or effort to find or discover something specific by providing a program with words and phrases that will enable the program to identify relevant content that exists within a large, broad collection of content. Often includes advanced search and/or filtering options.
Self-service
Enabling requestors (customers or users) to interact with knowledge or capabilities that resolve their issue without interacting with a representative from a vendor. What you do or do not consider to count as self-service engagement will depend on your business and may change over time.
Self-service adoption
See KCS Glossary of terms.
Self-service success
See KCS Glossary of terms.
Self-solved or semi-assisted
After submitting a case, requestor resolved their own issue before being provided with a complete solution.
Session (self-service)
A period during which a requestor is active in the self-service mechanism. In the context of self-service, a session/visit is a person's use of the self-service mechanism to pursue a resolution to one or more issues, with a distinct beginning and end; it has a duration (time spent) as well as a frequency. Each organization needs to define this based on the products/services being supported and the nature of the customer engagements. For example, AARP (low complexity) will have different criteria for the number of issues per session, and for what constitutes a successful engagement, than PTC (high complexity). Advanced measures determine whether session activity meets criteria that indicate meaningful engagement.
Social networks
Relationship-based connections between people and people or people and companies usually facilitated by a digital platform like Facebook, LinkedIn, etc.
Support cost as a percentage of total revenue
See KCS Glossary of terms.
Survey
A request to a target audience to collect feedback about performance and expectations. A key advantage to surveys is that the data is explicit, and conclusions drawn typically have a high degree of confidence, whereas evaluating behavior patterns through session data often requires inference and therefore comes with a lower degree of confidence. The challenge with surveys is collecting enough data to be statistically significant.
Session-based: when a requestor begins or ends a self-service session, they are asked questions about their experience during the session.
Interaction-based or event-driven: a survey sent after the conclusion of an interaction, such as the closure of a case.
Relationship survey: A questionnaire sent periodically (every six months or annually) to assess the customer's opinion of the relationship. These surveys often cover overall satisfaction, loyalty, or effort.
[T]
Time on page/content/object
Number of seconds the user is "active" on a piece of content. The threshold for a meaningful use will vary based on the type of content and the nature of your environment (each organization will have to calibrate the thresholds that are meaningful for them).
Time to close
See KCS Glossary of terms.
Time to resolution
See KCS Glossary of terms.
Touchpoint model
A visual representation of all the points of interaction with customers, also called a lifecycle model. Usually done at a high level of abstraction.
[U]
Unsolicited suggestions (searchless search)
In the context of self-service - offering helpful information based on a customer's or user's behavior and/or text submission(s).
[V]
Vendor-sourced content
Content that is developed and provided by the company that is selling or providing the product or service.
View (content)
In the context of the web - opening or navigating to a page; often used as "page view," which implies the person read the content on a web page. If a "hover" provides sufficient information to resolve the issue, it counts as a view (this will vary by implementation and complexity of the environment).
Project Acknowledgements
History and participants.
This project was inspired by an Open Space session convened at the Consortium for Service Innovation's 2019 Member Summit by Jennifer MacIntosh. Rosalie Girard made sure the work continued after the session adjourned.
During more than twenty phone calls and an in-person meeting, a committed group of Consortium members offered their experience, ideas, and passion for this work. Deep thanks to them for their participation, which culminated in this paper.
Kimaya Agarwal
Kris Anderson
Richard Bekolay
Monique Cadena
Justin Calhoun
Peter Case
Bonnie Chase
Matt Chinn
JC Coynel
Janine Deegan
Keri Detrick
Amy Dotson
Ross Edgecombe
Jeff Elser
Sara Feldman
John Fronius
Rosalie Girard
Kristin Hunter
David Kay
Aditya Kulkarni
Jennifer MacIntosh
Theresa Manzo
Patrick McBride
Sabrina Meditz
Sarup Paul
Laurel Poertner
Bit Rambusch
Christina Roosen
Brad Smith
Steve Springer
Devra Struzenberg
Dave Thorp
Heidi Wagstaff
Jacob Watts
Arnfinn Austefjord
Kelly Murray
Matt Seaman
Greg Oxton