{"id":7654,"date":"2018-12-11T09:00:47","date_gmt":"2018-12-11T07:00:47","guid":{"rendered":"https:\/\/www.testingtime.com\/?p=7654"},"modified":"2021-05-20T16:16:20","modified_gmt":"2021-05-20T14:16:20","slug":"make-ux-measurable","status":"publish","type":"post","link":"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/","title":{"rendered":"Make UX measurable and strengthen your company\u2019s UX culture"},"content":{"rendered":"<h2>Table of contents<\/h2>\n<p><a href=\"#Introduction\">Introduction<\/a><\/p>\n<p><a href=\"#qualitativework\">From qualitative work via UX metrics to ROI<\/a><\/p>\n<ul>\n<li><a href=\"#BMP&quot;\">Before measurement: planning<\/a><\/li>\n<li><a href=\"#endresult\">Always start thinking from the end-result<\/a><\/li>\n<li><a href=\"#purpose\">What is the purpose of your metrics?<\/a><\/li>\n<\/ul>\n<p><a href=\"#STRM\">Select the right metrics<\/a><\/p>\n<ul>\n<li><a href=\"#SEQ\">SEQ \u2013 Single Ease Question<\/a><\/li>\n<li><a href=\"#SUS\">SUS \u2013 System Usability Scale<\/a><\/li>\n<li><a href=\"#SUPR\">SUPR-Q \u2013 Standardized User Experience Percentile Rank Questionnaire<\/a><\/li>\n<li><a href=\"#NASA\">NASA-TLX \u2013 Task Load Index<\/a><\/li>\n<li><a href=\"#NPS\">NPS \u2013 Net Promoter Score<\/a><\/li>\n<li><a href=\"#CES\">CES \u2013 Customer Effort Score<\/a><\/li>\n<li><a href=\"#forget\">Do not forget: the follow-up question<\/a><\/li>\n<\/ul>\n<p><a href=\"#alternative\">Alternative metrics<\/a><\/p>\n<ul>\n<li><a href=\"#TCR\">TCR \u2013 Task Completion Rate<\/a><\/li>\n<li><a href=\"#TCT\">TCT \u2013 Task Completion Time<\/a><\/li>\n<li><a href=\"#CR\">CR \u2013 Conversion Rate<\/a><\/li>\n<li><a href=\"#AOV\">AOV \u2013 Average Order Value<\/a><\/li>\n<li><a href=\"#CWA\">Classic web analytics (bounce rate, pages per visit&#8230;)<\/a><\/li>\n<li><a href=\"#TMs\">Technical metrics (load time, Google Lighthouse, WCAG Score&#8230;)<\/a><\/li>\n<\/ul>\n<p><a href=\"#OTCC\">Only the comparison 
counts<\/a><\/p>\n<p><a href=\"#Tips\">Tips for introducing metrics to the team<\/a><\/p>\n<ul>\n<li style=\"list-style-type: none;\">\n<ul>\n<li><a href=\"#Limit\">Limit yourself to a few metrics<\/a><\/li>\n<li><a href=\"#Plan\">Plan the presentation of metrics<\/a><\/li>\n<li><a href=\"#Provide\">Provide reading aids and manage expectations<\/a><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><a href=\"#Summary\">Summary<\/a><\/p>\n<h2><a id=\"Introduction\"><\/a>Introduction<\/h2>\n<p class=\"p1\">User experience costs money. We cost money ourselves, as we do not work for free. Our workplace costs money. And the test subjects who take part in our research activities cost money. Even if they do not get paid because they are employed in our company \u2013 at any rate they cannot make any money while they are taking our user test.<\/p>\n<p>As a UX expert, you obviously know that this money is superbly well invested. A better UX leads to more satisfied employees, fewer mistakes, reduced support requirements and, ultimately, more revenue. But you first have to make that clear to some of your colleagues and, above all, to your managers. A few figures or metrics can help in this respect.<\/p>\n<p>In this guide, I\u2019ll give you some valuable tips on how to pick the right metrics, apply them correctly, and present the results in a meaningful way. 
In this way, you will demonstrate the value of your work, document your successes and better judge the progress of your UX-related efforts for yourself, helping your further development.<\/p>\n<p>The good news is that even if many UXers do not have much experience with metrics and statistics, the learning curve is not steep and you can easily start working with metrics without much effort.<\/p>\n<p>Continue reading online or download the eBook as a PDF:<\/p>\n\t\t\t<div class=\"teaser-post\" style=\"background: #868B9E;\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t<div class=\"left-side\" style=\"color: #ffffff;\">\n\t\t\t\t\t\t<div class=\"teaser-post-content\">\n\t\t\t\t\t\t\t<h2>Make UX measurable and strengthen your company&#8217;s UX culture<\/h2>\n\t\t\t\t\t\t\t<p>A better UX leads to more satisfied employees, fewer mistakes, reduced support requirements and, ultimately,\u00a0<strong>more revenue.\u00a0<\/strong>This guide teaches you how to demonstrate the value of your work and strengthen your company&#8217;s UX culture.<\/p>\n\t\t\t\t\t\t\t<p class=\"read-more-button\">\n\t\t\t\t\t\t\t\t<a href=\"https:\/\/resources.testingtime.com\/make-ux-measurable\">\n\t\t\t\t\t\t\t\t\tDownload now\t\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t\t<\/p>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t<div class=\"right-side\" >\n\t\t\t\t\t\t<div id=\"teaser-1-image\" class=\"teaser-post-image\">\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t\n<h2><a id=\"qualitativework\"><\/a>From qualitative work via UX metrics to ROI<\/h2>\n<p>Qualitative work is always at the heart of your work as a UX expert. You want to know why people do something and what exactly they do \u2013 how many of them there are is not so important at first. And that is how it should be. 
Nevertheless, it can also be helpful for you if you include a few metrics that <a href=\"https:\/\/www.testingtime.com\/en\/blog\/quantifying-ux-tools-for-analysis-using-r\/\" target=\"_blank\" rel=\"noopener noreferrer\">take quantitative approaches in your methodological toolbox<\/a>. In most cases, this means hardly any extra work for you, because the metrics generate themselves, so to speak.<\/p>\n<p>In this connection, you can distinguish between two types of metrics:<\/p>\n<ul>\n<li>Metrics that you just need to record, such as the task completion rate\/success rate (could the test subject complete the task?) or the task completion time (the time that the test subject needed to complete the task).<\/li>\n<li>Metrics that you have to collect yourself and that reflect subjective satisfaction (you usually do that with a simple questionnaire).<\/li>\n<\/ul>\n<h3><a id=\"BMP\"><\/a>Before measurement: planning<\/h3>\n<p>But before you start, first plan what it is you want to find out. Don\u2019t just jump in and start measuring. Instead, first ask: why do I want to measure in the first place? What is the point of these results?<\/p>\n<p>One can measure everything; however, our goal is not to accumulate more <a href=\"https:\/\/www.testingtime.com\/en\/blog\/complete-guide-how-to-create-personas-based-on-data\/\" target=\"_blank\" rel=\"noopener noreferrer\">data<\/a> than anyone can deal with; rather, it is to generate valuable information as a possible basis for decisions.<\/p>\n<p>Imagine that a colleague comes with the request to measure the number of pages accessed per visit on your website. The best thing to do is to ask them (or ask yourself) why they want to measure this. They probably want to know if the content of your site is appealing to the visitors. Why does your colleague want to know this? Because only in this way can you convince the visitor that you have a good product. So you can then also ask: how do you know whether the visitor is convinced? 
If they submit an enquiry or callback request, or similar. Now you have arrived at the really interesting metric: the number of enquiries relative to the number of pages visited.<\/p>\n<p>As you would do with a usability test, you therefore draw up a hypothesis. In our example, this would be expressed as follows: the more pages a visitor accesses on our website, the more we can convince them to get in contact with us.<\/p>\n<p>So in future, your best course of action is to measure these two values together: the number of pages accessed by a visitor and whether they made an enquiry. This way, you get valuable information: if, for example, the number of visited pages is increasing rapidly but the number of enquiries is not, this may indicate that visitors are not finding what they are looking for. Or that they no longer find the pages convincing. And one thing stands out as a result: the qualitative aspects are still crucial \u2013 without them you cannot interpret numbers meaningfully.<\/p>\n<h3><a id=\"endresult\"><\/a>Always start thinking from the end-result<\/h3>\n<p>When planning your measurements, always think about the results that interest you rather than the metrics that some tool provides.<\/p>\n<p>The following is the best way to proceed:<\/p>\n<ul>\n<li>Define your objective. What do you want to achieve with the user? For example, perhaps you want them to leave their contact details on the website and, ultimately, become a customer.<\/li>\n<li>Define the behaviour. What user behaviour shows you that you have achieved your goal? They might, for example, download a white paper or arrange a consultation.<\/li>\n<li>Define the metric. How can you measure this behaviour? In our example, you could capture the submission of the contact form.<\/li>\n<\/ul>\n<p>If you identify several possible metrics using this method, select one of them. 
Collecting data is not difficult, thanks to <a href=\"https:\/\/www.testingtime.com\/en\/blog\/measuring-your-ux-and-usability\/\" target=\"_blank\" rel=\"noopener noreferrer\">modern tools<\/a>. Interpreting it, on the other hand, is still laborious. This is where your expertise is called for.<\/p>\n<p>Also think about the areas for which you want to collect the respective metrics: for the whole company or corporate brand? For a specific product? Or just for a single task that the user takes care of with the product?<\/p>\n<p>As the manufacturer of mobile devices, you could measure user satisfaction with your company. You could also find out how satisfied customers are with a special smartphone. Or you could collect information on user satisfaction with the camera on this smartphone. These are all meaningful and interesting metrics \u2013 but you do not have to measure all three always and everywhere.<\/p>\n<h3><a id=\"purpose\"><\/a>What is the purpose of your metrics?<\/h3>\n<p>In the next planning stage, you need to think about who will look at your metrics later. So, for once, it is not about the users of the product but the users of your metrics. So you\u2019re doing a sort of target audience analysis for the users of the metrics you want to collect in the future.<\/p>\n<p>You can generally distinguish between two basic user types here:<\/p>\n<ul>\n<li>Business users (managers, controllers, product managers, the marketing team&#8230;)<\/li>\n<li>User experience experts<\/li>\n<\/ul>\n<p>With User Group 1, the main business metrics are <a href=\"https:\/\/www.testingtime.com\/en\/blog\/important-ux-kpis\/\" target=\"_blank\" rel=\"noopener noreferrer\">return on investment<\/a> (ROI), i.e. 
how much money a measure brings in, as well as the classic analytics values (bounce rate, conversion rate, pages per visit&#8230;).<\/p>\n<p>User Group 2 typically has an interest in user-centric metrics such as <a href=\"https:\/\/www.testingtime.com\/en\/blog\/tweaking-the-nps-for-maximum-insight\/\" target=\"_blank\" rel=\"noopener noreferrer\">NPS<\/a>, SUS, error rate, task completion time (in this case, user behaviour is the focus of attention). We will look at these metrics more closely in a moment.<\/p>\n<h2><a id=\"STRM\"><\/a>Select the right metrics<\/h2>\n<p>Which metrics are the right ones? There are a huge number of different metrics, so here are some tips on choosing the best ones for your specific case. The good news: the difference between the individual metrics is not that big. There is a strong correlation between virtually all of them, which means that the results are quite similar, whatever metric you use. But more on that later.<\/p>\n<p>The first rule is always: do not create your own metric. Rather, use one that already exists. There are two reasons for this:<\/p>\n<ul>\n<li>If you use a commonly used metric, you can compare your results with others.<\/li>\n<li>Developing metrics requires a lot of expertise.<\/li>\n<\/ul>\n<p>Especially if you want to find out about the attitudes, opinions and experiences of users, there are a few points to watch out for. Even if you first think, \u2018It costs nothing to ask\u2019, it is not so easy to get meaningful answers: lurking behind every question is the danger of getting a distorted answer.<\/p>\n<p>How exactly you frame the question plays a significant role. It is also important to consider when you ask the respective question. Moreover, the possible answers influence the answer you get. Finally, the context determines which answers are given; this includes the questions that you ask before or after a question.<\/p>\n<p>A very simple example follows to illustrate this. 
You ask test subjects two questions:<\/p>\n<ul>\n<li>How would you rate your overall happiness on a scale of 1 (very happy) to 7 (very unhappy)?<\/li>\n<li>Overall, how would you rate your happiness with your career on a scale of 1 (very happy) to 7 (very unhappy)?<\/li>\n<\/ul>\n<p>The two questions have an influence on each other: someone who rates themselves as happier overall will tend to rate themselves as happier in their career, too.<\/p>\n<p>But if you reverse the sequence of these questions, you get a different result: there is now a very strong correlation between the two questions. This means that someone who first rates themselves as happy in their career will rate themselves as happier in general.<\/p>\n<p>The reason for this is that, in the second case, when we answer the second question we are thinking first and foremost about our career. This is a well known psychological effect that everybody is susceptible to, even if they are aware of it.<\/p>\n<p>There are many such effects, and even experts cannot always think of everything in advance. As a result, professional questionnaires are always statistically validated and optimised to deliver as unbiased results as possible.<\/p>\n<p>In this context, you will often read about the terms validity, reliability and objectivity. A valid metric means that it actually measures what it claims to measure. Reliability, on the other hand, means that the same results reliably come out when the measurement is performed multiple times.<\/p>\n<p>Finally, objectivity indicates whether a measurement always gives the same results, no matter who performs the measurement and how.<\/p>\n<p>The following metrics are all valid, reliable and objective \u2013 and are therefore used by many colleagues.<\/p>\n<h3><a id=\"SEQ\"><\/a>SEQ \u2013 Single Ease Question<\/h3>\n<p>One of the simplest and most recommended metrics is the Single Ease Question (SEQ). 
Directly after the test subject has performed a task, you ask them: \u2018How easy was that?\u2019<\/p>\n<p>To answer, the test subject sees a scale of 1 to 7, labelled \u2018very difficult\u2019 on the left and \u2018very easy\u2019 on the right.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-7707\" src=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/01\/Make-UX-measurable-Single-Ease-Question-SEQ-480x441.png\" alt=\"Make UX measurable Single Ease Question SEQ\" width=\"480\" height=\"441\" srcset=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/01\/Make-UX-measurable-Single-Ease-Question-SEQ-328x302.png 328w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/01\/Make-UX-measurable-Single-Ease-Question-SEQ-141x130.png 141w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/01\/Make-UX-measurable-Single-Ease-Question-SEQ-480x441.png 480w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/p>\n<p><em>Simple example of SEQ, implemented with Google Forms.<\/em><\/p>\n<p>The SEQ is most commonly posed in usability tests, after each task. For example, you can use this in the context of an investigation on the website, where you ask the user questions after they have completed a specific task such as ordering something.<\/p>\n<p>With SEQ, you have a quick and easy way of comparing the tasks that the user undertakes with the application being investigated.<\/p>\n<h3><a id=\"SUS\"><\/a>SUS \u2013 System Usability Scale<\/h3>\n<p>Unlike SEQ, you use the System Usability Scale (SUS) to ask the test subjects to rate how they found using the whole system. This means that you pose the question at the end, when the test subject has completed all of the tasks in the session.<\/p>\n<p>John Brooke himself, who invented the method, described it as \u2018quick and dirty\u2019. Many studies have proven that it is in fact not so \u2018dirty\u2019 at all and that it stands up to all the criteria for good questionnaires. 
For this reason, it has been very widely implemented since it was introduced in 1986.<\/p>\n<p>For the SUS, subjects should state to what extent they agree with 10 statements on a scale of 1 to 5. For example, ranging from 1 (\u2018I strongly disagree\u2019) to 5 (\u2018I strongly agree\u2019).<\/p>\n<p>The 10 statements are:<\/p>\n<ul>\n<li>I think that I would like to use this system more frequently.<\/li>\n<li>I found the system unnecessarily complex.<\/li>\n<li>I thought the system was easy to use.<\/li>\n<li>I think that I would need the support of a technical person to be able to use this system.<\/li>\n<li>I found the various functions in this system were well integrated.<\/li>\n<li>I thought there was too much inconsistency in this system.<\/li>\n<li>I would imagine that most people would learn to use this system very quickly.<\/li>\n<li>I found the system very cumbersome to use.<\/li>\n<li>I felt very confident using the system.<\/li>\n<li>I needed to learn a lot of things before I could get going with this system.<\/li>\n<\/ul>\n<p>For scoring, each answer is first converted: for the positively worded odd-numbered statements, the contribution is the response minus 1; for the negatively worded even-numbered statements, it is 5 minus the response. The converted values are then summed and multiplied by 2.5, which yields a score between 0 and 100, i.e. a system with optimal usability would have an SUS score of 100. 
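<p>Brooke\u2019s scoring scheme is easy to automate. Here is a minimal sketch in Python (the function name is my own; the conversion rules are the standard SUS ones):<\/p>

```python
def sus_score(responses):
    """SUS score (0-100) from the ten item responses, each on a 1-5 scale.

    Odd-numbered statements are positively worded and contribute
    (response - 1); even-numbered statements are negatively worded
    and contribute (5 - response). The sum is scaled by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 10 responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

# Full agreement with every positive statement and full disagreement
# with every negative one gives the maximum score:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

<p>Averaging the scores of all test subjects then gives you the SUS value for the product.<\/p>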
A score of 68 is considered \u2018good\u2019.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-8797 size-medium\" src=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/EN-TT-Whitepaper-UX-messen-Grafik-s.-13-480x529.png\" alt=\"make ux measurable sus typeform\" width=\"480\" height=\"529\" srcset=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/EN-TT-Whitepaper-UX-messen-Grafik-s.-13-298x328.png 298w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/EN-TT-Whitepaper-UX-messen-Grafik-s.-13-118x130.png 118w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/EN-TT-Whitepaper-UX-messen-Grafik-s.-13-768x847.png 768w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/EN-TT-Whitepaper-UX-messen-Grafik-s.-13-480x529.png 480w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/p>\n<p><em>An implementation of the SUS questionnaire can look like this (here with Typeform).<\/em><\/p>\n<h3><a id=\"SUPR\"><\/a>SUPR-Q \u2013 Standardized User Experience Percentile Rank Questionnaire<\/h3>\n<p>The Standardized User Experience Percentile Rank Questionnaire (SUPR-Q) is very similar to the SUS. In this case, there are just eight questions that are posed to the test subjects upon completion of all tasks on a website. The principle, however, is the same and the results are similar.<\/p>\n<p>On the other hand, you have to shell out 3,000 to 5,000 dollars per year if you want to implement SUPR-Q. Why should you do that? Because in return you get benchmarks enabling comparisons with other websites. This means that you can see where your website stands in terms of usability compared with other websites in your sector.<\/p>\n<p>Of course, collecting and updating these values involves work \u2013 and that\u2019s what the colleagues of Jeff Sauro, who developed the SUPR-Q, get paid for. If you have a large budget, that may be of interest to you. Here are some more details: SUPR-Q Product Description. 
Otherwise, simply stick to SUS, which you can use for free.<\/p>\n<h3><a id=\"NASA\"><\/a>NASA-TLX \u2013 Task Load Index<\/h3>\n<p>The next suggestion is not a joke. You really can work with a metric that was developed by NASA. As you can imagine, usability is of more vital significance to NASA employees than it is to us mere mortals. For them, it is not a matter of an order not going through, but of whether a billion-dollar rocket goes off course and burns out in the atmosphere. Or even whether people might die because of an operational error.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-7633\" src=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-Task-Load-Index-UX-messbar-machen-480x388.jpg\" alt=\"NASA TLX Task Load Index UX messbar machen\" width=\"480\" height=\"388\" srcset=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-Task-Load-Index-UX-messbar-machen-328x265.jpg 328w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-Task-Load-Index-UX-messbar-machen-161x130.jpg 161w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-Task-Load-Index-UX-messbar-machen-1536x1241.jpg 1536w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-Task-Load-Index-UX-messbar-machen-768x621.jpg 768w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-Task-Load-Index-UX-messbar-machen-1024x828.jpg 1024w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-Task-Load-Index-UX-messbar-machen-480x388.jpg 480w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/p>\n<p><em>To control the International Space Station, a lot of interfaces are needed. TLX is used in their development at NASA. 
Source: https:\/\/www.nasa.gov\/mission_pages\/station\/research\/experiments\/2138.html<\/em><\/p>\n<p>UXers working in the healthcare and transport sectors and in the control of industrial equipment have also been measuring the complexity of individual tasks using TLX since the 1980s. This metric is less well suited to websites and consumer apps, however, mainly because the questions are rather demanding to answer. The NASA-TLX questionnaire consists of six questions, which the user answers on an unnumbered scale ranging from \u2018very low\u2019 to \u2018very high\u2019.<\/p>\n<p>The questions are:<\/p>\n<ul>\n<li>How mentally demanding was the task?<\/li>\n<li>How physically demanding was the task?<\/li>\n<li>How hurried or rushed was the pace of the task?<\/li>\n<li>How successful were you in accomplishing what you were asked to do?<\/li>\n<li>How hard did you have to work to accomplish your level of performance?<\/li>\n<li>How insecure, discouraged, irritated, stressed, and annoyed were you?<\/li>\n<\/ul>\n<p>The test subject must then weight these individual areas against each other in a series of pairwise comparisons. If you do want to work with this method: below you will find some links to further information and, most important of all, the link to NASA\u2019s free iOS app, with which you can do the data collection and also get the result of the analysis. 
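<p>If you would rather compute the result yourself instead of using the app, the arithmetic behind TLX is straightforward. A sketch in Python (function and argument names are my own; the weighted variant assumes you have already tallied the 15 pairwise comparisons):<\/p>

```python
def tlx_score(ratings, weights=None):
    """Overall NASA-TLX workload from the six subscale ratings.

    ratings: six values, commonly read off as 0-100.
    weights: six tallies from the 15 pairwise comparisons (they sum
    to 15); if omitted, the unweighted 'Raw TLX' average is returned.
    """
    if len(ratings) != 6:
        raise ValueError("expected six subscale ratings")
    if weights is None:
        return sum(ratings) / 6  # Raw TLX variant
    if len(weights) != 6 or sum(weights) != 15:
        raise ValueError("weights must be six tallies summing to 15")
    return sum(r * w for r, w in zip(ratings, weights)) / 15
```

<p>The unweighted \u2018Raw TLX\u2019 average is a common simplification when the pairwise weighting step is too heavy for your test setup.<\/p>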
It is very practical, and not rocket science.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-7636\" src=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-App-UX-messbar-machen-480x853.png\" alt=\"NASA TLX App UX messbar machen\" width=\"480\" height=\"853\" srcset=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-App-UX-messbar-machen-185x328.png 185w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-App-UX-messbar-machen-73x130.png 73w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-App-UX-messbar-machen-864x1536.png 864w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-App-UX-messbar-machen-768x1365.png 768w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/NASA-TLX-App-UX-messbar-machen-480x853.png 480w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/p>\n<p><em>Example of a question from the NASA-TLX app.<\/em><\/p>\n<h3><a id=\"NPS\"><\/a>NPS \u2013 Net Promoter Score<\/h3>\n<p>The next metric is rather controversial. There are some colleagues who completely reject the Net Promoter Score (NPS). And they have good reason to do so. More on this later. First, I will explain to you how you calculate the NPS.<\/p>\n<h4>Measure the NPS<\/h4>\n<p>After the test, you ask the users a single question:<\/p>\n<ul>\n<li>\u2018How likely is it that you would recommend this system\/application\/website to a friend or colleague?\u2019<\/li>\n<\/ul>\n<p>The subject is then given possible answers on a scale of 0 (not likely at all) to 10 (highly likely). Whoever answers with a 9 or 10 is called a Promoter. Those who answer with a 7 or 8 are Passives and those with a score of 6 or less are Detractors.<\/p>\n<p>The NPS is calculated by subtracting the percentage of customers who are Detractors from the percentage of customers who are Promoters. 
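<p>In code, the calculation takes only a few lines. A minimal sketch in Python, assuming you have the raw 0\u201310 answers (the function name is mine):<\/p>

```python
def nps(scores):
    """Net Promoter Score from raw 0-10 answers: the percentage of
    Promoters (9-10) minus the percentage of Detractors (0-6)."""
    if not scores:
        raise ValueError("no scores given")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 Promoters, 5 Passives, 1 Detractor out of 10 respondents:
print(nps([10, 9, 9, 10, 8, 8, 7, 7, 8, 3]))  # → 30.0
```

<p>Hold on to the raw scores as well (for example as a collections.Counter histogram) \u2013 the single NPS figure throws that information away.<\/p>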
For example, if you have 40% Promoters and 10% Detractors, your NPS is 30.<\/p>\n<p>In the worst case, the NPS is -100 \u2013 which means all of the respondents are Detractors. And in the best case it is 100 \u2013 which means all are Promoters.<\/p>\n<p>The NPS was originally developed to measure customer satisfaction or loyalty to brands or products, but it is also usable for websites or apps.<\/p>\n<h4>NPS \u2013 pros and cons<\/h4>\n<p>The NPS has three major advantages:<\/p>\n<ul>\n<li>It is very easy to determine.<\/li>\n<li>Many managers know it, especially if they come from business\/marketing backgrounds.<\/li>\n<li>There is plenty of comparison data for companies from various sectors.<\/li>\n<\/ul>\n<p>But criticisms of NPS, especially those expressed by colleagues in the UX area, include: with NPS we are doing something that we never actually do as good UX researchers: we ask the test subjects about what they want to do in future. The problem here is that such questions are very difficult to answer. When we are asked if we want to eat healthy food and do regular exercise in the coming month, many of us say yes. Significantly fewer of us actually do so, however. The same thing applies with many questions about future behaviour. As UX researchers, we therefore much prefer to ask about things that happened in the past. The respondents can still tell fibs or simply have a mistaken memory, but the error rate is much lower.<\/p>\n<p>A further major point of criticism is that NPS is based on a rather curious calculation method. The calculation is easy, but from a statistical perspective it is flawed, because lots of information that is actually there in the survey gets lost.<\/p>\n<p>That becomes most apparent with an example:<\/p>\n<ul>\n<li>Let\u2019s assume that a test gave a result of 0% Detractors, 75% Passives and 25% Promoters. This gives an NPS of 25. Not bad. Your colleague, on the other hand, has 35% Detractors, 5% Passives and 60% Promoters. Is the colleague\u2019s result better or worse than yours? The NPS is exactly the same, namely 25. This shows that when you calculate the NPS, some information gets lost, because it makes a big difference whether 60% of your test subjects find your product super or only 25%.<\/li>\n<\/ul>\n<h4>NPS works \u2013 when you apply it correctly<\/h4>\n<p>Does that mean you should not touch NPS with a bargepole? Not necessarily, from my point of view. Many of our non-UX colleagues find the NPS super, and you can gain credit with them if you can show that you have used NPS for your tests. Therefore, here\u2019s my tip: if this applies in your case, then measure the NPS but do not work only with the NPS value but also with the raw data.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-8806 size-medium\" src=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/make-ux-measurable-nps-score-480x175.png\" alt=\"make ux measurable nps score\" width=\"480\" height=\"175\" srcset=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/make-ux-measurable-nps-score-328x120.png 328w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/make-ux-measurable-nps-score-262x96.png 262w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/make-ux-measurable-nps-score-480x175.png 480w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/p>\n<p><em>The scores of all subjects taking part in the NPS as a histogram (bar chart).<\/em><\/p>\n<p>The raw data is always best presented in the form of a histogram. That way, you can see at a glance how the NPS value comes about and whether the product under examination polarises opinion, or the evaluations are closer together. If, however, you find out in your analysis of the required metrics that there is no specific interest in NPS in the company, then I personally would not introduce it. 
As a rule, SUS is the preferred choice.<\/p>\n<h3><a id=\"CES\"><\/a>CES \u2013 Customer Effort Score<\/h3>\n<p>The Customer Effort Score (CES) is very similar to NPS. It measures the effort that the customer has invested in solving a task \u2013 such as in the execution of an order or submitting a support request. CES functions according to the same principle as NPS: users are asked to answer a single question after interacting with the company or product.<\/p>\n<p>With CES, this is:<\/p>\n<ul>\n<li>\u2018The company made it easy for me to handle my issue.\u2019 The answer options lie on a seven-point scale from \u2018strongly agree\u2019 to \u2018strongly disagree\u2019.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-8794 size-medium\" src=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Screen-Shot-2019-02-01-at-11.20.37-AM-480x267.png\" alt=\"CES make UX measurable\" width=\"480\" height=\"267\" srcset=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Screen-Shot-2019-02-01-at-11.20.37-AM-328x183.png 328w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Screen-Shot-2019-02-01-at-11.20.37-AM-233x130.png 233w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Screen-Shot-2019-02-01-at-11.20.37-AM-480x267.png 480w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/p>\n<p><em>Example of determining CES with Typeform.<\/em><\/p>\n<p>For the analysis, I determine the average of the answers (1 stands for very simple, 7 for very difficult); this gives me the average grade, so to speak.<\/p>\n<p>Why should you use CES rather than NPS? The main reason is that the way the question is formulated switches the focus: with CES, I am not asking the user about possible future behaviour but, rather, about the user\u2019s specific recent experience with me. That is a sounder approach. 
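<p>The analysis described above is nothing more than an arithmetic mean. As a sketch in Python (the function name is mine):<\/p>

```python
from statistics import mean

def ces(answers):
    """Average Customer Effort Score from answers on the seven-point
    scale (here 1 = very simple, 7 = very difficult, as in the text)."""
    if not answers or not all(1 <= a <= 7 for a in answers):
        raise ValueError("answers must be on the 1-7 scale")
    return mean(answers)
```

<p>Note the direction of the scale when you report the result: a lower average means less effort for the customer.<\/p>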
And the creators of CES say that effort is the most important indicator of whether a customer will come back in future.<\/p>\n<h3><a id=\"forget\"><\/a>Do not forget: the follow-up question<\/h3>\n<p>Very important with all the focus on the subject of metrics: never lose sight of the qualitative aspects. If you collect metrics such as the NPS or CES on the website or in a survey independently of a usability test, then you should always ask a follow-up question:<\/p>\n<ul>\n<li>Could you briefly explain the reason for your assessment?<\/li>\n<\/ul>\n<p>Only by asking this question can you get a feel for what is working well, and what not so well. Otherwise you will only know that something is not right, but you will have no idea which problem your users are having specifically.<\/p>\n<h2><a id=\"alternative\"><\/a>Alternative metrics<\/h2>\n<p>Alongside the metrics that I have already presented, there are a few others that are also in widespread use. You can supplement your methodological toolbox with them, especially if colleagues collect these metrics anyway; this means you get other options without much additional effort.<\/p>\n<p>On their own, they are of limited use from a UX perspective, as they are very granular and do not say much without context. For example, the simple statement that the average task completion time (TCT) for a particular task is 39 seconds is not enough on its own. For a purchase in a shop, this can be a fast time \u2013 entering payment data, etc. usually takes significantly longer. For unlocking a rental bike, however, 39 seconds is quite long \u2013 many apps do that in half the time.<\/p>\n<h3><a id=\"TCR\"><\/a>TCR \u2013 Task Completion Rate<\/h3>\n<p>The task completion rate (TCR) indicates the percentage of test subjects who were able to complete the task successfully. That is, you divide the number of successful test subjects by the total number of subjects.<\/p>\n<p>You can determine this value even for very small UX studies. 
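<p>The division is trivial, but for completeness, a sketch (names are mine):<\/p>

```python
def task_completion_rate(completed, total):
    """TCR as a percentage: successful test subjects / all subjects."""
    if total <= 0 or not 0 <= completed <= total:
        raise ValueError("invalid counts")
    return 100 * completed / total

# 8 of 10 test subjects completed the task:
print(task_completion_rate(8, 10))  # → 80.0
```

<p>With the small sample sizes of typical usability tests, it is worth reporting the raw counts (8 of 10) alongside the percentage \u2013 80% sounds more precise than it is.<\/p>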
Above all, it makes sense to identify at a glance the tasks that created the most problems.<\/p>\n<h3><a id=\"TCT\"><\/a>TCT \u2013 Task Completion Time<\/h3>\n<p>The task completion time (TCT) is the average time in minutes or seconds that test subjects took to complete the task. You must interpret the number carefully \u2013 it is clear that some tasks take longer than others. Most users manage to sign up for a newsletter in just a few seconds, whereas they take several minutes to complete an order that involves entering address and payment details.<\/p>\n<h3><a id=\"CR\"><\/a>CR \u2013 Conversion Rate<\/h3>\n<p>The conversion rate (CR) tells you what percentage of visitors did what you wanted them to \u2013 such as how many visitors to your website subscribed to the newsletter, or how many visitors to your landing page ultimately became buyers.<\/p>\n<h3><a id=\"AOV\"><\/a>AOV \u2013 Average Order Value<\/h3>\n<p>The average order value (AOV, expressed in euros, Swiss francs, etc.) indicates how much customers have purchased on average when placing an order. This value should also be interpreted with caution, because it depends on many factors that you can only partly influence (for example, it is usually the case that more is ordered in a shop before Christmas).<\/p>\n<h3><a id=\"CWA\"><\/a>Classic web analytics (bounce rate, pages per visit&#8230;)<\/h3>\n<p>Finally, there are the classic metrics that Google Analytics and other tracking systems provide. On their own, these are not suited to measuring UX, because they depend on many parameters that we cannot control. That said, such values can serve as a clue to tell us where we should take a closer look \u2013 especially if they change suddenly.<\/p>\n<p>An example: suppose the bounce rate suddenly shoots up on your website. This means that many visitors leave your website without visiting a second page. 
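In code terms, the bounce rate is simply the share of sessions with exactly one page view. A minimal sketch, assuming invented session data (the helper name is mine):

```python
def bounce_rate(pages_per_session):
    """Share of sessions that viewed exactly one page, as a percentage."""
    if not pages_per_session:
        raise ValueError("no sessions recorded")
    bounces = sum(1 for pages in pages_per_session if pages == 1)
    return 100.0 * bounces / len(pages_per_session)

# Hypothetical page counts for ten sessions:
sessions = [1, 3, 1, 1, 2, 5, 1, 1, 2, 1]
print(bounce_rate(sessions))  # prints 60.0
```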
They might come from Google, take a look at your home page, then disappear.<\/p>\n<p>This could be a bad sign: visitors find your site so untrustworthy that they leave immediately.<\/p>\n<p>But it could equally be a very good sign, meaning that the visitors find exactly what they are looking for on the first page. This might be the case if they find your telephone number, for example. The users want to get in touch by phone, so they are finished with their visit as soon as they have your number.<\/p>\n<h3><a id=\"TMs\"><\/a>Technical metrics (load time, Google Lighthouse, WCAG Score&#8230;)<\/h3>\n<p>Last but not least, there is another type of metric that I would like to share with you: these are values of a technical nature, but they are also relevant for UX. Load time is one example \u2013 you need insight into it if you want a user-friendly website or application. If your site takes too long to load, this will annoy users. Perhaps they will even abandon the page and go to the competition.<\/p>\n<p>Load time and some other factors feed into the score that Google Lighthouse provides. These are all technical factors that nevertheless play a role for UX. 
You can determine them with the built-in tools of Google Chrome, for example.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-7647\" src=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Google-Lighthouse-in-Chrome-UX-messbar-machen-480x293.png\" alt=\"Google Lighthouse in Chrome UX messbar machen\" width=\"480\" height=\"293\" srcset=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Google-Lighthouse-in-Chrome-UX-messbar-machen-328x200.png 328w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Google-Lighthouse-in-Chrome-UX-messbar-machen-213x130.png 213w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Google-Lighthouse-in-Chrome-UX-messbar-machen-1536x938.png 1536w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Google-Lighthouse-in-Chrome-UX-messbar-machen-768x469.png 768w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Google-Lighthouse-in-Chrome-UX-messbar-machen-1024x625.png 1024w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Google-Lighthouse-in-Chrome-UX-messbar-machen-480x293.png 480w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/p>\n<p><em>Google Lighthouse is integrated into Chrome. It performs a comprehensive audit of the technical parameters of every website.<\/em><\/p>\n<p>And there are also accessibility scores that show how well your site complies with the Web Content Accessibility Guidelines (WCAG).<\/p>\n<h2><a id=\"OTCC\"><\/a>Only the comparison counts<\/h2>\n<p>Now we come to a point that is extremely important for all metrics: a single measurement says next to nothing, no matter which metric you choose.<\/p>\n<p>You only get meaningful results if you make comparisons. This is one reason the NPS is so useful in the UX environment: a wide range of comparative values is available for different industry sectors. That\u2019s why people also pay thousands of euros for SUPR-Q licences. 
The benchmarks they get for their fees really help them assess their own position.<\/p>\n<p>However, for most teams it is much easier and, above all, more cost-effective to collect their own benchmarks. These also have the enormous advantage of being truly reliable, because you know precisely how your own values came about \u2013 and you can ensure that the conditions are the same for every single measurement.<\/p>\n<p>In general, you can employ two types of comparison:<\/p>\n<ul>\n<li>Comparison between different tasks \/ functions \/ products \/ companies<\/li>\n<li>Comparison of the same metric at different times<\/li>\n<\/ul>\n<p>And you should employ both of these comparisons.<\/p>\n<p>For example, you can see how easily or with what difficulty participants in your usability test completed the respective tasks by comparing the SEQ values for the individual tasks of the test. Or you compare the TCR, i.e. the proportion of users who were able to complete the respective task successfully.<\/p>\n<p>It is also very rewarding if you not only look at your own application but also collect the same metrics in a test of a competitor\u2019s application. That way you can see very clearly how you perform in comparison.<\/p>\n<p>It is equally interesting to compare the metrics you collected at different times. For example, you can see quite clearly whether a revision or a new feature has improved or worsened the application as a whole.<\/p>\n<h2><a id=\"Tips\"><\/a>Tips for introducing metrics to the team<\/h2>\n<p>Now I\u2019ve introduced you to a whole series of metrics that you can use. If you are still unsure which of them is best suited to your particular needs, here are a few more tips:<\/p>\n<h3><a id=\"Limit\"><\/a>Limit yourself to a few metrics<\/h3>\n<p>It might be tempting to measure as much as possible. Then you don\u2019t have to decide just yet. But this is not sensible, for a number of reasons. 
First, it is difficult to do several things properly at once. And second, we humans are not designed to keep an eye on many things at the same time. This is especially true with figures. The more metrics you gather, the more difficult you make it for yourself to keep an eye on their development and to make meaningful comparisons. And that\u2019s even more so for all your colleagues or bosses who are not so deeply involved with the material. You will then have to explain lots of things to them over and over.<\/p>\n<p>And there\u2019s another very important aspect to this: the more questions you ask, the fewer good responses you get. If answering the questions is voluntary (e.g. with an online survey), then the number of participants will fall as the number of questions rises. And even if all the questions are answered, respondents get tired and are more likely to give careless or incomplete answers the longer the questionnaire is.<\/p>\n<p>And if you conduct user tests with test subjects in the usability lab: users\u2019 time is far too valuable to waste on questions whose answers are not really essential.<\/p>\n<h3><a id=\"Plan\"><\/a>Plan the presentation of metrics<\/h3>\n<p>If you proceeded in the way I recommended above, then you will have conducted an audience analysis of those who should make use of your metrics later. So you know how much basic statistical knowledge you can assume on the part of your stakeholders.<\/p>\n<p>You should then pitch the presentation at this level of knowledge. In general, few people can cope with raw numbers. The majority find it easier if you provide the results in a diagram. You could write volumes about chart types, but as a rule of thumb, the bar chart is almost always the best representation for UX. With it you cannot go far wrong.<\/p>\n<p>Pie charts are hard to interpret if they have more than four or five pie slices. 
And it is even harder to compare pie charts.<\/p>\n<p>Scientists like boxplots. These are kind of like advanced bar charts with \u2018antennas\u2019.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-7650\" src=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Boxplot-Kastengrafik-UX-messbar-machen-480x362.png\" alt=\"Boxplot Kastengrafik UX messbar machen\" width=\"480\" height=\"362\" srcset=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Boxplot-Kastengrafik-UX-messbar-machen-328x247.png 328w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Boxplot-Kastengrafik-UX-messbar-machen-172x130.png 172w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Boxplot-Kastengrafik-UX-messbar-machen-768x579.png 768w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Boxplot-Kastengrafik-UX-messbar-machen-1024x772.png 1024w, https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/Boxplot-Kastengrafik-UX-messbar-machen-480x362.png 480w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><\/p>\n<p><em>A boxplot. The antennas indicate the minimum and maximum values; the box covers the middle 50 percent of the values.<\/em><\/p>\n<p>This presentation makes it possible to get a good impression of the distribution of the measured values at a glance. Since reading these diagrams requires some practice, I advise against using boxplots in presentations intended for broad audiences.<\/p>\n<h3><a id=\"Provide\"><\/a>Provide reading aids and manage expectations<\/h3>\n<p>Especially when you start working with metrics, it is important to keep an eye on whether everyone understands what you\u2019re measuring. For this reason, it is not a good idea simply to distribute the first results in the form of a report; it is much better to arrange a short workshop. 
This will give you an opportunity to explain why you are measuring, and what you want to achieve by measuring.<\/p>\n<p>In addition, you can get feedback on how well the stakeholders are getting to grips with the metrics and their presentation. You can also find out what other questions they have that you might be able to answer with the same metrics.<\/p>\n<p>Another important point is that you can manage their expectations somewhat, because metrics often create the expectation that the numbers will improve. You have some leverage here: on the one hand, you can use the metrics to demonstrate the value of work done by the UX team. But on the other, you also have to be prepared for critical questions if individual metrics do not develop as desired.<\/p>\n<p>And now to the final point: with metrics, you must be careful not to put too much focus on numbers. All the lovely numbers can only be interpreted meaningfully if you continue to consider the qualitative aspects \u2013 these form the basis for any UX optimisation.<\/p>\n<h2><a id=\"Summary\"><\/a>Summary<\/h2>\n<p>Metrics offer you the ability to become more professional: with metrics, you can give your own work an empirical basis. That is to say, you do not need to rely on your gut feeling to judge the severity of a problem, for example, or to estimate how much you can improve an application by taking a certain course of action.<\/p>\n<p>In addition, metrics can help to communicate the value of UX internally. 
This will increase your worth as a UX team and can ensure that you are taken more and more seriously, are brought into projects earlier and \u2013 last but not least \u2013 are given a bigger budget.<\/p>\n<p>I therefore recommend that you use SEQ with every usability test from now on; in other words, to ask the following question after each task: \u2018How easy was that?\u2019 Plus, you can record the task completion rate and task completion time \u2013 it only takes a little time, and you will soon have gathered a few metrics with which you can then make benchmark comparisons.<\/p>\n<p>In the medium term, I would introduce a further, higher-level metric, i.e. one with which you can get an overall assessment. My favourite here is SUS, but if your company is already familiar with NPS, simply use it.<\/p>\n<p>Download the eBook so you always have a reference work:<\/p>\n\t\t\t<div class=\"teaser-post\" style=\"background: #868B9E;\">\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t<div class=\"left-side\" style=\"color: #ffffff;\">\n\t\t\t\t\t\t<div class=\"teaser-post-content\">\n\t\t\t\t\t\t\t<h2>Make UX measurable and strengthen your company&#8217;s UX culture<\/h2>\n\t\t\t\t\t\t\t<p>A better UX leads to more satisfied employees, fewer mistakes, reduced support requirements and, ultimately,\u00a0<strong>more revenue.\u00a0<\/strong>This guide teaches you how to demonstrate the value of your work and strengthen your company&#8217;s UX culture.<\/p>\n\t\t\t\t\t\t\t<p class=\"read-more-button\">\n\t\t\t\t\t\t\t\t<a href=\"https:\/\/resources.testingtime.com\/make-ux-measurable\">\n\t\t\t\t\t\t\t\t\tDownload now\t\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t\t<\/p>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t<div class=\"right-side\" >\n\t\t\t\t\t\t<div id=\"teaser-2-image\" class=\"teaser-post-image\">\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t\n<p>What are you waiting for? 
Go for it!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Table of contents Introduction From qualitative work via UX metrics to ROI Before measurement: planning Always start thinking from the end-result What is the purpose of your metrics? Select the right metrics SEQ \u2013 Single Ease Question SUS \u2013 System Usability Scale SUPR-Q \u2013 Standardized User Experience Percentile Rank Questionnaire NASA-TLX \u2013 Task Load Index [&hellip;]<\/p>\n","protected":false},"author":18,"featured_media":7713,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"om_disable_all_campaigns":false,"footnotes":""},"categories":[8988],"tags":[],"class_list":["post-7654","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-measuring-ux"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v20.5 (Yoast SEO v20.5) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Make UX measurable and strengthen your company\u2019s UX culture - TestingTime<\/title>\n<meta name=\"description\" content=\"How can you make UX measurable? By choosing the right metrics, planning how you present them, providing reading aids, and managing expectations.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Make UX measurable and strengthen your company\u2019s UX culture\" \/>\n<meta property=\"og:description\" content=\"How can you make UX measurable? 
By choosing the right metrics, planning how you present them, providing reading aids, and managing expectations.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/\" \/>\n<meta property=\"og:site_name\" content=\"TestingTime\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/testingtime\" \/>\n<meta property=\"article:published_time\" content=\"2018-12-11T07:00:47+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2021-05-20T14:16:20+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/measurable-UX-metrics.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1920\" \/>\n\t<meta property=\"og:image:height\" content=\"1080\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Jens Jacobsen\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@testingtime\" \/>\n<meta name=\"twitter:site\" content=\"@testingtime\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Jens Jacobsen\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"28 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/\"},\"author\":{\"name\":\"Jens Jacobsen\",\"@id\":\"https:\/\/www.testingtime.com\/en\/#\/schema\/person\/90868b931ec18959ebc63669c01b5b51\"},\"headline\":\"Make UX measurable and strengthen your company\u2019s UX culture\",\"datePublished\":\"2018-12-11T07:00:47+00:00\",\"dateModified\":\"2021-05-20T14:16:20+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/\"},\"wordCount\":5666,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.testingtime.com\/en\/#organization\"},\"articleSection\":[\"Measuring UX\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/\",\"url\":\"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/\",\"name\":\"Make UX measurable and strengthen your company\u2019s UX culture - TestingTime\",\"isPartOf\":{\"@id\":\"https:\/\/www.testingtime.com\/en\/#website\"},\"datePublished\":\"2018-12-11T07:00:47+00:00\",\"dateModified\":\"2021-05-20T14:16:20+00:00\",\"description\":\"How can you make UX measurable? 
By choosing the right metrics, planning how you present them, providing reading aids, and managing expectations.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"TestingTime\",\"item\":\"https:\/\/www.testingtime.com\/en\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Make UX measurable and strengthen your company\u2019s UX culture\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.testingtime.com\/en\/#website\",\"url\":\"https:\/\/www.testingtime.com\/en\/\",\"name\":\"TestingTime\",\"description\":\"Wir rekrutieren Testpersonen\",\"publisher\":{\"@id\":\"https:\/\/www.testingtime.com\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.testingtime.com\/en\/?s={search_term_string}\"},\"query-input\":\"required 
name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.testingtime.com\/en\/#organization\",\"name\":\"TestingTime\",\"url\":\"https:\/\/www.testingtime.com\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.testingtime.com\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.testingtime.com\/app\/uploads\/2017\/04\/logo.svg\",\"contentUrl\":\"https:\/\/www.testingtime.com\/app\/uploads\/2017\/04\/logo.svg\",\"width\":1,\"height\":1,\"caption\":\"TestingTime\"},\"image\":{\"@id\":\"https:\/\/www.testingtime.com\/en\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/testingtime\",\"https:\/\/twitter.com\/testingtime\",\"https:\/\/www.instagram.com\/testingtime\/\",\"https:\/\/www.linkedin.com\/company-beta\/9231506\/\",\"https:\/\/www.youtube.com\/channel\/UCpnMUgCz5FiiCUXU-U8ub1w\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.testingtime.com\/en\/#\/schema\/person\/90868b931ec18959ebc63669c01b5b51\",\"name\":\"Jens Jacobsen\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.testingtime.com\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/05\/jens-jacobsen-foto.256x256-130x130.jpg\",\"contentUrl\":\"https:\/\/www.testingtime.com\/app\/uploads\/2018\/05\/jens-jacobsen-foto.256x256-130x130.jpg\",\"caption\":\"Jens Jacobsen\"},\"description\":\"Jens ist langj\u00e4hriger UX-Berater f\u00fcr Web- und App-Projekte, sowie Autor des erfolgreichen Ratgebers \u201ePraxisbuch Usability und UX\u201c. Zudem teilt er seine Leidenschaft f\u00fcr die Usability, UX und User Research regelm\u00e4ssig an Konferenzen, Corporate Trainings und \u00fcber seinen Blog benutzerfreun.de.\",\"sameAs\":[\"http:\/\/benutzerfreun.de\"],\"url\":\"https:\/\/www.testingtime.com\/en\/blog\/author\/jens-jacobson\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Make UX measurable and strengthen your company\u2019s UX culture - TestingTime","description":"How can you make UX measurable? By choosing the right metrics, planning how you present them, providing reading aids, and managing expectations.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/","og_locale":"en_US","og_type":"article","og_title":"Make UX measurable and strengthen your company\u2019s UX culture","og_description":"How can you make UX measurable? By choosing the right metrics, planning how you present them, providing reading aids, and managing expectations.","og_url":"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/","og_site_name":"TestingTime","article_publisher":"https:\/\/www.facebook.com\/testingtime","article_published_time":"2018-12-11T07:00:47+00:00","article_modified_time":"2021-05-20T14:16:20+00:00","og_image":[{"width":1920,"height":1080,"url":"https:\/\/www.testingtime.com\/app\/uploads\/2018\/12\/measurable-UX-metrics.jpg","type":"image\/jpeg"}],"author":"Jens Jacobsen","twitter_card":"summary_large_image","twitter_creator":"@testingtime","twitter_site":"@testingtime","twitter_misc":{"Written by":"Jens Jacobsen","Est. 
reading time":"28 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/#article","isPartOf":{"@id":"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/"},"author":{"name":"Jens Jacobsen","@id":"https:\/\/www.testingtime.com\/en\/#\/schema\/person\/90868b931ec18959ebc63669c01b5b51"},"headline":"Make UX measurable and strengthen your company\u2019s UX culture","datePublished":"2018-12-11T07:00:47+00:00","dateModified":"2021-05-20T14:16:20+00:00","mainEntityOfPage":{"@id":"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/"},"wordCount":5666,"commentCount":0,"publisher":{"@id":"https:\/\/www.testingtime.com\/en\/#organization"},"articleSection":["Measuring UX"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/","url":"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/","name":"Make UX measurable and strengthen your company\u2019s UX culture - TestingTime","isPartOf":{"@id":"https:\/\/www.testingtime.com\/en\/#website"},"datePublished":"2018-12-11T07:00:47+00:00","dateModified":"2021-05-20T14:16:20+00:00","description":"How can you make UX measurable? 
By choosing the right metrics, planning how you present them, providing reading aids, and managing expectations.","breadcrumb":{"@id":"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.testingtime.com\/en\/blog\/make-ux-measurable\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"TestingTime","item":"https:\/\/www.testingtime.com\/en\/"},{"@type":"ListItem","position":2,"name":"Make UX measurable and strengthen your company\u2019s UX culture"}]},{"@type":"WebSite","@id":"https:\/\/www.testingtime.com\/en\/#website","url":"https:\/\/www.testingtime.com\/en\/","name":"TestingTime","description":"Wir rekrutieren Testpersonen","publisher":{"@id":"https:\/\/www.testingtime.com\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.testingtime.com\/en\/?s={search_term_string}"},"query-input":"required 
name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.testingtime.com\/en\/#organization","name":"TestingTime","url":"https:\/\/www.testingtime.com\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.testingtime.com\/en\/#\/schema\/logo\/image\/","url":"https:\/\/www.testingtime.com\/app\/uploads\/2017\/04\/logo.svg","contentUrl":"https:\/\/www.testingtime.com\/app\/uploads\/2017\/04\/logo.svg","width":1,"height":1,"caption":"TestingTime"},"image":{"@id":"https:\/\/www.testingtime.com\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/testingtime","https:\/\/twitter.com\/testingtime","https:\/\/www.instagram.com\/testingtime\/","https:\/\/www.linkedin.com\/company-beta\/9231506\/","https:\/\/www.youtube.com\/channel\/UCpnMUgCz5FiiCUXU-U8ub1w"]},{"@type":"Person","@id":"https:\/\/www.testingtime.com\/en\/#\/schema\/person\/90868b931ec18959ebc63669c01b5b51","name":"Jens Jacobsen","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.testingtime.com\/en\/#\/schema\/person\/image\/","url":"https:\/\/www.testingtime.com\/app\/uploads\/2018\/05\/jens-jacobsen-foto.256x256-130x130.jpg","contentUrl":"https:\/\/www.testingtime.com\/app\/uploads\/2018\/05\/jens-jacobsen-foto.256x256-130x130.jpg","caption":"Jens Jacobsen"},"description":"Jens ist langj\u00e4hriger UX-Berater f\u00fcr Web- und App-Projekte, sowie Autor des erfolgreichen Ratgebers \u201ePraxisbuch Usability und UX\u201c. 
Zudem teilt er seine Leidenschaft f\u00fcr die Usability, UX und User Research regelm\u00e4ssig an Konferenzen, Corporate Trainings und \u00fcber seinen Blog benutzerfreun.de.","sameAs":["http:\/\/benutzerfreun.de"],"url":"https:\/\/www.testingtime.com\/en\/blog\/author\/jens-jacobson\/"}]}},"_links":{"self":[{"href":"https:\/\/www.testingtime.com\/en\/wp-json\/wp\/v2\/posts\/7654","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.testingtime.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.testingtime.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.testingtime.com\/en\/wp-json\/wp\/v2\/users\/18"}],"replies":[{"embeddable":true,"href":"https:\/\/www.testingtime.com\/en\/wp-json\/wp\/v2\/comments?post=7654"}],"version-history":[{"count":42,"href":"https:\/\/www.testingtime.com\/en\/wp-json\/wp\/v2\/posts\/7654\/revisions"}],"predecessor-version":[{"id":13073,"href":"https:\/\/www.testingtime.com\/en\/wp-json\/wp\/v2\/posts\/7654\/revisions\/13073"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.testingtime.com\/en\/wp-json\/wp\/v2\/media\/7713"}],"wp:attachment":[{"href":"https:\/\/www.testingtime.com\/en\/wp-json\/wp\/v2\/media?parent=7654"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.testingtime.com\/en\/wp-json\/wp\/v2\/categories?post=7654"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.testingtime.com\/en\/wp-json\/wp\/v2\/tags?post=7654"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}