How We Tested a Touchless Elevator Experience for the Post-COVID-19 Era

Oryx Elevators and Construction
17 min read · Feb 28, 2021


The main risk in creating a new product or service is that it won't find its audience. Users may not understand or accept that they must change their habits.

Testing is the way to improve your concept significantly before you start implementation. A few hours of testing can save you a lot of money.

During the months of the COVID-19 outbreak in China, the market saw a huge demand for changes in how some products are used. Because of the pandemic and its high infection rate, most of the population avoided services and products that involved touching a surface shared by many users. As a result of consumers' fears, and thanks to a fast response from brands, many touchless services emerged on the market to help people avoid infection in both the short and the long term.

In elevators specifically, people became afraid of touching the buttons, which could turn into high-infection spots. As a result, riders adopted workarounds such as pressing the buttons with another object (keys, phones, pens, toothpicks, etc.) or protecting their hands with tissues when touching the button surface.

Schindler China observed this phenomenon and saw a new market opportunity in creating a touchless experience for elevator users. After analyzing the potential technologies, Schindler China chose to move forward with QR code scanning. They quickly built a prototype and wanted to know how users would react to the designed touchless experience. They had two main concerns:

  1. Is a touchless elevator still needed after the COVID-19 outbreak?
  2. Is the designed touchless experience good enough for people to use?

To answer these questions, our team helped Schindler China conduct a 3-day usability test, uncovering hidden behavioral insights about this new era of consumers and building up research on post-pandemic consumerism.

The Sprint Stakeholders

Design Sprint masters from Design Sprint Shanghai, who defined the goal of the usability test, designed the testing process, ran the tests with users, analyzed the results and wrote the test report.

The R&D innovation manager from Schindler China, who wanted to use Design Sprint as a new approach to test their product quickly and improve it based on user feedback.

R&D engineers from Schindler China, who developed the elevator prototype based on their assumptions about the target users and their needs.

As a service company, we rely on a strong customer orientation as the basis for design and delivery of all products and services as well as the continuous optimization of our internal processes.

STEP 1: Defining Testing Goals

Goal alignment session

Since Schindler China had already developed a prototype of the touchless experience, our team dived directly into the Test phase of the Design Sprint.

Before designing the testing process, we ran a session with the development team to better understand the company's goals and the work the engineering team had already done.

We also used this session to explain what the testing process would look like, with special emphasis on the idea that proper UX testing is done by observing and collecting the right data.

The methodology

Observation

What people say, what people do, and what people say they do are entirely different things.

What does this mean?

Social pressure, personal ego and many other factors are why simply asking people for their opinion about something is a bad idea. As a UX researcher, it's essential to understand that good UX research disrupts the tester's environment as little as possible.

Think of it like observing animals: what's the difference between a zoo and a safari? Which one should your UX test be like?

Ethnography

Ethnographic studies tap into the social aspect of product design. They observe the challenges people face as they interact and struggle with their environment or their existing tools.

Inevitably, the prototype is built on hypotheses about the target users and their needs. These hypotheses are answered during the test by collecting the right data and insights. Sounds reasonable, right?

CAUTION! Trying to verify too many hypotheses in a single test can lead to disaster!

A second essential point to clarify before the test is the question to answer, or the hypotheses to work on. It's common to juggle many of them in the early stages of UX or product design; at this point, it's important to pick the most relevant ones.

Usability test

A usability test is intended to determine the extent to which an interface facilitates a user's ability to complete routine tasks.

The project aimed to build a deep understanding of the user experience of our prototype and to derive future design guidelines from representative users' feedback. The test is conducted with a group of potential users, who are guided through an open-question conversation and asked to complete a series of routine tasks. Sessions are recorded and analyzed to identify potential areas for improving the touchless system in the elevator.

With the usability test, we wanted to validate how easy and convenient our product is to use. To that end, we asked the team to imagine every situation a user could experience, in order to uncover all the potential critical points and required changes.

Once the basics of the methodology were clearly explained and understood by the whole team, we jumped into the goal alignment activities, which helped us understand the 2 main dimensions of the UX assessment:

  1. Ethnography: Who are our users and what are they trying to do?
  2. Usability test: Can people use the thing we’ve designed to solve their problem?

Activity 1, goal alignment

What is the goal of this test? What do other stakeholders need to know about the process?

Following the Design Sprint principle of working "together, alone", we asked the team to think about the questions above. Afterwards we shared our thoughts and clustered all the ideas into 4 main topics:

Usability

Does our product solve the user’s issue? How easy is it to use? How intuitive is it? Does it respond well to the different scenarios that may occur?

Behavior change reason

What exactly drives users to change their perception and behavior regarding touchless elevators? Is it fear? Is it social pressure?

Value in time

Will the issue remain after COVID-19 is gone? Will people still find any kind of solution valuable in the following years?

Target user type

Even though the spectrum of elevator users is wide, is there a specific target group that suffers most from this issue? Is it more relevant for certain kinds of people? Should the solution differ for different people?

Activity 2, hypotheses clustering

To surface our hidden hypotheses, we asked the R&D engineers to share facts, observations and thoughts about the elevator experience during the COVID-19 outbreak. We listed all the hypotheses we could think of and prioritized them by relevance and certainty.

Hypotheses with high relevance and low certainty had the highest priority for verification in our test.

Afterwards we clustered them into 4 different groups:

There is no evidence: there is nothing to indicate that the problems really exist or that our work will have the desired results.

There is soft evidence. This may be in the form of anecdotes or word-of-mouth feedback collected from customers in focus groups or small-sample surveys.

There is presumed evidence. This may be in the form of articles in publications or other media. It may be data that is generally true but may or may not apply in the current case.

There is hard evidence. The company has done reliable research and actual data exist that describe the problems, or provide verifiable measurements about the current and desired state.
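
To make the prioritization rule from Activity 2 concrete, here is a minimal sketch of how it could be expressed. It is only an illustration under our own assumptions: the hypothesis texts, scores and evidence labels below are invented and are not the ones from the actual workshop.

```python
# A minimal sketch of the prioritization rule from Activity 2.
# The hypotheses, scores (1-5) and evidence labels are made up for illustration.
hypotheses = [
    {"text": "People avoid elevator buttons mainly out of infection fear",
     "relevance": 5, "certainty": 2, "evidence": "soft"},
    {"text": "Users prefer scanning a QR code over pressing a button",
     "relevance": 4, "certainty": 1, "evidence": "none"},
    {"text": "The fear will fade once the outbreak is over",
     "relevance": 3, "certainty": 4, "evidence": "presumed"},
]

# High relevance and low certainty means a test can teach us the most,
# so we rank by relevance (descending) and certainty (ascending).
ranked = sorted(hypotheses, key=lambda h: (-h["relevance"], h["certainty"]))

for h in ranked:
    print(f'relevance {h["relevance"]}, certainty {h["certainty"]} '
          f'[{h["evidence"]} evidence] - {h["text"]}')
```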

Activity 3, test scenario brainstorming

Which tasks do we want our users to complete with our product? What would you ask an all-knowing customer?

All participants in the session brainstormed test scenarios, and we chose those that were relevant to the test goals and could bring us insights about our prototype. Well-designed test scenarios help us get answers from the users.

STEP 2: Designing Testing Process

After meeting with the R&D engineers, our team started designing the testing process around the goals and hypotheses from STEP 1.

To start, we wrote a usability test plan. The purpose of the plan is to clarify and document what we are going to do, how we are going to conduct the test, what metrics we are going to capture, how many participants we are going to test, and what scenarios we will use.

The elements of a usability test plan:

  1. Scope
  2. Purpose
  3. Script
  4. Scenarios
  5. Schedule & Location
  6. Sessions
  7. Equipment
  8. Participants
  9. Qualitative data
  10. Quantitative metrics
  11. Roles

Scope

The test covers the QR-code touchless elevator device, from the value proposition to the usability of the product.

Purpose

The purpose of this test is to validate the product-market fit. These are the different perspectives that are going to drive the test:

  • Behaviour change reason: What exactly drives customers to change their perception of and behaviour around elevators? Is it fear? Is it social pressure? Is the market ready for, or even looking for, a new experience in elevator usage?
  • Usability: Does our product solve the users’ issue? How easy is it to use? How intuitive is it? Does it respond well to the different scenarios that may occur?

Script

The test facilitator wrote a very comprehensive script to conduct the whole testing process. The script includes:

  • introduction: introduce the team and the purpose of the test
  • warm-up: let the participant introduce themselves
  • ethnography questions: ask the participants for information, facts and opinions about touchless services and the COVID-19 outbreak
  • test setup: move to the elevator prototype and introduce it
  • task completion: ask the participants to finish tasks under different scenarios
  • follow-up questions: ask about their feelings about and feedback on the prototype
  • wrap-up: thank them for participating in the test and close the session

The questions we asked the participants were carefully considered so as not to bias them toward our prototype. We wanted their genuine reactions and thoughts about the prototype; this way, we would know whether it is something they really need and value.

Scenarios

The participants go through 11 different scenarios to test the usability of the product in different potential situations. These scenarios reveal important insights about blind spots in the product design and clarify design changes that may have to be made in the future.

Schedule and location

The prototype is located in the Schindler China office. All testers are welcomed in the office lobby and then move to the prototype area to complete the test session.

The schedule includes a 3-day testing process and a post-test survey after the prototype has been used for 1 week.

Sessions

The test session lasts for 40 minutes. It includes:

  • 00:00–00:05 Presentation and warm up. Place: Lobby
  • 00:05–00:15 Ethnography open questions. Place: Lobby
  • 00:15–00:35 Task, scenario completion. Place: Prototype area.
  • 00:35–00:40 Wrap-up and satisfaction survey. Place: Lobby

Equipment

This is the list of equipment used during the test:

Test recording:

  • Security camera video recording
  • Voice recording
  • Timer

Test facilities:

  • QR scanning app as the touchless prototype
  • Printed QR card as a comparison condition for the touchless prototype

Scenario completion gadgets:

  • Phone
  • Heavy bags

Participants

We will have 20 testers in total, spread across 3 different days. All of them will go through the same testing questions and scenarios.

Each tester will need to sign a consent form, and the data will be kept anonymous.

Qualitative data

To capture qualitative data, we will use two different processes.

1. Open questions: open questions are used to avoid biased data. Moreover, the tester is encouraged to talk about personal experiences and anecdotes around COVID-19 and how their behaviour has changed over the last months.

2. Roadmap: qualitative data is recorded at different levels (thoughts, impressions, feelings and reactions) in the form of a roadmap. This way we collect user experience data for every scenario and can detect improvement points for the product based on the experience recorded.

Quantitative metrics

  1. Effectiveness: The accuracy and completeness with which users achieve specified goals
  2. Efficiency: The resources expended in relation to the accuracy and completeness with which users achieve goals.
  3. Satisfaction: The comfort and acceptability of use.

Roles

During the test there will be 2–3 people involved:

Facilitator: the facilitator conducts the test, driving the conversation flow with the tester and making the person feel comfortable.

Observers: the observer(s) stay in the background, collecting all the data while actively listening to and observing the tester’s behaviour.

Collaboration between the different roles is essential, as the most important insights come from qualitative measurements and their later reading and interpretation.

Moreover, after every test, the team takes 5 minutes to reflect on fresh ideas and thoughts about the tester, capturing the most significant information in 3 areas: behaviours, needs and goals.

STEP 3: Lean UX Testing

Even the best design won’t guarantee perfection.

3, 2, 1, action!

After defining all the aspects of the test, surprise surprise… it didn’t work as expected! What a tragedy!

Well, welcome to the day-to-day life of an innovation strategist. Plans are necessary and serve as guidance, but they will never work out 100%. But hey, as in many other aspects of design and innovation, the most important thing is to analyze the problem and iterate as fast as possible. And that’s exactly what we did:

UX test day 1

The first day was very exciting for us; everything about the testing was a fresh experience.

After several participants, we started to see some interesting patterns. For instance,

  • some complained that the sensors of the touchless elevator were not sensitive enough,
  • some preferred to call the elevator with a card or key rather than with a mobile phone,
  • some showed frustration when they had difficulty using the touchless experience,
  • some followed others in using the touchless experience,
  • and some were not afraid of touching the elevator surfaces, since they thought the COVID-19 outbreak in China was almost over.

During the test, we also found that the note-taking templates we had prepared were not easy enough to use. At the end of the day, we took some time to reflect and decided to simplify the templates: instead of taking pages of notes, we would use a single double-sided page to record all the information and test results for one participant.

UX test day 2

The second day brought few surprises, because we mostly saw the same patterns as on the first day. Some interesting insights emerged from specific participants, but they were not strong enough to form persuasive patterns.

This became the pivot point of our testing process. We felt we would not learn much by continuing the existing process with the remaining participants; instead, we wanted to dig deeper into the patterns we had observed in the finished tests. During the retrospective, we defined 2 situations we wanted to verify:

  • In the first situation, we asked the participants to imagine themselves in Europe (at that time the COVID-19 situation in Europe was serious) rather than in China. We wanted to raise the pressure and sense of panic and see how they would react to elevators under such conditions.
  • In the second situation, we asked the participants to complete all test scenarios together with friends. We wanted to expose them to an environment with social interaction and see how they would react to elevators when other people were present.

With these new situations, we got very excited about the third day. The pivot gave us a chance to dig deeper from a psychological perspective.

UX test day 3

During the third day, we gained fresh insights from the 2 situations we had introduced as the pivot of our test.

In the first situation, where participants imagined the pandemic situation in Europe, they interacted with the touchless service more often than the participants on the first day.

In the second situation, where participants finished the tasks together with their friends, they did not show much preference for the touchless experience even when exposed to social interaction.

After finishing all three days of testing, we had a wealth of qualitative and quantitative data. We were ready to analyze it and draw conclusions to share with all stakeholders.

What We Learned

Analysis

With the data collected during the 3-day test, we performed both qualitative and quantitative analysis.

For the qualitative data, we organized the participants’ behaviors, comments and pain points. Some patterns had already been revealed and captured during testing, and we summarized them into insightful statements backed by sufficient evidence.

For the quantitative data, we collected and calculated metrics such as Effectiveness, Efficiency and the System Usability Scale.

Effectiveness: The accuracy and completeness with which users achieve specified goals

  1. Completion Rate: the completion rate is calculated by assigning a binary value of ‘1’ if the test participant manages to complete a task and ‘0’ if they do not (a small calculation sketch follows this list).
  2. Number of errors: another measurement involves counting the number of errors the participant makes when attempting to complete a task. Errors can be unintended actions, slips, mistakes or omissions made while attempting the task.
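
As a rough illustration of these two effectiveness measures (not the actual analysis scripts we used, and with made-up numbers), the calculation could look like this:

```python
# Completion rate and error count for one task, observed across participants.
# 1 = the participant completed the task, 0 = they did not.
completions = [1, 1, 0, 1, 1, 0, 1, 1]

# Errors observed per attempt (unintended actions, slips, mistakes, omissions).
errors = [0, 2, 3, 0, 1, 4, 0, 1]

completion_rate = sum(completions) / len(completions) * 100
average_errors = sum(errors) / len(errors)

print(f"Completion rate: {completion_rate:.0f}%")          # 75% in this example
print(f"Average errors per attempt: {average_errors:.2f}")
```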

Efficiency: The resources expended in relation to the accuracy and completeness with which users achieve goals.

  1. Time-based efficiency: efficiency is measured in terms of task time, that is, the time (in seconds and/or minutes) the participant takes to successfully complete a task.
  2. Overall Relative Efficiency: the overall relative efficiency is the ratio of the time taken by the users who successfully completed the task to the total time taken by all users (see the sketch below).
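
The two efficiency metrics can be sketched as follows. This is only an illustration under assumed data (one task, four participants, invented times), not the real test results:

```python
# For one task: n = 1 if the participant succeeded, 0 otherwise; t = time in seconds.
results = [
    {"n": 1, "t": 30},
    {"n": 1, "t": 45},
    {"n": 0, "t": 80},   # failed attempt
    {"n": 1, "t": 25},
]

num_users = len(results)

# Time-based efficiency: average of (success / time) over users, in goals per second.
time_based_efficiency = sum(r["n"] / r["t"] for r in results) / num_users

# Overall relative efficiency: time spent by successful users divided by the total
# time spent by all users, expressed as a percentage.
overall_relative_efficiency = (
    sum(r["n"] * r["t"] for r in results) / sum(r["t"] for r in results) * 100
)

print(f"Time-based efficiency: {time_based_efficiency:.3f} goals/second")
print(f"Overall relative efficiency: {overall_relative_efficiency:.0f}%")
```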

System Usability Scale (SUS): this allows participants to give a subjective assessment of the usability of the prototype. The industry average SUS score is 68.
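
SUS scoring follows a fixed recipe: each participant answers 10 questions on a 1-5 scale, odd-numbered answers contribute (answer - 1), even-numbered answers contribute (5 - answer), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal sketch of that recipe, with invented responses rather than our real data:

```python
def sus_score(responses):
    """Convert one participant's 10 answers (1-5 scale) into a 0-100 SUS score."""
    assert len(responses) == 10
    total = 0
    for i, answer in enumerate(responses):
        if i % 2 == 0:          # odd-numbered questions (positively worded)
            total += answer - 1
        else:                   # even-numbered questions (negatively worded)
            total += 5 - answer
    return total * 2.5

# Example responses for two participants (illustrative only).
participants = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [3, 3, 4, 2, 3, 2, 4, 3, 3, 2],
]

scores = [sus_score(p) for p in participants]
print(f"Average SUS score: {sum(scores) / len(scores):.1f} (industry average: 68)")
```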

Report

Our team took time to write a test report to share with the stakeholders at Schindler. The report was revised several times until we had a comprehensive version.

The outline of the test report is as follows:

With this test report, we could get everyone on the same page about the situation, and it helps the decision makers choose which direction to move in next.

Reflections

Knowing when to pivot your testing methods

As we mentioned previously, in user experience testing a small amount of data is enough to reveal patterns; beyond that point, more testing does not provide much additional value.

When there are already enough cases to show patterns in users’ behaviors, we can consider pivoting our testing methods to dig deeper into those patterns, provided we have enough test resources (time, participants, budget, etc.). We could also consider A/B testing with a controlled experiment.

Reveal the hypotheses and limitations of our testing methods

Always keep in mind that the prototype is built on hypotheses about the target users and their needs, and that the testing is performed under limitations imposed by the budget, the environment or other factors.

We should disclose these to the stakeholders to avoid misunderstandings and misleading conclusions. In this way, we help reduce the risk of product failure at an early stage.

Social pressure as never before

Social bias appears in every testing situation, but in times of crisis this phenomenon increases dramatically.

We are social animals, which means our actions and words are presented in a way that makes us look good to others, even when they are inaccurate. This is so deep-rooted in our behavior that we disdainfully label those who don’t follow these norms as anti-social. For instance, a middle-aged elevator user riding with colleagues would never press the elevator button with their bare hand. Touching the button is popularly seen as highly irresponsible because it puts public health at risk, and so even if there is no usability issue and the user has no trouble operating the elevator, they might still complain about it.

A skilled researcher takes the effort to reframe questions in a way that works around social desirability bias (e.g.: If you could design this better for your dad or mom, what would you do?).

Wrap-up

Finally!

We found this usability test project impressive, especially given the pandemic situation, and we are curious about other Design Sprint stories from this special time. Your ideas and opinions on these questions are welcome:

  • What are the main differences or points to take into account when designing in times of such high uncertainty?
  • What difficulties could Design Sprint masters face while social distancing is necessary, and how can they be resolved?
  • What opportunities does post-pandemic consumerism offer?

Thanks for reading!
