User Research through Interviews
Analysis of research through Affinity Mapping
Creation of chat flows and development of a POC prototype in Voiceflow
User Testing sessions with internal stakeholders
Refinement of the Figma prototype based on feedback
1 UX Researcher, 2 UX Designers & 1 Developer
Research Phase -
January 2022 - May 2022
Design and Validation Phase -
August 2022 - December 2022
HQ is a security management platform that allows users to view and remotely monitor their building and respond to notifications. Queries related to the HQ application increased over 600 times with the onset of COVID-19. The aim of the project is to create a conversational interface for the HQ application so that customers can get timely help with any queries.
Research showed that Test Mode and Service Dispatch were the two categories where users required the most assistance.
The scope of this project focuses on helping users solve basic queries related to Test Mode and Service Dispatch.
Here are some key insights:
Add Context - The chat flow needs to know whether the user is logged in or logged out in order to work correctly.
Language of the Bot - More human, natural language is needed to make users comfortable while talking to a chatbot.
Input Type/Flow - Certain flows need changes to match how users would like to interact with a chatbot.
Error Handling - Better intent recognition and more flexible input interactions for users (see the sketch after this list).
Smart Interactions - Auto-detect or suggest locations for better recognition.
Affirmations & Confirmations - Appropriate formatting and confirmations that reflect the provided input, reassuring users that they are being listened to.
HQ is a security management platform that allows users to view and remotely monitor their building and respond to notifications. With the help of this application, users can monitor security data from multiple locations in one place.
Bottom-Up Approach - In this approach, we follow an intent-first process: the conversation is broken down into its intents, and the interfaces are designed around those intents. We needed to take a bottom-up approach for this project, as the end users have clear, repetitive, but distinct needs.
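As a rough, hypothetical sketch of what intent-first means in practice, each identified intent owns its own small flow of steps, and the chatbot's interface is assembled from those per-intent flows rather than from one top-down script. The step wording below is assumed for illustration; the real flows were built in Voiceflow.

```python
# Hypothetical intent-first structure: each intent owns its own flow of prompts.
# The flows and wording are illustrative; the real flows were built in Voiceflow.

FLOWS = {
    "test_mode": [
        "Which location would you like to place on test?",
        "How long should the test last?",
        "Please confirm the test window before we submit it.",
    ],
    "service_dispatch": [
        "Which location needs service?",
        "Briefly describe the issue you are seeing.",
        "Please confirm the details before we dispatch a technician.",
    ],
}

def print_flow(intent: str) -> None:
    """Print the ordered prompts that make up one intent's flow."""
    print(f"--- {intent} ---")
    for step_number, prompt in enumerate(FLOWS[intent], start=1):
        print(f"{step_number}. {prompt}")

# Designing bottom-up means starting from these per-intent flows
# and composing the chatbot's interface out of them.
print_flow("test_mode")
print_flow("service_dispatch")
```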
Olivia is a conversational bot that acts as a 24-hour, 365-day customer service center. She answers technical, administrative, and security questions. Olivia is designed to redirect customers to already published content and also lets them carry out certain tasks within the bot itself.
Interviews were conducted with 6 customers of Securitas Technology. Below are high-level insights gathered from the interviews.
I would prefer a conversational bot if it can help me address my problem.
I would rather use a chat on my browser window than keep a call waiting on my phone.
Getting the right help at the right time with little scope for confusion is my priority.
Insights from the affinity mapping exercise were useful but too broad to proceed with. We decided to go back to our initial research to narrow down to 1-2 clear problem areas that we could implement in a chatbot. The CallRail application helped us in this regard.
The following are some of the key terms that came up in calls that were abandoned midway without reaching a resolution.
Filtering out the general terms (outlined circles), we can identify the key jobs that users needed help with (filled circles). Service, Test, and Accounting are some of the key jobs identified.
Cross-referencing these findings with our user interview results, we narrowed the chatbot's MVP down to Service Dispatch and Test Mode.
They are basic tasks that every user needs to execute at some point, so implementing them well would reach a large user base.
Edge cases in these two scenarios are rare, so users are likely to be able to complete these tasks without intervention from a customer service representative.
Listening to calls in the CallRail application helped us understand the general flow of conversations when users spoke to customer service representatives; it is shown on the left.
Since this order of conversation is something users were already familiar with, we decided to use it as the standard on which to base our chat flows.
The above flows were created in Voiceflow, which allowed us to test a more natural conversation flow rather than sticking to a rigid, tree-like structure. Users can interact with the system naturally, using either open text or buttons to communicate with the bot. The basic flows for both intents are described below.
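Before those flow walkthroughs, here is a small, hypothetical sketch of the open-text-or-buttons idea: a single step can present buttons yet still accept a typed answer by resolving both input types to the same option. The option labels and synonyms are assumptions for illustration, not how Voiceflow handles this internally.

```python
# Hypothetical sketch: one step accepts either a button tap or free text,
# and both resolve to the same option. Labels and synonyms are illustrative.

OPTIONS = {
    "Test Mode": ["test", "place on test", "testing"],
    "Service Dispatch": ["service", "dispatch", "technician", "repair"],
}

def resolve_choice(user_input: str) -> str | None:
    """Map a button label or a typed phrase to one of the known options."""
    text = user_input.strip().lower()
    for option, synonyms in OPTIONS.items():
        # A button tap sends the label verbatim; typed text is matched on synonyms.
        if text == option.lower() or any(s in text for s in synonyms):
            return option
    return None

print(resolve_choice("Service Dispatch"))            # button tap -> "Service Dispatch"
print(resolve_choice("can you send a technician?"))  # open text  -> "Service Dispatch"
print(resolve_choice("put my site on test please"))  # open text  -> "Test Mode"
```

Treating button labels and typed phrases as the same input path is what lets the conversation stay flexible instead of following a strict tree.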
Testing was conducted with the internal team at Securitas Technology: stakeholders who had experience working on the company's current conversational offerings, as well as members of the design team.
Testing was conducted over Zoom and users were asked to complete the following 3 tasks.
At each stage of the conversation with the chatbot, users were encouraged to share how they felt the conversation was going, whether the chatbot was able to understand them, and how they thought it should proceed.
Insights gathered during the test sessions were analyzed and the changes identified were implemented in a Figma prototype.
The key takeaway from the testing sessions was to make the chat flows more detailed, with a greater focus on UX writing and the way the chatbot interacts with the user.
A well-written chat prompt was comprehended better and made the user more receptive to what the chatbot was saying.
This can be expected to improve the chatbot's task completion rate and reduce its conversation abandonment rate.
The team presented the project poster at the Fall 2022 Capstone Show at Luddy School of Informatics, Computing and Engineering, Indianapolis.
At the end of the night, the team won the award for the Best Research Project.