
Securita: Conversational Interface Design

Securitas Technology

UX Research | UX Writing | Chatbots

iPhone mockup of the Securita project

My contribution

User Research through Interviews
Analysis of research through Affinity Mapping
Creation of chat flows and development of POC prototype on Voiceflow
User Testing sessions with internal stakeholders
Refinement of Figma prototype as per feedback

The team

1 UX Researcher, 2 UX Designers & 1 Developer

Timeline

Research Phase -
January 2022 - May 2022

Design and Validation Phase -
August 2022 - December 2022

Project Constraints

  • Introduction to CUI - Designing for a conversational interface is very different from a traditional design cycle.
  • Limitations during the Design Phase - The conceptualized chat flows could not be implemented fully because certain aspects required API and server access.
  • Limitations during the Validation Phase - Due to the above constraint, we tested with internal stakeholders rather than end users, since not all functionality was fully working.

Project overview

Project Background

HQ is a security management platform that allows users to view and remotely monitor their buildings and respond to notifications. Queries related to the HQ application increased more than 600-fold with the onset of COVID-19. The aim of the project was to create a conversational interface for the HQ application so that customers can get timely help with their queries.




Our focus for the project

Research showed that Test Mode and Service Dispatch were the two categories where users required the most assistance.

The scope of this project focuses on helping users solve basic queries related to Test Mode and Service Dispatch.

Key features

Auto-suggestion of location from HQ account
In-App Widgets for Date and Time
Smart Bot Responses based on existing knowledge
Detailed Responses to user queries
Confirming information before taking any action
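
To make the last feature concrete, here is a minimal Python sketch of a confirm-before-action step gating the chat flow. The function names, messages, and injected callables are hypothetical and for illustration only; the actual bot was prototyped in Voiceflow, so this is not its implementation.

    # Minimal, hypothetical sketch of the confirm-before-action pattern.
    # Names, messages, and callables are assumptions, not the production bot.

    def confirm_before_action(summary, ask_user):
        """Show the user a summary of the pending action and wait for a yes/no."""
        reply = ask_user(f"Just to confirm: {summary} Shall I go ahead? (yes/no)")
        return reply.strip().lower() in {"yes", "y", "sure", "go ahead"}

    def put_system_on_test(location, start, end, ask_user, dispatch):
        """Only perform the action once the user has explicitly confirmed it."""
        summary = f"put the system at {location} on test from {start} to {end}."
        if confirm_before_action(summary, ask_user):
            dispatch(location, start, end)  # whatever backend call performs the action
            return "Done! Your system is now in test mode."
        return "No problem, I haven't changed anything. What would you like to do instead?"

    # Example usage with stand-in callables:
    # put_system_on_test("Main Office", "2 pm", "4 pm", ask_user=input, dispatch=print)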

Product validation

Validation was conducted with internal stakeholders on the Conversational and Design teams.

Here are some key insights:

Add Context - The chat flow needs to know whether the user is logged in or logged out in order to work correctly.

Language of the Bot - More human, natural language is needed to make users more comfortable while talking to a chatbot.

Input type/Flow - Certain flows need changes to match how users prefer to interact with a chatbot.

Error Handling - Stronger intent recognition and better handling of unexpected user input (a small sketch of this fallback pattern follows after these insights).

Smart Interactions - Auto-detect or suggest locations for better recognition.

Affirmations & Confirmations - Appropriate formatting and confirmations that reflect the provided input, so users feel they are being listened to.
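
To illustrate the error-handling feedback, here is a small Python sketch of keyword-based intent matching with a fallback re-prompt. The intent names and keyword lists are assumptions based on the project scope, not the bot's actual recognition model.

    # Hypothetical sketch of intent matching with a fallback re-prompt.
    # Keyword lists and intent names are illustrative assumptions only.
    from typing import Optional

    # More specific intents are listed first so their keywords win on overlap.
    INTENT_KEYWORDS = {
        "check_status": ["status", "existing ticket"],
        "service_dispatch": ["service", "technician", "new ticket", "dispatch"],
        "test_mode": ["test", "testing"],
    }

    def match_intent(utterance: str) -> Optional[str]:
        """Return the first intent whose keywords appear in the user's message."""
        text = utterance.lower()
        for intent, keywords in INTENT_KEYWORDS.items():
            if any(keyword in text for keyword in keywords):
                return intent
        return None

    def handle_turn(utterance: str) -> str:
        """Route to a flow when an intent is recognized; otherwise re-prompt with options."""
        intent = match_intent(utterance)
        if intent is None:
            # Fallback: acknowledge, then offer the supported options instead of a dead end.
            return ("Sorry, I didn't quite catch that. I can help you put a system on test, "
                    "create a service ticket, or check the status of an existing ticket.")
        return f"Routing to the {intent.replace('_', ' ')} flow..."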

Impact Created

Potential to reduce customer service calls substantially compared with the current scenario, where HQ-related queries have grown more than 600-fold

How did we achieve this?

See our design process

Research phase

Domain research

About HQ:

HQ is a security management platform that allows users to view and remotely monitor their buildings and respond to notifications. With the help of this application, users can monitor security data from multiple locations in one place.

Other tools explored:

CallRail is an application where customer service calls are recorded and stored for future review.
We were able to listen to customer calls and understand common queries and resolution flows.
Drift Tableau is a chatbot system where a tree-like structure defines a finite set of options presented to end users. Many current marketing chatbots are built on Drift.

Our approach to CUI:

Bottom-up approach - In this approach, we follow an intent-first process: the conversation is broken down into its intents, and the interfaces are designed around those intents. A bottom-up approach suits this project because the end users have clear, repetitive, but distinct needs.
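
As a rough illustration of what "intent first" can look like in practice, the Python sketch below models each intent together with the slots the bot must collect before acting. The intent and slot names are assumptions drawn from the project scope, not the real schema.

    # Hypothetical, bottom-up model of the conversation: a small set of intents,
    # each declaring the information the bot must gather before it can act.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Intent:
        name: str                  # what the user is trying to accomplish
        required_slots: List[str]  # information the bot must gather before acting
        opening_prompt: str        # how the bot starts this part of the conversation

    INTENTS = [
        Intent("put_system_on_test",
               ["location", "start_time", "end_time"],
               "Which location would you like to put on test, and for how long?"),
        Intent("create_service_ticket",
               ["location", "issue_description"],
               "Which location needs service, and what seems to be the problem?"),
        Intent("check_ticket_status",
               ["ticket_number"],
               "What's the ticket number you'd like me to look up?"),
    ]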

Key competitor in CUI for security:

ADT Olivia

Olivia is a conversational bot that acts as a 24-hour, 365-day customer service center. She answers technical, administrative, and security questions. Olivia is designed to redirect customers to already published content and also lets them carry out certain tasks within the bot itself.

Impact of ADT Olivia

  • +10K Monthly conversations
  • 24/7 support
  • 95% interactions end with a resolution

User research insights

Interviews were conducted with 6 customers of Securitas Technology. Below are high-level insights gathered from the interviews.

What they said:

I would prefer a conversational bot if it can help me address my problem.
I would rather use a chat on my browser window than keep a call waiting on my phone.
Getting the right help at the right time with little scope for confusion is my priority.

Analysis phase

Interview coding and affinity mapping

Key findings:

  • Affinity Mapping helped identify 6 broad categories of usage patterns within the HQ application.
  • Putting the security system on test, checking alarm and ticket status, and generating reports were the most commonly used features.

Defining a user profile

Scoping down for the initial phase

Insights from the affinity mapping exercise were useful but too broad to proceed with. We decided to go back to our initial research to narrow down to 1-2 clear problem areas that we could implement in a chatbot. The CallRail application helped us in this regard.

The following are some of the key terms which were used when calls were abandoned mid-way without reaching a resolution stage.

Filtering out the general terms (outlined circles), we can identify the key jobs users needed help with (filled circles). Service, Test, and Accounting emerge as some of the key jobs.

Cross-referencing these findings with our user interview results, we narrowed the chatbot MVP down to Service Dispatch and Test Mode.

Why Test Mode and Service Dispatch?

They are basic tasks that all users need to execute at some point, so implementing them reaches a large user base.
Edge cases in these two scenarios are rare, so users are likely to complete these tasks without intervention from a customer service representative.

Ideation phase

Understanding conversational flows

Listening to calls on the CallRail application helped us understand the general flow of conversations when users spoke to customer service representatives, shown on the left.

Since this order of conversation was something the users were already familiar with, we decided to use it as the standard on which to base our chat flows.

Below are the chat flows for our main offerings, Service Dispatch and Test Mode:

Voiceflow prototype

The above flows were created in Voiceflow, which allowed us to test a more natural conversation flow rather than sticking to a tree-like structure. Users can interact with the system in a more natural way, using open text or buttons to communicate with the bot. The basic flows for both intents are described below.
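
As an illustration of that dual input style, the Python sketch below shows how a single turn could accept either a button tap or open text and route to the same flow. The button ids and matching rules are assumptions for illustration; this is not Voiceflow's actual API.

    # Hypothetical sketch of one chat turn accepting a button tap or open text.
    # Button ids and matching rules are illustrative assumptions only.

    BUTTON_TO_INTENT = {
        "btn_test_mode": "test_mode",
        "btn_service_dispatch": "service_dispatch",
    }

    def route_turn(button_id=None, free_text=None):
        """Prefer an explicit button choice; otherwise try to interpret free text."""
        if button_id in BUTTON_TO_INTENT:
            return BUTTON_TO_INTENT[button_id]
        if free_text:
            text = free_text.lower()
            if "test" in text:
                return "test_mode"
            if "service" in text or "ticket" in text:
                return "service_dispatch"
        return None  # unresolved: the bot re-prompts, falling back to buttons

    # Both of these resolve to the same flow:
    # route_turn(button_id="btn_test_mode")
    # route_turn(free_text="I need to put my system on test")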

Service Dispatch

Test Mode

The above flows were tested with 5 internal stakeholders as part of the testing phase.

Testing and redesign

Test sessions

Testing was conducted with the internal team at Securitas Technology: stakeholders with experience working on the company's current conversational offerings, as well as members of the design team.

Testing protocol

Testing was conducted over Zoom and users were asked to complete the following 3 tasks.

  • Putting a system on test mode
  • Creating a new service ticket
  • Checking the status of an existing ticket

At each stage of the conversation with the chatbot, users were encouraged to share their opinions on how the conversation was going, whether they felt the chatbot understood them, and how they felt it should proceed.

Results and redesign

Insights gathered during the test sessions were analyzed and the changes identified were implemented in a Figma prototype.

The key takeaway from the testing sessions was to make the chat flows more detailed, with a greater focus on UX writing and the way the chatbot interacts with the user.

A well-written chat prompt was comprehended better and made the user more receptive to what the chatbot was saying.

This can be expected to have a positive effect on the task completion rate and the conversation abandonment rate of the chatbot.

Mapping flows for each scenario

Test Mode

Service Dispatch

Check status

Additional Artifacts

Capstone presentation

The team presented the project poster at the Fall 2022 Capstone Show at Luddy School of Informatics, Computing and Engineering, Indianapolis.

At the end of the night, the team won the award for the Best Research Project.
