Red Cross Alexa Skill

Advised on the product design of an Alexa skill at the American Red Cross

I. Context & Background

During the COVID-19 pandemic I got in touch with the Innovation Lab at the American Red Cross, which had made a commitment to state and federal authorities to expand its free product offerings as a service to the public. I joined the team as a volunteer product consultant focused on emerging technologies. The goal of my six-month engagement was to improve the performance of the Red Cross “First Aid” Amazon Alexa skill, which dispenses first aid advice through Alexa’s conversational AI interface.

Users called on the skill to learn about topics and take quizzes related to health and safety. However, parts of the experience were verbose and confusing, causing users to abandon their interactions and migrate to competitors’ offerings. The business goals of the Alexa skill were unclear, and the team needed help finding a direction. The American Red Cross needed someone to help improve its AI offerings and boost engagement.

I had experience building chatbots in my previous role at Transamerica, so I knew what to look for. There are certain elements that all chatbots share, regardless of their underlying software architecture. For example, every chatbot’s content is written as a series of short dialogs that mirror what happens in a real-life conversation: a decision tree with several hundred to several thousand permutations that the user can trigger in myriad ways. In a human conversation, there are perhaps a few hundred responses (of reasonable statistical likelihood) that a person might give to what someone else has just said. A chatbot’s repertoire is substantially smaller: there is a very limited number of things it can say, because each response still has to be manually programmed, although that is beginning to change. That’s why people think chatbots are dumb and dislike talking to them. Alexa skills are essentially voice-activated chatbot dialogs fed through Amazon’s proprietary text-to-speech software.
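To make that concrete, here is a minimal sketch in plain Python of how a voice skill’s dialog tree maps a handful of recognized replies to the next turn in the conversation. The node names, prompts, and utterances are invented for illustration and are not taken from the actual Red Cross skill.

```python
# Minimal sketch of a voice-skill dialog tree. Each node holds the prompt the
# assistant speaks plus the small set of recognized replies that lead onward.
# Node names, prompts, and utterances are invented for illustration only.

DIALOG_TREE = {
    "welcome": {
        "prompt": "Welcome to First Aid. Say 'learn' for a topic or 'quiz' to test yourself.",
        "transitions": {"learn": "topic_menu", "quiz": "quiz_menu"},
    },
    "topic_menu": {
        "prompt": "Which topic? For example, say 'bee stings' or 'broken bones'.",
        "transitions": {"bee stings": "bee_stings", "broken bones": "broken_bones"},
    },
    "bee_stings": {
        "prompt": "Remove the stinger, wash the area, and apply a cold pack.",
        "transitions": {"next": "topic_menu", "quiz": "quiz_menu"},
    },
    "fallback": {
        "prompt": "Sorry, I didn't catch that. Say 'learn' or 'quiz'.",
        "transitions": {"learn": "topic_menu", "quiz": "quiz_menu"},
    },
    # ...remaining nodes omitted for brevity
}


def next_node(current: str, utterance: str) -> str:
    """Return the next dialog node; anything outside the authored transitions
    lands on a generic re-prompt."""
    transitions = DIALOG_TREE[current]["transitions"]
    return transitions.get(utterance.lower().strip(), "fallback")
```

Calling `next_node("welcome", "Quiz")` returns `"quiz_menu"`, while anything the authors never anticipated lands on the fallback node, which is exactly where a verbose or confusing prompt pushes users to give up.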

I came aboard to polish the Alexa experience at Red Cross

II. Objectives

The first thing I noticed after joining the team was that the overall project goals of the Alexa skill were ambiguous. Yes, the organization had built and launched an Alexa skill, like many of its peers. The skill functioned and could provide basic responses to users, but it wasn’t easy to use. Near the end of my investigation, I got the feeling that the team had fallen into a very real trap: they built an Alexa skill because everyone else was doing it.

Keeping up with the Joneses is as American as apple pie. But in a world where we are all copying each other to score points or look good, the scene becomes a bizarre mash-up of self-gratifying behavior, often for the sake of staying relevant. I’m not saying that’s what happened here, although I think there is always a risk of it happening in any large organization. My foundational research also revealed that the previous product manager had walked out on the job (and taken any semblance of a product vision with him).

As Dr. Product, my goal was to rescue a suffering patient who had been neglected, abused, and nearly killed. I devised the following protocols to resuscitate poor Alexa: enhance the skill’s usability, increase completion rates, audit the content for clarity, make the product more effective in training seminars, eliminate dead ends, identify KPIs, improve the quiz content, simplify navigation, and improve the overall experience.

Interesting concept. Would anyone use it?

Basically, they said to me, “We have this thing. We don’t know what it does. Make it better.” Sure, I can help with that!

My personal objectives were:

  1. Understand user challenges
  2. Improve user experience
  3. Develop a compelling use case
  4. Recommend areas of improvement
  5. Hand off recommendations to offshore development team

III. Methodology

User Testing

I bought an Alexa speaker and got to work. This was my first time ever using the product, and I thought it would be a fitting way to put myself in the shoes of our average user. We did not have the budget for any real-world user testing, so I would have to simulate it through my own effort.

The setup process was a chore. It was difficult to get started with the Alexa skill, and it was unclear how it was supposed to work. Not a good sign. After a while, I did get the Red Cross skill to turn on, but then things got worse. The instructions were verbose and clunky. I listened to Alexa recite endless menu options and selections I was supposed to state: “Available options are test content, quizzes, vocabulary…” What? Wasn’t this supposed to be a conversation?

Content and navigation were sore spots. Parts of the experience were difficult to access unless you had built the product yourself. It was hard to move between subjects (bee stings versus broken bones, for example) and hard to find one’s way around the experience at all. This validated reports we had received from users, many of whom had stopped using the skill in favor of rivals like WebMD. According to my quick-and-dirty, self-administered user testing, we were already failing.

Improve UX

To improve navigational efficiency, I journey-mapped the user flows and developed a hierarchical taxonomy of commands that could be used at any point in the experience to navigate home, to the next topic, or wherever else the user wanted to go. I built dialog flows and made content improvements, such as incorporating commonly used words and phrases into the dialogs. I also suggested ways to let customers reach subjects in fewer steps. This was the result of countless hours of testing and refinement.
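As an illustration of the idea (the production skill was configured in the Orbita bot builder, and these command names are assumptions rather than the skill’s actual vocabulary), a global navigation layer resolves a small set of commands before any topic-specific handling runs, so “home,” “next,” or “repeat” works anywhere in the experience:

```python
# Sketch of a global navigation layer. A few commands are honored at every
# point in the experience before topic-specific handling runs. Command names
# and targets are illustrative assumptions, not the skill's real taxonomy.

GLOBAL_COMMANDS = {
    "home": "welcome",        # restart at the main menu
    "main menu": "welcome",
    "next": "NEXT_TOPIC",     # advance to the following topic in the module
    "repeat": "REPEAT_LAST",  # re-read the last prompt
    "help": "help",
}


def route(utterance: str, local_transitions: dict[str, str]) -> str:
    """Resolve global commands first, then the current node's own transitions,
    then fall back to a re-prompt."""
    text = utterance.lower().strip()
    if text in GLOBAL_COMMANDS:
        return GLOBAL_COMMANDS[text]
    return local_transitions.get(text, "fallback")
```

The point of the taxonomy was that these escape hatches exist everywhere, so a user stuck in a verbose submenu can always say “home” instead of abandoning the session.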

Use Case

The Alexa skill was something of a zombie at the beginning. It had no home in the organization and did not serve a specific business goal. After interviewing subject matter experts involved in driving traffic to the company’s website, I asked whether the Alexa skill was being monetized. Unsurprisingly, it was not. I suggested offering touch points in the experience where users could request additional information from the Red Cross website if they did not get an answer to their first aid question from Alexa. For example, if a user asked for the signs of infection, Alexa can certainly describe them, but there may be a benefit to augmenting that information with visuals. I incorporated a “show me” command that let a user ask Alexa to pull up the Red Cross website on the specific subject at hand. This began driving traffic to the website and grew the sales funnel.
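Conceptually, the hand-off looked like the sketch below. The production behavior was configured in Orbita rather than hand-coded, and the topic-to-URL mapping and URLs here are placeholders, not real Red Cross addresses.

```python
# Sketch of the "show me" hand-off from voice to web. The mapping and URLs are
# placeholders for illustration; the production behavior was configured in the
# Orbita bot builder rather than hand-coded.

TOPIC_PAGES = {
    "signs of infection": "https://www.redcross.org/placeholder/infection",  # hypothetical path
    "bee stings": "https://www.redcross.org/placeholder/bee-stings",         # hypothetical path
}


def handle_show_me(topic: str) -> dict:
    """Speak a short confirmation and surface a link to the matching web page."""
    url = TOPIC_PAGES.get(topic.lower().strip())
    if url is None:
        return {"speech": f"I don't have a page on {topic} yet. Try another topic."}
    return {
        "speech": f"I've sent a link about {topic} to your Alexa app.",
        "card": {"title": f"First Aid: {topic.title()}", "url": url},  # rendered in the companion app
    }
```

Every “show me” response is also a measurable touch point, which is what turned the skill from a dead end into the top of a funnel.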

Recommend Improvement Areas

Ultimately, the areas of improvement I identified during my tenure were (1) improve the onboarding process, (2) improve inter- and intra-module navigation, (3) allow users to easily move between the Alexa and web experiences, (4) embrace common vernacular in dialog interactions, and (5) broaden the content base to improve search outcomes.

Hand Off Recommendations

To finish the project, I wrote a series of epics and user stories which captured my findings and joined numerous sprint planning meetings where I explained my recommendations to business analysts, front-end developers, back-end developers, and software architects. 

Defining an Alexa skill product vision is no easy task

IV. Results

When I arrived at Red Cross, the Alexa Skill had good bones, but needed a strategic audit to unlock its full potential. After clarifying our goals, recommending changes, and mapping customer journeys for the development team, I left the experience in a drastically improved state. My achievements consisted of:

  • Revised 160+ pieces of content and quiz questions
  • Improved navigation through information architecture
  • Revised delay and pause cadence in the Orbita bot builder (illustrated below); fallback rate reduced 10%
  • Improved user satisfaction, validated in follow-up surveys
  • Defined a clear use case
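For context on the cadence item above: pause length in Alexa responses is typically controlled with SSML break tags. The sketch below is a generic illustration of that technique in Python, not the Orbita configuration that was actually tuned, and the timing value is an example rather than the production setting.

```python
# Generic illustration of pause cadence using SSML break tags, which Alexa's
# text-to-speech honors. The 600 ms value is an example, not the tuned setting.

def menu_ssml(options: list[str], pause_ms: int = 600) -> str:
    """Read menu options with a brief, consistent pause between each item."""
    separator = f', <break time="{pause_ms}ms"/> '
    return f"<speak>You can say: {separator.join(options)}.</speak>"


print(menu_ssml(["bee stings", "broken bones", "quizzes"]))
# <speak>You can say: bee stings, <break time="600ms"/> broken bones, <break time="600ms"/> quizzes.</speak>
```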

What started as an experiment at Red Cross was now good enough to be promoted by the marketing department for a mass deployment to Red Cross’s member base.

V. Reflection

It’s never too late to understand your users. When I joined my first Red Cross meeting, I had little understanding of what the team’s goal actually was, because it wasn’t clear. As product manager, my role was to help define the product vision and steer the product toward user needs. Working with voice assistants is unique because cognitive overload sets in very quickly, so it’s important to state things simply and clearly. Providing too much qualifying content before the next step confuses users. Language (all content) should always push the user forward. Watch out for redundancies and be very selective about what information is given upfront.

Ultimately, I gained a lot of satisfaction from applying my chatbot skills to another emerging area of technology, and I was surprised to see how well they translated. I look forward to similar opportunities in the future.
