After publishing our Animal Sounds skill, we focused on the development of the home emergency call service, which we are planning to publish in exactly 40 days. Today we are celebrating the halfway point.

So far, we have worked out the voice user interface for our skill and derived the intent structure for our MVP from it. Several approaches are suitable for developing such voice user interfaces. Besides a simple draft of a dialogue between the user and Alexa, flowcharts are excellent for visualizing dependencies and branching conversations. Step by step, we developed the design for the voice interaction in our skill with the help of (which we can highly recommend, since it is free and offers collaboration features similar to Google Drive).

As with the “chat” visualization, it is useful to outline a happy path first. Only the commands requested or expected of the user are added to the happy path, so the user can reach his or her desired result as quickly as possible (for example, completing the setup of the skill). Based on this happy path, the threads of the voice user interface can then be expanded horizontally or vertically to handle false, unexpected, or missing input accordingly. It is important to equip the skill for every possible case and to avoid a one-way path that does not allow the user to progress or, in the worst case, presents him or her with a generic error message (e.g. “Invalid response” or the like).

Besides the happy path, we decided on three further options which can be incorporated into every stage of the skill:

  1. No user input: If the user is silent, we utilize Alexa's reprompt feature and repeat the question to the user in a shortened and altered version.
  2. User says “help”: Alexa users can ask for help at any point. This redirects them to the HelpIntent, where the user receives advice on how to use the skill. It is recommended to offer the user individual support depending on where in the skill he or she currently is.
  3. Unexpected user input: If the user provides input we did not anticipate, we offer appropriate help.
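The three options above can be pictured as a simple per-turn dispatcher. This is only an illustrative sketch, not our production code: the handler, prompts, and the `is_expected` check are hypothetical stand-ins for the real skill logic.

```python
# Illustrative sketch: dispatching the three fallback cases described above
# for a single stage of the skill. Names and prompts are hypothetical.

def is_expected(user_input, stage):
    # Hypothetical check: during setup we expect spoken digits.
    return any(ch.isdigit() for ch in user_input)

def handle_turn(user_input, stage):
    """Return (speech, reprompt) for one turn at a given skill stage."""
    prompts = {
        "setup": ("Please tell me your emergency contact's phone number.",
                  "What is the phone number, digit by digit?"),
    }
    question, short_reprompt = prompts[stage]

    if user_input is None:                       # 1. no user input
        return (short_reprompt, short_reprompt)  # shortened, altered reprompt
    if user_input.strip().lower() == "help":     # 2. user asks for help
        return ("At this step I need your contact's number. " + question,
                question)
    if not is_expected(user_input, stage):       # 3. unexpected input
        return ("Sorry, I didn't get that. " + question, short_reprompt)
    return ("Thank you.", None)                  # happy path continues
```

Because every stage routes through the same three branches, no turn can end in a dead end: there is always either a reprompt or contextual help.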

Mistakes are easy to make, especially when entering the emergency contact's phone number. This is why the skill repeats the user's entry and asks a yes/no question to confirm whether it is correct.
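The read-back step could look like the following sketch, which converts the captured digits into a spoken confirmation question. The function and wording are illustrative, not taken from the published skill.

```python
# Sketch of the yes/no confirmation described above: read the entered
# number back digit by digit, then ask the user to confirm it.

DIGIT_WORDS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
               "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def confirmation_prompt(number: str) -> str:
    """Build the confirmation question for a captured phone number."""
    digits = [DIGIT_WORDS[ch] for ch in number if ch.isdigit()]
    spoken = ", ".join(digits)
    return f"I understood {spoken}. Is that correct?"
```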

Complexity rises with the depth of the conversation and the number of possible error sources, especially when the user can choose from more than two paths.

Even our MVP, which – as the name implies – has a limited range of functions, is based on a huge flowchart (see below).

The intent scheme for our skill is in its final development stage. Here, too, we are trying to keep complexity to a minimum, which is possible thanks to the Amazon dialogue interface. The dialogue interface allows different layers of a conversation to be represented within a single intent and stores intermediate states.
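The idea of storing intermediate states can be modeled with a plain dictionary standing in for Alexa's session attributes. This is a simplified model of what the dialogue interface manages for you, with hypothetical slot names:

```python
# Simplified model of keeping intermediate dialogue state between turns,
# as the Amazon dialogue interface does for slot values within one intent.
# The `session` dict stands in for Alexa's session attributes; the slot
# names are assumptions for illustration.

def collect_slot(session, slot_name, value):
    """Store one answered slot and prompt for whatever is still missing."""
    required = ["contact_name", "contact_number"]
    session.setdefault("slots", {})[slot_name] = value
    missing = [s for s in required if s not in session["slots"]]
    if missing:
        return f"Please tell me your {missing[0].replace('_', ' ')}."
    return "All information collected."
```

Because the partial answers survive between turns, the user can supply the required information in any order without restarting the intent.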

We will continue on – our next step, besides fine-tuning the intent scheme, is to reduce the error rate caused by erroneous entries. We are considering validating the emergency contact's phone number. Furthermore, we are running tests with representatives of the main target group: we will introduce the service to them and gather feedback in order to stress-test our MVP under realistic circumstances.
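The validation we are considering could be as simple as a plausibility check on the captured digits. The rules below (strip common separators, allow an optional leading +, require a sensible digit count) are an assumption for illustration, not a final specification:

```python
import re

# Possible validation step for the emergency contact's phone number.
# The accepted format here is illustrative, not a final specification.

def is_plausible_phone_number(raw: str) -> bool:
    """Accept an optional leading + followed by 4 to 15 digits."""
    cleaned = re.sub(r"[ \-/()]", "", raw)  # strip common separators
    return bool(re.fullmatch(r"\+?\d{4,15}", cleaned))
```

Rejecting implausible entries before the yes/no confirmation would catch transcription errors earlier in the dialogue.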



What are you waiting for? Let's talk
about your ideas today!