What started as an experiment became a skill that, within a few weeks, was listed among the "Top Activated Skills" in the "Education and Reference" category (update: now among all categories).
One of the SSML (Speech Synthesis Markup Language) features brings out the animal in Alexa. Thanks to our Animal Sounds skill, Alexa now knows the individual sounds of more than 20 animals. Technically, we may have over-engineered the skill, but learning the technical setup of an Alexa skill is far more efficient with a real project.
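The SSML feature in question is the `audio` tag, which lets a skill embed a pre-recorded sound file into Alexa's spoken response. A minimal example (the URL is illustrative, not our actual bucket):

```xml
<speak>
    The cow goes
    <audio src="https://example-bucket.s3.amazonaws.com/sounds/cow.mp3"/>
</speak>
```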
The animals, including their sound files and the images for the cards (the visual log entries that appear in the Alexa app when Alexa answers a request), are managed in a separate online content management system. This allows new animal sounds to be added continuously without changing the source code.
Sound files and images are hosted on Amazon S3. It is important for the sound files to be in the correct format. Thanks to the good documentation in the Amazon developer portal, it is easy to find instructions for correctly encoding audio files. For this purpose, we have programmed a simple cloud application that automatically encodes sound files into an Alexa-compatible format and saves them in our S3 bucket.
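The core of such an encoding step can be done with ffmpeg. The following sketch builds the ffmpeg command line for the MP3 format Alexa's SSML `audio` tag accepts (48 kbps bit rate, 16000 Hz sample rate); it is an illustration, not our actual cloud application, and the exact limits should be checked against the current Alexa documentation:

```python
def alexa_encode_cmd(src, dst):
    """Build an ffmpeg command line that re-encodes a sound file into an
    Alexa-compatible MP3 (48 kbps, 16000 Hz). Illustrative sketch only."""
    return [
        "ffmpeg", "-y",            # overwrite the output file without asking
        "-i", src,                 # input in any format ffmpeg can read
        "-codec:a", "libmp3lame",  # MP3 encoder
        "-b:a", "48k",             # 48 kbps bit rate
        "-ar", "16000",            # 16000 Hz sample rate
        dst,
    ]
```

The command can then be executed with `subprocess.run(alexa_encode_cmd("cow.wav", "cow.mp3"), check=True)`.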
Now we are good to go: we have the animals, the sound files, and the pictures for the cards. After activating the skill, the user can ask, for example: "How does the cow go?". Alexa will answer: "The cow goes…" and play the audio file of the corresponding animal.
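At the JSON level, the skill's answer is an Alexa response object whose output speech is SSML embedding the sound file. A hedged sketch of such a response builder (the helper name and the response shape shown are our simplification of the Alexa response format, not the skill's actual code):

```python
def build_animal_response(animal_name, sound_url):
    """Build a raw Alexa skill response whose SSML output speech
    plays the animal's sound file. Illustrative sketch."""
    ssml = (
        f"<speak>The {animal_name} goes "
        f"<audio src='{sound_url}'/></speak>"
    )
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": True,
        },
    }
```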
In case the API cannot provide an animal, for example if a certain animal sound is not present in our database, the user receives the feedback "I do not know this animal." So that we do not miss out on these requests, we set up a Slack channel that receives them via a webhook bot. Based on these messages, we can expand the choice of available animal sounds.
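Slack's incoming webhooks make this kind of reporting a single HTTP POST with a JSON body. A minimal sketch, assuming a placeholder webhook URL (a real one comes from Slack's "Incoming Webhooks" configuration) and a hypothetical message format:

```python
import json
import urllib.request

# Placeholder; a real URL is generated in Slack's Incoming Webhooks setup.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_unknown_animal_message(animal_name):
    """Build the JSON payload for the Slack webhook message."""
    return {"text": f"Unknown animal requested: {animal_name}"}

def report_unknown_animal(animal_name):
    """POST the unknown-animal request to the Slack channel."""
    payload = json.dumps(build_unknown_animal_message(animal_name))
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status == 200
```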
The challenge of the implementation was the voice interaction model. Even with a skill as simple as this, it was unexpectedly hard to anticipate how users would phrase their requests. In the beginning we opted for the easy path by only allowing requests in which the article was used correctly (note: the German language has three different articles).
Since Alexa developers can continuously publish new versions of their skills, we were able to fix this quickly. Both definite and indefinite articles are now accepted, and Alexa's response stays the same regardless of whether the user picks the wrong article.
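In practice this means listing the article variants as sample utterances in the skill's interaction model. A sketch of what such an intent could look like (intent name, slot type, and German sample phrases are illustrative, not our actual model):

```json
{
  "name": "AnimalSoundIntent",
  "slots": [{ "name": "animal", "type": "ANIMAL" }],
  "samples": [
    "wie macht der {animal}",
    "wie macht die {animal}",
    "wie macht das {animal}",
    "wie macht ein {animal}",
    "wie macht eine {animal}"
  ]
}
```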
What's next? We kept the design of the cards simple. We would like to give them a revamp and are thinking about adding a short fact sheet, thus increasing the added value for the user.
Now everything is set for testing: here you can find the link to the Animal Sounds skill. Have fun! 😉