This Misty Skill demonstrates how a user can send commands to one or more Misty robots anywhere in the world. It is a proof-of-concept prototype, not a fully tested solution. Sample commands include: FLASH_LEDS, MOVE_HEAD, MOVE_ARMS, DISPLAY_TEXT, SPEAK_TEXT, LOOK_AROUND and SING_SONG. It uses the Azure Cognitive Services Speech Service to convert text to speech (audio files).
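As a rough illustration of the idea, a remote command could arrive as a small JSON message that the skill parses and dispatches to a handler. This is only a minimal sketch under my own assumptions; the message shape, command arguments and handler names here are hypothetical and are not the project's actual code.

```javascript
// Hypothetical command handlers, one per supported command name.
// In the real skill these would call the Misty JavaScript API
// (head movement, LED changes, display, etc.); here they just
// return a description so the sketch is self-contained.
const handlers = {
  FLASH_LEDS: (args) => `flash LEDs ${args.color}`,
  MOVE_HEAD: (args) => `move head pitch=${args.pitch}`,
  DISPLAY_TEXT: (args) => `display "${args.text}"`,
};

// Parse an incoming JSON message and route it to the named handler.
function dispatch(raw) {
  const msg = JSON.parse(raw);
  const handler = handlers[msg.command];
  if (!handler) return `unknown command: ${msg.command}`;
  return handler(msg.args || {});
}

console.log(dispatch('{"command":"MOVE_HEAD","args":{"pitch":-10}}'));
```

A dispatch table like this keeps adding a new command (e.g. SING_SONG) down to registering one more handler, without touching the routing logic.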
Refer to the installation and configuration instructions in the Git repo for more details. Feel free to contact me and I will gladly help you with the installation and configuration of the different components. As you will see, it is a bit more complex than a basic Misty Skill, and it is my first Misty Skill / project.
To learn more about Azure DevOps, I decided to store the source code in an Azure DevOps public repository. If that does not work well, I will move it to GitHub.
Below is a link to the README in the public Git repo:
Any feedback on this project would be greatly appreciated.