
Misty Community Forum

Local speech recognition with Misty without a 3rd-party service

Hi Misty Community,

I am new to Misty, and I'm a student currently working with it on a research & development project. Right now I'm still in the research and planning phase, deciding on the language and method to use for the implementation.

My objective is to run an NLP model locally on Misty's 410 processor, without using a 3rd-party service like Dialogflow over REST, so that speech recognition can task Misty to perform certain actions based on the speech it recognizes. I would like the community's suggestions and feedback on this idea and how to implement it.

One method I have seen discussed in this forum is:
Run an external processor (e.g. a Raspberry Pi) with a model and send the recognized text back to Misty over UART via the serial port.
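The Raspberry Pi side of that idea could look something like the sketch below; the port name, baud rate, and one-phrase-per-line framing are my assumptions, not anything Misty-specific:

```python
import serial

# Assumed UART settings; adjust the port and baud rate to match how the
# Pi's TX/RX is wired to Misty's backpack connector.
ser = serial.Serial("/dev/serial0", baudrate=115200, timeout=1)

def send_to_misty(text: str) -> None:
    # Send one recognized phrase per line so a skill on the robot can
    # split incoming serial data on newlines.
    ser.write((text + "\n").encode("utf-8"))

send_to_misty("raise your right arm")
```

On Misty's side, a JavaScript or C# skill could then subscribe to the backpack's serial message events and map each received phrase to an action.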

Hope to hear your suggestions. Thanks!

Welcome to the forums! I can probably help a little bit here. You’re probably already aware, but you really have two problems to solve. The first problem is the speech-to-text problem, which is easy to solve with an external service. If you’re interested, the robot already supports Sphinx, which can provide some grammatical analysis, though it takes a bit of tuning to get it right. Internally (and soon to be released), we also run another system called DeepSpeech, which does a fairly good job of converting the speech content into text.
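If you want to experiment with DeepSpeech off the robot first, the Python package's basic usage is roughly the following; the model, scorer, and audio file names are placeholders for whatever you download from the DeepSpeech releases, and this is not how it is integrated on Misty internally:

```python
import wave
import numpy as np
import deepspeech

# Placeholder paths: download the acoustic model (.pbmm) and scorer
# from the DeepSpeech releases page first.
model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")

# DeepSpeech expects 16 kHz, mono, 16-bit PCM audio.
with wave.open("utterance.wav", "rb") as wf:
    audio = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)

print(model.stt(audio))
```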

From an NLP perspective, you’ll need some core tools to get moving, specifically tools to perform tokenization and part of speech analysis. I can’t speak to the JS side, but within C#, I’m currently using OpenNLP and SharpWordNet to break down language content into something I can further normalize and classify. From there, you’ll need to determine the set of algorithms necessary to get from syntax to intent, but I think that’s at the heart of the research component of the project.
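As a rough illustration of those first steps, tokenization and part-of-speech tagging look like this in Python with NLTK; it isn't the OpenNLP/SharpWordNet code, just the same idea in another toolkit:

```python
import nltk

# One-time downloads of the tokenizer and tagger data
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "Misty, turn your head to the left"
tokens = nltk.word_tokenize(sentence)
tags = nltk.pos_tag(tokens)
print(tags)
# e.g. [('Misty', 'NNP'), (',', ','), ('turn', 'VB'), ('your', 'PRP$'), ...]
```

From tagged tokens like these you can start normalizing and classifying the utterance toward an intent.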

Good luck, and I look forward to seeing how it goes for you!


Thanks morgan for the replies and help. Currently I'm exploring the Vosk API, which the Sphinx team recommends as their newer library and which works offline. I also have a question on the hardware side: is it recommended to use the Arduino backpack for serial communication, or is it possible to solder TX/RX directly from the RPi to Misty's TX/RX pins? Thanks
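From the Vosk examples, the basic offline recognition loop is roughly the one below; the model folder name is whatever you download from the Vosk site, and the input is assumed to be a 16 kHz, mono, 16-bit PCM WAV file:

```python
import json
import wave
from vosk import Model, KaldiRecognizer

# Assumed model folder downloaded from the Vosk model list
model = Model("vosk-model-small-en-us-0.15")

wf = wave.open("command.wav", "rb")  # 16 kHz mono 16-bit PCM assumed
rec = KaldiRecognizer(model, wf.getframerate())

# Feed the audio in chunks, then read the final transcript
while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    rec.AcceptWaveform(data)

result = json.loads(rec.FinalResult())
print(result.get("text", ""))
```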

You don’t necessarily need an Arduino to do serial communication. If you open the backpack itself, there’s a perfboard that you can work with, either by soldering on headers, or soldering directly to the comm traces. I played with vosk quite a while ago and had trouble with it, but I’d guess that it has improved quite a bit since then. Interested in your experiences as you use it.


Hi morgan, is it possible to share more about how you implemented Sphinx and DeepSpeech on the Misty robot? Thanks