
Misty Community Forum

Loading Machine Learning / Deep Learning Models onto Misty

Does Misty Robotics have an API on the roadmap for loading machine learning / deep learning models onto Misty? For example, using the recently released Microsoft Lobe application (https://lobe.ai/), I would like to train a model to identify whether a person is wearing a mask (just as an example). The tool lets the user upload images, train a model, play/test with new images, and then export the model to run on industry-standard platforms and work in apps or devices. It's an amazing tool, and I think more AI companies will release similar ones. Loading models onto Misty would let developers create very cool and powerful features and products. Will loading models onto Misty to use with the Qualcomm Snapdragon Neural Processing Engine be available in the future?

Thanks for any information.

Wes


Hi Wes,

This is a great idea! While Misty doesn’t natively support this feature, there are a couple of workarounds that may be helpful in the meantime. For starters, it appears that Lobe offers its own .NET API, which can be used in any .NET application. As such, it might be possible to integrate Lobe models using Misty’s .NET SDK.

In addition, you could run a Lobe model (or any other machine learning model) on an external processor and then communicate the results back to Misty via serial. Here’s an example of a Lobe application running on a Raspberry Pi that might be helpful.
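The Pi side of that serial hand-off could be sketched roughly like this. The port name, baud rate, and JSON message schema below are assumptions for illustration, not values from Misty’s docs, so check the UART documentation for what your robot expects:

```python
import json

def encode_prediction(label, confidence):
    """Pack one model prediction as a single JSON line for the serial link."""
    message = {"label": label, "confidence": round(confidence, 3)}
    return (json.dumps(message) + "\n").encode("utf-8")

def send_prediction(port, label, confidence):
    """Write one prediction over an already-open pyserial Serial port."""
    port.write(encode_prediction(label, confidence))

# Example usage on a Pi wired to Misty's UART (requires pyserial;
# port name and baud rate are placeholders):
#
#   import serial
#   with serial.Serial("/dev/ttyS0", 9600, timeout=1) as port:
#       send_prediction(port, "mask", 0.97)
```

On Misty’s side, a skill would then subscribe to incoming serial messages and parse each JSON line back into a label and confidence.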

Lastly, Misty recently introduced native object recognition in build 1.18. This build gives Misty the ability to recognize and track over 70 different objects! While masks aren’t currently supported, you could recognize and track a person, for example. This feature is still in early development, so it will likely improve dramatically over time.
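As a sketch of how you might consume those object-detection events from the community mistyPy Python wrapper: the method and event names below follow mistyPy examples but are assumptions to verify against the current SDK docs, and the IP address is a placeholder.

```python
def is_person(detection):
    """Return True when an ObjectDetection message reports a person."""
    return detection.get("description") == "person"

def on_detection(event):
    """Callback invoked for each ObjectDetection event message."""
    message = event.get("message", {})
    if is_person(message):
        print("Person detected, confidence:", message.get("confidence"))

def subscribe(robot_ip):
    """Start the detector and register for events (names assumed; verify in SDK docs)."""
    from mistyPy.Robot import Robot
    from mistyPy.Events import Events

    misty = Robot(robot_ip)              # e.g. "192.168.1.100" (placeholder)
    misty.StartObjectDetector()
    misty.RegisterEvent("object-sub", Events.ObjectDetection,
                        keep_alive=True, callback_function=on_detection)
```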

I hope this is helpful! Please feel free to reach out at any time if you have any other questions.

Best,
Jackson

Lobe looks pretty cool. Our roadmap has been upended this year by COVID-19, so I’m not sure when we will put in a generic model runner. The easiest path is actually TFLite, and it does look like Lobe can output TFLite models, which is really nice. The main obstacle in enabling a generic TFLite model runner is defining an interface generic enough to work for every possible model type across two SOMs.
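To make the “generic TFLite model runner” idea concrete, here is a minimal sketch of what one inference looks like for an arbitrary single-input .tflite file (such as one exported from Lobe). The model path and label list are placeholders, and the runner requires the tflite-runtime (or full tensorflow) package:

```python
def run_tflite(model_path, input_array):
    """Run one inference on an arbitrary single-input TFLite model.

    Requires tflite-runtime (or tensorflow); input_array is a NumPy array
    already shaped to match the model's input tensor.
    """
    from tflite_runtime.interpreter import Interpreter  # or tf.lite.Interpreter
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], input_array.astype(inp["dtype"]))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

def top_label(scores, labels):
    """Map a sequence of raw class scores to the best-scoring label."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best]

# Example usage (paths and labels are placeholders):
#   scores = run_tflite("mask_model.tflite", frame)[0]
#   print(top_label(scores, ["mask", "no_mask"]))
```

The hard part mentioned above is exactly what this sketch glosses over: every model type has its own input shapes, dtypes, and output semantics, which is what makes a single generic interface across two SOMs tricky.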

So, will this be a feature in the future? Most likely yes, but we don’t know yet when we’ll get to it. In the meantime, I would suggest running this offboard and calling into it (it looks like Lobe even supports REST endpoints).
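A minimal sketch of that offboard REST approach: a Misty skill posts a camera frame to a model server and reads back the prediction. The host, port, path, and payload shape below are placeholders, so check the API docs of whatever serves the model (e.g. Lobe’s local API) for the real endpoint and request format:

```python
import base64
import json
from urllib import request

def build_payload(image_bytes):
    """Base64-encode a camera frame into a JSON body (schema is an assumption)."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return json.dumps({"image": encoded}).encode("utf-8")

def predict(url, image_bytes):
    """POST one frame to the model server and return its JSON response."""
    req = request.Request(url, data=build_payload(image_bytes),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example usage (URL is a placeholder for wherever the model is served):
#   result = predict("http://192.168.1.50:8080/predict", jpeg_bytes)
#   print(result)
```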

Great - thanks for the update.