
Misty Community Forum

2020.09.01 Release Notes

Greetings, Misty developers!

Thanks, as always, for being an early developer for the Misty II platform. We can’t wait to see what you build!

Installing the Update

Misty receives software upgrades as over-the-air (OTA) system updates. This update will be available within the next 24-48 hours (the precise timing varies by region).

Misty automatically checks for new updates each time she boots up. As long as her battery has enough charge, she automatically installs any updates that are available. If your robot doesn’t start to install this update the next time she boots up:

  • Check that Misty is connected to power or sitting on her wireless charging station.
  • Try connecting Misty to the Command Center to make sure she’s still on your Wi-Fi network.

If Misty is charging and connected to the internet, you can check whether an update is available in your region by connecting Misty to the Command Center and looking at the System Updates section.

Note: Misty reboots once during a system update. During an update, Misty ignores all commands except Halt and Stop. If Misty starts installing an update while charging, do not remove the power source until the update is finished and Misty’s eyes are fully open.

If you have issues with a system update or need technical assistance for other reasons, for the quickest response you can:

  • Post a message to the Support category here in the Community forums.
  • Contact the Misty support team through the chat embedded in this site, or by emailing support@mistyrobotics.com.

Release Contents

  • Misty II - Updates
    • Windows IoT Core OS version: 10.0.17134 or higher - No updates
    • Android OS: (8.1) - No updates
  • Robot Version: 1.18.8.10610
  • MC/RT Version: 1.18.8.114
  • Sensory Services App Version: 1.18.8
  • Web-based Tools - Updated
  • Misty JavaScript Extension for VSC - No updates.
  • Misty App - No updates.
  • Documentation - No updates.

New Features

We have two new features that are still in alpha, but we wanted to give you a sneak peek. Stay tuned for more information on them, including documentation.

Added new object detection models.

  • These can detect ~80 different object classes.
  • To get started with the object detector, first subscribe to the ObjectDetection event, then use the new StartObjectDetector and StopObjectDetector commands in Misty’s REST API and JavaScript / .NET SDKs to start or stop the detector.
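As a rough sketch of the REST side, starting and stopping the detector might look like the following. The robot IP is a placeholder, and the endpoint paths and MinimumConfidence parameter follow the REST calls described later in this thread; treat the details as assumptions until the documentation lands.

```python
# Hedged sketch: start/stop Misty's object detector over REST.
# MISTY_IP is a placeholder for your robot's address on your network.
import json
import urllib.request

MISTY_IP = "192.168.1.100"  # replace with your robot's IP

def build_request(path, body=None):
    """Build a POST request against the robot's REST API."""
    data = json.dumps(body or {}).encode("utf-8")
    return urllib.request.Request(
        f"http://{MISTY_IP}/{path}",
        data=data,
        headers={"Content-Type": "application/json"},
    )

start = build_request("api/objects/detection/start", {"MinimumConfidence": 0.6})
stop = build_request("api/objects/detection/stop")

# To actually send these (requires a robot on your network):
# urllib.request.urlopen(start)
```

Raising MinimumConfidence trades fewer false positives for more missed detections.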

Added new person pose estimator.

  • To get started with the pose estimator, first subscribe to the PoseEstimation event, then use the new StartPoseEstimation and StopPoseEstimation commands in Misty’s REST API and JavaScript / .NET SDKs to start or stop the pose estimator.
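For the subscription side, here is a hedged sketch of building a subscribe message for Misty's WebSocket pub/sub channel. The field names mirror Misty's other event subscriptions and should be treated as assumptions until the PoseEstimation documentation is published.

```python
# Hedged sketch: building a subscribe message for the PoseEstimation
# event on Misty's WebSocket pub/sub endpoint (ws://<robot-ip>/pubsub).
# The exact message fields are assumptions based on Misty's other events.
import json

def build_subscribe_message(event_type, event_name=None, debounce_ms=250):
    """Return the JSON text the pub/sub channel expects for a subscription."""
    return json.dumps({
        "Operation": "subscribe",
        "Type": event_type,
        "DebounceMs": debounce_ms,
        "EventName": event_name or event_type,
    })

msg = build_subscribe_message("PoseEstimation")
```

A larger DebounceMs throttles how often the robot sends event messages, which helps if you only need a few poses per second.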

Added a new get volume command, which allows you to retrieve Misty’s current volume level through the API/SDK.
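A minimal sketch of calling this from REST might look like the following. The endpoint path (api/audio/volume) and the response wrapper field are assumptions consistent with Misty's other audio commands; check the docs once they are published.

```python
# Hedged sketch: fetching Misty's current volume over REST.
# The endpoint path and "result" wrapper field are assumptions.
import json
import urllib.request

MISTY_IP = "192.168.1.100"  # replace with your robot's IP

def volume_url():
    """URL for the (assumed) get-volume endpoint."""
    return f"http://{MISTY_IP}/api/audio/volume"

def parse_volume(response_text):
    """Unwrap the payload from a REST response body."""
    return json.loads(response_text)["result"]

# On a real robot:
# with urllib.request.urlopen(volume_url()) as r:
#     print(parse_volume(r.read().decode("utf-8")))
```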

Bug Fixes & Improvements

  • Corrected a condition that suppressed the tally light from turning on during speech capture.
  • Fixed a WebRTC VAD crash that occurred when audio was heard while stopping.
  • Fixed a crash that occurred when running FaceRec while calling another command that uses the CV service.
  • The Duration parameter on MoveHead commands in C# skills is now passed through correctly.
  • Improved the voice activity detection model.

Known Issues

  • No known issues.

For a comprehensive list of the issues we’re tracking, see the Known Issues section of the Community forums.

Can you provide some documentation and sample code for the new object detection and person pose estimator? More information is needed to help understand when and how to use these features.

So what does the Person Pose Estimator do? How is it useful?

What are the 80 different objects detected by the new object detection feature?

REST calls:

  • api/objects/detection/start with MinimumConfidence as a parameter between 0.0 and 1.0
  • api/objects/detection/stop

Subscribe to ObjectDetection to get a message every time one of the objects in the list is detected.

The ObjectDetection message has the following info (and a few other fields that are less relevant to most people):

  • Id - unique identifier; the tracker tries to maintain the same id for an object while it’s in view (this is a simple tracker, so don’t expect miracles here)
  • LabelId - numeric id related to the object label (e.g. person = 1, bicycle = 2, etc.)
  • Confidence - confidence level of this detection, between 0.0 and 1.0
  • Pitch - pitch angle of the center of the detected object from the camera center
  • Yaw - yaw angle of the center of the detected object from the camera center
  • Description - string description of the detected object (essentially one of the labels below)
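The fields above can be pulled apart with a few lines of code. The sample payload below is invented for illustration and uses only the fields listed above:

```python
# Sketch: extracting the fields listed above from an ObjectDetection
# event message. The sample payload is invented for illustration.
import json

sample = json.dumps({
    "Id": 3,
    "LabelId": 1,
    "Confidence": 0.87,
    "Pitch": -4.2,
    "Yaw": 12.5,
    "Description": "person",
})

def summarize(message_text):
    """Return a one-line summary of a detection message."""
    m = json.loads(message_text)
    return (f"{m['Description']} (id {m['Id']}) at "
            f"pitch {m['Pitch']}, yaw {m['Yaw']}, "
            f"confidence {m['Confidence']:.2f}")

print(summarize(sample))
# → person (id 3) at pitch -4.2, yaw 12.5, confidence 0.87
```

Pitch and Yaw give the object's offset from the camera center, so they are a natural input for turning the head or treads toward a detection.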

Here is the list of objects the system can detect (this is based on the COCO database):

???
person
bicycle
car
motorcycle
airplane
bus
train
truck
boat
traffic light
fire hydrant
???
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
???
backpack
umbrella
???
???
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
???
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
couch
potted plant
bed
???
dining table
???
???
toilet
???
tv
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
???
book
clock
vase
scissors
teddy bear
hair drier
toothbrush

“COCO database” as in https://cocodataset.org/ ?

Yes, that’s the one

Great - thanks for the information.