Hi, @Elite! Thanks for posting your questions here.
Misty does support audio localization and, to a certain degree, noise detection. The SourceTrackDataMessage event type has a DegreeOfArrivalNoise attribute, the value of which indicates the approximate angle (relative to the front of Misty’s head) at which the robot detects noise. You can try this out in the Audio Localization section of the Command Center (the red line on that graph is the visualization for the DegreeOfArrivalNoise value).
Misty must be actively recording audio to raise SourceTrackDataMessage events. Right now you don’t have much control over things like microphone sensitivity, so you will probably detect noises that you don’t want to “wake” the robot (fan noise from the head, for example), and I expect you would need to experiment with a variety of filtering mechanisms to see if any get you the results you need.
Misty does not have a dedicated light sensor, and there is no support at the API level for motion sensing. However, this is something you could invent yourself by processing the images that Misty captures with her RGB camera. We have prototyped a .NET skill internally that has Misty take pictures in quick succession; we use an image processing library to compare the pixels across these photos, and when the difference measurement exceeds a certain threshold, that serves as a trigger for “motion detected”.
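The frame-differencing idea above can be sketched in a few lines of Python with NumPy (the internal prototype is a .NET skill, so this is just an illustration of the technique, and the threshold is an arbitrary value you would tune for Misty’s camera and lighting):

```python
import numpy as np

MOTION_THRESHOLD = 10.0  # mean absolute pixel difference that counts as motion

def motion_detected(prev_frame, next_frame, threshold=MOTION_THRESHOLD):
    """Compare two grayscale frames (2-D uint8 arrays); return True when the
    mean absolute per-pixel difference exceeds the threshold."""
    diff = np.abs(prev_frame.astype(np.int16) - next_frame.astype(np.int16))
    return float(diff.mean()) > threshold

# Example: a static scene vs. one where a bright object enters the frame.
still = np.zeros((64, 64), dtype=np.uint8)
moved = still.copy()
moved[20:40, 20:40] = 255  # a 20x20 bright patch appears
```

Comparing `still` against itself stays below the threshold, while `still` versus `moved` trips it, which is exactly the “motion detected” trigger described above.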
You could also use image processing to derive a measurement for the level of brightness in a picture. This kind of image processing can be done on the robot in a .NET skill, or you can have Misty send the images to an external device (PC, Raspberry Pi, cloud service, etc.), process them off-board, and return the relevant data to your skill.
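A brightness measurement can be as simple as averaging the luma of the frame. Here is a minimal sketch, assuming the picture has already been decoded into an HxWx3 uint8 RGB array (e.g., off-board with an image library); it uses the standard Rec. 601 luma weights:

```python
import numpy as np

def brightness(rgb_frame):
    """Return mean luma (0-255) of an HxWx3 uint8 RGB array, using
    Rec. 601 weights (0.299 R + 0.587 G + 0.114 B)."""
    luma = (0.299 * rgb_frame[..., 0]
            + 0.587 * rgb_frame[..., 1]
            + 0.114 * rgb_frame[..., 2])
    return float(luma.mean())
```

Your skill could then compare the result against a threshold to classify the scene as “dark” or “bright”, much like the motion threshold above.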
If you do choose the microcontroller route, Misty’s Arduino-compatible backpack has a Qwiic connector that ought to be easy to set up with any of the Qwiic sensors from SparkFun; it looks like there may be a few interesting options there for light detection.