
Misty Community Forum

Misty can't "see" an object in front of her that is raised off the ground

Misty’s forward obstacle detection works well when an object in front of her is on the floor.
While running Misty’s explore script, however, I discovered that Misty keeps driving forward toward a guest bed whose frame is raised 7" off the floor. The front of Misty lifts off the floor as her treads keep turning, but her chest stops her against the bed.
I’ve also verified that this happens with an object as little as 2" off the floor. In that case the object hits the front of Misty’s treads and she can’t proceed any further.
I’ve decided not to test how Misty would behave if an object were at head height.

These are good questions about the limitations of the range (time-of-flight) sensors. Above a certain height, we don’t expect raised objects to be within the range that the time-of-flight sensors can “see.” I’m checking with members of the team closer to this system for more exact details about what that height is, and will get back to you when I learn more.

One possible workaround for this could be to supplement readings from the ToF sensors with data from depth pictures. There may still be blind spots depending on the positioning of Misty’s head, but it could give you greater coverage of raised obstacles.
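
As a purely illustrative sketch of that idea (Python, with hypothetical inputs rather than actual Misty API calls, and an assumed threshold), you could treat an obstacle as present when either the front ToF reading or the closest valid value in a depth picture falls inside a stopping distance:

import math

STOP_DISTANCE_M = 0.3   # assumed stopping threshold, not an official value

def obstacle_ahead(tof_distance_m, depth_image):
    # tof_distance_m: one front ToF reading, in meters
    # depth_image: flat list of depth readings in meters, with NaN for bad pixels
    if tof_distance_m < STOP_DISTANCE_M:
        return True
    valid = [d for d in depth_image if not math.isnan(d)]
    return bool(valid) and min(valid) < STOP_DISTANCE_M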

Regarding this quote:

Do you mean that the 2" raised obstacle is triggering Misty’s bump sensors, thus preventing her from driving forward? If not, would you mind sharing a picture of Misty in this situation?

No. I mean that the front edge of Misty is physically touching the platform that is 2" off the ground. This prevents her from moving forward.
I think what I’m reporting is clear enough without a picture.

@BoulderAl Would you be able to send us a picture of the obstacles Misty is encountering to help@mistyrobotics.com?

I will do this.

The front ToF sensors sit about 1" off the ground and have a field of view of roughly +/- 12.5 degrees. You can use that angle to calculate the approximate cone of view of the sensor, which tells you how close to the ground it should start seeing things at a given distance “x” from the robot. Understand there are slight manufacturing tolerances, so be a bit forgiving with that.
Having said that, the shape of the object and other factors such as material, color, room lighting, dust on the cover, etc. can all have some impact on whether the sensor sees the obstacle. At minimum, enough of the obstacle has to be in view for the sensor to get enough returned pings to decide something is actually there. Unfortunately there is no hard-and-fast rule to tell you the sensor will absolutely see it or absolutely won’t; there are too many variables for such a rule to exist.
I would expect it to see the face of an object that is only 2" off the ground, provided the object presents enough of a flat face. If it is a round bar, or the face is only 1/4" tall, that gets sketchier. In those cases I’d think the sensor would probably see them, but would not report them as a valid object with the signal and sigma settings we have established.
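As a rough back-of-the-envelope sketch of that cone-of-view calculation (assuming the sensor axis is level with the floor and ignoring tolerances), you can estimate the band of heights the front ToF covers at a given distance:

import math

SENSOR_HEIGHT_IN = 1.0   # front ToF height off the floor, per the post above
HALF_ANGLE_DEG = 12.5    # +/- field-of-view half angle, per the post above

def visible_band(distance_in):
    # Approximate band of heights (inches off the floor) the sensor can
    # cover at a given distance, assuming a level sensor axis.
    spread = distance_in * math.tan(math.radians(HALF_ANGLE_DEG))
    return max(0.0, SENSOR_HEIGHT_IN - spread), SENSOR_HEIGHT_IN + spread

for d in (6, 12, 24):
    low, high = visible_band(d)
    print('%d" out: roughly %.1f" to %.1f" off the floor' % (d, low, high))

By that estimate, even 24" out the top of the cone is only about 6" off the floor, which helps explain why a bed frame raised 7" can drop out of view as Misty gets close.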

Thanks @BoulderAl! We appreciate it!

It’s extremely helpful for us as we discover some edge cases so we can continue to improve!

You’re welcome, @Chris.
I remember having several discussions with Dan back in the day on ways that Misty could express her personality while wandering around the room. We never considered this potential problem.
In the family room next to the guest bedroom there are at least four pieces of furniture that would be an issue for Misty: an entertainment center (~3" off the ground), an end table (~9" off the ground), a couch (also ~9"), part of the stride mechanism (as low as ~8"), and a table (~11" off the ground).
Has anyone done much experimenting with Misty’s wander capabilities? If so, how did you handle it when you/Misty ran into obstacles like these?

@BoulderAl I have run the wander skill, but usually leave Misty in a wide open space, and have “soft” things around to stop movement. When actively testing obstacle detection, Misty regularly needs to be rescued from furniture that sits off the ground.

@johnathan This is a great idea. I tried it out. Couple questions:

Question 1: TakeDepthPicture returns an Image that is a 76800x1 array, while the image itself is 240x320 (height x width). I have pointed Misty at a couple of things and taken depth measurements, but I’m having trouble figuring out the mapping. Does it go horizontal or vertical? Which corner does it start in? I can’t find a description of the picture layout; I’m probably not looking in the right place.

Question 2: Is there an existing tool for viewing these depth images? I am using the API Explorer -> Navigation -> TakeDepthPicture. At the moment I am staging Misty, taking a depth picture, and exporting the returned data to Excel to try to “see” what Misty sees. This is not fast (or fun).

Question 3: The Image has lots of NaNs. I have never achieved more than 50% good (non-NaN) values. Is this normal? Are there actions I can take to get better readings?

As always, any insight is appreciated.

I updated my driveObstacle code with a TakeDepthPicture function, but I’m not using it to avoid furniture (yet). The function takes a depth picture, iterates over each depth element, and, where the data is good, averages it down to ONE value. Yes, there are many ways to improve on this. The code is here:
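
(Not the linked code, just a minimal sketch of the same idea, assuming the depth image arrives as a flat list with NaN marking bad pixels.)

import math

def average_depth(depth_image):
    # Collapse a flat depth-image list to a single average distance,
    # skipping invalid (NaN) readings; returns None if nothing is valid.
    valid = [d for d in depth_image if not math.isnan(d)]
    return sum(valid) / len(valid) if valid else None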

Great questions, @MorningR!

More likely is that the DepthImage docs are overly terse :wink: I’ve answered what I could below.

The first value in your image list represents the top left pixel in the depth image. The following values fill up the top row of pixels from left to right, until you get to the 321st value, which is the first pixel in the second row.

When converted in this manner, the depth image has the same orientation as the visible images you take with Misty’s fisheye camera. If you convert the image list to a two-dimensional 240x320 array, then the first list in the matrix is the top row of pixels, the next list is the next row of pixels, and so on.

I’m neither a Python nor a computer vision expert, but a quick bit of Googling surfaced a few useful options for converting the image list to a 2D array and displaying it as a grayscale image. Note that I had to replace the “NaN” values with 0 before this worked.

import matplotlib.pyplot as plt

# Converts list l to a matrix whose rows each hold n values
def toMatrix(l, n):
    matrix = []
    for i in range(0, len(l), n):
        matrix.append(l[i:i+n])
    return matrix

# Replace 'NaN's with a value of 0, then assign to variable
image = [<your depth image data>]

# 320 values per row gives the 240x320 layout described above
imageMatrix = toMatrix(image, 320)

# Shows the matrix as a grayscale image in a new window
plt.imshow(imageMatrix, cmap='gray')
plt.show()

This created the image below. (You could probably customize this to produce an image that’s much more useful.) See how it compares to the visible image taken by the wide-angle camera in the depth sensor:

[Figure_1: grayscale rendering of the depth data | OccipitalVisibleImage: visible-light image from the depth sensor’s wide-angle camera]

We don’t support an official tool for this right now. You could add it to the wishlist, but it may be some time before it is prioritized on the roadmap. That said, the APIs for creating such a tool are out there, and I can picture this being a project that the community could build and support.

I don’t actually know the answer to this, but I’ll touch base with members of the team who work more closely with Misty’s depth sensor and get more information to you when I can!

Thanks, @MorningR, for letting me know that you’ve noticed this and are trying to improve Misty’s ability to move around a room.

Just following up. You’ll get fewer NaN values in well-lit environments and when the sensor isn’t too close to the object it’s reading. We don’t have official metrics on what “too close” means in this context, but anecdotally I’ve seen pretty good results at the 0.5 meter range and beyond. The sensor also doesn’t like shiny / reflective surfaces. I’m pretty sure the black blob in the upper right of the grayscale image I shared is my shiny metal doorknob.

@johnathan Thanks for the prompt and great answers! This has given me some actionable next steps to investigate - when I can grab some time :slight_smile:

Lighting doesn’t matter as much, since the sensor provides its own illumination. The minimum working distance is 0.56 m.

Dark surfaces absorb more of the laser and thus provide a much lower return. For example, we have a black leather couch in our office, and it consistently has holes in the depth return.

Shiny surfaces have the opposite effect but similar results: a shiny surface reflects too much of the laser light back, saturating the sensor and making the data useless.

If you point your Misty at a blank wall from about 1 meter out, you should get high coverage. If you don’t, let us know; the calibration might have been affected.
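
One quick way to put a number on “high coverage” (a hypothetical helper, not an official tool) is to compute the fraction of non-NaN pixels in a depth picture:

import math

def depth_coverage(depth_image):
    # Fraction of pixels in a flat depth-image list that hold a valid
    # (non-NaN) reading; close to 1.0 against a blank wall at ~1 m is a
    # good sign, while much lower values may point to a calibration issue.
    if not depth_image:
        return 0.0
    valid = sum(1 for d in depth_image if not math.isnan(d))
    return valid / len(depth_image)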
