
Misty Community Forum

Can a skill mimic an instinct?

I’d like to create a Skill that mimics an instinct.

I know human infants are born with a single instinctive fear: the fear of falling.

So, my first try will be a background Skill that makes Misty instinctively afraid of falling off the "edge". (Also see the warning in the docs about Misty falling off a table.)

Let’s start by defining the “edge” as encountering a strip of the yellow and black tape often used to highlight the edge of a dangerous step or curb.

I believe these are the first questions that need answers. (As they are answered I intend to update this original post, with due credit to those who provide the answers.)

— The Skill must automatically start when misty.DriveTime(linearVelocity, angularVelocity, time); is called.

Q: What other events, if any, cause a stationary Misty to begin moving?
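
(A first, unverified guess at an answer: the on-robot API docs list “LocomotionCommand” and “HaltCommand” event types that appear to fire whenever a drive command starts or stops the treads. If that is right, a background skill could listen for those instead of hooking misty.DriveTime() directly. The event type names, property names, and callback convention below are my assumptions from the docs and may change while the API is unstable.)

// Listen for any drive command that starts or stops the treads.
// "LocomotionCommand" and "HaltCommand" are assumed event type names.
misty.AddReturnProperty("RobotStartedMoving", "LinearVelocity");
misty.RegisterEvent("RobotStartedMoving", "LocomotionCommand", 100, true);
misty.RegisterEvent("RobotStopped", "HaltCommand", 100, true);

// Callbacks follow the default "_<eventName>" naming convention.
function _RobotStartedMoving(data) {
    var linearVelocity = data.AdditionalResults[0];
    if (linearVelocity !== 0) {
        misty.Debug("Misty began moving; arming the fear-of-falling watch.");
        // ...start watching the edge sensors here...
    }
}

function _RobotStopped(data) {
    misty.Debug("Misty stopped moving; standing down.");
}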

— The Skill must remain active until

  • misty.Stop() is called,
  • the motion ends because the time passed to DriveTime expires, or
  • the TimeoutInSeconds value in the meta file is exceeded (by default, an on-robot skill times out after 300 seconds).

Q: Do any other events cause a Misty in motion to stop?

— The Skill can only be programmed with the on-robot JavaScript API, because the REST API adds an unpredictable delay. (Such a delay sank a billion-dollar Mars rover project.) Complication: the on-robot API is not yet stable.

— Need a reliable way to determine Misty’s true speed and stopping distance, because it seems logical that the relative speed requested by the linearVelocity parameter of misty.DriveTime() varies in actual cm/sec according to slope, surface, and maybe other factors as well.

Q: Is there a way to continuously monitor the rotational speed of Misty’s drive sprockets? That would translate unambiguously into cm/sec.

Q: Is there a way to reliably predict deceleration? The only option I see so far is a lookup table generated from lots of trial and error.
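
(A possible answer to the speed question, unverified: the on-robot API’s event list includes a “DriveEncoders” type that appears to report the measured velocity of each tread. If the property names below are right (they are my assumption), that would give actual speed rather than the relative value passed to DriveTime(). Stopping distance would still need the trial-and-error table.)

// Poll the drive encoders every 250 ms for the measured tread velocities.
// "DriveEncoders", "LeftVelocity", and "RightVelocity" are assumed names.
misty.AddReturnProperty("Encoders", "LeftVelocity");
misty.AddReturnProperty("Encoders", "RightVelocity");
misty.RegisterEvent("Encoders", "DriveEncoders", 250, true);

function _Encoders(data) {
    var leftVelocity = data.AdditionalResults[0];
    var rightVelocity = data.AdditionalResults[1];
    misty.Debug("Measured tread velocity L/R: " + leftVelocity + " / " + rightVelocity);
}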

I’d like to attach the JSON and JS files for the skill as it grows, but I can’t yet, so for now I’ll start an outline at the bottom of this post.

Final related question: can anyone save me time by explaining how to edit this post so it links to the person who deserves credit for an answer as I add that answer to the original post?

I’ve been a beginner on a lot of forums, in a lot of languages, over several decades, and I’ve grown to hate the archeological-dig approach: sifting through many dozens of comments and suggestions, the correct and helpful ones along with the time-wasting wrong ones. Therefore, I’m going to try to cull the best answers into the original post as it gets fully answered. Who knows, maybe I can succeed. And if not, it’s easy to fail quietly and sink to the bottom of the unindexed heap as it grows.

META JSON FILE

{
"Name": "Fear of Falling",
"UniqueId": "f34a3aa0-8341-4047-8b54-59d658620ecf",
"Description": "Fear The Edge",
"StartupRules": ["Manual", "Robot"],
"Language": "javascript",
"BroadcastMode": "verbose",
"TimeoutInSeconds": 300,
"CleanupOnCancel": false,
"SkillStorageLifetime": "Reboot",
"WriteToLog": false,
"Parameters": {}
}

(Parameters will be added as they are needed.)

CODE JS FILE

JavaScript methods:
calls to the misty object, i.e. misty.Method(parameters);
other code permitted by the final on-robot API
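
(To make that outline concrete, here is a rough first skeleton of what the JS file might look like, using the front time-of-flight sensor as a stand-in trigger until a yellow-tape detector exists. The “TimeOfFlight” event type, the property names, the “Center” sensor position, and the 0.1 m threshold are all assumptions on my part; treat this as a sketch, not working code.)

// fear-of-falling.js, first rough skeleton (sketch only).
// Trigger: the front time-of-flight sensor reports something close ahead.
// Reaction: stop immediately, the way an instinct would.

var DANGER_DISTANCE_M = 0.1;  // assumed threshold; tune by trial and error

// Only watch the front-center range sensor (the property test is an assumption).
misty.AddPropertyTest("FrontEdge", "SensorPosition", "==", "Center", "string");
misty.AddReturnProperty("FrontEdge", "DistanceInMeters");
misty.RegisterEvent("FrontEdge", "TimeOfFlight", 100, true);

function _FrontEdge(data) {
    var distance = data.AdditionalResults[0];
    if (distance < DANGER_DISTANCE_M) {
        misty.Stop();                  // the "instinctive" reaction
        misty.ChangeLED(255, 0, 0);    // a visible sign of the fear state
        misty.Debug("Edge/obstacle at " + distance + " m; stopping.");
    }
}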


Misty does have edge detection sensors on her undercarriage. You could really define “edge” as anything those sensors detect as an edge. It would likely be easier than redefining it as something like tape.
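
(For reference, here is a minimal sketch of what subscribing to those undercarriage sensors might look like, assuming they are exposed through the same “TimeOfFlight” event type with downward-facing SensorPosition values; the exact position strings and the distance threshold below are guesses, not documented values.)

// Watch the downward-facing range sensors for a sudden drop (a cliff edge).
// SensorPosition strings beginning with "Down" are an assumption.
var CLIFF_DISTANCE_M = 0.06;  // assumed: the floor normally sits a few cm below the sensor

misty.AddReturnProperty("CliffCheck", "SensorPosition");
misty.AddReturnProperty("CliffCheck", "DistanceInMeters");
misty.RegisterEvent("CliffCheck", "TimeOfFlight", 100, true);

function _CliffCheck(data) {
    var position = data.AdditionalResults[0];
    var distance = data.AdditionalResults[1];
    // A downward sensor suddenly reading "far" means the floor disappeared.
    if (position.indexOf("Down") === 0 && distance > CLIFF_DISTANCE_M) {
        misty.Stop();
        misty.Debug("Possible edge under " + position + "; stopping.");
    }
}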

You can edit the title and body of a topic by clicking on the pencil icon next to the area you’d like to edit.

Re: the “Fear of Falling” skill, it may be useful to know that there is currently work underway for a “native” obstacle avoidance system that will always run in the background on Misty robots. This system is in early development and may change, but here’s a summary of how it’s currently planned to work:

The hazards system automatically puts Misty into a “hazard” state when she detects she is about to drive off a cliff, run into an object, or when one of her bump sensors is activated. The nature of the “hazard” state (i.e., which bump sensor or time-of-flight sensor detected the obstacle or cliff) determines which drive commands Misty will not execute.

For example, if Misty’s front center time-of-flight sensor detects an object that is close enough to put that sensor into a “hazard” state, then Misty cannot execute commands that move her forward (in the direction of the detected object). However, she can still execute drive commands that move her backward (away from the detected object). As Misty backs away from the object, the distance returned by her front-center time-of-flight sensor grows. When the distance is large enough, that sensor is brought out of the “hazard” state, and Misty can execute drive commands that move her forward once again.


Hi Johnathan,

I should have written that question better.

I saw how to edit my post. But when I edit it, I want to add a link to the person to whom credit is due for an answer, like the link at the top of your reply, i.e., your name with the little shield.

That way, if someone clicks on the contributor’s name in my edited post, his or her basic info pops up, just as it does when I click on the link in the person’s reply, rather than my just typing the person’s name as plain text.

I can see the HTML when I hover over the link at the top of the reply, and I could type it in using the chain icon in the editing window when I edit my post. I was just hoping for a simple cut-and-paste way. But I’ll figure it out later.

Hi Ben,

Thank you. That gives me a great place to start digging.

But I’m not a trusting soul. Therefore, my first reaction is that I most likely will not rely on what those sensors “define as an edge” unless, and until, I know exactly what definition of an edge they use.

You also say “on her undercarriage”, which I hope you understand raises the question of how far over the edge she will be before she realizes there is an edge. So I was thinking more in terms of looking forward.

Plus, my goal is not simply to “get it done”. So while the existing sensors might be faster, they don’t serve my real goal, which is to develop an instinctive understanding of the Misty you’ve created. My aim is learning by doing, with this project as just a fun vehicle with which to start.

(Grin. For what, if anything, it’s worth: impatience by those who know toward those who don’t yet know is very common. Realizing that can help those who seek to lead succeed. At least I think so.)

Oh, sure - you can do this by typing an @ sign in front of the contributor’s username. As you type, a list of users will appear whose usernames match the characters you have entered. You can either choose the user from this list or type the full username yourself.


Here are some ideas of “instincts” that could/should raise a flag for Misty to be extra-aware and maybe draw upon some “be in careful info-gathering mode” Skill scripts:

  • a human voice or voices that seem “angry”
  • being placed in a new environment for the first time
  • something hot (high temperature) is nearby
  • the floor is vibrating
  • Misty finds herself bouncing up and down, or tossed up into the air (and, hopefully, caught, like a baby)
  • proximity to a new pet

Hi Johnathan,
Thank you.
I’m going to add that to the meta topics as a short question that has already been answered, with credit to you (grin), assuming I’m able to do that with your suggestion.

See you in the future.

Yep, that is of interest, and I look forward to its completion.

But for a beginner, it is way too much to start with. So, for the purposes of this question, my next two little steps are:

I realized I don’t know how (or whether) I can get Misty’s on-robot API to start a skill, so it can register with and listen to whatever sensors fire events I can use to keep an eye out for yellow tape.

And of course, which current sensors, if any, fire events that I can interpret to tell whether Misty can “see” yellow and black striped tape.
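
(My tentative understanding, happy to be corrected: none of the range, bump, or encoder events report color, so spotting yellow-and-black tape would mean capturing camera images and doing the color analysis somewhere else. As a placeholder, the sketch below just registers the candidate event types and logs when they fire while Misty drives, so I can see what raw material there is to work with; the type names are taken from the docs and may change.)

// Placeholder: log the candidate sensor events while Misty drives.
// None of these report color, so tape detection would need camera image
// processing on top of (or outside of) this.
var candidateTypes = ["TimeOfFlight", "BumpSensor", "DriveEncoders", "IMU"];

for (var i = 0; i < candidateTypes.length; i++) {
    // Each registration gets a default callback named "_Survey<type>".
    misty.RegisterEvent("Survey" + candidateTypes[i], candidateTypes[i], 500, true);
}

function _SurveyTimeOfFlight(data)  { misty.Debug("TimeOfFlight event fired"); }
function _SurveyBumpSensor(data)    { misty.Debug("BumpSensor event fired"); }
function _SurveyDriveEncoders(data) { misty.Debug("DriveEncoders event fired"); }
function _SurveyIMU(data)           { misty.Debug("IMU event fired"); }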

See you in the future.