
Misty Community Forum

Misty Action Editor?

Hey, everyone! I’ve been participating in the Misty II field trial these past few weeks, but amidst life and sickness kicking my butt, I haven’t been able to participate with the community as much as I would have liked to. That changes today!

In all of my experience with social robotics platforms that rely significantly on animation to display emotion and enhance interactivity, there has been an external application or extension, either fan-made or company-issued, that enables the creation of animations in a visual workspace. Creating fluid motion for a robot that needs to look natural in most home/office settings to keep people comfortable is tough when every animation has to be meticulously tweaked and re-uploaded from a command line, IDE extension, or skill runner.
That’s not to say the method doesn’t work, especially with Misty being in the grassroots position it is, but I feel we could really benefit from some kind of tool to make animation easy (and to remove some tedium!).

I’m making this topic because I think it’d be a fairly easy thing to discuss and at least begin work on. So I ask: what kind of functionality would you need in a Misty animation/action editor? Here’s a quick bulleted list I brainstormed:

  • An easy-to-modify, interactive timeline that makes inputting timing and creating keyframes simple.
  • A viewport featuring a 3D Misty model that can be rotated and viewed performing the created animation in real time.
  • Exporting the animation should be as easy as clicking a button. I think the program would save the created animation as a JavaScript file ready for Misty to run from the get-go.
  • Ideally, actions could be tested in-app: input Misty’s IP address, save it, then press the ‘Run’ button to start the action on a physical robot.
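The timeline bullet above boils down to sampling in-between poses from keyframes. Here’s a minimal sketch of that interpolation, in Python purely for illustration — the joint names and angle ranges are made up, not Misty’s actual API:

```python
# Hypothetical sketch: linear interpolation between timeline keyframes,
# producing the joint positions an editor's viewport (or export step)
# would sample at any point in time. Joint names are illustrative only.

def interpolate_keyframes(keyframes, t):
    """Return joint positions at time t by linearly interpolating
    between the surrounding keyframes.

    keyframes: list of (time_sec, {joint: position}), sorted by time.
    """
    if t <= keyframes[0][0]:
        return dict(keyframes[0][1])
    if t >= keyframes[-1][0]:
        return dict(keyframes[-1][1])
    for (t0, pose0), (t1, pose1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return {j: pose0[j] + alpha * (pose1[j] - pose0[j]) for j in pose0}

# Example timeline: head pitch and left arm, angles in degrees.
timeline = [
    (0.0, {"head_pitch": 0.0, "left_arm": 90.0}),
    (1.0, {"head_pitch": -20.0, "left_arm": 0.0}),
    (2.0, {"head_pitch": 0.0, "left_arm": 90.0}),
]

pose = interpolate_keyframes(timeline, 0.5)  # halfway through the first segment
```

The editor’s “export” step could then walk this same timeline and emit the corresponding movement calls as a ready-to-run skill file.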

Some tangential things I think would be useful in such an application, but are not necessarily a must for a first version:

  • Incorporated editing and timing for images and emotions on Misty’s display
  • Implementing a direct way to test an animation in-time with music

What do you think? Is this something worth pursuing? What kind of possibilities do you see for a tool like this or something similar?


Cool idea. A feature I think would be interesting in a tool like this is the ability to import Misty’s occupancy grid to work with your movements and animations in the context of her mapped environment :slight_smile:


AWESOME. Glad to have you back, @apt

Thanks for having me! I’m really happy to be here.

re: johnathan - I agree, that would be interesting. The occupancy grid is something I haven’t experimented with yet, but that functionality in tandem with animation sounds useful. One possible case: “I’m in this kind of environment and have these kinds of feelings about it, so these are the kinds of animations I should use until I leave.”


I want to upvote this :slight_smile:

@apt We’ve actually played with a few of these ideas internally, though a great many moons ago. There’s internal architecture to directly support keyframe animation, but we’ve never exposed it to users and it has fallen into disrepair.

We’ve played a bit with a record-and-playback concept where you’d unlock the robot’s motors and backdrive the head and arms to specific positions, then take a snapshot, similar to working with claymation. I can imagine an interesting skill where you repeatedly pose Misty to capture frames, using voice commands to capture, play back, and undo poses, eventually exporting them as script snippets you could share or reuse.
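The capture/undo/export loop described above could look something like this sketch — in Python for illustration, with invented joint names and an invented `moveTo`/`pause` snippet format standing in for whatever script format the tool would actually emit:

```python
# Hypothetical sketch of the pose-capture idea: snapshot backdriven joint
# positions, support undo, and export the sequence as shareable snippets.
# The exported call names (moveTo, pause) are made up for illustration.

class PoseRecorder:
    def __init__(self):
        self.frames = []

    def capture(self, pose):
        """Snapshot the current joint positions (a dict of joint: angle)."""
        self.frames.append(dict(pose))

    def undo(self):
        """Discard the most recent snapshot, if any."""
        if self.frames:
            self.frames.pop()

    def export(self, hold_sec=1.0):
        """Render the captured frames as script-like snippets."""
        lines = []
        for pose in self.frames:
            args = ", ".join(f"{joint}={angle}" for joint, angle in sorted(pose.items()))
            lines.append(f"moveTo({args})")
            lines.append(f"pause({hold_sec})")
        return "\n".join(lines)

rec = PoseRecorder()
rec.capture({"head_pitch": -10, "left_arm": 45})
rec.capture({"head_pitch": 5, "left_arm": 0})
rec.undo()  # voice command "undo" discards the second pose
snippet = rec.export()
```

Voice commands would just map onto `capture`, `undo`, and a playback routine over `frames`.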

From a pure 3D-editor perspective, we had done some early work with Maya to compose animations, but it was a huge pain. It wound up being easier for us to work in naive WebGL.

Have fun building; I’m excited to see what comes out!
