
Virtual Reality As a Training Tool for Learning By Demonstration


#1

A great tool for training robots is learning by demonstration, or LfD (Robot learning by demonstration - Scholarpedia). To drastically oversimplify: the robot observes, mimics, or participates with a teacher performing a task multiple times, so it can learn which variables matter, and which don’t, for task success. The demonstrations provide exemplars for training, and the robot can then perform the task on its own in novel scenarios.
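
As a rough illustration of the “which variables matter” part, here is a minimal Python sketch (with made-up names and data, not any particular LfD library) of one simple idea: treat variables that stay nearly constant across demonstrations as task-relevant, and highly variable ones as free.

```python
# Minimal sketch: infer which task variables matter by looking at their
# variance across repeated demonstrations. Variables the teacher keeps
# (nearly) constant are treated as task-relevant; highly variable ones
# are treated as "don't care". Names and data are illustrative only.
import numpy as np

def relevant_variables(demos: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """demos: array of shape (n_demonstrations, n_variables),
    e.g. end-effector states recorded at task completion.
    Returns a boolean mask of variables with low variance across demos."""
    return demos.var(axis=0) < threshold

# Five demonstrations of a "place the cup" task: (x, y, gripper_angle).
demos = np.array([
    [0.50, 0.20, 0.1],
    [0.51, 0.21, 1.2],
    [0.49, 0.20, 2.9],
    [0.50, 0.19, 0.7],
    [0.50, 0.20, 2.0],
])
print(relevant_variables(demos))  # x and y matter; the gripper angle does not
```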

Keeping the robot safe from harm during demonstrations is critical, especially when the robot is expensive. In the past, tools such as interactive simulations and kinesthetic teaching (https://www.cc.gatech.edu/~ksubrama/files/HRIFinalBK.pdf) have been used during LfD, so the robot remains safe while demonstrations occur.

Virtual reality has recently become popular as an interactive medium through which humans and robots can interact safely. Much like an immersive simulation, the demonstrations are provided in a virtual space, and the robot learns from the rich contextual information provided while its physical body remains safe from harm.

https://www.forbes.com/sites/andreamorris/2018/02/01/robots-learn-to-move-like-humans-using-virtual-reality/

If you could interact with your Misty robot through virtual reality, what would you teach it? Which activities would you want to do in VR with your Misty robot? Are there any activities or skills that you think would be easier to teach your Misty robot in a virtual world than they would be to teach in the real world?

Through the use of VR, you wouldn’t have to be co-located with your Misty robot to interact with it. Which types of long-distance interaction with your Misty robot through VR are appealing to you? …games? …chatting via avatars? …storytelling? If your Misty robot wanted to use VR to interact with other Misty robots or humans outside your home, would you allow it?


#2

Amazing stuff! What do you think, @dan @Vlad @slava @Patrick ? How feasible is it for a robot maker/developer to leverage simulations for training?


#3

I think @michael hit all the major points. Simulations are great because they allow you to test things that might break your robot, particularly for techniques such as reinforcement learning, where lots of potentially catastrophic failures may take place, and even for LfD if the robot explores for new solutions. And simulations + VR would be a really great way to improve the interaction between a human and the simulated robot. One of the attractions of kinesthetic teaching is that naive users can easily do it: you just physically move the robot to perform the task you want. Generally you lose that ability in simulation, but with a simulated robot in VR you might be able to recapture that experience.


#4

Before we do virtual reality, we need a “skills framework” and more-or-less tuned hardware. When I demonstrate a 90° turn, I would expect 90° +/- 10°… We are not there yet. But developing skills in VR before they are available on the robot might be quite useful. On the other hand, teaching by example… I do not see it as feasible… yet. Maybe face/object recognition, navigation, and facial expression/mood recognition could be good candidates…


#5

What’s great about a simulated or virtual environment is that you don’t have to wait for the hardware to be ready before you can start teaching and training your robot. As long as the virtual tools exist for training, the robot can learn, adapt, and evolve, and when its hardware finally does become ready, the robot begins its physical life in a much more intelligent state (i.e., knowing much more). Getting a virtual or simulated robot to move ideally is a much simpler task than getting the actual hardware to operate correctly and flawlessly.

That’s part of the reason why I love physics-based (i.e., dynamic) simulations so much. They give the robot an accurate model of the world, so what it learns virtually is directly applicable to the real world (provided that you have an accurate model of your robot).
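
As a concrete (if minimal) example of what a physics-based simulation loop looks like, here is a short PyBullet sketch. Misty doesn’t have an official URDF model that I know of, so this loads PyBullet’s bundled R2D2 as a stand-in.

```python
# Minimal physics-based ("dynamic") simulation loop using PyBullet.
# R2D2 is a stand-in model; swap in your own robot description (URDF).
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                      # headless; use p.GUI to watch it
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")                 # ground plane
robot = p.loadURDF("r2d2.urdf", basePosition=[0, 0, 0.5])

for _ in range(240):                     # simulate one second at 240 Hz
    p.stepSimulation()

position, orientation = p.getBasePositionAndOrientation(robot)
print("settled at", position)
p.disconnect()
```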


#6

This isn’t using learning by demonstration, but on the topic of using simulations to train/educate robots, some interesting emergent behaviors arose from recent work performed by Google on using simulations to allow robots to independently learn new skills.

“The simulated robots — a cheetah, ant and hopper — acquired transferable skills by being set loose to ‘explore’ a simulated 3D environment, trying out a series of randomized behaviors to learn a wide variety of skills. …human designers simply do not know or cannot imagine some skills agents can acquire”


#7

In this article, we see how a robot can be controlled by a human in the real world via a virtual reality interface that gives the human a first-person view from the robot’s perspective and allows them to control the robot by moving their appendages.

If such software were coupled with a system for learning by demonstration, the human could easily give exemplars or train the robot by virtually “stepping into its skin.”
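
To make the coupling concrete, here’s a tiny hypothetical sketch of how a VR teleoperation loop could double as an LfD data source: every control tick, the operator-driven robot state gets logged into a demonstration buffer for a learner to consume later. All field names are illustrative.

```python
# Hypothetical recorder: the teleop loop calls on_teleop_tick() each tick,
# and the accumulated samples form one demonstration exemplar.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DemoSample:
    t: float              # seconds since demonstration start
    head_pitch: float     # degrees, mirrored from the operator's headset
    arm_left: float       # degrees
    arm_right: float      # degrees
    drive_linear: float   # commanded linear velocity
    drive_angular: float  # commanded angular velocity

@dataclass
class DemonstrationRecorder:
    samples: List[DemoSample] = field(default_factory=list)

    def on_teleop_tick(self, sample: DemoSample) -> None:
        """Append the current operator-driven state to the demonstration."""
        self.samples.append(sample)
```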


#8

I have been working with Unity and the Unreal Engine, and I was asking about a 3D model of Misty for something similar:

1) Using the SLAM on Misty, can I interface with the 3D engine to render the walls and other obstructions to Misty getting around?
2) After doing that, I would love to run some simulations and test how the virtual-world coordinates map to the physical impact of Misty moving around.
3) Play back the navigation in the virtual world while simultaneously providing commands for Misty to execute in the physical world.

I expect a delay from the simulation to Misty executing commands, and I am not sure yet how fine-grained the control of Misty’s movement is from the API. I don’t have my Misty yet, but I have been looking at the various development docs.
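
For item 3, here is a rough Python sketch of how a recorded virtual-world path might be replayed as timed drive commands over Misty’s REST API. The endpoint name, payload fields, sign conventions, and the speed calibration numbers below are assumptions on my part, so treat them as placeholders to verify against your API version.

```python
# Rough sketch: replay a recorded virtual-world path on the physical robot
# by turning each path segment into a turn-then-drive pair of timed commands.
# Assumed endpoint: POST /api/drive/time with percent velocities (verify!).
import math
import time
import requests

MISTY_IP = "10.0.0.42"            # placeholder address for your robot

def drive_time(linear_pct: float, angular_pct: float, duration_ms: int) -> None:
    requests.post(
        f"http://{MISTY_IP}/api/drive/time",
        json={"LinearVelocity": linear_pct,
              "AngularVelocity": angular_pct,
              "TimeMs": duration_ms},
        timeout=5,
    )

def replay(waypoints, meters_per_unit=1.0, speed_pct=20, speed_mps=0.2):
    """waypoints: list of (x, y) points in virtual-world units.
    meters_per_unit and speed_mps are calibration guesses you would measure."""
    heading = 0.0                                  # assumes robot starts at 0 rad
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        dx, dy = (x1 - x0) * meters_per_unit, (y1 - y0) * meters_per_unit
        target = math.atan2(dy, dx)
        turn = math.degrees(target - heading)
        heading = target
        # Guess: ~45 deg/s at 30% angular velocity; sign convention assumed.
        drive_time(0, 30 if turn > 0 else -30, int(abs(turn) / 45 * 1000))
        time.sleep(abs(turn) / 45 + 0.5)           # crude wait for the turn
        distance = math.hypot(dx, dy)
        drive_time(speed_pct, 0, int(distance / speed_mps * 1000))
        time.sleep(distance / speed_mps + 0.5)     # crude wait for the drive
```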


#9

It would be fun to provide the input commands that you are sending to the hardware to one virtual robot, to see the ideal or expected output response of the robot, like a kinematic simulation in response to hardware commands. Then, instrument your robot with fiducials or retro-reflective markers and use cameras or a motion capture system to determine the actual response to the commands as exhibited by the robot, and retarget/project those trajectories onto a second virtual robot, which you then render in the same virtual world. You would then have a visual representation of how closely your Misty robot is matching what you’ve commanded. There will be discrepancies (due to physics, timing, etc.), some of which you could correct out, but the ultimate goal would be for the global positions and orientations of the two robots to match, and for the orientations of the head and arms to match, as a function of time.
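
Here’s a bare-bones sketch of that commanded-vs-actual comparison: integrate the drive commands into an “ideal” kinematic pose and diff it against the pose reported by your tracker. The unicycle model and the example numbers are just illustrative.

```python
# Sketch: integrate drive commands into an ideal pose, then compare it with
# the pose coming from a motion-capture / fiducial tracker.
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    theta: float   # planar pose, radians

def integrate_command(pose: Pose, v: float, w: float, dt: float) -> Pose:
    """Unicycle model: advance the ideal robot by one command interval."""
    theta = pose.theta + w * dt
    return Pose(pose.x + v * math.cos(theta) * dt,
                pose.y + v * math.sin(theta) * dt,
                theta)

def pose_error(ideal: Pose, tracked: Pose) -> tuple[float, float]:
    """Returns (position error in meters, heading error in degrees)."""
    dpos = math.hypot(ideal.x - tracked.x, ideal.y - tracked.y)
    dth = (ideal.theta - tracked.theta + math.pi) % (2 * math.pi) - math.pi
    return dpos, math.degrees(dth)

ideal = Pose(0.0, 0.0, 0.0)
for v, w, dt in [(0.2, 0.0, 1.0), (0.0, math.pi / 4, 1.0), (0.2, 0.0, 1.0)]:
    ideal = integrate_command(ideal, v, w, dt)
tracked = Pose(0.19, 0.16, math.radians(80))   # e.g. from mocap / fiducials
print(pose_error(ideal, tracked))
```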

A long time ago, prior to having a SLAM algorithm running on the robot, we actually implemented a very coarse version of such a system in-house using ARToolKit & OpenGL. It allowed us to check the accuracy of orientation commands in our locomotion system, and it gave us a measure of global positioning for the robot. We were able to minimize the impact of timing lag in our system by representing our virtual robot not as a complicated 3D model of Misty, but rather as a set of primitive shapes (e.g. circles, rectangles), which render faster than a collection of vertices and normals in a more complex arrangement. It was purely kinematic, but I’ve been wanting to make a full dynamic model of Misty for quite some time now.

The benefit of a system like you describe is that, once your coordinates and frames are well aligned, you could select points in your virtual world (e.g. by virtually “touching” them) and transform them into commands for your Misty robot to drive to virtual locations, which would match locations in the physical world. To phrase it differently, your virtual world would be both a scale-accurate display of the real world and an input device for controlling your robot. Very cool!
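
A minimal sketch of that last step, assuming you’ve already solved for a fixed virtual-to-world transform: a point “touched” in the virtual scene is mapped through the transform and becomes a drive-to goal for the physical robot. The scale, rotation, and translation values below are placeholders you would obtain by aligning the two coordinate frames.

```python
# Map a point picked in the virtual scene to a world-frame goal via a fixed
# 2D homogeneous transform. All numeric values are placeholders.
import math
import numpy as np

def make_transform(scale: float, rotation_deg: float, tx: float, ty: float) -> np.ndarray:
    """2D homogeneous transform from virtual coordinates to world meters."""
    c, s = math.cos(math.radians(rotation_deg)), math.sin(math.radians(rotation_deg))
    return np.array([[scale * c, -scale * s, tx],
                     [scale * s,  scale * c, ty],
                     [0.0,        0.0,       1.0]])

virtual_to_world = make_transform(scale=0.01, rotation_deg=90, tx=1.0, ty=2.0)

def picked_point_to_goal(u: float, v: float) -> tuple[float, float]:
    """Map a point 'touched' in the virtual scene to a world-frame goal (x, y)."""
    x, y, _ = virtual_to_world @ np.array([u, v, 1.0])
    return x, y

print(picked_point_to_goal(250.0, 100.0))   # -> goal the robot should drive to
```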