
Misty Community Forum

Eyes/faces/images: moving toward customization and programming/learning expression

Just curious where, or if, you would want us to upload alternate eyes/faces/images.
I was playing with animated GIFs/PNGs for a wink or a rolling of the eyes. (I also have
the originals in GIMP; I just used the existing eyes as the template for similarity
reasons.)
And perhaps there is a better way to do the animated faces.

2 Likes

Example eyes; I just had it wink once (versus a repeated loop).

2 Likes

I think JavaScript (three.js) or something similar would work for animating the face and personalizing it.
I was just thinking this may already be in your plans or in development. I will start with three.js
or Python (but if you would prefer me to work in a specific flavor, I am okay with that too;
I develop on Linux for tooling).

A user ‘XYZ’ profile for “Misty I ABC” (due to my daughter’s love of ‘personalizing’
an avatar’s style/color/etc.):
- eye iris color [selector]
- eye iris size
- eye shape [round, ellipse]
- pupil [size]
- sparkle/flare [size], [location]; twinkle [enabled]
- eyebrow color
- eyebrow style (shapes: [line, unibrow, the long teardrop shape])
- eyebrow size (thin to thick, unibrow, etc.)
- eyelashes ([yes|no], [thickness], [length])
- eyelids

Then, for each piece, animation rather than wholesale image replacement would allow for
more dynamic expression and more personalization.
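
To make the idea concrete, here is a minimal sketch of how those per-user options could be captured as data, separate from the learned behaviors. This is purely hypothetical and not any actual Misty profile format; all names and values are placeholders.

```javascript
// Hypothetical sketch only -- not an actual Misty profile format.
// It just shows the customization options above captured as data.
const mistyFaceProfile = {
  user: "XYZ",
  robot: "Misty I ABC",
  eyes: {
    iris: { color: "#4a90d9", size: 0.6 },        // size relative to eye width
    shape: "ellipse",                             // "round" | "ellipse"
    pupil: { size: 0.3 },
    sparkle: { size: 0.1, location: "upper-left", twinkle: true },
    lashes: { enabled: true, thickness: 2, length: 8 },
    lids: { style: "soft" },
  },
  eyebrows: {
    color: "#3b2b20",
    style: "line",                                // "line" | "unibrow" | "teardrop"
    thickness: 3,
  },
};

// Animations (wink, eye roll, blink) would then be expressed as transforms
// over these pieces rather than as replacement images.
```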

Then I want to work on speech recognition/response (that is how my 6-year-old
prefers to interact with technology, versus a touch screen).

1 Like

Hey @markwdalton - We’re actually developing a very similar system already, trying to allow for exactly the sorts of personalization your daughter likes. Any ideas you have would be greatly appreciated.

What we’ve got now is a system of layers and assets. So, for example, you’d have an iris layer on which you’d put your colored iris asset, a pupil layer with a pupil asset, and so on. Layers are ordered, as usual, so you can get the nice ‘behind/in front’ effect.
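
A rough sketch of that layer/asset idea, assuming a browser canvas for illustration (this is not Misty’s actual implementation; the layer names and `loadImage` helper are invented):

```javascript
// Hypothetical layer stack: later entries render in front of earlier ones.
const faceLayers = [
  { name: "eye-white", image: loadImage("eye_white.png") },
  { name: "iris",      image: loadImage("iris_blue.png") },
  { name: "pupil",     image: loadImage("pupil.png") },
  { name: "eyelid",    image: loadImage("eyelid.png") }, // drawn last, in front
];

function loadImage(src) {
  const img = new Image(); // standard browser Image; assumes local assets
  img.src = src;
  return img;
}

function drawFace(ctx, layers) {
  // ctx is a CanvasRenderingContext2D; assumes images have finished loading.
  for (const layer of layers) {
    ctx.drawImage(layer.image, 0, 0);
  }
}
```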

Animations happen in one of two ways. Each layer can hold an animated sprite and loop it (or ping-pong, etc.). We’re also allowing for the specification of particular transforms to be applied to each layer (translation, rotation, scale, skew, etc.), and taking care of doing all the math for the intervening steps. Each of these animations can be triggered programmatically (e.g., by issuing a ‘blink’ command).
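
To illustrate the “we do the math for the intervening steps” part, here is a small, hedged sketch of keyframe interpolation for one layer transform. It is illustrative only, not the system described above; the keyframe values are made up.

```javascript
// Linear interpolation between two values.
function lerp(a, b, t) {
  return a + (b - a) * t;
}

// A hypothetical "blink": eyelid scaleY goes 0 (open) -> 1 (closed) -> 0.
const blinkKeyframes = [
  { time: 0.0, scaleY: 0.0 },
  { time: 0.1, scaleY: 1.0 },
  { time: 0.2, scaleY: 0.0 },
];

function sampleKeyframes(keyframes, time) {
  // Find the surrounding keyframes and interpolate between them.
  for (let i = 0; i < keyframes.length - 1; i++) {
    const a = keyframes[i], b = keyframes[i + 1];
    if (time >= a.time && time <= b.time) {
      const t = (time - a.time) / (b.time - a.time);
      return { scaleY: lerp(a.scaleY, b.scaleY, t) };
    }
  }
  return { scaleY: keyframes[keyframes.length - 1].scaleY };
}

// e.g. sampleKeyframes(blinkKeyframes, 0.05) -> { scaleY: 0.5 }
```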

There’s some other really cool stuff in the works that, alas, I don’t think I can talk about right now. But hopefully we’ll be able to get it out to you soon!

Thanks, I suspected you might have thought of this already. I am glad; I am not a graphics artist,
and only okay with GIMP and Blender (no expert). I do mostly scientific computing (C, C++, Fortran, MPI, CUDA on Crays/clusters), basic robotics (teaching kids), and some AI, GA, GP, ML, and DL as a hobby (I took some of Andrew Ng’s Coursera courses to get up to date).

Before I realized you (Misty) may have already started work on this, I was thinking of something like the character creators in video games, which let you adjust the face, styles, and colors (I could not find a ‘face’ example, though). I just started looking at this today, so I am sure your research/ideas are better. (I just wanted more color in the eyes and some movement, so I made a quick-and-dirty modification.)

But a free example, for a human, is this one from the three.js examples:
https://threejs.org/examples/#webgl_morphtargets_human
- I was thinking of an app, on the web and/or a tablet/phone, that would do this for the eyes.
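
In the spirit of that demo, here is a minimal morph-target sketch, assuming a recent three.js loaded as `THREE` (older versions need `geometry.addAttribute` and `morphTargets: true` on the material). The placeholder triangle shapes are mine, not Misty assets or the demo’s code.

```javascript
// One triangle as the "open eye" base shape, plus a flattened version
// as the "closed" morph target; animating the influence gives a blink.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute(
  "position",
  new THREE.Float32BufferAttribute([-1, 0, 0, 1, 0, 0, 0, 1, 0], 3)
);
// Morph target: same vertices with the top point pulled down (eye closed).
geometry.morphAttributes.position = [
  new THREE.Float32BufferAttribute([-1, 0, 0, 1, 0, 0, 0, 0.1, 0], 3),
];

const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial());

// In the render loop: drive the morph influence between 0 (open) and 1 (closed).
function animateBlink(timeSeconds) {
  mesh.morphTargetInfluences[0] = 0.5 * (1 + Math.sin(timeSeconds * 2));
}
```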

I am not particular about JavaScript (I prefer C or Python), but consistency is important (in the user experience, development, upgrades, security, and support).

I do agree you should not decide the personality; it should be learned (with some potential parental overrides/retraining… but for 18+ anything is fine, as long as it is clear the ‘owner’, not Misty, is responsible for the behavior and words; think issues of sexism, racism, etc.).

2 Likes

I’m glad to hear we’ve anticipated your wants, and only sorry that we don’t have anything like that three.js example (which is a great one) for you now. But, for me, the ability to personalize and modify your robot in this fashion is key to developing the unique bond we’re trying to create here. Other than the eyes, what other aspects of the robot would you want to edit? We’ve thought of letting users change the sounds, physical movements (of head, arms, and mobility), and lights. And in the other thread we’re talking about modifying personality characteristics. And, of course, you can attach/change the hardware. Is there anything else?

1 Like

Yes, three.js is just an example; it is not complete or ideal the way it was laid out. I also want it to be programmatically modifiable, which it can do. If you are doing a ‘per-user’ profile for personality, the same may be true of the ‘look’. Yes to the sounds (beeps, hums, notifications) along with the voice (think Skylanders Imaginators). It is about a balance between allowing customization of ‘style’ and the learning aspects.

I was thinking more of the way humans/animals work… you are born with eye color, eye shape, and a voice, with some choices and some relearning/reshaping you can do with the voice, eyebrows, etc. Expression of the eyes, physical movements, lights, and arms are more learned. So I want the ability to ‘roll the eyes’, ‘twinkle’, ‘wink’, use voice inflection, etc., but I think of those as more learned ways to express.

I would still think there are ‘modes’, but perhaps that is not the plan.
- Mode: Learning/expressing personality
- Mode: Robot education - an education/control program for learning to program
- Mode: Demo - example behaviors that it can learn (like the dance-like movement, etc.)

For the personality mode, I do not think of movement as really ‘forced’ programming
unless it is in the ‘demo’ or ‘education/control program’ modes.

And for Misty I, I do not see the lights as ‘forced programming’. As for
accessories, I am not sure yet, as you/we would have to build them and attach them; the
type of control depends on the ‘device’. (I will post alternate arm styles on the other thread with the example arm.)

It sounds like @station (Matt) may have valuable opinions about three.js or other models (based on his profile). It is out of my normal area, but I am happy to contribute in any way you want. I will play with three.js (just to start out), but if there is another starting point for what you want, I am happy to move to that when it is in progress.

It’s a tough call actually, since I’m not sure exactly how Misty is going to display its content. I get that using images works for now, but I don’t think it’s a great long-term solution. If Misty is running a Windows 10 IoT Core app, then I’m also not sure where three.js fits in, since it’s specific to the web browser’s implementation of WebGL.

If we are talking about using three.js to make a web interface tool that can export the resulting animation as an image sequence or GIF, then I’m still more inclined to use Unity instead, as it has a more powerful UI system, OS-level access for writing files, and can be compiled as a runtime binary on all major OSes. There are other options, of course. Something like Spine, which has a great animation interface but spits out JSON and asset files that can then be used in an optimized, realtime manner, would keep things not only lightweight but dynamic. However, that would cost money for most users (I got a license a long time ago when it was a Kickstarter).

@Dan what does that look like on your end so far? Is Misty going to be on the same hardware / internal OS? Will it be able to run 3D things in OpenGL or WebGL? Will it always need a UI to create content and then transfer the files over to be displayed?
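
To illustrate the “tool exports JSON plus assets, robot plays it back” approach: a hypothetical, much-simplified animation description (not Spine’s actual format, and not any Misty format) showing why this stays lightweight and dynamic.

```javascript
// Invented export format for illustration only.
const blinkAnimationExport = {
  assets: ["iris_blue.png", "pupil.png", "eyelid.png"],
  layers: [
    { name: "iris",   asset: "iris_blue.png" },
    { name: "pupil",  asset: "pupil.png" },
    { name: "eyelid", asset: "eyelid.png" },
  ],
  animations: {
    blink: {
      // Keyed transforms per layer; the runtime interpolates between keys.
      eyelid: [
        { time: 0.0, scaleY: 0.0 },
        { time: 0.1, scaleY: 1.0 },
        { time: 0.2, scaleY: 0.0 },
      ],
    },
  },
};
// A small runtime player only needs the source images plus this description,
// rather than pre-rendered GIFs for every expression.
```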

2 Likes

@station thank you! I saw your profile and figured you would know much more about
three.js or other solutions. (I am NOT an MS Windows person, and have just played
with JavaScript on web pages.)

I think we just want:
1. Basic customization of the basics for users (just to help the user feel ‘ownership’,
sort of like the birth/inheritance or ‘making it mine’ feeling), as these are not ‘learned’
things either way:
- eye color
- basic shape
- default pupil size
- eyebrow color
- eyebrow style (shapes: [line, unibrow, the long teardrop shape])
- eyebrow size (thin to thick, unibrow, etc.)
- eyelashes ([yes|no], [thickness], [length])
- eyelids
2. The rest would all come from ‘personality’, via API changes (not the web); see the sketch after this list:
- eye movement
- eyelid
- eyebrow movement/bends
- pupil size
- width of the eye opening (surprise versus squint)
- wink, etc.
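
A hedged sketch of that second category: expression as programmatic commands layered on top of the static profile. The function names here (setEyeOpenness, triggerWink, etc.) are invented placeholders, not Misty’s actual API.

```javascript
// Hypothetical expression API -- invented names, for illustration only.
// The static profile (colors, shapes) is user-chosen; these dynamic values
// would be driven by the personality/learning layer.
const expressionApi = {
  setEyeOpenness(amount)          { /* 0 = squint, 1 = wide surprise */ },
  setPupilSize(scale)             { /* e.g. dilate when "excited" */ },
  moveEyes(x, y)                  { /* look direction, -1..1 on each axis */ },
  bendEyebrow(side, angleDegrees) { /* side: "left" | "right" */ },
  triggerWink(side)               { /* one-shot animation */ },
};

// Example: a "surprised" expression composed from the primitives.
function lookSurprised() {
  expressionApi.setEyeOpenness(1.0);
  expressionApi.setPupilSize(1.3);
  expressionApi.bendEyebrow("left", 15);
  expressionApi.bendEyebrow("right", 15);
}
```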

2 Likes

Y’all’re right in line with the stuff we’ve been talking about, which is terribly encouraging, so please keep it up. To @station’s question: there’s some internal stuff we’re working on that will really drive the animation/movement/behavior of the ‘eyes’, so users won’t need to actively create that content themselves (although they always could, if they wanted; we’ll support as many standard filetypes as we can). But really we’re thinking more in line with @markwdalton, in that there are some high-level features of the eyes (color, shape, base asset, etc.) and we’ll want a nice user tool to change/edit those features and how they’re used during expressions.

My personal thinking is that this tool should be as easy to use as possible, so you don’t need a special license, or lots of experience in animation, to use it. Something web-based would likely fit the bill, but this is outside my area of expertise, so perhaps a custom Unity application would be more appropriate; I don’t know.

3 Likes