
Misty Community Forum

How similar/dissimilar are virtual characters to robots?

3 Likes

Whoa! How hard would it be to beat a game character that could iteratively improve its abilities, based on machine learning? Yikes!

I was just on a hike today and I couldn’t help but think about how, using a similar approach, we could come up with an emergent system for traversing complex terrain: test thousands of variations of mobile actuators based on biomimetic design and see whether we could identify a set of “legs” (or something else) that could actually be built. I’m sure groups have tried something similar in the past, but I’m thinking the tools out there now might allow for faster approaches that could yield something useful.
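Just to make the idea concrete, here’s a rough sketch of what that kind of emergent search could look like: a tiny evolutionary loop over made-up leg parameters. The `simulate_traversal` scoring function is only a stand-in for a real physics simulation, and all of the parameter names are invented for illustration.

```python
# A minimal sketch of the "evolve the legs" idea. simulate_traversal() is a
# placeholder for a real terrain-traversal simulation; the parameters and
# scoring are made up purely to show the shape of the loop.
import random

def random_leg():
    # Each candidate is a small set of biomimetic-ish design parameters.
    return {
        "segment_lengths": [random.uniform(0.05, 0.4) for _ in range(3)],
        "joint_stiffness": random.uniform(0.1, 5.0),
        "stride_frequency": random.uniform(0.5, 3.0),
    }

def simulate_traversal(leg):
    # Stand-in for a physics simulation over complex terrain: here we just
    # reward moderate total leg length and stiffness.
    return -abs(sum(leg["segment_lengths"]) - 0.6) - abs(leg["joint_stiffness"] - 2.0)

def mutate(leg):
    # Copy the parent and jitter each parameter slightly.
    child = {k: (list(v) if isinstance(v, list) else v) for k, v in leg.items()}
    child["segment_lengths"] = [max(0.01, length + random.gauss(0, 0.02))
                                for length in child["segment_lengths"]]
    child["joint_stiffness"] += random.gauss(0, 0.1)
    child["stride_frequency"] += random.gauss(0, 0.05)
    return child

population = [random_leg() for _ in range(50)]
for generation in range(100):
    scored = sorted(population, key=simulate_traversal, reverse=True)
    survivors = scored[:10]                                    # keep the best designs
    offspring = [mutate(random.choice(survivors)) for _ in range(40)]
    population = survivors + offspring

best = max(population, key=simulate_traversal)
print("best candidate:", best)
```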

Most previous experiments that tried to “evolve” some new type of locomotion by training over data again and again end up looking very similar to things already found in nature, but crappier. It turns out millions of years of evolution have already done a good enough job of devising locomotion systems. The newer trend is to try to fully characterize and emulate some of those natural locomotion systems instead.

Also, our simulation tech is not good enough to realistically simulate a complex locomotion system moving over complex terrain. That, combined with the fact that most of the money and interest these days is in autonomous cars and drones, means the latest simulation tools are aimed at those markets:

For cars and drones, the environment and the physics are more constrained and can be simulated with sufficient accuracy to provide good training.

4 Likes

@donna Here is a chess AI that iteratively improves its abilities, and here is its Elo rating over time.
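For anyone curious how that rating curve is computed, here’s a quick sketch of the standard Elo update; this is the general formula, not anything specific to that particular chess AI.

```python
# Standard Elo rating update after a single game.
def expected_score(rating_a, rating_b):
    # Probability that player A beats player B under the Elo model.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(rating_a, rating_b, score_a, k=32):
    # score_a is 1 for a win, 0.5 for a draw, 0 for a loss.
    expected_a = expected_score(rating_a, rating_b)
    return rating_a + k * (score_a - expected_a)

# Example: a 1500-rated engine beats a 1600-rated opponent and
# gains roughly 20 rating points.
print(update_elo(1500, 1600, 1.0))
```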

2 Likes

I suppose if I’d played over 4 million games, I might get pretty good, too. :wink:

Given how nearly infinitely better a machine-learning character would be than a human, how would game designers handle playability? I’d imagine the ML character would have to dumb down its abilities hugely, but ideally in a way that was actually responsive to the human player’s abilities. That way, the ML character could “teach” the human over time.
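A toy sketch of what “responsive dumbing down” could look like: the opponent tracks the player’s recent results and nudges a strength knob so the player keeps winning roughly half the time. The class name, the strength knob, and the window size are all made up for illustration.

```python
# Hypothetical adaptive opponent: eases off or plays harder based on the
# player's recent win rate. Nothing here is a real game API.
from collections import deque

class AdaptiveOpponent:
    def __init__(self, strength=0.5, target_player_win_rate=0.5, window=20):
        self.strength = strength            # 0.0 = pushover, 1.0 = full ML policy
        self.target = target_player_win_rate
        self.recent = deque(maxlen=window)  # 1 if the player won, 0 otherwise

    def record_game(self, player_won):
        self.recent.append(1 if player_won else 0)
        if len(self.recent) < self.recent.maxlen:
            return  # wait for a full window before adjusting
        win_rate = sum(self.recent) / len(self.recent)
        # If the player is winning too much, play harder; if losing, ease off.
        self.strength += 0.05 * (win_rate - self.target)
        self.strength = min(1.0, max(0.0, self.strength))
```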

Still, does knowing the game can always beat you (if it wanted to) take the fun out of it?

1 Like

You could just freeze the network (stop learning) at several different checkpoints. That way you would have an “easy”, “medium”, and “hard” AI. The more fine-grained the checkpoints, the closer you could in theory get to a ladder of AI opponents where you, the player, always lose about 45% of your games.
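Something like the following: each frozen checkpoint gets an estimated strength, and you hand the player the one they should beat about 55% of the time (i.e. lose ~45%). The checkpoint names, training-game counts, and ratings below are invented for illustration, and the win-probability estimate is just the usual Elo-style formula.

```python
# Hypothetical ladder of frozen checkpoints with made-up strength estimates.
checkpoints = {
    "easy":   {"training_games":    50_000, "rating": 1200},
    "medium": {"training_games":   500_000, "rating": 1500},
    "hard":   {"training_games": 4_000_000, "rating": 1900},
}

def player_win_probability(player_rating, ai_rating):
    # Elo-style estimate of the player's chance of beating the frozen AI.
    return 1.0 / (1.0 + 10 ** ((ai_rating - player_rating) / 400.0))

def pick_opponent(player_rating, target_win_rate=0.55):
    # Choose the checkpoint whose estimated player win rate is closest to target.
    return min(
        checkpoints.items(),
        key=lambda item: abs(
            player_win_probability(player_rating, item[1]["rating"]) - target_win_rate
        ),
    )[0]

print(pick_opponent(1300))  # "easy" with the illustrative ratings above
```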

This points to another problem with current reinforcement learning: the number of training steps required is usually very high (e.g., 4 million games).

Knowing that an AI or another human can beat you doesn’t take the fun out of games, even highly competitive ones. It’s hard to explain, but there’s a thrill to getting better, to defeating others for the first time, and to the game itself.

1 Like