Take a look at our newest post, and let us know whether you think we’re on-target or not:
Also feel free to suggest topics for other posts you’d like to see us research and write.
And if you have something you’d like to write, we’d love to publish your work!
The blog post is an interesting read.
I have previously given thought to what I consider the three categorical technical obstacles to developing a Rosie-level multipurpose robot. I’d like to quickly explain the three items as I see them and try to put them in perspective against the four challenges identified in your post.
Categorically, I see the three technical challenges as:

- Intelligence - Fictional robots like Rosie, C-3PO, and Star Trek’s Data demonstrate human-like intelligence that may be referred to as general artificial intelligence, common-sense intelligence, or concept-based intelligence. It’s evident we aren’t there yet…we are a quantum step away. But I believe machines should (and will) be able to demonstrate this level of intelligence. If so, I think the first two challenges identified in the blog post (human-friendly interaction and environment mapping) will be solved as a consequence.
- Strength/dexterity - If consumer robots are going to rival humans in manual tasks, they will have to match us mechanically, especially in the “manual” (as in ‘hands’) regard. Human-hand capability is a holy grail in robotics. Ideally, the general-purpose robot can wash dishes, paint walls, operate tools and appliances, and perform every other task that requires both manual strength and precision. Of course it’s not all about the hands…the robot must be able to climb stairs and use and balance the rest of its body as needed (shoveling snow…). I think the intelligence breakthroughs will help provide this capability (algorithms that give appropriate feedback to the body’s actuators).
- Power source - If a robot is going to be a multipurpose workhorse, what power supply is appropriate? Not a gas engine…indoors. There is no battery solution I know of that has enough power and is reasonably priced. And AC power, provided by a power cord, will probably cause problems of its own.
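To put the battery problem in rough numbers, here is a back-of-envelope sketch; every figure in it is an illustrative assumption, not a measured value:

```python
# Back-of-envelope: what would a full workday on battery require?
# Every number here is an illustrative assumption, not a measured value.
avg_draw_w = 300         # assumed average draw for motors + compute, watts
workday_h = 8            # desired untethered operating time, hours
density_wh_per_kg = 250  # assumed Li-ion pack energy density, Wh/kg

energy_wh = avg_draw_w * workday_h       # total energy needed
pack_kg = energy_wh / density_wh_per_kg  # implied battery mass
print(f"{energy_wh} Wh needed -> roughly {pack_kg:.1f} kg of cells")
```

Even with generous assumptions, the pack mass and cost add up quickly, which is why the power source feels like a first-class challenge to me.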
Those are the primary technical challenges as I see them. I’m interested in others’ thoughts: did I miss any?
The blog post mentions Privacy and Security, which are important concerns. I never thought about these types of challenges because they are more legal than technical (although they will be addressed by technology). I imagine there will be more legal-type challenges. How about this one: an autonomous robot makes a reasonable decision and somebody inadvertently gets hurt as a result. Who is responsible?
Finally, the blog post mentions “Multifunctionality” as one of the four challenges. Just my opinion, but I think that point is too general and needs to be decomposed into more specific challenges…
Thanks for the post. Looking forward to reading more on the topic.
Thanks, @ben! Having a lot of fun sharing ideas.
So pleased you gave it a read, @Joe. And good points.
No argument that intelligence can cover both human interaction and more specific tasks such as mapping. There is just such a range of challenges in each of those two areas that, to us, they both merited calling out. In truth, this may simply be because, as a robot company, these are both areas we (and other robot companies) have to think about all the time.
While pure strength is not an issue for some robots – many are dangerously strong – I do think general dexterity is a challenge we should have called out more strongly, as you note. There are definitely some really dexterous robots out there, but it’s still very hard to achieve dexterity without an enormous cost that quickly takes robots out of the consumer price range. But, to be fair, there are also a LOT of companies doing good work in this area, so maybe consumer-affordable dexterity is on its way?
“Power source” is an interesting one. I think our assumption has been that stationary chargers are sufficient for most use cases, as long as a charging cycle is of a manageable length relative to the usage pattern.
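One rough way to frame “manageable length” is availability, i.e., the fraction of time the robot is off the dock. The numbers below are illustrative assumptions, not measurements:

```python
# Rough availability check for a robot that docks at a stationary charger.
# Both numbers are illustrative assumptions, not measurements.
runtime_h = 2.0  # assumed runtime per full charge
charge_h = 1.0   # assumed time to recharge fully

availability = runtime_h / (runtime_h + charge_h)
print(f"Robot available {availability:.0%} of the time")
```

If the usage pattern is bursty (dishes after dinner, vacuuming mid-morning), even a modest availability fraction like this may be perfectly adequate.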
In the “privacy, security, and general legal matters” area, we, too, think that the legal challenges involved with robots are only beginning to be looked at. In our Chief Robotics Officer post, we bring up a few examples of robots in the workplace creating interesting challenges for both Legal departments and Human Resources. Yours is a good one.
@Joe couldn’t agree with you more regarding “hands” and power source. While I think we will have great success with early adopters in the next few years, it’s clear that neither we nor anyone else can truly “cross the chasm” of consumer-grade robots without hands, and, to your point, the power requirements portend a significant wait as well.
Thanks, @Donna and @tenwall. Yeah, I guess I was thinking about the “end game” with some of the things I mentioned. It makes sense that for the more immediate term, the current power sources should be sufficient for the robots we expect. I can understand your point as to why SLAM is singled out, again thinking about the more immediate consumer market. I will check out the Chief Robotics Officer post; haven’t read that yet…
I’m going to chime in here and draw a distinction between different types of intelligence - In humans there are many theories that posit different types of intelligence:
And there’s no reason to suspect (or enforce) that robots only have one. So, yes, solving ‘intelligence’ in the general sense will get us both human-friendly interaction and environment mapping, but it’s not a monolithic problem.
In a further shout-out to Star Trek’s Data - he was super-intelligent, but had tons of trouble with emotions and interaction, especially at the beginning.
@dan, that’s a great diagram.
I agree with the ideas re: emotion. Right, I don’t think we should assume that machines that achieve cerebral intelligence will automatically have emotions and exhibit every other behavior common to biological entities. I’m not a psychologist or anything close to it, but I have to believe that a lot of our behavior is driven by factors nature has built into us.
I wonder if all of the sci-fi movies based on sentient machines being driven to wipe out humanity are missing a key question: is there any reason to believe that just because a machine has become self-aware, it will have a built-in drive to survive?
And I agree that solving ‘intelligence’ is not a monolithic problem. But I do believe that if we really crack the nut on the fundamental algorithm of representing and manipulating concepts, that capability can be applied to most of the boxes in the diagram.
Thanks for posting that image.
Representing and manipulating concepts is one thing, but understanding them is something else. I’m always put in mind of STRIPS systems [Strips - Wikipedia], which are great for letting computers handle concepts. Tell the system what things exist, how they can relate, what can be done, and so on, and it can figure out how to get from one state to another state, i.e., plan.
But there’s no understanding in the system, nothing that ties (or grounds) the abstract concepts that the system is manipulating to things (objects, relations) in the real world. You can, in fact, design totally fantastic worlds that obey their own (logically consistent) rules, and a STRIPS system will happily plan in it.
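To make the STRIPS idea concrete, here is a minimal sketch of that kind of planner: states are sets of facts, actions carry preconditions plus add/delete lists, and a breadth-first search finds a plan. The little household domain at the bottom is entirely hypothetical, purely for illustration:

```python
# Minimal STRIPS-style planner (illustrative sketch, not a production system).
# States are frozensets of ground facts; actions have preconditions,
# an add list, and a delete list. BFS returns the shortest plan.
from collections import deque


class Action:
    def __init__(self, name, pre, add, delete):
        self.name = name
        self.pre = frozenset(pre)
        self.add = frozenset(add)
        self.delete = frozenset(delete)

    def applicable(self, state):
        return self.pre <= state  # all preconditions hold

    def apply(self, state):
        return frozenset((state - self.delete) | self.add)


def plan(initial, goal, actions):
    """Return a list of action names leading from initial to a goal state."""
    goal = frozenset(goal)
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for a in actions:
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [a.name]))
    return None  # no plan exists


# Hypothetical domain: a robot fetches a cup from the kitchen.
actions = [
    Action("go-kitchen", {"at-hall"}, {"at-kitchen"}, {"at-hall"}),
    Action("pick-cup", {"at-kitchen", "cup-kitchen"}, {"holding-cup"}, {"cup-kitchen"}),
    Action("go-hall", {"at-kitchen"}, {"at-hall"}, {"at-kitchen"}),
]
print(plan({"at-hall", "cup-kitchen"}, {"at-hall", "holding-cup"}, actions))
# -> ['go-kitchen', 'pick-cup', 'go-hall']
```

Note how this illustrates the grounding point exactly: the planner happily works with facts like "cup-kitchen" without any notion of what a cup or a kitchen is.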
Which is why I think robots are so much more interesting than computers. By being embodied and present in the physical world, they might be able to start to understand concepts. And by ‘understand’ I mean build their own representation of a concept, not just use one given by a human programmer.
This is, perhaps, a purely philosophical distinction, but I think the self-discovering of concepts via embodied robots will enable systems to better represent and manipulate and interact with the physical world, including humans.
An alternate set of top challenges:
In the article that @Dan posted, challenge #10 (who is responsible for a robot’s actions?) seems to equate robots with artificial intelligence. The answer lies in that ambiguous, personal, and subjective territory of defining a robot, but personally I do not believe that artificial intelligence equals robot.
@Michael, I completely agree with you: robot <> intelligence.
Several years ago a Volkswagen factory worker was killed by a robot (obviously a legacy industrial robot, not something autonomous). Maybe it was just my perception, but I felt many of the news stories used wording that implied intent on the part of the robot, most likely to sensationalize the story. Although it’s subtle, there’s a big perceptual difference between the headlines “worker killed by robot” and “robot kills worker”.
There’s actually a fairly sizeable debate going on in the EU right now about robots and responsibility, and whether or not robots should be granted ‘legal personhood’.