Reality Gets Closer to Dream of Humanoid Robots

Article By : Andrew Beaulieu

In many ways, humanoid robots are the complement of application-specific robots. Humanoid robots have come a long way in recent decades, but a number of important questions and challenges remain.

Looking at the trajectory of robotics over many decades, we can see a clear bifurcation in the field.

On one side has been a significant push for application-specific robots: machines designed to perform just one function extremely well. This approach dominates many domains, including manufacturing, military, and industrial applications. Most robots being used and deployed at scale today are of this type.

On the other side, we see the development of humanoid robots—robots that closely resemble humans in appearance and functionality. In many ways, humanoid robots are the complement of application-specific robots, aiming to perform well across a wide variety of tasks and environments rather than excel at any single one.

Humanoid robots have come a long way in recent decades, but a number of important questions and challenges remain.

Why humanoid?

To truly understand the field of humanoid robotics, we have to start off by questioning our motives for, and the benefits of, such a technology. Robots are able to outperform humans in many aspects, so what’s the benefit of making a robot that mimics humans?

The answer to this question largely lies in the intended function of a humanoid robot: to be a general-purpose machine. In many ways, the purpose of humanoid robots isn’t necessarily to perform tasks that humans can’t; it is to perform tasks that are too dangerous, dull, or dirty for humans.

With this in mind, to be able to execute all the functions a human can, a robot must be able to seamlessly navigate the environment that humans live in. The interesting thing here is that the human environment was already created by humans to meet the constraints and abilities of the human body. Consider, for example, that human environments often have staircases, not because stairs are the optimal way to ascend or descend a structure but because they are the convenient way for a human to ascend or descend a structure.

So to make a general-purpose machine that can exist within our human-tailored environment, that machine must be able to function like a human.

Humans are hard to emulate

Despite the demand for robots that can emulate human actions, achieving this in practice is extremely difficult. One of the biggest challenges is that the human body is remarkably well engineered by evolution, a fact that shows up at every layer of robotics.

From a mechanics perspective, for example, bipedal locomotion (walking on two legs) is an extremely physically demanding task. In response, the human body has evolved and adapted such that the power density of joints like the knee is very high. Matching this power density, along with the unique motion requirements of bipedal locomotion, has historically been difficult with electric motors. Another example is the human ankle joint, which is complex and critical to gait stability; it, too, has been hard to replicate.

Beyond purely mechanical traits, the human body navigates its environment with an extremely intricate and impressive sensory system. Understanding the environment the way humans do requires technologies that closely imitate human auditory, tactile, and visual systems. That, however, only covers acquiring sensory data from the world. The human body also communicates this sensory information throughout the body via a nervous system optimized over millions of years of evolution. Endowing a robot with the equivalent of the human sensory and nervous systems has been an ongoing challenge in the field for decades.

Finally, putting together the control and planning systems that integrate the sensory, communications, and mechanics has posed a significant challenge. Not only does a robot need to be able to interpret its environment and move its joints in concert, but it needs a way of merging these functions to successfully and autonomously navigate the world. Mimicking the human body’s ability to do this requires a highly advanced robotics software stack ranging all the way from the algorithmic level to the driver and low-level firmware levels.
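
To give a feel for how these pieces fit together, the sketch below compresses the classic sense-plan-act cycle into a toy proportional controller for a single joint. The function name, gains, and structure are invented for illustration; a production stack layers perception, planning, drivers, and firmware under each of these one-line stages.

```python
def control_loop(target, position, steps=50, dt=0.1, gain=0.5):
    """Toy sense-plan-act loop: a proportional controller driving a
    single 1-D joint toward a target position."""
    for _ in range(steps):
        error = target - position   # sense: measure deviation from the goal
        command = gain * error      # plan: compute a proportional velocity command
        position += command * dt    # act: drive the joint and integrate the motion
    return position
```

Even this toy version shows the essential shape of the problem: sensing, deciding, and acting must run together in a tight loop, over and over, for every joint at once.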

Humanoid tech: hardware

Fortunately, the robotics industry has benefited from improvements in a number of hardware technologies over the last few years.

Specifically, there has been a significant push for and adoption of new sensory technologies for humanoid robots. Devices like inertial measurement units (IMUs) have seen tremendous improvement and have allowed robots to emulate certain aspects of the human nervous system. Here, humanoid robots can leverage an IMU’s integrated accelerometers and gyroscopes to better estimate their multi-axis orientation in space and better inform the robotic control algorithm.
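
As a rough sketch of the kind of fusion an IMU enables, the snippet below implements a simple complementary filter that blends gyroscope integration (smooth but drifting) with an accelerometer-derived gravity estimate (absolute but noisy) into a pitch estimate. Real humanoid stacks typically use more sophisticated estimators such as Kalman filters; the names and constants here are illustrative.

```python
import math

def complementary_filter(prev_pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into a pitch estimate (radians).

    gyro_rate: angular velocity about the pitch axis (rad/s).
    accel_x, accel_z: accelerometer readings (m/s^2); at rest they encode
    the gravity vector, from which absolute pitch can be recovered.
    """
    # Integrate the gyro for a smooth short-term estimate (drifts over time).
    gyro_pitch = prev_pitch + gyro_rate * dt
    # Derive an absolute but noisy pitch from the gravity direction.
    accel_pitch = math.atan2(accel_x, accel_z)
    # Blend: trust the gyro in the short term, the accelerometer in the long term.
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

Run at each control tick, the accelerometer term slowly pulls the integrated gyro estimate back toward the true orientation, correcting drift.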

Further along the sensory chain, perception hardware has improved dramatically and can now begin to rival the human visual system. Humanoid robots today benefit from great progress in depth cameras, RGB cameras, LiDAR, and radar, which are more performant, power-efficient, and affordable than ever before. The result is the ability to generate 3D maps of environments and detect objects in the RGB images produced by on-device cameras. Additionally, tactile sensors have begun to provide touch-like sensing, especially when combined with compliant, soft materials that improve contact and environmental sensing, much like human skin.
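
A minimal sketch of how a depth camera's pixels become 3D geometry, assuming the standard pinhole camera model (the intrinsic parameters fx, fy, cx, cy are placeholders that would come from camera calibration):

```python
def depth_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a depth reading (meters) into a
    3D point in the camera frame, using the pinhole camera model.

    fx, fy: focal lengths in pixels; (cx, cy): principal point.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Applying this to every pixel of a depth image yields a point cloud, the raw material for the 3D environment maps mentioned above.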

In the same way, motors and other actuation devices have now reached a point where they can deliver forces comparable to those of the human body and mimic human motion adequately. Power and torque densities have been increasing, and innovations in gearbox design have reduced backlash.

Humanoid tech: software

Now, with the improvements in robotic hardware, many of the remaining open challenges facing humanoid robotics lie in the realm of software.

Some of the biggest open questions on the software side exist in the worlds of perception and motion planning, where there’s much progress being made but still a significant gap between the state of the art and something that resembles deployable robotic general intelligence. Like hardware, however, the software side is benefiting from improvements in existing technologies and great contributions from the research community.

For example, researchers today are applying novel machine-learning and artificial-intelligence techniques to understand and adapt to environments in new ways. Machine learning represents a paradigm shift in robotics because it provides a tool to ingest all sorts of sensory data without necessarily requiring a principled, hand-crafted approach to using that data. Reinforcement learning, for example, enables a robotic system to be trained in an environment with a reward signal, so the robot doesn't need to be taught explicitly how to achieve its objective.
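
To make the reward idea concrete, the toy tabular Q-learning example below (vastly simpler than anything used to train a real humanoid) shows an agent learning to walk down a corridor from a goal reward alone, without ever being told which direction to move:

```python
import random

def greedy_action(qs):
    # Break ties randomly so an untrained agent explores the corridor.
    if qs[0] == qs[1]:
        return random.randrange(2)
    return 0 if qs[0] > qs[1] else 1

def q_learning_corridor(n=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a 1-D corridor of n states.

    The agent starts at state 0 and receives a reward of +1 only upon
    reaching state n-1. Actions: 0 = step left, 1 = step right.
    """
    q = [[0.0, 0.0] for _ in range(n)]  # Q[state][action] value estimates
    for _ in range(episodes):
        s = 0
        while s != n - 1:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            a = random.randrange(2) if random.random() < eps else greedy_action(q[s])
            s_next = max(0, s - 1) if a == 0 else min(n - 1, s + 1)
            reward = 1.0 if s_next == n - 1 else 0.0
            # Standard Q-learning update toward the bootstrapped target.
            q[s][a] += alpha * (reward + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q
```

After training, the learned values favor "step right" in every state: the reward at the goal has propagated backward through the table, and the agent discovered the policy on its own.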

Machine learning is doing a lot for the development of humanoid robots, and this trend is widely expected to be a driving force in the field for years to come.

One of the greatest challenges in the field of robotics is the pursuit of humanoid robots.

Designing systems that can match the performance of the human body’s mechanical, sensory, and control systems is a daunting challenge, yet improvements in both hardware and software are slowly bridging the gap between what we desire and what we can achieve.

In the future, continued improvements in the field may finally bring a day when humanoid robots are a widespread reality.


This article was originally published on EE Times.

Andrew Beaulieu is a manager and senior robotics engineer at Toyota Research Institute.

