Are you hooked on the Apple TV+ series Sunny? We are. It’s a dramatic cultural commentary on humanlike robotic systems, wrapped in a narrative about a mysterious tragedy. It stars Rashida Jones and, in the title role, Sunny the robot.
While past shows that featured robots were set solidly in the science fiction realm, this one … well … it just isn’t. We don’t have to willingly suspend disbelief to buy into this show. We just have to understand that actual humanoid robots and robotic tools are advancing rapidly, and that with the emergence of large language model (LLM) technologies, they’re closer than ever to what we’ve imagined robots could be.
In the real world, we believe humanoid robots represent one of the most exciting emerging applications of generative AI (gen AI). Specifically, we see their potential to break through some of the most persistent and longstanding barriers to inclusion faced by people with disabilities and/or neurodiversity, sooner rather than later. That’s going to be game-changing—for individuals, for employers, and for society.
Current offerings are no Sunny, but they can already provide physical assistance, support for social interactions, and cognitive aid. Physically, for example, they can help individuals navigate work or home environments with ease (holding doors or assisting with transfers from a wheelchair to a bed). They can serve as conversation-practice partners for neurodiverse people, people with anxiety, or people with hearing loss, running increasingly responsive role-play scenarios in a private, low-stress environment. They can also manage schedules, guide stress-relief exercises, or act as companions to alleviate loneliness. And the more they’re able to learn, the more they will improve.
But there’s still a serious gap between these new technologies’ very real promise and their ability to deliver. Accenture’s 2024 Technology Vision survey includes 5,042 responses from people who identify as having a disability or being neurodiverse. Among these respondents, 39 percent feel frustrated by technology’s inability to accurately understand their intentions, compared with 28 percent of non-disabled individuals. About half of people with disabilities (49 percent) believe that technology places too much responsibility on them to adapt, rather than adapting to their specific requirements. This is the case even though more than half (52 percent) see the potential of generative AI to enhance their performance in areas like creativity, relationship building, and idea generation.
The root issue (and an ironic one) is a lack of inclusivity in the development of existing and emerging AI technologies.
It’s not too late to course-correct. If ever there were a time to take the disability community’s mantra, “Nothing about us without us,” to heart, now is that time. Inclusive research and design matter. One of us, coauthor Laurie Henneborn, notes that as a member of this community and as a business executive, she can’t emphasize this message enough.
For gen AI-powered tools to do what we expect and want them to do, they need to have flexible, customizable interaction capabilities. To have that, they need to reflect input from the broadest possible scope of humanity.
Inclusivity as an Attribute, Not an Add-On
This means that designers should treat inclusivity as an essential design attribute, rather than as an after-the-fact modification. It needs to come in the form of built-in features that make it possible (or easier) for people with a variety of apparent and non-apparent disabilities, as well as neurodiverse individuals, to get from “A” to “B,” whether that journey is a physical distance, a communication, or a desired outcome from a function on a laptop.
Translated into practice, it means asking these individuals to bring their experiences, insights, and perspectives to bear when developing and scoping applications of new technologies. In doing so, the design process de facto recognizes potential exclusion, learns from diversity, and solves for specific needs before extending solutions. By default, it sidesteps the trap of technoableism, a term that refers to the normative assumptions about ability that often guide technological design processes and implementation.
Google’s Project Euphonia offers an example of just this sort of inclusive development. It is a pioneering effort aimed at enhancing speech recognition for people with speech impairments, with an eye toward increasing both their ability to communicate and their independence.
Here’s how the initiative is set up: Euphonia is open to anyone who wants to offer input. It asks participants to record as many as 300 phrases on their own time. It then uses the recordings to train its speech recognition models to recognize an increasingly wide range of pronunciations and cadences. Although the primary data collection program is focused on English, at this writing, Project Euphonia has also started pilot programs in French, Hindi, Japanese, and Spanish.
Here’s one example of how Euphonia is making a positive difference: It is helping former NFL player Tim Shaw, who has amyotrophic lateral sclerosis (ALS, also known as Lou Gehrig’s disease), regain his ability to communicate with others. It can interpret his directions and cue tasks accordingly. It can also “read back” what he tells it in a voice that is strikingly similar to the voice he once had. (That tailored solution is possible in part because there were many examples of his voice recorded during his years as a professional football player.)
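To make the approach concrete, here is a minimal sketch of what this kind of personalization can look like: fine-tuning an open speech recognition model on one participant’s recorded phrases. This is an illustration only, not Project Euphonia’s actual pipeline; the checkpoint, file names, and hyperparameters are all assumptions for the sake of the example.

```python
# Minimal sketch: adapting a speech-recognition model to one person's
# voice, using phrases they recorded. Illustrative only -- this is not
# Project Euphonia's actual pipeline; the checkpoint, file paths, and
# hyperparameters are assumptions.
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

CHECKPOINT = "facebook/wav2vec2-base-960h"  # a generic open model

processor = Wav2Vec2Processor.from_pretrained(CHECKPOINT)
model = Wav2Vec2ForCTC.from_pretrained(CHECKPOINT)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical participant data: (audio file, phrase read aloud).
recordings = [
    ("phrase_001.wav", "turn on the kitchen lights"),
    ("phrase_002.wav", "call my physical therapist"),
    # ... up to a few hundred phrases, recorded at the speaker's own pace
]

model.train()
for wav_path, text in recordings:
    audio, sample_rate = sf.read(wav_path)  # expects 16 kHz mono audio
    inputs = processor(audio, sampling_rate=sample_rate, return_tensors="pt")
    # This checkpoint's vocabulary is uppercase letters, so normalize.
    labels = processor.tokenizer(text.upper(), return_tensors="pt").input_ids

    # Wav2Vec2ForCTC computes the CTC loss when labels are supplied.
    loss = model(inputs.input_values, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The training loop itself is routine; what matters is whose voices are in the data. The wider the range of pronunciations and cadences recorded, the wider the range of speakers the resulting model can serve.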
Imagine the potential for this application in humanoid robots in educational settings for neurodiverse students, where robots are already assisting in teaching social skills and providing consistent, reliable interactions. One possible scenario: a humanoid robot that learns individual students’ speech patterns and can quickly and accurately transcribe their answers to test questions, or help them write an essay if they have difficulty using a keyboard.
Or picture this application supporting a variety of roles in health care, interpreting different accents and dialects and translating in real time to make it easier for, say, first responders to understand patients. Already, humanoid robots are dispensing medications and assisting with physical rehabilitation activities. They’re also working in a growing number of customer service roles, where they handle basic inquiries and guide visitors in large public spaces like airports and museums. Better speech recognition and generation could multiply their efficacy, extending these services to more people with different speech patterns and communication needs.
Broadening the lens beyond this sort of interaction, we’re also seeing gen AI accelerate the evolution of humanoid robots themselves. Because gen AI-enabled humanoid robots learn from diverse datasets, they’re increasingly able to adapt to varied environments and tasks, improving their effectiveness in the shifting situations of the real world. In the same vein, gen AI is facilitating the creation of virtual environments for thorough testing and refinement of robotic behaviors. It’s also supporting personalization, improving a robot’s ability to be increasingly relevant to an individual’s own circumstances.
The key will be to close the gap between making an experience inclusive and designing an inclusive experience.
In the past, people generally framed robots as being apart, as distinct entities created to help people. Now, we can begin to consider them more as an extension of a person. If someone cannot walk, or finds it challenging in other ways to appear in person, perhaps they can teleoperate a humanoid robot to carry out activities (think: inspecting a factory floor). Robots will increasingly be able to serve as people’s legs, hands, eyes, and voices.
Make It Happen Now
There are many avenues to transformative inclusion.
For example, if your organization is developing generative AI robotics, turn first to some established and respected guidelines, such as those offered by the Partnership on Employment and Accessible Technology (PEAT). PEAT’s “Inclusive Design Principles” are, as the site says, “not a set of ‘how-to’s, but rather a framework that can be used alongside established accessibility guidelines while developing products to move beyond compliance.”
Consider these guidelines, along with responsible AI guardrails, even if your organization isn’t designing robots but is procuring them instead. Ask a diverse group of employees to test the robots, perhaps against PEAT’s design principles, and let you know what they think. Try before you buy, and see what can be done to improve accessibility. And we recommend that organizations establish guiding principles for responsible AI use that cover topics like transparency, safety, security, and fairness.
If you can invest in emerging gen AI innovations, support those that make inclusion a non-negotiable part of design. Look for early-stage applications, and ask about the organization’s policies on responsible design. Have explicit conversations with a potential grantee with an eye toward aligning on a set of design principles that suits all parties. Support those who are identifying and breaking barriers. For just one quick example, consider the recently developed robotic sensor that incorporates artificial intelligence techniques to read braille about twice as fast as most people can. A tool like this could help sighted people who don’t read braille understand what’s written in it, making it easier for people with vision-related disabilities to participate fully in large, shared environments.
And personally? First, encourage discourse. Three years ago, Laurie wrote in the Harvard Business Review about how important it is to make it safe for employees to disclose their disabilities at work (in part by following the example of leaders who disclose their own challenges). The safer people feel about letting their colleagues and bosses know about their challenges, the better leaders will understand what’s needed to help. In this case, that means making sure their voices are heard by those designing or procuring generative AI tools like humanoid robots.
Second, remember that humanoid robots have limits, and when you use them yourself, use them accordingly. For example, if you are neurodiverse and you’re asking an AI tool questions about your treatment plans, take the information you receive as a conversation starter. As one colleague of ours noted: “AI is there to help you cross new boundaries, but you don’t go it alone … you take the information back to your health team and discuss what you’re learning. As a brain injury patient, sometimes things don’t make sense [to me] or my comprehension of the data isn’t quite right. As I learn the limits of the AI, I learn when to take questions to my health team.”
Also, remember that your feedback matters. Your impressions of your interactions with humanoid robots should be important to the organization offering these tools; they should be taken to heart. If a feedback channel isn’t formally available, raise the issue.
People use technology to overcome limitations and do more; we always have. What we haven’t always done is seek to overcome limitations and do more with the explicit intent to level playing fields at scale for neurodiverse individuals and people with disabilities. Let’s do that now.