I Helped Build a Robot That Teaches Sign Language to Children With Autism. This Is What I Learned.

A child learns sign language from a robot

Autism spectrum disorder (ASD) is one of the fastest-growing developmental disabilities in the world today. According to the World Health Organization, about 1 in 160 children worldwide has ASD. In the United States, the CDC estimates that 1 in 54 children is diagnosed with the disorder.

ASD is characterized by difficulties with social interaction and communication, along with restricted and repetitive patterns of behavior. Communication challenges are among its core features: an estimated 30-50% of individuals with autism never develop functional speech.

For these individuals, augmentative and alternative communication (AAC) methods can be crucial. Sign language is one of the most widely used forms of AAC. Studies have shown that learning sign language can lead to gains in expressive communication, speech development, and social interaction for children with ASD.

However, learning to sign is not easy. It requires intensive one-on-one instruction, frequent repetition, and a supportive learning environment. These needs can strain the time and resources of therapists and caregivers. Inconsistencies in teaching methods and emotional fatigue can also be barriers. As a result, many children do not receive adequate sign language instruction.

In recent years, social robots have emerged as promising tools to supplement autism interventions. Robots offer several advantages:

  1. They can perform repetitive tasks tirelessly and consistently
  2. Their appearance and behavior can be designed to be appealing and non-threatening
  3. They can be programmed to adapt to individual needs and skill levels
  4. They can collect detailed performance data to track progress

A 2019 meta-analysis in the journal Frontiers in Robotics and AI reviewed 27 studies on the use of social robots for children with ASD. The authors concluded that "robots are effective in eliciting desired target behaviors in children with ASD, including increased attention, interaction, and communication."

Inspired by this potential, our team at Futurice set out to create a robot that could help children with autism learn sign language. Our goal was to develop an engaging, adaptive, and data-driven learning companion that could supplement existing therapies.

Designing an Autism-Friendly Learning Robot

We began by adapting an InMoov humanoid robot that had been built by one of our teams. The InMoov is an open-source design that is freely available online. Its human-like appearance and dexterous hands made it well-suited for teaching sign language.

However, we knew that significant modifications would be needed to make the robot appropriate and effective for children with autism. We worked closely with autism specialists from a local healthcare district to develop a set of design guidelines:

  1. Simplicity: Minimize distracting details and focus on essential features
  2. Consistency: Provide a structured and predictable interaction flow
  3. Positive reinforcement: Offer frequent encouragement and rewards for progress
  4. Modularity: Allow for adjustment of complexity based on individual ability
  5. Customization: Adapt to each child's unique preferences and sensitivities (a sketch of such a profile follows this list)
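To make the last two guidelines concrete, here is a minimal sketch of how a per-child profile might parameterize a session. The field names and defaults are illustrative assumptions, not our actual data model:

```python
# Illustrative per-child profile: modularity and customization as data.
# All fields and defaults are assumptions for the sake of the sketch.
from dataclasses import dataclass, field

@dataclass
class ChildProfile:
    child_id: str
    preferred_signs: list = field(default_factory=list)  # signs to revisit first
    max_session_minutes: int = 20       # modularity: shorter sessions if needed
    speech_volume: float = 0.6          # 0.0-1.0; some children prefer a quieter voice
    use_lights: bool = True             # attention-directing LED cues on/off
    use_symbols: bool = True            # chest-display symbol prompts on/off
    sign_history: dict = field(default_factory=dict)  # sign -> list of attempt outcomes

    def record_attempt(self, sign: str, correct: bool) -> None:
        """Append one imitation attempt so progress can be tracked over time."""
        self.sign_history.setdefault(sign, []).append(correct)
```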

Based on these principles, we redesigned the robot's appearance, behavior, and learning environment over a period of 4.5 months. Key modifications included:

  • Simplified hands: We replaced the original hands with a streamlined design optimized for signing
  • Attention-directing lights: LED strips were added to the arms to provide visual cues (a sketch of this cue follows the figure below)
  • Integrated display: A screen was embedded in the robot's chest to show symbols and text prompts
  • Modulated voice: The speech synthesis was adjusted to be calmer and clearer
  • Remote operation: An adjacent room allowed therapists to control the robot discreetly


The modified InMoov robot with optimized hands, light strips, chest display, and cameras.
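As an illustration of the attention-directing lights, the following sketch sweeps a light pulse from the shoulder toward the hand just before a sign is demonstrated. It assumes WS281x-style strips driven from the Raspberry Pi via the rpi_ws281x library; the pin number, LED count, and color are placeholders, not our exact wiring:

```python
# Hedged sketch of an attention-directing light cue (assumes WS281x strips
# on a Raspberry Pi; requires hardware and typically root privileges).
import time
from rpi_ws281x import PixelStrip, Color

NUM_LEDS = 30   # LEDs along one arm (assumption)
DATA_PIN = 18   # GPIO pin with PWM support (assumption)

strip = PixelStrip(NUM_LEDS, DATA_PIN)
strip.begin()

def sweep_toward_hand(delay: float = 0.03) -> None:
    """Light each LED in turn from shoulder to hand, then clear the strip."""
    for i in range(NUM_LEDS):
        strip.setPixelColor(i, Color(0, 80, 160))  # calm blue, low intensity
        strip.show()
        time.sleep(delay)
    for i in range(NUM_LEDS):
        strip.setPixelColor(i, Color(0, 0, 0))
    strip.show()
```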

Under the hood, the robot was powered by a Raspberry Pi 4 Model B single-board computer. Robot Operating System (ROS) was used to control the robot's movements and behaviors. The hand servos were controlled by an Arduino Mega board.
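As a rough sketch of how this stack fits together, the following ROS node plays back a stored sign as a timed sequence of joint targets. The topic name, joint names, and angles are illustrative assumptions; in a setup like ours, a separate bridge node would translate these targets into serial commands for the Arduino driving the hand servos:

```python
# Minimal ROS (rospy) sketch: play back one sign as timed joint keyframes.
import rospy
from sensor_msgs.msg import JointState

# One sign stored as keyframes: (seconds from start, {joint: radians}).
# Joint names and angles are placeholders for illustration.
SIGN_HELLO = [
    (0.0, {"r_shoulder": 1.2, "r_elbow": 0.4, "r_wrist": 0.0}),
    (0.8, {"r_wrist": 0.6}),
    (1.6, {"r_wrist": -0.6}),
    (2.4, {"r_shoulder": 0.0, "r_elbow": 0.0, "r_wrist": 0.0}),
]

def play_sign(pub, keyframes):
    """Publish each keyframe at its scheduled offset from the start time."""
    start = rospy.get_time()
    for offset, targets in keyframes:
        rospy.sleep(max(0.0, start + offset - rospy.get_time()))
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = list(targets)
        msg.position = [targets[name] for name in msg.name]
        pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("sign_player")
    # Hypothetical topic; a bridge node would forward these to the Arduino.
    pub = rospy.Publisher("/sign_tutor/joint_targets", JointState, queue_size=10)
    rospy.sleep(1.0)  # allow subscribers to connect before publishing
    play_sign(pub, SIGN_HELLO)
```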

Computer vision techniques, including facial recognition and motion tracking, allowed the robot to detect and respond to the child's actions. Personalized profiles were used to store each child's progress and preferences.
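For instance, a simple proxy for visual attention can be computed with OpenCV's bundled frontal-face detector, counting the fraction of frames in which the child's face is oriented toward the robot's camera. This is an illustrative sketch, not our production pipeline:

```python
# Rough visual-attention proxy using OpenCV's bundled Haar cascade:
# a frontal face in frame suggests the child is facing the robot.
import cv2

def attention_ratio(video_path: str) -> float:
    """Return the fraction of frames in which a frontal face is detected."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    frames, attended = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            attended += 1
    cap.release()
    return attended / frames if frames else 0.0
```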

Throughout the development process, we had to balance the clinical requirements with the technical constraints. Even small changes, like adjusting the volume, could involve significant re-engineering. Clear communication and rapid iteration were essential to finding viable solutions.

Ethical considerations also had to be addressed at every turn, such as:

  • Ensuring physical safety through rigorous testing and fail-safes
  • Securing sensitive personal data through encryption and access controls
  • Fostering natural and appropriate social behaviors through careful scripting
  • Providing transparency about the robot‘s capabilities and limitations
  • Designing an inclusive experience for diverse needs and backgrounds

By proactively navigating these concerns, we aimed to create a beneficial tool that would respect the rights and dignity of a vulnerable population.

Putting the Robot to the Test

After months of development, we were ready to pilot the robot with actual users. We conducted a study with 10 children, aged 6-12, who had been diagnosed with ASD. Each child took part in a single 15- to 30-minute session with the robot, guided by a speech therapist and observed by their caregiver.

In each session, the robot demonstrated nine basic signs, such as "hello", "goodbye", "more", and "all done". The signs were introduced in three blocks:

  1. Signs only
  2. Signs with lights
  3. Signs with symbols

We measured three key outcomes:

  1. Imitation accuracy (manually coded from video)
  2. Visual attention (automatically tracked by camera)
  3. Engagement and enjoyment (rated by therapist and caregiver)

Two of our team members monitored the sessions from the control room, making real-time adjustments as needed. While each session followed a consistent sequence, the pacing and prompts were adapted to each child's needs.

The results were promising. Seven of the 10 children successfully imitated at least one sign. On average, the children visually attended to the robot for 70% of the session. Ratings of engagement and enjoyment were high, with a few exceptions.

Measure              Mean    Range
Imitation accuracy   31.5%   0-78%
Visual attention     70.3%   45-88%
Engagement (1-5)     3.9     2-5
Enjoyment (1-5)      4.1     3-5

Pilot study results (N=10). Imitation accuracy = % of signs correctly imitated. Visual attention = % of time visually focused on robot. Engagement and enjoyment rated on 1-5 scale by therapist and caregiver.

Feedback from the therapist and caregivers was also informative. The robot's signing was generally seen as clear and well-executed. The lights and symbols were perceived as helpful learning aids. However, some noted that the robot could be initially intimidating and suggested a more welcoming startup sequence.

Perhaps the most striking finding was the diversity of the children's individual responses. Some were immediately captivated, while others needed time to warm up. Some quickly picked up the signs, while others struggled with certain motor patterns. A few children even wanted to touch or hug the robot.

This wide range of reactions underscored the importance of flexibility and customization. What works for one child may not work for another. The robot's ability to adapt on the fly is thus one of its most crucial features.

The Future of Robotic Sign Language Tutors

Our initial study demonstrated the feasibility and promise of robot-assisted sign language instruction for children with ASD. At the same time, it opened up many avenues for further exploration and improvement:

  • Assessing comprehension: Can children understand and use the signs in context?
  • Long-term learning: How much can children learn over extended periods of use?
  • Spontaneous signing: Can the robot encourage children to initiate signing on their own?
  • Generalization: Do the skills transfer to human interaction partners?
  • Home use: Can the robot be used effectively in the home environment?

It is important to emphasize that the robot is not meant to replace human therapists. Rather, it is a tool to augment and enrich their work. By handling repetitive drills and basic instruction, the robot could free up therapists to focus on more complex skills and personal connections.

In the long run, technologies like this could expand access to specialized interventions. A child could potentially practice with the robot every day, not just during weekly therapy sessions. Tele-operation could allow therapists to work with children remotely. Automatic data tracking could provide rich insights into each child's unique learning patterns.

However, realizing this vision will require ongoing research and responsible innovation. As we continue to develop these technologies, we must prioritize the voices and values of the autism community. Individuals with ASD and their families should be active partners in the design process.

We must also remain mindful of potential risks and limitations. Social robots should enhance human connections, not replace them. Technology-based interventions should be held to the same standards of safety, efficacy, and ethics as traditional therapies. Equitable access and culturally competent design should be explicit goals.

As a technologist working in this space, I am inspired by the potential for inclusive innovation. By bringing together the best of clinical expertise, human-centered design, and cutting-edge engineering, we can create a future where every individual can learn, communicate, and thrive.
