D’hotman has autism and attention-deficit hyperactivity disorder (ADHD), which can make relating to others exhausting and challenging. However, since 2022, D’hotman has been a regular user of ChatGPT, the popular AI-powered chatbot from OpenAI, relying on it to overcome communication barriers at work and in her personal life.
“I know it’s a machine,” she says. “But sometimes, honestly, it’s the most empathetic voice in my life.”
Neurodivergent people — including those with autism, ADHD, dyslexia and other conditions — can experience the world differently from the neurotypical norm. Talking to a colleague, or even texting a friend, can entail misread signals, misunderstood tone and unintended impressions.
AI-powered chatbots have emerged as an unlikely ally, helping people navigate social encounters with real-time guidance. Although this new technology is not without risks — in particular, some worry about over-reliance — many neurodivergent users now see it as a lifeline.
How does it work in practice? For D’hotman, ChatGPT acts as an editor, translator and confidant. Before using the technology, she says, communicating in neurotypical spaces was difficult. She recalls once sending her boss, at their request, a bulleted list of ways to improve the company. But what she took to be a straightforward response was received as overly blunt, and even rude.
With a little help from a chatbot
Now, she regularly runs things by ChatGPT, asking the chatbot to consider the tone and context of her conversations. Sometimes she’ll instruct it to take on the role of a psychologist or therapist, asking for help to navigate scenarios as sensitive as a misunderstanding with her best friend. She once uploaded months of messages between them, prompting the chatbot to help her see what she might have otherwise missed. Unlike humans, D’hotman says, the chatbot is positive and non-judgmental.
That’s a feeling other neurodivergent people can relate to. Sarah Rickwood, a senior project manager in the sales training industry, based in Kent, England, has ADHD and autism. Rickwood says she has ideas that run away with her and often loses people in conversations. “I don’t do myself justice,” she says, noting that ChatGPT has “allowed me to do a lot more with my brain”. With its help, she can put together e-mails and business cases more clearly.
The use of AI-powered tools is surging. A January study conducted by Google and the polling firm Ipsos found that AI usage globally has jumped 48%, with excitement about the technology’s practical benefits now exceeding concerns over its potentially adverse effects. In February, OpenAI said that its weekly active users surpassed 400 million, of which at least two million are paying business users.
But for neurodivergent users, these aren’t just tools of convenience — and some AI-powered chatbots are now being created with the neurodivergent community in mind.
“Wow … that’s a unique shirt,” he recalls saying about his wife’s outfit one day, without realising how his comment might be perceived. She asked him to run the comment through NeuroTranslator, which helped him recognise that, without a positive affirmation, remarks about a person’s appearance could come across as criticism.
“The emotional baggage that comes along with those situations would just disappear within minutes,” he says of using the app.
Since its launch in September, Daniel says NeuroTranslator has attracted more than 200 paid subscribers. An earlier web version of the app, called Autistic Translator, amassed 500 monthly paid subscribers.
As transformative as this technology has become, some warn against becoming too dependent. The ability to get results on demand can be “very seductive”, says Larissa Suzuki, a London-based computer scientist and visiting Nasa researcher who is herself neurodivergent.
Overreliance could be harmful if it inhibits neurodivergent users’ ability to function without it, or if the technology itself becomes unreliable — as is already the case with many AI search engine results, according to a recent study from the Columbia Journalism Review. “If AI starts screwing up things and getting things wrong,” Suzuki says, “people might give up on technology, and on themselves.”
Risk
Baring your soul to an AI chatbot does carry risk, agrees Gianluca Mauro, an AI adviser and co-author of Zero to AI. “The objective of AI models like ChatGPT is to satisfy the user,” he says, raising questions about their willingness to offer critical advice. Unlike therapists, these tools aren’t bound by ethical codes or professional guidelines. If AI has the potential to become addictive, Mauro adds, regulation should follow.
A recent study by Carnegie Mellon and Microsoft (which is a key investor in OpenAI) suggests that long-term overdependence on generative AI tools can undermine users’ critical-thinking skills and leave them ill-equipped to manage without it. “While AI can improve efficiency,” the researchers wrote, “it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI.”
While Dr Melanie Katzman, a clinical psychologist and expert in human behaviour, recognises the benefits of AI for neurodivergent people, she does see downsides, such as giving patients an excuse not to engage with others. A therapist will push their patient to try different things outside of their comfort zone. “I think it’s harder for your AI companion to push you,” she says.