Why I Would Like to Build an AI Wellbeing Assistant (and Why Psychologists Should Help)

I am a consultant psychologist and therapist specialising in adult neurodevelopmental conditions. Over many years of practice, I’ve seen the same pattern: people do not only need therapy; they need practical, everyday scaffolding that meets them where they are, in the middle of busy, unpredictable lives. They
need tools that adapt to their ways of thinking, sensing and organising, without pathologising those differences. And they need those tools to be available when the world is not: between sessions, at odd hours, when a task suddenly grows complicated, or when a small step becomes hard to begin.

That is why I would like to develop an AI wellbeing assistant that is fine-tuned to adapt to the needs of people with neurodevelopmental conditions—and still useful to anyone. It should not try to be a therapist, and it is not a replacement for clinical care. It should be a practical assistant designed to help people start, continue and complete the routines, reflections and communications that support wellbeing. In this post I explain why psychologists should be involved in building systems like this, why such systems cannot be therapists, and where they can play a helpful role in the lives of neurodivergent adults.

The gap I want to close

Services are stretched. Waiting lists are long. Appointments are brief. Even when care is available, there is a large space between those touchpoints where people are left to navigate on their own. That space is where many important things happen: preparing for a meeting with a GP; breaking down an administrative task; noticing a pattern in sleep or focus; drafting a message that needs to be clear and respectful of one’s own needs; deciding what is “enough” for today.

For many neurodivergent adults, the standard advice to “just plan better,” “use a calendar,” or “try a different app” is not helpful. The issue is rarely a lack of effort. It is often a mismatch between how tools are built and how a person’s attention, motivation, sensory experience, and tolerance for uncertainty work. A supportive tool needs to be consistent, literal when asked, flexible when needed, and predictable in the way it delivers structure.

An AI assistant, when designed carefully, could help close this gap by offering:

  • Clear, stepwise guidance that respects the user’s preferred level of detail.
  • Predictable structure that reduces surprise and keeps a steady pace.
  • Flexible, plain-language explanations that avoid ambiguity.
  • The ability to adapt to the person’s patterns over time.

This is not about replacing human help. It is about offering steady, well-designed prompts and plans that can sit in the background of daily life and become available the moment they are needed.

What an AI wellbeing assistant is—and is not

By “AI wellbeing assistant,” I mean a conversational tool that uses a large language model to generate and adjust text-based support: one that can help organise tasks, shape routines, prepare communication, prompt reflection, and translate complex information into clearer steps. It should remember user-stated preferences within a session and follow a chosen structure (“always give me three steps, then ask if I want more”). It should be able to follow instructions to avoid figurative language and to present information in checklists, timelines, or scripts, depending on the user’s preference.
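
To make this concrete, here is a minimal sketch of how session preferences might be captured and turned into plain instructions for a language model. Everything here, the `SessionPreferences` fields and the `build_system_prompt` helper, is an illustrative assumption, not a description of any existing product.

```python
from dataclasses import dataclass

@dataclass
class SessionPreferences:
    """User-stated preferences held for the current session (illustrative)."""
    steps_per_reply: int = 3             # "always give me three steps..."
    ask_before_more: bool = True         # "...then ask if I want more"
    literal_language: bool = False       # avoid metaphors and figurative speech
    preferred_format: str = "checklist"  # "checklist", "timeline", or "script"

def build_system_prompt(prefs: SessionPreferences) -> str:
    """Translate the preferences into instructions prepended to each request."""
    lines = [
        f"Give exactly {prefs.steps_per_reply} steps per reply.",
        f"Present information as a {prefs.preferred_format}.",
    ]
    if prefs.ask_before_more:
        lines.append("After the steps, ask whether the user wants more detail.")
    if prefs.literal_language:
        lines.append("Use literal, plain language; avoid metaphors and idioms.")
    return "\n".join(lines)

# Example: a user who wants three literal steps presented as a checklist
print(build_system_prompt(SessionPreferences(literal_language=True)))
```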

However, we must remember that an AI wellbeing assistant is not a person. It does not form a therapeutic relationship. It does not hold professional responsibility. It does not see your context the way a clinician does. It cannot ethically diagnose, treat, or offer crisis care. It is a tool—one that should be measured by whether it helps people move through their days with greater clarity, steadiness and control.

Why psychologists should help build these systems

Psychologists bring a particular kind of know-how that is crucial here. We are trained to notice patterns, to test assumptions, and to design interventions that are clear, proportionate, and humane. In the context of an AI wellbeing assistant, that translates into several responsibilities.

1) Designing for real needs, not for novelty
New technology often begins with what it can do rather than what people actually need. Psychologists who work with neurodivergent adults are close to the practical tasks that make the most difference: deciding on a starting point, keeping a manageable pace, building rest into plans, setting limits that protect energy, and maintaining a sense of agency. When psychologists are at the table, the assistant is
more likely to prioritise these foundations over shiny features.

2) Reducing cognitive load
Small changes in how information is presented can have large effects. Clear headings, consistent layouts, literal instructions on request, and visible “next steps” make tasks less taxing. Psychologists are used to shaping information so it can be processed without unnecessary strain. That design discipline should be baked into an assistant from the start.

3) Respecting differences without pathologising
An assistant that constantly pushes a single “right way” to work will undermine the very people it is supposed to support. Psychologists can help the system offer options without judgement: time-boxed bursts or gentle pacing; text or checklists; quiet prompts or explicit countdowns; short summaries or deeper explanations. The point is not to normalise anyone—it is to help the person do what matters to them in a way that fits their style.

4) Setting safe and clear boundaries
A responsible assistant must know what is outside its scope. Psychologists understand when a prompt is drifting into treatment territory, when risk needs human attention, and when the best contribution is to slow down and recommend contact with a clinician or a trusted person. These boundaries protect users and keep the tool honest about what it can and cannot do.

5) Using evidence without turning the tool into a textbook
People need practical help, not lectures. Psychologists can translate well-supported methods (for example, step-wise planning, problem-solving approaches, sleep-supporting routines, or values-based goal-setting) into short, actionable prompts. The skill lies in offering enough structure to be useful while keeping the tone plain and non-directive.

6) Measuring what actually matters
We should evaluate the assistant by outcomes that people care about: fewer dropped steps, clearer communication, more prepared appointments, steadier routines, and a greater sense of control. Psychologists can design simple, low-burden ways to check whether those outcomes are improving and adjust the assistant accordingly.

Why an AI assistant cannot be a therapist

The word “therapist” carries specific meanings: training, supervision, accountability, and a human relationship that supports change. No matter how advanced a model becomes, several features of therapy are not replicable by a tool.

Human relationship and responsibility

Therapy is not just information; it is a relationship grounded in trust, presence and accountability. A therapist notices subtleties in tone, timing and pauses. They adjust to the person in front of them, not only to the words that person uses. They are also responsible for the decisions they make and the guidance they offer. An AI system cannot carry that responsibility or offer that kind of attuned presence.

Context and judgement

Therapists hold the wider picture: history, personal commitments, support networks, and the many pressures that shape a life. They help make sense of patterns over time and across contexts. An assistant can work with what is shared in a conversation, but it does not bear the ethical duty of understanding a person’s full story or of coordinating with other supports. It should not attempt to.

Risk and crisis

When risk emerges, therapists know how to respond, who to contact, and how to act in the person’s best interests. An assistant should never be used for emergency care or for decisions that require human safeguarding. Its role is to encourage steady routines and clearer plans, not to substitute for urgent support.

Boundaries and scope

Assessment, diagnosis, and therapy belong in human hands. An assistant can help someone prepare for an assessment, keep records of questions they want to ask, or turn a plan from therapy into daily steps. But it must not present itself as providing therapy. Clear boundaries are protective, not restrictive.

In short, an AI assistant can be useful precisely because it is not trying to be a therapist. It focuses on the practical layer of daily life where small steps add up, and it does so without claiming clinical authority.

Design principles that matter for neurodivergent adults

The assistant I’m building follows principles shaped by years of listening to what helps. These are not gimmicks; they are basics done thoroughly.

Predictable structure

Consistency lowers effort. The assistant uses stable templates: aim, steps, time estimate, obstacles, and next review point. The same structure appears each time unless the user requests a change. Predictability reduces the energy cost of starting.
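
As an illustration only, the stable template described above might be represented like this; the field names and rendering are assumptions made for the sketch, not a finished design.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PlanTemplate:
    """The same fields, in the same order, every time (illustrative)."""
    aim: str
    steps: List[str]
    time_estimate_minutes: int
    obstacles: List[str]
    next_review: str

def render(plan: PlanTemplate) -> str:
    """Render a plan in a fixed, predictable layout."""
    out = [f"Aim: {plan.aim}"]
    out += [f"  {i}. {step}" for i, step in enumerate(plan.steps, start=1)]
    out.append(f"Time estimate: {plan.time_estimate_minutes} minutes")
    out.append("Possible obstacles: " + "; ".join(plan.obstacles))
    out.append(f"Next review: {plan.next_review}")
    return "\n".join(out)

print(render(PlanTemplate(
    aim="Prepare questions for the GP appointment",
    steps=["Open a blank note", "List three concerns", "Mark the top priority"],
    time_estimate_minutes=15,
    obstacles=["interruptions", "fatigue after lunch"],
    next_review="tomorrow morning",
)))
```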

Clarity by default, depth on demand

The assistant begins with clear, succinct guidance. If more detail is desired, it can expand on request. This keeps the initial interaction focused and avoids overloading the user.

Literal, plain language

When asked, the assistant removes metaphors and ambiguity and gives concrete, testable steps. It can also translate dense instructions into straightforward checklists.

Adjustable pacing

People differ in how they like to move through tasks. The assistant can work in brief bursts with pauses, in single-task focus blocks, or in gentle, spaced steps with longer intervals. It adapts to the user’s chosen rhythm.

Energy-aware plans

Plans include buffers for transitions, rest, and recovery. The assistant does not reward over-extension; it helps set limits that protect energy and attention.

Gentle accountability

The assistant can ask whether a plan felt manageable, what helped, and what got in the way. It does so without judgement. The aim is to learn what works for this person and to refine the next plan accordingly.

Communication support without scripting people’s lives

The assistant can help draft emails or messages that are clear and to the point, and organise thoughts before appointments. It suggests neutral wording when requested, and it keeps the user’s own voice central.

Where an AI wellbeing assistant belongs in daily life

Because this tool is not a therapist, its value lies in the texture of everyday routines. Here are areas where it can make a steady difference without overreaching.

Preparing and planning

  • Turning intentions into a short sequence of steps with realistic time estimates.
  • Adding protective buffers around transitions (before and after a meeting, between errands).
  • Grouping related tasks to reduce context switching.
  • Making visible what is “enough” for today so that stopping can be a choice, not a doubt.

Getting started and keeping going

  • Offering a brief “first action” that lowers the barrier to beginning.
  • Running quiet, written “focus sessions” with timed checkpoints.
  • Tracking what helped and what hindered, so the next plan is sharper.

Communication and advocacy

  • Converting complex points into concise messages while preserving nuance.
  • Structuring questions for appointments so the important items are discussed.
  • Preparing explanations of needs in clear, respectful language that does not undercut the person’s autonomy.

Reflection and pattern-spotting

  • Keeping simple, low-effort logs of sleep, focus, overstimulation, and recovery that the user defines (see the sketch after this list).
  • Noticing patterns the user asks it to watch for (“Did shorter work blocks help?”) and providing summaries on request.
  • Encouraging values-aligned choices by linking tasks to what matters to the user, without moralising.
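
For the logging idea in particular, here is a minimal sketch of a user-defined log entry and an on-request summary. The fields and the 1-5 scale are assumptions chosen for illustration; a real tool would let the user define their own dimensions.

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, List

@dataclass
class DailyLog:
    """One low-effort entry; the user chooses what to track (illustrative)."""
    day: date
    ratings: Dict[str, int]  # e.g. {"sleep": 4, "focus": 2}, on a 1-5 scale

def summarise(logs: List[DailyLog], dimension: str) -> str:
    """Answer a question the user asked the assistant to watch for."""
    values = [log.ratings[dimension] for log in logs if dimension in log.ratings]
    if not values:
        return f"No entries recorded for '{dimension}' yet."
    return f"Average {dimension} over {len(values)} days: {sum(values) / len(values):.1f}/5"

logs = [
    DailyLog(date(2024, 5, 1), {"sleep": 4, "focus": 2}),
    DailyLog(date(2024, 5, 2), {"sleep": 3, "focus": 4}),
]
print(summarise(logs, "focus"))  # Average focus over 2 days: 3.0/5
```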

Learning and work

  • Translating instructions into stepwise checklists.
  • Clarifying terminology into plain language.
  • Providing alternate formats (brief summary, bullet points, or fuller explanation) depending on the moment.

Daily living and transitions

  • Building routines that include rest and sensory regulation.
  • Offering reminders that are descriptive rather than alarming.
  • Helping set boundaries around time and commitments.

None of this requires the assistant to be a therapist. It requires the assistant to be steady, clear, adaptable, and modest about its role.

Safeguards and honest limits

Any responsible system must be open about its limits.

  • No crisis use. The assistant is not designed for emergencies or urgent mental health concerns. In those situations, human help is essential (see the sketch after this list).
  • No diagnosis or treatment. It does not assess, diagnose, or treat conditions. It helps organise daily life and prepare for clinical conversations, if those are happening.
  • Fallibility is expected. All language models make mistakes. The assistant is designed to keep suggestions simple, check for understanding, and invite correction.
  • User control. The user chooses the level of structure, the tone, and the format. The assistant follows those settings and asks before changing them.
  • Conservative claims. The assistant sticks to practical planning, communication support, and reflection prompts. It does not make sweeping promises.

These limits are not obstacles; they are the basis for trust.
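
As a deliberately simple illustration of the “no crisis use” boundary, the sketch below shows the general shape of a scope check. The keyword list and signposting message are placeholder assumptions; a naive keyword match is nowhere near sufficient in practice, and any real system would need clinically validated detection, thorough testing, and human oversight.

```python
# Illustrative only: NOT a safe or sufficient crisis detector.
CRISIS_TERMS = ["suicide", "kill myself", "self-harm", "hurt myself"]  # placeholder list

SIGNPOST = (
    "I'm not able to help safely with this. Please contact someone you trust, "
    "your GP, a mental health professional, or a helpline such as the Samaritans."
)

def within_scope(message: str) -> bool:
    """Return False when the message should be routed to human support."""
    lowered = message.lower()
    return not any(term in lowered for term in CRISIS_TERMS)

def respond(message: str) -> str:
    if not within_scope(message):
        return SIGNPOST  # step back and signpost to human help
    return "Let's break this into small steps."  # normal planning path

print(respond("Help me plan tomorrow morning"))
```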

How an AI wellbeing assistant could be built effectively

A tool is only as good as the process that shapes it. The development approach should be centred on co-design with neurodivergent adults. That includes:

  • Iterative testing of prompts, formats and pacing until the interaction feels calm and useful.
  • Removing ambiguity unless the user explicitly wants brainstorming.
  • Keeping the default outputs brief and structured, with optional depth.
  • Ensuring the assistant can acknowledge uncertainty and ask clarifying questions without derailing the user’s momentum.
  • Measuring outcomes people care about: fewer abandoned tasks, clearer communication, steadier days, and a stronger sense of agency.

The aim should be to produce an assistant that feels like a clear-headed colleague who respects your way of working and adjusts to it.

What success looks like

Success is practical and observable. People should notice that:

  • Starting is less effortful because the first step is concrete and small.
  • Plans are realistic and include rest, so completion does not require a surge of willpower.
  • Communication takes less time and lands more clearly.
  • Appointments feel more productive because questions and priorities are ready.
  • Patterns that matter become visible without complex tracking.
  • There is more room for what the person values, not just what is urgent.

If those things happen reliably, the assistant is doing its job.

Why this matters for the profession

Psychologists are, at core, experts in behaviour—how people initiate, sustain, and change actions in real conditions. That expertise should shape any LLM intended for wellbeing: not just what it says, but how it structures choices, pacing, prompts, and feedback so that helpful behaviour becomes easier. We understand reinforcement, habit formation, motivation under uncertainty, sensory load, and the influence of context and cues; those principles belong in the model’s design and everyday functioning.

Without expert psychological input, an LLM can sound plausible while nudging unhelpful patterns—overcommitment, avoidance, or brittle perfectionism. Embedding behavioural science from the outset ensures the assistant doesn’t only “look right” in text; it actually supports the behaviours that matter.

Psychologists should not stand on the sidelines while general-purpose AI systems are applied to wellbeing. We have a responsibility to shape the tools that people will use, to protect boundaries, and to keep the focus on human agency. Our input ensures that:

  • The tone is respectful without being sentimental.
  • The structure reduces effort rather than adding a new layer of complexity.
  • The system is transparent about limits and routes people back to human care when needed.
  • Evaluations are meaningful and improvements are driven by real outcomes, not click metrics.

Involvement does not mean turning therapy into templates. It means translating what we know about pacing, clarity, and everyday support into a tool that is available when needed. Building and maintaining language-model assistants is a highly technical field, and the entry barrier for clinicians is real. That is exactly why we need to educate ourselves enough to contribute—not to become software engineers, but to bring clinical judgement, ethical sense, and practical knowledge of what actually helps.

If psychologists step back, design choices are made only by technical teams, narrowing the knowledge base and increasing the risk of tools that are misleading, unhelpful, or unsafe. Learning the basics—how these systems generate outputs, where they fail, and how to evaluate them—lets us collaborate as equal partners and set sensible guardrails. Our profession should treat this as core literacy for modern
care, not a specialist hobby.

Many patients, psychologists and therapists are understandably wary of AI. That caution is healthy. The best way to reduce it is to contribute to how these tools are made—bringing clinical standards, behavioural science, and lived experience into the design. An effective wellbeing assistant should be continually tested with neurodivergent adults, and by neurodivergent psychologists with a foot in both camps, which helps us spot where tone, pacing or prompts slip from helpful to unhelpful. This hands-on, iterative testing, combined with clear boundaries and transparent evaluation, is how understandable wariness becomes informed confidence.

A clear place in the ecosystem of care

Here is the place I see for an AI wellbeing assistant:

  • Between sessions: to keep plans moving, record what helped, and prepare questions for next time.
  • Alongside self-management: to turn intentions into steps and keep those steps visible and proportionate.
  • In work and study: to clarify tasks and reduce the effort of getting started.
  • In daily living: to steady routines and transitions with clear, adjustable prompts.
  • At decision points: to lay out options plainly and check they fit the person’s values and current energy.

This is a supportive, bounded role. It is practical, not clinical. It is broad enough to help many people and flexible enough to serve those with specific access needs.

Closing

I would like to build a wellbeing assistant because too many people are left to carry too much of the organising, planning and translating alone. When designed with care, an AI wellbeing assistant can lower those burdens. It can do so without pretending to be a therapist and without speaking down to the people it aims to support.

For neurodivergent adults—and indeed for anyone who wants steadier days—the goal is simple: clearer steps, kinder pacing, and a stronger sense of control over the shape of everyday life. If a tool can consistently provide that, it earns its place. And if psychologists help design it, that place will be safer, more useful, and more respectful of the many ways minds can work.

How to Use an AI Wellbeing Assistant Safely – Some Tips
  1. Speak Naturally and Authentically
    You don’t need to find perfect words. Just speak as you would to someone who listens with care. You can begin with what’s present for you:
    “I’ve been feeling anxious and I’m not sure why.”
    “Can you help me plan my day when I’m struggling with motivation?”
    Your wellbeing assistant should respond with empathy and evidence-informed guidance.
  2. Be Patient and Keep an Open Mind
    Your wellbeing assistant should learn from the flow of your conversation, so it may not always respond exactly as you expect at first. You can clarify or guide it back to your focus.
    Think of it like an interactive self-help workbook: the more you stay engaged, the more helpful and aligned to your needs it becomes.
  3. Explore Guidance and Tools
    Your wellbeing assistant should offer both reflection and practical strategies. You can ask it for:
    • Coping techniques for anxiety or low mood
    • Structure and planning help for focus and motivation
    • Emotional awareness and communication support
    • Small “action steps” to practise between conversations
  4. Understand Its Role
    Your wellbeing assistant is a psychologically informed AI support assistant, not a human therapist.
    It might help you gain perspective and build coping skills, but it isn’t designed to replace professional care.
  5. Personalise the Way You Interact
    Tell it your preferences:
    “I prefer clear, step-by-step explanations.”
    “Can you check in on how I’m feeling before suggesting anything?”
    This flexibility can make it especially supportive for neurodivergent users who may experience communication or focus challenges.
  6. If You Are Feeling Suicidal or Want to Harm Yourself
    An AI wellbeing assistant might suggest where you can get help, but it cannot offer the support that a human being can give. So please contact someone you can trust for a chat, such as a family member, friend, GP (some are very responsive), a familiar mental health professional (e.g., a therapist, mental health nurse, crisis team) or a helpline, such as the Samaritans (and others that are listed on my resources page on this website).
  7. Approach It as a Journey, Not a Quick Fix
    Your wellbeing assistant will work better when used as part of your ongoing wellbeing practice — like journaling, mindfulness, or coaching. Each conversation can be a step toward understanding yourself better. Be pragmatic: AI models make mistakes and hallucinate, so if something does not feel right or seems odd, question it and ask for reference links so you can assess the source information.

    Here’s a very cute pooch as a landing pad after a very serious article!