AI at Headspace
We believe in the potential of AI to democratize mental health support at scale.

At Headspace, we’ve always pushed boundaries to make care more accessible. A decade ago, using an app to meditate was a strange idea. We made it work by making it human, simple, and grounded in science. Today, we see AI as the next evolution of that vision as we build the world’s leading mental health companion.
Our AI principles
Putting people first
AI has the potential to solve the biggest challenges in mental health: access, cost, and stigma. By making high-quality, evidence-based support available to anyone at any time, and personalizing it in ways that feel natural and human, AI can help more people feel seen, heard, and supported, without the high price tag or pressure.
- Member safety. Headspace prioritizes member safety in all interactions with AI. We rigorously test our AI to ensure member safety and to prevent inappropriate use. We also utilize a patented safety-risk detection algorithm to identify members with critical concerns, such as thoughts of harming themselves or others, abuse, and more. When we identify that a member needs more support or is unsafe, we enable rapid access to appropriate resources.
- Transparency and agency. Prior to explicit consent and then direct engagement with our AI product features, we inform members about its functionality and limitations. Headspace empowers members to choose how they would like to interact with AI, by informing them of the risks, benefits, alternatives to and implications of engaging with AI.
Learning from experts
We believe AI will not solve the world’s most pressing problems by training only on web data. AI models must be guided by decades of advancement in mental health research if they are going to improve the quality of care and not just offer poor substitutes. And that’s exactly what we do at Headspace.
- Clinical expertise. We firmly believe that mental health experts are essential to developing AI for mental health products and services. Our team of clinical psychologists, product designers, data scientists and engineers work together to ensure high-quality experiences. For example, Ebb was designed with evidence-based mental health standards to support our members.
- Inclusivity, equity, and cultural responsiveness. Headspace aims to develop and maintain culturally responsive AI in its products that not only mitigates biases but actively promotes health equity. We integrate a deep understanding of diverse, lived experiences across multiple identity dimensions, including (but not limited to): race, ethnicity, caste, gender identity and expression, sexual orientation, socioeconomic status, age, religiosity/spirituality, and ability status. We ensure cultural responsiveness by reflecting a broad spectrum of identities and experiences in our team composition, AI data, and development processes, with ongoing input from a team of experts.
Taking privacy seriously
Privacy is not an afterthought for us. We take the responsibility of safeguarding our members’ personal information extremely seriously. As a mental health company first, we’re committed to protecting our members’ privacy like we would our own – not only because it’s required by law, but because it’s the right thing to do. Headspace is fully compliant with privacy regulations, such as HIPAA and GDPR, and we use leading data protection practices, such as encrypting data and storing it in secure environments. For general information on Headspace’s privacy policy, learn more here.
How we use AI today
To support our members with self-reflection and personalized content recommendations through Ebb, our empathetic AI companion
To minimize administrative burden on our care providers by assisting with routine clinical tasks
To assist in identifying members with high-acuity needs and concerns and supporting them with additional resources
What our AI doesn't do today
Diagnose or treat mental health conditions
Serve as a replacement for a licensed therapist or mental health coach
Provide medical or clinical advice.
Safeguarding Users in Ebb
At Headspace, user safety and trust are foundational to how we design and deploy AI. A multi-layered safety system ensures when people turn to Ebb, they do so in an environment that supports their wellbeing. This includes:
- Real-time monitoring and risk identification
- Human clinician review of conversations flagged as potential high acuity risks
- Safety-by-design guardrails built into the AI system
- Pre-release evaluation for new feature development
- Red teaming and post-release tracking of safety and performance
Real-Time Monitoring & Risk Identification
100% of messages sent to Ebb are monitored by a patented Safety Risk Identification system. This system uses a small fine-tuned language model as well as foundational LLMs to classify user messages as potential risks in real time. Risks are classified into seven types: suicidal ideation, homicidal ideation, self-harm, domestic violence, substance use, eating disorders, abuse of vulnerable populations (children, elderly).
When a risk is identified, Ebb gently directs the user to crisis care, giving them a direct link to text or call 988 in the US and Canada, as well as links to international resources, and urging them to seek emergency care if they need it. Ebb then ends the conversation, reminding the user that they are not alone, and there are people who can help.
Human Clinical Review of Flagged Risks (Manual Human Review)
When messages are flagged as potential risks, they are deidentified and screened for high-acuity risks, outliers, and emerging AI risks such as AI psychosis. These high-risk messages are then reviewed by our team of licensed clinicians who are practicing mental health professionals. They determine any further action needed and inform the development team of general patterns or trends that need to be addressed by product development.
They also review a subset of random Ebb conversations daily for quality assurance and continuous improvement.
Safety-By-Design System Guardrails
Ebb is a complex AI system that contains multiple models, which are orchestrated to produce a response. Specific guardrails within the Ebb system restrain the following behavior:
- Giving medical advice on treatment or diagnosis, including discussing medications
- Using clinical therapeutic techniques
- Out of scope topics such as travel planning, or creative writing and storytelling
Evaluation & Red-Teaming
An automated evaluation system uses “LLM-as-a-judge” techniques to assess the safety and quality of Ebb’s responses. Multiple metrics are used across the following dimensions: conversational quality, safety, and out-of-scope behavior.
One goal of this evaluation system is to ensure pre-release quality: changes to the system are evaluated before they are released using “synthetic users.” The system also is used in post-release monitoring of live conversations with users and live metrics are populated in a dashboard.
Regular red teaming activities proactively identify vulnerabilities that can be addressed by product development.
Transparency & User Experience
In the opt-in to Ebb, users are reminded that Ebb is an AI tool and it is not a substitute for human care. Ebb is intended for use by adults 18 and older.
Stay in the loop
Be the first to get updates on our latest content, special offers, and new features.
By signing up, you’re agreeing to receive marketing emails from Headspace. You can unsubscribe at any time. For more details, check out our Privacy Policy.
- © 2025 Headspace Inc.
- Terms & conditions
- Privacy policy
- Consumer Health Data
- Your privacy choices
- CA Privacy Notice