
AI principles at Headspace

Headspace is committed to building AI product features that are safe and secure.


  • Built in partnership with mental health professionals
  • Proprietary AI safety systems
  • Built with privacy by design

How we use AI today

  • To support our members with self-reflection and finding gratitude through Ebb, our empathetic AI companion
  • To minimize administrative burden on our care providers by assisting with routine clinical tasks
  • To assist in identifying members with high-acuity needs or concerns and supporting them with additional resources
  • To surface and recommend relevant content for members
  • To surface potentially inappropriate or endangering comments in community postings for provider review


What our AI doesn't do

  • Diagnose or treat mental health conditions
  • Serve as a replacement for a licensed therapist or mental health coach
  • Provide medical or clinical advice

Our guiding principles


  • Our team of clinical psychologists, product designers, data scientists, and engineers works together to ensure high-quality experiences. For example, Ebb was designed with evidence-based mental health standards to support our members. We firmly believe that mental health experts are essential to developing AI for mental health products and services.

  • We take the responsibility of safeguarding our members’ personal information extremely seriously. As a mental health company, we’re committed to protecting our members’ privacy like we would our own – not only because it’s required by law, but because it’s the right thing to do. Headspace AI uses leading privacy and data protection practices, such as encrypting data and storing it in secure environments. For more detail, see Headspace’s privacy policy.

  • Headspace prioritizes member safety in all interactions with AI. We rigorously test our AI to ensure member safety and to prevent inappropriate use. We utilize proprietary safety systems to identify members with critical concerns, such as thoughts of harming themselves or others, abuse, and more. When we identify that a member needs more support or is unsafe, we enable rapid access to appropriate resources. Finally, we routinely evaluate and improve our safety systems.

  • Headspace aims to develop and maintain culturally responsive AI in its products that not only mitigates biases but actively promotes health equity. We integrate a deep understanding of diverse, lived experiences across multiple identity dimensions, including (but not limited to): race, ethnicity, caste, gender identity and expression, sexual orientation, socioeconomic status, age, religiosity/spirituality, and ability status.

    We ensure cultural responsiveness by reflecting a broad spectrum of identities and experiences in our team composition, AI data, and development processes, with ongoing input from a team of DEIB experts.


  • Before members give explicit consent and engage directly with our AI product features, we inform them about those features’ functionality and limitations. Headspace empowers members to choose how they interact with AI by informing them of the risks, benefits, alternatives, and implications of engaging with it.

The AI Council

In a mindful, methodical, and responsible effort to uphold our AI principles, we’ve formed a cross-functional council made up of a diverse group of clinical and DEIB experts. We call this council the Clinically Rigorous and Culturally Responsive AI Council, or The AI Council for short.

The AI Council is committed to embedding AI principles in Headspace with integrity. It includes seasoned experts in clinical intervention and culturally responsive care, and is supported by multiple executive sponsors and cross-functional teammates. The council guides the development of our AI technologies, making sure they meet rigorous mental health standards and are culturally responsive. The AI Council's efforts highlight Headspace's commitment to mental health solutions that are not just effective and ethical, but also equitable and attuned to the diverse communities we serve.

AI safety

Proactive Safety Features
We use custom AI models to help identify members who may be at mental health risk, unable to keep themselves safe, or in need of more support. This proprietary technology provides both an extra layer of safety and the ability to quickly connect members to the right resource at the right time. For example, when a member is interacting with our AI companion Ebb, the tool includes an "always-on" filter designed to detect expressions of risk. This critical safety feature ensures that members in need of a higher degree of support are identified early, allowing for immediate intervention and rapid guidance to appropriate support resources.
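
To make the general flow concrete, here is a minimal, hypothetical sketch of how an "always-on" risk filter can sit in front of a conversational companion. The risk tiers, the `classify` and `generate_reply` callables, and the response wording below are illustrative assumptions for this sketch only, not Headspace's actual models, thresholds, or routing logic.

```python
# Hypothetical sketch: every inbound message is screened for risk before the
# companion replies, and critical risk routes the member to crisis resources.

from dataclasses import dataclass
from enum import Enum
from typing import Callable


class RiskLevel(Enum):
    NONE = "none"
    ELEVATED = "elevated"    # needs more support; suggest human care
    CRITICAL = "critical"    # imminent risk; surface crisis resources immediately


@dataclass
class ScreeningResult:
    level: RiskLevel
    rationale: str


def handle_member_message(
    text: str,
    classify: Callable[[str], ScreeningResult],
    generate_reply: Callable[[str], str],
) -> str:
    # The filter runs on every message, without exception ("always-on").
    result = classify(text)

    if result.level is RiskLevel.CRITICAL:
        # Bypass normal generation and surface crisis resources right away.
        return (
            "It sounds like you may be in crisis. You deserve immediate support: "
            "please contact your local emergency number or a crisis line such as 988 (US)."
        )

    reply = generate_reply(text)
    if result.level is RiskLevel.ELEVATED:
        # Keep the normal reply, but add a nudge toward human support.
        reply += "\n\nIf you'd like more support, we can help connect you with a coach or therapist."
    return reply
```

In a design like this, the screening step is independent of the reply generator, so the safety check cannot be skipped or altered by the conversational model itself.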

Safety-Oriented Interaction Design
The safety protocols embedded into Headspace’s AI tools go beyond crisis detection; they are also designed to prevent harm by encouraging interactions that are safe and promote health. This is achieved through careful, expert-led AI development processes and consistent evaluation of interaction flows for safety risks and ideal member experience.

Ongoing System Assessment
Headspace’s AI development includes ongoing systems assessment to ensure risk coverage as member interactions evolve. These ongoing efforts help maintain a high standard of safety as our tools adapt to new user data and evolving mental health practices.

Frequently asked questions

  • We use leading practices to protect privacy and adhere to privacy-related regulations. We encrypt data and store it in secure environments. We also minimize employee access to data on a need-to-know basis.

  • We acknowledge that AI bias is a significant concern in the industry that must be addressed in an ongoing and rigorous fashion. We have operational systems in place not only to mitigate bias, but to promote health equity when using AI. With regard to our members’ health, it is not enough to “do no harm” – Headspace extends beyond this principle to promote the health, wellness, and outcomes of diverse people and communities with AI.

  • We deploy a number of strategies to ensure Headspace AI is culturally responsive:

    • Establish inclusive AI product teams with individuals who are diverse in their identities, and integrate DEIB experts into product development processes to foster accountability.
    • Conduct ongoing evaluations of our AI with datasets that reflect a diversity of lived experience.
    • Identify underrepresented groups’ needs during the development process and, to the extent possible, involve these groups in that process.
    • Routinely evaluate and update our AI approach in accordance with emerging best-practice standards in the field of DEIB.
