How Mental Health Apps Are Handling Personal Information

Blog Post
Feb. 23, 2024

The rapid advancement of artificial intelligence (AI), a technology many people had previously encountered mostly through books and movies, has encouraged industries to start incorporating it into their everyday operations. The health industry, and more specifically the mental health industry, is no exception.

Mental health applications that were previously limited to tracking mood changes and offering symptom management advice have started integrating artificial intelligence, introducing chatbots for users to interact with instead of human therapists. Chatbots are computer programs meant to replicate interactions with a human. In mental health scenarios, these interfaces become tools to fill gaps in places like schools, where funding for human therapists and support can be limited. Already, concerns over health information privacy have arisen—and the unfortunate reality is that laws like the Health Insurance Portability and Accountability Act (HIPAA), which would protect this information in other contexts, don’t fully apply to third-party health apps.

HIPAA primarily covers healthcare providers like doctors, hospitals, and pharmacies, along with the vendors that handle health information on their behalf. Businesses that do not directly handle similar private health information, or that do not maintain an agreement with a covered entity such as a health insurance provider, are not covered by HIPAA. Additionally, some third-party applications lack a direct connection to a healthcare provider and may use data brokers to sell consumer information.

With the complicated current landscape of artificial intelligence, these third-party applications can sidestep boundaries that constrain traditional providers. For instance, health insurance providers do not always pay for services outside their state of coverage. Because AI-driven applications are not necessarily location-bound, the companies running them can circumvent this restriction.

Like other AI-based applications, mental health chatbot-oriented applications are trained to understand behavior and respond to individuals, but there is a stark difference between asking a chatbot to summarize information and expressing intimate emotions to it. Users may disclose negative mental states, such as suicidal thoughts or self-harm, in deeply vulnerable and compromising moments. Depending on the terms of their policies, applications may share this information with third parties such as health insurance companies, which could then decide to increase premiums or even deny coverage. Information shared with mental health chatbots could also be used in targeted advertising if nothing in existing company policies prohibits the practice.

Many mental health applications perform the same essential functions (with certain differences), but they do not treat user information with the same sensitivity. Some applications are also less transparent than others about their data collection practices. The landscape of mental health chatbot-oriented apps is growing, and the strength of their privacy protections varies widely, as evidenced by the privacy policies of currently available mental health apps such as Elomia, Wysa, Limbic, Mindspa, and Nuna. Companies incorporating AI into their mental health services should not only know what privacy policies exist in this market but also understand which of those policies reflect privacy best practices.

Defining and Protecting Sensitive Information

Before diving into the privacy policies of mental health apps, it’s necessary to distinguish between “personal information” and “sensitive information,” both of which are collected by such apps. Personal information can be defined as information that is “used to distinguish or trace an individual’s identity.” Sensitive information, by contrast, is any data that, if lost, misused, or illegally modified, may negatively affect an individual’s privacy rights. While health information outside HIPAA’s scope has historically been treated as general personal information, states like Washington are enacting legislation that treats a wide range of health data as sensitive and subjects it to stricter requirements.

Legislation addressing the treatment of personal and sensitive information varies around the world. The General Data Protection Regulation (GDPR) in the EU, for example, requires all personal information to be protected, with certain special categories, including health data, receiving elevated protection. Meanwhile, U.S. federal law says little about how information provided to a third party must be protected, so mental health app companies based in the United States approach personal information in all sorts of ways. For instance, Mindspa, an app with chatbots intended to be used only when a user is experiencing an emergency, and Elomia, a mental health app meant to be used at any time, make no distinction between these contexts in their privacy policies, nor between the potentially different levels of sensitivity associated with ordinary and crisis use.

Wysa, on the other hand, clearly indicates how it protects personal information: its privacy policy distinguishes between personal and sensitive data and notes that all health-based information receives additional protection. Similarly, Limbic labels everything as personal information but notes that health, genetic, and biometric data fall within a “special category” that requires more explicit consent before use than the other personal information it collects.
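To make the distinction concrete, the sketch below shows one way an app’s backend could tag collected fields as ordinary personal data versus special-category (sensitive) data and gate the latter behind a separate, explicit consent, in the spirit of the GDPR-style approach described above. The field names, classification map, and consent flags are hypothetical, not drawn from any of the apps discussed here.

```python
# Minimal sketch (hypothetical fields) of classifying data as personal vs.
# special-category and requiring explicit consent before storing the latter.
from dataclasses import dataclass
from enum import Enum, auto


class DataClass(Enum):
    PERSONAL = auto()          # e.g., name, email, language preference
    SPECIAL_CATEGORY = auto()  # e.g., health, genetic, or biometric data


# Hypothetical classification map; a real app would derive this from its
# privacy policy and data inventory.
FIELD_CLASSIFICATION = {
    "name": DataClass.PERSONAL,
    "email": DataClass.PERSONAL,
    "language": DataClass.PERSONAL,
    "mood_log": DataClass.SPECIAL_CATEGORY,
    "prescription_history": DataClass.SPECIAL_CATEGORY,
}


@dataclass
class ConsentRecord:
    accepted_terms: bool                 # baseline consent given at sign-up
    explicit_health_data_consent: bool   # separate opt-in for special-category data


def may_store(field: str, consent: ConsentRecord) -> bool:
    """Return True only if the user's consent covers this field's classification."""
    # Unknown fields default to the stricter classification.
    classification = FIELD_CLASSIFICATION.get(field, DataClass.SPECIAL_CATEGORY)
    if classification is DataClass.SPECIAL_CATEGORY:
        return consent.accepted_terms and consent.explicit_health_data_consent
    return consent.accepted_terms


if __name__ == "__main__":
    consent = ConsentRecord(accepted_terms=True, explicit_health_data_consent=False)
    print(may_store("email", consent))     # True: ordinary personal data
    print(may_store("mood_log", consent))  # False: needs explicit health-data consent
```

Defaulting unrecognized fields to the special category is a deliberately conservative choice: anything not explicitly inventoried is treated as sensitive rather than silently stored.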

Transparency about Data Collection, Use, and Retention

Some apps’ privacy policies do not highlight or address the problems that come with providing sensitive health information to AI chatbots. AI responses rely on continued usage, and the information shared can be used by the program to further understand a user’s mental state; in some cases, the program can also flag when an individual is expressing extreme behavior that could harm them. The apps surveyed here differ notably in their transparency and in their data collection and retention measures.

Collection and Scope

A majority of the applications note that they collect information in two ways: at account creation and through application usage. Account-creation data typically consists of basic demographic information, such as name, gender, or language of choice, and often cannot be deleted unless a user stops using the app altogether. Mindspa, for instance, notes that name and email address are required fields for accessing an account. These mobile applications, however, often require additional information for the chatbots to work well. Mindspa, for example, may ask users to provide physical, mental, and general wellness data in order to use the application in full. Users can refuse to share this additional information, but they are then more limited in how they can use the application.

Other apps, like Nuna, list a more substantial set of information collected on a case-by-case basis, including IP addresses and device characteristics. The application may even collect birth, marriage, divorce, and death records, which its policy defines as “publicly available personal information.” The policy does not carve out an additional category of sensitive information for the health data consumers provide, nor does it explain whether such data receives a different level of collection or security. Notably, even though Nuna’s development relies on cognitive behavioral therapy research, the policy does not clarify how data collection may vary, if at all.

Usage

The mental health app Elomia is trained on consultations by human therapists but does not explain in detail how the AI may use this information in practice. Its brief privacy policy does not define personal information or indicate additional protections for health-based details shared with the chatbot, though it does guarantee that the information is not shared with third parties. However, Elomia’s privacy description on its Apple App Store listing notes that data linked to an individual may be used for advertising purposes.

Unlike other mental health apps, Limbic is not promoted primarily as a chatbot-first application. Instead, it encourages therapists to use its referral assistant to save time, collect demographic information, and assess current risks and harms to users before directing them to a real person. The company is, however, expanding to incorporate a therapy assistant with similar classifications, and through Limbic Access it encourages organizations to put this chatbot, rather than a person, behind the screen first. Nonetheless, Limbic maintains a comprehensive chart and explanation of the types of personal data it processes, the purpose of the processing, and the lawful basis for doing so. Health data, defined by the app as information like prescription history, mood logs, and emotional triggers, is used for research and development internally and in the public interest (such as sharing relevant data with the NHS).
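The kind of chart Limbic publishes amounts to a processing register: each category of data is paired with a purpose and a lawful basis before any processing occurs. The sketch below illustrates that structure with entirely hypothetical entries; it is not a reproduction of Limbic’s actual chart.

```python
# Minimal sketch (hypothetical entries) of a data-processing register pairing
# each data category with a purpose and a documented lawful basis.
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class ProcessingRecord:
    data_category: str   # what is processed
    purpose: str         # why it is processed
    lawful_basis: str    # e.g., contract, explicit consent, public interest


REGISTER = [
    ProcessingRecord("contact details", "account administration", "contract"),
    ProcessingRecord("mood logs", "internal research and development", "explicit consent"),
    ProcessingRecord("referral outcomes", "public-interest research sharing", "public interest"),
]


def basis_for(category: str) -> Optional[str]:
    """Return the documented lawful basis for a data category, or None if unregistered."""
    for record in REGISTER:
        if record.data_category == category:
            return record.lawful_basis
    return None  # unregistered data should not be processed at all


if __name__ == "__main__":
    print(basis_for("mood logs"))        # "explicit consent"
    print(basis_for("browsing habits"))  # None: not in the register
```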

Data Retention, Storage, and Deletion

Certain apps establish no data retention periods for any information, regardless of its sensitivity level; Elomia, Mindspa, Nuna, and Limbic are among those that keep retention terms vague. Wysa, however, sets explicit timeframes for data retention and storage. These range from 15 days to 10 years, depending on the information, with most personal information generated through use of the application retained for up to 10 years.
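A retention schedule of the kind Wysa describes can be expressed very simply: each category of data gets a maximum lifetime, and anything past that lifetime is flagged for deletion or anonymization. The sketch below uses hypothetical categories and periods chosen only to mirror the 15-day-to-10-year range mentioned above.

```python
# Minimal sketch (hypothetical categories/periods) of a per-category retention
# schedule with an expiry check used to drive deletion or anonymization.
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical schedule spanning the 15-day-to-10-year range described above.
RETENTION_SCHEDULE = {
    "crash_logs": timedelta(days=15),
    "chat_transcripts": timedelta(days=365 * 10),
    "account_profile": timedelta(days=365 * 10),
}


def is_expired(category: str, collected_at: datetime,
               now: Optional[datetime] = None) -> bool:
    """True if a record has outlived its category's retention period."""
    now = now or datetime.now(timezone.utc)
    period = RETENTION_SCHEDULE.get(category)
    if period is None:
        # Unknown categories default to the shortest period, not indefinite retention.
        period = min(RETENTION_SCHEDULE.values())
    return now - collected_at > period


if __name__ == "__main__":
    old_record = datetime(2023, 1, 1, tzinfo=timezone.utc)
    print(is_expired("crash_logs", old_record))        # True: past the 15-day window
    print(is_expired("chat_transcripts", old_record))  # False: within the 10-year window
```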

Mental health apps also differ in their deletion policies. Nuna states that although it retains information with no set timeline, it will delete or anonymize data once it is no longer needed for business purposes. Mindspa allows users to request that information be deleted, particularly for children using the app, but it offers no explicit guarantee.

Pushing Forward for Stronger Privacy Policies

With these factors in mind, companies looking to offer similar services, or to improve their own, can take concrete steps that put consumers first. Most importantly, privacy policies need to draw a clear written distinction between personal and sensitive information; apps like Wysa that already make that distinction in their definitions are a step in the right direction. Likewise, companies based in the United States could adopt stricter measures that prioritize user well-being, such as those in the GDPR, establishing a baseline that treats all information as worth protecting regardless of its sensitivity level. In short, companies can improve their privacy practices by writing transparent definitions and maximizing data protection.

This is not to say that there has been no progress on privacy measures over the last few years. The work of Mozilla’s team in highlighting the privacy and security of mental health applications, including AI-centric programs, has created positive change. Some applications have improved following Mozilla’s initial review, opting to reach out to Mozilla directly for clarification on how they can do better.

Nonetheless, a handful of improvements does not make the widespread problems with transparency and with the classification of sensitive health data disappear. If companies conduct internal audits similar to Mozilla’s, they can strengthen the definitions in their privacy policies and adopt more robust measures in favor of consumer protection. At a time when pressure from third parties and users can change how companies collect personal information, it’s more important than ever to keep pushing those companies for better protections.

Related Topics: Data Privacy, Platform Accountability