Mental health app privacy language opens up holes for user data

In the world of mental health apps, privacy scandals have become almost commonplace. Every few months, reports or research reveal seemingly unscrupulous data-sharing practices at apps such as Crisis Text Line, Talkspace, BetterHelp, and others: people gave information to these apps in the hope of feeling better, and then it turned out their data was being used in ways that help companies make money (and don’t benefit them).

It feels like a twisted game of whack-a-mole to me. When put under close scrutiny, the apps often change or adjust their policies, and then new apps or new problems pop up. It’s not just me: Mozilla researchers said this week that mental health apps have some of the worst privacy protections of any app category.

Watching this cycle over the past few years got me interested in how, exactly, it keeps happening. App terms of service and privacy policies are supposed to govern what companies are allowed to do with user data. But most people barely read them before agreeing (pressing accept), and even if they do, they’re often so complex that it’s hard to know their implications at a glance.

“It makes it completely unknown to the consumer what even saying yes means,” says David Grande, an associate professor of medicine at the University of Pennsylvania School of Medicine who studies digital health privacy.

So what does it mean to say yes? I took a look at the fine print on a few apps to get an idea of what’s going on under the hood. “Mental health app” is a broad category, and it can cover anything from peer-to-peer counseling hotlines to AI chatbots to one-on-one connections with real therapists. Policies, protections, and regulations vary between all of these categories. But I found two features common to many privacy policies that made me wonder what the point of even having a policy was in the first place.

We may change this policy at any time

Even if you read a privacy policy closely and carefully before signing up for a digital mental health program, and even if you feel really comfortable with that policy, the company can go back and change it whenever it wants. It might tell you; it might not.

Jessica Roberts, director of the Health Law and Policy Institute at the University of Houston, and Jim Hawkins, a law professor at the University of Houston, highlighted the problems with this type of language in a 2020 editorial in the journal Science. Someone might sign up expecting a mental health app to protect their data in one way, only to have the policy revised to leave their data open to broader use than they want. Unless they go back and check the policy, they wouldn’t know.

One app I reviewed, Happify, specifically states in its policy that users will be able to choose whether they want the new data uses in any new privacy policies to apply to their information. They can opt out if they don’t want to be dragged into the new policy. BetterHelp, on the other hand, says the only recourse if someone doesn’t like the new policy is to stop using the platform altogether.

Having this kind of flexibility in privacy policies is by design. The type of data these apps collect is valuable, and companies likely want to be able to take advantage of any opportunities that may arise for new ways to use this data in the future. “There are a lot of benefits to keeping these things very open from a business perspective,” Grande says. “It’s hard to predict a year or two years, five years into the future, what other new uses you might think of for this data.”

If we sell the business, we also sell your data

Feeling comfortable with all the ways a company uses your data at the time you sign up for a service also doesn’t guarantee that someone else won’t be in charge of that company in the future. All of the privacy policies I looked at included specific language saying that if the app is acquired, sold, merged with another group, or goes through some other business transaction, the data goes with it.

The policy, then, only applies right now. It may not apply in the future, after you’ve already used the service and given it information about your mental health. “So you could say they’re completely useless,” says John Torous, a digital health researcher in the department of psychiatry at Beth Israel Deaconess Medical Center.

And data could be precisely why one company buys another in the first place. The information people give to mental health apps is very personal and therefore very valuable – arguably more so than other types of health data. Advertisers may wish to target people with specific mental health needs for other types of products or treatments. Chat transcripts from a therapy session can be mined to gain insights into how people are feeling and reacting to different situations, which could be useful for groups developing artificial intelligence programs.

“I think that’s why we’ve seen more and more cases in behavioral health — that’s where the data is most valuable and easiest to harvest,” Torous says.


I asked Happify, Cerebral, BetterHelp, and 7 Cups about these specific bits of language in their policies. Only Happify and Cerebral responded. Spokespeople for both described the language as “standard” in the industry. “In either case, the individual user will need to review the changes and opt in,” Happify spokeswoman Erin Bocherer said in an email to The Verge.

Cerebral’s policy around data sales benefits customers because it lets them continue treatment if ownership of the business changes, according to a statement emailed to The Verge by spokesperson Anne Elorriaga. The language allowing the company to change its privacy terms at any time “allows us to keep our customers informed about how we treat their personal information,” the statement said.

Now, those are just two small sections of the privacy policies found in mental health apps. They jumped out at me as specific bits of language that give companies broad leeway to make sweeping decisions about user data — but the rest of these policies often do the same thing. Many of these digital health tools aren’t staffed by medical professionals who talk directly with patients, so they aren’t subject to HIPAA guidelines around the protection and disclosure of health information. And even if they do decide to follow HIPAA guidelines, they still have broad freedom with user data: the rules allow groups to share personal health information as long as it’s anonymized and stripped of identifying information.

And these broad policies aren’t just a feature of mental health apps. They’re also common in other types of health apps (and apps in general), and digital health companies often have enormous power over the information people give them. But mental health data comes under greater scrutiny because most people view it differently from other types of health information. A survey of US adults published in JAMA Network Open in January, for example, found that most people were less likely to want to share digital information about depression than about cancer. The data can be incredibly sensitive – it includes details about people’s personal experiences and vulnerable conversations that they may want to keep private.

Getting healthcare (or doing any other personal activity) online usually means that a certain amount of data gets sucked up by the internet, Torous says. That’s the usual tradeoff, and expectations of complete privacy in online spaces are probably unrealistic. But, he says, it should be possible to moderate how much of that happens. “Nothing online is 100% private,” he says. “But we know we can make things a lot more private than they are right now.”

Yet making changes that would genuinely improve data protections for people’s mental health information is difficult. Demand for mental health apps is high: their use exploded in popularity during the COVID-19 pandemic, as more people looked for treatment but there still wasn’t enough mental health care available. The data is valuable, and there are no real external pressures on companies to change.

So the policies, which leave people open to losing control of their data, keep the same structures. And until the next major media report draws attention to the specific case of a specific app, users might not know what they’re vulnerable to. Left unchecked, Torous says, this cycle could erode trust in digital mental health overall. “Health and mental health care is built on trust,” he says. “I think if we continue down this path, we will eventually lose the trust of patients and clinicians.”
