This page provides general information about the security and privacy issues involved in developing or using mobile technology in behavioral-health contexts. Some of these issues remain areas of active research and represent research opportunities in their own right, but they are also topics that anyone developing or deploying mHealth technology should consider.
In developing these lists, we borrowed heavily from two review papers from the Trustworthy Health and Wellness (THaW.org) research group:
- David Kotz, Carl A. Gunter, Santosh Kumar, and Jonathan P. Weiner. Privacy and Security in Mobile Health – A Research Agenda. IEEE Computer, 49(6):22-30, June 2016. DOI 10.1109/MC.2016.185.
- David Kotz, Kevin Fu, Carl Gunter, and Avi Rubin. Security for Mobile and Cloud Frontiers in Healthcare. Communications of the ACM, 58(8):21-23, August 2015. DOI 10.1145/2790830.
We present the information in bullet form for quick reference, and group the items into topical categories.
Data sharing and consent
An mHealth system should
- separate data collection, analysis, and presentation to limit data flow to those who need it;
- expose to its users, in an understandable way, what data are being collected, what information is being shared with whom, what might be inferred by that information, and where and how the information might be used; lengthy, legalistic privacy policies are not effective at communicating to users, and should be avoided;
- notify users of any deviations from the agreed-upon protocol, or changes to the protocol, in an understandable way;
- give patients (or study participants) usable mechanisms to limit or adjust the collection, upload, sharing, or retention of data about them;
- recognize that users (such as patients or research participants) may not want mHealth data to reveal all of their detailed activities; and
- enable users to suspend reporting for periods of time, or block systems from storing or sharing data that are not directly relevant to the research study or their treatment.
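The last two points above can be made concrete with a small sketch. The class and field names below are illustrative assumptions, not part of any particular mHealth framework; the idea is simply that consent settings are checked before each record is stored or transmitted, so a paused or non-consented category never leaves the device.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

# Hypothetical per-user consent settings; names are illustrative.
@dataclass
class ConsentSettings:
    paused_until: Optional[datetime] = None            # user-initiated suspension
    shareable_categories: set = field(default_factory=set)

    def allows(self, category: str, now: datetime) -> bool:
        """True only if reporting is not paused and the user has
        opted in to sharing this category of data."""
        if self.paused_until is not None and now < self.paused_until:
            return False
        return category in self.shareable_categories

def upload(record: dict, settings: ConsentSettings) -> bool:
    """Drop (rather than store or transmit) records the user has
    not consented to share; return whether the record was sent."""
    if not settings.allows(record["category"], datetime.now()):
        return False
    # ... transmit to the backend here ...
    return True
```

A design point worth noting: the check happens at the point of collection, so suspended or blocked data are never retained at all, rather than being filtered out later.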
Research is needed on effective interfaces and means to accomplish the above goals, and also
- to develop mechanisms that can automatically turn sensors on and off (for example, turning off wearable cameras when the user is bathing or toileting) to preserve user privacy,
- to develop means to enforce data-management policies as mHealth data are collected, stored, processed, and shared, e.g., to ensure that an individual’s personal privacy preferences remain attached to data about them, and that these preferences are enforced even as the data are stored and forwarded to providers and other healthcare system participants.
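One common approach to the second research goal is a "sticky policy": the owner's preferences are bundled with the data record itself, and every downstream system re-checks them before granting access. The sketch below is a minimal illustration under assumed role and field names, not a complete enforcement mechanism (which would also need cryptographic protection of the policy binding).

```python
from dataclasses import dataclass

# A "sticky policy" sketch: the owner's preferences travel with the
# record, and each downstream system re-checks them at the point of
# use, not just at collection time. Roles are assumptions.
@dataclass(frozen=True)
class StickyRecord:
    data: dict
    owner: str
    allowed_roles: frozenset       # e.g., frozenset({"clinician"})
    retention_days: int

def access(record: StickyRecord, requester_role: str) -> dict:
    """Enforce the attached policy wherever the record lands."""
    if requester_role not in record.allowed_roles:
        raise PermissionError(
            f"{requester_role} not permitted by owner policy")
    return record.data
```

When the record is forwarded to a provider or other participant, data and policy are serialized together, so the next hop has what it needs to repeat the check.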
User consent or policy determines who can access mHealth data, but how do mHealth systems confidently identify the individual(s) they are sensing or who is using the system?
- mHealth data-collection systems need to identify the individual(s) they are sensing, so they can attach the correct identity to the mHealth data for provenance.
- mHealth data-usage systems need to identify the individual(s) attempting to access the data, to properly limit access to authorized parties and to create accurate audit logs.
Traditional authentication mechanisms (like passwords) can disrupt workflow and are difficult or infeasible to implement on small devices – particularly devices with no keyboard or touchscreen.
- Authentication mechanisms for lay users need to be intuitive and easy enough to use to avoid encouraging users to disable or weaken the mechanism.
- Authentication mechanisms for clinical settings must recognize that staff often wear gloves and masks (ruling out solutions based on face or fingerprint recognition), and work with a wide range of devices (smartphones, tablets, desktops and laptops) throughout the day.
Consumer devices such as smartphones, smart watches, and tablets, as well as table-top devices like the emerging class of voice-based assistants, can provide an excellent platform for health-related monitoring or intervention. When used for sensitive health applications, however, it is imperative to secure these platforms to reduce the risk of information breach.
- Research is needed to develop (and evolve) best-practice guidelines for both the users and deployers of such technology, as well as those who develop the platforms and their applications.
Some mobile devices, and even some mobile apps, are now considered “medical devices” by the FDA and thus may be subject to regulatory approval.
- Medical devices (including smartphone apps) are networked and run safety-critical software, and thus must be designed with cybersecurity in mind.
- Medical devices built on commodity software must also defend themselves against conventional malware that spreads opportunistically to all systems running that software. Developers should plan for such devices to be used well past their designed lifetime, despite an evolving threat environment, and plan for years of software maintenance and fail-safe operation.
Mobile-health technology – whether devices, apps, or systems – should be designed for accountability.
- For example, they should record audit logs that can later be used for spot checks, or for after-incident review when something goes wrong.
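To be useful for after-incident review, an audit log should itself resist tampering. One standard technique (a sketch, not a production design) is to hash-chain the entries: each record includes the hash of its predecessor, so modifying any earlier entry breaks the chain and is detectable. Field names below are illustrative.

```python
import hashlib
import json
import time

class AuditLog:
    """Minimal hash-chained audit log: each entry embeds the hash of
    the previous one, so after-the-fact edits are detectable."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def record(self, actor: str, action: str, resource: str) -> None:
        entry = {
            "actor": actor, "action": action, "resource": resource,
            "ts": time.time(), "prev": self._last_hash,
        }
        # Hash the entry body deterministically, then attach the hash.
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered entry fails."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

A real deployment would also need to protect the log's tail (for example, by periodically anchoring the latest hash elsewhere), since a hash chain alone cannot detect truncation.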
Anonymization and data sharing
Anonymization, or de-identification, is an important tool for protecting privacy in data flow, when the patient (or participant) identity is not needed by a given user of the data.
- Research is needed to better understand and quantify re-identification risks inherent in various mobile sensors, and to develop data-transformation methods to limit such risks while retaining scientific utility. Such research will support the ever-growing need for (and societal benefits of) sharing of health-related data.
- Although HIPAA regulations provide some guidance about data fields considered identifiers or quasi-identifiers, and guidance about the de-identification of medical datasets, such guidance may be insufficient for providing anonymity to users captured in mHealth datasets – which often contain wider, richer forms of data than those found in traditional medical records.
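To illustrate what such guidance looks like in practice, here is a much-simplified sketch in the spirit of HIPAA's Safe Harbor rules: drop direct identifiers, truncate ZIP codes to their three-digit prefix, reduce dates to the year, and bucket ages of 90 and over. Real de-identification requires the full Safe Harbor identifier list (or expert determination); and, per the point above, for rich mHealth sensor data these steps alone may not prevent re-identification.

```python
# Partial, illustrative list of direct identifiers; Safe Harbor
# enumerates many more categories.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "ssn"}

def generalize(record: dict) -> dict:
    """Drop direct identifiers and coarsen common quasi-identifiers
    (simplified Safe Harbor-style transformation)."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue                            # drop entirely
        if key == "zip":
            out[key] = str(value)[:3] + "XX"    # keep 3-digit prefix
        elif key == "birth_date":
            out["birth_year"] = str(value)[:4]  # keep year only
        elif key == "age":
            out[key] = "90+" if value >= 90 else value
        else:
            out[key] = value
    return out
```

Note that sensor streams (accelerometer traces, location tracks, voice) have no analogue in this field-by-field approach, which is precisely why the research called for above is needed.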
Behavioral data, particularly that collected from free-living conditions (everyday life), can be particularly valuable for research – but also particularly sensitive, because a wide range of information may be inferred from the raw data.
- Research is needed to better understand and characterize privacy disclosure tradeoffs inherent in sharing behavioral data, and to develop data-transformation methods to limit privacy risks while retaining the data’s scientific utility.
- Furthermore, research needs to develop extensible methods for collecting, storing, and presenting contextual information along with health-related data collected by mHealth devices and apps to help data users (researchers or clinicians) verify and interpret the health data.
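One family of data-transformation methods mentioned above is to aggregate behavioral data first and then add calibrated noise – the basic Laplace mechanism of differential privacy. The sketch below is illustrative only: the epsilon and sensitivity parameters are assumptions, and choosing them well is exactly the utility-versus-privacy tradeoff the research agenda targets.

```python
import math
import random

def noisy_count(true_count: int, epsilon: float,
                sensitivity: float = 1.0) -> float:
    """Laplace mechanism: add noise with scale = sensitivity/epsilon
    to an aggregate count. Smaller epsilon means more privacy and
    more noise."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5               # uniform in [-0.5, 0.5)
    # Inverse-CDF sample from the Laplace distribution; the max()
    # guards against log(0) at the distribution's edge.
    noise = -scale * math.copysign(1.0, u) * \
        math.log(max(1.0 - 2.0 * abs(u), 1e-300))
    return true_count + noise
```

For example, releasing the number of participants who exceeded a step-count threshold via `noisy_count` limits what any single participant's presence reveals, at the cost of some accuracy in the released statistic.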
There are many public documents, including from federal agencies like FDA and NIST, that provide relevant information and guidance. Some of the more interesting documents include:
- Bartha Maria Knoppers and Adrian Mark Thorogood. Ethics and Big Data in health. Current Opinion in Systems Biology, 4:53–57, August 2017. DOI 10.1016/J.COISB.2017.07.001.
- David Kotz, Carl A. Gunter, Santosh Kumar, and Jonathan P. Weiner. Privacy and Security in Mobile Health – A Research Agenda. IEEE Computer, 49(6):22-30, June 2016. DOI 10.1109/MC.2016.185.
- David Kotz, Kevin Fu, Carl Gunter, and Avi Rubin. Security for Mobile and Cloud Frontiers in Healthcare. Communications of the ACM, 58(8):21-23, August 2015. DOI 10.1145/2790830.