Tag: machine learning

ChatGPT Gets Dartmouth Talking

Article Excerpt: ChatGPT, OpenAI’s trending chatbot that generates conversational responses to user prompts through advanced artificial intelligence, has been busy since its launch in late November… “ChatGPT and other generative AI technologies have huge potential for—and will have huge effects on—education,” says Provost David Kotz ’86, the Pat and John Rosenwald Professor in the Department of Computer Science. “My hope is to provide immediate support to faculty and instructors to become familiar with the technology and its impacts, and then look further down the road to consider how we can leverage it as a pedagogical tool, recognizing that it will be part of the future of teaching, learning, scholarship, and work.”


Article Source: Dartmouth News


Personalising Mental Health Care

Article Excerpt: Although researchers have made unprecedented progress in identifying ‘averaged’ or ‘population-level’ mechanisms of mental health disorders, these approaches have led to a drowning effect at an individual level where person-specific information is often lost if it doesn’t align with an averaged expectation. To bridge this gap between research and clinical practice, we have developed a novel individualised machine learning framework called Affinity Scores. By identifying personalised signatures that can be integrated into a clinician’s decision-making for each of their patients, Affinity Scores represent a fundamental shift in our approach to personalised psychiatry.


Article Source: Pursuit


Can Smartphones Help Predict Suicide?

Article Excerpt: A unique research project is tracking hundreds of people at risk for suicide, using data from smartphones and wearable biosensors to identify periods of high danger — and intervene… In the field of mental health, few new areas generate as much excitement as machine learning, which uses computer algorithms to better predict human behavior. There is, at the same time, exploding interest in biosensors that can track a person’s mood in real time, factoring in music choices, social media posts, facial expression and vocal expression.


Article Source: The New York Times


Artificial Intelligence Tools Quickly Detect Signs of Injection Drug Use in Patients’ Health Records

Article Excerpt: An automated process that combines natural language processing and machine learning identified people who inject drugs (PWID) in electronic health records more quickly and accurately than current methods that rely on manual record reviews. Currently, people who inject drugs are identified through International Classification of Diseases (ICD) codes that are specified in patients’ electronic health records by the health care providers or extracted from those notes by trained human coders who review them for billing purposes. But there is no specific ICD code for injection drug use, so providers and coders must rely on a combination of non-specific codes as proxies to identify PWIDs—a slow approach that can lead to inaccuracies.


Article Source: Medical XPress


Developing Trust in Healthcare AI, Step by Step

Article Excerpt: A new analysis examines how artificial intelligence in medicine can impact clinical decisions and identifies the steps that could build more trust in machine learning models from doctors and patients… As the usage of artificial intelligence in healthcare grows, some providers are skeptical about how much they should trust machine learning models deployed in clinical settings. AI products and services have the potential to determine who gets what form of medical care and when – so stakes are high when algorithms are deployed, as Chilmark’s 2022 “AI and Trust in Healthcare Report,” published September 13, explains.


Article Source: Healthcare IT News


Leveraging Data From Wearable Medical Devices

Article Excerpt: Diabetes, and other chronic conditions like cancer or cardiovascular disease, require a lifetime of management. In recent years, a slew of wearable devices such as glucose monitors, activity trackers, heart rate monitors, and pulse oximeters have been adopted by patients and health care providers to track and manage these conditions more effectively. These devices are also a rich source of data that can be analyzed to better understand the factors and behaviors that lead to improved health outcomes. “But they’re vastly underutilized,” says Temiloluwa Prioleau, assistant professor of computer science and co-director of the Augmented Health Lab, which is focused on bridging this gap.


Article Source: Dartmouth News


Predicting the Next-Day Perceived and Physiological Stress of Pregnant Women by Using Machine Learning and Explainability: Algorithm Development and Validation

Ng A, Wei B, Jain J, Ward E, Tandon S, Moskowitz J, Krogh-Jespersen S, Wakschlag L, Alshurafa N. Predicting the Next-Day Perceived and Physiological Stress of Pregnant Women by Using Machine Learning and Explainability: Algorithm Development and Validation. JMIR Mhealth Uhealth 2022;10(8):e33850. DOI: 10.2196/33850

This study aimed to develop and evaluate a machine learning model to predict next-day physiological and perceived stress by collecting sensor heart rate data and ecological momentary assessment (EMA) questionnaires, and applied an explainability model to the prediction results. A total of 16 adult pregnant women from an obstetrics and gynecology clinic were enrolled. Participants received a 12-week cognitive behavioral therapy intervention and wore a mobile electrocardiography (heart rate) sensor for 12 weeks. Participants also completed EMAs for perceived stress on their mobile phones 5 times a day for 12 weeks. In total, about 4,000 hours of sensor data and 2,800 completed EMAs were collected. Researchers used these data to train and evaluate 6 different machine learning models and selected the best-performing model for predicting next-day physiological and perceived stress. The random forest classifier performed best for both physiological and perceived stress, with average F1 scores (a commonly used evaluation metric) of 81.9% and 72.5%, respectively. Two features significantly predicted both physiological and perceived stress: feeling unable to overcome difficulties and the participant's number of children. Results demonstrated that a machine learning model can predict next-day physiological and perceived stress among pregnant women. Future studies should validate the model with a larger sample size.
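For readers unfamiliar with the F1 score mentioned above: it is the harmonic mean of precision and recall. A minimal sketch (illustrative only; the counts below are invented and do not come from the study):

```python
# Toy illustration of the F1 score: the harmonic mean of precision
# and recall, computed from confusion-matrix counts.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """Return the F1 score given true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)   # of predicted stress days, fraction correct
    recall = tp / (tp + fn)      # of actual stress days, fraction detected
    return 2 * precision * recall / (precision + recall)

# Made-up counts: 80 correct stress predictions, 20 false alarms,
# 15 missed stress days.
print(round(f1_score(tp=80, fp=20, fn=15), 3))  # → 0.821
```

Note that F1 balances the two error types, which matters here because "stressed" days are rarer than "not stressed" days, so plain accuracy would be misleading.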


Evaluation of an artificial intelligence-based medical device for diagnosis of autism spectrum disorder

Megerian JT, Dey S, Melmed RD, Coury DL, Lerner M, Nicholls CJ, Sohl K, Rouhbakhsh R, Narasimhan A, Romain J, Golla S, Shareef S, Ostrovsky A, Shannon J, Kraft C, Liu-Mayo S, Abbas H, Gal-Szabo DE, Wall DP, & Taraman S (2022). Evaluation of an artificial intelligence-based medical device for diagnosis of autism spectrum disorder. NPJ Digital Medicine, 5(1), 57–57.

Researchers conducted a double-blinded, multi-site, active comparator cohort study to test the accuracy of artificial intelligence software for diagnosing autism spectrum disorder (ASD). The software device collects data about a child's behavioral features from 3 sources: a caregiver questionnaire, analysis of two short one-minute home videos recorded and uploaded by the child's caregiver, and a provider questionnaire. Data are processed by a machine learning algorithm that indicates whether a person is ASD positive, ASD negative, or inconclusive (i.e., the input data are not sufficient for a predictive output). Researchers evaluated the software in a study with 425 children aged 18-72 months for whom a caregiver or provider had a concern about developmental delay, comparing the software outputs to the clinical standard (a diagnosis made by a provider based on DSM-5 criteria). Data collection with the software device took less time to administer and required less specialty training than the clinical-standard process. For about 33% of the sample, the algorithm produced a definite output that could be compared against clinical evaluation. Of these children, 98.4% with clinically diagnosed ASD received an ASD positive result and 78.9% without a clinical diagnosis of ASD received an ASD negative result. All children who received a false-positive result (n=15) had a non-ASD developmental condition, and only one child received a false-negative result. Overall, this machine learning tool demonstrated high sensitivity and good specificity for diagnosing ASD, and could expand the ability to diagnose ASD effectively in primary care, facilitating early intervention and more efficient use of specialist resources.
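A note on the metrics in this summary: with a three-way output (positive / negative / inconclusive), sensitivity and specificity are computed only over the determinate results, as in the percentages above. A minimal sketch in Python, with invented example data (none of these values come from the study):

```python
# Illustrative sketch (hypothetical data): sensitivity and specificity
# for a classifier that can also abstain with an "inconclusive" output.

def sensitivity_specificity(results):
    """results: list of (device_output, clinical_label) pairs, where
    device_output is 'pos', 'neg', or 'inconclusive' and clinical_label
    is True for a clinical ASD diagnosis."""
    # Inconclusive cases are excluded before computing either metric.
    determinate = [(o, y) for o, y in results if o != "inconclusive"]
    tp = sum(1 for o, y in determinate if o == "pos" and y)
    fn = sum(1 for o, y in determinate if o == "neg" and y)
    tn = sum(1 for o, y in determinate if o == "neg" and not y)
    fp = sum(1 for o, y in determinate if o == "pos" and not y)
    sensitivity = tp / (tp + fn)  # true positives among diagnosed cases
    specificity = tn / (tn + fp)  # true negatives among non-cases
    return sensitivity, specificity

# Hypothetical mix: 3 determinate cases and 1 inconclusive case.
demo = [("pos", True), ("neg", False), ("pos", False), ("inconclusive", True)]
sens, spec = sensitivity_specificity(demo)  # → 1.0, 0.5
```

Reporting the abstention rate alongside these metrics matters: a device can look very accurate on determinate outputs while declining to classify a large share of children.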


Development and multimodal validation of a substance misuse algorithm for referral to treatment using artificial intelligence (SMART-AI): A retrospective deep learning study

Afshar M, Sharma B, Dligach D, Oguss M, Brown R, Chhabra N, Thompson HM, Markossian T, Joyce C, Churpek MM, & Karnik NS (2022). Development and multimodal validation of a substance misuse algorithm for referral to treatment using artificial intelligence (SMART-AI): a retrospective deep learning study. The Lancet Digital Health, 4(6), e426–e435.

SMART-AI is a substance misuse algorithm to support referral to treatment using artificial intelligence. The tool is a machine learning classifier that identifies alcohol misuse, opioid misuse, and non-opioid drug misuse using clinical notes in the electronic health record. Using clinical notes from patients (N=16,917) collected during the first 24 hours of hospitalization, the primary analysis consisted of a prospective temporal validation examining misuse classification and its association with outcomes and treatment referrals. Manual screening identified that 3.5% of patients had some type of substance misuse, and 11% of these patients had more than one type. SMART-AI showed good calibration and validity, with false negative rates of 0.18-0.19 and false positive rates of 0.03 across non-Hispanic Black and non-Hispanic White subgroups. The results also show that prediction performance can change over time or across patient settings where the prevalence of substance misuse varies; there were also significant shifts during the COVID-19 pandemic that required the algorithm to be recalibrated. Overall, this study demonstrated that clinical notes from the electronic health record during initial hospitalization can be used with artificial intelligence to identify substance misuse accurately, and may potentially improve screening rates.
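The subgroup comparison reported above (false negative and false positive rates across non-Hispanic Black and non-Hispanic White patients) is a common fairness check for clinical classifiers. A minimal sketch of that kind of check, with entirely invented records (group labels "A"/"B" and all counts are hypothetical, not from the study):

```python
# Hypothetical sketch of a per-subgroup error-rate comparison:
# false-negative rate (missed misuse) and false-positive rate
# (incorrect misuse flag) computed separately for each group.
from collections import defaultdict

def rates_by_group(records):
    """records: iterable of (group, predicted_misuse, actual_misuse).
    Returns {group: (false_negative_rate, false_positive_rate)}."""
    counts = defaultdict(lambda: {"fn": 0, "pos": 0, "fp": 0, "neg": 0})
    for group, pred, actual in records:
        c = counts[group]
        if actual:                 # patient truly has substance misuse
            c["pos"] += 1
            c["fn"] += not pred    # missed by the classifier
        else:                      # patient does not have misuse
            c["neg"] += 1
            c["fp"] += pred        # incorrectly flagged
    return {g: (c["fn"] / c["pos"], c["fp"] / c["neg"])
            for g, c in counts.items()}

records_demo = [
    ("A", True, True), ("A", False, True), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", False, False),
]
print(rates_by_group(records_demo))
```

Comparable rates across groups, as the study reports, suggest the classifier's errors are not concentrated in one demographic subgroup; large gaps would be a signal to retrain or recalibrate.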