
Hot Topics of the Day | PHGKB

Hot Topics of the Day are picked by experts to capture the latest information and publications on public health genomics and precision health for various diseases and health topics. Sources include published scientific literature, reviews, blogs and popular press articles.

Sign up for MyPHGKB to receive the daily hot topic email alert.

Archived Hot Topics of the Day By Date
551 hot topic(s) found with the query "Artificial intelligence"

Reporting guidelines in medical artificial intelligence: a systematic review and meta-analysis
F Kolbinger et al, Comm Med, April 11, 2024 (Posted: Apr 12, 2024 9AM)

From the abstract: "AI reporting guidelines for medical research vary with respect to the quality of the underlying consensus process, breadth, and target research phase. Some guideline items such as reporting of study design and model performance recur across guidelines, whereas other items are specific to particular fields and research stages. Our analysis highlights the importance of reporting guidelines in clinical AI research and underscores the need for common standards that address the identified variations and gaps in current guidelines. Overall, this comprehensive overview could help researchers and public stakeholders reinforce quality standards for increased reliability, reproducibility, clinical validity, and public trust in AI research in healthcare. "


Utilizing geospatial artificial intelligence to map cancer disparities across health regions
A Fadiel et al, Sci Report, April 2, 2024 (Posted: Apr 02, 2024 9AM)

From the abstract: "We have developed an innovative tool, the Intelligent Catchment Analysis Tool (iCAT), designed to identify and address healthcare disparities across specific regions. Powered by Artificial Intelligence and Machine Learning, our tool employs a robust Geographic Information System (GIS) to map healthcare outcomes and disease disparities. iCAT allows users to query publicly available data sources, health system data, and treatment data, offering insights into gaps and disparities in diagnosis and treatment paradigms. "


Deep learning in cancer genomics and histopathology
M Unger et al, Genome Medicine, March 27, 2024 (Posted: Mar 27, 2024 1PM)

From the abstract: " Histopathology and genomic profiling are cornerstones of precision oncology and are routinely obtained for patients with cancer. Traditionally, histopathology slides are manually reviewed by highly trained pathologists. Genomic data, on the other hand, is evaluated by engineered computational pipelines. In both applications, the advent of modern artificial intelligence methods, specifically machine learning (ML) and deep learning (DL), have opened up a fundamentally new way of extracting actionable insights from raw data, which could augment and potentially replace some aspects of traditional evaluation workflows. "


How to support the transition to AI-powered healthcare.
et al. Nat Med 2024 3 (3) 609-610 (Posted: Mar 25, 2024 8AM)

From the article: "To make health systems more sustainable in the long term, incentivize artificial intelligence (AI) and digital technologies that are grounded on careful testing and real-world validation. We are confident that this transition to ‘AI-powered’ healthcare will occur and that it has the potential to bring widespread public good. At the same time, we believe that these benefits will be realized more steadily and more quickly with carefully designed clinical studies and evidence-based implementation of AI algorithms and devices in the real world."


The promise of AI in personalized breast cancer screening: are we there yet?
Despina Kontos et al. Nat Rev Clin Oncol 2024 3 (Posted: Mar 21, 2024 7AM)

From the abstract: " The benefits and potential harms of mammography-based screening for breast cancer are often a matter of debate. Here, I discuss the promises and limitations of a recent study that tested an artificial intelligence-based tool for the detection of breast cancer in digital mammograms in a large, prospective screening setting."


Recognition of Genetic Conditions After Learning With Images Created Using Generative Artificial Intelligence.
Rebekah L Waikel et al. JAMA Netw Open 2024 3 (3) e242609 (Posted: Mar 18, 2024 3PM)

From the abstract: " In this comparative effectiveness study, generative methods were used to create images of fake but realistic-appearing individuals with Kabuki and Noonan syndrome. Through online surveys, generated images were found to help residents recognize these syndromes and improved their confidence in this area compared with text-only descriptions, although real images were most helpful. These findings suggest that generative artificial intelligence could supplement genetics education for pediatric residents by helping teach the recognition of rare conditions. "


To warrant clinical adoption AI models require a multi-faceted implementation evaluation
D van De Sande et al, NPJ Digital Medicine, March 6, 2024 (Posted: Mar 06, 2024 9AM)

From the abstract: "Although artificial intelligence (AI) technology progresses at an unprecedented rate, our ability to translate these advancements into clinical value and adoption at the bedside remains comparatively limited. This paper reviews the current use of implementation outcomes in randomized controlled trials evaluating AI-based clinical decision support and finds limited adoption. To advance trust and clinical adoption of AI, there is a need to bridge the gap between traditional quantitative metrics and implementation outcomes to better grasp the reasons behind the success or failure of AI systems and improve their translation into clinical value. "


Why we should not mistake accuracy of medical AI for efficiency
KR Jongsma et al, NPJ Digital Medicine, March 4, 2024 (Posted: Mar 06, 2024 9AM)

From the abstract: "In the medical literature, promising results regarding accuracy of medical AI are presented as claims for its potential to increase efficiency. This elision of concepts is misleading and incorrect. The promise that AI will reduce human workload rests on a too narrow assessment of what constitutes workload in the first place. Human operators need new skills and deal with new responsibilities, these systems need an elaborate infrastructure and support system that all contribute to an increased amount of human work and short-term efficiency wins may become sources of long-term inefficiency. "


An ECG-based artificial intelligence model for assessment of sudden cardiac death risk
L Holmstrom et al, Comm Med, February 2024 (Posted: Mar 03, 2024 10AM)

From the abstract: "Conventional ECG-based algorithms could contribute to sudden cardiac death (SCD) risk stratification but demonstrate moderate predictive capabilities. Deep learning (DL) models use the entire digital signal and could potentially improve predictive power. We aimed to train and validate a 12-lead ECG-based DL algorithm for SCD risk assessment. The DL model achieves an AUROC of 0.889 (95% CI 0.861–0.917) for the detection of SCD cases vs. controls in the internal held-out test dataset, and is successfully validated in external SCD cases with an AUROC of 0.820 (0.794–0.847). "
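The AUROC and 95% confidence interval reported above can be illustrated with a minimal pure-Python sketch (hypothetical functions, not the study's code): AUROC is the probability that a randomly chosen positive case outranks a randomly chosen negative one, and the interval comes from a percentile bootstrap over resampled cases.

```python
import random

def auroc(labels, scores):
    # AUROC = probability a random positive scores above a random negative
    # (ties count half); equivalent to the normalized Mann-Whitney U statistic.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample cases with replacement, recompute
    # AUROC, and take the alpha/2 and 1 - alpha/2 empirical quantiles.
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:  # skip resamples missing one of the two classes
            stats.append(auroc(ys, [scores[i] for i in idx]))
    stats.sort()
    return stats[int(alpha / 2 * len(stats))], stats[int((1 - alpha / 2) * len(stats)) - 1]
```

In practice the bootstrap would resample the held-out test set of ECGs; the sketch works on any list of binary labels and model scores.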


Integrating artificial intelligence into healthcare systems: more than just the algorithm
JC Kwong et al, NPJ Digital Medicine, March 1, 2024 (Posted: Mar 01, 2024 0PM)

From the abstract: "Despite the rapid growth of artificial intelligence (AI) applications in healthcare, few models have progressed beyond retrospective development or validation, creating what is commonly called the “AI chasm”. Among the subset of models that have moved into randomized controlled trials, even fewer have demonstrated clinically meaningful benefits. This reality is a sobering reminder that translating AI algorithms from in silico environments to real-world clinical settings remains a formidable challenge. "


AI-based diabetes care: risk prediction models and implementation concerns
SCY Wang et al, NPJ Digital Medicine, February 15, 2024 (Posted: Feb 16, 2024 4PM)

From the abstract: " The utilization of artificial intelligence (AI) in diabetes care has focused on early intervention and treatment management. Notably, usage has expanded to predict an individual’s risk for developing type 2 diabetes. A scoping review shows that while most studies used unimodal AI models, multimodal approaches were superior because they integrate multiple types of data. However, creating multimodal models and determining model performance are challenging tasks given the multi-factored nature of diabetes. For both unimodal and multimodal models, there are also concerns of bias with the lack of external validations and representation of race, age, and gender in training data."


Deep learning models across the range of skin disease.
Kaushik P Venkatesh et al. NPJ Digit Med 2024 2 (1) 32 (Posted: Feb 13, 2024 9AM)

From the abstract: "We explore the evolving landscape of diagnostic artificial intelligence (AI) in dermatology, particularly focusing on deep learning models for a wide array of skin diseases beyond skin cancer. We critically analyze the current state of AI in dermatology, its potential in enhancing diagnostic accuracy, and the challenges it faces in terms of bias, applicability, and therapeutic recommendations. "


Physician-machine partnerships boost diagnostic accuracy, but bias persists.
et al. Nat Med 2024 2 (Posted: Feb 09, 2024 10AM)

From the abstract: "In a large-scale digital experiment on dermatology diagnosis, we found that specialists and generalists achieved diagnostic accuracy of 38% and 19%, respectively. With decision support from a fair deep learning system, the diagnostic accuracy of physicians improved by more than 33%, but the gap in accuracy of generalists widened across skin tones. "


PubMed and beyond: biomedical literature search in the age of artificial intelligence.
Qiao Jin et al. EBioMedicine 2024 2 104988 (Posted: Feb 06, 2024 1PM)

From the abstract: " Recent improvements in artificial intelligence (AI) have expanded functionality beyond keywords, but they might be unfamiliar to clinicians and researchers. In response, we present an overview of over 30 literature search tools tailored to common biomedical use cases, aiming at helping readers efficiently fulfill their information needs. We first discuss recent improvements and continued challenges of the widely used PubMed. Then, we describe AI-based literature search tools catering to five specific information needs:"


AI-Generated Clinical Summaries Require More Than Accuracy.
Katherine E Goodman et al. JAMA 2024 1 (Posted: Jan 30, 2024 8AM)

From the article: "Little more than a year after ChatGPT’s public release, clinical applications of generative artificial intelligence and large language models (LLMs) are advancing rapidly. In the long term, LLMs may revolutionize much of clinical medicine, from patient diagnosis to treatment. In the short term, however, it is the everyday clinical tasks that LLMs will change most quickly and with the least scrutiny. Specifically, LLMs that summarize clinical notes, medications, and other forms of patient data are in advanced development and could soon reach patients. Summarization, though, is not as simple as it seems, and variation in LLM-generated summaries could exert important and unpredictable effects on clinician decision-making. "


Transparency of artificial intelligence/machine learning-enabled medical devices.
Aubrey A Shick et al. NPJ Digit Med 2024 1 (1) 21 (Posted: Jan 29, 2024 8AM)

From the article: " The United States Food and Drug Administration (FDA) is reviewing an increasing number of applications for AI/ML devices, with the number receiving FDA marketing authorization nearing seven hundred as of October 2023. AI/ML devices have unique considerations during their development and use, including those for usability, equity of access, management of performance bias, the potential for continuous learning, and stakeholder (manufacturer, patient, caregiver, healthcare provider, etc.) accountability. These considerations impact not only the responsible development and use of AI/ML devices but also the regulation of such devices"


Understanding Liability Risk from Using Health Care Artificial Intelligence Tools.
Michelle M Mello et al. N Engl J Med 2024 1 (3) 271-278 (Posted: Jan 20, 2024 10AM)

From the article: "Optimism about the explosive potential of artificial intelligence (AI) to transform medicine is tempered by worry about what it may mean for the clinicians being “augmented.” One question is especially problematic because it may chill adoption: when AI contributes to patient injury, who will be held responsible? Some attorneys counsel health care organizations with dire warnings about liability1 and dauntingly long lists of legal concerns.2 Unfortunately, liability concern can lead to overly conservative decisions,3 including reluctance to try new things. "


Medical AI falters when assessing patients it hasn’t seen
M Nadaf, Nature, January 11, 2024 (Posted: Jan 12, 2024 6AM)

From the article: "Computer algorithms that are designed to help doctors treat people with schizophrenia do not adapt well to fresh, unseen data, a study has found. Such tools — which use artificial intelligence (AI) to spot patterns in large data sets and predict how individuals will respond to a particular treatment — are central to precision medicine, in which health-care professionals try to tailor treatment to each person. "


A Nationwide Network of Health AI Assurance Laboratories.
Nigam H Shah et al. JAMA 2023 12 (Posted: Dec 22, 2023 10AM)

From the abstract: "Given the importance of rigorous development and evaluation standards needed of artificial intelligence (AI) models used in health care, nationwide accepted procedures to provide assurance that the use of AI is fair, appropriate, valid, effective, and safe are urgently needed. While there are several efforts to develop standards and best practices to evaluate AI, there is a gap between having such guidance and the application of such guidance to both existing and new AI models being developed. As of now, there is no publicly available, nationwide mechanism that enables objective evaluation and ongoing assessment of the consequences of using health AI models in clinical care settings. "


Organizational Factors in Clinical Data Sharing for Artificial Intelligence in Health Care.
Alaa Youssef et al. JAMA Netw Open 2023 12 (12) e2348422 (Posted: Dec 20, 2023 9AM)

From the abstract: "Are organizational factors associated with the motivation of health organizations to share clinical data for artificial intelligence (AI) development? In this qualitative study, 27 leaders from 18 health organizations were interviewed, and a predominant concern among them was data privacy risks. Most stakeholders viewed these as a substantial barrier for public health data sharing due to potential liability and reputational consequences; however, they identified external incentives as key factors for enhancing organizational motivation and fostering both within and across-sector data-sharing collaborations for AI development. The findings of this study suggest that data-sharing policies should be rooted in feasibility and incentivization strategies to promote responsible and equitable AI development in the health care sector. "


Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being
H Li, NPJ Digital Medicine, December 19, 2023 (Posted: Dec 20, 2023 9AM)

From the abstract: "Conversational artificial intelligence (AI), particularly AI-based conversational agents (CAs), is gaining traction in mental health care. Despite their growing usage, there is a scarcity of comprehensive evaluations of their impact on mental health and well-being. This systematic review and meta-analysis aims to fill this gap by synthesizing evidence on the effectiveness of AI-based CAs in improving mental health and factors influencing their effectiveness and user experience. "


Medicine's Lessons for AI Regulation.
Laura Stark et al. N Engl J Med 2023 12 (24) 2213-2215 (Posted: Dec 14, 2023 8AM)

From the article: " The capacity of AI is rapidly evolving — as are public concerns about norms of use, corporate accountability, and effects on global security, labor, climate, and other areas. The history of human-subjects research suggests that it will be important to keep rules for AI as nimble as the science they regulate."


A medical multimodal large language model for future pandemics
Liu et al, NPJ Digital Medicine, December 2, 2023 (Posted: Dec 02, 2023 8AM)

From the paper: "With the goal of quick deployment of tools for rapid response to rare diseases, we present the medical multimodal large language model (Med-MLLM) framework. We evaluate the effectiveness of Med-MLLM using the COVID-19 pandemic “in replay”, showing that Med-MLLM is able to accomplish accurate COVID-19 decision-support tasks with limited labelled data. In contrast, existing efforts usually require thousands, or even more, labelled data to achieve similar performance. "


Machine learning improves prediction of clinical outcomes for invasive breast cancers.
et al. Nat Med 2023 11 (Posted: Dec 01, 2023 7AM)

From the article: " A prognostic model for invasive breast cancer that is based on interpretable measurements of epithelial, stromal, and immune components outperforms histologic grading by expert pathologists. This model could improve clinical management of patients diagnosed with invasive breast cancer and address the concerns of pathologists about artificial intelligence (AI) trustworthiness by providing transparent and explainable predictions."


Psychological factors underlying attitudes toward AI tools.
Julian De Freitas et al. Nat Hum Behav 2023 11 (11) 1845-1854 (Posted: Nov 22, 2023 9AM)

From the abstract: "What are the psychological factors driving attitudes toward artificial intelligence (AI) tools, and how can resistance to AI systems be overcome when they are beneficial? Here we organize the main sources of resistance into five main categories: opacity, emotionlessness, rigidity, autonomy and group membership. We relate each of these barriers to fundamental aspects of cognition, then cover empirical studies providing correlational or causal evidence for how the barrier influences attitudes toward AI tools. "


The Future of Clinical Trials: Artificial to Augmented to Applied Intelligence
AF Hernandez, JAMA, November 11, 2023 (Posted: Nov 11, 2023 4PM)

From the paper: "Artificial intelligence (AI) is already revolutionizing the conduct of science and the practice of medicine. From basic discovery to gaining insights about someone’s health through machine-eyes, new applications of AI are announced almost daily. AI is increasingly being used to enhance the practice of medicine with action ranging from augmenting human judgment to full task automation. The potential for AI to transform clinical research is also great, and the progress already observed in discovery science and applications to medicine are critically needed in the clinical research domain. "


Quantifying the impact of AI recommendations with explanations on prescription decision making
M Nagendran et al, NPJ Digital Medicine, November 7, 2023 (Posted: Nov 07, 2023 3PM)

From the abstract: "The influence of AI recommendations on physician behaviour remains poorly characterised. We assess how clinicians’ decisions may be influenced by additional information more broadly, and how this influence can be modified by either the source of the information (human peers or AI) or the presence or absence of an AI explanation (XAI, here using simple feature importance). We used a modified between-subjects design where intensive care doctors (N = 86) were presented on a computer for each of 16 trials with a patient case and prompted to prescribe continuous values for two drugs. "


Artificial intelligence exceeds humans in epidemiological job coding.
Mathijs A Langezaal et al. Commun Med (Lond) 2023 11 (1) 160 (Posted: Nov 06, 2023 10AM)

From the abstract: " We developed OPERAS, a customizable decision support system for epidemiological job coding. Using 812,522 entries, we developed and tested classification models for the Professions et Catégories Socioprofessionnelles (PCS)2003, Nomenclature d’Activités Française (NAF)2008, International Standard Classifications of Occupation (ISCO)-88, and ISCO-68. Each code comes with an estimated correctness measure to identify instances potentially requiring expert review. Here, OPERAS’ decision support enables an increase in efficiency and accuracy of the coding process"
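OPERAS attaches an estimated correctness measure to each assigned occupation code so that low-confidence items can be routed to expert review. A minimal sketch of that triage step (hypothetical function name and threshold, not the authors' implementation):

```python
def triage(predictions, threshold=0.9):
    """Split coded entries into auto-accepted and expert-review queues.

    predictions: iterable of (free-text entry, assigned code, estimated
    correctness in [0, 1]). Items below the threshold go to expert review.
    """
    auto, review = [], []
    for entry, code, confidence in predictions:
        (auto if confidence >= threshold else review).append((entry, code))
    return auto, review
```

Raising the threshold trades coding throughput for accuracy: more items reach an expert, fewer are accepted automatically.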


Can Predictive AI Improve Early Detection of Sepsis and Other Conditions?
R Volker et al, JAMA, November 1, 2023 (Posted: Nov 01, 2023 1PM)

From the article: "AI and medicine intersect on a rapidly changing terrain where the possibilities are tremendous—tools that aid in the early detection of sepsis, for example, or help streamline transitions of care. AI is also ready for development in preventing pressure ulcers. In some areas of health care, AI may be ready for prime time, but in others, more research is needed to adapt these tools for real-world clinical use. "


A scoping review of artificial intelligence-based methods for diabetes risk prediction.
Farida Mohsen et al. NPJ Digit Med 2023 10 (1) 197 (Posted: Oct 30, 2023 9AM)

From the abstract: "The increasing prevalence of type 2 diabetes mellitus (T2DM) and its associated health complications highlight the need to develop predictive models for early diagnosis and intervention. While many artificial intelligence (AI) models for T2DM risk prediction have emerged, a comprehensive review of their advancements and challenges is currently lacking. This scoping review maps out the existing literature on AI-based models for T2DM prediction, adhering to the PRISMA extension for Scoping Reviews guidelines. "


The value of standards for health datasets in artificial intelligence-based applications.
Anmol Arora et al. Nat Med 2023 10 (Posted: Oct 27, 2023 9AM)

From the abstract: "Artificial intelligence as a medical device is increasingly being applied to healthcare for diagnosis, risk stratification and resource allocation. However, a growing body of evidence has highlighted the risk of algorithmic bias, which may perpetuate existing health inequity. This problem arises in part because of systemic inequalities in dataset curation, unequal opportunity to participate in research and inequalities of access. "


An AI revolution is brewing in medicine. What will it look like?
Mariana Lenharo et al. Nature 2023 10 (7984) 686-688 (Posted: Oct 25, 2023 9AM)

From the paper: "Although AI enthusiasts now tend to steer clear of bold claims about machines replacing doctors, many say that these models could overcome some of the current limitations of medical AI, and they could one day surpass physicians in certain scenarios. “The real goal to me is for AI to help us do the things that humans aren’t very good at.” But there’s a long journey ahead before these latest tools can be used for clinical care in the real world. "


New AI Tools Must Have Health Equity in Their DNA
Y Hswen et al, JAMA October 11, 2023 (Posted: Oct 11, 2023 3PM)

From the abstract: "When it comes to the question around equity, I think the possibility of bias is enormous. The training sets—the way that we build these models and how we train them—if they’re built off of existing ways in which we work, existing ways in which our societies and our medical systems are structured, there’s a great risk of it introducing or perpetuating the biases that we’ve been experiencing as a system for generations now and in some ways for hundreds of years. We have to deliberately design that out of AI. That’s a really important part of how we’re going to succeed if we’re going to actually build AIs that are mindful of health equity in the future."


Autonomous AI systems in the face of liability, regulations and costs.
Agustina D Saenz et al. NPJ Digit Med 2023 10 (1) 185 (Posted: Oct 07, 2023 8AM)

From the abstract: "This perspective explores the liability implications for physicians, hospitals, and creators of AI technology, as well as the evolving regulatory landscape and payment models. Physicians may be favored in malpractice cases if they follow rigorously validated AI recommendations. However, AI developers may face liability for failing to adhere to industry-standard best practices during development and implementation. "


Improving model fairness in image-based computer-aided diagnosis.
Mingquan Lin et al. Nat Commun 2023 10 (1) 6261 (Posted: Oct 07, 2023 8AM)

From the abstract: "Deep learning has become a popular tool for computer-aided diagnosis using medical images, sometimes matching or exceeding the performance of clinicians. However, these models can also reflect and amplify human bias, potentially resulting in inaccurate or missed diagnoses. Despite this concern, the problem of improving model fairness in medical image classification by deep learning has yet to be fully studied. "


Ethics of artificial intelligence in prenatal and pediatric genomic medicine.
Simon Coghlan et al. J Community Genet 2023 10 (Posted: Oct 07, 2023 8AM)

From the abstract: " This paper examines the ethics of introducing emerging forms of artificial intelligence (AI) into prenatal and pediatric genomic medicine. Application of genomic AI to these early life settings has not received much attention in the ethics literature. We focus on three contexts: (1) prenatal genomic sequencing for possible fetal abnormalities, (2) rapid genomic sequencing for critically ill children, and (3) reanalysis of genomic data obtained from children for diagnostic purposes."


Where Medical Statistics Meets Artificial Intelligence.
David J Hunter et al. N Engl J Med 2023 9 (13) 1211-1219 (Posted: Sep 28, 2023 11AM)

From the paper: " The very features that make AI a valuable tool for data analysis are the same ones that make it vulnerable from a statistical perspective. This paradox is particularly pertinent for medical science. Techniques that are adequate for targeted advertising to voters and consumers or that enhance weather prediction may not meet the rigorous demands of risk prediction or diagnosis in medicine.7,8 In this review article, we discuss the statistical challenges in applying AI to biomedical data analysis and the delicate balance that researchers face in wishing to learn as much as possible from data while ensuring that data-driven conclusions are accurate, robust, and reproducible. "


AI can help to speed up drug discovery - but only if we give it the right data.
Marissa Mock et al. Nature 2023 9 (7979) 467-470 (Posted: Sep 20, 2023 7AM)

From the paper: "Artificial-intelligence tools that enable companies to share data about drug candidates while keeping sensitive information safe can unleash the potential of machine learning and cutting-edge lab techniques, for the common good. "


Revolutionizing Cancer Research: The Impact of Artificial Intelligence in Digital Biobanking
C Frascarelli et al, J Per Med, September 2023 (Posted: Sep 18, 2023 11AM)

From the abstract: "As digital pathology and artificial intelligence (AI) have entered the precision medicine arena, biobanks are progressively transitioning from mere biorepositories to integrated computational databanks. Consequently, the application of AI and machine learning on these biobank datasets holds huge potential to profoundly impact cancer research. Methods. In this paper, we explore how AI and machine learning can respond to the digital evolution of biobanks with flexibility, solutions, and effective services. "


A foundation model for generalizable disease detection from retinal images.
Yukun Zhou et al. Nature 2023 9 (Posted: Sep 14, 2023 8AM)

From the abstract: "Medical artificial intelligence (AI) offers great potential for recognizing signs of health conditions in retinal images and expediting the diagnosis of eye diseases and systemic disorders1. However, the development of AI models requires substantial annotation and models are usually task-specific with limited generalizability to different clinical applications2. Here, we present RETFound, a foundation model for retinal images that learns generalizable representations from unlabelled retinal images and provides a basis for label-efficient model adaptation in several applications. "
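RETFound's label-efficient adaptation follows a common pattern: a frozen, self-supervised encoder maps each retinal image to a feature vector, and only a small task head is trained on the few labelled examples. A toy pure-Python sketch of that final step, assuming the encoder's feature vectors are already computed (all names hypothetical):

```python
import math
import random

def linear_probe(features, labels, lr=0.1, epochs=200, seed=0):
    # Train a logistic-regression head on frozen encoder features via SGD.
    # Only these d + 1 parameters are learned; the encoder stays fixed.
    rng = random.Random(seed)
    d = len(features[0])
    w = [rng.gauss(0, 0.01) for _ in range(d)]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    # Probability of the positive class for one feature vector.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))
```

The point of the sketch is the division of labour: the expensive, label-free pretraining produces the representation, so each downstream clinical task needs only a handful of labelled images to fit the head.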


AI detects eye disease and risk of Parkinson's from retinal images.
Mariana Lenharo et al. Nature 2023 9 (Posted: Sep 14, 2023 8AM)

From the article: " Scientists have developed an artificial intelligence (AI) tool capable of diagnosing and predicting the risk of developing multiple health conditions — from ocular diseases to heart failure to Parkinson’s disease — all on the basis of people’s retinal images. AI tools have been trained to detect disease using retinal images before, but what makes the new tool — called RETFound — special is that it was developed using a method known as self-supervised learning. That means that the researchers did not have to analyse each of the 1.6 million retinal images used for training and label them as ‘normal’ or ‘not normal’, for instance. "


Considerations for addressing bias in artificial intelligence for health equity
Abramoff MD, et al, NPJ Digital Medicine, September 12, 2023 (Posted: Sep 13, 2023 0PM)

From the abstract: "Health equity is a primary goal of healthcare stakeholders: patients and their advocacy groups, clinicians, other providers and their professional societies, bioethicists, payors and value based care organizations, regulatory agencies, legislators, and creators of artificial intelligence/machine learning (AI/ML)-enabled medical devices. Lack of equitable access to diagnosis and treatment may be improved through new digital health technologies, especially AI/ML, but these may also exacerbate disparities, depending on how bias is addressed. "


Recommendations for the use of pediatric data in artificial intelligence and machine learning ACCEPT-AI
V Muralidahan et al. NPJ Digital Medicine, September 6, 2023 (Posted: Sep 06, 2023 9AM)

From the abstract: "ACCEPT-AI is a framework of recommendations for the safe inclusion of pediatric data in artificial intelligence and machine learning (AI/ML) research. It has been built on fundamental ethical principles of pediatric and AI research and incorporates age, consent, assent, communication, equity, protection of data, and technological considerations. ACCEPT-AI has been designed to guide researchers, clinicians, regulators, and policymakers and can be utilized as an independent tool, or adjunctively to existing AI/ML guidelines."


Study proposes use of artificial intelligence to diagnose autism spectrum disorder
R Muniz, Medical XPress, August 2023 (Posted: Aug 18, 2023 11AM)

Much recent research proposes machine learning methods for diagnosing ASD but relies on a single statistical parameter, ignoring brain network organization; accounting for that organization is the innovation of this study, the article notes. The analysis of fMRI data highlighted changes in certain brain regions associated with cognitive, emotional, learning, and memory processes.


AI in Public Health
J Pina, ASTHO Blog, August 2023 (Posted: Aug 17, 2023 11AM)

Generative Artificial Intelligence (AI) tools have become increasingly available and accessible in recent years, empowering individuals and organizations to harness the potential of AI and machine learning. These newly available resources have sparked great curiosity within the public health community, and ASTHO members are considering the value of these tools in practice. Through ASTHO’s work in public health data modernization, and broadly in population health innovation, we’ve received many requests to address, recognize, and expound on the value and potential of AI in our field. However, as with any disruptive technology, responsible and ethical use is essential to ensure that these tools are employed in a manner that respects privacy, avoids misinformation, minimizes bias and inequities, and upholds societal well-being.


AI in Medicine—Focus on Clinical Outcomes, Patient-Centered Care, Quality, and Equity
R Khera et al, JAMA, August 11, 2023 (Posted: Aug 11, 2023 11AM)

Recent emergence of large language models (LLMs) in highly visible and interactive applications has ignited interest in how new AI technologies can improve medicine and health for patients, the public, clinicians, health systems, and more. The rapidity of these developments, their potential impact on health care, and JAMA’s mission to publish the best science that advances medicine and public health compel the journal to renew its commitment to facilitating the rigorous scientific development, evaluation, and implementation of AI in health care.


Genetic Profiles Affect Smokers' Lung Cancer Risk
CN Martimez, Medscape, August 2023 (Posted: Aug 11, 2023 11AM)

Smokers with extreme phenotypes of high and low risk of developing tobacco-associated lung cancer have different genetic profiles, according to a multidisciplinary study conducted by specialists from the Cancer Center at the University of Navarra Clinic (CUN). The study was conducted using DNA from 133 heavy smokers who had not developed lung cancer at a mean age of 80 years, and from another 116 heavy smokers who had developed this type of cancer at a mean age of 50 years. This DNA was sequenced using next-generation techniques, and the results were analyzed using bioinformatics and artificial intelligence.


Get up to Speed on the Latest Developments in the Field! Register for the ORISE Current Issues in Genomics and Precision Public Health Online Training Event, September 7–8, 2023.
W White et al, CDC Blog Post, August 9, 2023 (Posted: Aug 09, 2023 11AM)

Advances in genomics, data science, machine learning, and artificial intelligence are transforming practice. Next generation public health and medical workforces need to understand these developments and how they can be used to benefit population health. Recognizing this challenge, Oak Ridge Institute for Science and Education (ORISE) is partnering with the Office of Genomics and Precision Public Health at the Centers for Disease Control and Prevention (CDC) to offer a free 2-day in-person training event covering the latest developments in these fields: Current Issues in Genomics and Precision Public Health – Using Genomics and Big Data to Improve Population Health and Reduce Health Inequities.


PhenoScore quantifies phenotypic variation for rare genetic diseases by combining facial analysis with other clinical features using a machine-learning framework.
Alexander J M Dingemans et al. Nat Genet 2023 8 (Posted: Aug 08, 2023 8PM)

We developed PhenoScore: an open-source, artificial intelligence-based phenomics framework, combining facial recognition technology with Human Phenotype Ontology data analysis to quantify phenotypic similarity. Here we show PhenoScore’s ability to recognize distinct phenotypic entities by establishing recognizable phenotypes for 37 of 40 investigated syndromes against clinical features observed in individuals with other neurodevelopmental disorders and show it is an improvement on existing approaches.


Collaborative strategies for deploying AI-based physician decision support systems: challenges and deployment approaches.
Mirja Mittermaier et al. NPJ Digit Med 2023 8 (1) 137 (Posted: Aug 08, 2023 8PM)

AI-based prediction models demonstrate performance equal to or surpassing that of experienced physicians in various research settings. However, only a few have made it into clinical practice. Further, there is no standardized protocol for integrating AI-based physician support systems into the daily clinical routine to improve healthcare delivery. Generally, AI/physician collaboration strategies have not been extensively investigated.


An AI-Enhanced Electronic Health Record Could Boost Primary Care Productivity.
Jeffrey E Harris et al. JAMA 2023 8 (Posted: Aug 08, 2023 8PM)

More than a few commentators have seriously inquired whether artificial intelligence (AI) could ultimately replace many clinicians. The far likelier prospect, however, is that the newly emerging technology will enhance clinical productivity. To be sure, AI-based pattern recognition software can already scan retinal photos for complications of diabetes, detect tuberculosis on chest x-rays, and evaluate screening mammograms. And some AI applications have been found to be comparable if not superior to human clinical judgment.


Rebooting cancer screening with artificial intelligence
S Adams et al, Lancet, August 5, 2023 (Posted: Aug 07, 2023 10AM)

Cancer screening typically relies on a reductionist, all-comer population approach, with screening eligibility based predominantly on age. This approach does not account for the multidimensional complexity of each individual, including a person's biological, physiological, and environmental data, and can miss those who may be at high risk of disease. As the total number of cancer cases continues to increase, and at younger ages, a rebooting of cancer screening is needed.


AI and Medical Education — A 21st-Century Pandora’s Box
A Cooper et al, NEJM, August 3, 2023 (Posted: Aug 02, 2023 6PM)

Many valid concerns have been raised about AI’s effects on medicine, including the propensity for AI to make up information that it then presents as fact (termed a “hallucination”), its implications for patient privacy, and the risk of biases being baked into source data. But we worry that the focus on these immediate challenges obscures many of the broader implications that AI could have for medical education — in particular, the ways in which this technology could affect the thought structures and practice patterns of medical trainees and physicians for generations to come.


Deep-learning-enabled protein–protein interaction analysis for prediction of SARS-CoV-2 infectivity and variant evolution
G Wang et al, Nature Medicine, July 31, 2023 (Posted: Aug 01, 2023 9AM)

Host–pathogen interactions and pathogen evolution are underpinned by protein–protein interactions between viral and host proteins. An understanding of how viral variants affect protein–protein binding is important for predicting viral–host interactions, such as the emergence of new pathogenic SARS-CoV-2 variants. Here we propose an artificial intelligence-based framework called UniBind, in which proteins are represented as a graph at the residue and atom levels. UniBind integrates protein three-dimensional structure and binding affinity and is capable of multi-task learning for heterogeneous biological data integration.


Roadmap on the use of artificial intelligence for imaging of vulnerable atherosclerotic plaque in coronary arteries.
Bernhard Föllmer et al. Nat Rev Cardiol 2023 7 (Posted: Jul 27, 2023 7AM)

Artificial intelligence (AI) is likely to revolutionize the way medical images are analysed and has the potential to improve the identification and analysis of vulnerable or high-risk atherosclerotic plaques in coronary arteries, leading to advances in the treatment of coronary artery disease. In this Roadmap, we review existing evidence on the application of AI to the imaging of vulnerable plaque in coronary arteries and provide consensus recommendations developed by an interdisciplinary group of experts on AI and non-invasive and invasive coronary imaging.


How to fight antibiotic resistance.
Mike May et al. Nat Med 2023 7 (7) 1583-1586 (Posted: Jul 19, 2023 7AM)

Researchers use artificial intelligence and bioinformatics to discover new antimicrobials, while governments pilot financial incentives for pharma. Antimicrobial resistance (AMR) to existing medicines creates one of the biggest dangers for global health. According to the Antimicrobial Resistance Collaborators, AMR had a role in 4.95 million deaths in 2019, with the highest death rate — 27.3 deaths per 100,000 people — found in sub-Saharan Africa. This is a problem that only promises to get worse.


Accelerating science with human-aware artificial intelligence.
Jamshid Sourati et al. Nat Hum Behav 2023 7 (Posted: Jul 17, 2023 8AM)

Artificial intelligence (AI) models trained on published scientific findings have been used to invent valuable materials and targeted therapies, but they typically ignore the human scientists who continually alter the landscape of discovery. Here we show that incorporating the distribution of human expertise by training unsupervised models on simulated inferences that are cognitively accessible to experts dramatically improves (by up to 400%) AI prediction of future discoveries beyond models focused on research content alone.


Prospective validation of dermoscopy-based open-source artificial intelligence for melanoma diagnosis (PROVE-AI study)
M Marchetti et al, NPJ Digital Medicine, July 12, 2023 (Posted: Jul 12, 2023 7AM)

The use of artificial intelligence (AI) has the potential to improve the assessment of lesions suspicious of melanoma, but few clinical studies have been conducted. We validated the accuracy of an open-source, non-commercial AI algorithm for melanoma diagnosis and assessed its potential impact on dermatologist decision-making. We conducted a prospective, observational clinical study to assess the diagnostic accuracy of the AI algorithm (ADAE) in predicting melanoma from dermoscopy skin lesion images.


Artificial Intelligence in Clinical Diagnosis: Opportunities, Challenges, and Hype.
Prathit A Kulkarni et al. JAMA 2023 7 (Posted: Jul 07, 2023 9AM)

Generative artificial intelligence (AI) has recently been hailed as a promising tool to improve health care quality. One study compared output from the AI chatbot for medical questions with answers from physicians1; other studies have evaluated the AI chatbot’s responses to sample clinical vignettes. A foundational aspect of high-quality health care—making a correct and timely diagnosis—remains a challenge in modern medicine despite decades of technological advances. Therefore, any emerging technology with potential to reduce diagnostic errors warrants serious examination.


The Challenges for Regulating Medical Use of ChatGPT and Other Large Language Models.
Timo Minssen et al. JAMA 2023 7 (Posted: Jul 07, 2023 9AM)

The introduction of artificial intelligence (AI) into medical devices, decision support, and clinical practice is not new, with a particular uptick in investment and deployment within the past decade. Regulators (eg, the US Food and Drug Administration, the European Medicines Agency, and the National Medical Products Administration), intergovernmental organizations, civil society groups, institutional review boards at hospitals, and others have worked hard to define the scope of what AI applications should require review and approval, implementing rules in a fast-changing terrain with mixed results.


Generative AI in Health Care and Liability Risks for Physicians and Safety Concerns for Patients.
Mindy Duffourc et al. JAMA 2023 7 (Posted: Jul 07, 2023 9AM)

Generative AI is being heralded in the medical field for its potential to ease the long-lamented burden of medical documentation by generating visit notes, treatment codes, and medical summaries. Physicians and patients might also turn to generative AI to answer medical questions about symptoms, treatment recommendations, or potential diagnoses. While these tools may improve patient care, the liability implications of using AI to generate health information are still in flux.


Health Care Privacy Risks of AI Chatbots.
Genevieve P Kanter et al. JAMA 2023 7 (Posted: Jul 07, 2023 9AM)

With the debut of ChatGPT, clinicians and health systems are embracing a brainy, fluent colleague eager to assist with some of the most thankless tasks in medicine. The promise of an artificial intelligence (AI)–powered chatbot that has passed the US Medical Licensing Examination1 and can also—without complaining—prepare structured medical notes from a mélange of clinical facts, identify billing codes, and respond to patient portal messages appears to be an unmitigated boon.


The imperative for regulatory oversight of large language models (or generative AI) in healthcare
B Mesko et al, NPJ Digital Medicine, July 6, 2023 (Posted: Jul 06, 2023 8AM)

The regulation of generative AI in medicine and healthcare without damaging their exciting and transformative potential is a timely and critical challenge to ensure safety, maintain ethical standards, and protect patient privacy. We argue that regulatory oversight should assure medical professionals and patients can use LLMs without causing harm or compromising their data or privacy. This paper summarizes our practical recommendations for what we can expect from regulators to bring this vision to reality.


Can ChatGPT understand genetics?
FE Streib, EJHG, July 5, 2023 (Posted: Jul 06, 2023 8AM)

A recent study assessed the application of the chatbot in genetics by testing its expertise. The reason this needs to be studied is that when training ChatGPT, very large corpora of text about essentially all conceivable topics are used, including to some extent publications about biomedical, medical and clinical research utilizing genetics and genomics. However, ChatGPT is not designed to focus on any of these areas in particular; it is a generic tool. This implies that, currently, the expert level of this tool in particular application domains is largely unknown, which means that in some areas ChatGPT could be “competent” while in others it could fail, or it could even be a polymath.


Artificial Intelligence in Molecular Medicine
B Gomes et al, NEJM, July 5, 2023 (Posted: Jul 06, 2023 8AM)

Deep learning, a powerful subset of machine learning that includes the use of deep neural networks, has had high-profile applications in image object recognition, voice recognition, autonomous driving, and virtual assistance. These approaches are now being applied in medicine to yield clinically directive medical information. In this review article, we briefly describe the methods used to generate high-dimensional molecular data and then focus on the key role that machine learning plays in the clinical application of such data.


Embracing generative AI in health care
The Lancet Regional Health, July 2023 (Posted: Jul 06, 2023 8AM)

To date, AI technologies have had limited, yet considerable, applications in health care. Such technologies have been used to improve the analysis of medical images such as x-rays, CT scans, and MRIs for disease diagnosis; for extracting and analyzing information from electronic health records; for personalised medicine; for remote monitoring with wearable devices, sensors, and home monitoring systems; and for drug discovery and development. In contrast, GenAI has the potential to transform clinical workflows and the way doctors work. For example, at the basic level, GenAI can help health-care professionals interpret data such as a patient's medical history, imaging records, genomics, or laboratory results with a simple query, even if the information is stored across different formats and locations.


Development and validation of an artificial intelligence model for the early classification of the aetiology of meningitis and encephalitis: a retrospective observational study
BK Choi et al, Ebiomedicine, July 2023 (Posted: Jun 26, 2023 8AM)

Early diagnosis and appropriate treatment are essential in meningitis and encephalitis management. We aimed to implement and verify an artificial intelligence (AI) model for early aetiological determination of patients with encephalitis and meningitis and identify important variables in the classification process. Between January 1, 2006, and June 30, 2021, 283 patients were enrolled in the training/test dataset. An ensemble model with extreme gradient boosting and TabNet showed the best performance among the eight AI models with various settings in the external validation dataset (n = 220); accuracy, 0.8909; precision, 0.8987; recall, 0.8909; F1 score, 0.8948; AUROC, 0.9163. The AI model outperformed all clinicians.
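The performance figures reported above (accuracy, precision, recall, F1) are standard classification metrics derived from a model's predicted versus true labels. As a purely illustrative sketch of how such numbers are computed on a hypothetical toy dataset, not the authors' actual evaluation pipeline:

```python
# Hypothetical toy example of the classification metrics quoted above
# (accuracy, precision, recall, F1). Illustrative only; AUROC, which the
# study also reports, additionally requires predicted probabilities.

def classification_metrics(y_true, y_pred):
    # Tally the confusion-matrix cells for a binary problem
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Ten hypothetical patients: 1 = target aetiology, 0 = other
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]
print(classification_metrics(y_true, y_pred))
```

In a multi-class setting such as aetiology classification, these metrics are typically averaged across classes (e.g., weighted by class frequency), which is likely how the single summary figures in the study were obtained.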


Co-evolution of epidemiology and artificial intelligence: challenges and opportunities.
Joohon Sung et al. Int J Epidemiol 2023 6 (Posted: Jun 24, 2023 10AM)

Artificial intelligence (AI), also often referred to as machine learning (ML) and deep learning (DL) is an automated process whereby information is extracted from a given dataset using computing techniques to create an algorithm for making predictions and/or classifications.1 The key difference between AI and classic epidemiology is that the latter builds models based on explicit assumptions about what matters and how, so that the results can be directly interpretable, whereas AI builds algorithms in essence for predictive models discovered from the data, without necessarily understanding why.


Systematic review and meta-analysis of the effectiveness of chatbots on lifestyle behaviours
B Singh et al, NPJ Digital Medicine (Posted: Jun 24, 2023 10AM)

Nineteen trials were included. Sample sizes ranged between 25–958, and mean participant age ranged between 9–71 years. Most interventions (n = 15, 79%) targeted physical activity, and most trials had a low-quality rating (n = 14, 74%). Meta-analysis results showed significant effects (all p < 0.05) of chatbots for increasing total physical activity (SMD = 0.28 [95% CI = 0.16, 0.40]), daily steps (SMD = 0.28 [95% CI = 0.17, 0.39]), MVPA (SMD = 0.53 [95% CI = 0.24, 0.83]), fruit and vegetable consumption (SMD = 0.59 [95% CI = 0.25, 0.93]), sleep duration (SMD = 0.44 [95% CI = 0.32, 0.55]) and sleep quality (SMD = 0.50 [95% CI = 0.09, 0.90]).
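The SMD (standardized mean difference) values pooled above are, in a typical meta-analysis, computed per trial as the between-group difference in means divided by the pooled standard deviation (Cohen's d). A minimal sketch under that assumption, with entirely hypothetical trial numbers:

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) for two independent arms."""
    # Pooled standard deviation across treatment and control arms
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical trial: chatbot arm averages 7500 daily steps (SD 2000, n=50)
# vs. 6900 daily steps in controls (SD 2000, n=50)
print(round(cohens_d(7500, 2000, 50, 6900, 2000, 50), 2))  # -> 0.3
```

A meta-analysis then combines one such d (often bias-corrected as Hedges' g) per trial, weighting each by the inverse of its variance, to produce the pooled SMDs and confidence intervals reported in the review.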


Ensuring fair, safe, and interpretable artificial intelligence-based prediction tools in a real-world oncological setting
R George et al, Comm Med, June 22, 2023 (Posted: Jun 22, 2023 7AM)

Cancer patients often experience treatment-related symptoms which, if uncontrolled, may require emergency department admission. We developed models identifying breast or genitourinary cancer patients at the risk of attending emergency department (ED) within 30-days and demonstrated the development, validation, and proactive approach to in-production monitoring of an artificial intelligence-based predictive model during a 3-month simulated deployment at a cancer hospital in the United States.


Accuracy of a Generative Artificial Intelligence Model in a Complex Diagnostic Challenge.
Zahir Kanjee et al. JAMA 2023 6 (Posted: Jun 19, 2023 1PM)

Generative AI is a promising adjunct to human cognition in diagnosis. The model evaluated in this study, similar to some other modern differential diagnosis generators, is a diagnostic “black box”; future research should investigate potential biases and diagnostic blind spots of generative AI models. Clinicopathologic conferences are best understood as diagnostic puzzles; once privacy and confidentiality concerns are addressed, studies should assess performance with data from real-world patient encounters.


Bias in AI-based models for medical applications: challenges and mitigation strategies
M Mittermaier et al, NPJ Digital Medicine, June 14, 2023 (Posted: Jun 14, 2023 8AM)

Artificial intelligence systems are increasingly being applied to healthcare. In surgery, AI applications hold promise as tools to predict surgical outcomes, assess technical skills, or guide surgeons intraoperatively via computer vision. On the other hand, AI systems can also suffer from bias, compounding existing inequities in socioeconomic status, race, ethnicity, religion, gender and disability. Bias particularly impacts disadvantaged populations, which can be subject to algorithmic predictions that are less accurate or underestimate the need for care.


An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals.
Sophie Isabelle Lambert et al. NPJ Digit Med 2023 6 (1) 111 (Posted: Jun 11, 2023 8AM)

The aim of this integrative review is to explore barriers and facilitators influencing healthcare professionals’ acceptance of AI in the hospital setting. Forty-two articles met the inclusion criteria for this review. Pertinent elements to the study such as the type of AI, factors influencing acceptance, and the participants’ profession were extracted from the included studies, and the studies were appraised for their quality. The data extraction and results were presented according to the Unified Theory of Acceptance and Use of Technology (UTAUT) model.


Regulators Face Novel Challenges as Artificial Intelligence Tools Enter Medical Practice
S Gottlieb et al, JAMA Health Forum, June 8, 2023 (Posted: Jun 08, 2023 11AM)

The emergence of artificial intelligence (AI) tools ushers in a groundbreaking opportunity in medicine. They have the potential to dramatically streamline drug development, broaden the spectrum of biological targets, and enhance the accuracy of diagnosis and treatment. However, integrating these technologies into the current regulatory frameworks presents a considerable challenge. Global regulatory bodies will grapple with the task of applying their established norms to these novel entities. Consequently, new policies are needed to ensure the safety and efficacy of these tools for patients. These fitted solutions must balance the need for innovation with that of patient safety and benefit.


Health system-scale language models are all-purpose prediction engines
LY Jiang et al, Nature, June 7, 2023 (Posted: Jun 07, 2023 1PM)

Here we show that unstructured clinical notes from the electronic health record can enable the training of clinical language models, which can be used as all-purpose clinical predictive engines with low-resistance development and deployment. Our approach leverages recent advances in natural language processing to train a large language model for medical language (NYUTron) and subsequently fine-tune it across a wide range of clinical and operational predictive tasks.


Inside the nascent industry of AI-designed drugs.
Carrie Arnold et al. Nat Med 2023 6 (Posted: Jun 05, 2023 8AM)


A Review of the Role of Artificial Intelligence in Healthcare
A Al Kuweiti et al, J Per Med, June 5, 2023 (Posted: Jun 05, 2023 8AM)

Integrating AI into healthcare raises several technical, ethical, and social challenges, including privacy, safety, the right to decide and try, costs, information and consent, access, and efficacy. The governance of AI applications is crucial for patient safety and accountability and for raising HCPs’ belief in enhancing acceptance and boosting significant health consequences. Effective governance is a prerequisite to precisely address regulatory, ethical, and trust issues while advancing the acceptance and implementation of AI. Since COVID-19 hit the global health system, the concept of AI has created a revolution in healthcare, and such an uprising could be another step forward to meet future healthcare needs.


The Current and Future State of AI Interpretation of Medical Images.
Pranav Rajpurkar et al. N Engl J Med 2023 5 (21) 1981-1990 (Posted: Jun 01, 2023 0PM)

The interpretation of medical images — a task that lies at the heart of the radiologist’s work — has involved the growing adoption of artificial intelligence (AI) applications in recent years. This article reviews progress, challenges, and opportunities in the development of radiologic AI models and their adoption in clinical practice.


Solving the explainable AI conundrum by bridging clinicians' needs and developers' goals.
Nadine Bienefeld et al. NPJ Digit Med 2023 5 (1) 94 (Posted: May 23, 2023 11AM)

Our study identifies three key differences between developer and clinician mental models of XAI, including opposing goals (model interpretability vs. clinical plausibility), different sources of truth (data vs. patient), and the role of exploring new vs. exploiting old knowledge. Based on our findings, we propose design solutions that can help address the XAI conundrum in healthcare.


An artificial intelligence based app for skin cancer detection evaluated in a population based setting.
Anna M Smak Gregoor et al. NPJ Digit Med 2023 5 (1) 90 (Posted: May 23, 2023 11AM)

Artificial intelligence (AI) based algorithms for classification of suspicious skin lesions have been implemented in mobile phone apps (mHealth), but their effect on healthcare systems is undocumented. In 2019, a large health insurance company offered 2.2 million adults free access to an mHealth app for skin cancer detection. To study the impact on dermatological healthcare consumption, we conducted a retrospective population-based pragmatic study. We matched 18,960 mHealth-users who completed at least one successful assessment with the app to 56,880 controls.


NIH launches largest precision nutrition research effort of its kind
NIH ALL of Us, May 2023 Brand (Posted: May 17, 2023 11AM)

The National Institutes of Health is now enrolling participants in a landmark initiative to advance nutrition research. Nutrition for Precision Health, powered by the All of Us Research Program (NPH), is working with 14 sites across the United States to engage 10,000 participants from diverse backgrounds and learn more about how our bodies respond differently to food. NPH will use artificial intelligence (AI)-based approaches to analyze information provided by participants in order to develop algorithms that predict responses to dietary patterns. The study’s findings may one day allow healthcare providers to offer more customized nutritional guidance to improve overall health.


A deep learning algorithm to predict risk of pancreatic cancer from disease trajectories.
Davide Placido et al. Nat Med 2023 5 (Posted: May 09, 2023 5AM)

Pancreatic cancer is an aggressive disease that typically presents late with poor outcomes, indicating a pronounced need for early detection. In this study, we applied artificial intelligence methods to clinical data from 6 million patients (24,000 pancreatic cancer cases) in Denmark (Danish National Patient Registry (DNPR)) and from 3 million patients (3,900 cases) in the United States (US Veterans Affairs (US-VA)). We trained machine learning models on the sequence of disease codes in clinical histories and tested prediction of cancer occurrence within incremental time windows (CancerRiskNet).


Systematic review and meta-analysis of performance of wearable artificial intelligence in detecting and predicting depression
AA AlRazak et al, NPJ Digital Medicine, May 5, 2023 (Posted: May 05, 2023 10AM)

AI is a promising tool for depression detection and prediction although it is in its infancy and not ready for use in clinical practice. Until further research improve its performance, wearable AI should be used in conjunction with other methods for diagnosing and predicting depression. Further studies are needed to examine the performance of wearable AI based on a combination of wearable device data and neuroimaging data and to distinguish patients with depression from those with other diseases.


AI-assisted prediction of differential response to antidepressant classes using electronic health records.
Yi-Han Sheu et al. NPJ Digit Med 2023 4 (1) 73 (Posted: Apr 27, 2023 8AM)

Antidepressant selection is largely a trial-and-error process. We used electronic health record (EHR) data and artificial intelligence (AI) to predict response to four antidepressants classes (SSRI, SNRI, bupropion, and mirtazapine) 4 to 12 weeks after antidepressant initiation. The final data set comprised 17,556 patients. We show that antidepressant response can be accurately predicted from real-world EHR data with AI modeling, and our approach could inform further development of clinical decision support systems for more effective treatment selection.


AI outperforms sonographers at diagnosing cardiac function on echocardiography.
Irene Fernández-Ruiz et al. Nat Rev Cardiol (Posted: Apr 23, 2023 7AM)

An artificial intelligence (AI)-guided workflow for initial evaluation of left ventricular ejection fraction (LVEF) in echocardiography is non-inferior to initial assessment by a sonographer, according to findings from a blinded, randomized non-inferiority trial. Moreover, cardiologists required less time to over-read and were less likely to adjust the initial AI-guided assessment, and the AI assessment was more consistent with previous independent assessments by a cardiologist.


Foundation models for generalist medical artificial intelligence
M Moor et al, Nature, April 12, 2023 (Posted: Apr 12, 2023 11AM)

We propose a new paradigm for medical AI, which we refer to as generalist medical AI (GMAI). GMAI models will be capable of carrying out a diverse set of tasks using very little or no task-specific labelled data. Built through self-supervision on large, diverse datasets, GMAI will flexibly interpret different combinations of medical modalities, including data from imaging, electronic health records, laboratory results, genomics, graphs or medical text. Models will in turn produce expressive outputs such as free-text explanations, spoken recommendations or image annotations that demonstrate advanced medical reasoning abilities.


Collaborative strategies for deploying artificial intelligence to complement physician diagnoses of acute respiratory distress syndrome.
Negar Farzaneh et al. NPJ digital medicine 2023 4 (1) 62 (Posted: Apr 10, 2023 7AM)

There is a growing gap between studies describing the capabilities of artificial intelligence (AI) diagnostic systems using deep learning versus efforts to investigate how or when to integrate AI systems into a real-world clinical practice to support physicians and improve diagnosis. To address this gap, we investigate four potential strategies for AI model deployment and physician collaboration to determine their potential impact on diagnostic accuracy. As a case study, we examine an AI model trained to identify findings of the acute respiratory distress syndrome (ARDS) on chest X-ray images.


Is Medicine Ready for AI?
et al. The New England journal of medicine 2023 4 (14) e49 (Posted: Apr 06, 2023 9AM)

In this episode of “Intention to Treat,” Maia Hightower and Isaac Kohane join host Rachel Gotbaum to explore the promise and hazards of artificial-intelligence and machine-learning tools for both clinical and administrative uses in medicine.


Artificial Intelligence and Machine Learning in Clinical Medicine, 2023.
Charlotte J Haug et al. The New England journal of medicine 2023 3 (13) 1201-1208 (Posted: Apr 06, 2023 9AM)

The use of AI and machine learning in medicine has expanded beyond the reading of medical images. AI and machine-learning programs have entered medicine in many ways, including, but not limited to, helping to identify outbreaks of infectious diseases that may have an impact on public health; combining clinical, genetic, and many other laboratory outputs to identify rare and common conditions that might otherwise have escaped detection; and aiding in hospital business operations.


AI 'fairness' research held back by lack of diversity: authors of papers on the potential biases of artificial intelligence tools in health care are predominantly white, male and from high-income countries.
C Wong, Nature, March 30, 2023 (Posted: Mar 31, 2023 7AM)

A lack of racial and gender diversity could be hindering the efforts of researchers working to improve the fairness of artificial intelligence (AI) tools in health care, such as those designed to detect disease from blood samples or imaging data. Scientists analyzed 375 research and review articles on the fairness of artificial intelligence in health care, published in 296 journals between 1991 and 2022. Of 1,984 authors, 64% were white, whereas 27% were Asian, 5% were Black and 4% were Hispanic.


Artificial Intelligence in Medicine.
Andrew L Beam et al. The New England journal of medicine 2023 3 (13) 1220-1221 (Posted: Mar 30, 2023 4PM)

Medicine is much different from other areas where AI is being applied. AI enables new discoveries and improved processes in the entire health care continuum; ethical, governance, and regulatory considerations are critical in the design, implementation, and integration of every component of the AI applications and systems. Because of concerns about both utility and safety, new applications will generally have to adhere to the same standards applied to other medical technologies.


Will ChatGPT transform healthcare?
Nature medicine 2023 3 (3) 505-506 (Posted: Mar 28, 2023 8AM)

Large language models, such as ChatGPT, use deep learning (DL) to reproduce human language in a convincing and human-like way. They are becoming increasingly common and are already being used in content marketing, customer services and a variety of business applications. As a result, it is inevitable that language models will also soon debut in healthcare, an area where they hold tremendous potential to improve health and enhance patients’ lives, but not without pitfalls.


Effectiveness of artificial intelligence screening in preventing vision loss from diabetes: a policy model.
Roomasa Channa et al. NPJ digital medicine 2023 3 (1) 53 (Posted: Mar 28, 2023 6AM)

We designed the Care Process for Preventing Vision Loss from Diabetes (CAREVL) as a Markov model to compare the effectiveness of point-of-care autonomous AI-based screening with an in-office clinical exam by an eye care provider (ECP) in preventing vision loss among patients with diabetes. The estimated incidence of vision loss at 5 years was 1535 per 100,000 in the AI-screened group compared to 1625 per 100,000 in the ECP group, leading to a modelled risk difference of 90 per 100,000. The base-case CAREVL model estimated that an autonomous AI-based screening strategy would result in 27,000 fewer Americans with vision loss at 5 years compared with ECP.
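The arithmetic behind the abstract's headline figures can be checked directly. A minimal sketch follows; the population size used is an assumption chosen to reproduce the reported projection, not a number stated in the abstract:

```python
# Reported 5-year incidence of vision loss per 100,000 screened (from the abstract)
ai_incidence = 1535   # AI-screened group
ecp_incidence = 1625  # eye care provider (ECP) group

# Absolute risk difference per 100,000
risk_diff = ecp_incidence - ai_incidence

# Projecting to a population: ~30 million Americans with diabetes is an
# assumed figure that makes the reported projection come out exactly.
assumed_population = 30_000_000
fewer_cases = risk_diff / 100_000 * assumed_population

print(risk_diff)          # 90
print(int(fewer_cases))   # 27000
```

This only verifies internal consistency of the reported numbers; the model itself is a multi-state Markov simulation with many more inputs.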


AI-Generated Medical Advice-GPT and Beyond.
Claudia E Haupt et al. JAMA 2023 3 (Posted: Mar 28, 2023 6AM)

This Viewpoint surveys the medical applications of GPT and related technologies and considers whether new forms of regulation are necessary to minimize safety and legal risks to patients and clinicians. These risks depend largely on whether the software is used to assist health care practitioners or to replace them, and the degree to which clinicians maintain control.


Harnessing the Promise of Artificial Intelligence Responsibly.
David A Dorr et al. JAMA 2023 3 (Posted: Mar 28, 2023 6AM)

Recent reviews show that nearly all algorithms still fail to achieve substantial gains over human performance when implemented widely and are often based on limited evidence. The development and use of AI algorithms in health care require careful consideration of ethical frameworks relevant to health care and biomedicine, professional oaths and standards, and the systems in which they are implemented.


Overcoming the challenges to implementation of artificial intelligence in pathology.
Jorge S Reis-Filho et al. Journal of the National Cancer Institute 2023 3 (Posted: Mar 20, 2023 7AM)

The application of artificial intelligence (AI) to digital whole slide images has the potential to democratize access to expert pathology and affordable biomarkers, by supporting pathologists in providing timely and accurate diagnoses and by supporting oncologists in extracting prognostic and predictive biomarkers directly from tissue slides. The long-awaited adoption of AI in pathology, however, has not materialized, and the transformation of pathology is happening at a much slower pace than observed in other fields.


Current State and Innovations in Newborn Screening: Continuing to Do Good and Avoid Harm
G La Marca et al, IJNS, March 2023 (Posted: Mar 20, 2023 7AM)

Future developments utilizing genomic techniques are likely to play an increasingly important role in newborn screening, possibly combined with artificial intelligence (AI)-driven software. We consider the balance required to harness the potential of these new advances while maintaining the benefits and reducing the risks of harm associated with all screening.


Application of comprehensive evaluation framework to Coronavirus Disease 19 studies: A systematic review of translational aspects of artificial intelligence in health care
AE Casey et al, MEDRXIV, February 26, 2023 (Posted: Feb 28, 2023 6AM)


Predictive models in emergency medicine and their missing data strategies: a systematic review.
Emilien Arnaud et al. NPJ digital medicine 2023 2 (1) 28 (Posted: Feb 26, 2023 8AM)

In the field of emergency medicine (EM), the use of decision support tools based on artificial intelligence has increased markedly in recent years. In some cases, data are omitted deliberately and thus constitute “data not purposely collected” (DNPC). This accepted information bias can be managed in various ways: dropping patients with missing data, imputing with the mean, or using automatic techniques (e.g., machine learning) to handle or impute the data. Here, we systematically reviewed the methods used to handle missing data in EM research.
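Two of the simpler strategies the review names can be sketched in a few lines; the data below are invented for illustration only:

```python
import statistics

# Toy vital-signs column with missing entries (None), purely illustrative.
values = [98.6, None, 101.2, 99.1, None, 100.4]

# Strategy 1: complete-case analysis — drop records with missing data.
# Simple, but discards information and can bias results if data are
# "not purposely collected" (DNPC) rather than missing at random.
complete_cases = [v for v in values if v is not None]

# Strategy 2: mean imputation — fill each gap with the observed mean.
# Preserves sample size but shrinks variance toward the mean.
mean = statistics.mean(complete_cases)
mean_imputed = [mean if v is None else v for v in values]

print(len(complete_cases))        # 4
print(len(mean_imputed))          # 6
print(None in mean_imputed)       # False
```

The third family the review mentions, automatic model-based imputation (e.g., iterative or k-nearest-neighbour imputers in machine-learning libraries), follows the same fill-the-gaps pattern but predicts each missing value from the other variables.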


The role of patient-reported outcome measures in trials of artificial intelligence health technologies: a systematic evaluation of ClinicalTrials.gov records (1997–2022)
FJ Pierce et al., The Lancet Digital Health, February 2023 (Posted: Feb 23, 2023 9AM)

The search identified 2958 trials, of which 627 were included in the analysis. Of these, 152 (24%) used one or more PROMs, visual analogue scales, patient-reported experience measures, or usability measures as a trial endpoint. The AI health technologies used in these trials included AI-enabled smart devices, clinical decision support systems, and chatbots. Both the number of clinical trials of AI health technologies registered on ClinicalTrials.gov and the proportion of trials using PROMs increased from registry inception to 2022.




Disclaimer: Articles listed in Hot Topics of the Day are selected by Public Health Genomics Branch to provide current awareness of the scientific literature and news. Inclusion in the update does not necessarily represent the views of the Centers for Disease Control and Prevention nor does it imply endorsement of the article's methods or findings. CDC and DHHS assume no responsibility for the factual accuracy of the items presented. The selection, omission, or content of items does not imply any endorsement or other position taken by CDC or DHHS. Opinion, findings and conclusions expressed by the original authors of items included in the Clips, or persons quoted therein, are strictly their own and are in no way meant to represent the opinion or views of CDC or DHHS. References to publications, news sources, and non-CDC Websites are provided solely for informational purposes and do not imply endorsement by CDC or DHHS.