Results 1 - 3 of 3
1.
J Craniofac Surg; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38940555

ABSTRACT

INTRODUCTION: Deformational plagiocephaly (DP) can be classified into 5 severity types using the Argenta scale (AS). Patients with type III or higher require referral to craniofacial surgery for management. Primary care pediatricians (PCPs) are often the first to encounter patients with DP, but current screening methods are subjective, increasing the risk of bias, especially for clinicians with little exposure to this population. The authors propose the use of artificial intelligence (AI) to classify patients with DP using the AS and to make recommendations for referral to craniofacial surgery. METHODS: Vertex photographs were obtained for patients diagnosed with unilateral DP from 2019 to 2020. Using the photographs, an AI program was created to classify these infants' head contours into 3 groups based on the AS. The program was trained using photographs from patients whose DP severity was confirmed clinically by craniofacial surgeons. To assess the accuracy of the software, the AS predicted by the program was compared with the clinical diagnosis. RESULTS: Nineteen patients were assessed by the AI software. All 3 patients with type I DP were correctly classified by the program (100%). In addition, 4 of 6 patients with type II were correctly identified (67%), and 7 of 10 were correctly classified as type III or greater (70%). CONCLUSIONS: Using vertex photographs and AI, the authors were able to objectively classify patients with DP based on the AS. If converted into a smartphone application, the program could be helpful to PCPs in remote or low-resource settings, allowing them to objectively determine which patients require referral to craniofacial surgery.
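The referral rule described in this abstract (Argenta type III or higher warrants referral to craniofacial surgery) can be sketched as a small decision helper. This is a minimal illustration, not the authors' software; the function name and return strings are assumptions.

```python
# Hedged sketch: maps an Argenta scale (AS) severity type (1-5, i.e. I-V)
# to a referral recommendation, mirroring the rule in the abstract that
# type III or higher requires referral to craniofacial surgery.
# Names and return strings are illustrative, not the authors' program.

def referral_recommendation(argenta_type: int) -> str:
    """Return a referral recommendation for a DP severity type (1-5)."""
    if not 1 <= argenta_type <= 5:
        raise ValueError("Argenta scale types range from 1 (I) to 5 (V)")
    if argenta_type >= 3:  # type III or higher
        return "refer to craniofacial surgery"
    return "monitor / conservative management"
```

In a smartphone application such as the one the authors envision, a classifier's predicted AS type would feed into a rule like this to produce the referral recommendation shown to the PCP.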

2.
J Surg Res; 295: 158-167, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38016269

ABSTRACT

INTRODUCTION: Artificial intelligence (AI) may benefit pediatric healthcare, but it also raises ethical and pragmatic questions. Parental support is important for the advancement of AI in pediatric medicine. However, there is little literature describing parental attitudes toward AI in pediatric healthcare, and existing studies do not represent parents of hospitalized children well. METHODS: We administered the Attitudes toward Artificial Intelligence in Pediatric Healthcare survey, a validated instrument, to parents of hospitalized children in a single tertiary children's hospital. Surveys were administered by trained study personnel (11/2/2021-5/1/2022). Demographic data were collected. An Attitudes toward Artificial Intelligence in Pediatric Healthcare score, assessing openness toward AI-assisted medicine, was calculated for seven areas of concern. Subgroup analyses were conducted using Mann-Whitney U tests to assess the effect of race, gender, education, insurance, length of stay, and intensive care unit (ICU) admission on attitudes toward AI use. RESULTS: We approached 90 parents and completed 76 surveys, for a response rate of 84%. Overall, parents were open to the use of AI in pediatric medicine. Social justice, convenience, privacy, and shared decision-making were important concerns. The attitudes of parents of children admitted to an ICU differed most significantly from those of parents of children not admitted to an ICU. CONCLUSIONS: Parents were overall supportive of AI-assisted healthcare decision-making. In particular, parents of children admitted to an ICU have significantly different attitudes, and further study is needed to characterize these differences. Parents value transparency, and disclosure pathways should be developed to support this expectation.
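The subgroup comparison this abstract describes rests on the Mann-Whitney U test, a nonparametric rank test suited to ordinal survey scores. A minimal sketch of the U statistic follows; the sample values are made-up illustrative numbers, not the study's data.

```python
# Hedged sketch of a Mann-Whitney U statistic for comparing attitude
# scores between two parent subgroups (e.g., ICU vs. non-ICU admission).
# Ties receive midranks. The scores below are hypothetical examples.

def mann_whitney_u(sample_a, sample_b):
    """Return the U statistic for sample_a versus sample_b."""
    combined = sorted(sample_a + sample_b)
    # Assign midranks: tied values share the average of their ranks.
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    rank_sum_a = sum(ranks[x] for x in sample_a)
    n_a = len(sample_a)
    # U_a = R_a - n_a(n_a + 1)/2
    return rank_sum_a - n_a * (n_a + 1) / 2

icu = [3.1, 2.8, 2.5, 3.0]      # hypothetical attitude scores
non_icu = [3.6, 3.9, 3.4, 3.7]  # hypothetical attitude scores
u_statistic = mann_whitney_u(icu, non_icu)
```

In practice the p-value would come from a statistics library (for example, `scipy.stats.mannwhitneyu`); the sketch above only shows how the statistic itself is formed from ranks.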


Subject(s)
Artificial Intelligence; Child, Hospitalized; Humans; Child; Attitude; Intensive Care Units; Parents
3.
J Biomed Inform; 147: 104531, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37884177

ABSTRACT

INTRODUCTION: The use of artificial intelligence (AI), particularly machine learning and predictive analytics, has shown great promise in health care. Despite its strong potential, it has seen limited use in health care settings. In this systematic review, we aim to determine the main barriers to successful implementation of AI in healthcare and discuss potential ways to overcome these challenges. METHODS: We conducted a literature search in PubMed (1/1/2001-1/1/2023). The search was restricted to publications in English and to human study subjects. We excluded articles that did not discuss AI, machine learning, predictive analytics, or barriers to the use of these techniques in health care. Using grounded theory methodology, we abstracted concepts to identify major barriers to AI use in medicine. RESULTS: We identified a total of 2,382 articles. After reviewing the 306 included papers, we developed 19 major themes, which we categorized into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). These themes included: Lack of Explainability, Need for Validation Protocols, Need for Standards for Interoperability, Need for Reporting Guidelines, Need for Standardization of Performance Metrics, Lack of Plan for Updating Algorithm, Job Loss, Skills Loss, Workflow Challenges, Loss of Patient Autonomy and Consent, Disturbing the Patient-Clinician Relationship, Lack of Trust in AI, Logistical Challenges, Lack of Strategic Plan, Lack of Cost-effectiveness Analysis and Proof of Efficacy, Privacy, Liability, Bias and Social Justice, and Education. CONCLUSION: We identified 19 major barriers to the use of AI in healthcare and categorized them into three levels: the Technical/Algorithm, Stakeholder, and Social levels (TASS). Future studies should expand on barriers in pediatric care and focus on developing clearly defined protocols to overcome these barriers.
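The TASS categorization described above can be represented as a simple mapping from level to themes. The abstract lists the 19 themes and the three levels but does not state which theme falls under which level, so the assignments below are a hypothetical illustration only; the theme names themselves are quoted from the abstract.

```python
# Hedged sketch of the TASS framework (Technical/Algorithm, Stakeholder,
# Social) as a level -> themes mapping. Theme names come from the
# abstract; the level assignments are hypothetical, for illustration.

TASS = {
    "Technical/Algorithm": [
        "Lack of Explainability",
        "Need for Validation Protocols",
        "Need for Standards for Interoperability",
        "Need for Reporting Guidelines",
        "Need for Standardization of Performance Metrics",
        "Lack of Plan for Updating Algorithm",
    ],
    "Stakeholder": [
        "Workflow Challenges",
        "Loss of Patient Autonomy and Consent",
        "Disturbing the Patient-Clinician Relationship",
        "Lack of Trust in AI",
        "Logistical Challenges",
        "Lack of Strategic Plan",
        "Lack of Cost-effectiveness Analysis and Proof of Efficacy",
    ],
    "Social": [
        "Job Loss",
        "Skills Loss",
        "Privacy",
        "Liability",
        "Bias and Social Justice",
        "Education",
    ],
}

n_themes = sum(len(themes) for themes in TASS.values())  # 19 themes total
```

A structure like this makes the review's counts easy to verify: three levels, and the theme lists sum to the 19 major barriers reported.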


Subject(s)
Algorithms; Artificial Intelligence; Medicine; Benchmarking; Machine Learning