Results 1 - 20 of 77
1.
JMIR Res Protoc ; 13: e54933, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38776540

ABSTRACT

BACKGROUND: There is a paucity of data regarding users' awareness of privacy concerns and the resulting impact on the acceptance of mobile health (mHealth) apps, especially in the Saudi context. Such information is pertinent in addressing users' needs in the Kingdom of Saudi Arabia (KSA). OBJECTIVE: This article presents a study protocol for a mixed methods study to assess the perspectives of patients and stakeholders regarding the privacy, security, and confidentiality of data collected via mHealth apps in the KSA and the factors affecting the adoption of mHealth apps. METHODS: A mixed methods study design will be used. In the quantitative phase, patients and end users of mHealth apps will be randomly recruited from various provinces in Saudi Arabia with a high population of mHealth users. The research instrument will be developed based on the emerging themes and findings from interviews conducted with stakeholders, app developers, health care professionals, and users of mHealth apps (n=25). The survey will focus on (1) how to improve patients' awareness of data security, privacy, and confidentiality; (2) feedback on the current mHealth apps in terms of data security, privacy, and confidentiality; and (3) the features that might improve data security, privacy, and confidentiality of mHealth apps. Meanwhile, specific sections of the questionnaire will focus on patients' awareness, privacy concerns, confidentiality concerns, security concerns, perceived usefulness, perceived ease of use, and behavioral intention. Qualitative data will be analyzed thematically using NVivo version 12. Descriptive statistics and regression analysis will be performed using SPSS, and structural equation modeling will be performed using partial least squares structural equation modeling (PLS-SEM). RESULTS: Ethical approval for this research has been obtained from the Biomedical and Scientific Research Ethics Committee, University of Warwick, and the Medical Research and Ethics Committee of the Ministry of Health in the KSA. The qualitative phase is ongoing, and 15 participants have been interviewed. The interviews for the remaining 10 participants will be completed by November 25, 2023. Preliminary thematic analysis is still ongoing. Meanwhile, the quantitative phase will commence by December 10, 2023, with 150 participants providing signed, informed consent to participate in the study. CONCLUSIONS: The mixed methods study will elucidate the antecedents of patients' awareness and concerns regarding the privacy, security, and confidentiality of data collected via mHealth apps in the KSA. Furthermore, pertinent findings on the perspectives of stakeholders and health care professionals toward the aforementioned issues will be gleaned. The results will assist policy makers in developing strategies to improve Saudi users'/patients' adoption of mHealth apps and in addressing the concerns raised, so that users can benefit significantly from these advanced health care modalities. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/54933.


Subject(s)
Computer Security , Confidentiality , Mobile Applications , Telemedicine , Humans , Saudi Arabia , Surveys and Questionnaires , Male , Female , Privacy , Adult , Qualitative Research , Stakeholder Participation
2.
J Med Internet Res ; 26: e50715, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38820572

ABSTRACT

BACKGROUND: Mobile health (mHealth) apps have the potential to enhance health care service delivery. However, concerns regarding patients' confidentiality, privacy, and security consistently affect the adoption of mHealth apps. Despite this, no review has comprehensively summarized the findings of studies on this subject matter. OBJECTIVE: This systematic review aims to investigate patients' perspectives and awareness of the confidentiality, privacy, and security of the data collected through mHealth apps. METHODS: Using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, a comprehensive literature search was conducted in 3 electronic databases: PubMed, Ovid, and ScienceDirect. All the retrieved articles were screened according to specific inclusion criteria to select relevant articles published between 2014 and 2022. RESULTS: A total of 33 articles exploring mHealth patients' perspectives and awareness of data privacy, security, and confidentiality issues and the associated factors were included in this systematic review. Thematic analyses of the retrieved data led to the synthesis of 4 themes: concerns about data privacy, confidentiality, and security; awareness; facilitators and enablers; and associated factors. Patients showed both concordant and discordant perspectives regarding data privacy, security, and confidentiality, and suggested approaches (facilitators) to improve the use of mHealth apps, such as protecting personal data, ensuring that health status or medical conditions are not mentioned, providing brief training or education on data security, and assuring data confidentiality and privacy. Similarly, awareness of the subject matter differed across the studies, suggesting the need to improve patients' awareness of data security and privacy. Older patients, those with a history of experiencing data breaches, and those belonging to the higher-income class were more likely to raise concerns about the data security and privacy of mHealth apps. These concerns were not frequent among patients with higher satisfaction levels and those who perceived the data type to be less sensitive. CONCLUSIONS: Patients expressed diverse views on mHealth apps' privacy, security, and confidentiality, with some of the issues raised affecting technology use. These findings may assist mHealth app developers and other stakeholders in improving patients' awareness and adjusting current privacy and security features in mHealth apps to enhance their adoption and use. TRIAL REGISTRATION: PROSPERO CRD42023456658; https://tinyurl.com/ytnjtmca.


Subject(s)
Computer Security , Confidentiality , Mobile Applications , Telemedicine , Humans , Privacy
3.
JMIR Form Res ; 8: e52462, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38517457

ABSTRACT

BACKGROUND: In this paper, we present an automated method for article classification, leveraging the power of large language models (LLMs). OBJECTIVE: The aim of this study is to evaluate the applicability of various LLMs to classifying scientific ophthalmology papers based on their textual content. METHODS: We developed a model based on natural language processing techniques, including advanced LLMs, to process and analyze the textual content of scientific papers. Specifically, we used zero-shot learning LLMs and compared Bidirectional and Auto-Regressive Transformers (BART) and its variants with Bidirectional Encoder Representations from Transformers (BERT) and its variants, such as DistilBERT, SciBERT, PubMedBERT, and BioBERT. To evaluate the LLMs, we compiled a data set (retinal diseases [RenD]) of 1000 ocular disease-related articles, which were expertly annotated by a panel of 6 specialists into 19 distinct categories. In addition to the classification of articles, we also analyzed the classified groups to identify patterns and trends in the field. RESULTS: The classification results demonstrate the effectiveness of LLMs in categorizing a large number of ophthalmology papers without human intervention. The model achieved a mean accuracy of 0.86 and a mean F1-score of 0.85 based on the RenD data set. CONCLUSIONS: The proposed framework achieves notable improvements in both accuracy and efficiency. Its application in the domain of ophthalmology showcases its potential for knowledge organization and retrieval. We performed a trend analysis that enables researchers and clinicians to easily categorize and retrieve relevant papers, saving time and effort in literature review and information gathering as well as identification of emerging scientific trends within different disciplines. Moreover, the extensibility of the model to other scientific fields broadens its impact in facilitating research and trend analysis across diverse disciplines.
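A minimal sketch of the zero-shot classification setup described above, using the Hugging Face transformers pipeline with a BART checkpoint fine-tuned on MNLI; this is not the authors' pipeline, and the candidate labels below are illustrative placeholders rather than the 19 expert-defined RenD categories.

```python
# Minimal sketch (not the authors' pipeline) of zero-shot article classification with a
# BART checkpoint fine-tuned on MNLI, via the Hugging Face transformers pipeline.
# The candidate labels are illustrative placeholders, not the 19 RenD categories.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

abstract = ("We evaluate anti-VEGF therapy outcomes in neovascular "
            "age-related macular degeneration over 24 months.")
candidate_labels = ["retinal disease", "glaucoma", "cataract", "ocular oncology"]

result = classifier(abstract, candidate_labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")  # labels are returned sorted by descending score
```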

4.
JMIR Form Res ; 8: e49411, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38441952

ABSTRACT

BACKGROUND: Research gaps refer to unanswered questions in the existing body of knowledge, either due to a lack of studies or inconclusive results. Research gaps are essential starting points and motivation in scientific research. Traditional methods for identifying research gaps, such as literature reviews and expert opinions, can be time-consuming, labor-intensive, and prone to bias. They may also fall short when dealing with rapidly evolving or time-sensitive subjects. Thus, innovative, scalable approaches are needed to identify research gaps, systematically assess the literature, and prioritize areas for further study in the topic of interest. OBJECTIVE: In this paper, we propose a machine learning-based approach for identifying research gaps through the analysis of scientific literature. We used the COVID-19 pandemic as a case study. METHODS: We conducted an analysis to identify research gaps in COVID-19 literature using the COVID-19 Open Research (CORD-19) data set, which comprises 1,121,433 papers related to the COVID-19 pandemic. Our approach is based on the BERTopic topic modeling technique, which leverages transformers and class-based term frequency-inverse document frequency to create dense clusters allowing for easily interpretable topics. Our BERTopic-based approach involves 3 stages: embedding documents, clustering documents (dimension reduction and clustering), and representing topics (generating candidates and maximizing candidate relevance). RESULTS: After applying the study selection criteria, we included 33,206 abstracts in the analysis of this study. The final list of research gaps identified 21 different areas, which were grouped into 6 principal topics. These topics were: "virus of COVID-19," "risk factors of COVID-19," "prevention of COVID-19," "treatment of COVID-19," "health care delivery during COVID-19," and "impact of COVID-19." The most prominent topic, observed in over half of the analyzed studies, was "impact of COVID-19." CONCLUSIONS: The proposed machine learning-based approach has the potential to identify research gaps in scientific literature. This study is not intended to replace individual literature research within a selected topic. Instead, it can serve as a guide to formulate precise literature search queries in specific areas associated with research questions that previous publications have earmarked for future exploration. Future research should leverage an up-to-date list of studies that are retrieved from the most common databases in the target area. When feasible, full texts or, at minimum, discussion sections should be analyzed rather than abstracts alone. Furthermore, future studies could evaluate more efficient modeling algorithms, especially those combining topic modeling with statistical uncertainty quantification, such as conformal prediction.
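The three BERTopic stages named above (document embedding, dimension reduction plus clustering, and class-based TF-IDF topic representation) can be sketched roughly as follows; the component choices are common defaults rather than the paper's exact configuration, and load_selected_abstracts is a hypothetical helper standing in for the 33,206 study-selected abstracts.

```python
# Rough sketch of the three BERTopic stages; component choices are common defaults,
# not necessarily the paper's configuration, and load_selected_abstracts is a
# hypothetical helper standing in for the study-selected abstracts.
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN

abstracts = load_selected_abstracts()  # hypothetical loader returning a list of strings

embedding_model = SentenceTransformer("all-MiniLM-L6-v2")           # stage 1: embed documents
umap_model = UMAP(n_neighbors=15, n_components=5, metric="cosine")  # stage 2a: dimension reduction
hdbscan_model = HDBSCAN(min_cluster_size=50, metric="euclidean")    # stage 2b: clustering

topic_model = BERTopic(
    embedding_model=embedding_model,
    umap_model=umap_model,
    hdbscan_model=hdbscan_model,
)  # stage 3 (class-based TF-IDF topic representation) runs inside fit_transform
topics, probabilities = topic_model.fit_transform(abstracts)
print(topic_model.get_topic_info().head())
```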

5.
Artif Intell Med ; 149: 102802, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462292

ABSTRACT

Effective modeling of patient representation from electronic health records (EHRs) is increasingly becoming a vital research topic. Yet, modeling the non-stationarity in EHR data has received less attention. Most existing studies follow a strong assumption of stationarity in patient representation from EHRs. However, in practice, a patient's visits are irregularly spaced over a relatively long period of time, and disease progression patterns exhibit non-stationarity. Furthermore, the time gaps between patient visits often encapsulate significant domain knowledge, potentially revealing undiscovered patterns that characterize specific medical conditions. To address these challenges, we introduce a new method that combines the self-attention mechanism with non-stationary kernel approximation to capture both contextual information and temporal relationships between patient visits in EHRs. To assess the effectiveness of our proposed approach, we use two real-world EHR datasets, comprising a total of 76,925 patients, for the task of predicting the next diagnosis code for a patient, given their EHR history. The first dataset is a general EHR cohort and consists of 11,451 patients with a total of 3,485 unique diagnosis codes. The second dataset is a disease-specific cohort that includes 65,474 pregnant patients and encompasses a total of 9,782 unique diagnosis codes. Our experimental evaluation involved nine prediction models, categorized into three distinct groups. Group 1 comprises the baselines: the original self-attention model with positional encoding, RETAIN, and LSTM. Group 2 includes models employing self-attention with stationary kernel approximations, specifically incorporating three variations of Bochner's feature maps. Lastly, Group 3 consists of models utilizing self-attention with non-stationary kernel approximations, including quadratic, cubic, and bi-quadratic polynomials. The experimental results demonstrate that non-stationary kernels significantly outperformed baseline methods for NDCG@10 and Hit@10 metrics in both datasets. The performance boost was more substantial in dataset 1 for the NDCG@10 metric. On the other hand, stationary kernels showed significant but smaller gains over baselines and were nearly as effective as non-stationary kernels for Hit@10 in dataset 2. These findings robustly validate the efficacy of employing non-stationary kernels for temporal modeling of EHR data, and emphasize the importance of modeling non-stationary temporal information in healthcare prediction tasks.
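One plausible, simplified reading of combining self-attention with a non-stationary polynomial kernel over visit times is sketched below; this is an illustration under stated assumptions, not the authors' implementation, and the quadratic feature map is only one of the polynomial variants mentioned in the abstract.

```python
# Simplified, illustrative sketch (not the authors' implementation) of folding a
# non-stationary quadratic polynomial kernel over visit times into self-attention.
# phi is an explicit feature map with phi(ti) . phi(tj) = (ti*tj + c)**2, so the
# added term depends on the absolute visit times, not only on their difference.
import math
import torch
import torch.nn.functional as F

def poly_time_features(t, c=1.0):
    """Explicit feature map of a quadratic polynomial kernel over scalar times."""
    return torch.stack([t ** 2, math.sqrt(2 * c) * t, torch.full_like(t, c)], dim=-1)

def visit_attention(x, t, w_q, w_k, w_v):
    """x: (n, d) visit embeddings; t: (n,) visit times, e.g., days since first visit."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    content = (q @ k.T) / math.sqrt(k.shape[-1])          # standard scaled dot-product term
    phi = poly_time_features(t / (t.max() + 1e-8))        # normalize times before the kernel
    temporal = phi @ phi.T                                # non-stationary kernel (ti*tj + c)**2
    return F.softmax(content + temporal, dim=-1) @ v

x = torch.randn(6, 16)                                    # 6 visits, 16-dim embeddings
t = torch.tensor([0.0, 30.0, 45.0, 120.0, 200.0, 365.0])  # irregular visit times in days
w_q, w_k, w_v = (torch.randn(16, 16) for _ in range(3))
print(visit_attention(x, t, w_q, w_k, w_v).shape)         # torch.Size([6, 16])
```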


Subject(s)
Algorithms , Electronic Health Records , Humans , Disease Progression
6.
Healthc Inform Res ; 30(1): 49-59, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38359849

ABSTRACT

OBJECTIVES: With the sudden global shift to online learning modalities, this study aimed to understand the unique challenges and experiences of emergency remote teaching (ERT) in nursing education. METHODS: We conducted a comprehensive online international cross-sectional survey to capture the current state and firsthand experiences of ERT in the nursing discipline. Our analytical methods included a combination of traditional statistical analysis, advanced natural language processing techniques, latent Dirichlet allocation using Python, and a thorough qualitative assessment of feedback from open-ended questions. RESULTS: We received responses from 328 nursing educators from 18 different countries. The data revealed generally positive satisfaction levels, strong technological self-efficacy, and significant support from their institutions. Notably, the characteristics of professors, such as age (p = 0.02) and position (p = 0.03), influenced satisfaction levels. The ERT experience varied significantly by country, as evidenced by satisfaction (p = 0.05), delivery (p = 0.001), teacher-student interaction (p = 0.04), and willingness to use ERT in the future (p = 0.04). However, concerns were raised about the depth of content, the transition to online delivery, teacher-student interaction, and the technology gap. CONCLUSIONS: Our findings can help advance nursing education. Nevertheless, collaborative efforts from all stakeholders are essential to address current challenges, achieve digital equity, and develop a standardized curriculum for nursing education.
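A minimal, hypothetical sketch of latent Dirichlet allocation over open-ended survey responses, in the spirit of the Python-based analysis described above; the example responses and the number of topics are placeholders, not the study's data or settings.

```python
# Minimal, hypothetical sketch of latent Dirichlet allocation over open-ended survey
# responses; the responses and number of topics are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

responses = [
    "Students struggled with unstable internet connections during remote classes",
    "Clinical simulation was difficult to reproduce in an online environment",
    "Institutional support and training eased the transition to online teaching",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(responses)                 # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_terms = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {', '.join(top_terms)}")
```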

7.
J Med Internet Res ; 26: e52622, 2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38294846

ABSTRACT

BACKGROUND: Students usually encounter stress throughout their academic path. Ongoing stressors may lead to chronic stress, adversely affecting their physical and mental well-being. Thus, early detection and monitoring of stress among students are crucial. Wearable artificial intelligence (AI) has emerged as a valuable tool for this purpose. It offers an objective, noninvasive, nonobtrusive, automated approach to continuously monitor biomarkers in real time, thereby addressing the limitations of traditional approaches such as self-reported questionnaires. OBJECTIVE: This systematic review and meta-analysis aim to assess the performance of wearable AI in detecting and predicting stress among students. METHODS: Search sources in this review included 7 electronic databases (MEDLINE, Embase, PsycINFO, ACM Digital Library, Scopus, IEEE Xplore, and Google Scholar). We also checked the reference lists of the included studies and checked studies that cited the included studies. The search was conducted on June 12, 2023. This review included research articles centered on the creation or application of AI algorithms for the detection or prediction of stress among students using data from wearable devices. In total, 2 independent reviewers performed study selection, data extraction, and risk-of-bias assessment. The Quality Assessment of Diagnostic Accuracy Studies-Revised tool was adapted and used to examine the risk of bias in the included studies. Evidence synthesis was conducted using narrative and statistical techniques. RESULTS: This review included 5.8% (19/327) of the studies retrieved from the search sources. A meta-analysis of 37 accuracy estimates derived from 32% (6/19) of the studies revealed a pooled mean accuracy of 0.856 (95% CI 0.70-0.93). Subgroup analyses demonstrated that the accuracy of wearable AI was moderated by the number of stress classes (P=.02), type of wearable device (P=.049), location of the wearable device (P=.02), data set size (P=.009), and ground truth (P=.001). The average estimates of sensitivity, specificity, and F1-score were 0.755 (SD 0.181), 0.744 (SD 0.147), and 0.759 (SD 0.139), respectively. CONCLUSIONS: Wearable AI shows promise in detecting student stress but currently has suboptimal performance. The results of the subgroup analyses should be carefully interpreted given that many of these findings may be due to other confounding factors rather than the underlying grouping characteristics. Thus, wearable AI should be used alongside other assessments (eg, clinical questionnaires) until further evidence is available. Future research should explore the ability of wearable AI to differentiate types of stress, distinguish stress from other mental health issues, predict future occurrences of stress, consider factors such as the placement of the wearable device and the methods used to assess the ground truth, and report detailed results to facilitate the conduct of meta-analyses. TRIAL REGISTRATION: PROSPERO CRD42023435051; http://tinyurl.com/3fzb5rnp.
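The pooled mean accuracy with a 95% CI reported above is the kind of quantity produced by random-effects meta-analysis; a hedged sketch of DerSimonian-Laird pooling follows, with made-up numbers, since the review's actual transformation and weighting may differ.

```python
# Hedged sketch of DerSimonian-Laird random-effects pooling, the kind of computation
# behind a pooled mean accuracy with a 95% CI. The accuracies and variances below are
# made up; the review's actual transformation and weighting may differ.
import numpy as np

def random_effects_pool(effects, variances):
    """Return the pooled estimate, its 95% CI, and tau^2 (between-study variance)."""
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)                # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)         # DerSimonian-Laird estimator
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

acc = np.array([0.92, 0.78, 0.88, 0.81, 0.95, 0.70])          # illustrative accuracies
var = np.array([0.002, 0.004, 0.003, 0.005, 0.001, 0.006])    # illustrative variances
print(random_effects_pool(acc, var))
```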


Subject(s)
Algorithms , Artificial Intelligence , Humans , Databases, Factual , Libraries, Digital , Mental Health
8.
Cureus ; 15(11): e48643, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38090452

ABSTRACT

Amidst evolving healthcare demands, nursing education plays a pivotal role in preparing future nurses for complex challenges. Traditional approaches, however, must evolve to meet modern healthcare needs. ChatGPT, an AI-based chatbot, has garnered significant attention due to its ability to personalize learning experiences, enhance virtual clinical simulations, and foster collaborative learning in nursing education. This review aims to thoroughly assess the potential impact of integrating ChatGPT into nursing education. The hypothesis is that valuable insights can be provided for stakeholders through a comprehensive SWOT analysis examining the strengths, weaknesses, opportunities, and threats associated with ChatGPT. This will enable informed decisions about its integration, prioritizing improved learning outcomes. A thorough narrative literature review was undertaken to provide a solid foundation for the SWOT analysis. The materials included scholarly articles and reports, which ensure the study's credibility and allow for a holistic and unbiased assessment. The analysis identified accessibility, consistency, adaptability, cost-effectiveness, and staying up-to-date as crucial factors influencing the strengths, weaknesses, opportunities, and threats associated with ChatGPT integration in nursing education. These themes provided a framework to understand the potential risks and benefits of integrating ChatGPT into nursing education. This review highlights the importance of responsible and effective use of ChatGPT in nursing education and the need for collaboration among educators, policymakers, and AI developers. Addressing the identified challenges and leveraging the strengths of ChatGPT can lead to improved learning outcomes and enriched educational experiences for students. The findings emphasize the importance of responsibly integrating ChatGPT in nursing education, balancing technological advancement with careful consideration of associated risks, to achieve optimal outcomes.

9.
Front Public Health ; 11: 1278343, 2023.
Article in English | MEDLINE | ID: mdl-38094233

ABSTRACT

Background: A pooled estimate of stunting prevalence in refugee and internally displaced under-five children can help quantify the problem and focus on the nutritional needs of these marginalized groups. We aimed to assess the pooled prevalence of stunting in refugee and internally displaced under-five children from different parts of the globe. Methods: In this systematic review and meta-analysis, seven databases (Cochrane, EBSCOHost, EMBASE, ProQuest, PubMed, Scopus, and Web of Science) along with preprint servers were searched systematically from the earliest available date to 14 February 2023. Refugee and internally displaced (IDP) under-five children were included, and study quality was assessed using National Heart, Lung, and Blood Institute (NHLBI) tools. Results: A total of 776 abstracts (PubMed = 208, Scopus = 192, Cochrane = 1, Web of Science = 27, Embase = 8, EBSCOHost = 123, ProQuest = 5, Google Scholar = 209, and Preprints = 3) were retrieved; duplicates were removed, and the remaining records were screened, among which 30 studies were found eligible for qualitative and quantitative synthesis. The pooled prevalence of stunting was 26% [95% confidence interval (CI): 21-31]. Heterogeneity was high (I2 = 99%, p < 0.01). A subgroup analysis of the type of study subjects revealed a pooled stunting prevalence of 37% (95% CI: 23-53) in internally displaced populations and 22% (95% CI: 18-28) among refugee children. Based on geographical distribution, the stunting prevalence was 32% (95% CI: 24-40) in the African region, 34% (95% CI: 24-46) in the South-East Asian region, and 14% (95% CI: 11-19) in the Eastern Mediterranean region. Conclusion: The stunting rate is higher in the internally displaced population than in the refugee population, and higher in the South-East Asian and African regions. Our recommendation is to conduct further research to evaluate the determinants of undernutrition among under-five children of refugees and internally displaced populations from different regions so that international organizations and responsible stakeholders of that region can take effective remedial actions. Systematic review registration: https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=387156, PROSPERO [CRD42023387156].


Subject(s)
Malnutrition , Refugees , Child , Humans , Prevalence , Bibliometrics , Growth Disorders/epidemiology
10.
Biomed Eng Online ; 22(1): 126, 2023 Dec 16.
Article in English | MEDLINE | ID: mdl-38102597

ABSTRACT

Artificial intelligence (AI) has shown excellent diagnostic performance in detecting various complex problems related to many areas of healthcare including ophthalmology. AI diagnostic systems developed from fundus images have become state-of-the-art tools in diagnosing retinal conditions and glaucoma as well as other ocular diseases. However, designing and implementing AI models using large imaging data is challenging. In this study, we review different machine learning (ML) and deep learning (DL) techniques applied to multiple modalities of retinal data, such as fundus images and visual fields for glaucoma detection, progression assessment, staging and so on. We summarize findings and provide several taxonomies to help the reader understand the evolution of conventional and emerging AI models in glaucoma. We discuss opportunities and challenges facing AI application in glaucoma and highlight some key themes from the existing literature that may help to explore future studies. Our goal in this systematic review is to help readers and researchers to understand critical aspects of AI related to glaucoma as well as determine the necessary steps and requirements for the successful development of AI models in glaucoma.


Subject(s)
Deep Learning , Glaucoma , Ophthalmology , Humans , Artificial Intelligence , Glaucoma/diagnostic imaging , Machine Learning , Ophthalmology/methods
11.
J Med Internet Res ; 25: e48754, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37938883

ABSTRACT

BACKGROUND: Anxiety disorders rank among the most prevalent mental disorders worldwide. Anxiety symptoms are typically evaluated using self-assessment surveys or interview-based assessment methods conducted by clinicians, which can be subjective, time-consuming, and challenging to repeat. Therefore, there is an increasing demand for using technologies capable of providing objective and early detection of anxiety. Wearable artificial intelligence (AI), the combination of AI technology and wearable devices, has been widely used to detect and predict anxiety disorders automatically, objectively, and more efficiently. OBJECTIVE: This systematic review and meta-analysis aims to assess the performance of wearable AI in detecting and predicting anxiety. METHODS: Relevant studies were retrieved by searching 8 electronic databases and backward and forward reference list checking. In total, 2 reviewers independently carried out study selection, data extraction, and risk-of-bias assessment. The included studies were assessed for risk of bias using a modified version of the Quality Assessment of Diagnostic Accuracy Studies-Revised. Evidence was synthesized using a narrative (ie, text and tables) and statistical (ie, meta-analysis) approach as appropriate. RESULTS: Of the 918 records identified, 21 (2.3%) were included in this review. A meta-analysis of results from 81% (17/21) of the studies revealed a pooled mean accuracy of 0.82 (95% CI 0.71-0.89). Meta-analyses of results from 48% (10/21) of the studies showed a pooled mean sensitivity of 0.79 (95% CI 0.57-0.91) and a pooled mean specificity of 0.92 (95% CI 0.68-0.98). Subgroup analyses demonstrated that the performance of wearable AI was not moderated by algorithms, aims of AI, wearable devices used, status of wearable devices, data types, data sources, reference standards, and validation methods. CONCLUSIONS: Although wearable AI has the potential to detect anxiety, it is not yet advanced enough for clinical use. Until further evidence shows an ideal performance of wearable AI, it should be used along with other clinical assessments. Wearable device companies need to develop devices that can promptly detect anxiety and identify specific time points during the day when anxiety levels are high. Further research is needed to differentiate types of anxiety, compare the performance of different wearable devices, and investigate the impact of the combination of wearable device data and neuroimaging data on the performance of wearable AI. TRIAL REGISTRATION: PROSPERO CRD42023387560; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=387560.


Subject(s)
Anxiety , Artificial Intelligence , Humans , Anxiety/diagnosis , Anxiety Disorders , Algorithms , Databases, Factual
12.
Blood Rev ; 62: 101133, 2023 11.
Article in English | MEDLINE | ID: mdl-37748945

ABSTRACT

This scoping review explores the potential of artificial intelligence (AI) in enhancing the screening, diagnosis, and monitoring of disorders related to body iron levels. A systematic search was performed to identify studies that utilize machine learning in iron-related disorders. The search revealed a wide range of machine learning algorithms used by different studies. Notably, most studies used a single data type. The studies varied in terms of sample sizes, participant ages, and geographical locations. AI's role in quantifying iron concentration is still in its early stages, yet its potential is significant. The question is whether AI-based diagnostic biomarkers can offer innovative approaches for screening, diagnosing, and monitoring of iron overload and anemia.


Subject(s)
Iron Overload , Iron , Humans , Artificial Intelligence , Algorithms , Iron Overload/diagnosis , Iron Overload/etiology , Iron Overload/therapy
13.
J Med Internet Res ; 25: e42950, 2023 08 18.
Article in English | MEDLINE | ID: mdl-37594791

ABSTRACT

BACKGROUND: The prevalence of Parkinson disease (PD) is becoming an increasing concern owing to the aging population in the United Kingdom. Wearable devices have the potential to improve the clinical care of patients with PD while reducing health care costs. Consequently, exploring the features of these wearable devices is important to identify limitations and further areas of investigation in how wearable devices are currently used in clinical care in the United Kingdom. OBJECTIVE: In this scoping review, we aimed to explore the features of wearable devices used for PD in hospitals in the United Kingdom. METHODS: A scoping review of the current research was undertaken and reported according to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. The literature search was undertaken on June 6, 2022, and publications were obtained from MEDLINE or PubMed, Embase, and the Cochrane Library. Eligible publications were initially screened by their titles and abstracts. Publications that passed the initial screening underwent a full review. The study characteristics were extracted from the final publications, and the evidence was synthesized using a narrative approach. Any queries were reviewed by the first and second authors. RESULTS: Of the 4543 publications identified, 39 (0.86%) publications underwent a full review, and 20 (0.44%) publications were included in the scoping review. Most studies (11/20, 55%) were conducted at the Newcastle upon Tyne Hospitals NHS Foundation Trust, with sample sizes ranging from 10 to 418. Most study participants were male individuals with a mean age ranging from 57.7 to 78.0 years. The most commonly used device was the AX3, commercially manufactured by Axivity. Common wearable device types included body-worn sensors, inertial measurement units, and smartwatches that used accelerometers and gyroscopes to measure the clinical features of PD. The primary measures most often captured by the wearable devices were gait, bradykinesia, and dyskinesia. The most common wearable device placements were the lumbar region, head, and wrist. Furthermore, 65% (13/20) of the studies used artificial intelligence or machine learning to support PD data analysis. CONCLUSIONS: This study demonstrated that wearable devices could help provide a more detailed analysis of PD symptoms during the assessment phase and personalize treatment. Using machine learning, wearable devices could differentiate PD from other neurodegenerative diseases. The identified evidence gaps include the lack of analysis of wearable device cybersecurity and data management. The lack of cost-effectiveness analysis and large-scale participation in studies resulted in uncertainty regarding the feasibility of the widespread use of wearable devices. The uncertainty around the identified research gaps was further exacerbated by the lack of medical regulation of wearable devices for PD, particularly in the United Kingdom, where regulations were changing due to the political landscape.


Subject(s)
Parkinson Disease , Humans , Male , Aged , Middle Aged , Female , Parkinson Disease/therapy , Artificial Intelligence , Aging , Commerce , Hospitals
14.
NPJ Digit Med ; 6(1): 122, 2023 Jul 08.
Article in English | MEDLINE | ID: mdl-37422507

ABSTRACT

Attention, which is the process of noticing the surrounding environment and processing information, is one of the cognitive functions that deteriorate gradually as people grow older. Games that are used for purposes other than entertainment, such as improving attention, are often referred to as serious games. This study examined the effectiveness of serious games on attention among elderly individuals suffering from cognitive impairment. A systematic review and meta-analyses of randomized controlled trials were carried out. Of the 559 records retrieved, 10 trials ultimately met all eligibility criteria. The synthesis of very low-quality evidence from three trials, as analyzed in a meta-analysis, indicated that serious games outperform no/passive interventions in enhancing attention in cognitively impaired older adults (P < 0.001). Additionally, findings from two other studies demonstrated that serious games are more effective than traditional cognitive training in boosting attention among cognitively impaired older adults. One study also concluded that serious games are better than traditional exercises in enhancing attention. Serious games can enhance attention in cognitively impaired older adults. However, given the low quality of the evidence, the limited number of participants in most studies, the absence of some comparative studies, and the dearth of studies included in the meta-analyses, the results remain inconclusive. Thus, until the aforementioned limitations are rectified in future research, serious games should serve as a supplement, rather than a replacement, to current interventions.

15.
Stud Health Technol Inform ; 305: 283-286, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37387018

ABSTRACT

In 2019 alone, Diabetes Mellitus impacted 463 million individuals worldwide. Blood glucose levels (BGL) are often monitored via invasive techniques as part of routine protocols. Recently, AI-based approaches have shown the ability to predict BGL using data acquired by non-invasive Wearable Devices (WDs), therefore improving diabetes monitoring and treatment. It is crucial to study the relationships between non-invasive WD features and markers of glycemic health. Therefore, this study aimed to investigate the accuracy of linear and non-linear models in estimating BGL. A dataset containing digital metrics as well as diabetic status collected using traditional means was used. The data consisted of WD recordings from 13 participants, who were divided into two groups: young and adult. Our experimental design included data collection, feature engineering, ML model selection/development, and reporting of evaluation metrics. The study showed that linear and non-linear models both have high accuracy in estimating BGL using WD data (RMSE range: 0.181 to 0.271; MAE range: 0.093 to 0.142). We provide further evidence of the feasibility of using commercially available WDs for BGL estimation among diabetics when machine learning approaches are used.
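An illustrative sketch of the linear versus non-linear comparison with RMSE and MAE described above is given below; the synthetic features and targets are placeholders, not the study's WD dataset or feature set.

```python
# Illustrative sketch (not the study's code or data) comparing a linear and a
# non-linear regressor for BGL estimation from wearable-derived features, reporting
# the same RMSE/MAE metrics; the synthetic features and targets are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))      # e.g., heart rate, skin temperature, EDA, step count
y = 5.5 + 0.4 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=0.2, size=500)  # synthetic BGL (mmol/L)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    pred = model.fit(X_train, y_train).predict(X_test)
    rmse = mean_squared_error(y_test, pred) ** 0.5
    mae = mean_absolute_error(y_test, pred)
    print(f"{name}: RMSE={rmse:.3f}, MAE={mae:.3f}")
```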


Subject(s)
Blood Glucose , Routinely Collected Health Data , Adult , Humans , Benchmarking , Data Collection , Machine Learning
16.
Stud Health Technol Inform ; 305: 291-294, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37387020

ABSTRACT

Intermittent fasting has been practiced for centuries across many cultures globally. Recently, many studies have reported on intermittent fasting for its lifestyle benefits; the major shift in eating habits and patterns is associated with several changes in hormones and circadian rhythms. Whether there are accompanying changes in stress levels is not widely reported, especially in school children. The objective of this study is to examine the impact of intermittent fasting during Ramadan on stress levels in school children as measured using wearable artificial intelligence (AI). Twenty-nine school children (aged 13-17 years; 12 male, 17 female) were given Fitbit devices, and their stress, activity, and sleep patterns were analyzed for 2 weeks before, 4 weeks during, and 2 weeks after Ramadan fasting. This study revealed no statistically significant difference in stress scores during fasting, despite changes in stress levels being observed for 12 of the participants. Our study may imply that intermittent fasting during Ramadan poses no direct risks in terms of stress, suggesting rather that stress may be linked to dietary habits. Furthermore, as stress score calculations are based on heart rate variability, this study implies that fasting does not interfere with the cardiac autonomic nervous system.
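Because the stress score discussed above is derived from heart rate variability, a small sketch of a standard HRV metric (RMSSD) is included below; Fitbit's proprietary scoring is not public, so this only illustrates the kind of quantity such scores draw on.

```python
# Small sketch of RMSSD, a standard heart rate variability metric computed from
# successive RR (inter-beat) intervals. Fitbit's proprietary stress score is not
# public; this only illustrates the kind of HRV quantity such scores draw on.
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (milliseconds)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))

print(rmssd([812, 798, 825, 840, 805, 790]))  # illustrative RR intervals in ms
```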


Subject(s)
Artificial Intelligence , Intermittent Fasting , Humans , Child , Fasting , Autonomic Nervous System , Fitness Trackers
17.
Stud Health Technol Inform ; 305: 452-455, 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37387063

ABSTRACT

Depression is a prevalent mental condition that is challenging to diagnose using conventional techniques. Using machine learning and deep learning models with motor activity data, wearable AI technology has shown promise in reliably and effectively identifying or predicting depression. In this work, we aim to examine the performance of simple linear and non-linear models in the prediction of depression levels. We compared eight linear and non-linear models (Ridge, ElasticNet, Lasso, Random Forest, Gradient boosting, Decision trees, Support vector machines, and Multilayer perceptron) for the task of predicting depression scores over a period using physiological features, motor activity data, and MADRS (Montgomery-Åsberg Depression Rating Scale) scores. For the experimental evaluation, we used the Depresjon dataset, which contains the motor activity data of depressed and non-depressed participants. According to our findings, simple linear and non-linear models may effectively estimate depression scores for depressed people without the need for complex models. This opens the door for the development of more effective and impartial techniques for identifying depression and treating/preventing it using commonly used, widely accessible wearable technology.
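A hedged sketch of the kind of linear versus non-linear comparison described above, using several of the named scikit-learn estimators on placeholder data; it does not load the Depresjon dataset or reproduce the paper's feature engineering, and the MADRS-like targets are synthetic.

```python
# Hedged sketch of a linear vs. non-linear model comparison with several of the named
# estimators on placeholder data; it does not load the Depresjon dataset or reproduce
# the paper's feature engineering.
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 6))                        # placeholder motor-activity features
y = 10 + 2 * X[:, 0] - X[:, 1] + rng.normal(scale=1.0, size=300)  # placeholder MADRS-like scores

models = {
    "ridge": Ridge(),
    "lasso": Lasso(alpha=0.1),
    "elastic net": ElasticNet(alpha=0.1),
    "random forest": RandomForestRegressor(random_state=0),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
    "svr": SVR(),
}
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE={mae:.2f}")
```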


Subject(s)
Artificial Intelligence , Depression , Humans , Depression/diagnosis , India , Neural Networks, Computer , Machine Learning
18.
JMIR Med Educ ; 9: e48291, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37261894

ABSTRACT

The integration of large language models (LLMs), such as those in the Generative Pre-trained Transformers (GPT) series, into medical education has the potential to transform learning experiences for students and elevate their knowledge, skills, and competence. Drawing on a wealth of professional and academic experience, we propose that LLMs hold promise for revolutionizing medical curriculum development, teaching methodologies, personalized study plans and learning materials, student assessments, and more. However, we also critically examine the challenges that such integration might pose by addressing issues of algorithmic bias, overreliance, plagiarism, misinformation, inequity, privacy, and copyright concerns in medical education. As we navigate the shift from an information-driven educational paradigm to an artificial intelligence (AI)-driven educational paradigm, we argue that it is paramount to understand both the potential and the pitfalls of LLMs in medical education. This paper thus offers our perspective on the opportunities and challenges of using LLMs in this context. We believe that the insights gleaned from this analysis will serve as a foundation for future recommendations and best practices in the field, fostering the responsible and effective use of AI technologies in medical education.

19.
NPJ Digit Med ; 6(1): 84, 2023 May 05.
Article in English | MEDLINE | ID: mdl-37147384

ABSTRACT

Given the limitations of traditional approaches, wearable artificial intelligence (AI) is one of the technologies that have been exploited to detect or predict depression. The current review aimed to examine the performance of wearable AI in detecting and predicting depression. The search sources in this systematic review were 8 electronic databases. Study selection, data extraction, and risk of bias assessment were carried out by two reviewers independently. The extracted results were synthesized narratively and statistically. Of the 1314 citations retrieved from the databases, 54 studies were included in this review. The pooled mean of the highest accuracy, sensitivity, specificity, and root mean square error (RMSE) was 0.89, 0.87, 0.93, and 4.55, respectively. The pooled mean of the lowest accuracy, sensitivity, specificity, and RMSE was 0.70, 0.61, 0.73, and 3.76, respectively. Subgroup analyses revealed that there is a statistically significant difference in the highest accuracy, lowest accuracy, highest sensitivity, highest specificity, and lowest specificity between algorithms, and there is a statistically significant difference in the lowest sensitivity and lowest specificity between wearable devices. Wearable AI is a promising tool for depression detection and prediction, although it is in its infancy and not ready for use in clinical practice. Until further research improves its performance, wearable AI should be used in conjunction with other methods for diagnosing and predicting depression. Further studies are needed to examine the performance of wearable AI based on a combination of wearable device data and neuroimaging data and to distinguish patients with depression from those with other diseases.

20.
J Med Internet Res ; 25: e43607, 2023 04 12.
Article in English | MEDLINE | ID: mdl-37043277

ABSTRACT

BACKGROUND: Learning disabilities are among the major cognitive impairments caused by aging. Among the interventions used to improve learning among older adults are serious games, which are participative electronic games designed for purposes other than entertainment. Although some systematic reviews have examined the effectiveness of serious games on learning, they are undermined by some limitations, such as focusing on older adults without cognitive impairments, focusing on particular types of serious games, and not considering the comparator type in the analysis. OBJECTIVE: This review aimed to evaluate the effectiveness of serious games on verbal and nonverbal learning among older adults with cognitive impairment. METHODS: Eight electronic databases were searched to retrieve studies relevant to this systematic review and meta-analysis. Furthermore, we went through the studies that cited the included studies and screened the reference lists of the included studies and relevant reviews. Two reviewers independently checked the eligibility of the identified studies, extracted data from the included studies, and appraised their risk of bias and the quality of the evidence. The results of the included studies were summarized using a narrative synthesis or meta-analysis, as appropriate. RESULTS: Of the 559 citations retrieved, 11 (2%) randomized controlled trials (RCTs) ultimately met all eligibility criteria for this review. A meta-analysis of 45% (5/11) of the RCTs revealed that serious games are effective in improving verbal learning among older adults with cognitive impairment in comparison with no or sham interventions (P=.04), and serious games do not have a different effect on verbal learning between patients with mild cognitive impairment and those with Alzheimer disease (P=.89). A meta-analysis of 18% (2/11) of the RCTs revealed that serious games are as effective as conventional exercises in promoting verbal learning (P=.98). We also found that serious games outperformed no or sham interventions (4/11, 36%; P=.03) and conventional cognitive training (2/11, 18%; P<.001) in enhancing nonverbal learning. CONCLUSIONS: Serious games have the potential to enhance verbal and nonverbal learning among older adults with cognitive impairment. However, our findings remain inconclusive because of the low quality of evidence, the small sample size in most of the meta-analyzed studies (6/8, 75%), and the paucity of studies included in the meta-analyses. Thus, until further convincing proof of their effectiveness is offered, serious games should be used to supplement current interventions for verbal and nonverbal learning rather than replace them entirely. Further studies are needed to compare serious games with conventional cognitive training and conventional exercises, as well as different types of serious games, different platforms, different intervention periods, and different follow-up periods. TRIAL REGISTRATION: PROSPERO CRD42022348849; https://tinyurl.com/y6yewwfa.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Exergaming , Memory, Episodic , Aged , Humans , Cognitive Dysfunction/therapy , Exercise , Learning