Results 1 - 20 of 193
1.
Sci Rep ; 14(1): 22887, 2024 10 02.
Article in English | MEDLINE | ID: mdl-39358410

ABSTRACT

Ovarian cancer is a common gynecological tumor with a high mortality rate that is difficult to treat clinically, so early detection has significant diagnostic value. To address the poor diagnostic performance of conventional early diagnosis methods, this article presents an automated system for early ovarian cancer detection. The conventional methods are serum CA125 (carbohydrate antigen 125) testing and positron emission tomography/computed tomography (PET/CT) imaging. The proposed system combines the two, measuring the serum CA125 level and the maximum standardized uptake value (SUV) on PET/CT; a test is considered positive when the CA125 level exceeds 35 U/mL and the maximum SUV exceeds 2.5. The article selected 200 patients from Jingzhou Hospital and compared the three detection approaches. The average specificities of serum CA125 testing alone, PET/CT imaging alone, and the automated system in patients under 50 were 61.24%, 79.57%, and 97.79%, respectively. The automated early ovarian cancer detection system designed in this article can therefore significantly improve the specificity of early ovarian cancer detection and has clear application value.


Subject(s)
CA-125 Antigen , Computational Biology , Early Detection of Cancer , Ovarian Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Female , Ovarian Neoplasms/diagnosis , Ovarian Neoplasms/diagnostic imaging , Ovarian Neoplasms/blood , Positron Emission Tomography Computed Tomography/methods , Early Detection of Cancer/methods , CA-125 Antigen/blood , Middle Aged , Computational Biology/methods , Adult , Aged , Sensitivity and Specificity , Membrane Proteins
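The decision rule described in this entry is a conjunction of two thresholds (CA125 > 35 U/mL and SUVmax > 2.5). A minimal Python sketch of that rule, with hypothetical argument names not taken from the paper:

```python
def combined_test_positive(ca125_u_per_ml: float, suv_max: float) -> bool:
    """Positive only when both the serum CA125 level and the PET/CT
    maximum standardized uptake value exceed the reported cut-offs."""
    return ca125_u_per_ml > 35.0 and suv_max > 2.5

# Example: CA125 of 48 U/mL with SUVmax 3.1 -> positive
print(combined_test_positive(48.0, 3.1))  # True
```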
2.
Sensors (Basel) ; 24(17)2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39275396

ABSTRACT

BACKGROUND: The automatic detection of activities of daily living (ADL) is necessary to improve long-term home-based monitoring of Parkinson's disease (PD) symptoms. While most body-worn sensor algorithms for ADL detection were developed using laboratory research systems covering full-body kinematics, it is now crucial to achieve ADL detection using a single body-worn sensor that remains commercially available and affordable for ecological use. AIM: To detect and segment Walking, Turning, Sitting-down, and Standing-up activities of patients with PD using a Smartwatch positioned at the ankle. METHOD: Twenty-two patients living with PD performed a Timed Up and Go (TUG) task three times before engaging in cleaning ADL in a simulated free-living environment during a 3 min trial. Accelerations and angular velocities of the right or left ankle were recorded in three dimensions using a Smartwatch. The TUG task was used to develop detection algorithms for Walking, Turning, Sitting-down, and Standing-up, while the 3 min trial in the free-living environment was used to test and validate these algorithms. Sensitivity, specificity, and F-scores were calculated based on a manual segmentation of ADL. RESULTS: Sensitivity, specificity, and F-scores were 96.5%, 94.7%, and 96.0% for Walking; 90.0%, 93.6%, and 91.7% for Turning; 57.5%, 70.5%, and 52.3% for Sitting-down; and 57.5%, 72.9%, and 54.1% for Standing-up. The median time difference between manual and automatic segmentation was 1.31 s for Walking, 0.71 s for Turning, 2.75 s for Sitting-down, and 2.35 s for Standing-up. CONCLUSION: The results of this study demonstrate that segmenting ADL to characterize the mobility of people with PD based on a single Smartwatch can be comparable to manual segmentation while requiring significantly less time. While Walking and Turning were well detected, Sitting-down and Standing-up will require further investigation to develop better algorithms. Nonetheless, these achievements increase the odds of success in implementing wearable technologies for PD monitoring in ecological environments.


Subject(s)
Activities of Daily Living , Algorithms , Ankle , Parkinson Disease , Walking , Wearable Electronic Devices , Humans , Parkinson Disease/physiopathology , Male , Female , Aged , Ankle/physiopathology , Walking/physiology , Middle Aged , Biomechanical Phenomena/physiology
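The median time difference reported in this entry compares paired manual and automatic event onsets. A minimal sketch of that comparison, assuming the onsets have already been matched one-to-one (array names are hypothetical):

```python
import numpy as np

def median_onset_difference(manual_onsets, auto_onsets):
    """Median absolute time difference (s) between paired manual and
    automatic event onsets; assumes the events are already matched 1:1."""
    manual = np.asarray(manual_onsets, dtype=float)
    auto = np.asarray(auto_onsets, dtype=float)
    return float(np.median(np.abs(manual - auto)))

# Example with three matched Walking onsets (seconds)
print(median_onset_difference([12.0, 45.5, 80.2], [13.1, 44.9, 81.6]))
```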
3.
Heliyon ; 10(18): e35998, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39309945

ABSTRACT

In a kind of precision industrial equipment, small-diameter abreast optical fibers are used for high-speed communication among functional nodes. The arrangement order at both terminals of the abreast optical fibers needs to comply with communication protocols. In this paper, we propose an automatic terminal sequence consistency verification method based on computer vision. The Hue Saturation Value (HSV) color space is used to improve image feature extraction. An abreast optical fiber sequence dictionary, which converts the protocol logic into an input-output mapping table, is provided to preserve protocol confidentiality and improve inspection speed. A light-control baffle position adaptive algorithm is designed to improve the accuracy of optical fiber incident light control. The experimental results show that the method can perform the conductivity inspection of one optical fiber every 50 seconds with an inspection accuracy above 96.5%, improving inspection efficiency by 45% compared with manual inspection.
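The HSV-based color feature extraction mentioned in this entry can be illustrated with OpenCV; the image path, hue/saturation/value bounds, and pixel-count cut-off below are placeholders rather than the authors' calibrated values:

```python
import cv2
import numpy as np

# Load a terminal image (placeholder path) and convert BGR -> HSV, where hue
# separates fiber colors more robustly than raw RGB channels.
image = cv2.imread("fiber_terminal.png")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Illustrative bounds for one fiber color; real bounds would be calibrated.
lower = np.array([35, 80, 80])    # hue, saturation, value lower bound
upper = np.array([85, 255, 255])  # hue, saturation, value upper bound
mask = cv2.inRange(hsv, lower, upper)

# Count matching pixels to decide whether that fiber is lit (threshold is illustrative).
lit = cv2.countNonZero(mask) > 500
print("fiber lit:", lit)
```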

4.
J Clin Med ; 13(14)2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39064157

ABSTRACT

Background/Objectives: The aim of this study was to assess the diagnostic accuracy of the AI-driven platform Diagnocat for evaluating endodontic treatment outcomes using cone beam computed tomography (CBCT) images. Methods: A total of 55 consecutive patients (15 males and 40 females, aged 12-70 years) referred for CBCT imaging were included. CBCT images were analyzed using Diagnocat's AI platform, which assessed parameters such as the probability of filling, adequate obturation, adequate density, overfilling, voids in filling, short filling, and root canal number. The images were also evaluated by two experienced human readers. Diagnostic accuracy metrics (accuracy, precision, recall, and F1 score) were assessed and compared to the readers' consensus, which served as the reference standard. Results: The AI platform demonstrated high diagnostic accuracy for most parameters, with perfect scores for the probability of filling (accuracy, precision, recall, F1 = 100%). Adequate obturation showed moderate performance (accuracy = 84.1%, precision = 66.7%, recall = 92.3%, and F1 = 77.4%). Adequate density (accuracy = 95.5%, precision, recall, and F1 = 97.2%), overfilling (accuracy = 95.5%, precision = 86.7%, recall = 100%, and F1 = 92.9%), and short fillings (accuracy = 95.5%, precision = 100%, recall = 86.7%, and F1 = 92.9%) also exhibited strong performance. The performance of AI for voids in filling detection (accuracy = 88.6%, precision = 88.9%, recall = 66.7%, and F1 = 76.2%) highlighted areas for improvement. Conclusions: The AI platform Diagnocat showed high diagnostic accuracy in evaluating endodontic treatment outcomes using CBCT images, indicating its potential as a valuable tool in dental radiology.
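The accuracy, precision, recall, and F1 metrics reported in this entry follow standard confusion-matrix definitions; a minimal sketch assuming binary per-case labels (1 = feature present), with the reader consensus as the reference standard:

```python
def diagnostic_metrics(predictions, reference):
    """Accuracy, precision, recall and F1 for binary labels, with the
    reader consensus used as the reference standard."""
    tp = sum(p == 1 and r == 1 for p, r in zip(predictions, reference))
    tn = sum(p == 0 and r == 0 for p, r in zip(predictions, reference))
    fp = sum(p == 1 and r == 0 for p, r in zip(predictions, reference))
    fn = sum(p == 0 and r == 1 for p, r in zip(predictions, reference))
    accuracy = (tp + tn) / len(reference)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Example with illustrative labels for six cases
print(diagnostic_metrics([1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 0, 1]))
```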

5.
Eur J Radiol ; 177: 111590, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38959557

ABSTRACT

PURPOSE: To assess the perceptions and attitudes of radiologists toward the adoption of artificial intelligence (AI) in clinical practice. METHODS: A survey was conducted among members of the SIRM Lombardy. Radiologists' attitudes were assessed comprehensively, covering satisfaction with AI-based tools, propensity for innovation, and optimism for the future. The questionnaire consisted of two sections: the first gathered demographic and professional information using categorical responses, while the second evaluated radiologists' attitudes toward AI through Likert-type responses ranging from 1 to 5 (with 1 representing extremely negative attitudes, 3 indicating a neutral stance, and 5 reflecting extremely positive attitudes). Questionnaire refinement involved an iterative process with expert panels and a pilot phase to enhance consistency and eliminate redundancy. Exploratory data analysis employed descriptive statistics and visual assessment of Likert plots, supported by non-parametric tests for subgroup comparisons and a thorough analysis of specific emerging patterns. RESULTS: The survey yielded 232 valid responses. The findings reveal a generally optimistic outlook on AI adoption, especially among young radiologists (<30) and seasoned professionals (>60, p<0.01). However, while 36.2 % (84 out of 232) of subjects reported daily use of AI-based tools, only a third of these considered their contribution decisive (30 %, 25 out of 84). AI literacy varied, with a notable proportion feeling inadequately informed (36 %, 84 out of 232), particularly among younger radiologists (46 %, p < 0.01). Positive attitudes toward the potential of AI to improve detection and characterization of anomalies and to reduce workload (positive answers > 80 %) were consistent across subgroups. Radiologists' opinions were more skeptical about the role of AI in enhancing decision-making processes, including the choice of further investigation, and in personalized medicine in general. Overall, respondents recognized AI's significant impact on the radiology profession, viewing it as an opportunity (61 %, 141 out of 232) rather than a threat (18 %, 42 out of 232), with a majority expressing belief in AI's relevance to future radiologists' career choices (60 %, 139 out of 232). However, there were some concerns, particularly among breast radiologists (20 of 232 responders), regarding the potential impact of AI on the profession. Eighty-four percent of respondents still consider the radiologist's final assessment essential. CONCLUSION: Our results indicate an overall positive attitude towards the adoption of AI in radiology, though this is moderated by concerns regarding training and practical efficacy. Addressing AI literacy gaps, especially among younger radiologists, is essential. Furthermore, proactively adapting to technological advancements is crucial to fully leverage AI's potential benefits. Despite the generally positive outlook among radiologists, there remains significant work to be done to enhance the integration and widespread use of AI tools in clinical practice.


Subject(s)
Artificial Intelligence , Attitude of Health Personnel , Radiologists , Humans , Radiologists/psychology , Female , Male , Surveys and Questionnaires , Adult , Middle Aged , Italy , Aged
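The non-parametric subgroup comparisons of Likert responses mentioned in this entry could, for example, use a Mann-Whitney U test; a minimal SciPy sketch with placeholder response data (the specific test and data are not stated in the abstract):

```python
from scipy.stats import mannwhitneyu

# Illustrative Likert responses (1-5) for two age subgroups; placeholder data.
under_30 = [4, 5, 4, 3, 5, 4, 4, 5]
over_60 = [5, 4, 5, 5, 4, 5, 3, 5]

# Two-sided Mann-Whitney U test, appropriate for ordinal Likert data.
stat, p_value = mannwhitneyu(under_30, over_60, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```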
6.
J Neurosci Methods ; 409: 110199, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38897420

ABSTRACT

BACKGROUND: There are many automated spike-wave discharge detectors, but the known weaknesses of otherwise good methods and the varying working conditions of different research groups (mainly the access to hardware and software) invite further exploration into alternative approaches. NEW METHOD: The algorithm combines two criteria, one in the time-domain and one in the frequency-domain, exploiting morphological asymmetry and the presence of harmonics, respectively. The time-domain criterion is additionally adjusted by normal modelling between the first and second iterations. RESULTS: We report specificity, sensitivity and accuracy values for 20 recordings from 17 mature, male WAG/Rij rats. In addition, performance was preliminarily tested with different hormones, pharmacological injections and species (mice) in a smaller sample. Accuracy and specificity were consistently above 91 %. The number of automatically detected spike-wave discharges was strongly correlated with the numbers derived from visual inspection. Sensitivity varied more strongly than specificity, but high values were observed in both rats and mice. COMPARISON WITH EXISTING METHODS: The algorithm avoids low-voltage movement artifacts, displays a lower false positive rate than many predecessors and appears to work across species: although designed initially with data from the WAG/Rij rat, the algorithm can pick up seizure activity in the mouse, where the inter-spike frequency is considerably lower. Weaknesses of the proposed method include a lower sensitivity than several predecessors. CONCLUSION: The algorithm excels in being a selective and flexible (based on, e.g., its performance across rats and mice) spike-wave discharge detector. Future work could attempt to increase the sensitivity of this approach.


Subject(s)
Algorithms , Animals , Male , Rats , Mice , Sensitivity and Specificity , Action Potentials/physiology , Signal Processing, Computer-Assisted , Brain/physiology , Brain/physiopathology , Electroencephalography/methods , Software
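The detector in this entry pairs a time-domain asymmetry criterion with a frequency-domain harmonics criterion. The sketch below only illustrates that general idea, using skewness and a Welch spectrum; it is not the authors' algorithm, and the fundamental frequency and thresholds are placeholders:

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import skew

def looks_like_swd(segment, fs, fundamental=8.0, skew_thresh=0.5, harmonic_ratio=0.2):
    """Crude two-criterion check on one EEG segment: morphological asymmetry
    (skewness) plus power at the first harmonic of the spike-and-wave
    fundamental. All thresholds here are illustrative only."""
    asymmetric = abs(skew(segment)) > skew_thresh

    freqs, psd = welch(segment, fs=fs, nperseg=min(len(segment), int(fs)))
    fund_power = psd[np.argmin(np.abs(freqs - fundamental))]
    harm_power = psd[np.argmin(np.abs(freqs - 2 * fundamental))]
    has_harmonic = fund_power > 0 and (harm_power / fund_power) > harmonic_ratio

    return asymmetric and has_harmonic
```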
7.
J Stomatol Oral Maxillofac Surg ; 125(4S): 101946, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38857691

ABSTRACT

PURPOSE: This study aims to develop a deep learning framework for the automatic detection of the position relationship between the mandibular third molar (M3) and the mandibular canal (MC) on panoramic radiographs (PRs), to assist doctors in assessing and planning appropriate surgical interventions. METHODS: Datasets D1 and D2 were obtained by collecting 253 PRs from a hospital and 197 PRs from online platforms. The RPIFormer model proposed in this study was trained and validated on D1 to create a segmentation model. The CycleGAN model was trained and validated on both D1 and D2 to develop an image enhancement model. Ultimately, the segmentation and enhancement models were integrated with an object detection model to create a fully automated framework for M3 and MC detection in PRs. Experimental evaluation included calculating the Dice coefficient, IoU, Recall, and Precision during the process. RESULTS: The RPIFormer model proposed in this study achieved an average Dice coefficient of 92.56 % for segmenting M3 and MC, representing a 3.06 % improvement over the previous best study. The deep learning framework developed in this research enables automatic detection of M3 and MC in PRs without manual cropping, demonstrating superior detection accuracy and generalization capability. CONCLUSION: The framework developed in this study can be applied to PRs captured in different hospitals without the need for model fine-tuning. This feature is significant for aiding doctors in accurately assessing the spatial relationship between M3 and MC, thereby determining the optimal treatment plan to ensure patients' oral health and surgical safety.


Subject(s)
Deep Learning , Mandible , Molar, Third , Radiography, Panoramic , Humans , Molar, Third/diagnostic imaging , Radiography, Panoramic/methods , Mandible/diagnostic imaging , Female , Male , Adult
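The Dice coefficient and IoU used in this entry to evaluate segmentation have standard definitions on binary masks; a minimal NumPy sketch:

```python
import numpy as np

def dice_and_iou(pred_mask, true_mask):
    """Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B| for binary masks.
    Empty-vs-empty masks are scored as perfect agreement."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    total = pred.sum() + true.sum()
    dice = 2 * intersection / total if total else 1.0
    iou = intersection / union if union else 1.0
    return float(dice), float(iou)
```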
8.
Clin Neurophysiol ; 164: 30-39, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38843758

ABSTRACT

OBJECTIVE: High frequency oscillations (HFOs) are a biomarker of the seizure onset zone (SOZ) and can be visually or automatically detected. In theory, one can optimize an automated algorithm's parameters to maximize SOZ localization accuracy; however, there is no consensus on whether or how this should be done. Therefore, we optimized an automated detector using visually identified HFOs and evaluated the impact on SOZ localization accuracy. METHODS: We detected HFOs in intracranial EEG from 20 patients with refractory epilepsy from two centers using (1) unoptimized automated detection, (2) visual identification, and (3) automated detection optimized to match visually detected HFOs. RESULTS: SOZ localization accuracy based on HFO rate was not significantly different between the three methods. Across patients, visually optimized detector settings varied, and no single set of settings produced universally accurate SOZ localization. Exploratory analysis suggests that, for many patients, detection settings exist that would improve SOZ localization. CONCLUSIONS: SOZ localization accuracy was similar for all three methods, was not improved by visually optimizing detector settings, and may benefit from patient-specific parameter optimization. SIGNIFICANCE: Visual HFO marking is laborious, and optimizing automated detection using visual markings does not improve localization accuracy. New patient-specific detector optimization methods are needed.


Subject(s)
Drug Resistant Epilepsy , Humans , Female , Male , Adult , Drug Resistant Epilepsy/physiopathology , Drug Resistant Epilepsy/diagnosis , Electroencephalography/methods , Middle Aged , Electrocorticography/methods , Electrocorticography/standards , Seizures/physiopathology , Seizures/diagnosis , Brain Waves/physiology , Algorithms , Young Adult , Adolescent , Epilepsy/physiopathology , Epilepsy/diagnosis
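Localizing the SOZ from HFO rate, as described in this entry, amounts to ranking channels by detections per unit time; a minimal sketch with hypothetical channel names and counts:

```python
def rank_channels_by_hfo_rate(hfo_counts, duration_minutes, top_k=5):
    """Return the top_k channels with the highest HFO rate (events/min),
    a common proxy for seizure-onset-zone membership."""
    rates = {ch: count / duration_minutes for ch, count in hfo_counts.items()}
    return sorted(rates.items(), key=lambda item: item[1], reverse=True)[:top_k]

# Example with illustrative counts over a 60-minute recording
print(rank_channels_by_hfo_rate({"A1": 120, "A2": 35, "B3": 240, "C1": 10}, 60.0, top_k=2))
```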
9.
Hand Surg Rehabil ; 43(4): 101742, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38909690

ABSTRACT

This study proposes a Deep Learning algorithm to automatically detect perilunate dislocation in anteroposterior wrist radiographs. A total of 374 annotated radiographs, 345 normal and 29 pathological, of skeletally mature adolescents and adults aged ≥16 years were used to train, validate and test two YOLOv8 deep neural models. The training set included 245 normal and 15 pathological radiographs; the pathological training set was supplemented by 240 radiographs obtained by data augmentation. The test set comprised 30 normal and 10 pathological radiographs. The first model was used for detecting the carpal region, and the second for segmenting a region between Gilula's 2nd and 3rd arcs. The output of the segmentation model, trained multiple times with varying random initial parameter values and augmentations, was then assigned a probability of being normal or pathological through ensemble averaging. In the study dataset, the algorithm achieved an overall F1-score of 0.880: 0.928 in the normal subgroup, with 1.0 precision, and 0.833 in the pathological subgroup, with 1.0 recall (or sensitivity), demonstrating that diagnosis of perilunate dislocation can be improved by automatic analysis of anteroposterior radiographs. LEVEL OF EVIDENCE: : III.


Subject(s)
Deep Learning , Joint Dislocations , Lunate Bone , Humans , Joint Dislocations/diagnostic imaging , Lunate Bone/diagnostic imaging , Lunate Bone/injuries , Adolescent , Adult , Algorithms , Young Adult , Radiography , Male , Female , Wrist Joint/diagnostic imaging
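The ensemble-averaging step in this entry assigns a probability by averaging the outputs of repeatedly trained models; a minimal sketch in which the 0.5 decision threshold is an assumption, not a value stated in the abstract:

```python
import numpy as np

def ensemble_probability(model_probs, threshold=0.5):
    """Average per-model probabilities of 'pathological' and apply a
    decision threshold; the threshold value here is illustrative."""
    mean_prob = float(np.mean(model_probs))
    return mean_prob, ("pathological" if mean_prob >= threshold else "normal")

# Example: five models trained with different seeds/augmentations
print(ensemble_probability([0.82, 0.74, 0.91, 0.66, 0.88]))
```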
10.
J Clin Med ; 13(12)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38929931

ABSTRACT

Background/Objectives: The purpose of this preliminary study was to evaluate the diagnostic performance of an AI-driven platform, Diagnocat (Diagnocat Ltd., San Francisco, CA, USA), for assessing endodontic treatment outcomes using panoramic radiographs (PANs). Materials and Methods: The study included 55 PAN images of 55 patients (15 males and 40 females, aged 12-70) who underwent imaging at a private dental center. All images were acquired using a Hyperion X9 PRO digital cephalometer and were evaluated using Diagnocat, a cloud-based AI platform. The AI system assessed the following endodontic treatment features: filling probability, obturation adequacy, density, overfilling, voids in filling, and short filling. Two human observers independently evaluated the images, and their consensus served as the reference standard. The diagnostic accuracy metrics were calculated. Results: The AI system demonstrated high accuracy (90.72%) and a strong F1 score (95.12%) in detecting the probability of endodontic filling. However, the system showed variable performance in other categories, with lower accuracy metrics and unacceptable F1 scores for short filling and voids in filling assessments (8.33% and 14.29%, respectively). The accuracy for detecting adequate obturation and density was 55.81% and 62.79%, respectively. Conclusions: The AI-based system showed very high accuracy in identifying endodontically treated teeth but exhibited variable diagnostic accuracy for other qualitative features of endodontic treatment.

11.
J Clin Med ; 13(12)2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38930132

ABSTRACT

Background: This study evaluates the diagnostic accuracy of an AI-assisted tool in assessing the proximity of the mandibular canal (MC) to the root apices (RAs) of mandibular teeth using computed tomography (CT). Methods: This study involved 57 patients aged 18-30 whose CT scans were analyzed by both AI and human experts. The primary aim was to measure the closest distance between the MC and RAs and to assess the AI tool's diagnostic performance. The results indicated significant variability in RA-MC distances, with third molars showing the smallest mean distances and first molars the greatest. Diagnostic accuracy metrics for the AI tool were assessed at three thresholds (0 mm, 0.5 mm, and 1 mm). Results: The AI demonstrated high specificity but generally low diagnostic accuracy, with the highest metrics at the 0.5 mm threshold with 40.91% sensitivity and 97.06% specificity. Conclusions: This study underscores the limited potential of tested AI programs in reducing iatrogenic damage to the inferior alveolar nerve (IAN) during dental procedures. Significant differences in RA-MC distances between evaluated teeth were found.

12.
Sensors (Basel) ; 24(11)2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38894401

ABSTRACT

Cognitive engagement involves mental and physical involvement, with observable behaviors as indicators. Automatically measuring cognitive engagement can offer valuable insights for instructors. However, object occlusion, inter-class similarity, and intra-class variance make designing an effective detection method challenging. To deal with these problems, we propose the Object-Enhanced-You Only Look Once version 8 nano (OE-YOLOv8n) model. This model employs the YOLOv8n framework with an improved Inner Minimum Point Distance Intersection over Union (IMPDIoU) Loss to detect cognitive engagement. To evaluate the proposed methodology, we construct a real-world Students' Cognitive Engagement (SCE) dataset. Extensive experiments on the self-built dataset show the superior performance of the proposed model, which improves the detection performance of the five distinct classes with a precision of 92.5%.


Subject(s)
Cognition , Humans , Cognition/physiology , Students/psychology , Algorithms
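The IMPDIoU loss named in this entry builds on the standard Intersection-over-Union between predicted and ground-truth boxes; the sketch below shows only the standard IoU on (x1, y1, x2, y2) boxes, not the paper's modified loss:

```python
def box_iou(box_a, box_b):
    """Standard IoU for two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(box_iou((10, 10, 50, 50), (30, 30, 70, 70)))  # partially overlapping boxes
```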
13.
Epilepsy Res ; 204: 107385, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38851173

ABSTRACT

PURPOSE: Long-term ambulatory EEG recordings can improve the monitoring of absence epilepsy in children, but signal quality and increased review workload are a concern. We evaluated the feasibility of around-the-ears EEG arrays (cEEGrids) to capture 3-Hz short-lasting and ictal spike-and-wave discharges and assessed the performance of automated detection software in cEEGrids data. We compared patterns of bilateral synchronisation between short-lasting and ictal spike-and-wave discharges. METHODS: We recruited children with suspected generalised epilepsy undergoing routine video-EEG monitoring and performed simultaneous cEEGrids recordings. We used ASSYST software to detect short-lasting 3-Hz spike-and-wave discharges (1-3 s) and ictal spike-and-wave discharges in the cEEGrids data. We assessed data quality and sensitivity of cEEGrids for spike-and-wave discharges in routine EEG. We determined the sensitivity and false detection rate for automated spike-and-wave discharge detection in cEEGrids data. We compared bihemispheric synchrony across the onset of short-lasting and ictal spike-and-wave discharges using the mean phase coherence in the 2-4 Hz frequency band. RESULTS: We included nine children with absence epilepsy (median age = 11 y, range 8-15 y, nine females) and recorded 4 h and 27 min of cEEGrids data. The recordings from seven participants were suitable for quantitative analysis, containing 82 spike-and-wave discharges. The cEEGrids captured 58 % of all spike-and-wave discharges (median individual sensitivity: 100 %, range: 47-100 %). ASSYST detected 82 % of all spike-and-wave discharges (median: 100 %, range: 41-100 %) with a false detection rate of 48/h (median: 6/h, range: 0-154/h). The mean phase coherence significantly increased during short-lasting and ictal spike-and-wave discharges in the 500-ms pre-onset to 1-s post-onset interval. CONCLUSIONS: cEEGrids are of variable quality for monitoring spike-and-wave discharges in children with absence epilepsy. ASSYST could facilitate the detection of short-lasting and ictal spike-and-wave discharges with clear periodic structures but with low specificity. A similar course of bihemispheric synchrony between short-lasting and ictal spike-and-wave discharges indicates that cortico-thalamic driving may be relevant for both types of spike-and-wave discharges.


Subject(s)
Electroencephalography , Epilepsy, Absence , Humans , Epilepsy, Absence/physiopathology , Epilepsy, Absence/diagnosis , Child , Electroencephalography/methods , Female , Male , Adolescent
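The mean phase coherence used in this entry can be computed from band-limited instantaneous phases obtained via the Hilbert transform; a minimal SciPy sketch in which the filter design is illustrative:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def mean_phase_coherence(sig_left, sig_right, fs, band=(2.0, 4.0)):
    """MPC = |mean(exp(i*(phi_left - phi_right)))| in the given band;
    1 means perfectly phase-locked, 0 means no consistent phase relation."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    phase_l = np.angle(hilbert(filtfilt(b, a, sig_left)))
    phase_r = np.angle(hilbert(filtfilt(b, a, sig_right)))
    return float(np.abs(np.mean(np.exp(1j * (phase_l - phase_r)))))
```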
14.
J Magn Reson Imaging ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38826142

ABSTRACT

BACKGROUND: The number of focal liver lesions (FLLs) detected by imaging has increased worldwide, highlighting the need to develop a robust, objective system for automatically detecting FLLs. PURPOSE: To assess the performance of deep learning-based artificial intelligence (AI) software in identifying and measuring lesions on contrast-enhanced magnetic resonance imaging (MRI) images in patients with FLLs. STUDY TYPE: Retrospective. SUBJECTS: 395 patients with 1149 FLLs. FIELD STRENGTH/SEQUENCE: 1.5 T and 3 T scanners; sequences included T1-weighted, T2-weighted, and diffusion-weighted imaging, in/out-of-phase imaging, and dynamic contrast-enhanced imaging. ASSESSMENT: The diagnostic performance of AI, the radiologist, and their combination was compared. Using 20 mm as the cut-off value, the lesions were divided into two groups, and then into four subgroups: <10, 10-20, 20-40, and ≥40 mm, to evaluate the sensitivity of radiologists and AI in the detection of lesions of different sizes. We compared the pathologic sizes of 122 surgically resected lesions with measurements obtained using AI and those made by radiologists. STATISTICAL TESTS: McNemar test, Bland-Altman analyses, Friedman test, Pearson's chi-squared test, Fisher's exact test, Dice coefficient, and intraclass correlation coefficients. A P-value <0.05 was considered statistically significant. RESULTS: The average Dice coefficient of AI in segmentation of liver lesions was 0.62. The combination of AI and radiologist outperformed the radiologist alone, with a significantly higher detection rate (0.894 vs. 0.825) and sensitivity (0.883 vs. 0.806). The AI showed significantly higher sensitivity than radiologists in detecting all lesions <20 mm (0.848 vs. 0.788). Both AI and radiologists achieved excellent detection performance for lesions ≥20 mm (0.867 vs. 0.881, P = 0.671). A remarkable agreement existed in the average tumor sizes among the three measurements (P = 0.174). DATA CONCLUSION: AI software based on deep learning exhibited practical value in automatically identifying and measuring liver lesions. TECHNICAL EFFICACY: Stage 2.

15.
Geroscience ; 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38869712

ABSTRACT

White matter hyperintensities of vascular origin (WMH) are commonly found in individuals over 60 and increase in prevalence with age. The significance of WMH is well-documented, with strong associations with cognitive impairment, risk of stroke, mental health, and brain structure deterioration. Consequently, careful monitoring is crucial for the early identification and management of individuals at risk. WMH are detectable and quantifiable on standard MRI through visual assessment scales, but visual rating is time-consuming and shows high inter-rater variability. Addressing this issue, the main aim of our study is to determine the utility of quantitative measures of WMH, assessed with automatic tools, in establishing risk profiles for cerebrovascular deterioration. For this purpose, we first determine which open-access WMH segmentation tool (LST-LPA, LST-LGA, SAMSEG, or BIANCA) agrees most closely with clinicians' manual segmentations, offering insights into methodology and usability to balance clinical precision with practical application. The results indicated that supervised algorithms (LST-LPA and BIANCA) were superior, particularly in detecting small WMH, and can improve their consistency when used in parallel with unsupervised tools (LST-LGA and SAMSEG). Additionally, to investigate the behavior and real clinical utility of these tools, we tested them in a real-world scenario (N = 300; age > 50 y.o. and MMSE > 26), proposing an imaging biomarker for moderate vascular damage. The results confirmed its capacity to effectively identify individuals at risk, based on a comparison of the cognitive and brain-structural profiles of cognitively healthy adults above and below the resulting threshold.
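Turning an automatic WMH segmentation into a quantitative burden measure typically means summing mask voxels times voxel volume; a minimal nibabel sketch in which the file path and the 5 mL cut-off are placeholders, not the study's biomarker threshold:

```python
import nibabel as nib
import numpy as np

def wmh_volume_ml(mask_path):
    """Total WMH volume in millilitres from a binary NIfTI mask."""
    img = nib.load(mask_path)
    mask = np.asarray(img.dataobj) > 0.5
    voxel_volume_mm3 = float(np.prod(img.header.get_zooms()[:3]))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # mm^3 -> mL

volume = wmh_volume_ml("wmh_mask.nii.gz")  # placeholder path
print("above illustrative cut-off" if volume > 5.0 else "below illustrative cut-off")
```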

16.
J Stomatol Oral Maxillofac Surg ; 125(5S2): 101914, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38750725

ABSTRACT

BACKGROUND: Midfacial fractures are among the most frequent facial fractures. Surgery is recommended within 2 weeks of injury, but this time frame is often extended because the fracture is missed on diagnostic imaging in the busy emergency medicine setting. Using deep learning technology, which has progressed markedly in various fields, we attempted to develop a system for the automatic detection of midfacial fractures. The purpose of this study was to use this system to diagnose fractures accurately and rapidly, with the intention of benefiting both patients and emergency room physicians. METHODS: One hundred computed tomography images that included midfacial fractures (e.g., maxillary, zygomatic, nasal, and orbital fractures) were prepared. In each axial image, the fracture area was surrounded by a rectangular region to create the annotation data. Eighty images were randomly classified as the training dataset (3736 slices) and 20 as the validation dataset (883 slices). Training and validation were performed using Single Shot MultiBox Detector (SSD) and version 8 of You Only Look Once (YOLOv8), which are object detection algorithms. RESULTS: The performance indicators for SSD and YOLOv8 were respectively: precision, 0.872 and 0.871; recall, 0.823 and 0.775; F1 score, 0.846 and 0.82; average precision, 0.899 and 0.769. CONCLUSIONS: The use of deep learning techniques allowed the automatic detection of midfacial fractures with good accuracy and high speed. The system developed in this study is promising for automated detection of midfacial fractures and may provide a quick and accurate solution for emergency medical care and other settings.


Subject(s)
Deep Learning , Facial Bones , Skull Fractures , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Skull Fractures/diagnostic imaging , Skull Fractures/diagnosis , Facial Bones/injuries , Facial Bones/diagnostic imaging , Orbital Fractures/diagnosis , Orbital Fractures/diagnostic imaging , Orbital Fractures/epidemiology
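Training and running one of the detectors named in this entry (YOLOv8) follows the Ultralytics API; in the sketch below the dataset YAML, epoch count, and image path are placeholders rather than the study's settings:

```python
from ultralytics import YOLO

# Start from the pretrained nano checkpoint and fine-tune on an annotated
# fracture dataset described by a YOLO-format dataset YAML (placeholder names).
model = YOLO("yolov8n.pt")
model.train(data="midfacial_fractures.yaml", epochs=50, imgsz=640)

# Run inference on one axial CT slice and read back the predicted boxes.
results = model("axial_slice_0123.png")  # placeholder image path
for box in results[0].boxes:
    print(box.xyxy, box.conf)
```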
17.
JMIR Res Protoc ; 13: e56267, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38749026

ABSTRACT

BACKGROUND: There is an urgent need worldwide for qualified health professionals. High attrition rates among health professionals, combined with a predicted rise in life expectancy, further emphasize the need for additional health professionals. Work-related stress is a major concern among health professionals, affecting both the well-being of health professionals and the quality of patient care. OBJECTIVE: This scoping review aims to identify processes and methods for the automatic detection of work-related stress among health professionals using natural language processing (NLP) and text mining techniques. METHODS: This review follows Joanna Briggs Institute Methodology and PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. The inclusion criteria for this scoping review encompass studies involving health professionals using NLP for work-related stress detection while excluding studies involving other professions or children. The review focuses on various aspects, including NLP applications for stress detection, criteria for stress identification, technical aspects of NLP, and implications of stress detection through NLP. Studies within health care settings using diverse NLP techniques are considered, including experimental and observational designs, aiming to provide a comprehensive understanding of NLP's role in detecting stress among health professionals. Studies published in English, German, or French from 2013 to present will be considered. The databases to be searched include MEDLINE (via PubMed), CINAHL, PubMed, Cochrane, ACM Digital Library, and IEEE Xplore. Sources of unpublished studies and gray literature to be searched will include ProQuest Dissertations & Theses and OpenGrey. Two reviewers will independently retrieve full-text studies and extract data. The collected data will be organized in tables, graphs, and a qualitative narrative summary. This review will use tables and graphs to present data on studies' distribution by year, country, activity field, and research methods. Results synthesis involves identifying, grouping, and categorizing. The final scoping review will include a narrative written report detailing the search and study selection process, a visual representation using a PRISMA-ScR flow diagram, and a discussion of implications for practice and research. RESULTS: We anticipate the outcomes will be presented in a systematic scoping review by June 2024. CONCLUSIONS: This review fills a literature gap by identifying automated work-related stress detection among health professionals using NLP and text mining, providing insights on an innovative approach, and identifying research needs for further systematic reviews. Despite promising outcomes, acknowledging limitations in the reviewed studies, including methodological constraints, sample biases, and potential oversight, is crucial to refining methodologies and advancing automatic stress detection among health professionals. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/56267.


Subject(s)
Health Personnel , Natural Language Processing , Occupational Stress , Humans , Health Personnel/psychology , Occupational Stress/diagnosis , Occupational Stress/psychology
18.
J Neurosci Methods ; 407: 110162, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38740142

ABSTRACT

BACKGROUND: Progress in sleep research employing polysomnography (PSG) has been negatively impacted by the limited availability of open-source, sleep-specific analysis tools. NEW METHOD: Here, we introduce Counting Sheep PSG, an EEGLAB-compatible software for signal processing, visualization, event marking and manual sleep stage scoring of PSG data for MATLAB. RESULTS: Key features include: (1) signal processing tools including bad channel interpolation, down-sampling, re-referencing, filtering, independent component analysis, artifact subspace reconstruction, and power spectral analysis, (2) customizable display of polysomnographic data and hypnogram, (3) event marking mode including manual sleep stage scoring, (4) automatic event detections including movement artifact, sleep spindles, slow waves and eye movements, and (5) export of main descriptive sleep architecture statistics, event statistics and a publication-ready hypnogram. COMPARISON WITH EXISTING METHODS: Counting Sheep PSG was built on the foundation created by sleepSMG (https://sleepsmg.sourceforge.net/). The scope and functionalities of the current software represent significant advancements in terms of EEGLAB integration/compatibility, preprocessing, artifact correction, event detection, functionality and ease of use. By comparison, commercial software can be costly and utilize proprietary data formats and algorithms, thereby restricting the ability to distribute and share data and analysis results. CONCLUSIONS: The field of sleep research remains shackled by an industry that resists standardization, prevents interoperability, builds in planned obsolescence, and maintains proprietary black-box data formats and analysis approaches. This presents a major challenge for the field. Free, open-source software that can read open-format data is essential for scientific advancement in the field.


Subject(s)
Polysomnography , Signal Processing, Computer-Assisted , Sleep Stages , Software , Polysomnography/methods , Humans , Sleep Stages/physiology , Electroencephalography/methods , Artifacts
19.
Heliyon ; 10(10): e30957, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38803954

ABSTRACT

A self-driving car is necessary to implement traffic intelligence because it can vastly enhance both the safety of driving and the comfort of the driver by adjusting to the circumstances of the road ahead. Road hazards such as potholes can be a big challenge for autonomous vehicles, increasing the risk of crashes and vehicle damage. Real-time identification of road potholes is required to solve this issue. To this end, various approaches have been tried, including notifying the appropriate authorities, utilizing vibration-based sensors, and engaging in three-dimensional laser imaging. Unfortunately, these approaches have several drawbacks, such as large initial expenditures and the possibility of being discovered. Transfer learning is considered a potential answer to the pressing necessity of automating the process of pothole identification. A Convolutional Neural Network (CNN) is constructed to categorize potholes effectively using the VGG-16 pre-trained model as a transfer learning model throughout the training process. A Super-Resolution Generative Adversarial Network (SRGAN) is suggested to enhance the image's overall quality. Experiments conducted with the suggested approach of classifying road potholes revealed a high accuracy rate of 97.3%, and its effectiveness was tested using various criteria. The developed transfer learning technique obtained the best accuracy rate compared to many other deep learning algorithms.
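The transfer-learning setup described in this entry, a pretrained VGG-16 backbone with a small classification head, can be sketched in Keras; the input size, head width, and optimizer settings are assumptions, not the paper's configuration:

```python
import tensorflow as tf

# Pretrained VGG-16 convolutional base, frozen so only the new head learns.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # pothole vs. no pothole
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```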

20.
Primates ; 65(4): 265-279, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38758427

ABSTRACT

Individual identification plays a pivotal role in ecology and ethology, notably as a tool for understanding complex social structures. However, traditional identification methods often involve invasive physical tags and can prove both disruptive for animals and time-intensive for researchers. In recent years, the integration of deep learning in research has offered new methodological perspectives through the automatisation of complex tasks. Researchers increasingly harness object detection and recognition technologies to identify individuals in video footage. This study represents a preliminary exploration into the development of a non-invasive tool for face detection and individual identification of Japanese macaques (Macaca fuscata) through deep learning. The ultimate goal of this research is to use the identifications obtained on the dataset to automatically generate a social network representation of the studied population. The current main results are promising: (i) the creation of a Japanese macaque face detector (Faster-RCNN model), reaching an accuracy of 82.2%, and (ii) the creation of an individual recogniser for the Kojima Island macaque population (YOLOv8n model), reaching an accuracy of 83%. We also created a Kojima population social network by traditional methods, based on co-occurrences in videos. Thus, we provide a benchmark against which the automatically generated network will be assessed for reliability. These preliminary results are a testament to the potential of this approach to provide the scientific community with a tool for tracking individuals and studying social networks in Japanese macaques.


Subject(s)
Deep Learning , Macaca fuscata , Animals , Macaca fuscata/physiology , Female , Male , Social Networking , Japan , Facial Recognition
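The benchmark social network in this entry is built from co-occurrences of identified individuals in the same videos; a minimal NetworkX sketch with illustrative detections (the video-to-individual mapping is a placeholder):

```python
from itertools import combinations
import networkx as nx

# Illustrative detections: video id -> set of identified individuals.
video_individuals = {
    "vid_001": {"Kojima_A", "Kojima_B", "Kojima_C"},
    "vid_002": {"Kojima_A", "Kojima_C"},
    "vid_003": {"Kojima_B", "Kojima_C"},
}

graph = nx.Graph()
for individuals in video_individuals.values():
    for a, b in combinations(sorted(individuals), 2):
        # Edge weight = number of videos in which the pair co-occurs.
        previous = graph.get_edge_data(a, b, {"weight": 0})["weight"]
        graph.add_edge(a, b, weight=previous + 1)

print(graph.edges(data=True))
```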