Results 1 - 20 of 2,596
1.
J Med Internet Res; 26: e56127, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38963694

ABSTRACT

BACKGROUND: The endonasal endoscopic approach (EEA) is effective for pituitary adenoma resection. However, manual review of operative videos is time-consuming. The application of a computer vision (CV) algorithm could potentially reduce the time required for operative video review and facilitate the training of surgeons to overcome the learning curve of EEA. OBJECTIVE: This study aimed to evaluate the performance of a CV-based video analysis system, built on an OpenCV algorithm, to detect surgical interruptions and analyze surgical fluency in EEA. The accuracy of the CV-based video analysis was investigated, and the time required for operative video review using CV-based analysis was compared to that of manual review. METHODS: The dominant color of each frame in the EEA video was determined using OpenCV. We developed an algorithm to identify events of surgical interruption when alterations in the dominant color pixels reached certain thresholds. The thresholds were determined by training the algorithm on EEA videos. The accuracy of the CV analysis was determined by manual review, and the time spent was reported. RESULTS: A total of 46 EEA operative videos were analyzed, with 93.6%, 95.1%, and 93.3% accuracies in the training, test 1, and test 2 data sets, respectively. Compared with manual review, CV-based analysis reduced the time required for operative video review by 86% (manual review: 166.8 and CV analysis: 22.6 minutes; P<.001). The application of a human-computer collaborative strategy increased the overall accuracy to 98.5%, with a 74% reduction in the review time (manual review: 166.8 and human-CV collaboration: 43.4 minutes; P<.001). Analysis of the different surgical phases showed that the sellar phase had the lowest frequency (nasal phase: 14.9, sphenoidal phase: 15.9, and sellar phase: 4.9 interruptions/10 minutes; P<.001) and duration (nasal phase: 67.4, sphenoidal phase: 77.9, and sellar phase: 31.1 seconds/10 minutes; P<.001) of surgical interruptions. A comparison of the early and late EEA videos showed that increased surgical experience was associated with a decreased number (early: 4.9 and late: 2.9 interruptions/10 minutes; P=.03) and duration (early: 41.1 and late: 19.8 seconds/10 minutes; P=.02) of surgical interruptions during the sellar phase. CONCLUSIONS: CV-based analysis had a 93% to 98% accuracy in detecting the number, frequency, and duration of surgical interruptions occurring during EEA. Moreover, CV-based analysis reduced the time required to analyze surgical fluency in EEA videos compared to manual review. The application of CV can facilitate the training of surgeons to overcome the learning curve of endoscopic skull base surgery. TRIAL REGISTRATION: ClinicalTrials.gov NCT06156020; https://clinicaltrials.gov/study/NCT06156020.
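
The study's code is not included in the abstract; below is a minimal sketch of the approach it describes — extracting each frame's dominant color with OpenCV and flagging an interruption when that color shifts sharply between frames. The k-means settings, resize dimensions, and change threshold are illustrative assumptions, not the study's trained values (in the study, thresholds were tuned on training videos).

```python
import cv2
import numpy as np

def dominant_color(frame, k=3):
    """Dominant BGR color of a frame via k-means over its pixels."""
    pixels = cv2.resize(frame, (64, 64)).reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    counts = np.bincount(labels.ravel(), minlength=k)
    return centers[counts.argmax()]  # centre of the largest cluster

def find_interruptions(video_path, threshold=60.0):
    """Flag frame indices where the dominant color jumps by more than
    `threshold` (Euclidean distance in BGR space) from the previous frame."""
    cap = cv2.VideoCapture(video_path)
    events, prev, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        color = dominant_color(frame)
        if prev is not None and np.linalg.norm(color - prev) > threshold:
            events.append(idx)
        prev, idx = color, idx + 1
    cap.release()
    return events
```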


Subject(s)
Algorithms , Pituitary Neoplasms , Humans , Pituitary Neoplasms/surgery , Cohort Studies , Video Recording , Endoscopy/methods , Endoscopy/statistics & numerical data , Pituitary Gland/surgery , Male , Female , Adenoma/surgery
2.
Sci Rep; 14(1): 15063, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38956444

ABSTRACT

Soybean is an essential crop to fight global food insecurity and is of great economic importance around the world. Along with genetic improvements aimed at boosting yield, soybean seed composition also changed. Since conditions during crop growth and development influence nutrient accumulation in soybean seeds, remote sensing offers a unique opportunity to estimate seed traits from standing crops. Capturing phenological developments that influence seed composition requires frequent satellite observations at higher spatial and spectral resolutions. This study introduces a novel spectral fusion technique called multiheaded kernel-based spectral fusion (MKSF) that combines the higher spatial resolution of PlanetScope (PS) and spectral bands from Sentinel 2 (S2) satellites. The study also focuses on using the additional spectral bands and different statistical machine learning models to estimate seed traits, e.g., protein, oil, sucrose, starch, ash, fiber, and yield. The MKSF was trained using PS and S2 image pairs from different growth stages and predicted the potential VNIR1 (705 nm), VNIR2 (740 nm), VNIR3 (783 nm), SWIR1 (1610 nm), and SWIR2 (2190 nm) bands from the PS images. Our results indicate that VNIR3 prediction performance was the highest, followed by VNIR2, VNIR1, SWIR1, and SWIR2. Among the seed traits, sucrose yielded the highest predictive performance with the RFR model. Finally, the feature importance analysis revealed the importance of MKSF-generated vegetation indices from fused images.
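
The MKSF fusion itself is not specified in enough detail here to reproduce; the sketch below covers only the downstream step — fitting one of the statistical machine learning models to band/index features to predict a seed trait — assuming RFR denotes a random forest regressor. All feature names and data are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical design matrix: one row per plot, columns are band
# reflectances / vegetation indices derived from the fused imagery
# (e.g. VNIR1-3, SWIR1-2, and MKSF-generated indices).
rng = np.random.default_rng(0)
X = rng.random((120, 8))
y = 5 + X @ rng.random(8) + rng.normal(0, 0.1, 120)  # e.g. % sucrose

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())

model.fit(X, y)
print(model.feature_importances_)  # basis for a feature-importance analysis
```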


Subject(s)
Glycine max , Seeds , Glycine max/growth & development , Glycine max/genetics , Seeds/growth & development , Machine Learning , Remote Sensing Technology/methods , Crops, Agricultural/growth & development
3.
Sci Rep; 14(1): 15149, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956213

ABSTRACT

Dry eye syndrome (DES) is a tear film disorder caused by increased tear evaporation or decreased tear production. A heavy workload on the eyes and the increased usage of digital screens may decrease blink frequency, leading to an increased evaporation rate and an upsurge in the incidence and severity of DES. This study aims to assess the severity of DES symptoms and the risk factors among university students. A cross-sectional study was conducted at Umm AlQura University to evaluate the severity of DES among students and explore its potential association with digital screen use. Validated questionnaires were used to assess the severity of DES and digital screen usage. The study included 457 participants, of whom 13% had symptoms suggestive of severe DES. Furthermore, multiple risk factors had a significant association with the severity of DES, including gender, use of monitor filters, monitor and room brightness, and smoking habits. DES symptoms were prevalent among university students, particularly female students. There was no significant association with the duration of screen usage or college distribution; other factors, however, such as the usage of screen monitors and the brightness of both the monitor and the room, were significantly associated with the severity of DES symptoms.


Subject(s)
Dry Eye Syndromes , Students , Humans , Dry Eye Syndromes/epidemiology , Dry Eye Syndromes/diagnosis , Female , Saudi Arabia/epidemiology , Male , Cross-Sectional Studies , Risk Factors , Universities , Young Adult , Adult , Surveys and Questionnaires , Severity of Illness Index , Adolescent , Prevalence
4.
Heliyon; 10(12): e33039, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38988532

ABSTRACT

Objective: The aim of this study was to evaluate the impact of the COVID-19 pandemic on ocular health related to digital device usage among university students in Lebanon. Design: A cross-sectional design was utilized to examine the association between the pandemic and ocular health. Participants: A total of 255 university students in Lebanon participated in the study, selected based on their enrollment during the pandemic. Methods: An online survey assessed participants' digital device usage, awareness of digital eye strain, and experienced symptoms. The study addressed the relationship between symptom frequency and screen time, particularly in connection with the pandemic and online learning. Results: Prior to the pandemic, the majority of participants (73.0%) were unaware of digital eye strain. Following the transition to online learning, nearly half of the participants (47.0%) reported using digital devices for 12 or more hours per day. The majority (92.0%) experienced a substantial increase in daily digital device usage for learning, with an average increase of 3-5 h. Symptoms of digital eye strain, including headache, burning of the eyes, blurry vision, sensitivity to light, worsening of vision, and dryness of the eyes, intensified in both frequency and severity during the pandemic and online learning period. Conclusions: The study emphasizes the importance of promoting healthy habits and implementing preventive measures to reduce the prevalence of digital eye strain symptoms among university students. Healthcare professionals and public health authorities should educate individuals on strategies to alleviate digital eye strain, considering the persistent reliance on digital devices beyond the pandemic.

5.
Front Transplant; 3: 1305468, 2024.
Article in English | MEDLINE | ID: mdl-38993786

ABSTRACT

Two common obstacles limiting the performance of data-driven algorithms in digital histopathology classification tasks are the lack of expert annotations and the narrow diversity of datasets. Multi-instance learning (MIL) can address the former challenge for the analysis of whole slide images (WSI), but performance is often inferior to full supervision. We show that the inclusion of weak annotations can significantly enhance the effectiveness of MIL while keeping the approach scalable. An analysis framework was developed to process periodic acid-Schiff (PAS) and Sirius Red (SR) slides of renal biopsies. The workflow segments tissues into coarse tissue classes. Handcrafted and deep features were extracted from these tissues and combined using a soft attention model to predict several slide-level labels: delayed graft function (DGF), acute tubular injury (ATI), and Remuzzi grade components. A tissue segmentation quality metric was also developed to reduce the adverse impact of poorly segmented instances. The soft attention model was trained using 5-fold cross-validation on a mixed dataset and tested on the QUOD dataset containing n = 373 PAS and n = 195 SR biopsies. The average ROC-AUC over different prediction tasks was found to be 0.598 ± 0.011, significantly higher than using only ResNet50 (0.545 ± 0.012), only handcrafted features (0.542 ± 0.011), and the baseline (0.532 ± 0.012) of state-of-the-art performance. In conjunction with soft attention, weighting tissues by segmentation quality led to further improvement (AUC = 0.618 ± 0.010). Using an intuitive visualisation scheme, we show that our approach may also be used to support clinical decision making, as it allows pinpointing individual tissues relevant to the predictions.
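
The abstract names soft attention over tissue instances plus a segmentation-quality weighting; a minimal sketch of such a pooling module (in the spirit of attention-based MIL) follows. The dimensions, single attention head, and the way quality enters the attention scores are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SoftAttentionMIL(nn.Module):
    """Attention-based MIL pooling: per-tissue feature vectors are weighted
    by learned attention scores and pooled into one slide-level embedding."""
    def __init__(self, feat_dim=512, attn_dim=128, n_labels=4):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(),
            nn.Linear(attn_dim, 1))
        self.classifier = nn.Linear(feat_dim, n_labels)

    def forward(self, feats, quality=None):
        # feats: (n_instances, feat_dim); quality: optional (n_instances,)
        scores = self.attention(feats).squeeze(-1)
        if quality is not None:
            # Down-weight poorly segmented tissue: adding log(quality)
            # multiplies the softmax weight by the quality score.
            scores = scores + torch.log(quality + 1e-6)
        weights = torch.softmax(scores, dim=0)
        slide_embedding = (weights.unsqueeze(-1) * feats).sum(dim=0)
        return self.classifier(slide_embedding), weights

# Example: 37 tissue instances from one slide, with quality scores in [0, 1]
logits, attn = SoftAttentionMIL()(torch.randn(37, 512), torch.rand(37))
```

The returned attention weights also support the kind of visualisation the authors describe, since high-weight instances are the tissues driving the slide-level prediction.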

6.
Photodiagnosis Photodyn Ther; :104277, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39004111

ABSTRACT

BACKGROUND: This study aimed to investigate the choroidal vascularity index (CVI) in patients with computer vision syndrome (CVS) combined with accommodative lead. METHODS: This retrospective case-control study enrolled patients diagnosed with CVS and accommodative lead at the XXX Hospital affiliated with XXX University between July 2022 and May 2023. The control group included individuals without any ocular diseases. Ophthalmic assessments included basic visual acuity, refraction, ocular biometric parameters, and CVI. RESULTS: A total of 85 participants were included in the study, with 45 in the CVS group and 40 in the control group. The central corneal thickness of the CVS group was found to be significantly thinner compared to the control group in both the right eye (532.40±30.93 vs. 545.78±19.99 µm, P=0.019) and the left eye (533.96±29.57 vs. 547.56±20.39 µm, P=0.014). In comparison to the control group, the CVS group exhibited lower CVI in the superior (0.40±0.08 vs. 0.43±0.09, P=0.001), temporal (0.40±0.08 vs. 0.44±0.10, P<0.001), inferior (0.41±0.08 vs. 0.46±0.08, P<0.001), and nasal (0.41±0.08 vs. 0.44±0.08, P=0.001) quadrants. Similar differences were observed in all four quadrants within the 1-3 mm radius, and in the temporal (P=0.004) and inferior (P=0.002) quadrants within the 1-6 mm and 3-6 mm radii (all P<0.05). CONCLUSION: Compared to individuals without ocular issues, patients with CVS and accommodative lead were found to have thinner central corneal thickness and lower CVI.

7.
Artif Intell Med; 154: 102923, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38970987

ABSTRACT

Computerized cognitive training (CCT) is a scalable, well-tolerated intervention that has promise for slowing cognitive decline. The effectiveness of CCT is often affected by a lack of effective engagement. Mental fatigue is the primary factor compromising effective engagement in CCT, particularly in older adults at risk for dementia. There is a need for scalable, automated measures that can constantly monitor and reliably detect mental fatigue during CCT. Here, we develop and validate a novel Recurrent Video Transformer (RVT) method for monitoring real-time mental fatigue in older adults with mild cognitive impairment using their video-recorded facial gestures during CCT. The RVT model achieved the highest balanced accuracy (79.58%) and precision (0.82) compared to prior models for binary and multi-class classification of mental fatigue. We also validated our model by showing that its predictions related significantly to reaction time across CCT tasks (Wald χ² = 5.16, p = 0.023). By leveraging dynamic temporal information, the RVT model demonstrates the potential to accurately measure real-time mental fatigue, laying the foundation for future CCT research aiming to enhance effective engagement by timely prevention of mental fatigue.

8.
BMC Oral Health; 24(1): 772, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38987714

ABSTRACT

Integrating artificial intelligence (AI) into medical and dental applications can be challenging due to clinicians' distrust of computer predictions and the potential risks associated with erroneous outputs. We introduce the idea of using AI to trigger second opinions in cases where there is a disagreement between the clinician and the algorithm. By keeping the AI prediction hidden throughout the diagnostic process, we minimize the risks associated with distrust and erroneous predictions, relying solely on human predictions. The experiment involved 3 experienced dentists, 25 dental students, and 290 patients treated for advanced caries across 6 centers. We developed an AI model to predict pulp status following advanced caries treatment. Clinicians were asked to perform the same prediction without the assistance of the AI model. The second opinion framework was tested in a 1000-trial simulation. The average F1-score of the clinicians increased significantly from 0.586 to 0.645.
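
The decision rule described — keep the AI prediction hidden and escalate only on disagreement — reduces to a few lines; the function names below are hypothetical.

```python
def needs_second_opinion(clinician_pred, ai_pred):
    """The AI prediction stays hidden; it only triggers a second opinion
    when it disagrees with the clinician's blinded call."""
    return clinician_pred != ai_pred

def final_diagnosis(clinician_pred, ai_pred, second_reader):
    """Return the clinician's call, or escalate to another human reader
    on disagreement. The AI never outputs a diagnosis itself."""
    if needs_second_opinion(clinician_pred, ai_pred):
        return second_reader()
    return clinician_pred
```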


Subject(s)
Artificial Intelligence , Dental Caries , Humans , Dental Caries/therapy , Referral and Consultation , Patient Care Planning , Algorithms
9.
Sensors (Basel); 24(13), 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-39000823

ABSTRACT

Unmanned aerial vehicle (UAV)-based object detection methods are widely used in traffic detection due to their high flexibility and extensive coverage. In recent years, with the increasing complexity of the urban road environment, UAV object detection algorithms based on deep learning have gradually become a research hotspot. However, how to further improve algorithmic efficiency in response to the numerous and rapidly changing road elements, and thus achieve high-speed and accurate road object detection, remains a challenging issue. Given this context, this paper proposes the high-efficiency multi-object detection algorithm for UAVs (HeMoDU). HeMoDU reconstructs a state-of-the-art, deep-learning-based object detection model and optimizes several aspects to improve computational efficiency and detection accuracy. To validate the performance of HeMoDU in urban road environments, this paper uses the public urban road datasets VisDrone2019 and UA-DETRAC for evaluation. The experimental results show that the HeMoDU model effectively improves the speed and accuracy of UAV object detection.

10.
Sensors (Basel); 24(13), 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-39000900

ABSTRACT

In recent years, the technological landscape has undergone a profound metamorphosis catalyzed by the widespread integration of drones across diverse sectors. Essential to the drone manufacturing process is comprehensive testing, typically conducted in controlled laboratory settings to uphold safety and privacy standards. However, a formidable challenge emerges due to the inherent limitations of GPS signals within indoor environments, posing a threat to the accuracy of drone positioning. This limitation not only jeopardizes testing validity but also introduces instability and inaccuracies, compromising the assessment of drone performance. Given the pivotal role of precise GPS-derived data in drone autopilots, addressing this indoor-based GPS constraint is imperative to ensure the reliability and resilience of unmanned aerial vehicles (UAVs). This paper delves into the implementation of an Indoor Positioning System (IPS) leveraging computer vision. The proposed system endeavors to detect and localize UAVs within indoor environments through an enhanced vision-based triangulation approach. A comparative analysis with alternative positioning methodologies is undertaken to ascertain the efficacy of the proposed system. The results obtained showcase the efficiency and precision of the designed system in detecting and localizing various types of UAVs, underscoring its potential to advance the field of indoor drone navigation and testing.
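
The paper's enhanced triangulation approach is not detailed in the abstract; the sketch below shows the classical two-view triangulation step it builds on, assuming two calibrated, fixed cameras with known 3x4 projection matrices. The matrices and pixel coordinates here are placeholders, with identity intrinsics for brevity.

```python
import cv2
import numpy as np

# Hypothetical 3x4 projection matrices (intrinsics @ [R | t]) of two
# calibrated indoor cameras; these come from a prior calibration step.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Pixel coordinates of the detected drone in each view, shape (2, N)
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[300.0], [240.0]])

# Triangulate to homogeneous 3D coordinates and dehomogenize
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN
X = (X_h[:3] / X_h[3]).ravel()
print("Estimated drone position (calibration units, camera-1 frame):", X)
```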

11.
Sensors (Basel); 24(13), 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39000914

ABSTRACT

The acquisition of the body temperature of animals kept in captivity in biology laboratories is crucial for several studies in the field of animal biology. Traditionally, the acquisition process was carried out manually, which guaranteed neither accuracy nor consistency in the acquired data and was painful for the animal. The process was later switched to a semi-manual one using a thermal camera, but it still involved manually clicking on each part of the animal's body every 20 s of video to obtain temperature values, making it time-consuming, non-automatic, and difficult. This project aims to automate the acquisition process through automatic recognition of the parts of a lizard's body, reading the temperature of these parts from video taken with two cameras simultaneously: an RGB camera and a thermal camera. The first camera detects the location of the lizard's various body parts using artificial intelligence techniques, and the second camera allows the respective temperature of each part to be read. Due to the lack of lizard datasets, either in the biology laboratory or online, a dataset had to be created from scratch, containing the identification of the lizard and six of its body parts. YOLOv5 was used to detect the lizard and its body parts in RGB images, achieving a precision of 90.00% and a recall of 98.80%. After initial calibration, the RGB and thermal camera images are properly co-localised, making it possible to know the lizard's position through a coordinate conversion from the RGB image to the thermal image, even when the lizard is at the same temperature as its surrounding environment. The thermal image carries a colour temperature scale with the respective maximum and minimum temperature values, which is used to interpret each pixel of the thermal image, thus allowing the correct temperature to be read for each part of the lizard.
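
A minimal sketch of the coordinate-conversion step described above: after a one-off calibration, a point detected in the RGB image is mapped into the thermal image with a homography, and the thermal pixel is converted to a temperature using the frame's colour-scale limits. The correspondence points, the planar-scene approximation, and the 8-bit single-channel thermal image are assumptions.

```python
import cv2
import numpy as np

# Homography estimated once during calibration from >= 4 matched points
# identified in both camera views (values here are placeholders).
rgb_pts = np.float32([[100, 80], [520, 90], [510, 400], [110, 390]])
thr_pts = np.float32([[40, 30], [280, 35], [275, 210], [45, 205]])
H, _ = cv2.findHomography(rgb_pts, thr_pts)

def rgb_to_thermal(pt, H):
    """Map an (x, y) point from the RGB image into the thermal image."""
    src = np.float32([[pt]])                      # shape (1, 1, 2)
    return cv2.perspectiveTransform(src, H)[0, 0]

def read_temperature(thermal_gray, pt, t_min, t_max):
    """Linearly map an 8-bit thermal pixel onto the colour-scale range."""
    x, y = int(round(pt[0])), int(round(pt[1]))
    return t_min + (thermal_gray[y, x] / 255.0) * (t_max - t_min)

# e.g. centre of the YOLOv5 box around one body part; t_min / t_max are
# the scale limits printed on the thermal frame.
head_thr = rgb_to_thermal((312, 245), H)
# temp = read_temperature(thermal_frame, head_thr, t_min=22.0, t_max=35.0)
```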


Subject(s)
Artificial Intelligence , Body Temperature , Lizards , Animals , Lizards/physiology , Body Temperature/physiology , Video Recording/methods , Image Processing, Computer-Assisted/methods
12.
Sensors (Basel); 24(13), 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39001127

ABSTRACT

Compressive sensing (CS) is recognized for its adeptness at compressing signals, making it a pivotal technology in the context of sensor data acquisition. With the proliferation of image data in Internet of Things (IoT) systems, CS is expected to reduce the transmission cost of signals captured by various sensor devices. However, the quality of CS-reconstructed signals inevitably degrades as the sampling rate decreases, which poses a challenge in terms of the inference accuracy in downstream computer vision (CV) tasks. This limitation imposes an obstacle to the real-world application of existing CS techniques, especially for reducing transmission costs in sensor-rich environments. In response to this challenge, this paper contributes to the field of sensing technology a CV-oriented adaptive CS framework based on saliency detection, which enables sensor systems to intelligently prioritize and transmit the most relevant data. Unlike existing CS techniques, the proposal prioritizes the accuracy of reconstructed images for CV purposes, not just their visual quality. The primary objective of this proposal is to enhance the preservation of information critical for CV tasks while optimizing the utilization of sensor data. This work conducts experiments on various realistic scenario datasets collected by real sensor devices. Experimental results demonstrate superior performance compared to existing CS sampling techniques across the STL10, Intel, and Imagenette datasets for classification and KITTI for object detection. Compared with the baseline uniform sampling technique, the average classification accuracy shows a maximum improvement of 26.23%, 11.69%, and 18.25%, respectively, at specific sampling rates. In addition, even at very low sampling rates, the proposal is demonstrated to be robust in terms of classification and detection as compared to state-of-the-art CS techniques. This ensures essential information for CV tasks is retained, improving the efficacy of sensor-based data acquisition systems.
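
The framework itself is not published in the abstract; a hedged sketch of the allocation idea follows — computing a saliency map (here with OpenCV's spectral-residual detector, which requires opencv-contrib-python) and giving salient blocks a larger share of a fixed sampling budget. The block size and rate bounds are illustrative, not the paper's settings.

```python
import cv2
import numpy as np

def allocate_rates(image, base_rate=0.1, block=32):
    """Split the image into blocks and allocate per-block CS sampling
    rates in proportion to mean saliency, preserving the overall budget."""
    sal_engine = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal = sal_engine.computeSaliency(image)   # float map in [0, 1]
    h, w = sal.shape
    rates = np.zeros((h // block, w // block))    # trailing pixels ignored
    for i in range(rates.shape[0]):
        for j in range(rates.shape[1]):
            rates[i, j] = sal[i*block:(i+1)*block,
                              j*block:(j+1)*block].mean()
    rates = rates / rates.mean() * base_rate      # renormalise to the budget
    return np.clip(rates, 0.01, 1.0)
```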

13.
Sensors (Basel); 24(13), 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001152

ABSTRACT

The search for structural and microstructural defects using simple human vision is associated with significant errors in determining voids, large pores, and violations of the integrity and compactness of particle packing in the micro- and macrostructure of concrete. Computer vision methods, in particular convolutional neural networks, have proven to be reliable tools for the automatic detection of defects during visual inspection of building structures. The study's objective is to create and compare computer vision algorithms that use convolutional neural networks to identify and analyze damaged sections in concrete samples from different structures. Networks of the following architectures were selected for operation: U-Net, LinkNet, and PSPNet. The analyzed images are photos of concrete samples obtained by laboratory tests to assess quality in terms of defects in the integrity and compactness of the structure. During the implementation process, changes in quality metrics such as macro-averaged precision, recall, and F1-score, as well as IoU (Jaccard coefficient) and accuracy, were monitored. The best metrics were demonstrated by the U-Net model, supplemented by the cellular automaton algorithm: precision = 0.91, recall = 0.90, F1 = 0.91, IoU = 0.84, and accuracy = 0.90. The developed segmentation algorithms are universal and show high quality in highlighting areas of interest under any shooting conditions and different volumes of defective zones, regardless of their localization. The automation of the damage-area calculation and a recommendation in the "critical/uncritical" format can be used to assess the condition of concrete in various types of structures, adjust the formulation, and change the technological parameters of production.
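
The quality metrics monitored in the study are standard and can be computed directly from binary defect masks; a minimal numpy sketch follows, with an illustrative (not the study's) cut-off for the "critical/uncritical" recommendation.

```python
import numpy as np

def seg_metrics(pred, truth):
    """Precision, recall, F1, and IoU for binary defect masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    iou = tp / (tp + fp + fn + 1e-9)   # Jaccard coefficient
    return precision, recall, f1, iou

def damage_verdict(pred_mask, critical_fraction=0.05):
    """'Critical/uncritical' recommendation from the damaged-area share
    (the 5% cut-off here is an illustrative assumption)."""
    return "critical" if pred_mask.mean() > critical_fraction else "uncritical"
```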

14.
Patient Saf Surg; 18(1): 24, 2024 Jul 21.
Article in English | MEDLINE | ID: mdl-39034409

ABSTRACT

BACKGROUND: Retained surgical items (RSI) are preventable events that pose a significant risk to patient safety. Current strategies for preventing RSIs rely heavily on manual instrument counting methods, which are prone to human error. This study evaluates the feasibility and performance of a deep learning-based computer vision model for automated surgical tool detection and counting. METHODS: A novel dataset of 1,004 images containing 13,213 surgical tools across 11 categories was developed. The dataset was split into training, validation, and test sets at a 60:20:20 ratio. An artificial intelligence (AI) model was trained on the dataset, and the model's performance was evaluated using standard object detection metrics, including precision and recall. To simulate a real-world surgical setting, model performance was also evaluated in a dynamic surgical video of instruments being moved in real-time. RESULTS: The model demonstrated high precision (98.5%) and recall (99.9%) in distinguishing surgical tools from the background. It also exhibited excellent performance in differentiating between various surgical tools, with precision ranging from 94.0 to 100% and recall ranging from 97.1 to 100% across 11 tool categories. The model maintained strong performance on a subset of test images containing overlapping tools (precision range: 89.6-100%, and recall range 97.2-98.2%). In a real-time surgical video analysis, the model maintained a correct surgical tool count in all non-transition frames, with a median inference speed of 40.4 frames per second (interquartile range: 4.9). CONCLUSION: This study demonstrates that using a deep learning-based computer vision model for automated surgical tool detection and counting is feasible. The model's high precision and real-time inference capabilities highlight its potential to serve as an AI safeguard to potentially improve patient safety and reduce manual burden on surgical staff. Further validation in clinical settings is warranted.
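
The counting layer on top of such a detector reduces to a per-class tally plus a pre-/post-procedure comparison; a minimal sketch, assuming a hypothetical detection format of (class_name, confidence) pairs per frame.

```python
from collections import Counter

def count_tools(detections, conf_threshold=0.5):
    """Tally detected tools in one frame.

    `detections` is assumed to be a list of (class_name, confidence)
    pairs emitted by the detector for that frame."""
    return Counter(name for name, conf in detections if conf >= conf_threshold)

def count_mismatches(pre_incision, pre_closure):
    """RSI safeguard: report any tool category whose count changed
    between the baseline and the pre-closure count."""
    return {k: (pre_incision[k], pre_closure[k])
            for k in set(pre_incision) | set(pre_closure)
            if pre_incision[k] != pre_closure[k]}
```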

15.
Sci Rep; 14(1): 16817, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039136

ABSTRACT

Planting potatoes through plastic film with incomplete or excessive soil coverage over seed holes significantly impairs yield. Existing covering methods rely solely on mechanical transmissions, leading to bulky and inconsistent soil coverage of the seed holes. This paper reports an innovative method using a precise soil covering device based on the YOLOv4-tiny real-time object detection system to accurately identify potato plastic film holes and cover them with soil. The system adopts a lightweight and high-precision detection scheme, balancing increased network depth with reduced computation. It can identify holes in the plastic film in real time and with high accuracy. To verify the effectiveness of the YOLOv4-tiny real-time object detection system, a precise soil covering device based on this detection system was designed and applied to a double crank multi-rod hill-drop planter. Field tests revealed that the system's average accuracy rate for detecting holes is approximately 98%, with an average processing time of 15.15 ms per frame. This fast and accurate performance, combined with the device's robust real-time operation and anti-interference capabilities during soil covering, effectively reduces the problems of soil cover omission and repeated covering caused by existing mechanical transmission methods. The findings reported in this paper are valuable for the development of autonomous potato plastic film precise soil covering devices for commercial use.

16.
Data Brief; 55: 110679, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39044903

ABSTRACT

Digital image datasets for Precision Agriculture (PA) are still scarce. Many problems in this field of science have been studied to find solutions, such as detecting weeds, counting fruits and trees, and detecting diseases and pests, among others. One of the main fields of research in PA is detecting different crop types with aerial images. Crop detection is vital in PA to establish crop inventories, planting areas, and crop yields, and to make information available to food markets and public entities that provide technical help to small farmers. This work provides public access to a digital image dataset for detecting green onion and foliage flower crops located in the rural area of Medellín City, Colombia. The dataset consists of 245 images with their respective labels: green onion (Allium fistulosum), foliage flowers (Solidago canadensis and Aster divaricatus), and non-crop areas prepared for planting. A total of 4315 instances were obtained, which were divided into subsets for training, validation, and testing. The classes in the images were labeled with the polygon method, which allows training machine learning algorithms for detection using bounding boxes or segmentation in the COCO format.
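
Annotations in the COCO format can be loaded with pycocotools; a minimal sketch follows (the annotation file path is hypothetical, and polygon-style segmentations are assumed rather than RLE masks).

```python
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train.json")   # hypothetical path
cat_ids = coco.getCatIds()
print({c["id"]: c["name"] for c in coco.loadCats(cat_ids)})

img_ids = coco.getImgIds()
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[0]))
for ann in anns:
    # Polygon labels also carry a derived bounding box, so the same file
    # supports both segmentation and box-based detectors.
    print(ann["category_id"], ann["bbox"],
          len(ann["segmentation"][0]) // 2, "polygon vertices")
```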

17.
Data Brief; 55: 110564, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39044911

ABSTRACT

Seasonal vegetables play a crucial role in both nutrition and commerce in Bangladesh. Recognizing this significance, our research introduces the 'SeasVeg' dataset, comprising images of ten varieties of seasonal vegetables sourced from the Dhaka and Pabna regions. These include Carica papaya, Momordica dioica, Abelmoschus esculentus, Lablab purpureus, Trichosanthes cucumerina, Trichosanthes dioica, Solanum lycopersicum, Brassica oleracea, Momordica charantia, and Raphanus sativus. The dataset encompasses 4500 images, 1500 original and 3000 augmented, meticulously captured under natural light conditions to ensure authenticity. While our primary focus lies in leveraging machine learning and deep learning techniques for advancements in agricultural science, particularly in aiding healthcare aspects related to seasonal vegetables and nutrition, we acknowledge the versatile utility of our dataset. Beyond healthcare, it serves as a valuable educational resource, helping children and toddlers learn to identify these vital vegetables. This dual functionality broadens the dataset's appeal and underscores its societal impact beyond the realm of healthcare. The research also culminates in the implementation of machine learning models, achieving noteworthy accuracy: for computer-aided vegetable classification, the pre-trained ResNet50 CNN model achieved the highest accuracy at 99%, and the pre-trained InceptionV3 model reached 94%. The 'SeasVeg' dataset thus represents not only a significant stride in healthcare innovation but also a promising tool for educational endeavors, catering to diverse stakeholders and fostering interdisciplinary collaboration.
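
A hedged sketch of the transfer-learning setup the abstract implies — a pre-trained ResNet50 backbone with a new 10-class head in tf.keras. The directory layout, image size, and hyperparameters are assumptions, not the authors' configuration.

```python
import tensorflow as tf

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False,
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # train only the new classification head first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 vegetable classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory layout: SeasVeg/<class_name>/<image>.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "SeasVeg", image_size=(224, 224), validation_split=0.2,
    subset="training", seed=1)
# Apply the backbone's own input preprocessing to each batch
train_ds = train_ds.map(
    lambda x, y: (tf.keras.applications.resnet50.preprocess_input(x), y))
# model.fit(train_ds, epochs=10)
```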

18.
J Neurol Sci; 463: 123089, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38991323

ABSTRACT

BACKGROUND: The core clinical sign of Parkinson's disease (PD) is bradykinesia, for which a standard test is finger tapping: the clinician observes a person repetitively tap finger and thumb together. That requires an expert eye, a scarce resource, and even experts show variability and inaccuracy. Existing applications of technology to finger tapping reduce the tapping signal to one-dimensional measures, with researcher-defined features derived from those measures. OBJECTIVES: (1) To apply a deep learning neural network directly to video of finger tapping, without human-defined measures/features, and determine classification accuracy for idiopathic PD versus controls. (2) To visualise the features learned by the model. METHODS: 152 smartphone videos of 10 s of finger tapping were collected from 40 people with PD and 37 controls. We down-sampled the pixel dimensions and split the videos into 1 s clips. A 3D convolutional neural network was trained on these clips. RESULTS: For discriminating PD from controls, our model showed training accuracy 0.91, and test accuracy 0.69, with test precision 0.73, test recall 0.76 and test AUROC 0.76. We also report class activation maps for the five most predictive features. These show the spatial and temporal sections of video upon which the network focuses attention to make a prediction, including an apparent dropping thumb movement distinct for the PD group. CONCLUSIONS: A deep learning neural network can be applied directly to standard video of finger tapping, to distinguish PD from controls, without a requirement to extract a one-dimensional signal from the video, or pre-define tapping features.
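
The preprocessing described — down-sampling pixel dimensions and splitting videos into 1 s clips for a 3D CNN — can be sketched with OpenCV; the frame size here is an assumption.

```python
import cv2
import numpy as np

def video_to_clips(path, clip_seconds=1.0, size=(112, 112)):
    """Down-sample frames and split a video into 1-second clips,
    each a (frames, H, W, 3) array ready for a 3D CNN."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    per_clip = int(round(fps * clip_seconds))
    frames, clips = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, size))
        if len(frames) == per_clip:
            clips.append(np.stack(frames))
            frames = []
    cap.release()
    return clips  # any partial trailing clip is dropped
```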

19.
Article in English | MEDLINE | ID: mdl-38992406

ABSTRACT

Artificial intelligence (AI) refers to computer-based methodologies that use data to teach a computer to solve pre-defined tasks; these methods can be applied to identify patterns in large multi-modal data sources. AI applications in inflammatory bowel disease (IBD) include predicting response to therapy, disease activity scoring of endoscopy, drug discovery, and identifying bowel damage in images. As a complex disease with entangled relationships between genomics, metabolomics, the microbiome, and the environment, IBD stands to benefit greatly from methodologies that can handle this complexity. We describe current applications and critical challenges, and propose future directions of AI in IBD.

20.
PeerJ; 12: e17686, 2024.
Article in English | MEDLINE | ID: mdl-39006015

ABSTRACT

In the present investigation, we employ a novel and meticulously structured database assembled by experts, encompassing macrofungi field-collected in Brazil and featuring more than 13,894 photographs representing 505 distinct species. The purpose of utilizing this database is twofold: firstly, to furnish training and validation for convolutional neural networks (CNNs) with the capacity for autonomous identification of macrofungal species; secondly, to develop a sophisticated mobile application with an advanced user interface. This interface is specifically crafted to acquire images and, utilizing the image recognition capabilities afforded by the trained CNN, offer potential identifications for the macrofungal species depicted therein. Such technological advancements democratize access to the Brazilian Funga, enhancing public engagement and knowledge dissemination, and facilitating contributions from the populace to the expanding body of knowledge concerning the conservation of macrofungal species of Brazil.
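
The app's inference step amounts to running the trained CNN on a photo and returning the top candidate species; a minimal sketch, with a hypothetical model file and a placeholder label list.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("macrofungi_cnn.h5")   # hypothetical file
# Placeholder for the real list of 505 species names, in training-label order
species = [f"species_{i}" for i in range(505)]

def identify(image_path, top_k=5):
    """Return the top-k (species, probability) candidates for a photo."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img)[None] / 255.0
    probs = model.predict(x, verbose=0)[0]
    best = np.argsort(probs)[::-1][:top_k]
    return [(species[i], float(probs[i])) for i in best]
```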


Subject(s)
Deep Learning , Fungi , Brazil , Fungi/classification , Fungi/isolation & purification , Biodiversity , Neural Networks, Computer , Databases, Factual