Results 1 - 6 of 6
1.
Data Brief ; 55: 110601, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38993233

ABSTRACT

The dataset contains eye-tracking data recorded while 55 volunteers solved 3 distinct neuropsychological tests on a screen inside a closed room. Of the 55 volunteers, 22 were women and 33 were men, all aged between 9 and 50, and 5 of them had been diagnosed with Attention Deficit Hyperactivity Disorder (ADHD) [1]. The eye-tracker used for data collection was an EyeTribe, which has a sampling rate of 60 Hz and an average visual angle error between 0.5 and 1 degree, corresponding to an on-screen error between 0.5 and 1 cm (approximately 0.1969 to 0.3937 in) when the distance to the user is around 60 cm (23.62 in) [2], as was the case during the collection of these data. The neuropsychological tests were implemented in a software package named NEURO-INNOVA KIDS® [3] and are the following: a domino test adapted from the D-48 intelligence test [4], an adaptation of the MASMI test consisting of unfolded cubes [5], the figure series completion test adapted from [6], and the Poppelreuter figures test [7]. Before each test, a calibration process was performed to ensure that the gaze error was less than or equal to 0.5 cm (0.1969 in) on screen, which is considered an acceptable calibration. The mean combined duration of the four administered tests was 20 minutes. This dataset holds significant promise for reuse because these neuropsychological assessments are widely used by healthcare practitioners to evaluate diverse cognitive faculties, and poor performance on these tests has been empirically associated with attention deficits [8].
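
As a quick illustration of the geometry behind those figures, the sketch below converts a gaze error expressed as a visual angle into the corresponding on-screen error at the roughly 60 cm viewing distance mentioned above; the function name and the example values are illustrative, not part of the dataset.

```python
import math

def visual_angle_to_screen_error(angle_deg: float, distance_cm: float = 60.0) -> float:
    """Convert a gaze error given as a visual angle (degrees) into the
    corresponding on-screen error (cm) at a given viewing distance."""
    return distance_cm * math.tan(math.radians(angle_deg))

# At the ~60 cm viewing distance used during collection, a 0.5-1 degree
# visual-angle error maps to roughly 0.5-1 cm on screen, as stated above.
for angle in (0.5, 1.0):
    print(f"{angle:.1f} deg -> {visual_angle_to_screen_error(angle):.2f} cm")
```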

2.
Diagnostics (Basel) ; 14(12)2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38928692

ABSTRACT

This paper introduces a novel one-dimensional convolutional neural network that uses clinical data to accurately detect choledocholithiasis, in which gallstones obstruct the common bile duct. Swift and precise detection of this condition is critical to preventing severe complications, such as biliary colic, jaundice, and pancreatitis. The model was rigorously compared with other machine learning methods commonly used for similar problems, such as logistic regression, linear discriminant analysis, and a state-of-the-art random forest, using a dataset derived from endoscopic retrograde cholangiopancreatography (ERCP) scans performed at Olive View-University of California, Los Angeles Medical Center. The one-dimensional convolutional neural network demonstrated exceptional performance, achieving 90.77% accuracy and 92.86% specificity, with an area under the curve of 0.9270. While the paper acknowledges potential areas for improvement, it emphasizes the effectiveness of the one-dimensional convolutional neural network architecture. The results suggest that this approach could serve as a plausible alternative to ERCP, given ERCP's disadvantages, such as the need for specialized equipment and skilled personnel and the risk of postoperative complications. The model's potential to significantly advance the clinical diagnosis of this gallstone-related condition is notable, offering a less invasive, potentially safer, and more accessible alternative.
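
For readers unfamiliar with applying a one-dimensional convolution to tabular clinical data, the following is a minimal sketch of such a network; the feature count, layer sizes, and class name are assumptions for illustration and do not reproduce the architecture described in the paper.

```python
import torch
import torch.nn as nn

class ClinicalCNN1D(nn.Module):
    """Minimal 1D CNN sketch for binary classification of clinical feature
    vectors (hypothetical layer sizes, not the paper's architecture)."""
    def __init__(self, n_features: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1),   # treat features as a 1D signal
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(16, 1)  # single logit: condition present vs. absent

    def forward(self, x):
        # x: (batch, n_features) -> add a channel dimension for Conv1d
        x = x.unsqueeze(1)
        x = self.conv(x).squeeze(-1)
        return self.fc(x)

# Example forward pass on a batch of 4 hypothetical patient records
model = ClinicalCNN1D(n_features=16)
logits = model(torch.randn(4, 16))
probs = torch.sigmoid(logits)
```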

3.
Micromachines (Basel) ; 14(9)2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37763967

ABSTRACT

The present work describes the training, and subsequent implementation on an FPGA board, of an LSTM neural network for modeling and predicting exceedances of criteria pollutants such as nitrogen dioxide (NO2), carbon monoxide (CO), and particulate matter (PM10 and PM2.5). Understanding the behavior of pollutants and assessing air quality in specific geographical regions is crucial, because overexposure to these pollutants can harm both natural ecosystems and living organisms, including humans. It is therefore essential to develop a solution that can accurately evaluate pollution levels. One potential approach is to implement a modified LSTM neural network on an FPGA board. This implementation achieved an 11% improvement over the original LSTM network, demonstrating that the proposed architecture maintains its functionality despite the reduced number of neurons in its initial layers. It shows the feasibility of integrating a prediction network into a resource-limited system such as an FPGA board that can nevertheless be easily coupled to other systems. Importantly, this implementation does not compromise the prediction accuracy for either the 24 h or the 72 h time frame, highlighting an opportunity for further enhancement and refinement.
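
A minimal sketch of the kind of LSTM forecaster described above is shown below; the window length, hidden size, and pollutant ordering are assumptions for illustration, and the sketch does not reflect the reduced-neuron architecture deployed on the FPGA.

```python
import torch
import torch.nn as nn

class PollutantLSTM(nn.Module):
    """Minimal LSTM sketch for forecasting pollutant concentrations
    (hypothetical sizes, not the FPGA implementation from the paper)."""
    def __init__(self, n_pollutants: int = 4, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_pollutants, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, n_pollutants)  # next-step concentrations

    def forward(self, x):
        # x: (batch, window_length, n_pollutants)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict from the last hidden state

# Example: 24-hour windows of NO2, CO, PM10, PM2.5 readings (random stand-ins)
model = PollutantLSTM()
window = torch.randn(8, 24, 4)
next_step = model(window)
```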

4.
Diagnostics (Basel) ; 12(12)2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36553037

ABSTRACT

Glaucoma is an eye disease that gradually deteriorates vision. Much research focuses on extracting information from the optic disc and optic cup, the structures used for measuring the cup-to-disc ratio. These structures are commonly segmented with deep learning techniques, primarily Encoder-Decoder models, which are hard to train and time-consuming. Object detection models based on convolutional neural networks can extract features from fundus retinal images with good precision; however, the superiority of one model over another for a specific task is still to be determined. The main goal of our approach is to compare the performance of object detection models for automatically segmenting cups and discs on fundus images. The novelty of this study is its examination of how different object detection models (Mask R-CNN, MS R-CNN, CARAFE, Cascade Mask R-CNN, GCNet, SOLO, Point_Rend) behave in the detection and segmentation of the optic disc and optic cup, evaluated on the Retinal Fundus Images for Glaucoma Analysis (REFUGE) and G1020 datasets. Reported metrics were Average Precision (AP), F1-score, IoU, and AUCPR. Several models achieved the highest AP with a perfect 1.000 when the IoU threshold was set at 0.50 on REFUGE, and the lowest was Cascade Mask R-CNN with an AP of 0.997. On the G1020 dataset, the best model was Point_Rend with an AP of 0.956, and the worst was SOLO with 0.906. We conclude that the reviewed methods achieved excellent performance, with high precision and recall values showing their efficiency and effectiveness. The question of how many images are needed was addressed with an initial value of 100, with excellent results. Data augmentation, multi-scale handling, and anchor box sizing brought improvements. The ability to transfer knowledge from one dataset to another also shows promising results.
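
Two of the quantities mentioned above, the IoU used as the detection threshold and the cup-to-disc ratio derived from the segmented structures, can be computed from binary masks as in the following sketch; the toy masks and helper names are illustrative only.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union between two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter) / float(union) if union else 0.0

def vertical_cup_to_disc_ratio(cup: np.ndarray, disc: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary masks of the two structures."""
    cup_rows = np.where(cup.any(axis=1))[0]
    disc_rows = np.where(disc.any(axis=1))[0]
    if len(cup_rows) == 0 or len(disc_rows) == 0:
        return 0.0
    cup_height = cup_rows.max() - cup_rows.min() + 1
    disc_height = disc_rows.max() - disc_rows.min() + 1
    return cup_height / disc_height

# Toy example with concentric square "disc" and "cup" masks
disc = np.zeros((100, 100), dtype=bool); disc[20:80, 20:80] = True
cup = np.zeros((100, 100), dtype=bool);  cup[35:65, 35:65] = True
print(iou(cup, disc), vertical_cup_to_disc_ratio(cup, disc))
```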

5.
J Air Waste Manag Assoc ; 72(10): 1095-1112, 2022 10.
Article in English | MEDLINE | ID: mdl-35816429

ABSTRACT

Atmospheric pollution refers to the presence of substances in the air, such as particulate matter (PM), that have a negative impact on the health of the exposed population, which makes it a topic of current interest. Because the geographic characteristics of the Metropolitan Zone of the Valley of Mexico do not allow proper ventilation, and because of its population density, a significant number of poor air quality events are registered there. This paper proposes a methodology to improve the forecasting of PM10 and PM2.5 in largely populated areas using a recurrent long short-term memory (LSTM) network optimized by the Ant Colony Optimization (ACO) algorithm. The experimental results show improved performance, reducing the error by around 13.00% in RMSE and 14.82% in MAE relative to the averaged results obtained by the LSTM deep neural network. Overall, the current study proposes a methodology to be explored further for improving different forecasting techniques in real-life applications where there is no need to respond in real time.

Implications: This contribution presents a methodology for the highly non-linear modeling of airborne particulate matter (both PM10 and PM2.5). Most linear approaches to this modeling problem are often not accurate enough for this type of data. In addition, most machine learning methods require extensive training or have problems with noise embedded in the time-series data. The proposed methodology handles the data in three stages: preprocessing, modeling, and optimization. In the preprocessing stage, the data are acquired and any missing values are imputed; this keeps the modeling process robust even when the acquired data contain errors, are invalid, or are missing. In the modeling stage, a recurrent deep neural network called LSTM (Long Short-Term Memory) is used; regardless of the monitoring station and the geographical characteristics of the site, the resulting model shows accurate and robust results. Furthermore, the optimization stage enhances the data modeling capability by using swarm intelligence algorithms (Ant Colony Optimization, in this case). The results presented in this study were compared with other works based on traditional algorithms, such as the multi-layer perceptron, traditional deep neural networks, and common spatiotemporal models, which shows the feasibility of the methodology presented in this contribution. Lastly, the advantages of using this methodology are highlighted.
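
The preprocessing and evaluation steps described in the implications can be sketched as follows; the column names, gap pattern, and choice of linear interpolation are assumptions for illustration, not the exact imputation scheme used in the study.

```python
import numpy as np
import pandas as pd

# Preprocessing stage: impute missing PM readings in an hourly series with gaps.
# Column names and the interpolation method are illustrative assumptions.
readings = pd.DataFrame({
    "PM10": [45.0, np.nan, 52.0, 60.0, np.nan, 41.0],
    "PM2.5": [20.0, 22.0, np.nan, 30.0, 28.0, np.nan],
})
imputed = readings.interpolate(method="linear", limit_direction="both")

# Evaluation stage: RMSE and MAE between forecasts and observations,
# the two error metrics reported above.
def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

observed = [48.0, 55.0, 43.0]
forecast = [50.0, 52.0, 47.0]
print(rmse(observed, forecast), mae(observed, forecast))
```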


Subject(s)
Air Pollutants , Particulate Matter , Air Pollutants/analysis , Environmental Monitoring/methods , Intelligence , Neural Networks, Computer , Particulate Matter/analysis
6.
Micromachines (Basel) ; 13(6)2022 May 31.
Article in English | MEDLINE | ID: mdl-35744504

ABSTRACT

Artificial intelligence techniques for pneumatic robot manipulators have become of great interest for industrial applications such as non-high-voltage environments, clean operations, and high power-to-weight-ratio tasks. The principal advantages of this type of actuator are the use of clean energy, low cost, and easy maintenance; the main disadvantage is its non-linear characteristics. This paper proposes an intelligent controller embedded in a programmable logic device to minimize the non-linearities of the air behavior in a 3-degrees-of-freedom robot with pneumatic actuators. The device is suitable for this case because it must handle several electric valves, direct-current motor signals, automatic controllers, and several neural networks. For every degree of freedom, three neurons adjust the gains of each controller, and the learning process continuously tunes the gain values to minimize the mean square error. The results show more appropriate transient behavior when the neurons work with the automatic controllers, with a minimum mean error of ±1.2 mm.
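
One common way to realize "neurons that adjust controller gains to minimize the mean square error" is a single-neuron adaptive PID scheme, sketched below; the discrete plant, learning rates, and constants are illustrative assumptions and do not represent the 3-degrees-of-freedom pneumatic robot or its programmable-logic implementation.

```python
import numpy as np

def simulate(steps=300, setpoint=1.0, K=0.12, eta=(0.4, 0.35, 0.3)):
    """Single-neuron adaptive PID sketch: one neuron's weights play the role
    of the P/I/D gains and are tuned online by a supervised Hebb rule that
    drives down the squared tracking error (illustrative values throughout)."""
    w = np.array([0.3, 0.3, 0.3])        # neuron weights ~ controller gains
    e_hist = [0.0, 0.0]                  # e(k-1), e(k-2)
    y, u = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        # Neuron inputs: incremental proportional, integral, derivative error terms
        x = np.array([e - e_hist[0], e, e - 2 * e_hist[0] + e_hist[1]])
        w_norm = w / np.sum(np.abs(w))
        u = u + K * float(w_norm @ x)    # incremental (velocity-form) control law
        # Supervised Hebb update: reinforce weights in proportion to the error
        w = w + np.array(eta) * e * u * x
        # Toy first-order discrete plant standing in for one actuated axis
        y = 0.9 * y + 0.1 * u
        e_hist = [e, e_hist[0]]
    return y, w

final_output, final_gains = simulate()
print(round(final_output, 3), final_gains)
```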
