Results 1 - 12 of 12
1.
Int J Med Inform ; 192: 105636, 2024 Sep 24.
Article in English | MEDLINE | ID: mdl-39357217

ABSTRACT

BACKGROUND: The integration of Hospital Information Systems (HIS) into healthcare delivery has significantly enhanced patient care and operational efficiency. Nonetheless, the rapid acceleration of digital transformation has led to a substantial increase in the volume of data managed by these systems. This emphasizes the need for robust mechanisms for data management and quality assurance. OBJECTIVE: This study addresses data quality issues related to patient identifiers within the Hospital Information System (HIS) of a regional German hospital, focusing on improving the accuracy and consistency of these administrative data entries. METHODS: Employing a combination of data analysis and expert interviews, this study reviews and programmatically cleanses a dataset with over 2,000,000 patient data entries extracted from the HIS. The areas of investigation are patient admissions, discharges, and geographical data. RESULTS: The analysis revealed that roughly 25% of the dataset was rendered unusable by errors and inconsistencies. By implementing a thorough data cleansing process, we significantly enhanced the utility of the dataset. In doing so, we identified the primary issues affecting data quality, including ambiguities among similar variables and a gap between the intended and actual use of the system. CONCLUSION: The findings highlight the critical importance of enhancing data quality in healthcare information systems. This study shows the necessity of a careful review of data extracted from the HIS before it can be reliably utilized for machine learning tasks, thereby rendering the data more usable for both clinical and analytical purposes.
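A minimal sketch of the kind of rule-based, programmatic cleansing of admission, discharge, and geographical entries described above. The column names (patient_id, admission, discharge, postal_code) and the specific rules are illustrative assumptions, not the actual HIS schema or cleansing pipeline used in the study.

```python
# Illustrative cleansing of HIS-style administrative records (assumed schema).
import pandas as pd

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Parse timestamps; unparseable entries become NaT and are treated as errors.
    df["admission"] = pd.to_datetime(df["admission"], errors="coerce")
    df["discharge"] = pd.to_datetime(df["discharge"], errors="coerce")
    # Remove exact duplicates of the same administrative entry.
    df = df.drop_duplicates(subset=["patient_id", "admission"])
    # Discard logically impossible stays (discharge before admission).
    valid = df["discharge"].isna() | (df["discharge"] >= df["admission"])
    df = df[valid]
    # Normalize geographical data: keep a five-digit German postal code.
    df["postal_code"] = df["postal_code"].astype(str).str.extract(r"(\d{5})", expand=False)
    return df

if __name__ == "__main__":
    raw = pd.DataFrame({
        "patient_id": [1, 1, 2, 3],
        "admission": ["2023-01-02", "2023-01-02", "2023-03-01", "2023-04-10"],
        "discharge": ["2023-01-05", "2023-01-05", "2023-03-05", "2023-04-01"],
        "postal_code": ["91054", "91054", "D-90403", "90402"],
    })
    print(cleanse(raw))  # duplicate and inconsistent rows are dropped
```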

2.
J Clin Med ; 13(13)2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38999454

ABSTRACT

Background: Biologic disease-modifying antirheumatic drugs (bDMARDs) have shown efficacy in treating rheumatoid arthritis (RA). Predicting treatment outcomes for RA is crucial, as approximately 30% of patients do not respond to bDMARDs and only half achieve a sustained response. This study aims to leverage machine learning to predict both the initial response at 6 months and the sustained response at 12 months using baseline clinical data. Methods: Baseline clinical data were collected from 154 RA patients treated at the University Hospital in Erlangen, Germany. Five machine learning models were compared: Extreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), K-nearest neighbors (KNN), Support Vector Machines (SVM), and Random Forest. Nested cross-validation was employed to ensure robustness and avoid overfitting, integrating hyperparameter tuning within its inner loop. Results: XGBoost achieved the highest accuracy for predicting initial response (AUC-ROC of 0.91), while AdaBoost was the most effective for sustained response (AUC-ROC of 0.84). Key predictors included the Disease Activity Score-28 using erythrocyte sedimentation rate (DAS28-ESR), with higher baseline scores associated with lower response chances at 6 and 12 months. Shapley additive explanations (SHAP) identified the most important baseline features and visualized their directional effects on treatment response and sustained response. Conclusions: These findings can enhance RA treatment plans and support clinical decision-making, ultimately improving patient outcomes by predicting response before medication is started.
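A minimal sketch of nested cross-validation with XGBoost as described in this entry: hyperparameter tuning runs inside each outer fold so the outer AUC-ROC remains an unbiased estimate. The synthetic data and the small parameter grid are placeholders, not the study's clinical features or search space.

```python
# Nested cross-validation sketch (synthetic data, illustrative grid).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=154, n_features=20, random_state=0)

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # tuning folds
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)  # evaluation folds

search = GridSearchCV(
    estimator=XGBClassifier(eval_metric="logloss"),
    param_grid={"max_depth": [2, 3, 4], "n_estimators": [100, 300]},
    scoring="roc_auc",
    cv=inner,
)
scores = cross_val_score(search, X, y, cv=outer, scoring="roc_auc")
print(f"nested AUC-ROC: {scores.mean():.3f} +/- {scores.std():.3f}")
```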

3.
Sensors (Basel) ; 23(8)2023 Apr 07.
Article in English | MEDLINE | ID: mdl-37112142

ABSTRACT

The advancement of embedded sensor systems has enabled the monitoring of complex processes through connected devices. As these sensor systems produce more and more data, and as the data are used in increasingly critical areas of application, it is of growing importance to also track the data quality of these systems. We propose a framework to fuse sensor data streams and associated data quality attributes into a single meaningful and interpretable value that represents the current underlying data quality. The fusion algorithms are engineered on the basis of defined data quality attributes and metrics that yield real-valued figures representing the quality of each attribute. Methods based on maximum likelihood estimation (MLE) and fuzzy logic perform the data quality fusion by utilizing domain knowledge and sensor measurements. Two data sets are used to verify the proposed fusion framework: first, the methods are applied to a proprietary data set targeting sample rate inaccuracies of a micro-electro-mechanical system (MEMS) accelerometer, and second, to the publicly available Intel Lab Data set. The algorithms are verified against their expected behavior based on data exploration and correlation analysis. We show that both fusion approaches are capable of detecting data quality issues and providing an interpretable data quality indicator.
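An illustrative fusion of per-attribute quality scores into a single interpretable indicator, in the spirit of this entry. The attribute names, weights, and the simple weighted-average rule are assumptions for demonstration; the paper engineers its fusion with maximum likelihood estimation and fuzzy logic rather than this shortcut.

```python
# Toy fusion of data quality attribute scores (assumed attributes and weights).
from typing import Dict

def fuse_quality(scores: Dict[str, float], weights: Dict[str, float]) -> float:
    """Fuse per-attribute quality scores in [0, 1] into one indicator."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total

quality = {"completeness": 0.98, "timeliness": 0.80, "plausibility": 0.95}
weights = {"completeness": 1.0, "timeliness": 2.0, "plausibility": 1.0}
print(f"fused data quality: {fuse_quality(quality, weights):.2f}")
```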

4.
Animals (Basel) ; 13(5)2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36899661

ABSTRACT

Automated monitoring systems have become increasingly important for zoological institutions studying their animals' behavior. One crucial processing step for such a system is the re-identification of individuals when multiple cameras are used. Deep learning approaches have become the standard methodology for this task. Video-based methods in particular promise good re-identification performance, as they can leverage an animal's movement as an additional feature. This is especially important for applications in zoos, where specific challenges such as changing lighting conditions, occlusions, or low image resolutions must be overcome. However, large amounts of labeled data are needed to train such a deep learning model. We provide an extensively annotated dataset of 13 individual polar bears in 1431 sequences, equivalent to 138,363 images. PolarBearVidID is the first video-based re-identification dataset for a non-human species to date. Unlike typical human benchmark re-identification datasets, the polar bears were filmed in a range of unconstrained poses and lighting conditions. Additionally, a video-based re-identification approach is trained and tested on this dataset. The results show that the animals can be identified with a rank-1 accuracy of 96.6%. We thereby show that the movement of individual animals is a characteristic feature and that it can be utilized for re-identification.
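A small sketch of the rank-1 accuracy metric reported in this entry: a query is counted as correct when its nearest gallery embedding belongs to the same individual. The random embeddings stand in for the features a trained re-identification network would produce.

```python
# Rank-1 re-identification accuracy on placeholder embeddings.
import numpy as np

def rank1_accuracy(query_emb, query_ids, gallery_emb, gallery_ids):
    # Pairwise Euclidean distances between query and gallery embeddings.
    d = np.linalg.norm(query_emb[:, None, :] - gallery_emb[None, :, :], axis=-1)
    nearest = gallery_ids[d.argmin(axis=1)]
    return float((nearest == query_ids).mean())

rng = np.random.default_rng(0)
gallery_emb, gallery_ids = rng.normal(size=(13, 128)), np.arange(13)
query_emb = gallery_emb + 0.05 * rng.normal(size=gallery_emb.shape)  # noisy queries
print(rank1_accuracy(query_emb, np.arange(13), gallery_emb, gallery_ids))
```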

5.
Sensors (Basel) ; 22(14)2022 Jul 18.
Article in English | MEDLINE | ID: mdl-35891027

ABSTRACT

Efficient handwriting trajectory reconstruction (TR) typically requires specific writing surfaces to detect the movements of digital pens. Although several motion-based solutions have been developed to remove the need for writing surfaces, most of them are based on classical sensor fusion methods that, because of sensor error accumulation over time, are limited to tracing single strokes. In this work, we present an approach to map the movements of an IMU-enhanced digital pen to relative displacement data. Training data are collected by means of a tablet. We propose several pre-processing and data-preparation methods to synchronize data between the pen and the tablet, which have different sampling rates, and train a convolutional neural network (CNN) to reconstruct multiple strokes without the need for writing segmentation or post-processing correction of the predicted trajectory. The proposed system learns the relative displacement of the pen tip over time from the recorded raw sensor data, achieving a normalized error rate of 0.176 relative to the unit-scaled tablet ground truth (GT) trajectory. To test the effectiveness of the approach, we train a neural network for character recognition from the reconstructed trajectories, which achieves a character error rate of 19.51%. Finally, a joint model that uses both the IMU data and the generated trajectories outperforms the sensor-only recognition approach by 0.75%.
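A minimal sketch of a 1D CNN that maps a window of raw IMU samples to a relative pen-tip displacement, in the spirit of this entry. The window length, channel count, and layer sizes are illustrative assumptions rather than the architecture used in the paper.

```python
# Toy 1D CNN: IMU window -> relative (dx, dy) displacement.
import torch
import torch.nn as nn

class DisplacementCNN(nn.Module):
    def __init__(self, imu_channels: int = 13, window: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(imu_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # summarize the time dimension
        )
        self.head = nn.Linear(64, 2)          # predicted (dx, dy) for the window

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, imu_channels, window)
        return self.head(self.features(x).squeeze(-1))

model = DisplacementCNN()
dummy = torch.randn(8, 13, 64)               # a batch of raw IMU windows
print(model(dummy).shape)                    # -> torch.Size([8, 2])
```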


Subject(s)
Handwriting; Stroke; Humans; Movement; Neural Networks, Computer; Tablets
6.
Front Hum Neurosci ; 16: 806330, 2022.
Article in English | MEDLINE | ID: mdl-35572006

ABSTRACT

The Stroop test evaluates the ability to inhibit cognitive interference. This interference occurs when the processing of one stimulus characteristic affects the simultaneous processing of another attribute of the same stimulus. Eye movements are an indicator of the individual attention load required to inhibit cognitive interference. We used an eye tracker to collect eye movement data from more than 60 subjects, each performing four different but similar tasks (some with cognitive interference and some without). After extracting features related to fixations, saccades, and gaze trajectory, we trained different machine learning models to recognize the tasks performed in the different conditions (i.e., with or without interference). The models achieved good classification performance when distinguishing between similar tasks performed with or without cognitive interference. This suggests the presence of characterizing patterns common among subjects, which can be captured by machine learning algorithms despite the individual variability of visual behavior.
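A rough sketch of the pipeline this entry describes: summary features derived from fixations and saccades feed a classifier that separates interference from control conditions. The feature names, the random data, and the choice of classifier are placeholders, not the models compared in the study.

```python
# Eye-movement feature classification sketch (synthetic features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# One row per recording: mean fixation duration, fixation count,
# mean saccade amplitude, saccade rate, scanpath length (all placeholders).
X = rng.normal(size=(60, 5))
y = rng.integers(0, 2, size=60)   # 1 = interference condition, 0 = control

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```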

7.
Animals (Basel) ; 12(6)2022 Mar 10.
Article in English | MEDLINE | ID: mdl-35327089

ABSTRACT

The monitoring of animals under human care is a crucial tool for biologists and zookeepers to keep track of the animals' physical and psychological health. Additionally, it enables the analysis of observed behavioral changes and helps to unravel their underlying reasons. Enhancing our understanding of animals ensures and improves ex situ animal welfare as well as in situ conservation. However, traditional observation methods are time- and labor-intensive, as they require experts to observe the animals on-site during long and repeated sessions and to manually score their behavior. The development of automated observation systems would therefore greatly benefit researchers and practitioners in this domain. We propose an automated framework for basic behavior monitoring of individual animals under human care. Raw video data are processed to continuously determine the position of the individuals within the enclosure. The trajectories describing their travel patterns are presented, along with fundamental analyses, through a graphical user interface (GUI). We evaluate the performance of the framework on captive polar bears (Ursus maritimus) and show that it can localize and identify individual polar bears with an F1 score of 86.4%. The localization accuracy of the framework is 19.9 ± 7.6 cm, outperforming current manual observation methods. Furthermore, we provide a bounding-box-labeled dataset of the two polar bears housed in Nuremberg Zoo.
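A small sketch of the kind of fundamental trajectory analysis such a GUI could present: given per-frame positions of one animal in enclosure coordinates, compute the distance travelled. The sample positions are made up and the metric is only one example of the analyses mentioned above.

```python
# Distance travelled along a per-frame position track (illustrative data).
import numpy as np

def distance_travelled(positions: np.ndarray) -> float:
    """positions: (n_frames, 2) array of x/y coordinates in metres."""
    steps = np.diff(positions, axis=0)
    return float(np.linalg.norm(steps, axis=1).sum())

track = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.5], [2.5, 3.0]])
print(f"distance travelled: {distance_travelled(track):.2f} m")
```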

8.
Sensors (Basel) ; 21(21)2021 Oct 29.
Article in English | MEDLINE | ID: mdl-34770517

ABSTRACT

Smart sensors are an integral part of the Fourth Industrial Revolution and are widely used to add safety measures to human-robot interaction applications. With the advancement of machine learning methods in resource-constrained environments, smart sensor systems have become increasingly powerful. As more data-driven approaches are deployed on the sensors, it is of growing importance to monitor data quality at all times of system operation. We introduce a smart capacitive sensor system with an embedded data quality monitoring algorithm to enhance the safety of human-robot interaction scenarios. The smart capacitive skin sensor is capable of detecting the distance and angle of nearby objects using consumer-grade sensor electronics. To further address the safety aspect of the sensor, a dedicated layer that monitors data quality in real time is added to the sensor's embedded software. Two learning algorithms implement the sensor functionality: (1) a fully connected neural network to infer the position and angle of nearby objects and (2) a one-class SVM for data quality assessment based on out-of-distribution detection. We show that the sensor performs well under normal operating conditions within a range of 200 mm and also successfully detects abnormal operating conditions in the form of poor data quality. A mean absolute distance error of 11.6 mm was achieved without data quality indication. By monitoring the data quality, the overall performance of the sensor system was further improved to 7.5 mm, adding an additional layer of safety for human-robot interaction.
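A minimal sketch of out-of-distribution detection with a one-class SVM, the second learning algorithm mentioned in this entry. The synthetic "normal" sensor readings and the nu/gamma values are illustrative, not the paper's data or settings.

```python
# One-class SVM trained only on normal readings; -1 flags poor data quality.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_readings = rng.normal(loc=0.0, scale=1.0, size=(500, 8))

ood_detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ood_detector.fit(normal_readings)

new_samples = np.vstack([
    rng.normal(size=(3, 8)),            # in-distribution samples
    rng.normal(loc=6.0, size=(2, 8)),   # abnormal operating conditions
])
print(ood_detector.predict(new_samples))  # +1 = good data quality, -1 = poor
```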


Subject(s)
Robotics; Algorithms; Data Accuracy; Electronics; Humans; Monitoring, Physiological
9.
Front Neurol ; 11: 577362, 2020.
Article in English | MEDLINE | ID: mdl-33224092

ABSTRACT

Patients with Alzheimer's disease (AD) and Parkinson's disease (PD) develop a progressive decline of visual function. This condition aggravates overall cognitive and motor abilities, is a risk factor for developing hallucinations, and can have a significant influence on general quality of life. Visual problems are common complaints of patients with PD and AD in the early stages of the disease, but they also occur during normal aging, making it difficult to differentiate between normal and pathological conditions. As a result, their real incidence has remained largely underestimated, and no rehabilitative approaches have been standardized. With the aim of increasing awareness of ocular and visual disorders, we collected the main neuro-ophthalmologic and orthoptic parameters, including optical coherence tomography (OCT), in six patients with a diagnosis of PD, six patients with a diagnosis of early AD, and eight control subjects in an easily assessable outpatient setting. We also evaluated the patients' ability to recognize changes in facial expression. Our study demonstrates that visual problems, including blurred vision, diplopia, reading discomfort, photophobia, and glare, are commonly reported by patients with PD and AD. Moreover, abnormal eye alignment and vergence insufficiency were documented in all patients during examination. Despite the small sample size, we demonstrated greater ganglion cell and retinal nerve fiber layer (RNFL) damage and a deficit of facial emotion recognition in AD/PD patients with respect to a comparable group of normal elderly persons, with peculiarities depending on the disease. Ocular defects or visual discomfort can be properly evaluated in these patients and possibly corrected by means of lenses, orthoptic exercises, and visual rehabilitation. Such a practical approach may help ameliorate motor autonomy and reading ability, and may also reduce the risk of falls, with a positive impact on activities of daily living.

10.
Sci Rep ; 10(1): 16335, 2020 10 01.
Article in English | MEDLINE | ID: mdl-33005008

ABSTRACT

Visual attention refers to the human brain's ability to select relevant sensory information for preferential processing, improving performance in visual and cognitive tasks. It proceeds in two phases: one in which visual feature maps are acquired and processed in parallel, and another in which the information from these maps is merged in order to select a single location to be attended for further, more complex computations and reasoning. Its computational description is challenging, especially if the temporal dynamics of the process are taken into account. Numerous methods to estimate saliency have been proposed in the last three decades. They achieve almost perfect performance in estimating saliency at the pixel level, but the way they generate shifts in visual attention depends entirely on winner-take-all (WTA) circuitry, which the biological hardware implements in order to select the location with maximum saliency toward which to direct overt attention. In this paper we propose a gravitational model to describe attentional shifts: every single feature acts as an attractor, and the shifts are the result of the joint effects of the attractors. In this framework, the assumption of a single, centralized saliency map is no longer necessary, though still plausible. Quantitative results on two large image datasets show that this model predicts shifts more accurately than winner-take-all.
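An illustrative sketch of a gravitational attentional shift: every salient location acts as an attractor whose pull scales with its saliency ("mass") and falls off with squared distance, and the shift follows the joint pull. The inverse-square law and the toy saliency map are assumptions for demonstration; the paper's exact formulation may differ.

```python
# Joint gravitational pull of saliency attractors on the current gaze position.
import numpy as np

def attention_shift(saliency: np.ndarray, gaze: np.ndarray, eps: float = 1e-6):
    ys, xs = np.mgrid[0:saliency.shape[0], 0:saliency.shape[1]]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    diff = coords - gaze                          # vectors from gaze to attractors
    dist2 = (diff ** 2).sum(axis=1) + eps
    # mass / distance^2 along the unit direction toward each attractor
    force = (saliency.ravel() / dist2)[:, None] * diff / np.sqrt(dist2)[:, None]
    return force.sum(axis=0)                      # joint effect of all attractors

saliency = np.zeros((50, 50))
saliency[10, 40] = 1.0                            # a single bright feature
print(attention_shift(saliency, gaze=np.array([25.0, 25.0])))
```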


Subject(s)
Attention/physiology; Models, Neurological; Visual Perception/physiology; Eye Movements/physiology; Humans; Photic Stimulation
11.
IEEE Trans Pattern Anal Mach Intell ; 42(12): 2983-2995, 2020 12.
Article in English | MEDLINE | ID: mdl-31180885

ABSTRACT

Understanding the mechanisms behind the focus of attention in a visual scene is a problem of great interest in visual perception and computer vision. In this paper, we describe a model of the scanpath as a dynamic process that can be interpreted as a variational law related to mechanics, in which the focus of attention is subject to a gravitational field. The distributed virtual mass that drives eye movements is associated with the presence of details and motion in the video. Unlike most current models, the proposed approach does not estimate the saliency map directly; instead, the prediction of eye movements allows us to integrate the positions of interest over time. The process of inhibition of return is also supported within the same dynamic model in order to simulate fixations and saccades. The differential equations of motion of the proposed model are numerically integrated to simulate scanpaths on both images and videos. Experimental results for the tasks of saliency and scanpath prediction on a wide collection of datasets are presented to support the theory. Top-level performance is achieved especially in the prediction of scanpaths, which is the primary purpose of the proposed model.
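A toy numerical integration of a gaze point attracted to a virtual mass, with a simple inhibition-of-return term that damps the pull of an already-visited location. The specific dynamics (damping and decay constants, Euler stepping) are illustrative and not the paper's actual equations of motion.

```python
# Euler integration of damped gaze dynamics with a crude inhibition-of-return term.
import numpy as np

target = np.array([30.0, 40.0])      # location of a salient detail (virtual mass)
pos = np.array([0.0, 0.0])           # current gaze position
vel = np.zeros(2)
inhibition, dt = 0.0, 0.05

trajectory = []
for _ in range(200):
    pull = (target - pos) * (1.0 - inhibition)   # attraction, reduced by IoR
    acc = pull - 0.8 * vel                       # damped second-order dynamics
    vel += dt * acc
    pos += dt * vel
    # Inhibition of return builds up while the gaze dwells near the target.
    if np.linalg.norm(target - pos) < 2.0:
        inhibition = min(1.0, inhibition + 0.05)
    trajectory.append(pos.copy())

print(trajectory[-1])  # the gaze settles near the target, then the pull fades
```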


Subject(s)
Attention/physiology; Eye Movements/physiology; Image Processing, Computer-Assisted/methods; Machine Learning; Algorithms; Fixation, Ocular/physiology; Gravitation; Humans; Models, Statistical
12.
Prog Brain Res ; 249: 183-188, 2019.
Article in English | MEDLINE | ID: mdl-31325977

ABSTRACT

Eye movements are an essential part of human vision, as they drive the fovea and, consequently, selective visual attention toward a region of interest in space. Free visual exploration is an inherently stochastic process that depends not only on image statistics but also on the individual variability of cognitive and attentive state. We propose a theory of free visual exploration formulated entirely within the framework of physics and based on the general Principle of Least Action. Within this framework, differential laws describing eye movements emerge in accordance with bottom-up functional principles. In addition, we integrate top-down semantic information captured by deep convolutional neural networks pre-trained for the classification of common objects. To stress the model, we used a wide collection of images including basic features as well as high-level semantic content. Results on a saliency prediction task validate the theory.
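For reference, the generic statement of the Principle of Least Action invoked in this entry: a trajectory makes the action stationary, and the Euler-Lagrange equation yields the differential law of motion. This is the standard textbook formulation; the specific Lagrangian the paper derives for eye movements is not reproduced here.

```latex
% General Principle of Least Action (generic, not the paper's specific Lagrangian)
\[
  S[x] = \int_{t_0}^{t_1} L\bigl(x(t), \dot{x}(t), t\bigr)\, dt ,
  \qquad
  \delta S = 0
  \;\Longrightarrow\;
  \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0 .
\]
```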


Subject(s)
Attention/physiology; Eye Movements/physiology; Models, Theoretical; Neural Networks, Computer; Visual Perception/physiology; Humans