1.
Sensors (Basel) ; 23(23)2023 Nov 25.
Article in English | MEDLINE | ID: mdl-38067779

ABSTRACT

Modern embedded systems have achieved relatively high processing power. They can be used for edge computing and computer vision, where data are collected and processed locally, without the need for network communication for decision-making or data analysis. Face detection, face recognition, and pose detection algorithms can be executed with acceptable performance on embedded systems and are used for home security and monitoring. However, popular machine learning frameworks, such as MediaPipe, require relatively high CPU usage while running, even when idle with no subject in the scene. Combined with the still-present false detections, this wastes CPU time, elevates power consumption and overall system temperature, and generates unnecessary data. In this study, a low-cost, low-resolution infrared thermal sensor array was used to control the execution of MediaPipe's pose detection algorithm on single-board computers, so that the algorithm runs only when the thermal camera detects a possible subject in its field of view. A lightweight algorithm with several filtering layers was developed, which allowed the effective detection and isolation of a person in the thermal image. The resulting hybrid computer vision system proved effective in reducing the average CPU workload, especially in environments with low activity, almost eliminating MediaPipe's false detections and reaching up to 30% power saving in the best-case scenario.
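The gating idea described above can be sketched in a few lines: a cheap check on each low-resolution thermal frame decides whether the expensive pose model runs at all. The 8x8 frame size, temperature delta, and blob-size bounds below are illustrative assumptions rather than the paper's tuned values, and `run_pose` is a stand-in for the MediaPipe pose call.

```python
import numpy as np

def subject_present(frame, ambient, delta=2.0, min_pixels=3, max_pixels=20):
    """Decide whether a possible human subject is in view of a low-resolution
    thermal array (e.g. 8x8). A subject should produce a small cluster of
    pixels clearly above ambient temperature; too few hot pixels is noise,
    too many suggests a non-subject heat source filling the frame."""
    hot = frame > (ambient + delta)       # pixels clearly above ambient
    n = int(hot.sum())
    return min_pixels <= n <= max_pixels

def process(frame, ambient, run_pose):
    """Gate the expensive pose estimator behind the thermal check."""
    if subject_present(frame, ambient):
        return run_pose(frame)            # e.g. run MediaPipe pose here
    return None                           # idle: skip inference, save CPU
```

In the idle case the single-board computer never touches the pose network, which is where the CPU and power savings come from.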


Subject(s)
Algorithms , Workload , Humans , Computers , Machine Learning
2.
Curr Med Imaging ; 2023 Oct 19.
Article in English | MEDLINE | ID: mdl-37881081

ABSTRACT

BACKGROUND: Artificial intelligence-based aided diagnostic systems for pulmonary nodules can be divided into subtasks such as nodule detection, segmentation, and benign-malignant differentiation. Most current studies are limited to single-target tasks. However, aided diagnosis aims to distinguish benign from malignant pulmonary nodules, which requires the fusion of multi-scale features and comprehensive discrimination based on the results of multiple learning tasks. OBJECTIVE: This study focuses on model design, network structure, and constraints, and proposes a novel model that integrates the learning tasks of pulmonary nodule detection, segmentation, and classification under weakly supervised conditions. METHODS: The main innovations are threefold: (1) a two-dimensional sequence detection model based on a ConvLSTM (Convolutional Long Short-Term Memory) network and a U-shaped network is proposed to fully capture the contextual spatial features of image slices; (2) a differential diagnosis of benign and malignant pulmonary nodules based on multitask learning is proposed, which uses the annotated data of different types of tasks to mine the potential common features among tasks; and (3) an optimization strategy incorporating prior knowledge of computed tomography images and dynamic weight adjustment of multiple tasks is proposed to ensure that each task trains and learns efficiently. RESULTS: Experiments on the LIDC-IDRI and LUNA16 datasets showed that the proposed method achieved a final competition performance metric (CPM) score of 87.80% for nodule detection and a Dice similarity coefficient (DSC) of 83.95% for pulmonary nodule segmentation.
CONCLUSION: Cross-validation on the LIDC-IDRI and LUNA16 datasets confirms these scores (87.80% CPM for detection, 83.95% DSC for segmentation), which represent the best results reported for these datasets.
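The dynamic weight adjustment in innovation (3) can be illustrated with a generic dynamic-weight-averaging scheme: tasks whose loss is falling more slowly receive larger weights in the joint loss, so no task stalls while the others train. The formula and temperature below are assumptions in the spirit of the abstract, not the paper's exact strategy.

```python
import math

def dynamic_task_weights(loss_prev, loss_prev2, temperature=2.0):
    """Per-task weights for a joint detection/segmentation/classification
    loss. ratio ~ 1 means the task's loss has stopped improving, so it
    receives a larger share of the total weight (which sums to the number
    of tasks). This is a generic dynamic-weight-averaging sketch."""
    ratios = [lp / lp2 for lp, lp2 in zip(loss_prev, loss_prev2)]
    exps = [math.exp(r / temperature) for r in ratios]
    k, total = len(ratios), sum(exps)
    return [k * e / total for e in exps]

def combined_loss(losses, weights):
    """Weighted sum of the task losses used for one optimisation step."""
    return sum(w * l for w, l in zip(weights, losses))
```

With equal loss histories all tasks get weight 1; a task whose loss barely moved between steps gets pushed above 1 at the expense of fast-improving tasks.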

3.
Brief Bioinform ; 24(3)2023 05 19.
Article in English | MEDLINE | ID: mdl-36932655

ABSTRACT

Determining drug-drug interactions (DDIs) is an important part of pharmacovigilance and has a vital impact on public health. Compared with drug trials, obtaining DDI information from scientific articles is a faster, lower-cost, and still highly credible approach. However, current DDI text extraction methods treat the instances generated from articles as independent and ignore the potential connections between different instances in the same article or sentence. Effective use of external text data could improve prediction accuracy, but existing methods cannot extract key information from external data accurately and reasonably, resulting in low utilization of external data. In this study, we propose a DDI extraction framework, instance position embedding and key external text for DDI (IK-DDI), which adopts instance position embedding and key external text to extract DDI information. The proposed framework integrates the article-level and sentence-level position information of the instances into the model to strengthen the connections between instances generated from the same article or sentence. Moreover, we introduce a comprehensive similarity-matching method that uses string and word-sense similarity to improve the matching accuracy between the target drug and the external text. Furthermore, a key-sentence search method is used to obtain key information from the external data. Therefore, IK-DDI can make full use of the connections between instances and the information contained in external text data to improve the efficiency of DDI extraction. Experimental results show that IK-DDI outperforms existing methods on both macro-averaged and micro-averaged metrics, which suggests that our method provides a complete framework for extracting relationships between biomedical entities and processing external text data.
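The similarity-matching and key-sentence-search steps can be sketched as follows. Here string similarity is a character-level ratio, and a plain token-overlap score stands in for the word-sense similarity component, since the abstract does not specify the sense measure; the mixing weight `alpha` is likewise an assumption.

```python
from difflib import SequenceMatcher

def string_sim(a, b):
    """Character-level similarity between a drug name and a candidate text."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def token_sim(a, b):
    """Crude stand-in for word-sense similarity: token-set Jaccard overlap."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def match_score(drug, candidate, alpha=0.5):
    """Blend string and 'sense' similarity; alpha is an assumed weight."""
    return alpha * string_sim(drug, candidate) + (1 - alpha) * token_sim(drug, candidate)

def key_sentences(drug, external_sentences, top_k=2):
    """Rank external sentences by how well they match the target drug,
    approximating the framework's key-sentence search step."""
    ranked = sorted(external_sentences, key=lambda s: match_score(drug, s), reverse=True)
    return ranked[:top_k]
```

The top-ranked sentences would then be fed to the extraction model as key external text for the target drug.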


Subject(s)
Data Mining , Pharmacovigilance , Data Mining/methods , Drug Interactions , Benchmarking , Drug Delivery Systems
4.
Emerg Med Int ; 2022: 3561147, 2022.
Article in English | MEDLINE | ID: mdl-35615106

ABSTRACT

Objective. The electrocardiogram (ECG) is an important diagnostic tool that has been the subject of much research in recent years. Owing to a lack of well-labeled ECG record databases, most of this work has focused on heartbeat arrhythmia detection based on ECG signal quality. Approach. A record quality filter was designed to judge ECG signal quality, and a random forest method, a multilayer perceptron, and a residual neural network (ResNet)-based convolutional neural network were implemented to provide baselines for ECG record classification according to three different principles. A new multimodel method was constructed by fusing the random forest and ResNet approaches. Main Results. Owing to its ability to combine discriminative hand-crafted features with ResNet deep features, the proposed method showed over 88% classification accuracy and yielded the best results in comparison with alternative methods. Significance. A new multimodel fusion method was presented for abnormal cardiovascular detection based on ECG data. The experimental results show that separable convolution and multiscale convolution are vital for ECG record classification and are effective for use with one-dimensional ECG sequences.
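The multimodel fusion step can be sketched as simple late fusion of the two models' class probabilities. The equal weighting and the stub predictors are assumptions, since the abstract does not state how the random-forest and ResNet outputs are combined.

```python
def fuse_predictions(rf_probs, resnet_probs, weight=0.5):
    """Late-fusion sketch: blend the random-forest class probabilities
    (from hand-crafted ECG features) with the ResNet probabilities (from
    the raw signal). The 0.5 weight is an assumed value."""
    return [weight * a + (1 - weight) * b for a, b in zip(rf_probs, resnet_probs)]

def classify(rf_probs, resnet_probs, labels):
    """Pick the label with the highest fused probability."""
    fused = fuse_predictions(rf_probs, resnet_probs)
    return labels[max(range(len(fused)), key=fused.__getitem__)]
```

Fusing at the probability level lets the two models stay independent, so either branch can be retrained or replaced without touching the other.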

5.
Comput Methods Programs Biomed ; 178: 135-143, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31416542

ABSTRACT

BACKGROUND AND OBJECTIVE: The electrocardiogram (ECG) is an important tool for the diagnosis of heart disorders. Useful features and a well-designed classification method are crucial for automatic diagnosis. However, most previous contributions used single-lead or two-lead ECG signals, and only features from a single lead were used to classify ECG beats. In this paper, a cascaded classification system is proposed to extract features and classify heartbeats, in order to improve the performance of ECG beat classification via multi-lead ECG. METHODS: In contrast with most of the literature, ten signal features were chosen and computed on each of the 12 leads separately. Based on these features, we developed a novel feature fusion method combining information from all available leads, and then implemented a cascaded classifier utilizing a random forest (RF) and a multilayer perceptron (MLP). In addition, principal component analysis (PCA) was applied to reduce the feature space dimension. MATERIALS: Four open-source databases were used in this work: the MIT-BIH Arrhythmia Database, the QT Database, the MIT-BIH Supraventricular Arrhythmia Database, and the St. Petersburg Institute of Cardiological Technics 12-lead Arrhythmia Database (INCART Database). These four databases differ in classes of beats, volume of data, and number of individual volunteers. Except for the INCART Database, whose recordings are 12-lead, the other three databases consist of 2-lead recordings. Above all, they all have annotations for every single beat, including its type. CONCLUSIONS: Extensive experimental results showed that the average accuracy reached 99.3%, 99.8%, 97.6%, and 99.6% on the four databases, respectively. Compared with most state-of-the-art methods, our work has better performance and strong generalization capability.
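The per-lead feature fusion and the RF-then-MLP cascade can be sketched as below. The confidence threshold and the stub model callables are illustrative assumptions; the abstract does not state how the cascade hands beats from one stage to the next.

```python
def fuse_leads(per_lead_features):
    """Concatenate the feature vectors computed on each available lead
    (e.g. ten features x 12 leads) into one fused vector. In the paper's
    pipeline, PCA would then reduce this vector's dimension."""
    return [x for lead in per_lead_features for x in lead]

def cascade_classify(features, rf_predict_proba, mlp_predict, threshold=0.9):
    """Cascade sketch: the random forest decides when it is confident;
    low-confidence beats fall through to the multilayer perceptron.
    The 0.9 confidence threshold is an assumed value."""
    probs = rf_predict_proba(features)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        return best                      # stage 1 (RF) decides
    return mlp_predict(features)         # stage 2 (MLP) refines
```

A cascade like this spends the costlier second-stage model only on ambiguous beats, which is one plausible reason to chain the two classifiers rather than average them.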


Subject(s)
Arrhythmias, Cardiac/diagnosis , Electrocardiography , Heart Rate , Signal Processing, Computer-Assisted , Wavelet Analysis , Algorithms , Databases, Factual , Humans , Neural Networks, Computer , Pattern Recognition, Automated , Principal Component Analysis , Reproducibility of Results , Software
6.
Med Biol Eng Comput ; 57(7): 1567-1580, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31025248

ABSTRACT

Lung cancer is one of the most commonly diagnosed cancers worldwide. Early diagnosis of pulmonary nodules in computed tomography (CT) chest scans is crucial for potential patients. Recent research has shown that deep learning-based methods have made significant progress in medical diagnosis. However, the achievements in the identification of pulmonary nodules are not yet satisfactory enough to be adopted in clinical practice, largely because of either many false positives or heavy processing time. Building on fully convolutional networks (FCNs), in this study we propose a new method of identifying pulmonary nodules. The method segments the suspected nodules from their surroundings and then removes the false positives. In particular, it optimizes the network architecture to identify nodules rapidly and accurately. To remove false positives, the suspected nodules are first reduced using 2D models. Furthermore, according to the significant differences between nodules and non-nodules in 3D shape, the remaining false positives are eliminated by integrating the candidates into 3D models and classifying them via 3D CNNs. Experiments on 1000 patients indicate that the proposed method achieved a 97.78% sensitivity rate for segmentation and a 90.1% accuracy rate for detection. The maximum response time was less than 30 s, and the average time was about 15 s. Graphical Abstract: This paper proposes a new method of identifying pulmonary nodules. The method segments the suspected nodules from CT images and removes the false positives. The proposed approach consists of three stages. In stage I, raw data are filtered and normalized. The clean, normalized data are then segmented in stage II to extract the suspected nodular lesions through 2D FCNs. Stage III removes false positives generated at stage II via 3D CNNs and outputs the final results.
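The three-stage pipeline in the graphical abstract can be sketched as a skeleton in which the trained networks are passed in as callables; `normalise`, `segment_2d`, and `classify_3d` are assumed stand-ins for the paper's preprocessing step, 2D FCN, and 3D CNN respectively.

```python
def identify_nodules(ct_volume, normalise, segment_2d, classify_3d):
    """Three-stage pipeline sketch: (I) filter/normalise the raw slices,
    (II) a 2D FCN proposes suspected nodules per slice, (III) a 3D CNN
    examines each candidate in its volumetric context and rejects
    false positives."""
    clean = [normalise(sl) for sl in ct_volume]               # stage I
    candidates = []
    for i, sl in enumerate(clean):
        candidates.extend((i, c) for c in segment_2d(sl))     # stage II
    return [c for c in candidates if classify_3d(clean, c)]   # stage III
```

Keeping the cheap 2D proposal stage separate from the 3D verification stage is what lets the method cut false positives without running the expensive 3D network over the whole volume.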


Subject(s)
Imaging, Three-Dimensional/methods , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed/methods , Algorithms , Databases, Factual , Humans , Lung Neoplasms/pathology , Sensitivity and Specificity , Solitary Pulmonary Nodule/diagnostic imaging