Results 1 - 12 of 12
1.
Comput Med Imaging Graph ; 110: 102310, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37979340

ABSTRACT

Non-Small Cell Lung Cancer (NSCLC) accounts for about 85% of all lung cancers. Developing non-invasive techniques for NSCLC histology characterization may not only help clinicians make targeted therapeutic decisions but also spare subjects lung biopsy, which is challenging and can lead to clinical complications. The motivation behind the study presented here is to develop an advanced on-cloud decision-support system, named LUCY, for non-small cell LUng Cancer histologY characterization directly from thorax Computed Tomography (CT) scans. To this end, thorax CT scans of 182 LUng ADenocarcinoma (LUAD) and 186 LUng Squamous Cell carcinoma (LUSC) subjects were selected from four openly accessible data collections (NSCLC-Radiomics, NSCLC-Radiogenomics, NSCLC-Radiomics-Genomics and TCGA-LUAD). Two end-to-end neural networks, whose core layer is a convolutional long short-term memory layer, were implemented and compared; performance was evaluated on the test dataset (NSCLC-Radiomics-Genomics) from a subject-level perspective in relation to NSCLC histological subtype location and grade; and the achieved results were dynamically and visually interpreted by producing and analyzing one heatmap video for each scan. LUCY reached test Area Under the receiver operating characteristic Curve (AUC) values above 77% in all NSCLC histological subtype location and grade groups, and a best AUC of 97% on the entire dataset reserved for testing, proving high generalizability to heterogeneous data and robustness. Thus, LUCY is a clinically useful decision-support system able to provide timely, non-invasive, reliable and visually understandable predictions on LUAD and LUSC subjects in relation to clinically relevant information.
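The AUC values reported above summarize how well subject-level prediction scores separate the two histotypes. As a rough standard-library illustration (not the authors' evaluation code), AUC can be computed as the probability that a randomly chosen positive subject scores higher than a randomly chosen negative one:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic:
    the fraction of positive/negative score pairs ranked correctly,
    counting ties as half a point."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# e.g., LUSC-vs-LUAD scores for four test subjects (illustrative values)
print(auc([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]))  # -> 0.75
```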


Subjects
Carcinoma, Non-Small-Cell Lung; Carcinoma, Squamous Cell; Lung Neoplasms; Humans; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Carcinoma, Non-Small-Cell Lung/pathology; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Carcinoma, Squamous Cell/pathology; Tomography, X-Ray Computed/methods; ROC Curve
2.
Sensors (Basel) ; 23(17)2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37688059

ABSTRACT

Many "Industry 4.0" applications rely on data-driven methodologies such as Machine Learning and Deep Learning to enable automatic tasks and implement smart factories. Among these applications, the automatic quality control of manufacturing materials is of utmost importance for achieving precision and standardization in production. In this regard, most of the related literature has focused on combining Deep Learning with Nondestructive Testing techniques, such as Infrared Thermography, which require dedicated settings to detect and classify defects in composite materials. Instead, the research described in this paper aims at understanding whether deep neural networks and transfer learning can be applied to plain images to classify surface defects in carbon look components made with Carbon Fiber Reinforced Polymers used in the automotive sector. To this end, we collected a database of images from a real case study, with 400 images to test binary classification (defect vs. no defect) and 1500 for multiclass classification (components with no defect vs. recoverable vs. non-recoverable). We developed and tested ten deep neural networks as classifiers, comparing ten different pre-trained CNNs as feature extractors. Specifically, we evaluated VGG16, VGG19, ResNet50 version 2, ResNet101 version 2, ResNet152 version 2, Inception version 3, MobileNet version 2, NASNetMobile, DenseNet121, and Xception, all pre-trained on ImageNet and combined with fully connected layers acting as classifiers. The best classifier, i.e., the network based on DenseNet121, achieved 97% accuracy in classifying components with no defects, recoverable components, and non-recoverable components, demonstrating the viability of the proposed methodology for classifying surface defects from images taken with a smartphone in varying conditions, without the need for dedicated settings.
The collected images and the source code of the experiments are available in two public, open-access repositories, making the presented research fully reproducible.
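The 97% figure above is an overall multiclass accuracy. A minimal standard-library sketch of how overall accuracy and per-class recall might be tallied for the three-class task (class names and predictions are illustrative, not the study's data):

```python
from collections import Counter

def evaluate(y_true, y_pred):
    """Return overall accuracy and per-class recall (fraction of each
    true class that was predicted correctly)."""
    hits = sum(t == p for t, p in zip(y_true, y_pred))
    support = Counter(y_true)
    correct = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    recall = {c: correct[c] / support[c] for c in support}
    return hits / len(y_true), recall

# illustrative ground truth and predictions for the three classes
y_true = ["no_defect", "no_defect", "recoverable", "non_recoverable"]
y_pred = ["no_defect", "recoverable", "recoverable", "non_recoverable"]
acc, rec = evaluate(y_true, y_pred)
print(acc)  # -> 0.75
```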

3.
Diagnostics (Basel) ; 13(10)2023 May 09.
Article in English | MEDLINE | ID: mdl-37238168

ABSTRACT

Knowledge of the anatomical structures of the left heart, specifically the Left Atrium (LA) and the Left Ventricle (i.e., endocardium, LVendo, and epicardium, LVepi), is essential for the evaluation of cardiac functionality. Manual segmentation of cardiac structures from echocardiography is the baseline reference, but the results are user-dependent and time-consuming to obtain. With the aim of supporting clinical practice, this paper presents a new deep-learning (DL)-based tool for segmenting anatomical structures of the left heart from echocardiographic images. Specifically, it was designed as a combination of two convolutional neural networks, the YOLOv7 algorithm and a U-Net, and it aims to automatically segment an echocardiographic image into LVendo, LVepi and LA. The DL-based tool was trained and tested on the Cardiac Acquisitions for Multi-Structure Ultrasound Segmentation (CAMUS) dataset of the University Hospital of St. Etienne, which consists of echocardiographic images from 450 patients. For each patient, apical two- and four-chamber views at end-systole and end-diastole were acquired and annotated by clinicians. Globally, our DL-based tool was able to segment LVendo, LVepi and LA, providing Dice similarity coefficients of 92.63%, 85.59%, and 87.57%, respectively. In conclusion, the presented DL-based tool proved to be reliable in automatically segmenting the anatomical structures of the left heart and in supporting cardiological clinical practice.
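The Dice similarity coefficients above quantify overlap between a predicted mask and the clinicians' reference annotation. A minimal standard-library sketch for binary masks stored as nested lists (the actual pipeline works on image arrays):

```python
def dice(pred, ref):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = sum(p & r
                for row_p, row_r in zip(pred, ref)
                for p, r in zip(row_p, row_r))
    size = sum(map(sum, pred)) + sum(map(sum, ref))
    return 2.0 * inter / size if size else 1.0

# toy 2x2 predicted and reference masks (illustrative)
pred = [[1, 1], [1, 0]]
ref  = [[1, 1], [0, 0]]
print(dice(pred, ref))  # -> 0.8
```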

4.
Sensors (Basel) ; 23(4)2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36850537

ABSTRACT

Although face recognition technology is currently integrated into industrial applications, it still presents open challenges, such as verification and identification from arbitrary poses. Specifically, there is a lack of research on face recognition in surveillance videos using, as reference images, mugshots taken from multiple Points of View (POVs) in addition to the frontal picture and the right profile traditionally collected by national police forces. To start filling this gap and to tackle the scarcity of databases devoted to the study of this problem, we present the Face Recognition from Mugshots Database (FRMDB). It includes 28 mugshots and 5 surveillance videos taken from different angles for 39 distinct subjects. The FRMDB is intended to support the analysis of the impact of using mugshots taken from multiple points of view on face recognition on the frames of the surveillance videos. To validate the FRMDB and provide a first benchmark on it, we ran accuracy tests using two CNNs, namely VGG16 and ResNet50, pre-trained on the VGGFace and VGGFace2 datasets, for the extraction of face image features. We compared the results to those obtained on a dataset from the related literature, the Surveillance Cameras Face Database (SCFace). In addition to presenting the features of the proposed database, the results highlight that the subset of mugshots composed of the frontal picture and the right profile scores the lowest accuracy among those tested. Therefore, additional research is suggested to determine the ideal number of mugshots for face recognition on frames from surveillance videos.
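In this kind of pipeline, the pre-trained CNN maps each face image to a feature vector, and a probe frame is matched to the gallery subject whose features are most similar. A minimal standard-library sketch of cosine-similarity matching (the vectors and subject IDs here are illustrative, not actual VGGFace features):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def identify(probe, gallery):
    """Return the gallery subject whose feature vector best matches the probe."""
    return max(gallery, key=lambda sid: cosine(probe, gallery[sid]))

# toy gallery of per-subject feature vectors (illustrative)
gallery = {"subject_01": [1.0, 0.0], "subject_02": [0.0, 1.0]}
print(identify([0.9, 0.1], gallery))  # -> subject_01
```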


Subjects
Facial Recognition; Humans; Benchmarking; Databases, Factual; Videotape Recording
5.
J Pathol Inform ; 14: 100183, 2023.
Article in English | MEDLINE | ID: mdl-36687531

ABSTRACT

Computational pathology targets the automatic analysis of Whole Slide Images (WSI). WSIs are high-resolution digitized histopathology images, stained with chemical reagents to highlight specific tissue structures and scanned via whole slide scanners. The application of different parameters during WSI acquisition may lead to stain color heterogeneity, especially considering samples collected from several medical centers. Stain color heterogeneity often limits the robustness of methods developed to analyze WSIs, in particular Convolutional Neural Networks (CNNs), the state-of-the-art algorithms for most computational pathology tasks. Stain color heterogeneity is still an unsolved problem, although several methods have been developed to alleviate it, such as Hue-Saturation-Contrast (HSC) color augmentation and stain augmentation methods. The goal of this paper is to present Data-Driven Color Augmentation (DDCA), a method to improve the efficiency of color augmentation by increasing the reliability of the samples used for training computational pathology models. During CNN training, a database including over 2 million H&E color variations collected from private and public datasets is used as a reference to discard augmented data with color distributions that do not correspond to realistic data. DDCA is applied to HSC color augmentation, stain augmentation and H&E-adversarial networks in colon and prostate cancer classification tasks. It is then compared with 11 state-of-the-art baseline methods for handling color heterogeneity, showing that it can substantially improve classification performance on unseen data that include heterogeneous color variations.
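The core idea of DDCA, as described above, is to keep an augmented patch only if its color statistics resemble the reference H&E distribution. A much-simplified standard-library sketch using a single per-image statistic and percentile bounds (the real method works on full color distributions; the reference values below are illustrative):

```python
def percentile(values, q):
    """Nearest-rank percentile of a list of numbers (q in [0, 100])."""
    s = sorted(values)
    idx = min(len(s) - 1, max(0, round(q / 100 * (len(s) - 1))))
    return s[idx]

def make_filter(reference_stats, lo=5, hi=95):
    """Accept an augmented sample only if its color statistic falls
    within the [lo, hi] percentile band of the reference database."""
    low, high = percentile(reference_stats, lo), percentile(reference_stats, hi)
    return lambda stat: low <= stat <= high

# mean-hue values collected from realistic H&E tiles (illustrative)
reference = [0.80, 0.82, 0.85, 0.87, 0.90, 0.83, 0.86, 0.88, 0.84, 0.89]
is_realistic = make_filter(reference)
print(is_realistic(0.85), is_realistic(0.20))  # -> True False
```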

6.
Comput Methods Programs Biomed ; 227: 107191, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36335750

ABSTRACT

BACKGROUND AND OBJECTIVE: Alzheimer's disease accounts for approximately 70% of all dementia cases. Cortical and hippocampal atrophy caused by Alzheimer's disease can be appreciated easily from a T1-weighted structural magnetic resonance scan. Since a timely therapeutic intervention during the initial stages of the syndrome has a positive impact on both disease progression and the quality of life of affected subjects, Alzheimer's disease diagnosis is crucial. Thus, this study develops a robust yet lightweight 3D framework, Brain-on-Cloud, dedicated to efficient learning of Alzheimer's disease-related features from 3D structural magnetic resonance whole-brain scans. It improves our recent convolutional long short-term memory-based framework by integrating a set of data handling techniques, tuning the model hyper-parameters, and evaluating diagnostic performance on independent test data. METHODS: For this objective, four serial experiments were conducted on a scalable GPU cloud service. They were compared, and the hyper-parameters of the best experiment were tuned until the best-performing configuration was reached. In parallel, two branches were designed. In the first branch of Brain-on-Cloud, training, validation and testing were performed on OASIS-3. In the second branch, unenhanced data from ADNI-2 were employed as an independent test set, and the diagnostic performance of Brain-on-Cloud was evaluated to prove its robustness and generalization capability. The prediction scores were computed for each subject and stratified according to age, sex and mini mental state examination. RESULTS: In its best guise, Brain-on-Cloud is able to discriminate Alzheimer's disease with an accuracy of 92% and 76%, sensitivity of 94% and 82%, and area under the curve of 96% and 92% on OASIS-3 and on independent ADNI-2 test data, respectively.
CONCLUSIONS: Brain-on-Cloud proves to be a reliable, lightweight and easily-reproducible framework for the automatic diagnosis of Alzheimer's disease from 3D structural magnetic resonance whole-brain scans, performing well without segmenting the brain into its portions. Because it preserves the brain anatomy, its application and diagnostic ability can be extended to other cognitive disorders. Due to its cloud nature, computational lightness and fast execution, it can also be applied in real-time diagnostic scenarios to provide prompt clinical decision support.
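Stratifying subject-level prediction scores by metadata such as age, sex or MMSE, as described above, amounts to grouping scores by a field and summarizing each subgroup. A minimal standard-library sketch (field names and scores are illustrative):

```python
from collections import defaultdict

def stratify(records, key):
    """Group subject-level prediction scores by a metadata field and
    report the mean score per subgroup."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec["score"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# toy subject records (illustrative values)
subjects = [
    {"sex": "F", "age_band": "70-79", "score": 0.91},
    {"sex": "M", "age_band": "70-79", "score": 0.77},
    {"sex": "F", "age_band": "80-89", "score": 0.85},
]
print(stratify(subjects, "sex"))
```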


Subjects
Alzheimer Disease; Cognitive Dysfunction; Humans; Alzheimer Disease/diagnostic imaging; Alzheimer Disease/pathology; Quality of Life; Neuroimaging/methods; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Brain/pathology; Magnetic Resonance Spectroscopy
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1556-1560, 2022 07.
Article in English | MEDLINE | ID: mdl-36085720

ABSTRACT

Non-Small Cell Lung Cancer (NSCLC) represents up to 85% of all malignant lung nodules. Adenocarcinoma and squamous cell carcinoma account for 90% of all NSCLC histotypes. The standard diagnostic procedure for NSCLC histotype characterization combines 3D Computed Tomography (CT), especially in the form of low-dose CT, with lung biopsy. Since lung biopsy is invasive and challenging (especially for deeply-located lung cancers and for those close to blood vessels or airways), there is a need to develop non-invasive procedures for NSCLC histology classification. Thus, this study proposes Cloud-YLung for NSCLC histology classification directly from 3D CT whole-lung scans. To this end, data were selected from the openly-accessible NSCLC-Radiomics dataset and a modular pipeline was designed. Automatic feature extraction and classification were accomplished by means of a Convolutional Long Short-Term Memory (ConvLSTM)-based neural network trained from scratch on a scalable GPU cloud service to ensure machine-independent reproducibility of the entire framework. Results show that Cloud-YLung performs well in discriminating both NSCLC histotypes, achieving a test accuracy of 75% and an AUC of 84%. Cloud-YLung not only requires no lung nodule segmentation but is also the first framework that makes use of a ConvLSTM-based neural network to automatically extract high-throughput features from 3D CT whole-lung scans and classify them. Clinical relevance: Cloud-YLung is a promising framework to non-invasively classify NSCLC histotypes. Since it preserves the lung anatomy, its application could be extended to other pulmonary pathologies using 3D CT whole-lung scans.
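A ConvLSTM-based network treats a whole-lung scan as an ordered sequence of 2D slices. A minimal standard-library sketch of this framing, min-max normalizing each slice before stacking (the normalization choice and Hounsfield-like values are illustrative, not the authors' exact preprocessing):

```python
def normalize_slice(sl):
    """Min-max normalize one 2D slice to [0, 1]."""
    flat = [v for row in sl for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[0.0] * len(row) for row in sl]
    return [[(v - lo) / (hi - lo) for v in row] for row in sl]

def volume_to_sequence(volume):
    """Turn a 3D scan (list of 2D slices) into the ordered, normalized
    slice sequence a sequence model such as a ConvLSTM would consume."""
    return [normalize_slice(sl) for sl in volume]

# a toy 2-slice "scan" with Hounsfield-like values (illustrative)
scan = [[[-1000, 0], [40, 400]], [[-1000, -500], [-500, 0]]]
seq = volume_to_sequence(scan)
print(seq[0][0][0], seq[0][1][1])  # -> 0.0 1.0
```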


Subjects
Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Humans; Lung/pathology; Lung Neoplasms/diagnosis; Reproducibility of Results; Tomography, X-Ray Computed/methods
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1288-1291, 2022 07.
Article in English | MEDLINE | ID: mdl-36086141

ABSTRACT

Atrial fibrillation (AF) is a common supraventricular arrhythmia. Its automatic identification by standard 12-lead electrocardiography (ECG) is still challenging. Recently, deep learning has provided new instruments able to mimic the diagnostic ability of clinicians, but only in the case of binary classification (AF vs. normal sinus rhythm, NSR). However, binary classification is far from real clinical scenarios, where AF has to be discriminated also from several other physiological and pathological conditions. The aim of this work is to present a new AF multiclass classifier based on a convolutional neural network (CNN), able to discriminate AF from NSR, premature atrial contraction (PAC) and premature ventricular contraction (PVC). Overall, 2796 12-lead ECG recordings were selected from the open-source "PhysioNet/Computing in Cardiology Challenge 2021" database to construct a dataset of four balanced classes, namely the AF, PAC, PVC and NSR classes. Each lead of each ECG recording was decomposed into a spectrogram by continuous wavelet transform and saved as a 2D grayscale image, used to feed a 6-layer CNN. Considering the same CNN architecture, a multiclass classifier (all classes) and three binary classifiers (AF, PAC and PVC classes vs. NSR class) were created and validated by a stratified shuffle split cross-validation of 10 splits. Performance was quantified in terms of area under the curve (AUC) of the receiver operating characteristic. Multiclass classifier performance was high (AF class: 96.6%; PAC class: 95.3%; PVC class: 92.8%; NSR class: 97.4%) and preferable to that of the binary classifiers. Thus, our CNN AF multiclass classifier proved to be an efficient tool for AF discrimination from physiological and pathological confounders. Clinical relevance: our CNN AF multiclass classifier proved to be suitable for AF discrimination in real scenarios.
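A stratified shuffle split, like the 10-split validation described above, samples each split so that the class proportions are preserved. A minimal standard-library sketch of one such split (the study's exact split sizes are not reproduced here):

```python
import random

def stratified_split(labels, test_frac, seed=0):
    """One stratified shuffle split: per class, shuffle the indices and
    reserve test_frac of them for the test set."""
    rng = random.Random(seed)
    by_class = {}
    for idx, lab in enumerate(labels):
        by_class.setdefault(lab, []).append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        cut = int(len(idxs) * test_frac)
        test.extend(idxs[:cut])
        train.extend(idxs[cut:])
    return sorted(train), sorted(test)

# toy balanced four-class label list (illustrative sizes)
labels = ["AF"] * 10 + ["NSR"] * 10 + ["PAC"] * 10 + ["PVC"] * 10
train, test = stratified_split(labels, test_frac=0.2)
print(len(train), len(test))  # -> 32 8
```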


Subjects
Atrial Fibrillation; Ventricular Premature Complexes; Humans; Atrial Fibrillation/diagnosis; Electrocardiography/methods; Neural Networks, Computer; Wavelet Analysis
9.
Comput Biol Med ; 146: 105691, 2022 07.
Article in English | MEDLINE | ID: mdl-35691714

ABSTRACT

Lung cancer is among the deadliest cancers. Besides lung nodule classification and diagnosis, developing non-invasive systems to classify lung cancer histological types/subtypes may help clinicians to make targeted treatment decisions in a timely manner, with a positive impact on patients' comfort and survival rate. As convolutional neural networks have proven responsible for the significant improvement of accuracy in lung cancer diagnosis, with this survey we intend to: show the contribution of convolutional neural networks not only in identifying malignant lung nodules but also in classifying lung cancer histological types/subtypes directly from computed tomography data; point out the strengths and weaknesses of slice-based and scan-based approaches employing convolutional neural networks; and highlight the challenges and prospective solutions to successfully apply convolutional neural networks for such classification tasks. To this aim, we conducted a comprehensive analysis of relevant Scopus-indexed studies involved in lung nodule diagnosis and cancer histology classification up to January 2022, dividing the investigation into convolutional neural network-based approaches fed with planar or volumetric computed tomography data. Although the application of convolutional neural networks in lung nodule diagnosis and cancer histology classification is a valid strategy, some challenges remain, mainly the lack of publicly-accessible annotated data, together with the lack of reproducibility and clinical interpretability. We believe that this survey will be helpful for future studies involved in lung nodule diagnosis and cancer histology classification prior to lung biopsy by means of convolutional neural networks.


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Lung/diagnostic imaging; Lung Neoplasms/diagnosis; Neural Networks, Computer; Prospective Studies; Reproducibility of Results; Tomography, X-Ray Computed/methods
10.
Data Brief ; 33: 106587, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33318975

ABSTRACT

The automatic detection of violence and crimes in videos is gaining attention, specifically as a tool to unburden security officers and authorities from the need to watch hours of footage to identify events lasting a few seconds. So far, most of the available datasets were composed of few clips, in low resolution, often built on overly specific cases (e.g., hockey fights). While high-resolution datasets are emerging, there is still a need for datasets to test the robustness of violence detection techniques against false positives caused by behaviours that might resemble violent actions. To this end, we propose a dataset composed of 350 clips (MP4 video files, 1920 × 1080 pixels, 30 fps), labelled as non-violent (120 clips) when representing non-violent behaviours, and violent (230 clips) when representing violent behaviours. In particular, the non-violent clips include behaviours (hugs, claps, exulting, etc.) that can cause false positives in the violence detection task, due to fast movements and their similarity with violent behaviours. The clips were performed by non-professional actors, varying from 2 to 4 per clip.

11.
Data Brief ; 33: 106455, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33195774

ABSTRACT

Surface Electromyography (EMG) and Inertial Measurement Unit (IMU) sensors are gaining the attention of the research community as data sources for automatic sign language recognition. In this regard, we provide a dataset of EMG and IMU data collected using the Myo Gesture Control Armband during the execution of the 26 gestures of the Italian Sign Language alphabet. For each gesture, 30 data acquisitions were executed, for a total of 780 samples included in the dataset. The gestures were performed by the same subject (male, 24 years old) in lab settings. EMG and IMU data were collected in a 2-second time window, at a sampling frequency of 200 Hz.

12.
Math Biosci Eng ; 16(5): 6034-6046, 2019 06 29.
Article in English | MEDLINE | ID: mdl-31499751

ABSTRACT

Fetal heart rate (FHR) monitoring can serve as a benchmark to identify high-risk fetuses. The fetal phonocardiogram (FPCG) is the recording of the fetal heart sounds (FHS) by means of a small acoustic sensor placed on the maternal abdomen. Being heavily contaminated by noise, FPCG processing implies mandatory filtering to make FPCG clinically usable. The aim of the present study was to perform a comparative analysis of filters based on the Wavelet transform (WT), characterized by different combinations of mother Wavelets and thresholding settings. By combining three mother Wavelets (4th-order Coiflet, 4th-order Daubechies and 8th-order Symlet), two thresholding rules (Soft and Hard) and three thresholding algorithms (Universal, Rigorous and Minimax), 18 different WT-based filters were obtained and applied to 37 simulated and 119 experimental FPCG data (PhysioNet/PhysioBank). Filter performance was evaluated in terms of reliability in FHR estimation from filtered FPCG and of noise reduction quantified by the signal-to-noise ratio (SNR). The filter obtained by combining the 4th-order Coiflet mother Wavelet with the Soft thresholding rule and the Universal thresholding algorithm was found to be optimal on both simulated and experimental FPCG data, since it was able to maintain FHR with respect to the reference (138.7 [137.7; 140.8] bpm vs. 140.2 [139.7; 140.7] bpm, P > 0.05, in simulated FPCG data; 139.6 [113.4; 144.2] bpm vs. 140.5 [135.2; 146.3] bpm, P > 0.05, in experimental FPCG data) while strongly increasing the SNR (25.9 [20.4; 31.3] dB vs. 0.7 [-0.2; 2.9] dB, P < 10^-14, in simulated FPCG data; 22.9 [20.1; 25.7] dB vs. 15.6 [13.8; 16.7] dB, P < 10^-37, in experimental FPCG data). In conclusion, the WT-based filter obtained by combining the 4th-order Coiflet mother Wavelet with the Soft thresholding rule and the Universal thresholding algorithm provides the optimal WT-based filter for FPCG filtering according to evaluation criteria based on both noise and clinical features.
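The winning thresholding settings above can be illustrated concretely: the Universal threshold is λ = σ√(2 ln N), and the Soft rule shrinks each wavelet coefficient toward zero by λ. A minimal standard-library sketch applied to a short list of detail coefficients (illustrative values; the study applies this within a full WT decomposition of FPCG signals):

```python
import math

def universal_threshold(coeffs, sigma):
    """Universal threshold: lambda = sigma * sqrt(2 * ln N)."""
    return sigma * math.sqrt(2.0 * math.log(len(coeffs)))

def soft(x, lam):
    """Soft thresholding rule: shrink toward zero by lam, zeroing small values."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

coeffs = [3.0, -0.5, 1.2, -4.0]   # illustrative detail coefficients
lam = universal_threshold(coeffs, sigma=1.0)
denoised = [soft(c, lam) for c in coeffs]
print([round(c, 3) for c in denoised])
```

With the Hard rule, coefficients above λ would instead be kept unchanged; Soft additionally shrinks them, which is what the comparison above evaluates.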


Subjects
Phonocardiography/methods; Prenatal Diagnosis/methods; Wavelet Analysis; Acoustics; Algorithms; Cardiotocography/methods; Computer Simulation; Female; Heart Rate, Fetal; Heart Sounds; Humans; Pregnancy; Reproducibility of Results; Signal Processing, Computer-Assisted; Signal-To-Noise Ratio