Results 1 - 11 of 11
1.
Comput Med Imaging Graph ; 110: 102310, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37979340

ABSTRACT

Non-Small Cell Lung Cancer (NSCLC) accounts for about 85% of all lung cancers. Developing non-invasive techniques for NSCLC histology characterization may not only help clinicians make targeted therapeutic decisions but also spare subjects lung biopsy, which is challenging and can lead to clinical complications. The motivation behind the study presented here is to develop an advanced on-cloud decision-support system, named LUCY, for non-small cell LUng Cancer histologY characterization directly from thorax Computed Tomography (CT) scans. To this end, thorax CT scans of 182 LUng ADenocarcinoma (LUAD) and 186 LUng Squamous Cell carcinoma (LUSC) subjects were selected from four openly accessible data collections (NSCLC-Radiomics, NSCLC-Radiogenomics, NSCLC-Radiomics-Genomics and TCGA-LUAD); two end-to-end neural networks, whose core layer is a convolutional long short-term memory layer, were implemented and compared; performance was evaluated on the test dataset (NSCLC-Radiomics-Genomics) from a subject-level perspective in relation to NSCLC histological subtype location and grade; and the results were interpreted dynamically and visually by producing and analyzing one heatmap video for each scan. LUCY reached test Area Under the receiver operating characteristic Curve (AUC) values above 77% in all NSCLC histological subtype location and grade groups, and a best AUC value of 97% on the entire dataset reserved for testing, proving high generalizability to heterogeneous data and robustness. Thus, LUCY is a clinically useful decision-support system able to provide timely, non-invasive, reliable and visually understandable predictions on LUAD and LUSC subjects in relation to clinically relevant information.
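A minimal sketch of such a classifier whose core layer is a convolutional long short-term memory layer, treating the axial slices of a thorax CT scan as a sequence and outputting a LUAD-vs-LUSC probability; input size and hyper-parameters are illustrative assumptions, not the authors' published LUCY configuration:

```python
# Sketch of a ConvLSTM-based slice-sequence classifier (assumed sizes, not LUCY's).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SLICES, HEIGHT, WIDTH = 64, 128, 128  # assumed resampled scan geometry

def build_convlstm_classifier():
    inputs = layers.Input(shape=(NUM_SLICES, HEIGHT, WIDTH, 1))
    # Core recurrent-convolutional layer scanning the slice sequence.
    x = layers.ConvLSTM2D(16, kernel_size=3, padding="same",
                          return_sequences=False)(inputs)
    x = layers.BatchNormalization()(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(32, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # LUAD vs. LUSC probability
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

model = build_convlstm_classifier()
model.summary()
```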


Subjects
Non-Small-Cell Lung Carcinoma, Squamous Cell Carcinoma, Lung Neoplasms, Humans, Non-Small-Cell Lung Carcinoma/diagnostic imaging, Non-Small-Cell Lung Carcinoma/pathology, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/pathology, Squamous Cell Carcinoma/pathology, X-Ray Computed Tomography/methods, ROC Curve
2.
Sensors (Basel) ; 23(17)2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37688059

ABSTRACT

Many "Industry 4.0" applications rely on data-driven methodologies such as Machine Learning and Deep Learning to enable automatic tasks and implement smart factories. Among these applications, the automatic quality control of manufacturing materials is of utmost importance to achieve precision and standardization in production. In this regard, most of the related literature focused on combining Deep Learning with Nondestructive Testing techniques, such as Infrared Thermography, requiring dedicated settings to detect and classify defects in composite materials. Instead, the research described in this paper aims at understanding whether deep neural networks and transfer learning can be applied to plain images to classify surface defects in carbon look components made with Carbon Fiber Reinforced Polymers used in the automotive sector. To this end, we collected a database of images from a real case study, with 400 images to test binary classification (defect vs. no defect) and 1500 for the multiclass classification (components with no defect vs. recoverable vs. non-recoverable). We developed and tested ten deep neural networks as classifiers, comparing ten different pre-trained CNNs as feature extractors. Specifically, we evaluated VGG16, VGG19, ResNet50 version 2, ResNet101 version 2, ResNet152 version 2, Inception version 3, MobileNet version 2, NASNetMobile, DenseNet121, and Xception, all pre-trainined with ImageNet, combined with fully connected layers to act as classifiers. The best classifier, i.e., the network based on DenseNet121, achieved a 97% accuracy in classifying components with no defects, recoverable components, and non-recoverable components, demonstrating the viability of the proposed methodology to classify surface defects from images taken with a smartphone in varying conditions, without the need for dedicated settings. The collected images and the source code of the experiments are available in two public, open-access repositories, making the presented research fully reproducible.

3.
Sensors (Basel) ; 23(4)2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36850537

ABSTRACT

Although face recognition technology is currently integrated into industrial applications, it still presents open challenges, such as verification and identification from arbitrary poses. Specifically, there is a lack of research on face recognition in surveillance videos using, as reference images, mugshots taken from multiple Points of View (POVs) in addition to the frontal picture and the right profile traditionally collected by national police forces. To start filling this gap and tackling the scarcity of databases devoted to the study of this problem, we present the Face Recognition from Mugshots Database (FRMDB). It includes 28 mugshots and 5 surveillance videos taken from different angles for 39 distinct subjects. The FRMDB is intended to analyze the impact of using mugshots taken from multiple points of view on face recognition on the frames of the surveillance videos. To validate the FRMDB and provide a first benchmark on it, we ran accuracy tests using two CNNs, namely VGG16 and ResNet50, pre-trained on the VGGFace and VGGFace2 datasets, for the extraction of face image features. We compared the results to those obtained on a dataset from the related literature, the Surveillance Cameras Face Database (SCFace). In addition to showing the features of the proposed database, the results highlight that the subset of mugshots composed of the frontal picture and the right profile yields the lowest accuracy among those tested. Therefore, further research is suggested to identify the ideal number of mugshots for face recognition on frames from surveillance videos.
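A minimal sketch of the matching step implied by this setup: given face embeddings extracted from mugshots and from surveillance-video frames (e.g., with VGG16/ResNet50 pre-trained on VGGFace/VGGFace2), a frame is assigned to the subject whose mugshot embeddings are most similar on average; the embedding extractor itself is assumed to be available:

```python
# Sketch of mugshot-to-frame matching by average cosine similarity of embeddings.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(frame_embedding, mugshot_embeddings):
    """Return the subject ID whose mugshot set best matches the frame.

    mugshot_embeddings: dict mapping subject ID -> list of embedding vectors,
    one per mugshot POV (frontal, right profile, additional angles, ...).
    """
    scores = {
        subject_id: np.mean([cosine_similarity(frame_embedding, m) for m in mugshots])
        for subject_id, mugshots in mugshot_embeddings.items()
    }
    return max(scores, key=scores.get)
```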


Subjects
Facial Recognition, Humans, Benchmarking, Factual Databases, Videotape Recording
4.
J Pathol Inform ; 14: 100183, 2023.
Article in English | MEDLINE | ID: mdl-36687531

ABSTRACT

Computational pathology targets the automatic analysis of Whole Slide Images (WSI). WSIs are high-resolution digitized histopathology images, stained with chemical reagents to highlight specific tissue structures and scanned via whole slide scanners. The application of different parameters during WSI acquisition may lead to stain color heterogeneity, especially considering samples collected from several medical centers. Stain color heterogeneity often limits the robustness of methods developed to analyze WSIs, in particular Convolutional Neural Networks (CNNs), the state-of-the-art algorithms for most computational pathology tasks. Stain color heterogeneity is still an unsolved problem, although several methods have been developed to alleviate it, such as Hue-Saturation-Contrast (HSC) color augmentation and stain augmentation methods. The goal of this paper is to present Data-Driven Color Augmentation (DDCA), a method to improve the efficiency of color augmentation methods by increasing the reliability of the samples used for training computational pathology models. During CNN training, a database including over 2 million H&E color variations collected from private and public datasets is used as a reference to discard augmented data with color distributions that do not correspond to realistic data. DDCA is applied to HSC color augmentation, stain augmentation and H&E-adversarial networks in colon and prostate cancer classification tasks. DDCA is then compared with 11 state-of-the-art baseline methods for handling color heterogeneity, showing that it can substantially improve classification performance on unseen data that includes heterogeneous color variations.
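A minimal sketch of the data-driven filtering idea behind DDCA, under the simplifying assumption that an augmented H&E patch is accepted only when its per-channel colour statistics fall within bounds estimated from a reference set of realistic patches (the paper's reference database and acceptance criterion may differ):

```python
# Sketch: reject augmented patches whose colour statistics are unrealistic.
import numpy as np

def build_reference_bounds(reference_patches, margin=3.0):
    """Per-channel mean-colour bounds estimated from realistic reference patches."""
    means = np.stack([p.reshape(-1, 3).mean(axis=0) for p in reference_patches])
    center, spread = means.mean(axis=0), means.std(axis=0)
    return center - margin * spread, center + margin * spread

def is_realistic(augmented_patch, lower, upper):
    """Keep the augmented patch only if its mean colour lies inside the bounds."""
    mean_color = augmented_patch.reshape(-1, 3).mean(axis=0)
    return bool(np.all(mean_color >= lower) and np.all(mean_color <= upper))

# Usage: filter the output of any colour/stain augmentation before training.
# kept = [p for p in augmented_patches if is_realistic(p, lower, upper)]
```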

5.
Comput Methods Programs Biomed ; 227: 107191, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36335750

ABSTRACT

BACKGROUND AND OBJECTIVE: Alzheimer's disease accounts for approximately 70% of all dementia cases. Cortical and hippocampal atrophy caused by Alzheimer's disease can be appreciated easily from a T1-weighted structural magnetic resonance scan. Since a timely therapeutic intervention during the initial stages of the syndrome has a positive impact on both disease progression and the quality of life of affected subjects, Alzheimer's disease diagnosis is crucial. Thus, this study develops a robust yet lightweight 3D framework, Brain-on-Cloud, dedicated to efficient learning of Alzheimer's disease-related features from 3D structural magnetic resonance whole-brain scans. It improves our recent convolutional long short-term memory-based framework by integrating a set of data handling techniques, tuning the model hyper-parameters and evaluating diagnostic performance on independent test data.
METHODS: For this objective, four serial experiments were conducted on a scalable GPU cloud service. They were compared, and the hyper-parameters of the best experiment were tuned until the best-performing configuration was reached. In parallel, two branches were designed. In the first branch of Brain-on-Cloud, training, validation and testing were performed on OASIS-3. In the second branch, unenhanced data from ADNI-2 were employed as an independent test set, and the diagnostic performance of Brain-on-Cloud was evaluated to prove its robustness and generalization capability. The prediction scores were computed for each subject and stratified according to age, sex and mini mental state examination.
RESULTS: In its best configuration, Brain-on-Cloud is able to discriminate Alzheimer's disease with an accuracy of 92% and 76%, sensitivity of 94% and 82%, and area under the curve of 96% and 92% on OASIS-3 and independent ADNI-2 test data, respectively.
CONCLUSIONS: Brain-on-Cloud proves to be a reliable, lightweight and easily reproducible framework for automatic diagnosis of Alzheimer's disease from 3D structural magnetic resonance whole-brain scans, performing well without segmenting the brain into its portions. Since it preserves the brain anatomy, its application and diagnostic ability can be extended to other cognitive disorders. Due to its cloud nature, computational lightness and fast execution, it can also be applied in real-time diagnostic scenarios, providing prompt clinical decision support.
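A minimal sketch of the subgroup-stratified evaluation described in the METHODS and RESULTS: prediction scores are grouped by a clinical variable (age band, sex or MMSE) and accuracy, sensitivity and AUC are computed per stratum; column names and the 0.5 decision threshold are illustrative assumptions:

```python
# Sketch of per-stratum metric computation from subject-level prediction scores.
import pandas as pd
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

def stratified_metrics(df, stratum_col):
    """df is assumed to have columns: 'label' (0/1), 'score' (probability), stratum_col."""
    rows = []
    for stratum, group in df.groupby(stratum_col):
        preds = (group["score"] >= 0.5).astype(int)  # assumed decision threshold
        rows.append({
            stratum_col: stratum,
            "n": len(group),
            "accuracy": accuracy_score(group["label"], preds),
            "sensitivity": recall_score(group["label"], preds),
            "auc": roc_auc_score(group["label"], group["score"]),
        })
    return pd.DataFrame(rows)

# Example: stratified_metrics(results, "sex") or stratified_metrics(results, "mmse_band").
```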


Subjects
Alzheimer Disease, Cognitive Dysfunction, Humans, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/pathology, Quality of Life, Neuroimaging/methods, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Brain/pathology, Magnetic Resonance Spectroscopy
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1556-1560, 2022 07.
Article in English | MEDLINE | ID: mdl-36085720

ABSTRACT

Non-Small Cell Lung Cancer (NSCLC) represents up to 85% of all malignant lung nodules. Adenocarcinoma and squamous cell carcinoma account for 90% of all NSCLC histotypes. The standard diagnostic procedure for NSCLC histotype characterization combines 3D Computed Tomography (CT), especially in the form of low-dose CT, with lung biopsy. Since lung biopsy is invasive and challenging (especially for deeply located lung cancers and for those close to blood vessels or airways), there is a need to develop non-invasive procedures for NSCLC histology classification. Thus, this study proposes Cloud-YLung for NSCLC histology classification directly from 3D CT whole-lung scans. With this aim, data were selected from the openly accessible NSCLC-Radiomics dataset and a modular pipeline was designed. Automatic feature extraction and classification were accomplished by means of a Convolutional Long Short-Term Memory (ConvLSTM)-based neural network trained from scratch on a scalable GPU cloud service to ensure machine-independent reproducibility of the entire framework. Results show that Cloud-YLung performs well in discriminating both NSCLC histotypes, achieving a test accuracy of 75% and AUC of 84%. Cloud-YLung not only requires no lung nodule segmentation but is also the first framework that uses a ConvLSTM-based neural network to automatically extract high-throughput features from 3D CT whole-lung scans and classify them. Clinical relevance: Cloud-YLung is a promising framework to non-invasively classify NSCLC histotypes. Since it preserves the lung anatomy, its application could be extended to other pulmonary pathologies using 3D CT whole-lung scans.
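A minimal preprocessing sketch, assuming (hypothetically) that each 3D CT whole-lung volume is clipped to a Hounsfield-unit window, normalised and resampled to a fixed (slices, height, width, 1) array so that the slice axis can serve as the ConvLSTM sequence dimension; window bounds and target size are not taken from the paper:

```python
# Sketch: turn a 3D CT whole-lung volume into a fixed-size slice sequence.
import numpy as np
from scipy.ndimage import zoom

def volume_to_sequence(volume_hu, target_shape=(64, 128, 128), hu_window=(-1000, 400)):
    """volume_hu: 3D array of Hounsfield units with shape (slices, height, width)."""
    clipped = np.clip(volume_hu, *hu_window)
    normalised = (clipped - hu_window[0]) / (hu_window[1] - hu_window[0])
    factors = [t / s for t, s in zip(target_shape, normalised.shape)]
    resampled = zoom(normalised, factors, order=1)        # linear resampling
    return resampled[..., np.newaxis].astype(np.float32)  # (slices, H, W, 1)
```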


Subjects
Non-Small-Cell Lung Carcinoma, Lung Neoplasms, Non-Small-Cell Lung Carcinoma/diagnostic imaging, Humans, Lung/pathology, Lung Neoplasms/diagnosis, Reproducibility of Results, X-Ray Computed Tomography/methods
7.
Comput Biol Med ; 146: 105691, 2022 07.
Article in English | MEDLINE | ID: mdl-35691714

ABSTRACT

Lung cancer is among the deadliest cancers. Besides lung nodule classification and diagnosis, developing non-invasive systems to classify lung cancer histological types/subtypes may help clinicians make targeted treatment decisions in a timely manner, with a positive impact on patients' comfort and survival rate. As convolutional neural networks have proven responsible for significant improvements in the accuracy of lung cancer diagnosis, with this survey we intend to: show the contribution of convolutional neural networks not only in identifying malignant lung nodules but also in classifying lung cancer histological types/subtypes directly from computed tomography data; point out the strengths and weaknesses of slice-based and scan-based approaches employing convolutional neural networks; and highlight the challenges and prospective solutions to successfully apply convolutional neural networks to such classification tasks. To this aim, we conducted a comprehensive analysis of relevant Scopus-indexed studies involved in lung nodule diagnosis and cancer histology classification up to January 2022, dividing the investigation into convolutional neural network-based approaches fed with planar or volumetric computed tomography data. Although the application of convolutional neural networks to lung nodule diagnosis and cancer histology classification is a valid strategy, several challenges emerged, mainly the lack of publicly accessible annotated data, together with limited reproducibility and clinical interpretability. We believe that this survey will be helpful for future studies involved in lung nodule diagnosis and cancer histology classification prior to lung biopsy by means of convolutional neural networks.


Subjects
Lung Neoplasms, Solitary Pulmonary Nodule, Humans, Lung/diagnostic imaging, Lung Neoplasms/diagnosis, Neural Networks (Computer), Prospective Studies, Reproducibility of Results, X-Ray Computed Tomography/methods
8.
Data Brief ; 33: 106587, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33318975

ABSTRACT

The automatic detection of violence and crimes in videos is gaining attention, specifically as a tool to relieve security officers and authorities of the need to watch hours of footage to identify events lasting a few seconds. So far, most of the available datasets have been composed of a few low-resolution clips, often built on overly specific cases (e.g., hockey fights). While high-resolution datasets are emerging, there is still a need for datasets that test the robustness of violence detection techniques against false positives caused by behaviours that resemble violent actions. To this end, we propose a dataset composed of 350 clips (MP4 video files, 1920 × 1080 pixels, 30 fps), labelled as non-violent (120 clips) when representing non-violent behaviours, and violent (230 clips) when representing violent behaviours. In particular, the non-violent clips include behaviours (hugs, claps, exulting, etc.) that can cause false positives in the violence detection task, due to fast movements and their similarity with violent behaviours. The clips were performed by non-professional actors, from 2 to 4 per clip.
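A minimal loading sketch for such a clip collection, assuming (hypothetically) that the clips are organised in "violent/" and "non_violent/" folders; the published dataset's actual layout may differ:

```python
# Sketch: iterate over labelled MP4 clips for a violence-detection experiment.
import cv2
from pathlib import Path

LABELS = {"non_violent": 0, "violent": 1}  # assumed folder names

def iter_clips(root):
    for label_name, label in LABELS.items():
        for path in sorted(Path(root, label_name).glob("*.mp4")):
            cap = cv2.VideoCapture(str(path))
            frames = []
            ok, frame = cap.read()
            while ok:
                frames.append(frame)  # 1920x1080 BGR frame at 30 fps
                ok, frame = cap.read()
            cap.release()
            yield path.name, label, frames
```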

9.
Data Brief ; 33: 106455, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33195774

ABSTRACT

Surface Electromyography (EMG) and Inertial Measurement Unit (IMU) sensors are gaining the attention of the research community as data sources for automatic sign language recognition. In this regard, we provide a dataset of EMG and IMU data collected using the Myo Gesture Control Armband during the execution of the 26 gestures of the Italian Sign Language alphabet. For each gesture, 30 data acquisitions were executed, yielding a total of 780 samples included in the dataset. The gestures were performed by the same subject (male, 24 years old) in lab settings. EMG and IMU data were collected in a 2-second time window at a sampling frequency of 200 Hz.
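A minimal sketch of the resulting sample geometry: 26 gestures × 30 repetitions = 780 samples, each a 2 s window at 200 Hz (400 time steps); the eight-channel EMG array layout is an assumption about how the recordings could be arranged in memory, not the dataset's file format:

```python
# Sketch of the expected array shapes for the EMG portion of the dataset.
import numpy as np

N_GESTURES, N_REPETITIONS = 26, 30
SAMPLING_RATE_HZ, WINDOW_S = 200, 2
N_TIMESTEPS = SAMPLING_RATE_HZ * WINDOW_S   # 400 time steps per acquisition
N_EMG_CHANNELS = 8                          # Myo armband EMG electrodes (assumed layout)

emg = np.zeros((N_GESTURES * N_REPETITIONS, N_TIMESTEPS, N_EMG_CHANNELS))
labels = np.repeat(np.arange(N_GESTURES), N_REPETITIONS)  # one label per gesture
print(emg.shape, labels.shape)  # (780, 400, 8) (780,)
```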

10.
Artif Intell Med ; 96: 217-231, 2019 05.
Article in English | MEDLINE | ID: mdl-30827696

ABSTRACT

Telerehabilitation for older adults is most needed in the patients' own environments, rather than in formal outpatient clinics or hospitals. Supporting such practices brings significant advantages to patients, their families, formal and informal caregivers, clinicians, and researchers. This paper presents a focus group with experts in physiotherapy and telerehabilitation, debating the requirements, the current techniques and technologies developed to facilitate and enhance the effectiveness of telerehabilitation, and the still open challenges. Particular emphasis is given to (i) the body parts requiring the most rehabilitation, (ii) the typical environments, initial causes, and general conditions, (iii) the values and parameters to be observed, (iv) common errors and limitations of current practices and technological solutions, and (v) the envisioned and desired technological support. Consequently, a systematic review of the state of the art was performed, investigating which types of systems and support currently address telerehabilitation practices and how they match the outcomes of the focus group. Technological solutions based on video analysis, wearable devices, robotic support, distributed sensing, and gamified telerehabilitation are examined. Particular emphasis is given to solutions implementing agent-based approaches, analyzing and discussing their strengths, limitations, and future challenges. By doing so, it has been possible to relate the functional requirements expressed by professional physiotherapists and researchers to the need to extend multi-agent system (MAS) peculiarities to the sensing level in wearable solutions, establishing new research challenges. In particular, to be employed in safety-critical cyber-physical scenarios with user-sensor and sensor-sensor interactions, MAS are required to handle timing constraints, scarcity of resources and new communication means, all crucial to providing real-time feedback and coaching. Therefore, MAS pillars such as the negotiation protocol and the agent's internal scheduler have been investigated, proposing solutions to achieve the aforementioned real-time compliance.


Subjects
Telerehabilitation/organization & administration, Wearable Electronic Devices, Attitude of Health Personnel, Environment, Europe, Female, Focus Groups, Humans, Male, Physical Therapists/psychology, Robotics/methods, Time Factors, Videotape Recording/methods
11.
Artif Intell Med ; 96: 154-166, 2019 05.
Article in English | MEDLINE | ID: mdl-30442433

ABSTRACT

Personal Health Systems (PHS) are mobile solutions tailored to monitoring patients affected by chronic non-communicable diseases. In general, a patient affected by a chronic disease can generate large amounts of events: for example, Type 1 diabetic patients generate several glucose events per day, ranging from at least 6 events per day (under normal monitoring) to 288 per day when wearing a continuous glucose monitor (CGM) that samples the blood every 5 minutes for several days. Just by itself, without considering other physiological parameters, this would make it impossible for medical doctors to individually and accurately follow every patient, highlighting the need for simple approaches to querying physiological time series. Achieving this with current technology is not an easy task: on the one hand, medical doctors cannot be expected to have the technical knowledge to query databases, and on the other hand these time series include thousands of events, which requires rethinking the way data are indexed. However, handling data streams efficiently is not enough. Domain experts' knowledge must be explicitly included into PHSs in a way that can be easily read and modified by medical staff. Logic programming represents the perfect programming paradigm to accomplish this task. In this work, an Event Calculus-based reasoning framework to standardize and express domain knowledge in the form of monitoring rules is suggested and applied to three different use cases. However, if online monitoring is to be achieved, the reasoning performance must improve dramatically. For this reason, three promising mechanisms to index the Event Calculus Knowledge Base are proposed. All of them are based on different types of tree indexing structures: k-d trees, interval trees and red-black trees. The paper then compares and analyzes the performance of the three indexing techniques by computing the time needed to check different types of rules (and, when needed, generate alerts) as the number of recorded events (e.g., values of physiological parameters) increases. The results show that the customized jREC performs much better when the average event inter-arrival time is small compared to the checked rule's time window. Instead, when the events are sparser, the use of k-d trees with standard EC is advisable. Finally, the Multi-Agent paradigm helps to wrap the various components of the system: the reasoning engines represent the agents' minds, and the sensors their bodies. These agents have been developed in MAGPIE, a mobile event-based Java agent platform.
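A minimal sketch (not jREC or the paper's indexing structures) of why indexing the event store matters: if glucose events are kept sorted by timestamp, a monitoring rule over a time window (e.g., a hypothetical "at least 3 hyperglycaemia readings in the last 6 hours") can be checked with binary search instead of scanning every recorded event:

```python
# Sketch: timestamp-sorted event index plus a hypothetical window-based monitoring rule.
import bisect

class EventIndex:
    def __init__(self):
        self._timestamps = []   # sorted acquisition times (epoch seconds)
        self._values = []       # glucose values aligned with the timestamps

    def add(self, timestamp, value):
        pos = bisect.bisect_left(self._timestamps, timestamp)
        self._timestamps.insert(pos, timestamp)
        self._values.insert(pos, value)

    def in_window(self, start, end):
        lo = bisect.bisect_left(self._timestamps, start)
        hi = bisect.bisect_right(self._timestamps, end)
        return self._values[lo:hi]

def hyperglycaemia_alert(index, now, window_s=6 * 3600,
                         threshold_mg_dl=180, min_events=3):
    """Hypothetical rule: alert if >= min_events readings above threshold in the window."""
    recent = index.in_window(now - window_s, now)
    return sum(v > threshold_mg_dl for v in recent) >= min_events
```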


Subjects
Decision Trees, Information Management/organization & administration, Ambulatory Monitoring/methods, Wearable Electronic Devices, Chronic Disease, Humans, Ambulatory Monitoring/instrumentation, Noncommunicable Diseases