1.
Med Image Anal ; 93: 103090, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38241763

ABSTRACT

Many clinical and research studies of the human brain require accurate structural MRI segmentation. While traditional atlas-based methods can be applied to volumes from any acquisition site, recent deep learning algorithms ensure high accuracy only when tested on data from the same sites used in training (i.e., internal data). The performance degradation observed on external data (i.e., unseen volumes from unseen sites) is due to inter-site variability in intensity distributions and to unique artefacts caused by different MR scanner models and acquisition parameters. To mitigate this site dependency, often referred to as the scanner effect, we propose LOD-Brain, a 3D convolutional neural network with progressive levels of detail (LOD) able to segment brain data from any site. Coarser network levels are responsible for learning a robust anatomical prior that helps identify brain structures and their locations, while finer levels refine the model to handle site-specific intensity distributions and anatomical variations. We ensure robustness across sites by training the model on an unprecedentedly rich dataset aggregating data from open repositories: almost 27,000 T1w volumes from around 160 acquisition sites, at 1.5-3 T, from a population spanning 8 to 90 years of age. Extensive tests demonstrate that LOD-Brain produces state-of-the-art results, with no significant difference in performance between internal and external sites, and robustness to challenging anatomical variations. Its portability paves the way for large-scale applications across different healthcare institutions, patient populations, and imaging technology manufacturers. Code, model, and demo are available on the project website.
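The coarse-to-fine idea behind progressive levels of detail can be sketched in a few lines: a coarse level produces an anatomical prior at low resolution, which a finer level then refines at full resolution. Below is a minimal numpy illustration with stand-in thresholding functions in place of trained networks; it shows the data flow only and is not the LOD-Brain architecture itself.

```python
import numpy as np

def upsample_nn(vol, f):
    """Nearest-neighbour upsampling of a 3D label volume by factor f."""
    return vol.repeat(f, axis=0).repeat(f, axis=1).repeat(f, axis=2)

def coarse_to_fine_segment(volume, coarse_net, fine_net, f=2):
    """Two-level sketch of the progressive levels-of-detail idea:
    the coarse level segments a downsampled copy (anatomical prior);
    the fine level refines the upsampled prior at full resolution."""
    prior = coarse_net(volume[::f, ::f, ::f])       # coarse label map
    return fine_net(volume, upsample_nn(prior, f))  # site-specific refinement

# Stand-in "networks": simple intensity thresholding, NOT the trained models.
coarse = lambda v: (v > v.mean()).astype(np.uint8)
fine = lambda v, prior: np.where(v > v.mean(), prior, 0).astype(np.uint8)

vol = np.random.default_rng(0).random((8, 8, 8))
seg = coarse_to_fine_segment(vol, coarse, fine)
```

The point of the structure is that the coarse pass sees a site-agnostic low-resolution view, while the fine pass combines its prior with full-resolution intensities.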


Subject(s)
Magnetic Resonance Imaging , Neuroimaging , Humans , Child , Adolescent , Young Adult , Adult , Middle Aged , Aged , Aged, 80 and over , Brain/diagnostic imaging , Algorithms , Artifacts
2.
Nat Commun ; 14(1): 6874, 2023 10 28.
Article in English | MEDLINE | ID: mdl-37898607

ABSTRACT

Full Laboratory Automation is revolutionizing work habits in an increasing number of clinical microbiology facilities worldwide, generating huge streams of digital images for interpretation. Contextually, deep learning architectures are leading to paradigm shifts in the way computers can assist with difficult visual interpretation tasks in several domains. At the crossroads of these epochal trends, we present a system able to tackle a core task in clinical microbiology, namely the global interpretation of diagnostic bacterial culture plates, including presumptive pathogen identification. This is achieved by decomposing the problem into a hierarchy of complex subtasks and addressing them with a multi-network architecture we call DeepColony. Working on a large stream of clinical data and a complete set of 32 pathogens, the proposed system can effectively assist plate interpretation with a surprising degree of accuracy in the widespread and demanding context of Urinary Tract Infections. Moreover, thanks to the rich species-related information it generates, DeepColony can be used to develop trustworthy clinical decision support services in laboratory automation ecosystems, from local to global scale.


Subject(s)
Ecosystem , Urinary Tract Infections , Humans , Bacteria , Automation, Laboratory
3.
Data Brief ; 51: 109627, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37822886

ABSTRACT

The position and orientation of the camera in relation to the subject(s) in a movie scene, namely camera "level" and camera "angle", are essential features in the film-making process due to their influence on the viewer's perception of the scene. We provide a database containing annotations of camera angle and camera level for about 25,000 image frames. Frames are sampled from a wide range of movies, freely available images, and shots from cinematographic websites, and are annotated with five camera-angle categories (Overhead, High, Neutral, Low, and Dutch) and six camera-level classes (Aerial, Eye, Shoulder, Hip, Knee, and Ground level). This dataset is an extension of the CineScale dataset [1], which contains movie frames and related annotations regarding shot scale. The CineScale2 database enables AI-driven interpretation of shot scale data and opens up a large set of research activities related to the automatic visual analysis of cinematic material, such as movie stylistic analysis, video recommendation, and media psychology. To these purposes, we also provide the model and the code for building a Convolutional Neural Network (CNN) architecture for automated camera feature recognition. All the material is provided on the project website; video frames can also be provided upon request to the authors, for research purposes under fair use.

4.
Int J Cardiol ; 370: 435-441, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36343794

ABSTRACT

BACKGROUND: The predictive role of chest radiographs in patients with suspected coronary artery disease (CAD) is underestimated and may benefit from artificial intelligence (AI) applications. OBJECTIVES: To train, test, and validate a deep learning (DL) solution for detecting significant CAD based on chest radiographs. METHODS: Data of patients referred for angina and undergoing chest radiography and coronary angiography were analysed retrospectively. A deep convolutional neural network (DCNN) was designed to detect significant CAD from posteroanterior/anteroposterior chest radiographs. The DCNN was trained for severe CAD binary classification (absence/presence). Coronary angiography reports were the ground truth. Stenosis severity of ≥70% for non-left main vessels and ≥50% for the left main defined severe CAD. RESULTS: Information of 7728 patients was reviewed. Severe CAD was present in 4091 (53%). Patients were randomly divided for algorithm training (70%; n = 5454) and fine-tuning/model validation (10%; n = 773). Internal clinical validation (model testing) was performed with the remaining patients (20%; n = 1501). At binary logistic regression, DCNN prediction was the strongest severe CAD predictor (p < 0.0001; OR: 1.040; CI: 1.032-1.048). Using a high-sensitivity operating cut-point, the DCNN had a sensitivity of 0.90 to detect significant CAD (specificity 0.31; AUC 0.73; 95% CI DeLong, 0.69-0.76). Adding angina status to the AI chest radiograph interpretation improved the prediction (AUC 0.77; 95% CI DeLong, 0.74-0.80). CONCLUSION: AI-read chest radiographs could be used to pre-test significant CAD probability in patients referred for suspected angina. Further studies are required to externally validate our algorithm, develop a clinically applicable tool, and support CAD screening in broader settings.
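The abstract reports results at a "high sensitivity operating cut-point". One hedged sketch of how such a cut-point can be chosen from validation scores follows, assuming probability-like classifier outputs; the authors' exact procedure is not described in the abstract, and the numbers below are purely illustrative.

```python
import numpy as np

def high_sensitivity_threshold(scores, labels, target_sens=0.90):
    """Largest decision threshold whose sensitivity on the positive
    class is still >= target_sens (a high-sensitivity operating point)."""
    pos = np.sort(scores[labels == 1])
    k = int(np.ceil(target_sens * len(pos)))  # positives that must stay above threshold
    return pos[len(pos) - k]                  # k-th largest positive score

def confusion_rates(scores, labels, thr):
    pred = scores >= thr
    sensitivity = pred[labels == 1].mean()    # true positive rate
    specificity = (~pred)[labels == 0].mean() # true negative rate
    return sensitivity, specificity

# Toy validation scores (hypothetical numbers, for illustration only).
scores = np.array([0.10, 0.30, 0.50, 0.55, 0.70, 0.20, 0.40, 0.60, 0.80, 0.90])
labels = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
thr = high_sensitivity_threshold(scores, labels, target_sens=0.90)
sens, spec = confusion_rates(scores, labels, thr)
```

The trade-off in the paper (sensitivity 0.90 against specificity 0.31) is exactly what this kind of threshold selection produces: the cut-point slides down until enough positives are captured, at the cost of more false positives.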


Subject(s)
Coronary Artery Disease , Deep Learning , Humans , Coronary Artery Disease/diagnostic imaging , Retrospective Studies , Artificial Intelligence , Coronary Angiography , Angina Pectoris
5.
IEEE Trans Technol Soc ; 3(4): 272-289, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36573115

ABSTRACT

This article's main contributions are twofold: 1) to demonstrate how to apply the European Union's High-Level Expert Group's (EU HLEG) general guidelines for trustworthy AI in practice in the domain of healthcare, and 2) to investigate the research question of what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment to evaluate the trustworthiness of an AI system for predicting a multiregional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry during the pandemic. The AI system aims to help radiologists estimate and communicate the severity of damage in a patient's lung from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia, Italy, since December 2020. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses sociotechnical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.

6.
Data Brief ; 39: 107476, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34712753

ABSTRACT

We provide a database aimed at real-time quantitative analysis of 3D reconstruction and alignment methods, containing 3140 point clouds from 10 subjects/objects. The scenes were acquired with a high-resolution 3D scanner, and the depth maps produce point clouds with more than 500k points on average. This dataset is useful for developing new models and alignment strategies to automatically reconstruct 3D scenes from data acquired with optical scanners, or for benchmarking purposes.
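As a hedged illustration of the kind of alignment such a benchmark supports, the sketch below implements the classic Kabsch algorithm for rigid registration of paired point clouds. It is a generic baseline under the assumption of known correspondences, not a method taken from the dataset paper.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point cloud P onto Q
    in the least-squares sense (Kabsch algorithm).
    P, Q: (n, 3) arrays of corresponding points."""
    p0, q0 = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p0).T @ (Q - q0)               # cross-covariance of centred clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q0 - R @ p0
    return R, t

# Recover a known rigid transform from noiseless correspondences.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(P, Q)
```

Real scanner data lacks known correspondences, so pipelines typically wrap a step like this inside an iterative closest point (ICP) loop; the dataset above is exactly the kind of material used to benchmark such loops.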

7.
Data Brief ; 36: 107002, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33997191

ABSTRACT

We provide a database containing shot scale annotations (i.e., the apparent distance of the camera from the subject of a filmed scene) for more than 792,000 image frames. Frames belong to 124 full movies from the entire filmographies of 6 prominent directors: Martin Scorsese, Jean-Luc Godard, Béla Tarr, Federico Fellini, Michelangelo Antonioni, and Ingmar Bergman. Each frame, extracted from the videos at 1 frame per second, is annotated with one of the following scale categories: Extreme Close Up (ECU), Close Up (CU), Medium Close Up (MCU), Medium Shot (MS), Medium Long Shot (MLS), Long Shot (LS), Extreme Long Shot (ELS), Foreground Shot (FS), and Insert Shot (IS). Two independent coders annotated all frames from the 124 movies, whilst a third checked their coding and made decisions in cases of disagreement. The CineScale database enables AI-driven interpretation of shot scale data and opens up a large set of research activities related to the automatic visual analysis of cinematic material, such as the automatic recognition of a director's style, or the unfolding of the relationship between shot scale and the viewers' emotional experience. To these purposes, we also provide the model and the code for building a Convolutional Neural Network (CNN) architecture for automated shot scale recognition. All this material is provided through the project website, where video frames can also be requested from the authors, for research purposes under fair use.

8.
Med Image Anal ; 71: 102046, 2021 07.
Article in English | MEDLINE | ID: mdl-33862337

ABSTRACT

In this work we design an end-to-end deep learning architecture for predicting, on chest X-ray images (CXR), a multi-regional score conveying the degree of lung compromise in COVID-19 patients. This semi-quantitative scoring system, namely the Brixia score, is applied in serial monitoring of such patients, showing significant prognostic value, in one of the hospitals that experienced one of the highest pandemic peaks in Italy. To solve this challenging visual task, we adopt a weakly supervised learning strategy structured to handle different tasks (segmentation, spatial alignment, and score estimation) trained with a "from-the-part-to-the-whole" procedure involving different datasets. In particular, we exploit a clinical dataset of almost 5,000 annotated CXR images collected in the same hospital. Our BS-Net demonstrates self-attentive behaviour and a high degree of accuracy in all processing stages. Through inter-rater agreement tests and a gold standard comparison, we show that our solution outperforms single human annotators in rating accuracy and consistency, thus supporting the possibility of using this tool in contexts of computer-assisted monitoring. Highly resolved (super-pixel level) explainability maps are also generated, with an original technique, to visually help the understanding of the network activity on the lung areas. We also consider other scores proposed in the literature and provide a comparison with a recently proposed non-specific approach. Finally, we test the performance robustness of our model on an assorted public COVID-19 dataset, for which we also provide Brixia score annotations, observing good direct generalization and fine-tuning capabilities that highlight the portability of BS-Net to other clinical settings. The CXR dataset, along with the source code and the trained model, is publicly released for research purposes.
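For context on the scoring system itself: the Brixia score assigns an integer severity of 0-3 to each of six lung zones (three per lung), and the global score is their sum, ranging 0-18. The sketch below shows only this deterministic final aggregation; predicting the per-zone scores from the CXR is what BS-Net does and is not reproduced here.

```python
import numpy as np

def brixia_global_score(zone_scores):
    """Sum six per-zone severities (each an integer in 0..3) into the
    global Brixia score (0..18). Zones are upper/middle/lower for each
    lung; the per-zone scores are assumed given (e.g., model outputs)."""
    z = np.asarray(zone_scores)
    if z.shape != (6,) or ((z < 0) | (z > 3)).any():
        raise ValueError("expected six zone scores, each in 0..3")
    return int(z.sum())
```

For example, a patient with severe involvement in all zones scores 18, the maximum of the scale.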


Subject(s)
COVID-19 , Deep Learning , Radiography, Thoracic , COVID-19/diagnostic imaging , Humans , SARS-CoV-2 , X-Rays
9.
Int J Cosmet Sci ; 43(4): 405-418, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33848366

ABSTRACT

OBJECTIVE: The first objective of this study was to apply computer vision and machine learning techniques to quantify the effects of haircare treatments on hair assembly and to identify correctly whether unknown tresses were treated or not. The second objective was to explore and compare the performance of human assessment with that obtained from artificial intelligence (AI) algorithms. METHODS: Machine learning was applied to a data set of hair tress images (virgin and bleached), both untreated and treated with a shampoo and conditioner set, aimed at increasing hair volume whilst improving alignment and reducing the flyaway of the hair. The following hair image features were quantified automatically: local and global hair volumes and hair alignment. These features were assessed at three time points: t0 (no treatment), t1 (two treatments) and t2 (three treatments). Classifier tests were applied to test the accuracy of the machine learning. A sensory test (paired comparison of t0 vs t2) and an online front-image-based survey (paired comparisons of t0 vs t1, t1 vs t2, t0 vs t2) were conducted to compare human assessment with that of the algorithms. RESULTS: The automatic image analysis identified changes to hair volume and alignment which enabled the successful application of the classification tests, especially when the hair images were grouped into untreated and treated groups. The human assessment of hair presented in pairs confirmed the automatic image analysis. The image assessment for both virgin and bleached hair only partially agreed with the analysis of the subset of images used in the online survey. One hypothesis is that the treatments somewhat changed the shape of the hair tress, with the effect being more pronounced in bleached hair, making human assessment of flat images more challenging than viewing the tresses directly in 3D. Overall, the bleached hair exhibited effects of higher magnitude than the virgin hair.
CONCLUSIONS: This study illustrated the capacity of artificial intelligence for hair image detection and classification, and for image analysis of hair assembly features following treatments. The human assessment partially confirmed the image analysis and highlighted the challenges imposed by the presentation mode.
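A hedged sketch of how a "local hair volume" feature can be quantified from a segmentation mask: per-row apparent tress width from a binary mask. The actual features used in the study are not specified at this level of detail, so treat this only as an illustrative proxy.

```python
import numpy as np

def hair_width_profile(mask):
    """Per-row apparent hair width (in pixels) from a binary segmentation
    mask: distance between the leftmost and rightmost hair pixels on each
    row. A simple proxy for 'local volume' along the length of a tress."""
    widths = np.zeros(mask.shape[0], dtype=int)
    for i, row in enumerate(mask.astype(bool)):
        cols = np.flatnonzero(row)
        if cols.size:
            widths[i] = cols[-1] - cols[0] + 1
    return widths

# Tiny synthetic mask: the tress spans 5 pixels on one row, 3 on the next.
mask = np.zeros((5, 10), dtype=int)
mask[1, 2:7] = 1
mask[2, 3:6] = 1
profile = hair_width_profile(mask)
```

A global volume proxy could then be the mask area, and the variability of this profile gives a crude alignment/flyaway indicator.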




Subject(s)
Artificial Intelligence , Hair/chemistry , Algorithms , Humans , Proof of Concept Study
10.
Data Brief ; 34: 106635, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33364270

ABSTRACT

The provided database of 260 ECG signals was collected from patients with out-of-hospital cardiac arrest while treated by the emergency medical services. Each ECG signal contains a 9 s waveform showing ventricular fibrillation, followed by 1 min of post-shock waveform. Patients' ECGs are made available in multiple formats. All ECGs recorded during the prehospital treatment are provided in PDF files, after being anonymized, printed on paper, and scanned. For each ECG, the dataset also includes the whole digitized waveform (9 s pre- and 1 min post-shock each) and numerous features in the temporal and frequency domains extracted from the 9 s episode immediately prior to the first defibrillation shock. Based on the shock outcome, each ECG file has been annotated by three expert cardiologists, using majority decision, as successful (56 cases), unsuccessful (195 cases), or indeterminable (9 cases). The code for preprocessing, for feature extraction, and for limiting the investigation to different temporal intervals before the shock is also provided. These data could be reused to design algorithms that predict shock outcome based on ventricular fibrillation analysis, with the goal of optimizing the defibrillation strategy (immediate defibrillation versus cardiopulmonary resuscitation and/or drug administration) for enhancing resuscitation.
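Among the frequency-domain features commonly extracted from a pre-shock VF episode is the Amplitude Spectrum Area (AMSA), a widely used predictor of defibrillation success. The sketch below computes it with numpy; whether AMSA is among the features shipped with this specific dataset is an assumption, and the 2-48 Hz band and sampling rate are illustrative choices.

```python
import numpy as np

def amsa(vf_segment, fs):
    """Amplitude Spectrum Area: sum of single-sided FFT amplitudes
    weighted by their frequency over the 2-48 Hz band (a common
    VF waveform feature for shock-outcome prediction)."""
    n = len(vf_segment)
    amps = np.abs(np.fft.rfft(vf_segment)) * 2.0 / n   # single-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= 2.0) & (freqs <= 48.0)
    return float(np.sum(amps[band] * freqs[band]))

fs = 250.0                                # assumed sampling rate (Hz)
t = np.arange(0, 9.0, 1.0 / fs)           # a 9 s episode, as in the dataset
sig = 0.5 * np.sin(2 * np.pi * 5.0 * t)   # toy 5 Hz oscillation, amplitude 0.5
value = amsa(sig, fs)                     # ~ amplitude x frequency = 0.5 * 5
```

For a pure 5 Hz tone of amplitude 0.5 the measure reduces to amplitude times frequency, which makes the unit (conventionally mV·Hz on real ECGs) easy to sanity-check.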

11.
Data Brief ; 31: 105964, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32671161

ABSTRACT

The published database is composed of 1,080 images taken from 120 hair tresses made of medium blond, fine Caucasian hair, with the aim of facilitating quantitative and qualitative studies of shampoo and conditioner efficacy. Two types of hair tresses were used: Caucasian hair which had not been subjected to oxidation with bleaching agents, i.e., virgin (60 tresses); and Caucasian hair previously subjected to light oxidative bleaching, i.e., lightly bleached (remaining 60 tresses). Since cosmetic products such as shampoos and conditioners are often designed to subtly augment hair assembly features via the carefully balanced cumulative effects of deposited actives, each tress was subjected to consecutive washing + conditioning + drying cycles, referred to as cosmetic treatment. The shampoo and conditioner used for this project were specifically selected for their suitability for fine hair. Each tress was photographed at three different time points: before the cosmetic treatment, after two cosmetic treatments, and after an additional third cosmetic treatment. At each time point, each tress was photographed from three different angles (-45, 0, and +45°), resulting in nine images per tress. For each image in the database, we also provide a corresponding hair segmentation mask, which identifies the hair area in the original image.

12.
J Neurosci Methods ; 328: 108319, 2019 12 01.
Article in English | MEDLINE | ID: mdl-31585315

ABSTRACT

BACKGROUND: Deep neural networks have revolutionised machine learning, with unparalleled performance in object classification. However, in brain imaging (e.g., fMRI), the direct application of Convolutional Neural Networks (CNN) to decoding subject states or perception from imaging data seems impractical given the scarcity of available data. NEW METHOD: In this work we propose a robust method to transfer information from deep learning (DL) features to brain fMRI data with the goal of decoding. By adopting Reduced Rank Regression with Ridge Regularisation, we establish a multivariate link between imaging data and the fully connected layer (fc7) of a CNN. We exploit the reconstructed fc7 features by performing an object image classification task on two datasets: one of the largest fMRI databases, taken from different scanners from more than two hundred subjects watching different movie clips, and another with fMRI data taken while watching static images. RESULTS: The fc7 features could be significantly reconstructed from the imaging data, and led to significant decoding performance. COMPARISON WITH EXISTING METHODS: The decoding based on reconstructed fc7 features outperformed the decoding based on imaging data alone. CONCLUSION: In this work we show how to improve fMRI-based decoding by exploiting the mapping between functional data and CNN features. The potential advantage of the proposed method is twofold: the extraction of stimulus representations by means of an automatic (unsupervised) procedure, and the embedding of high-dimensional neuroimaging data into a space designed for visual object discrimination, which is more manageable from a dimensionality point of view.
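Reduced Rank Regression with ridge regularisation can be sketched compactly: fit the ridge solution, then constrain its rank by projecting onto the leading singular directions of the fitted values. This is a generic textbook construction on toy data, not the authors' exact implementation or their fMRI-to-fc7 pipeline.

```python
import numpy as np

def reduced_rank_ridge(X, Y, alpha=1.0, rank=2):
    """Ridge-regularised Reduced Rank Regression (generic sketch):
    1) ridge solution B = (X'X + alpha*I)^-1 X'Y;
    2) project B onto the top-`rank` right singular directions of the
       fitted values X @ B, constraining the coefficient matrix's rank."""
    d = X.shape[1]
    B = np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)
    _, _, Vt = np.linalg.svd(X @ B, full_matrices=False)
    V_r = Vt[:rank].T                 # leading directions in response space
    return B @ V_r @ V_r.T            # rank-constrained coefficient matrix

# Noiseless sanity check: recover a rank-2 mapping from 5 predictors to 4 responses.
rng = np.random.default_rng(0)
B_true = rng.normal(size=(5, 2)) @ rng.normal(size=(2, 4))  # rank 2 by construction
X = rng.normal(size=(100, 5))
Y = X @ B_true
B_hat = reduced_rank_ridge(X, Y, alpha=1e-6, rank=2)
```

In the decoding setting above, `X` would play the role of the imaging data and `Y` the fc7 activations, with the rank constraint exploiting the fact that the useful shared structure is low-dimensional.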


Subject(s)
Brain Mapping/methods , Brain/physiology , Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Transfer, Psychology , Visual Perception/physiology , Adult , Brain/diagnostic imaging , Humans
13.
J Imaging ; 5(5)2019 May 08.
Article in English | MEDLINE | ID: mdl-34460490

ABSTRACT

Modern hyperspectral imaging systems produce huge datasets potentially conveying a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new, stimulating problems in the spatial-spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops along two fronts: on the one hand, it is aimed at domain professionals who want an updated overview of how hyperspectral acquisition techniques can be combined with deep learning architectures to solve specific tasks in different application fields. On the other hand, we target machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.

14.
Comput Methods Programs Biomed ; 156: 13-24, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29428064

ABSTRACT

BACKGROUND AND OBJECTIVE: The recent introduction of Full Laboratory Automation systems in clinical microbiology opens access to streams of high-definition images representing bacterial culture plates. This creates new opportunities to support diagnostic decisions through image analysis and interpretation solutions, with an expected high impact on the efficiency of the laboratory workflow and related quality implications. Starting from images acquired under different illumination settings (top-light and back-light), the objective of this work is to design and evaluate a method for the detection and classification of diagnostically relevant hemolysis effects associated with specific bacteria growing on blood agar plates. The presence of hemolysis is an important factor in assessing the virulence of pathogens, and is a fundamental sign of the presence of certain types of bacteria. METHODS: We introduce a two-stage approach. First, a highly accurate alignment of same-plate image scans, acquired using top-light and back-light illumination, enables the joint, spatially coherent exploitation of the available data. Second, from each segmented portion of the image containing at least one bacterial colony, specifically designed image features are extracted to feed an SVM classification system, allowing detection and discrimination among different types of hemolysis. RESULTS: The fine alignment solution aligns more than 98.1% of the images with a residual error of less than 0.13 mm. The hemolysis classification block achieves an 88.3% precision with a recall of 98.6%. CONCLUSIONS: The results collected from different clinical scenarios (urinary infections and throat swab screening), together with accurate error analysis, demonstrate the suitability of our system for robust hemolysis detection and classification, which remains feasible even in challenging conditions (low contrast or illumination changes).
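The abstract does not detail the fine alignment method; one standard building block for registering same-plate scans is phase correlation, which recovers a translational offset from the normalised cross-power spectrum of the two images. A hedged numpy sketch, limited to integer pixel shifts, is shown below; the authors' actual sub-millimetre alignment is presumably more refined.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (row, col) translation that aligns image `b`
    to image `a` via phase correlation (normalised cross-power spectrum)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep only the phase
    corr = np.abs(np.fft.ifft2(cross))          # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half-ranges to negative shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

# b is a circularly shifted copy of a; recover the shift that re-aligns it.
rng = np.random.default_rng(1)
a = rng.random((64, 64))
b = np.roll(a, shift=(-3, 2), axis=(0, 1))
shift = phase_correlation_shift(a, b)
```

Phase correlation is attractive for this task because it is insensitive to global illumination differences between the top-light and back-light acquisitions, affecting mostly the spectrum's magnitude rather than its phase.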


Subject(s)
Agar/chemistry , Hemolysis , Urinary Tract Infections/blood , Algorithms , Bacteria , Electronic Data Processing , Humans , Lighting , Models, Statistical , Programming Languages , Reproducibility of Results , Signal Processing, Computer-Assisted , Software