Results 1 - 20 of 53
1.
Med Image Anal ; 95: 103209, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38781754

ABSTRACT

Case-based explanations are an intuitive method to gain insight into the decision-making process of deep learning models in clinical contexts. However, medical images cannot be shared as explanations due to privacy concerns. To address this problem, we propose a novel method for disentangling identity and medical characteristics of images and apply it to anonymize medical images. The disentanglement mechanism replaces some feature vectors in an image while ensuring that the remaining features are preserved, obtaining independent feature vectors that encode the images' identity and medical characteristics. We also propose a model to manufacture synthetic privacy-preserving identities to replace the original image's identity and achieve anonymization. The models are applied to medical and biometric datasets, demonstrating their capacity to generate realistic-looking anonymized images that preserve their original medical content. Additionally, the experiments show the network's inherent capacity to generate counterfactual images through the replacement of medical features.
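A conceptual sketch of the disentanglement-based anonymization described in this abstract: an image code is split into identity and medical feature vectors, and anonymization swaps only the identity part for a synthetic one. All names and the dictionary representation are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: anonymize a disentangled image code by replacing its
# identity features with a synthetic identity while preserving the medical
# features, mirroring the mechanism the abstract describes.

def anonymize(code, synthetic_identity):
    """code: dict with disentangled 'identity' and 'medical' feature vectors."""
    return {"identity": synthetic_identity, "medical": code["medical"]}

original = {"identity": [0.9, 0.1], "medical": [0.3, 0.7]}
anon = anonymize(original, synthetic_identity=[0.2, 0.5])
print(anon["medical"] == original["medical"])   # True: medical content preserved
print(anon["identity"] == original["identity"]) # False: identity replaced
```

Replacing the medical features instead of the identity features would yield the counterfactual images the abstract mentions.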


Subject(s)
Data Anonymization , Humans , Deep Learning
3.
NPJ Precis Oncol ; 8(1): 56, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38443695

ABSTRACT

Considering the profound transformation affecting pathology practice, we aimed to develop a scalable artificial intelligence (AI) system to diagnose colorectal cancer from whole-slide images (WSI). To this end, we propose a deep learning (DL) system that learns from weak labels; a sampling strategy that reduces the number of training samples by a factor of six without compromising performance; an approach to leverage a small subset of fully annotated samples; and a prototype with explainable predictions, active learning features, and parallelisation. Addressing shortcomings noted in the literature, this study was conducted on one of the largest colorectal WSI datasets, with approximately 10,500 WSIs, of which 900 are testing samples. Furthermore, the robustness of the proposed method is assessed with two additional external datasets (TCGA and PAIP) and a dataset of samples collected directly from the proposed prototype. Our method predicts a class for each patch-based tile based on the severity of the dysplasia and uses that information to classify the whole slide. It is trained with an interpretable mixed-supervision scheme to leverage the domain knowledge introduced by pathologists through spatial annotations. The mixed-supervision scheme allowed for an intelligent sampling strategy, effectively evaluated in several different scenarios without compromising performance. On the internal dataset, the method achieves an accuracy of 93.44% and a sensitivity of 0.996 between positive (low-grade and high-grade dysplasia) and non-neoplastic samples. Performance on the external test samples varied, with TCGA being the most challenging dataset, yielding an overall accuracy of 84.91% and a sensitivity of 0.996.
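The tile-to-slide step described above can be sketched in a few lines: each tile gets a dysplasia grade, and the slide label is derived from the most severe grade present. The class ordering used here (0 = non-neoplastic, 1 = low-grade, 2 = high-grade) is an assumption for illustration, not the paper's code.

```python
# Illustrative sketch: aggregate patch-level dysplasia grades into a
# slide-level label by taking the most severe grade found on the slide.

def slide_label(patch_grades):
    """Return the slide-level class as the maximum patch severity."""
    if not patch_grades:
        raise ValueError("slide has no patches")
    return max(patch_grades)

# A slide with mostly benign tiles and one high-grade tile is high-grade.
print(slide_label([0, 0, 1, 0, 2, 0]))  # 2
```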

4.
Sensors (Basel) ; 24(2)2024 Jan 14.
Article in English | MEDLINE | ID: mdl-38257608

ABSTRACT

Deep learning has rapidly increased in popularity, leading to the development of perception solutions for autonomous driving. The latter field leverages techniques developed for computer vision in other domains to accomplish perception tasks such as object detection. However, the black-box nature of deep neural models and the complexity of the autonomous driving context motivate the study of explainability in models that perform perception tasks. Accordingly, this work explores explainable AI techniques for the object detection task in the context of autonomous driving. An extensive and detailed comparison is carried out between gradient-based and perturbation-based methods (e.g., D-RISE). Moreover, several experimental setups are used, with different backbone architectures and different datasets, to observe the influence of these aspects on the explanations. All the techniques explored are saliency methods, making their interpretation and evaluation primarily visual; nevertheless, numerical assessment methods are also used. Overall, D-RISE and guided backpropagation obtain more localized explanations, but D-RISE highlights more meaningful regions, providing more human-understandable explanations. To the best of our knowledge, this is the first approach to obtain explanations focusing on the regression of the bounding box coordinates.
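The perturbation idea behind methods like D-RISE can be illustrated in miniature: occlude parts of the input, re-run the detector, and attribute importance to regions whose occlusion drops the score most. The 1-D "image" and stub detector below are toys, not D-RISE itself, which uses random masks over real detections.

```python
# Toy sketch of occlusion-based saliency: importance of each input position
# is measured as the drop in the (stub) detector's score when it is masked.

def occlusion_saliency(model, x, mask_value=0):
    base = model(x)
    saliency = []
    for i in range(len(x)):
        occluded = x[:i] + [mask_value] + x[i + 1:]
        saliency.append(base - model(occluded))  # score drop = importance
    return saliency

detector_score = lambda x: x[2] * 2 + x[0]  # stub: depends on positions 0 and 2
print(occlusion_saliency(detector_score, [1, 5, 3]))  # [1, 0, 6]
```

Position 2 receives the largest attribution because the stub detector weighs it most, which is exactly the behaviour a saliency map should surface.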

5.
medRxiv ; 2023 Oct 03.
Article in English | MEDLINE | ID: mdl-37873343

ABSTRACT

Pulse oximeters measure peripheral arterial oxygen saturation (SpO2) noninvasively, while the gold standard (SaO2) involves arterial blood gas measurement. There are known racial and ethnic disparities in their performance. BOLD is a new comprehensive dataset that aims to underscore the importance of addressing biases in pulse oximetry accuracy, which disproportionately affect darker-skinned patients. The dataset was created by harmonizing three Electronic Health Record databases (MIMIC-III, MIMIC-IV, eICU-CRD) comprising Intensive Care Unit stays of US patients. Paired SpO2 and SaO2 measurements were time-aligned and combined with various other sociodemographic parameters to provide a detailed representation of each patient. BOLD includes 49,099 paired measurements within a 5-minute window and with oxygen saturation levels between 70-100%. Minority racial and ethnic groups account for ~25% of the data, a proportion seldom achieved in previous studies. The codebase is publicly available. Given the prevalent use of pulse oximeters in the hospital and at home, we hope that BOLD will be leveraged to develop debiasing algorithms that can result in more equitable healthcare solutions.
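The 5-minute alignment window mentioned above can be sketched as a nearest-neighbour pairing: each SaO2 blood-gas reading is matched to the closest SpO2 reading no more than five minutes away. This is an illustrative stand-in, not the BOLD codebase; timestamps, names, and the list-of-tuples format are assumptions.

```python
# Hypothetical sketch: pair each SaO2 reading with the nearest SpO2 reading
# within a 5-minute (300 s) window, discarding readings with no close match.

def pair_measurements(sao2, spo2, window_s=300):
    """sao2/spo2: lists of (timestamp_s, value). Returns (sao2, spo2) value pairs."""
    pairs = []
    for t_a, v_a in sao2:
        best = min(spo2, key=lambda m: abs(m[0] - t_a), default=None)
        if best is not None and abs(best[0] - t_a) <= window_s:
            pairs.append((v_a, best[1]))
    return pairs

sao2 = [(0, 95.0), (1000, 88.0)]   # the second reading has no SpO2 within 300 s
spo2 = [(120, 97.0), (2000, 90.0)]
print(pair_measurements(sao2, spo2))  # [(95.0, 97.0)]
```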

6.
PLoS One ; 18(8): e0289365, 2023.
Article in English | MEDLINE | ID: mdl-37535564

ABSTRACT

BACKGROUND: Breast cancer therapy has improved significantly, allowing for different surgical approaches at the same disease stage and therefore offering patients different aesthetic outcomes with similar locoregional control. The purpose of the CINDERELLA trial is to evaluate an artificial intelligence (AI) cloud-based platform (CINDERELLA platform) versus the standard approach for patient education prior to therapy. METHODS: A prospective randomized international multicentre trial comparing two methods for patient education prior to therapy. After institutional ethics approval and written informed consent, patients planned for locoregional treatment will be randomized to the intervention (CINDERELLA platform) or control. The patients in the intervention arm will use the newly designed web application (CINDERELLA platform, CINDERELLA APProach) to access information related to surgery and/or radiotherapy. Using an AI system, the platform will provide the patient with a picture of her own aesthetic outcome resulting from the surgical procedure she chooses, together with an objective evaluation of this outcome (e.g., good/fair). The control group will have access to the standard approach. The primary objectives of the trial are i) to examine the differences between the treatment arms with regard to patients' pre-treatment expectations and the final aesthetic outcomes, and ii) to assess, in the experimental arm only, the agreement between the pre-treatment AI evaluation (output) and the patient's post-therapy self-evaluation. DISCUSSION: The project aims to develop an easy-to-use, cost-effective AI-powered tool that improves shared decision-making processes. We assume that the CINDERELLA APProach will lead to higher satisfaction, better psychosocial status, and wellbeing of breast cancer patients, and reduce the need for additional surgeries to improve aesthetic outcomes.


Subject(s)
Artificial Intelligence , Breast Neoplasms , Female , Humans , Breast Neoplasms/surgery , Cloud Computing , Intelligence , Patient Satisfaction , Prospective Studies
7.
Bioengineering (Basel) ; 10(4)2023 Mar 24.
Article in English | MEDLINE | ID: mdl-37106588

ABSTRACT

Breast cancer conservative treatment (BCCT) is a form of treatment commonly used for patients with early breast cancer. The procedure consists of removing the cancer and a small margin of surrounding tissue while leaving the healthy tissue intact. In recent years, it has become increasingly common due to identical survival rates and better cosmetic outcomes than other alternatives. Although significant research has been conducted on BCCT, there is no gold standard for evaluating the aesthetic results of the treatment. Recent works have proposed the automatic classification of cosmetic results based on breast features extracted from digital photographs. The computation of most of these features requires the representation of the breast contour, which becomes key to the aesthetic evaluation of BCCT. State-of-the-art methods use conventional image processing tools that automatically detect breast contours based on the shortest path applied to the Sobel filter result in a 2D digital photograph of the patient. However, because the Sobel filter is a general edge detector, it treats edges indistinguishably, i.e., it detects many edges that are not relevant to breast contour detection while missing weak breast contours. In this paper, we propose an improvement to this method that replaces the Sobel filter with a novel neural network solution to improve breast contour detection based on the shortest path. The proposed solution learns effective representations for the edges between the breasts and the torso wall. We obtain state-of-the-art results on a dataset that was used for developing previous models. Furthermore, we tested these models on a new dataset containing more variable photographs and show that the new approach generalizes better, as the previously developed deep models do not perform as well when tested on a different dataset.
The main contribution of this paper is to further improve the capabilities of models that automatically and objectively classify BCCT aesthetic results by improving upon the current standard technique for detecting breast contours in digital photographs. To that end, the models introduced are simple to train and test on new datasets, which makes this approach easily reproducible.
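The shortest-path idea underlying the contour detection described above can be shown in miniature: strong edges become cheap pixels, so the lowest-cost path between two endpoints follows the contour. A real system would compute the edge map with Sobel (or, per the paper, a learned network); here the edge map is a toy array and the search is plain Dijkstra.

```python
# Minimal sketch of shortest-path contour tracing over an edge-strength map.
# Step cost = 255 - strength, so the path prefers strong edges.

import heapq

def shortest_path_cost(edge_map, start, goal):
    """edge_map: 2D list of edge strengths (0-255); start/goal: (row, col)."""
    rows, cols = len(edge_map), len(edge_map[0])
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((0, 1), (1, 0), (-1, 0), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 255 - edge_map[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# A strong horizontal edge along row 1 makes that row the free route.
edges = [[0, 0, 0],
         [255, 255, 255],
         [0, 0, 0]]
print(shortest_path_cost(edges, (1, 0), (1, 2)))  # 0
```

Swapping the toy array for a learned edge map changes only the costs, not the search, which is the modularity the paper exploits.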

8.
Sensors (Basel) ; 23(6)2023 Mar 14.
Article in English | MEDLINE | ID: mdl-36991803

ABSTRACT

Semantic segmentation consists of classifying each pixel according to a set of classes. Conventional models spend as much effort classifying easy-to-segment pixels as they do classifying hard-to-segment pixels, which is inefficient, especially when deploying to situations with computational constraints. In this work, we propose a framework wherein the model first produces a rough segmentation of the image, and patches of the image estimated as hard to segment are then refined. The framework is evaluated on four datasets (autonomous driving and biomedical) across four state-of-the-art architectures. Our method accelerates inference time by a factor of four, with additional gains in training time, at the cost of some output quality.
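The coarse-then-refine control flow described above can be sketched with stub models: a cheap model segments everything, a confidence score flags hard patches, and only those are re-run through the (assumed) expensive refiner. Names and the confidence threshold are illustrative assumptions.

```python
# Toy sketch of the two-stage framework: refine only low-confidence patches.

def refine_hard_patches(patches, coarse_model, refiner, threshold=0.8):
    """patches: list of image patches. Models return (mask, confidence)."""
    outputs = []
    refined = 0
    for patch in patches:
        mask, conf = coarse_model(patch)
        if conf < threshold:          # hard to segment: spend extra compute
            mask, _ = refiner(patch)
            refined += 1
        outputs.append(mask)
    return outputs, refined

# Stub models to show the control flow.
coarse = lambda p: ("coarse-mask", 0.9 if p == "easy" else 0.3)
fine = lambda p: ("refined-mask", 1.0)
masks, n = refine_hard_patches(["easy", "hard", "easy"], coarse, fine)
print(masks, n)  # ['coarse-mask', 'refined-mask', 'coarse-mask'] 1
```

The speedup comes from `refined` being small relative to the total patch count on typical images.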

9.
Sci Rep ; 13(1): 3970, 2023 03 09.
Article in English | MEDLINE | ID: mdl-36894572

ABSTRACT

Cervical cancer is the fourth most common female cancer worldwide and the fourth leading cause of cancer-related death in women. Nonetheless, it is also among the most successfully preventable and treatable types of cancer, provided it is identified early and properly managed. As such, the detection of pre-cancerous lesions is crucial. These lesions are detected in the squamous epithelium of the uterine cervix and are graded as low- or high-grade intraepithelial squamous lesions, known as LSIL and HSIL, respectively. Due to their complex nature, this classification can become very subjective. Therefore, the development of machine learning models, particularly ones operating directly on whole-slide images (WSI), can assist pathologists in this task. In this work, we propose a weakly supervised methodology for grading cervical dysplasia using different levels of training supervision, in an effort to gather a bigger dataset without the need to have all samples fully annotated. The framework comprises an epithelium segmentation step followed by a dysplasia classifier (non-neoplastic, LSIL, HSIL), making the slide assessment completely automatic, without the need for manual identification of epithelial areas. The proposed classification approach achieved a balanced accuracy of 71.07% and a sensitivity of 72.18% at slide-level testing on 600 independent samples, which are publicly available upon reasonable request.


Subject(s)
Carcinoma, Squamous Cell , Squamous Intraepithelial Lesions , Uterine Cervical Dysplasia , Uterine Cervical Neoplasms , Female , Humans , Cervix Uteri/diagnostic imaging , Cervix Uteri/pathology , Uterine Cervical Dysplasia/pathology , Uterine Cervical Neoplasms/diagnosis , Hyperplasia/pathology , Squamous Intraepithelial Lesions/pathology , Carcinoma, Squamous Cell/pathology , Neoplasm Grading
10.
Mod Pathol ; 36(4): 100086, 2023 04.
Article in English | MEDLINE | ID: mdl-36788085

ABSTRACT

Training machine learning models for artificial intelligence (AI) applications in pathology often requires extensive annotation by human experts, but there is little guidance on the subject. In this work, we aimed to describe our experience and provide a simple, useful, and practical guide addressing annotation strategies for AI development in computational pathology. Annotation methodology will vary significantly depending on the specific study's objectives, but common difficulties will be present across different settings. We summarize key aspects and issue guiding principles regarding team interaction, ground-truth quality assessment, different annotation types, and available software and hardware options and address common difficulties while annotating. This guide was specifically designed for pathology annotation, intending to help pathologists, other researchers, and AI developers with this process.


Subject(s)
Artificial Intelligence , Pathologists , Humans , Software , Machine Learning
12.
Med Image Anal ; 83: 102690, 2023 01.
Article in English | MEDLINE | ID: mdl-36446314

ABSTRACT

Breast cancer is the most common and lethal form of cancer in women. Recent efforts have focused on developing accurate neural network-based computer-aided diagnosis systems for screening to help anticipate this disease. The ultimate goal is to reduce mortality and improve quality of life after treatment. Due to the difficulty in collecting and annotating data in this domain, data scarcity is - and will continue to be - a limiting factor. In this work, we present a unified view of different regularization methods that incorporate domain-known symmetries in the model. Three general strategies were followed: (i) data augmentation, (ii) invariance promotion in the loss function, and (iii) the use of equivariant architectures. Each of these strategies encodes different priors on the functions learned by the model and can be readily introduced in most settings. Empirically we show that the proposed symmetry-based regularization procedures improve generalization to unseen examples. This advantage is verified in different scenarios, datasets and model architectures. We hope that both the principle of symmetry-based regularization and the concrete methods presented can guide development towards more data-efficient methods for breast cancer screening as well as other medical imaging domains.
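Strategy (ii) above, invariance promotion in the loss function, can be shown with a toy penalty: the squared difference between the model's output on an image and on its horizontally flipped copy. The "models" below are stubs chosen to make the behaviour obvious; a real implementation would add this term to the training loss.

```python
# Sketch of an invariance-promoting loss term: penalize disagreement between
# predictions on an image and on its mirrored copy.

def flip(image):
    """Horizontally flip a 2D list-of-lists image."""
    return [row[::-1] for row in image]

def invariance_penalty(model, image):
    """Squared difference between predictions on the image and its mirror."""
    return (model(image) - model(flip(image))) ** 2

# A model that just sums pixels is flip-invariant, so the penalty is zero.
sum_model = lambda img: sum(sum(row) for row in img)
print(invariance_penalty(sum_model, [[1, 2], [3, 4]]))  # 0

# A model reading only the top-left pixel is not invariant.
corner_model = lambda img: img[0][0]
print(invariance_penalty(corner_model, [[1, 2], [3, 4]]))  # 1
```

Strategy (i) would instead feed `flip(image)` as an extra training sample, and strategy (iii) would build the invariance into the architecture itself.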


Subject(s)
Breast Neoplasms , Early Detection of Cancer , Female , Humans , Breast Neoplasms/diagnostic imaging , Quality of Life
13.
Sci Rep ; 12(1): 20732, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36456605

ABSTRACT

Currently, radiologists face an excessive workload, which leads to high levels of fatigue and, consequently, to undesired diagnostic mistakes. Decision support systems can be used to prioritize cases and help radiologists make quicker decisions. In this sense, medical content-based image retrieval systems can be of extreme utility by providing well-curated similar examples. Nonetheless, most medical content-based image retrieval systems work by finding the most similar image overall, which is not equivalent to finding the most similar image in terms of disease and its severity. Here, we propose an interpretability-driven and an attention-driven medical image retrieval system. We conducted experiments on a large, publicly available dataset of chest radiographs with structured labels derived from free-text radiology reports (MIMIC-CXR-JPG). We evaluated the methods on two common conditions: pleural effusion and (potential) pneumonia. As ground truth for the evaluation, query/test and catalogue images were classified and ordered by an experienced board-certified radiologist. For a thorough and complete evaluation, additional radiologists also provided their rankings, which allowed us to infer inter-rater variability and establish qualitative performance levels. Based on our ground-truth ranking, we also quantitatively evaluated the proposed approaches by computing the normalized Discounted Cumulative Gain (nDCG). We found that the interpretability-guided approach outperforms the other state-of-the-art approaches and shows the best agreement with the most experienced radiologist. Furthermore, its performance lies within the observed inter-rater variability.
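The nDCG metric used in this evaluation is standard and compact enough to implement directly: gains are the relevance scores, discounts are logarithmic in rank, and the result is normalized by the ideal ordering. This is the textbook formulation, not the paper's evaluation script.

```python
# Self-contained nDCG: log2 discounts, gains taken directly from relevances.

import math

def dcg(relevances):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """nDCG of a ranking, given the relevance of each retrieved item in order."""
    ideal = sorted(ranked_relevances, reverse=True)
    denom = dcg(ideal)
    return dcg(ranked_relevances) / denom if denom > 0 else 0.0

print(ndcg([3, 2, 1]))  # 1.0 (already ideally ordered)
print(round(ndcg([1, 2, 3]), 3))
```

A reversed ranking scores below 1.0 because high-relevance items are pushed into heavily discounted positions.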


Subject(s)
Radiology , Humans , Radiography , Radiologists , Diagnosis, Computer-Assisted , Computers
14.
BMC Med Inform Decis Mak ; 22(1): 314, 2022 11 29.
Article in English | MEDLINE | ID: mdl-36447207

ABSTRACT

BACKGROUND: The standard configuration's set of twelve electrocardiogram (ECG) leads is optimal for the medical diagnosis of diverse cardiac conditions. However, it requires ten electrodes on the patient's limbs and chest, which is uncomfortable and cumbersome. Interlead conversion methods can reconstruct missing leads and enable more comfortable acquisitions, including in wearable devices, while still allowing for adequate diagnoses. Current methodologies for interlead ECG conversion require multiple reference (input) leads and/or require input signals to be temporally aligned to the ECG landmarks. METHODS: Unlike the methods in the literature, this paper studies the possibility of converting ECG signals into all twelve standard configuration leads using signal segments from only one reference lead, without temporal alignment (blindly segmented). The proposed methodology is based on a deep learning encoder-decoder U-Net architecture, which is compared with adaptations based on convolutional autoencoders and label refinement networks. Moreover, the method is explored for conversion with one single shared encoder or multiple individual encoders for each lead. RESULTS: Despite the more challenging settings, the proposed methodology attained state-of-the-art performance in multiple target leads, and both lead I and lead II seem especially suitable for converting certain sets of leads. In cross-database tests, the methodology offered promising results despite acquisition setup differences. Furthermore, results show that the presence of medical conditions does not have a considerable effect on the method's performance. CONCLUSIONS: This study shows the feasibility of converting ECG signals using single-lead, blindly segmented inputs.
Although the results are promising, further efforts should be devoted to improving the methodologies, especially their robustness to diverse acquisition setups, so that they become applicable to cardiac health monitoring in wearable devices and less obtrusive clinical scenarios.


Subject(s)
Heart Diseases , Wearable Electronic Devices , Humans , Electrocardiography , Electrodes , Databases, Factual
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 588-593, 2022 07.
Article in English | MEDLINE | ID: mdl-36085930

ABSTRACT

Manual assessment of fragments during the processing of pathology specimens is critical to ensure that the material available for slide analysis matches that captured during grossing, without losing valuable material in the process. However, this step is still performed manually, resulting in lost time and delays in making the complete case available for evaluation by the pathologist. To overcome this limitation, we developed an autonomous system that can detect and count the number of fragments contained on each slide. We applied and compared two different approaches: conventional machine learning methods and deep convolutional network methods. For the conventional machine learning methods, we tested a two-stage approach with a supervised classifier followed by unsupervised hierarchical clustering. In addition, Fast R-CNN and YOLOv5, two state-of-the-art deep learning models for detection, were used and compared. All experiments were performed on a dataset comprising 1276 images of colorectal biopsy and polypectomy specimens manually labeled for fragment/set detection. The best results were obtained with the YOLOv5 architecture, with an mAP@0.5 of 0.977 for fragment/set detection.


Subject(s)
Machine Learning , Neural Networks, Computer , Biopsy , Quality Control
16.
Cancers (Basel) ; 14(10)2022 May 18.
Article in English | MEDLINE | ID: mdl-35626093

ABSTRACT

Colorectal cancer (CRC) diagnosis is based on samples obtained from biopsies, assessed in pathology laboratories. Due to population growth and ageing, as well as better screening programs, the CRC incidence rate has been increasing, leading to a higher workload for pathologists. In this sense, the application of AI for automatic CRC diagnosis, particularly on whole-slide images (WSI), is of utmost relevance, in order to assist professionals in case triage and case review. In this work, we propose an interpretable semi-supervised approach to detect lesions in colorectal biopsies with high sensitivity, based on multiple-instance learning and feature aggregation methods. The model was developed on an extended version of the recent, publicly available CRC dataset (the CRC+ dataset with 4433 WSI), using 3424 slides for training and 1009 slides for evaluation. The proposed method attained 90.19% classification ACC, 98.8% sensitivity, 85.7% specificity, and a quadratic weighted kappa of 0.888 at slide-based evaluation. Its generalisation capabilities are also studied on two publicly available external datasets.
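The multiple-instance learning view described in this abstract treats a slide as a bag of tiles that is positive if any tile looks lesional, which motivates max-style aggregation of instance scores. The sketch below is a hedged illustration of that idea with toy scores and an assumed threshold; the paper's actual aggregation is more sophisticated.

```python
# Conceptual MIL sketch for WSI triage: a slide is flagged if its most
# suspicious tile crosses a threshold.

def mil_slide_score(tile_scores):
    """Max-aggregation over instance scores; real systems may use attention."""
    return max(tile_scores)

def is_positive(tile_scores, threshold=0.5):
    return mil_slide_score(tile_scores) >= threshold

print(is_positive([0.1, 0.05, 0.92]))  # True: one suspicious tile flags the slide
print(is_positive([0.1, 0.2, 0.3]))    # False
```

Max-aggregation is what makes MIL-based triage high-sensitivity by construction: a single strongly scored tile is enough to surface the slide for review.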

17.
Eur Surg Res ; 63(1): 3-8, 2022.
Article in English | MEDLINE | ID: mdl-34038908

ABSTRACT

INTRODUCTION: Breast volume estimation is considered crucial for breast cancer surgery planning, yet a single, easy, and reproducible method to estimate breast volume is not available. This study aims to evaluate, in patients proposed for mastectomy, the accuracy of breast volume calculation from a low-cost 3D surface scan (Microsoft Kinect) compared to breast MRI and the water displacement technique. MATERIAL AND METHODS: Patients with Tis/T1-T3 breast cancer proposed for mastectomy between July 2015 and March 2017 were assessed for inclusion in the study. Breast volume calculations were performed using the 3D surface scan, breast MRI, and the water displacement technique. Agreement between the volumes obtained with these methods was assessed with the Spearman and Pearson correlation coefficients. RESULTS: Eighteen patients with invasive breast cancer were included in the study and submitted to mastectomy. The level of agreement of the 3D breast volume with the surgical specimen and breast MRI volumes was evaluated. For mastectomy specimen volume, an average (standard deviation) of 0.823 (0.027) and 0.875 (0.026) was obtained for the Pearson and Spearman correlations, respectively. With respect to the MRI annotation, we obtained 0.828 (0.038) and 0.715 (0.018). DISCUSSION: Although the values obtained by the two methodologies still differ, the strong linear correlation suggests that 3D breast volume measurement using a low-cost surface scan device is feasible and can approximate both the MRI breast volume and the mastectomy specimen volume with sufficient accuracy. CONCLUSION: 3D breast volume measurement using a depth-sensor low-cost surface scan device is feasible and can parallel breast MRI and mastectomy specimen volumes with sufficient accuracy. The differences between methods require further development to reach clinical applicability. A possible approach could be the fusion of breast MRI and the 3D surface scan to harmonize anatomical limits and improve volume delimitation.
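The agreement analysis above relies on Pearson and Spearman correlations; a minimal pure-Python version is sketched here (Spearman via ranks, without tie handling). The volume lists are invented illustrative numbers, not the study's data.

```python
# Minimal Pearson and Spearman correlation (no tie handling in the ranks).

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    rank = lambda v: [sorted(v).index(e) for e in v]  # assumes no ties
    return pearson(rank(x), rank(y))

volumes_3d = [400, 520, 610, 700]   # toy volumes (mL), not study data
volumes_mri = [410, 500, 630, 690]
print(round(pearson(volumes_3d, volumes_mri), 3))
print(spearman(volumes_3d, volumes_mri))  # close to 1.0 (same ordering)
```

In practice, `scipy.stats.pearsonr` and `spearmanr` would be used, since they also return p-values and handle ties.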


Subject(s)
Breast Neoplasms , Breast/diagnostic imaging , Breast/surgery , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/surgery , Female , Humans , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Mastectomy/methods
18.
Sci Rep ; 11(1): 14358, 2021 07 13.
Article in English | MEDLINE | ID: mdl-34257363

ABSTRACT

Most oncological cases can be detected by imaging techniques, but diagnosis is based on pathological assessment of tissue samples. In recent years, the pathology field has evolved to a digital era where tissue samples are digitised and evaluated on screen. As a result, digital pathology opened up many research opportunities, allowing the development of more advanced image processing techniques, as well as artificial intelligence (AI) methodologies. Nevertheless, despite colorectal cancer (CRC) being the second deadliest cancer type worldwide, with increasing incidence rates, the application of AI for CRC diagnosis, particularly on whole-slide images (WSI), is still a young field. In this review, we analyse some relevant works published on this particular task and highlight the limitations that hinder the application of these works in clinical practice. We also empirically investigate the feasibility of using weakly annotated datasets to support the development of computer-aided diagnosis systems for CRC from WSI. Our study underscores the need for large datasets in this field and the use of an appropriate learning methodology to gain the most benefit from partially annotated datasets. The CRC WSI dataset used in this study, containing 1,133 colorectal biopsy and polypectomy samples, is available upon reasonable request.


Subject(s)
Colorectal Neoplasms/diagnosis , Computational Biology/methods , Diagnosis, Computer-Assisted/instrumentation , Diagnosis, Computer-Assisted/methods , Diagnostic Imaging/trends , Image Processing, Computer-Assisted/methods , Adenoma/diagnosis , Algorithms , Artificial Intelligence , Biomedical Engineering/methods , Biopsy , Diagnosis, Computer-Assisted/trends , Diagnostic Imaging/instrumentation , Feasibility Studies , Humans , Image Interpretation, Computer-Assisted/methods , Learning , Machine Learning , Software
19.
PeerJ Comput Sci ; 7: e457, 2021.
Article in English | MEDLINE | ID: mdl-33981833

ABSTRACT

Cervical cancer is the fourth leading cause of cancer-related deaths in women, especially in low- to middle-income countries. Despite recent scientific advances, there is no totally effective treatment, especially when the disease is diagnosed at an advanced stage. Screening tests, such as cytology or colposcopy, have been responsible for a substantial decrease in cervical cancer deaths. Automatic cervical cancer screening via Pap smear is a highly valuable cell imaging-based detection tool, where cells must be classified into one of a multitude of ordinal classes ranging from abnormal to normal. Current approaches to ordinal inference for neural networks either fail to take sufficient advantage of the ordinal structure of the problem or are overly rigid. We propose a non-parametric ordinal loss for neural networks that promotes output probabilities following a unimodal distribution. This is done by imposing a set of constraints over all pairs of consecutive labels, which allows for a more flexible decision boundary relative to approaches from the literature. Our proposed loss is contrasted against other methods from the literature using a plethora of deep architectures. A first conclusion is the benefit of using non-parametric ordinal losses over parametric losses in cervical cancer risk prediction. Additionally, the proposed loss is the top performer in several cases. The best-performing model scores an accuracy of 75.6% for seven classes and 81.3% for four classes.
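The unimodality idea behind the proposed loss can be illustrated with a simple penalty over consecutive class probabilities: violations of "non-decreasing before the peak, non-increasing after it" are accumulated with hinges. This is an illustrative penalty in the same spirit, not the paper's exact formulation.

```python
# Sketch of a unimodality penalty over consecutive ordinal class probabilities.

def unimodality_penalty(probs):
    """Sum of hinge violations of unimodality around the argmax class."""
    peak = probs.index(max(probs))
    penalty = 0.0
    for i in range(len(probs) - 1):
        if i < peak:   # probabilities should be non-decreasing before the peak
            penalty += max(0.0, probs[i] - probs[i + 1])
        else:          # and non-increasing after it
            penalty += max(0.0, probs[i + 1] - probs[i])
    return penalty

print(unimodality_penalty([0.1, 0.2, 0.4, 0.2, 0.1]))       # 0.0 (unimodal)
print(unimodality_penalty([0.4, 0.1, 0.4, 0.05, 0.05]) > 0)  # True (bimodal)
```

Added to a cross-entropy objective, such a term pushes the network toward distributions with a single mode, which is what an ordinal scale of lesion grades calls for.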

20.
Sensors (Basel) ; 21(5)2021 Mar 06.
Article in English | MEDLINE | ID: mdl-33800810

ABSTRACT

In recent years, deep neural networks have shown significant progress in computer vision due to their large generalization capacity; however, overfitting ubiquitously threatens the learning process of these highly nonlinear architectures. Dropout is a solution to mitigate overfitting that has witnessed significant success in various classification applications. Recently, many efforts have been made to improve standard dropout using an unsupervised, merit-based semantic selection of neurons in the latent space. However, these studies do not consider the quality and quantity of task-relevant information or the diversity of the latent kernels. To address the challenge of dropping less informative neurons in deep learning, we propose an efficient end-to-end dropout algorithm that selects the most informative neurons, those with the highest correlation with the target output, while accounting for sparsity in its selection procedure. First, to promote activation diversity, we devise an approach to select the most diverse set of neurons using determinantal point process (DPP) sampling. Furthermore, to incorporate task specificity into deep latent features, a mutual information (MI)-based merit function is developed. Leveraging the proposed MI with DPP sampling, we introduce the novel DPPMI dropout, which adaptively adjusts the retention rate of neurons based on their contribution to the neural network task. Empirical studies on real-world classification benchmarks, including MNIST, SVHN, CIFAR10, and CIFAR100, demonstrate the superiority of our proposed method over recent state-of-the-art dropout algorithms in the literature.
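The merit-based selection idea underlying this line of work can be shown in miniature: instead of dropping neurons uniformly at random, keep the top-scoring ones and zero the rest. The merit scores below are a toy stand-in for the paper's mutual-information criterion, and the deterministic top-k rule is a simplification of its DPP sampling.

```python
# Simplified merit-based dropout sketch: retain the k highest-merit neurons.

def merit_dropout(activations, merits, keep_ratio=0.5):
    """Zero all activations except the top keep_ratio fraction by merit."""
    k = max(1, int(len(activations) * keep_ratio))
    keep = sorted(range(len(merits)), key=lambda i: merits[i], reverse=True)[:k]
    return [a if i in keep else 0.0 for i, a in enumerate(activations)]

acts = [1.0, 2.0, 3.0, 4.0]
merits = [0.9, 0.1, 0.8, 0.2]  # toy informativeness scores
print(merit_dropout(acts, merits))  # [1.0, 0.0, 3.0, 0.0]
```

The paper's contribution is in how the merit is computed (MI with the target) and how the kept set is sampled (DPP, for diversity) rather than in this skeleton.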
