1.
Med Image Anal ; 90: 102972, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37742374

ABSTRACT

By focusing on metabolic and morphological tissue properties respectively, FluoroDeoxyGlucose (FDG)-Positron Emission Tomography (PET) and Computed Tomography (CT) modalities include complementary and synergistic information for cancerous lesion delineation and characterization (e.g. for outcome prediction), in addition to usual clinical variables. This is especially true in Head and Neck Cancer (HNC). The goal of the HEad and neCK TumOR segmentation and outcome prediction (HECKTOR) challenge was to develop and compare modern image analysis methods to best extract and leverage this information automatically. We present here the post-analysis of HECKTOR 2nd edition, at the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2021. The scope of the challenge was substantially expanded compared to the first edition, by providing a larger population (adding patients from a new clinical center) and proposing an additional task to the challengers, namely the prediction of Progression-Free Survival (PFS). To this end, the participants were given access to a training set of 224 cases from 5 different centers, each with a pre-treatment FDG-PET/CT scan and clinical variables. Their methods were subsequently evaluated on a held-out test set of 101 cases from two centers. For the segmentation task (Task 1), the ranking was based on a Borda counting of their ranks according to two metrics: mean Dice Similarity Coefficient (DSC) and median Hausdorff Distance at 95th percentile (HD95). For the PFS prediction task, challengers could use the tumor contours provided by experts (Task 3) or rely on their own (Task 2). The ranking was obtained according to the Concordance index (C-index) calculated on the predicted risk scores. A total of 103 teams registered for the challenge, for a total of 448 submissions and 29 papers. 
The best method in the segmentation task obtained an average DSC of 0.759, and the best PFS predictions obtained a C-index of 0.717 (without relying on the provided contours) and 0.698 (using the expert contours). An interesting finding was that the best PFS predictions were achieved by deep learning approaches (with or without explicit tumor segmentation; 4 out of the 5 best ranked) rather than by standard radiomics methods using handcrafted features extracted from delineated tumors, and by exploiting alternative tumor contours (automated and/or larger volumes encompassing surrounding tissues) rather than the expert contours. This second edition of the challenge confirmed the promising performance of fully automated primary tumor delineation in PET/CT images of HNC patients, although there is still a margin for improvement in some difficult cases. For the first time, outcome prediction was also addressed, and the best methods reached relatively good performance (C-index above 0.7). Both results constitute another step toward large-scale outcome prediction studies in HNC.
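
The two-metric Borda ranking used for the segmentation task can be sketched as follows. This is a minimal, illustrative implementation (ties are not handled) with hypothetical team scores, not the actual challenge code:

```python
def borda_rank(scores, higher_better):
    """Return 1-based ranks (1 = best) for a list of metric values.
    Ties are ignored in this sketch (first occurrence wins)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i],
                   reverse=higher_better)
    ranks = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        ranks[i] = rank
    return ranks

def borda_aggregate(dsc, hd95):
    """Sum per-metric ranks; a lower total is better.
    DSC: higher is better. HD95: lower is better."""
    r_dsc = borda_rank(dsc, higher_better=True)
    r_hd = borda_rank(hd95, higher_better=False)
    return [a + b for a, b in zip(r_dsc, r_hd)]

# Illustrative values for three hypothetical teams
dsc = [0.759, 0.740, 0.755]   # mean DSC per team
hd95 = [3.2, 3.1, 4.0]        # median HD95 (mm) per team
totals = borda_aggregate(dsc, hd95)
# Team 0: rank 1 (DSC) + rank 2 (HD95) = 3 -> best overall
```

The final leaderboard orders teams by ascending rank total.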

2.
Head Neck Tumor Chall (2022) ; 13626: 1-30, 2023.
Article in English | MEDLINE | ID: mdl-37195050

ABSTRACT

This paper presents an overview of the third edition of the HEad and neCK TumOR segmentation and outcome prediction (HECKTOR) challenge, organized as a satellite event of the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2022. The challenge comprises two tasks related to the automatic analysis of FDG-PET/CT images for patients with Head and Neck cancer (H&N), focusing on the oropharynx region. Task 1 is the fully automatic segmentation of H&N primary Gross Tumor Volume (GTVp) and metastatic lymph nodes (GTVn) from FDG-PET/CT images. Task 2 is the fully automatic prediction of Recurrence-Free Survival (RFS) from the same FDG-PET/CT and clinical data. The data were collected from nine centers for a total of 883 cases consisting of FDG-PET/CT images and clinical information, split into 524 training and 359 test cases. The best methods obtained an aggregated Dice Similarity Coefficient (DSCagg) of 0.788 in Task 1, and a Concordance index (C-index) of 0.682 in Task 2.

3.
Eur Radiol Exp ; 7(1): 16, 2023 03 22.
Article in English | MEDLINE | ID: mdl-36947346

ABSTRACT

BACKGROUND: Radiomics, the field of image-based computational medical biomarker research, has experienced rapid growth over the past decade due to its potential to revolutionize the development of personalized decision support models. However, despite its research momentum and important advances toward methodological standardization, the translation of radiomics prediction models into clinical practice progresses only slowly. The lack of physicians leading the development of radiomics models and the insufficient integration of radiomics tools into the clinical workflow contribute to this slow uptake. METHODS: We propose a physician-centered vision of radiomics research and derive minimal functional requirements for radiomics research software to support this vision. Free-to-access radiomics tools and frameworks were reviewed to identify best practices and reveal the shortcomings of existing software solutions in optimally supporting physician-driven radiomics research in a clinical environment. RESULTS: Support for user-friendly development and evaluation of radiomics prediction models via machine learning was found to be missing in most tools. QuantImage v2 (QI2) was designed and implemented to address these shortcomings. QI2 relies on well-established existing tools and open-source libraries to realize and concretely demonstrate the potential of a one-stop tool for physician-driven radiomics research. It provides web-based access to cohort management, feature extraction, and visualization, and supports "no-code" development and evaluation of machine learning models against patient-specific outcome data. CONCLUSIONS: QI2 fills a gap in the radiomics software landscape by enabling "no-code" radiomics research, including model validation, in a clinical environment. Further information about QI2, a public instance of the system, and its source code is available at https://medgift.github.io/quantimage-v2-info/ .
Key points: As domain experts, physicians play a key role in the development of radiomics models. Existing software solutions do not optimally support physician-driven research. QuantImage v2 implements a physician-centered vision for radiomics research. QuantImage v2 is a web-based, "no-code" radiomics research platform.


Subject(s)
Cloud Computing , Computational Biology , Radiology , Radiology/instrumentation , Radiology/methods , Research , Software , Models, Theoretical , Forecasting , Carcinoma/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Humans , Machine Learning
4.
Eur J Hybrid Imaging ; 6(1): 33, 2022 Oct 30.
Article in English | MEDLINE | ID: mdl-36309636

ABSTRACT

BACKGROUND: Quality and reproducibility of radiomics studies are essential requirements for the standardisation of radiomics models. As data-driven respiratory gating (DDG) of [18F]-FDG PET/CT has recently shown superior diagnostic performance in lung cancer, we evaluated the impact of DDG on the reproducibility of radiomics features derived from [18F]-FDG PET/CT in comparison to free-breathing (FB) imaging. METHODS: Twenty-four lung nodules from 20 patients were delineated. Radiomics features were derived on FB PET/CT and on the corresponding DDG reconstruction using the QuantImage v2 platform. Lin's concordance factor (Cb) and the mean difference percentage (DIFF%) were calculated for each radiomics feature using the delineated nodules, which were also classified by anatomical localisation and volume. Non-reproducible radiomics features were defined as having a bias correction factor Cb < 0.8 and/or a mean difference percentage DIFF% > 10. RESULTS: In total, 141 features were computed in each concordance analysis, 10 of which were non-reproducible across all pulmonary lesions. These were first-order features from Laplacian of Gaussian (LoG)-filtered images (sigma = 1 mm): Energy, Kurtosis, Minimum, Range, Root Mean Squared, Skewness and Variance; texture features from the Gray Level Co-occurrence Matrix (GLCM): Cluster Prominence and Difference Variance; and the first-order Standardised Uptake Value (SUV) feature Kurtosis. Pulmonary lesions located in the superior lobes had only stable radiomics features, whereas those located lower had 25 non-reproducible radiomics features. Pulmonary lesions of greater size (defined as long-axis length > median) showed higher reproducibility (9 non-reproducible features) than smaller ones (20 non-reproducible features). CONCLUSION: Calculated on all pulmonary lesions, 131 out of 141 radiomics features can be used interchangeably between DDG and FB PET/CT acquisitions. Radiomics features derived from pulmonary lesions located inferior to the superior lobes are subject to greater variability, as are pulmonary lesions of smaller size.
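
The reproducibility rule above (flag a feature when Cb < 0.8 and/or DIFF% > 10) can be sketched as follows. The bias correction factor follows Lin's standard definition; the mean difference percentage here uses one common definition (mean absolute difference relative to the pairwise mean), which may differ in detail from the paper's exact formula:

```python
import numpy as np

def bias_correction_factor(x, y):
    """Lin's bias correction factor Cb = CCC / Pearson r, measuring how far
    the best-fit line deviates from the identity line. Assumes both inputs
    have nonzero variance."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sx, sy = x.std(), y.std()
    u = (x.mean() - y.mean()) / np.sqrt(sx * sy)  # location shift
    v = sx / sy                                   # scale shift
    return 2.0 / (v + 1.0 / v + u ** 2)

def diff_percent(x, y):
    """Mean difference percentage: mean absolute difference relative to the
    pairwise mean, in percent. Assumes nonzero feature values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return 100.0 * np.mean(2.0 * np.abs(x - y) / (np.abs(x) + np.abs(y)))

def is_reproducible(x, y, cb_min=0.8, diff_max=10.0):
    """Reproducibility rule from the study: Cb >= 0.8 and DIFF% <= 10."""
    return bias_correction_factor(x, y) >= cb_min and diff_percent(x, y) <= diff_max
```

Here `x` and `y` would hold one feature's values across the delineated nodules for the FB and DDG reconstructions, respectively.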

5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 4731-4735, 2022 07.
Article in English | MEDLINE | ID: mdl-36086273

ABSTRACT

The prediction of cancer characteristics, treatment planning and patient outcome from medical images generally requires tumor delineation. In Head and Neck cancer (H&N), the automatic segmentation and differentiation of primary Gross Tumor Volumes (GTVt) and malignant lymph nodes (GTVn) is a necessary step for large-scale radiomics studies aiming to predict patient outcomes such as Progression-Free Survival (PFS). Detecting malignant lymph nodes is also a crucial step for Tumor-Node-Metastasis (TNM) staging and for supporting the decision to resect the nodes. In turn, automatic TNM staging and patient outcome prediction can greatly benefit patient care by helping clinicians find the best personalized treatment. We propose the first model to automatically and individually segment GTVt and GTVn in PET/CT images. A bi-modal 3D U-Net model is trained for multi-class and multi-component segmentation on the multi-centric HECKTOR 2020 dataset containing 254 cases. The dataset was specifically re-annotated by experts to obtain ground-truth GTVn contours. The results show promising segmentation performance for the automation of radiomics pipelines and their validation on large-scale studies for which manual annotations are not available. An average test Dice Similarity Coefficient (DSC) of 0.717 is obtained for the segmentation of GTVt. The GTVn segmentation is evaluated with an aggregated DSC to account for cases without GTVn, which is estimated at 0.729 on the test set.
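
The aggregated DSC mentioned above is commonly computed by pooling intersections and volumes across all cases before dividing, which lets cases without any GTVn contribute without producing an undefined per-case Dice. A minimal sketch with toy masks (illustrative, not the authors' evaluation code):

```python
import numpy as np

def aggregated_dice(gts, preds):
    """Aggregated DSC: sum intersections and mask volumes over all cases,
    then divide once. A case with an empty ground-truth mask still adds its
    false positives to the denominator instead of yielding 0/0."""
    inter = sum(np.logical_and(g, p).sum() for g, p in zip(gts, preds))
    total = sum(g.sum() + p.sum() for g, p in zip(gts, preds))
    return 2.0 * inter / total if total > 0 else 1.0

# Illustrative toy binary masks: case 2 has no ground-truth lesion
g1 = np.array([1, 1, 0, 0]); p1 = np.array([1, 0, 0, 0])
g2 = np.array([0, 0, 0, 0]); p2 = np.array([0, 1, 0, 0])  # one false positive
# inter = 1, total = (2 + 1) + (0 + 1) = 4 -> DSCagg = 0.5
```
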


Subject(s)
Head and Neck Neoplasms , Positron Emission Tomography Computed Tomography , Head and Neck Neoplasms/diagnostic imaging , Humans , Lymph Nodes/diagnostic imaging
6.
Clin Transl Radiat Oncol ; 33: 153-158, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35243026

ABSTRACT

A vast majority of studies in the radiomics field are based on contours originating from radiotherapy planning. This kind of delineation (e.g. the Gross Tumor Volume, GTV) is often larger than the true tumoral volume, sometimes including parts of other organs (e.g. the trachea in Head and Neck, H&N, studies), and the impact of such over-segmentation has been little investigated so far. In this paper, we propose to evaluate and compare the performance of models using two contour types: those from radiotherapy planning, and those specifically delineated for radiomics studies. For the latter, we modified the radiotherapy contours to fit the true tumoral volume. The two contour types were compared when predicting Progression-Free Survival (PFS) using Cox models based on radiomics features extracted from FluoroDeoxyGlucose-Positron Emission Tomography (FDG-PET) and CT images of 239 patients with oropharyngeal H&N cancer collected from five centers (the data from the 2020 HECKTOR challenge). Dedicated contours demonstrated better performance for predicting PFS: Harrell's concordance indices of 0.61 and 0.69 were achieved with Radiotherapy and Dedicated contours, respectively. Using automatically Resegmented contours based on a fixed intensity range was associated with a C-index of 0.63. These results illustrate the importance of using clean dedicated contours that are close to the true tumoral volume in radiomics studies, even when tumor contours are already available from radiotherapy treatment planning.
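
The concordance index used to compare the contour types can be sketched as follows: a minimal Harrell's C-index on illustrative data, ignoring tied event times for simplicity:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable pairs (the earlier time is an
    observed event, not a censoring), count how often the higher predicted
    risk goes with the shorter survival; ties in risk count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i progressed before j's time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Illustrative data: a perfect risk ordering gives a C-index of 1.0
times = [5, 10, 15]       # months to progression or censoring
events = [1, 1, 0]        # 1 = progression observed, 0 = censored
risks = [0.9, 0.5, 0.1]   # predicted risk scores from a Cox model
```

A C-index of 0.5 corresponds to random predictions, so the reported 0.61 vs 0.69 gap between contour types is substantial.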

7.
Med Image Anal ; 77: 102336, 2022 04.
Article in English | MEDLINE | ID: mdl-35016077

ABSTRACT

This paper presents the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. The challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020 and was the first of its kind focusing on lesion segmentation in combined FDG-PET and CT image modalities. The challenge's task was the automatic segmentation of the Gross Tumor Volume (GTV) of Head and Neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, the participants were given a training set of 201 cases from four different centers, and their methods were tested on a held-out set of 53 cases from a fifth center. The methods were ranked according to the Dice Similarity Coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. 64 teams registered for the challenge, among which 10 provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, showing a large improvement over our proposed baseline method and the inter-observer agreement, associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods successfully leveraged the wealth of metabolic and structural properties of the combined PET and CT modalities, significantly outperforming the human inter-observer agreement level, semi-automatic thresholding based on PET images, and other single-modality-based methods. This promising performance is one step forward towards large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.


Subject(s)
Head and Neck Neoplasms , Positron Emission Tomography Computed Tomography , Fluorodeoxyglucose F18 , Head and Neck Neoplasms/diagnostic imaging , Humans , Positron Emission Tomography Computed Tomography/methods , Positron-Emission Tomography/methods , Tumor Burden
8.
Med Image Anal ; 65: 101756, 2020 10.
Article in English | MEDLINE | ID: mdl-32623274

ABSTRACT

Locally Rotation Invariant (LRI) image analysis has been shown to be fundamental in many applications, in particular in medical imaging, where local tissue structures occur at arbitrary rotations. LRI constituted the cornerstone of several breakthroughs in texture analysis, including Local Binary Patterns (LBP), Maximum Response 8 (MR8) and steerable filterbanks. Whereas globally rotation invariant Convolutional Neural Networks (CNNs) were recently proposed, LRI has been little investigated in the context of deep learning. LRI designs allow learning filters accounting for all orientations, which enables a drastic reduction of trainable parameters and training data compared to standard 3D CNNs. In this paper, we propose and compare several methods to obtain LRI CNNs with directional sensitivity. Two methods use orientation channels (responses to rotated kernels), either by explicitly rotating the kernels or by using steerable filters. These orientation channels constitute a locally rotation equivariant representation of the data, and local pooling across orientations yields LRI image analysis. Steerable filters are used to achieve a fine and efficient sampling of 3D rotations as well as a reduction of trainable parameters and operations, thanks to a parametric representation involving solid Spherical Harmonics (SH), which are products of SHs with associated learned radial profiles. Finally, we investigate a third strategy to obtain LRI, based on rotational invariants calculated from responses to a learned set of solid SHs. The proposed methods are evaluated and compared to standard CNNs on 3D datasets including synthetic textured volumes composed of rotated patterns, as well as pulmonary nodule classification in CT. The results show the importance of LRI image analysis, achieved with a drastic reduction of trainable parameters and outperforming standard 3D CNNs trained with rotational data augmentation.
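
The orientation-channel idea (responses to rotated copies of a kernel, pooled across orientations at each position) can be illustrated in 2D with the four 90-degree rotations of a single kernel. This toy NumPy sketch is only an analogy to the paper's 3D steerable-filter designs, which sample rotations far more finely:

```python
import numpy as np

def correlate2d_valid(img, ker):
    """Minimal 'valid'-mode 2-D cross-correlation (NumPy only)."""
    kh, kw = ker.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def lri_response(img, ker):
    """Orientation channels from the four 90-degree rotations of one kernel,
    max-pooled per position: the peak response is unchanged when a local
    pattern appears at any of the sampled orientations."""
    channels = [correlate2d_valid(img, np.rot90(ker, k)) for k in range(4)]
    return np.max(channels, axis=0)

# Illustrative check: a diagonal pattern and its 90-degree rotation
# produce the same peak LRI response
img = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
ker = np.array([[1., 0.], [0., 1.]])
```

A single learned kernel thus covers all sampled orientations, which is the source of the parameter reduction discussed in the abstract.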


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Diagnostic Imaging , Humans