Results 1 - 7 of 7
1.
Histopathology ; 84(2): 343-355, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37872676

ABSTRACT

BACKGROUND: Diagnosis of head and neck (HN) squamous dysplasias and carcinomas is critical for patient care, cure, and follow-up. It can be challenging, especially when grading intraepithelial lesions. Despite the recent simplification of the latest WHO grading system, inter- and intraobserver variability remains substantial, particularly for nonspecialized pathologists, highlighting the need for new tools to support pathologists. METHODS: In this study we investigated the potential of deep learning to assist the pathologist with automatic and reliable classification of HN lesions following the 2022 WHO classification system. We created, for the first time, a large-scale database of histological samples (>2000 slides) intended for developing an automatic diagnostic tool. We developed and trained a weakly supervised model that performs classification from whole-slide images (WSI). We evaluated our model on both internal and external test sets, and we defined and validated a new confidence score for assessing predictions, which can be used to identify difficult cases. RESULTS: Our model demonstrated high classification accuracy across all lesion types on both internal and external test sets (average area under the curve [AUC]: 0.878 (95% confidence interval [CI]: 0.834-0.918) and 0.886 (95% CI: 0.813-0.947), respectively), and the confidence score allowed for accurate differentiation between reliable and uncertain predictions. CONCLUSION: Our results demonstrate that the model, combined with confidence measurements, can help with the difficult task of classifying HN squamous lesions by limiting variability and detecting ambiguous cases, taking us one step closer to wider adoption of AI-based assistive tools.


Subjects
Squamous Cell Carcinoma, Deep Learning, Humans, Neck, Hyperplasia, Head
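The abstract does not detail how the confidence score is computed. As a purely illustrative sketch (not the authors' method), one common way to separate reliable from uncertain predictions is the softmax margin between the two top-ranked classes; all names and the threshold below are hypothetical:

```python
import math


def softmax(logits):
    """Numerically stable softmax over raw class scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def confidence_margin(logits):
    """Confidence = gap between the top-2 class probabilities.

    A small margin means the model hesitates between two lesion
    grades, so the case should be flagged for expert review.
    """
    probs = sorted(softmax(logits), reverse=True)
    return probs[0] - probs[1]


def triage(logits, threshold=0.25):
    """Return ('reliable' | 'uncertain', predicted class index)."""
    probs = softmax(logits)
    pred = probs.index(max(probs))
    status = "reliable" if confidence_margin(logits) >= threshold else "uncertain"
    return status, pred
```

Cases flagged "uncertain" would be routed to a specialist rather than auto-classified; the 0.25 threshold is an arbitrary placeholder that would need calibration on a validation set.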
2.
J Clin Med ; 12(2)2023 Jan 07.
Article in English | MEDLINE | ID: mdl-36675435

ABSTRACT

INTRODUCTION: Glaucoma and non-arteritic anterior ischemic optic neuropathy (NAION) are optic neuropathies that can both lead to irreversible blindness. Several studies have compared optical coherence tomography angiography (OCTA) findings in glaucoma and NAION in the presence of similar functional and structural damage, with contradictory results. The goal of this study was to use a deep learning system to differentiate OCTA in glaucoma and NAION. MATERIAL AND METHODS: Sixty eyes with glaucoma (including primary open-angle glaucoma, angle-closure glaucoma, normal-tension glaucoma, pigmentary glaucoma, pseudoexfoliative glaucoma, and juvenile glaucoma), thirty eyes with atrophic NAION, and forty control eyes (NC) were included. All patients underwent OCTA imaging, and automatic segmentation was used to analyze the macular superficial capillary plexus (SCP) and the radial peripapillary capillary (RPC) plexus. We used the classic ResNet50 convolutional neural network (CNN) architecture. Attribution maps were obtained using the "Integrated Gradients" method. RESULTS: The best performance was obtained with the SCP + RPC model, achieving a mean area under the receiver operating characteristic curve (ROC AUC) of 0.94 (95% CI 0.92-0.96) for glaucoma, 0.90 (95% CI 0.86-0.94) for NAION, and 0.96 (95% CI 0.96-0.97) for NC. CONCLUSION: This study shows that a deep learning architecture can classify NAION, glaucoma, and normal OCTA images with good diagnostic performance and may outperform specialist assessment.
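The per-class ROC AUC values reported above follow the usual one-vs-rest convention. As a minimal, library-free illustration (not code from the study), ROC AUC can be computed from binary labels and scores via its rank (Mann-Whitney) interpretation:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability
    that a randomly chosen positive is scored above a randomly
    chosen negative, with ties counting one half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

The quadratic pairwise loop is fine for illustration; production code would sort once and use ranks for O(n log n) behavior.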

3.
Med Image Anal ; 73: 102167, 2021 10.
Article in English | MEDLINE | ID: mdl-34333217

ABSTRACT

While the Pap test is the most common diagnostic method for cervical cancer, its results are highly dependent on the ability of cytotechnicians to detect abnormal cells on smears using brightfield microscopy. In this paper, we propose an explainable region classifier for whole slide images that could be used by cytopathologists to handle these very large images (100,000 x 100,000 pixels) efficiently. We create a dataset that simulates Pap smear regions and use a loss, which we call classification under regression constraint, to train an efficient region classifier (about 66.8% accuracy on severity classification, 95.2% accuracy on normal/abnormal classification, and a 0.870 kappa score). We explain how we benefit from this loss to obtain a model focused on sensitivity, and we then show that it can be used to perform weakly supervised localization (accuracy of 80.4%) of the cell most responsible for the malignancy of regions of whole slide images. We extend our method to perform a more general detection of abnormal cells (66.1% accuracy) and to ensure that at least one abnormal cell will be detected if malignancy is present. Finally, we evaluate our solution on a small real clinical slide dataset, highlighting its relevance, adapting it for easy integration into a pathology laboratory workflow, and extending it to make slide-level predictions.


Subjects
Early Detection of Cancer, Uterine Cervical Neoplasms, Computers, Computer-Assisted Diagnosis, Female, Humans, Computer-Assisted Image Interpretation, Uterine Cervical Neoplasms/diagnostic imaging
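The abstract names the loss ("classification under regression constraint") without giving its form. A hypothetical sketch of a loss in that spirit, and explicitly not the paper's actual formulation, combines cross-entropy with a regression-style penalty on the ordinal distance between the predicted and true severity grades:

```python
import math


def ordinal_loss(probs, true_class, alpha=1.0):
    """Hypothetical 'classification under a regression constraint'
    loss for ordinal severity grades 0..K-1 (illustrative only):
    cross-entropy plus a squared penalty on the distance between
    the expected grade under the predicted distribution and the
    true grade, so misgrading by three classes costs more than
    misgrading by one."""
    ce = -math.log(probs[true_class])
    expected_grade = sum(i * p for i, p in enumerate(probs))
    reg = (expected_grade - true_class) ** 2
    return ce + alpha * reg
```

The regression term pushes probability mass toward grades adjacent to the truth, which matches the sensitivity-oriented behavior the authors describe; `alpha` trades off the two terms.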
4.
Front Oncol ; 11: 596499, 2021.
Article in English | MEDLINE | ID: mdl-33763347

ABSTRACT

Juvenile-onset recurrent respiratory papillomatosis (JoRRP) is a condition characterized by the repeated growth of benign exophytic papillomas in the respiratory tract. The course of the disease remains unpredictable: some children experience minor symptoms, while others require multiple interventions due to florid growth. Our study aimed to identify histologic severity risk factors in patients with JoRRP. Forty-eight children from two French pediatric centers were included retrospectively. Criteria for severe disease were: an annual rate of surgical endoscopy ≥ 5, spread to the lungs, carcinomatous transformation, or death. We conducted a multi-stage study with image analysis. First, on hematoxylin and eosin (HE) digital slides of papillomas, we searched for morphological patterns associated with severe JoRRP using a deep learning algorithm. Then, immunohistochemistry with antibodies against p53 and p63 was performed on sections of FFPE samples of laryngeal papilloma obtained between 2008 and 2018. Immunostaining was quantified according to staining intensity through two automated workflows: one using machine learning, the other using deep learning. Twenty-four patients had severe disease. For the HE analysis, no significant results were obtained with cross-validation. For immunostaining with the anti-p63 antibody, we found similar results between the two image analysis methods. Using machine learning, we found 23.98% of nuclei stained at medium intensity for mild JoRRP vs. 36.1% for severe JoRRP (p = 0.041), and for medium and strong intensity together, 24.14% for mild JoRRP vs. 36.9% for severe JoRRP (p = 0.048). Using deep learning, we found 58.32% for mild JoRRP vs. 67.45% for severe JoRRP (p = 0.045) for medium and strong intensity together. Regarding p53, we did not find any significant difference in the number of stained nuclei between the two groups of patients.
In conclusion, we highlighted that immunohistochemistry with the anti-p63 antibody is a potential biomarker for predicting the severity of JoRRP.
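The quantification above bins nuclei by staining intensity (negative / weak / medium / strong) and compares bin percentages between groups. A hypothetical sketch of that kind of binning, with illustrative cutoffs that are not the paper's:

```python
def intensity_fractions(nucleus_intensities, bins=(0.25, 0.5, 0.75)):
    """Hypothetical staining quantification: bin each nucleus's
    mean stain intensity (normalized to 0..1) into negative /
    weak / medium / strong and return the percentage per bin,
    as in the mild-vs-severe p63 comparison above.  The cutoff
    values are illustrative placeholders, not the study's."""
    counts = [0, 0, 0, 0]
    for x in nucleus_intensities:
        if x < bins[0]:
            counts[0] += 1       # negative
        elif x < bins[1]:
            counts[1] += 1       # weak
        elif x < bins[2]:
            counts[2] += 1       # medium
        else:
            counts[3] += 1       # strong
    total = len(nucleus_intensities)
    return [100.0 * c / total for c in counts]
```

Summing the medium and strong percentages gives the "medium and strong intensity together" figure compared between mild and severe groups in the text.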

5.
J Clin Med ; 9(10)2020 Oct 14.
Article in English | MEDLINE | ID: mdl-33066661

ABSTRACT

Background. In recent years, deep learning has been increasingly applied to a vast array of ophthalmological diseases. Inherited retinal diseases (IRD) are rare genetic conditions with a distinctive phenotype on fundus autofluorescence (FAF) imaging. Our purpose was to automatically classify different IRDs from FAF images using a deep learning algorithm. Methods. In this study, FAF images of patients with retinitis pigmentosa (RP), Best disease (BD), and Stargardt disease (STGD), as well as a healthy comparison group, were used to train a multilayer deep convolutional neural network (CNN) to differentiate FAF images between each type of IRD and normal FAF. The CNN was trained and validated with 389 FAF images. Established augmentation techniques were used, and an Adam optimizer was used for training. For subsequent testing, the built classifiers were tested with 94 untrained FAF images. Results. For the inherited retinal disease classifiers, global accuracy was 0.95. The precision-recall area under the curve (PRC-AUC) averaged 0.988 for BD, 0.999 for RP, 0.996 for STGD, and 0.989 for healthy controls. Conclusions. This study describes the use of a deep learning-based algorithm to automatically detect and classify inherited retinal diseases on FAF. The resulting classifiers showed excellent results. With further development, this model may become a diagnostic tool and provide relevant information for future therapeutic approaches.
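The PRC-AUC figures above summarize precision-recall curves. A minimal sketch of computing precision-recall AUC by the average-precision stepping rule (the standard definition, not code from the study):

```python
def pr_auc(labels, scores):
    """Precision-recall AUC by the step-wise (average-precision)
    rule: sweep thresholds from the highest score down and
    accumulate precision at each rank where a new positive is
    recalled, then divide by the number of positives."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(labels)
    tp = 0
    area = 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            area += tp / rank  # precision at this recall step
    return area / n_pos
```

Unlike ROC AUC, this metric is sensitive to class imbalance, which is why it is often preferred for rare-disease classifiers such as these.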

6.
J Cell Biol ; 202(1): 163-77, 2013 Jul 08.
Article in English | MEDLINE | ID: mdl-23836933

ABSTRACT

In migrating cells, integrin-based focal adhesions (FAs) assemble in protruding lamellipodia in association with rapid filamentous actin (F-actin) assembly and retrograde flow. How dynamic F-actin is coupled to FAs is not known. We analyzed the role of vinculin in integrating F-actin and FA dynamics by vinculin gene disruption in primary fibroblasts. Vinculin slowed F-actin flow in maturing FAs to establish a lamellipodium-lamellum border and generate high extracellular matrix (ECM) traction forces. In addition, vinculin promoted nascent FA formation and turnover in lamellipodia and inhibited the frequency and rate of FA maturation. Characterization of a vinculin point mutant that specifically disrupts F-actin binding showed that the vinculin-F-actin interaction is critical for these functions. However, FA growth rate correlated with F-actin flow speed independently of vinculin. Thus, vinculin functions as a molecular clutch, organizing leading-edge F-actin, generating ECM traction, and promoting FA formation and turnover, but it is dispensable for FA growth.


Subjects
Actins/metabolism, Focal Adhesions/metabolism, Protein Interaction Mapping/methods, Proteolysis, Vinculin/metabolism, Amino Acid Substitution, Animals, Cell Movement, Molecular Cloning, Extracellular Matrix/metabolism, Fibroblasts/metabolism, Focal Adhesions/genetics, Green Fluorescent Proteins/metabolism, Mice, Inbred C57BL Mice, Point Mutation, Protein Binding, Protein Transport, Pseudopodia/metabolism, Vinculin/genetics
7.
IEEE Trans Image Process ; 19(1): 74-84, 2010 Jan.
Article in English | MEDLINE | ID: mdl-19709977

ABSTRACT

This paper presents a general method for detecting curvilinear structures, such as filaments or edges, in noisy images. The method relies on a novel technique, the feature-adapted beamlet transform (FABT), which is the main contribution of this paper. It combines the well-known beamlet transform (BT), introduced by Donoho, with local filtering techniques in order to improve both the detection performance and the accuracy of the BT. Moreover, as the desired feature detector is chosen to belong to the class of steerable filters, our transform requires only O(N log N) operations, where N = n^2 is the number of pixels. Besides providing a fast implementation of the FABT on discrete grids, we present a statistically controlled method for detecting curvilinear objects. To extract significant objects, we propose an algorithm in four steps: 1) compute the FABT, 2) normalize the beamlet coefficients, 3) select meaningful beamlets through a fast energy-based minimization, and 4) link beamlets together to obtain a list of objects. We present an evaluation on both synthetic and real data and demonstrate substantial improvements of our method over classical feature detectors.
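At its core, a beamlet coefficient is an integral of the image along a line segment. A toy sketch of that core step only (a plain Bresenham line sum; the FABT additionally applies feature-adapted steerable filtering and the normalization/selection/linking steps above, all omitted here):

```python
def beamlet_coefficient(image, p0, p1):
    """Toy beamlet coefficient: the mean pixel value along the
    discrete line segment from p0 to p1 (Bresenham walk), so a
    bright filament lying on the segment yields a high value.
    `image` is a list of rows; points are (x, y) pairs.
    Illustrative only, not the paper's FABT."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    total, count = 0.0, 0
    while True:
        total += image[y0][x0]
        count += 1
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return total / count
```

Normalizing by segment length (the `count` division) is what makes coefficients at different scales comparable before the statistical selection step.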
