Results 1 - 9 of 9

1.
Database (Oxford) ; 2022, 2022 10 17.
Article in English | MEDLINE | ID: mdl-36251776

ABSTRACT

Breast cancer is the most commonly diagnosed cancer in women and accounts for the highest number of cancer deaths among them. Advances in diagnostic activities combined with large-scale screening policies have significantly lowered mortality rates for breast cancer patients. However, the manual inspection of tissue slides by pathologists is cumbersome, time-consuming and subject to significant inter- and intra-observer variability. Recently, the advent of whole-slide scanning systems has empowered the rapid digitization of pathology slides and enabled the development of Artificial Intelligence (AI)-assisted digital workflows. However, AI techniques, especially Deep Learning, require a large amount of high-quality annotated data to learn from. Constructing such task-specific datasets poses several challenges, such as data-acquisition constraints, time-consuming and expensive annotation, and anonymization of patient information. In this paper, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of annotated Hematoxylin and Eosin (H&E)-stained images intended to advance AI development in the automatic characterization of breast lesions. BRACS contains 547 Whole-Slide Images (WSIs) and 4539 Regions Of Interest (ROIs) extracted from the WSIs. Each WSI and its ROIs are annotated by the consensus of three board-certified pathologists into different lesion categories. Specifically, BRACS includes three lesion types, i.e., benign, malignant and atypical, which are further subtyped into seven categories. It is, to the best of our knowledge, the largest annotated dataset for breast cancer subtyping at both the WSI and ROI levels. Furthermore, by including the understudied atypical lesions, BRACS offers a unique opportunity to leverage AI to better understand their characteristics. We encourage AI practitioners to develop and evaluate novel algorithms on the BRACS dataset to further breast cancer diagnosis and patient care. Database URL: https://www.bracs.icar.cnr.it/.


Subjects
Artificial Intelligence , Breast Neoplasms , Algorithms , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/genetics , Breast Neoplasms/pathology , Eosine Yellowish-(YS) , Female , Hematoxylin , Humans
2.
Med Image Anal ; 75: 102264, 2022 01.
Article in English | MEDLINE | ID: mdl-34781160

ABSTRACT

Cancer diagnosis, prognosis and therapy response predictions from tissue specimens highly depend on the phenotype and topological distribution of the constituting histological entities. Thus, adequate tissue representations that encode histological entities are imperative for computer-aided cancer patient care. To this end, several approaches have leveraged cell-graphs, which capture the cell microenvironment, to depict the tissue. These allow graph theory and machine learning to be used to map the tissue representation to tissue functionality and to quantify their relationship. Though cellular information is crucial, it is by itself insufficient to comprehensively characterize complex tissue structure. We herein treat the tissue as a hierarchical composition of multiple types of histological entities, from fine to coarse level, capturing multivariate tissue information at multiple levels. We propose a novel multi-level hierarchical entity-graph representation of tissue specimens to model these hierarchical compositions, encoding the histological entities as well as their intra- and inter-entity-level interactions. Subsequently, a hierarchical graph neural network is proposed to operate on the hierarchical entity-graph and map the tissue structure to tissue functionality. Specifically, for input histology images, we utilize well-defined cells and tissue regions to build HierArchical Cell-to-Tissue (HACT) graph representations, and devise HACT-Net, a message-passing graph neural network, to classify the HACT representations. As part of this work, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of Haematoxylin & Eosin-stained breast tumor regions of interest, to evaluate and benchmark our proposed methodology against pathologists and state-of-the-art computer-aided diagnostic approaches. Through comparative assessment and ablation studies, the proposed method is shown to yield superior classification results compared to alternative methods as well as individual pathologists. The code, data and models can be accessed at https://github.com/histocartography/hact-net.
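The authors' code and data are available at the GitHub link above. Purely as an illustration of the hierarchical entity-graph idea described here, and not of the actual HACT-Net implementation, the sketch below builds a toy two-level graph (a cell graph, a tissue-region graph and a cell-to-region assignment) and runs a few rounds of simple mean-aggregation message passing with NumPy; the function names, the mean aggregator and the final mean pooling are assumptions made only to keep the example short.

import numpy as np

def mean_aggregate(features, adjacency):
    """One message-passing round: every node averages itself and its neighbours."""
    adj = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    degree = adj.sum(axis=1, keepdims=True)        # per-node degree
    return adj @ features / degree                 # mean aggregation

def hact_like_forward(cell_feats, cell_adj, region_adj, cell_to_region, rounds=2):
    """Toy hierarchical pass: cell-level message passing, pooling of the cell
    features into their tissue regions, then region-level message passing."""
    h_cell = cell_feats
    for _ in range(rounds):                        # low level: cell graph
        h_cell = mean_aggregate(h_cell, cell_adj)
    n_regions = region_adj.shape[0]
    h_region = np.zeros((n_regions, h_cell.shape[1]))
    for r in range(n_regions):                     # pool cells into their regions
        members = cell_to_region == r
        if members.any():
            h_region[r] = h_cell[members].mean(axis=0)
    for _ in range(rounds):                        # high level: tissue-region graph
        h_region = mean_aggregate(h_region, region_adj)
    return h_region.mean(axis=0)                   # graph-level embedding

# Tiny example: five cells grouped into two tissue regions.
cell_feats = np.random.rand(5, 8)
cell_adj = np.array([[0, 1, 0, 0, 0],
                     [1, 0, 1, 0, 0],
                     [0, 1, 0, 0, 0],
                     [0, 0, 0, 0, 1],
                     [0, 0, 0, 1, 0]], dtype=float)
region_adj = np.array([[0, 1], [1, 0]], dtype=float)
cell_to_region = np.array([0, 0, 0, 1, 1])
print(hact_like_forward(cell_feats, cell_adj, region_adj, cell_to_region).shape)  # (8,)

In the real model, learned message-passing layers replace the fixed mean aggregation and the resulting graph embedding feeds a classification head trained on the BRACS labels.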


Subjects
Histological Techniques , Neural Networks, Computer , Benchmarking , Humans , Prognosis
3.
Cancers (Basel) ; 12(5), 2020 May 25.
Article in English | MEDLINE | ID: mdl-32466184

ABSTRACT

We introduce a machine learning-based analysis to predict the immunohistochemical (IHC) labeling index for the cell proliferation marker Ki67/MIB1 in cancer tissues, based on morphometric features extracted from hematoxylin and eosin (H&E)-stained, formalin-fixed, paraffin-embedded (FFPE) tumor tissue samples. We provide a proof-of-concept prediction of the Ki67/MIB1 IHC positivity of cancer cells through the definition and quantitation of single-nucleus features. First, we set up our digital framework on whole-slide images of Ki67/MIB1-stained oral squamous cell carcinoma (OSCC) tissue samples, using QuPath as the working platform together with its integrated algorithms; we built a classifier to distinguish tumor and stroma classes and, within them, Ki67-positive and Ki67-negative cells, and we then sorted the morphometric features of tumor cells according to their Ki67 IHC status. Among the evaluated features, nuclear hematoxylin mean optical density (NHMOD) proved to be the best at distinguishing Ki67/MIB1-positive from negative cells. We confirmed our findings in a single-cell-level analysis of H&E staining on Ki67-immunostained/H&E-decolored tissue samples. Finally, we tested our digital framework on a case series of OSCCs arranged in tissue microarrays (TMAs); we selected two consecutive sections of each OSCC FFPE TMA block, stained with H&E and immunostained for Ki67/MIB1, respectively. We automatically detected tumor cells in the H&E slides and generated a "false color map" (FCM) based on NHMOD through the QuPath measurement map tool. The FCM nearly coincided with the actual immunohistochemical result, allowing the prediction of Ki67/MIB1-positive cells in a direct, visual fashion. Our proposed approach provides the pathologist with a fast method of identifying the proliferating compartment of the tumor through a quantitative assessment of nuclear features on H&E slides, readily appreciable by visual inspection. Although this technique needs to be fine-tuned and tested on larger series of tumors, the digital analysis approach appears to be a promising tool to quickly estimate the tumor's proliferation fraction directly on routine H&E-stained digital sections.
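The study itself is implemented in QuPath; as a rough sketch of the underlying idea only (mean hematoxylin optical density within a nucleus as a proxy for Ki67/MIB1 positivity), the snippet below uses scikit-image's H&E colour deconvolution. The nucleus masks, the helper names and the uncalibrated threshold are assumptions, not the paper's validated pipeline.

import numpy as np
from skimage.color import rgb2hed   # Ruifrok & Johnston colour deconvolution

def nuclear_hematoxylin_mean_od(rgb_image, nucleus_mask):
    """Mean hematoxylin optical density (an NHMOD-like measure) inside one nucleus.

    rgb_image    : (H, W, 3) H&E image (uint8 or float)
    nucleus_mask : (H, W) boolean mask of a single segmented nucleus
    """
    hed = rgb2hed(rgb_image)          # separate Hematoxylin / Eosin / DAB channels
    hematoxylin = hed[..., 0]         # channel 0 holds the hematoxylin signal
    return float(hematoxylin[nucleus_mask].mean())

def predict_ki67_positive(rgb_image, nucleus_masks, od_threshold=0.25):
    """Label each nucleus as putatively Ki67-positive when its mean hematoxylin
    OD exceeds an illustrative, uncalibrated threshold."""
    return [nuclear_hematoxylin_mean_od(rgb_image, mask) > od_threshold
            for mask in nucleus_masks]

In practice, the nucleus masks would come from a cell-detection step (as in QuPath) and the decision threshold would be calibrated against the matched Ki67 IHC slides, as the authors do.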

4.
Med Image Anal ; 56: 122-139, 2019 08.
Article in English | MEDLINE | ID: mdl-31226662

ABSTRACT

Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods for diagnosing the type of breast cancer. This requires specialized analysis by pathologists, in a task that (i) is highly time- and cost-consuming and (ii) often leads to discordant results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin-stained histopathological images have already been demonstrated, but the reported results are still sub-optimal for clinical use. With the goal of advancing the state of the art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). BACH aimed at the classification and localization of clinically relevant histopathological classes in microscopy and whole-slide images from a large annotated dataset, specifically compiled and made publicly available for the challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. The submitted algorithms improved the state of the art in automatic classification of breast cancer from microscopy images to an accuracy of 87%. Convolutional neural networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available so as to promote further improvements in the field of automatic classification in digital pathology.
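As a minimal illustration of the kind of convolutional classifier that dominated the challenge, and not of any particular BACH submission, the sketch below defines a tiny four-class patch classifier in PyTorch; the four-class setting and the architecture are assumptions chosen only to keep the example short.

import torch
from torch import nn

class TinyHistologyCNN(nn.Module):
    """Minimal four-class histology patch classifier (illustrative architecture only)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                                # x: (N, 3, H, W) RGB patches
        return self.classifier(self.features(x).flatten(1))

logits = TinyHistologyCNN()(torch.randn(2, 3, 224, 224))
print(logits.shape)                                      # torch.Size([2, 4])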


Subjects
Breast Neoplasms/pathology , Neural Networks, Computer , Pattern Recognition, Automated , Algorithms , Female , Humans , Microscopy , Staining and Labeling
5.
Cell Commun Signal ; 17(1): 20, 2019 03 01.
Article in English | MEDLINE | ID: mdl-30823936

ABSTRACT

BACKGROUND: Shp1, a Src-homology 2 (SH2) domain-containing tyrosine phosphatase, is involved in inflammatory and immune reactions, where it regulates diverse signalling pathways, usually by limiting cell responses through dephosphorylation of target molecules. Moreover, Shp1 regulates actin dynamics. One Shp1 target is Src, which controls many cellular functions including actin dynamics. Src has previously been shown to be activated by a signalling cascade initiated by the cytosolic phospholipase A2 (cPLA2) metabolite glycerophosphoinositol 4-phosphate (GroPIns4P), which enhances actin polymerisation and motility. While the signalling cascade downstream of Src has been fully defined, the mechanism by which GroPIns4P activates Src remains unknown. METHODS: Affinity chromatography, mass spectrometry and co-immunoprecipitation studies were employed to identify GroPIns4P interactors; among these, Shp1 was selected for further analysis. The specific Shp1 residues interacting with GroPIns4P were revealed by NMR and validated by site-directed mutagenesis and biophysical methods such as circular dichroism, isothermal calorimetry, fluorescence spectroscopy, surface plasmon resonance and computational modelling. Morphological and motility assays were performed in NIH3T3 fibroblasts. RESULTS: We find that Shp1 is the direct cellular target of GroPIns4P. GroPIns4P binds directly to the Shp1 SH2 domain region (the crucial residues being Ser 118, Arg 138 and Ser 140) and thereby promotes the association between Shp1 and Src and the dephosphorylation of the Src-inhibitory phosphotyrosine at position 530, resulting in Src activation. As a consequence, fibroblasts exposed to GroPIns4P show significantly enhanced wound-healing capability, indicating that GroPIns4P stimulates fibroblast migration. GroPIns4P is produced by cPLA2 upon stimulation by diverse receptors, including the EGF receptor. Indeed, endogenously produced GroPIns4P was shown to mediate EGF-induced cell motility. CONCLUSIONS: This study identifies a previously undescribed mechanism of Shp1/Src modulation that promotes cell motility and that depends on the cPLA2 metabolite GroPIns4P. We show that GroPIns4P is required for EGF-induced fibroblast migration and that it is part of a cPLA2/GroPIns4P/Shp1/Src cascade that might have broad implications for studies of the immune-inflammatory response and cancer.


Subjects
Cell Movement , ErbB Receptors/metabolism , Inositol Phosphates/metabolism , Phospholipases A2/metabolism , Protein Tyrosine Phosphatase, Non-Receptor Type 6/metabolism , Signal Transduction , src-Family Kinases/metabolism , Animals , Binding Sites , Epidermal Growth Factor/pharmacology , Mice , NIH 3T3 Cells , Phosphorylation , Protein Binding , Protein Tyrosine Phosphatase, Non-Receptor Type 6/chemistry , RAW 264.7 Cells , Wound Healing , src Homology Domains
6.
IEEE J Biomed Health Inform ; 23(1): 437-448, 2019 01.
Article in English | MEDLINE | ID: mdl-29994162

ABSTRACT

New technological advances in automated microscopy have given rise to large volumes of data, which have made human-based analysis infeasible and heightened the need for automatic systems for high-throughput microscopy applications. In particular, in the field of fluorescence microscopy, automatic tools for image analysis are making an essential contribution to increasing the statistical power of the cell analysis process. The development of these automatic systems is a difficult task, due both to the diversity of staining patterns and to the local variability of the images. In this paper, we present an unsupervised approach for automatic cell segmentation and counting, named CSC, in high-throughput microscopy images. The segmentation is performed by dividing the whole image into square patches, which undergo gray-level clustering followed by adaptive thresholding. Subsequently, cell labeling is obtained by detecting the centers of the cells, using both the distance transform and curvature analysis, and by applying a region-growing process. The advantages of CSC are manifold. The foreground detection process works on gray levels rather than on individual pixels, so it proves to be very efficient. Moreover, the combination of distance transform and curvature analysis makes the counting process very robust to clustered cells. A further strength of the CSC method is the limited number of parameters that must be tuned. Indeed, two versions of the method have been considered, CSC-7 and CSC-3, depending on the number of parameters to be tuned. The CSC method has been tested on several publicly available datasets of real and synthetic images. Results in terms of standard metrics and spatially aware measures show that CSC outperforms current state-of-the-art techniques.
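The exact CSC steps (patch-wise gray-level clustering, adaptive thresholding, curvature analysis) are detailed in the paper; the following is only a generic stand-in that follows the same broad recipe — local thresholding for foreground detection, a distance transform to locate cell centres and marker-based watershed as the region-growing step — using SciPy and scikit-image. The parameter values and function names are illustrative assumptions.

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_local
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_and_count(gray_image, block_size=51, min_center_distance=7):
    """Rough cell segmentation and counting in the spirit of adaptive thresholding
    + distance transform + region growing (not the CSC algorithm itself)."""
    # 1. adaptive (local) thresholding for foreground detection
    foreground = gray_image > threshold_local(gray_image, block_size)
    # 2. distance transform; its peaks approximate cell centres
    distance = ndi.distance_transform_edt(foreground)
    centers = peak_local_max(distance, min_distance=min_center_distance,
                             labels=foreground)
    markers = np.zeros_like(gray_image, dtype=int)
    markers[tuple(centers.T)] = np.arange(1, len(centers) + 1)
    # 3. region growing from the markers, constrained to the foreground
    labels = watershed(-distance, markers, mask=foreground)
    return labels, labels.max()   # label image and cell count

The returned label image can then be compared against ground-truth annotations using the standard and spatially aware measures mentioned in the abstract.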


Subjects
Cell Count/methods , High-Throughput Screening Assays/methods , Image Processing, Computer-Assisted/methods , Microscopy/methods , Animals , Cell Line , Databases, Factual , Drosophila melanogaster , Humans , Unsupervised Machine Learning
7.
Comput Med Imaging Graph ; 66: 73-81, 2018 06.
Article in English | MEDLINE | ID: mdl-29573581

ABSTRACT

Retinitis pigmentosa is an eye disease that presents with a slow loss of vision and can evolve to blindness. The automatic detection of the early signs of retinitis pigmentosa provides valuable support to ophthalmologists in diagnosing and monitoring the disease, in order to slow down the degenerative process. A large body of literature is devoted to the analysis of retinitis pigmentosa. However, almost all existing approaches work on Optical Coherence Tomography (OCT) data, while hardly any attempt has been made to work on fundus images. Fundus image analysis is a suitable tool in daily practice for the early detection of retinal diseases and the monitoring of their progression. Moreover, the fundus camera is a low-cost and easily accessible diagnostic system, which can be employed in resource-limited regions and countries. The fundus images of a patient suffering from retinitis pigmentosa are characterized by attenuation of the vessels, a waxy pallor of the optic disc and the presence of pigment deposits. Considering that several methods have already been proposed for the analysis of retinal vessels and the optic disc, this work focuses on the automatic segmentation of the pigment deposits in fundus images. Image distortions are attenuated by applying local pre-processing. Next, a watershed transformation is carried out to produce homogeneous regions. Working on regions rather than on pixels makes the method very robust to the high variability of pigment deposits in terms of color and shape, allowing the detection of even small pigment deposits. The regions undergo a feature extraction procedure, and region classification is then performed by means of outlier detection analysis and a rule set. The experiments were performed on a dataset of images of patients suffering from retinitis pigmentosa. Although the images present high variability in terms of color and illumination, the method provides good performance in terms of sensitivity, specificity, accuracy and F-measure, whose values are 74.43, 98.44, 97.90 and 59.04, respectively.
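The paper's specific pre-processing, features, outlier test and rule set are not reproduced here; as a generic illustration of the "region features + outlier detection + rules" step applied to the watershed regions, the sketch below computes a mean intensity per region and flags dark, outlying regions as candidate pigment deposits. The robust z-score test, the darkness rule and the threshold values are assumptions.

import numpy as np
from scipy import ndimage as ndi

def candidate_pigment_regions(label_image, intensity_image,
                              z_cutoff=3.0, dark_rule=0.35):
    """Flag candidate pigment-deposit regions in a watershed label image.

    A region is a candidate when its mean intensity is a low outlier among
    all regions (robust z-score) and also satisfies a simple 'darker than
    dark_rule' rule (illustrative threshold for images scaled to [0, 1])."""
    region_labels = np.arange(1, label_image.max() + 1)
    means = np.asarray(ndi.mean(intensity_image, labels=label_image,
                                index=region_labels))
    # robust z-score: deviation from the median, scaled by the MAD
    median = np.median(means)
    mad = np.median(np.abs(means - median)) + 1e-8
    z = 0.6745 * (means - median) / mad
    return region_labels[(z < -z_cutoff) & (means < dark_rule)]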


Subjects
Fundus Oculi , Image Interpretation, Computer-Assisted/methods , Retinitis Pigmentosa/diagnostic imaging , Algorithms , Humans , Tomography, Optical Coherence/methods
8.
IEEE J Transl Eng Health Med ; 5: 3800107, 2017.
Article in English | MEDLINE | ID: mdl-28507827

ABSTRACT

Color vision deficiency (CVD) is a very common vision impairment that compromises the ability to recognize colors. In order to improve color vision in subjects with CVD, we designed and developed a wearable vision-enhancement system based on an augmented reality device. The system was validated in a clinical pilot study on 24 subjects with CVD (18 males and 6 females, aged 37.4 ± 14.2 years). The primary outcome was the improvement in the Ishihara Vision Test score with the correction proposed by our system. The Ishihara test score improved significantly, from 5.8 ± 3.0 without correction to 14.8 ± 5.0 with correction. Almost all subjects showed an improvement in color vision, as shown by the increased test scores. Moreover, with our system, 12 subjects (50%) passed the color vision test as normal-vision subjects do. The development and preliminary validation of the proposed platform confirm that a wearable augmented-reality device could be an effective aid to improve color vision in subjects with CVD.

9.
Biomed Opt Express ; 7(5): 1853-64, 2016 May 01.
Article in English | MEDLINE | ID: mdl-27231626

ABSTRACT

The visualization of heterogeneous morphology, and the segmentation and quantification of image features, are crucial for nonlinear optical microscopy applications, ranging from the imaging of living cells or tissues to biomedical diagnostics. In this paper, a methodology combining stimulated Raman scattering microscopy with image analysis techniques is presented. The basic idea is to join the vibrational contrast of stimulated Raman scattering with the strength of image analysis techniques in order to delineate subcellular morphology with chemical specificity. Validation tests on label-free imaging of polystyrene beads and of adipocyte cells are reported and discussed.
