1.
IEEE Trans Med Imaging ; 42(12): 3602-3613, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37471191

ABSTRACT

The growth rate of pulmonary nodules is a critical clue for cancer diagnosis, and it is essential to monitor their dynamic progression during pulmonary nodule management. To facilitate research on nodule growth prediction, we organized and published a temporal dataset called NLSTt with consecutive computed tomography (CT) scans. Based on this self-built dataset, we develop a visual learner to qualitatively predict the growth seen in the following CT scan and further propose a model to quantitatively predict the growth rate of pulmonary nodules, so that better diagnoses can be achieved with the help of our predicted results. To this end, we propose a parameterized Gompertz-guided morphological autoencoder (GM-AE) to generate high-quality visual appearances of pulmonary nodules at any future time span from the baseline CT scan. Specifically, we parameterize a popular mathematical model of tumor growth kinetics, the Gompertz function, to predict future masses and volumes of pulmonary nodules. We then exploit the expected growth rate in mass and volume to guide decoders in generating the future shape and texture of pulmonary nodules. We introduce two branches in an autoencoder to encourage shape-aware and texture-aware representation learning and integrate the generated shape into the texture-aware branch to simulate the future morphology of pulmonary nodules. Extensive experiments on the self-built NLSTt dataset demonstrate the superiority of our GM-AE over its competitive counterparts. Experimental results also reveal that the learnable Gompertz function has promising descriptive power in accounting for inter-subject variability in the growth rate of pulmonary nodules. In addition, we evaluate our GM-AE model on an in-house dataset to validate its generalizability and practicality. We make our code publicly available along with the published NLSTt dataset.
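The Gompertz kinetics the abstract parameterizes can be sketched as follows; this is one standard form of the Gompertz curve, and the parameter names and values here are illustrative, not taken from the paper.

```python
import math

def gompertz_volume(t, v0, k, alpha):
    """Gompertz growth: nodule volume at time t (days) given baseline
    volume v0, carrying capacity k, and growth-rate parameter alpha.
    V(t) = K * (V0 / K) ** exp(-alpha * t); all values illustrative."""
    return k * (v0 / k) ** math.exp(-alpha * t)

# Example: a 100 mm^3 nodule with a 10,000 mm^3 carrying capacity.
v_baseline = gompertz_volume(0, 100.0, 10000.0, 0.002)    # equals v0
v_one_year = gompertz_volume(365, 100.0, 10000.0, 0.002)  # grown, below k
```

Growth is fastest early on and saturates toward the carrying capacity, which is why the Gompertz form is a common choice for tumor kinetics.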


Subject(s)
Lung Neoplasms , Solitary Pulmonary Nodule , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Tomography, X-Ray Computed/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Solitary Pulmonary Nodule/diagnostic imaging
2.
Front Oncol ; 12: 1002953, 2022.
Article in English | MEDLINE | ID: mdl-36313666

ABSTRACT

Background: Estimating the growth of pulmonary sub-solid nodules (SSNs) is crucial to their successful management during follow-up. The purpose of this study is to (1) investigate the sensitivity of diameter, volume, and mass measurements of SSNs for identifying growth and (2) establish a deep learning-based model to predict the growth of SSNs. Methods: A total of 2,523 patients with sub-solid nodules and at least 2 years of examination records were retrospectively collected. Of these, 2,358 patients with 3,120 SSNs from the NLST dataset were randomly divided into training and validation sets. Patients from the Yibicom Health Management Center and Guangdong Provincial People's Hospital were collected as an external test set (165 patients with 213 SSNs). Models trained on the LUNA16 and LNDb19 datasets were employed to automatically obtain the diameter, volume, and mass of SSNs. The increase rate of each measurement between the cancer and non-cancer groups was then studied to determine the most appropriate way to identify growth associated with lung cancer. Based on the selected measurement, all SSNs were classified into two groups, growth and non-growth, and a deep learning-based model (SiamModel) and a radiomics model were developed and verified. Results: The doubling times of diameter, volume, and mass were 711 vs. 963 days (P = 0.20), 552 vs. 621 days (P = 0.04), and 488 vs. 623 days (P < 0.001) in the cancer and non-cancer groups, respectively. Our proposed SiamModel performed better than the radiomics model in both the NLST validation set and the external test set, with AUCs of 0.858 (95% CI 0.786-0.921) and 0.760 (95% CI 0.646-0.857) in the validation set and 0.862 (95% CI 0.789-0.927) and 0.681 (95% CI 0.506-0.841) in the external test set, respectively.
Furthermore, our SiamModel could use data from the first CT alone to predict the growth of SSNs, with an AUC of 0.855 (95% CI 0.793-0.908) in the NLST validation set and 0.821 (95% CI 0.725-0.904) in the external test set. Conclusion: The mass increase rate reflects the growth of SSNs associated with lung cancer more sensitively than the diameter and volume increase rates. A deep learning-based model has great potential to predict the growth of SSNs.
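The doubling times compared above follow from the standard exponential-growth formula applied to two follow-up measurements; the sketch below shows that formula, not the paper's exact measurement pipeline.

```python
import math

def doubling_time(value_t0, value_t1, interval_days):
    """Volume/mass doubling time from two measurements taken
    interval_days apart: DT = interval * ln(2) / ln(V1 / V0).
    Assumes exponential growth between scans (standard formula)."""
    return interval_days * math.log(2) / math.log(value_t1 / value_t0)

# A nodule mass growing from 200 mg to 400 mg in 488 days
# has, by definition, a 488-day doubling time.
dt = doubling_time(200.0, 400.0, 488)
```

A shorter doubling time means faster growth, which is why the cancer group's 488-day mass doubling time versus 623 days in the non-cancer group is the most discriminative of the three measurements.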

3.
BMC Med Imaging ; 21(1): 99, 2021 06 10.
Article in English | MEDLINE | ID: mdl-34112095

ABSTRACT

BACKGROUND: Chest X-rays are the most commonly available and affordable radiological examination for screening thoracic diseases. According to domain knowledge of chest X-ray screening, pathological information usually lies in the lung and heart regions. However, acquiring region-level annotations is costly in practice, and model training mainly relies on image-level class labels in a weakly supervised manner, which is highly challenging for computer-aided chest X-ray screening. To address this issue, some methods have recently been proposed to identify local regions containing pathological information, which is vital for thoracic disease classification. Inspired by this, we propose a novel deep learning framework to explore discriminative information from the lung and heart regions. RESULTS: We design a feature extractor equipped with a multi-scale attention module to learn global attention maps from global images. To exploit disease-specific cues effectively, we locate the lung and heart regions containing pathological information with a well-trained pixel-wise segmentation model that generates binarization masks. By applying an element-wise logical AND operator to the learned global attention maps and the binarization masks, we obtain local attention maps in which pixels are 1 for the lung and heart regions and 0 elsewhere. By zeroing the features of non-lung-and-heart regions in the attention maps, we can effectively exploit disease-specific cues in the lung and heart regions. Compared to existing methods that fuse global and local features, we adopt feature weighting to avoid weakening visual cues unique to the lung and heart regions. Our method with pixel-wise segmentation can help overcome the deviation of locating local regions. Evaluated on the benchmark split of the publicly available ChestX-ray14 dataset, comprehensive experiments show that our method achieves superior performance compared to state-of-the-art methods.
CONCLUSION: We propose a novel deep framework for the multi-label classification of thoracic diseases in chest X-ray images. The proposed network aims to effectively exploit pathological regions containing the main cues for chest X-ray screening, and it has been used in clinical screening to assist radiologists. Chest X-rays account for a significant proportion of radiological examinations, and it is valuable to explore further methods for improving performance.
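The masking step this abstract describes, restricting a global attention map to the segmented lung and heart regions, can be sketched as below; array shapes and values are illustrative, not from the paper.

```python
import numpy as np

def local_attention(global_attention, organ_mask):
    """Restrict a global attention map to lung/heart pixels.
    global_attention: (H, W) float map; organ_mask: (H, W) binary mask
    from a segmentation model (1 = lung/heart, 0 = background).
    Multiplication implements the element-wise AND, zeroing background."""
    return global_attention * organ_mask

attn = np.array([[0.9, 0.2],
                 [0.4, 0.7]])
mask = np.array([[1, 0],
                 [0, 1]])
local = local_attention(attn, mask)  # background positions become 0
```

Zeroing (rather than down-weighting) background positions is what prevents spurious cues outside the lung and heart regions from influencing the classifier.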


Subject(s)
Deep Learning , Heart Diseases/diagnostic imaging , Lung Diseases/diagnostic imaging , Radiography, Thoracic , Thoracic Diseases/diagnostic imaging , Heart/diagnostic imaging , Humans , Lung/diagnostic imaging , ROC Curve
4.
IEEE J Biomed Health Inform ; 25(10): 3943-3954, 2021 10.
Article in English | MEDLINE | ID: mdl-34018938

ABSTRACT

When encountering a dubious diagnostic case, medical instance retrieval can help radiologists make evidence-based diagnoses by finding images containing instances similar to a query case in a large image database. The similarity between the query case and the retrieved cases is determined by visual features extracted from pathologically abnormal regions. However, the manifestation of these regions often lacks specificity: different diseases can share the same manifestation, and different manifestations may occur at different stages of the same disease. To combat this manifestation ambiguity in medical instance retrieval, we propose a novel deep framework called Y-Net, which encodes images into compact hash codes generated from convolutional features by feature aggregation. Y-Net learns highly discriminative convolutional features by unifying a pixel-wise segmentation loss and a classification loss. The segmentation loss allows exploring subtle spatial differences for good spatial discriminability, while the classification loss utilizes class-aware semantic information for good semantic separability. As a result, Y-Net can enhance the visual features of pathologically abnormal regions and suppress background interference during model training, effectively embedding discriminative features into the hash codes at the retrieval stage. Extensive experiments on two medical image datasets demonstrate that Y-Net can alleviate the ambiguity of pathologically abnormal regions and that its retrieval performance outperforms the state-of-the-art method by an average of 9.27% on a returned list of 10.
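The hash-code retrieval the abstract relies on can be sketched as follows. This is a generic thresholding-plus-Hamming-distance illustration; Y-Net itself learns the codes end-to-end, so the binarization and names here are assumptions.

```python
import numpy as np

def to_hash_code(features, threshold=0.0):
    """Binarize an aggregated feature vector into a compact hash code
    (illustrative thresholding; the paper learns codes during training)."""
    return (features > threshold).astype(np.uint8)

def hamming_distance(code_a, code_b):
    """Retrieval ranking metric: number of differing bits."""
    return int(np.sum(code_a != code_b))

query = to_hash_code(np.array([0.8, -0.3, 1.2, -0.9]))      # -> 1,0,1,0
candidate = to_hash_code(np.array([0.5, 0.4, 0.7, -1.1]))   # -> 1,1,1,0
d = hamming_distance(query, candidate)
```

Compact binary codes make nearest-neighbor search over a large database cheap, which is why discriminability of the codes, not just of the raw features, matters for retrieval quality.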


Subject(s)
Algorithms , Semantics , Databases, Factual , Humans , Radiologists , Research Design
5.
Med Image Anal ; 69: 101981, 2021 04.
Article in English | MEDLINE | ID: mdl-33588123

ABSTRACT

Deep hashing methods have been shown to be the most efficient approximate nearest neighbor search techniques for large-scale image retrieval. However, existing deep hashing methods have poor small-sample ranking performance for case-based medical image retrieval: the top-ranked images in the returned query results may be of a different class than the query image. This ranking problem is caused by the loss of classification, region-of-interest (ROI), and small-sample information in the hashing space. To address it, we propose an end-to-end framework, called the Attention-based Triplet Hashing (ATH) network, to learn low-dimensional hash codes that preserve the classification, ROI, and small-sample information. We embed a spatial-attention module into the network structure of ATH to focus on ROI information. The spatial-attention module aggregates the spatial information of feature maps by jointly utilizing max-pooling, element-wise maximum, and element-wise mean operations along the channel axis. To highlight the essential role of classification in differentiating case-based medical images, we propose a novel triplet cross-entropy loss that achieves maximal class separability and maximal hash-code discriminability simultaneously during model training. The triplet cross-entropy loss helps map the classification information of images and the similarity between images into the hash codes. Moreover, by adopting triplet labels during model training, we can fully utilize the small-sample information to alleviate the imbalanced-sample problem. Extensive experiments on two case-based medical datasets demonstrate that our proposed ATH improves retrieval performance compared to state-of-the-art deep hashing methods and boosts ranking performance for small samples. Compared to other losses, the triplet cross-entropy loss enhances classification performance and hash-code discriminability.
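The channel-axis aggregation in the spatial-attention module can be sketched as below; the element-wise max and mean reductions follow the abstract, while the stacked output layout is an assumption for illustration.

```python
import numpy as np

def spatial_attention_descriptor(feature_maps):
    """Aggregate a (C, H, W) feature tensor along the channel axis
    with element-wise maximum and element-wise mean, producing a
    (2, H, W) spatial descriptor (stacking layout is illustrative)."""
    ch_max = feature_maps.max(axis=0)    # (H, W) element-wise maximum
    ch_mean = feature_maps.mean(axis=0)  # (H, W) element-wise mean
    return np.stack([ch_max, ch_mean], axis=0)

fmap = np.arange(24, dtype=float).reshape(2, 3, 4)  # C=2, H=3, W=4
desc = spatial_attention_descriptor(fmap)
```

Reducing over channels keeps the spatial layout intact, so the attention that is computed from this descriptor can highlight ROI locations rather than feature channels.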

6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 353-356, 2020 07.
Article in English | MEDLINE | ID: mdl-33018001

ABSTRACT

Bundle branch block (BBB) is one of the most common cardiac disorders and can be detected from the electrocardiogram (ECG) signal in clinical practice. Conventional methods adopted various hand-crafted features, whose discriminative power is relatively low. Moreover, these methods were based on supervised learning, which requires costly heartbeat annotation for training. In this paper, a novel end-to-end deep network is proposed to classify three types of heartbeat, right BBB (RBBB), left BBB (LBBB), and others, with a multiple-instance-learning-based training strategy. We trained the proposed method on the China Physiological Signal Challenge 2018 database (CPSC) and tested it on the MIT-BIH Arrhythmia database (AR). The proposed method achieved an accuracy of 78.58% and sensitivities of 84.78% (LBBB), 51.23% (others), and 99.72% (RBBB), better than the baseline methods. Experimental results show that our method is a good choice for BBB classification on ECG datasets with record-level labels instead of heartbeat annotations.
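The multiple-instance idea, predicting a record-level label from unlabeled heartbeat instances, can be sketched as below. Max-pooling over instances is one common MIL aggregation; the paper's exact aggregation may differ.

```python
import numpy as np

def record_prediction(beat_probs):
    """Multiple-instance aggregation: a record's class scores are the
    max over its heartbeat (instance) scores, so training needs only
    record-level labels, not per-beat annotations.
    beat_probs: (n_beats, n_classes) per-beat class probabilities."""
    record_scores = beat_probs.max(axis=0)
    return int(record_scores.argmax())

# 3 beats scored over classes [RBBB, LBBB, others]; one beat is
# strongly RBBB, so the record is labeled RBBB (class index 0).
probs = np.array([[0.1, 0.2, 0.7],
                  [0.9, 0.05, 0.05],
                  [0.2, 0.3, 0.5]])
label = record_prediction(probs)
```

Max aggregation encodes the clinical intuition that a single clearly abnormal beat is enough to flag the whole record.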


Subject(s)
Bundle-Branch Block , Electrocardiography , Arrhythmias, Cardiac/diagnosis , Bundle-Branch Block/diagnosis , China , Heart Rate , Humans
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 418-421, 2020 07.
Article in English | MEDLINE | ID: mdl-33018017

ABSTRACT

Multi-label electrocardiogram (ECG) classification aims to automatically predict a set of concurrent cardiac abnormalities in an ECG record, which is significant for clinical diagnosis. Modeling the dependencies among cardiac abnormalities is the key to improving classification performance. To capture these dependencies, we propose a multi-label classification method based on weighted graph attention networks. In this study, a graph with each class as a node was constructed, and class dependencies were represented by the weights of the graph edges. A novel weight-generation method is proposed that combines self-attentional weights with prior co-occurrence knowledge of the classes learned from data. The algorithm was evaluated on the dataset of the Hefei Hi-tech Cup ECG Intelligent Competition for classifying 34 kinds of ECG abnormalities, achieving a cross-validation micro-F1 of 91.45% and macro-F1 of 44.48%. The experimental results show that the proposed method can model class dependencies and improve classification performance.
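One way to combine self-attentional weights with prior co-occurrence statistics into edge weights is a convex mixture, sketched below; the mixing scheme and normalization here are assumptions, not the paper's exact generation method.

```python
import numpy as np

def edge_weights(self_attention, cooccurrence, mix=0.5):
    """Blend learned self-attention weights with prior class
    co-occurrence statistics into graph edge weights.
    Both inputs: (n_classes, n_classes), rows summing to 1.
    mix: illustrative trade-off between the two sources."""
    w = mix * self_attention + (1.0 - mix) * cooccurrence
    return w / w.sum(axis=1, keepdims=True)  # renormalize rows

attn = np.array([[0.7, 0.3],
                 [0.4, 0.6]])
cooc = np.array([[0.5, 0.5],
                 [0.2, 0.8]])
w = edge_weights(attn, cooc)
```

The prior term lets edges between frequently co-occurring abnormalities stay strong even when the learned attention for a particular record is weak.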


Subject(s)
Arrhythmias, Cardiac , Electrocardiography , Algorithms , Attention , Humans , Research Design
8.
Lin Chuang Er Bi Yan Hou Ke Za Zhi ; 16(7): 323-5, 2002 Jul.
Article in Chinese | MEDLINE | ID: mdl-15510726

ABSTRACT

OBJECTIVE: To further explore the mutation frequency in Chinese genetic deafness and to determine whether the genetic deafness pedigree we recently collected resulted from mutations in deafness genes that have already been cloned. METHOD: We performed routine otologic examinations, hearing tests, and physical examinations on the members of this pedigree, and screened for mutations in seven autosomal dominant deafness genes, HDIA1, GJB2, GJB3, DFNA5, alpha-tectorin (responsible for two types of genetic deafness, DFNA8 and DFNA12), MYO7A, and POU4F3, by PCR sequencing. RESULT: 1. Analysis of the hereditary mode: forty-seven persons were collected across five generations of this pedigree, eighteen of whom were deaf; the pedigree was consistent with autosomal dominant inheritance. 2. Clinical features: all affected members had postlingual deafness. Hearing loss began between sixteen and thirty years of age and was bilateral, symmetrical, progressive, and sensorineural, without abnormalities in other systems. 3. Mutation analysis: we found two nucleotide changes in the CX26 gene, A341G and GC257-258CG, and one changed nucleotide in the POU4F3 gene, T90C, but after analysis we concluded that these changes do not cause deafness. No mutation was found in the other five genes. CONCLUSION: The possibility that the deafness in this pedigree resulted from an already cloned gene is relatively small. We are now scanning the whole genome and performing linkage analysis on this pedigree, which will most probably localize a new deafness gene locus.


Subject(s)
Connexins/genetics , Deafness/genetics , Adolescent , Adult , Aged , Connexin 26 , DNA Mutational Analysis , Female , Genes, Dominant/genetics , Humans , Male , Middle Aged , Pedigree