Results 1 - 8 of 8
1.
Article in English | MEDLINE | ID: mdl-38743530

ABSTRACT

Breast lesion segmentation from ultrasound images is essential in computer-aided breast cancer diagnosis. To alleviate the problems of blurry lesion boundaries and irregular morphologies, common practice combines CNNs and attention to integrate global and local information. However, previous methods use two independent modules to extract global and local features separately; such inflexible feature-wise integration ignores the semantic gap between them, resulting in representation redundancy or insufficiency and undesirable restrictions in clinical practice. Moreover, medical images are highly similar to one another owing to the imaging protocols and human anatomy, yet the global information captured by transformer-based methods in the medical domain is limited to individual images; the semantic relations and common knowledge across images are largely ignored. To alleviate these problems, this paper takes a neighbor view and develops a pixel neighbor representation learning method (NeighborNet) that flexibly integrates global and local context within and across images for lesion morphology and boundary modeling. Concretely, we design two neighbor layers to investigate two properties of neighbors: their number and their distribution. The neighbor number for each pixel is not fixed but determined by the pixel itself, and the neighbor distribution is extended from one image to all images in the dataset. With these two properties, for each pixel at each feature level the proposed NeighborNet can evolve into a transformer or degenerate into a CNN for adaptive context representation learning, coping with irregular lesion morphologies and blurry boundaries. State-of-the-art performance on three ultrasound datasets proves the effectiveness of the proposed NeighborNet. The code is available at: https://github.com/fjcaoww/NeighborNet.
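The adaptive-neighbor idea above can be sketched with plain attention arithmetic: each pixel keeps only the neighbors whose feature similarity clears a threshold, so its neighbor count is data-dependent and the operator slides between local (CNN-like) and global (transformer-like) context. This is a minimal illustration under our own assumptions (cosine similarity, a fixed threshold `tau`), not the layer design from the paper.

```python
import numpy as np

def adaptive_neighbor_attention(feats, tau=0.5):
    """Aggregate context per pixel over a data-dependent neighbor set.

    feats: (n_pixels, dim) feature vectors. A pixel attends only to
    pixels whose cosine similarity exceeds tau, so its neighbor count
    is not fixed: a high tau shrinks the set toward self-only (local,
    CNN-like) context, a low tau expands it toward all pixels
    (global, transformer-like) context.
    """
    norm = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = norm @ norm.T                   # (n, n) cosine similarities
    mask = sim > tau                      # adaptive neighbor sets
    np.fill_diagonal(mask, True)         # each pixel is its own neighbor
    w = np.where(mask, np.exp(sim), 0.0)
    w /= w.sum(axis=1, keepdims=True)    # row-normalized attention weights
    return w @ feats                     # aggregated context per pixel
```

With `tau` above 1.0 every neighbor set collapses to the pixel itself and the output equals the input, illustrating the CNN-like degenerate case.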

2.
Polymers (Basel) ; 15(24)2023 Dec 17.
Article in English | MEDLINE | ID: mdl-38139981

ABSTRACT

In this work, a novel α-nucleating agent (NA) for polypropylene (PP) termed APAl-3C-12Li was prepared and evaluated compared with the commercially available type NA-21. For the synthesis of the organophosphate-type NA (APAl-3C), the -OH group of the acid part of NA-21 was substituted by the isopropoxy group. The structure of APAl-3C was analyzed by spectroscopy and element analysis, the results of which were consistent with the theoretical molecular formula. APAl-3C's thermal stability was studied by differential scanning calorimetry (DSC) and thermogravimetry (TG), which showed only weak mass loss below 230 °C, meaning that it would not decompose during the processing of PP. The APAl-3C-12Li was used as a novel nucleating agent, studying its effects on crystallization, microstructure, mechanical and optical properties. Tests were performed in a PP random copolymer at different contents, in comparison to the commercial NA-21. The composite with 0.5 wt% APAl-3C-12Li has a similar crystallization temperature of 118.8 °C as with the addition of 0.5 wt% NA-21. An advantage is that the composite with the APAl-3C-12Li has a lower haze value of 9.3% than the counterpart with NA-21. This is due to the weaker polarity of APAl-3C-12Li after the introduction of methyl and better uniform dispersion in the PP matrix, resulting in stronger improvement of optical and mechanical properties.

3.
J Cancer Res Clin Oncol ; 149(17): 15469-15478, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37642722

ABSTRACT

PURPOSE: To investigate the performance of deep learning and radiomics features of the intra-tumoral region (ITR) and peri-tumoral region (PTR) in diagnosing breast cancer lung metastasis (BCLM) versus primary lung cancer (PLC) on low-dose CT (LDCT). METHODS: We retrospectively collected LDCT images of 100 breast cancer patients with lung lesions, comprising 60 cases of BCLM and 40 cases of PLC. We proposed a fusion model that combined deep learning features, extracted by a ResNet18-based multi-input residual convolutional network, with traditional radiomics features. Specifically, the fusion model adopted a multi-region strategy, incorporating the aforementioned features from both the ITR and PTR. We then randomly divided the dataset into training and validation sets using a fivefold cross-validation approach. Comprehensive comparative experiments were performed between the proposed fusion model and eight other models: the intra-tumoral deep learning, radiomics, and deep-learning radiomics models; the peri-tumoral deep learning, radiomics, and deep-learning radiomics models; and the multi-region radiomics and deep-learning models. RESULTS: The fusion model built on deep-learning radiomics feature sets extracted from the ITR and PTR had the best classification performance, with an area under the curve of 0.913 (95% CI 0.840-0.960), significantly higher than that of any single-region radiomics or deep learning model. CONCLUSIONS: The combination of radiomics and deep learning features was effective in discriminating BCLM from PLC. Moreover, analysis of the PTR can mine more comprehensive tumor information.
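The multi-region fusion strategy reduces, at its simplest, to concatenating the deep and radiomics feature vectors from both regions and feeding the result to a classifier. The sketch below shows that step with an illustrative logistic scoring layer; the weights and vector sizes are assumptions, not the paper's fitted model.

```python
import numpy as np

def fuse_region_features(deep_itr, rad_itr, deep_ptr, rad_ptr):
    """Concatenate deep-learning and radiomics feature vectors from the
    intra-tumoral (ITR) and peri-tumoral (PTR) regions into one fused
    representation for the downstream classifier."""
    return np.concatenate([deep_itr, rad_itr, deep_ptr, rad_ptr])

def logistic_score(fused, w, b):
    """Sigmoid score of the fused vector (e.g., probability of BCLM
    vs. PLC) under illustrative weights w and bias b."""
    return 1.0 / (1.0 + np.exp(-(fused @ w + b)))
```

In practice the weights would be learned end-to-end or by the cross-validated logistic fit; here they merely demonstrate the data flow.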


Subject(s)
Breast Neoplasms , Deep Learning , Lung Neoplasms , Humans , Female , Lung Neoplasms/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Retrospective Studies , Tomography, X-Ray Computed
4.
Comput Biol Med ; 157: 106788, 2023 05.
Article in English | MEDLINE | ID: mdl-36958233

ABSTRACT

Deep learning methods using multimodal imaging have been proposed for the diagnosis of Alzheimer's disease (AD) and its early stages, such as subjective memory complaints (SMC), and may help slow the progression of the disease through early intervention. However, current fusion methods for multimodal imaging are generally coarse and may lead to suboptimal results through the use of shared extractors or simple downscaling and stitching. Another issue in diagnosing brain diseases is that they often affect multiple areas of the brain, making it important to consider potential connections throughout the brain. Traditional convolutional neural networks (CNNs) struggle with this because of their limited local receptive fields, so many researchers have turned to transformer networks, which provide global information about the brain but can be computationally intensive and perform poorly on small datasets. In this work, we propose a novel lightweight network called MENet that adaptively recalibrates the multiscale long-range receptive field to localize discriminative brain regions in a computationally efficient manner. On this basis, the network extracts the intensity and location responses between structural magnetic resonance imaging (sMRI) and 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) as an enhanced fusion for AD and SMC diagnosis. Our method is evaluated on the publicly available ADNI datasets and achieves 97.67% accuracy in AD diagnosis and 81.63% accuracy in SMC diagnosis using sMRI and FDG-PET, state-of-the-art (SOTA) performance on both tasks. To the best of our knowledge, this is one of the first deep-learning methods for SMC diagnosis with FDG-PET.
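The notion of adaptively recalibrating a multiscale receptive field can be illustrated in one dimension: compute context at several dilation rates, then softmax-weight the scales by their global response so the effective receptive field adapts to the input. This is a stand-in sketch under our own assumptions (sparse dilated averaging kernels, a mean-response gate), not MENet's actual module.

```python
import numpy as np

def multiscale_recalibration(x, dilations=(1, 2, 4)):
    """Adaptively weight dilated context scales for a 1-D signal x.

    Each dilation rate d builds a sparse 3-tap averaging kernel of
    span 2*d+1 (a long-range but cheap receptive field). The per-scale
    responses are then gated by a softmax over their mean activation,
    so the dominant scale is chosen per input rather than fixed.
    """
    responses = []
    for d in dilations:
        k = np.zeros(2 * d + 1)
        k[[0, d, -1]] = 1.0 / 3.0            # sparse, dilated 3-tap kernel
        responses.append(np.convolve(x, k, mode="same"))
    responses = np.stack(responses)          # (n_scales, len(x))
    gate = np.exp(responses.mean(axis=1))
    gate /= gate.sum()                       # softmax over scales
    return (gate[:, None] * responses).sum(axis=0)
```

A real 3-D brain-volume version would replace `np.convolve` with dilated 3-D convolutions and learn the gate, but the shape of the idea is the same.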


Subject(s)
Alzheimer Disease , Humans , Alzheimer Disease/diagnostic imaging , Fluorodeoxyglucose F18 , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Positron-Emission Tomography/methods
5.
Front Neurosci ; 16: 831533, 2022.
Article in English | MEDLINE | ID: mdl-35281501

ABSTRACT

18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) reveals altered brain metabolism in individuals with mild cognitive impairment (MCI) and Alzheimer's disease (AD). Some biomarkers derived from FDG-PET by computer-aided diagnosis (CAD) technologies have been shown to accurately distinguish normal controls (NC), MCI, and AD. However, existing FDG-PET-based research is still insufficient for the identification of early MCI (EMCI) and late MCI (LMCI). Compared with methods based on other modalities, current FDG-PET methods also make inadequate use of inter-region features for the diagnosis of early AD. Moreover, given the variability across individuals, hard samples that resemble both classes limit classification performance. To tackle these problems, we propose a novel bilinear pooling and metric learning network (BMNet), which extracts inter-region representation features and distinguishes hard samples by constructing an embedding space. To validate the proposed method, we collected 898 FDG-PET images from the Alzheimer's Disease Neuroimaging Initiative (ADNI), including 263 NC, 290 EMCI, 147 LMCI, and 198 AD patients. Following common preprocessing steps, 90 features are extracted from each FDG-PET image according to the automated anatomical labeling (AAL) template and then fed into the proposed network. Extensive fivefold cross-validation experiments are performed for multiple two-class classifications. Experiments show that most metrics improve after adding the bilinear pooling module and the metric losses to the baseline model. Specifically, in the classification between EMCI and LMCI, specificity improves by 6.38% after adding the triplet metric loss, and the negative predictive value (NPV) improves by 3.45% after using the bilinear pooling module. In addition, the classification accuracy between EMCI and LMCI reaches 79.64% using imbalanced FDG-PET images, a state-of-the-art result for EMCI-vs-LMCI classification based on PET images.
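The two ingredients named above have standard minimal forms: bilinear pooling takes the outer product of per-region features to capture inter-region interactions, and a triplet metric loss pulls same-class embeddings together while pushing hard negatives apart. The sketch below shows both under common conventions (signed square-root plus L2 normalization for the bilinear vector, a Euclidean triplet margin); BMNet's exact formulation may differ.

```python
import numpy as np

def bilinear_pool(f):
    """Bilinear pooling of per-region features f of shape (regions, dim).

    The outer-product matrix f.T @ f encodes pairwise inter-region
    interactions; signed square-root and L2 normalization are the
    usual post-processing for bilinear features.
    """
    v = (f.T @ f).flatten()                  # (dim*dim,) interaction vector
    v = np.sign(v) * np.sqrt(np.abs(v))      # signed square-root
    return v / (np.linalg.norm(v) + 1e-8)    # L2 normalization

def triplet_loss(anchor, pos, neg, margin=0.2):
    """Hinge on embedding distances: anchor should sit closer to the
    positive (same class) than to the negative by at least margin."""
    d_ap = np.linalg.norm(anchor - pos)
    d_an = np.linalg.norm(anchor - neg)
    return max(0.0, d_ap - d_an + margin)
```

An easy triplet (positive at the anchor, negative far away) yields zero loss; swapping positive and negative makes the hinge positive, which is the gradient signal that reshapes the embedding space around hard samples.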

6.
Med Phys ; 49(1): 144-157, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34766623

ABSTRACT

PURPOSE: Recent studies have shown that the peritumoral regions of medical images have value for clinical diagnosis. However, existing approaches using peritumoral regions mainly focus on the diagnostic capability of a single region and ignore the advantages of effectively fusing the intratumoral and peritumoral regions. In addition, these methods need accurate segmentation masks at testing time, which is tedious and inconvenient in clinical applications. To address these issues, we construct a deep convolutional neural network that adaptively fuses the information of multiple tumoral regions (FMRNet) for breast tumor classification using ultrasound (US) images, requiring no segmentation masks in the testing stage. METHODS: To fully exploit the potential relationships among regions, we design a fused network and two independent modules to extract and fuse features of multiple regions simultaneously. First, we introduce two enhanced combined-tumoral (EC) region modules, aiming to enhance the combined-tumoral features gradually. We then design a three-branch module for extracting and fusing the features of the intratumoral, peritumoral, and combined-tumoral regions. In particular, we design a novel fusion module that introduces a channel attention mechanism to adaptively fuse the features of the three regions. The model is evaluated on two public breast tumor ultrasound datasets, UDIAT and BUSI. Two independent groups of experiments are performed on the respective datasets using a fivefold stratified cross-validation strategy. Finally, we conduct ablation experiments on both datasets, with BUSI as the training set and UDIAT as the testing set. RESULTS: We conduct detailed ablation experiments on the two proposed modules and comparative experiments against other representative methods. The experimental results show that the proposed method yields state-of-the-art performance on both datasets. In particular, on the UDIAT dataset, the proposed FMRNet achieves an accuracy of 0.945 and a specificity of 0.945. Moreover, the precision (PRE = 0.909) improves by 21.6% on the BUSI dataset compared with the best existing method. CONCLUSION: The proposed FMRNet performs well in breast tumor classification with US images and proves its capability of exploiting and fusing information from multiple tumoral regions. Furthermore, FMRNet has potential value in classifying other types of cancer using multiple tumoral regions of other kinds of medical images.
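A channel-attention fusion of three region feature maps can be sketched in squeeze-and-excitation style: global-average-pool each map into channel descriptors, softmax across the three regions per channel, and take the weighted sum. The gate here is computed directly from the descriptors rather than learned, so this illustrates the fusion idea only, not FMRNet's actual module.

```python
import numpy as np

def channel_attention_fusion(intra, peri, combined):
    """Fuse three region feature maps of shape (C, H, W).

    Squeeze: global average pooling turns each map into a (C,)
    channel descriptor. Excite: a softmax across the three regions,
    per channel, yields adaptive mixing weights. The output is the
    per-channel weighted sum of the three maps.
    """
    maps = np.stack([intra, peri, combined])        # (3, C, H, W)
    desc = maps.mean(axis=(2, 3))                   # (3, C) squeezed
    gate = np.exp(desc) / np.exp(desc).sum(axis=0)  # softmax over regions
    return (gate[..., None, None] * maps).sum(axis=0)
```

When all three maps agree, the gate is uniform (1/3 each) and the fusion is a plain average; the attention only departs from averaging where the regions disagree, which is where the adaptive weighting earns its keep.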


Subject(s)
Breast Neoplasms , Breast , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Ultrasonography , Ultrasonography, Mammary
7.
Eur Radiol ; 31(8): 5924-5939, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33569620

ABSTRACT

OBJECTIVES: To develop and validate a multiparametric MRI-based radiomics nomogram for pretreatment prediction of axillary sentinel lymph node (SLN) burden in early-stage breast cancer. METHODS: A total of 230 women with early-stage invasive breast cancer were retrospectively analyzed. A radiomics signature was constructed from preoperative multiparametric MRI in the training dataset (n = 126) of center 1, then tested in the validation cohort (n = 42) from center 1 and an external test cohort (n = 62) from center 2. Multivariable logistic regression was applied to develop a radiomics nomogram incorporating the radiomics signature and predictive clinical and radiological features. The nomogram's performance was evaluated by its discrimination, calibration, and clinical usefulness and was compared with MRI-based descriptors of the primary breast tumor. RESULTS: The radiomics nomogram incorporating the radiomics signature and the MRI-determined axillary lymph node (ALN) burden showed good calibration and outperformed the MRI-determined ALN burden alone for predicting SLN burden (area under the curve [AUC]: 0.82 vs. 0.68 [p < 0.001] in the training cohort; 0.81 vs. 0.68 [p = 0.04] in the validation cohort; and 0.81 vs. 0.58 [p = 0.001] in the test cohort). Compared with the MRI-based combined breast tumor descriptors, the radiomics nomogram achieved a higher AUC in the test cohort (0.81 vs. 0.58, p = 0.005) and a comparable AUC in the training (0.82 vs. 0.73, p = 0.15) and validation (0.81 vs. 0.65, p = 0.31) cohorts. CONCLUSION: A multiparametric MRI-based radiomics nomogram can be used for preoperative prediction of SLN burden in early-stage breast cancer. KEY POINTS: • A radiomics nomogram incorporating the radiomics signature and the MRI-determined ALN burden outperforms the MRI-determined ALN burden alone for predicting SLN burden in early-stage breast cancer. • The radiomics nomogram may have better predictive ability than the MRI-based combined breast tumor descriptors. • The multiparametric MRI-based radiomics nomogram can serve as a non-invasive tool for preoperative prediction of SLN burden in patients with early-stage breast cancer.
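A radiomics nomogram of this kind is, numerically, a multivariable logistic model: each predictor contributes a weighted term to a linear score, and the sigmoid of that score is the predicted probability. The coefficients below are illustrative placeholders, not the values fitted in the study.

```python
import numpy as np

def nomogram_probability(radscore, aln_burden, coef=(0.8, 1.1), intercept=-1.5):
    """Predicted probability of high SLN burden from a two-predictor
    logistic model combining the radiomics signature score and the
    MRI-determined ALN burden. coef and intercept are hypothetical,
    shown only to make the arithmetic of a nomogram concrete."""
    z = intercept + coef[0] * radscore + coef[1] * aln_burden
    return 1.0 / (1.0 + np.exp(-z))
```

Reading a printed nomogram amounts to the same computation: each axis converts a predictor value into points (the weighted term), the points are summed, and the total is mapped through the sigmoid to a probability.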


Subject(s)
Breast Neoplasms , Multiparametric Magnetic Resonance Imaging , Sentinel Lymph Node , Breast Neoplasms/diagnostic imaging , Female , Humans , Lymph Nodes/diagnostic imaging , Lymphatic Metastasis , Nomograms , Retrospective Studies , Sentinel Lymph Node/diagnostic imaging
8.
Glia ; 57(7): 767-76, 2009 May.
Article in English | MEDLINE | ID: mdl-18985731

ABSTRACT

Although there is substantial information concerning the consequences of cerebral ischemia for neuronal function, relatively little is known about the functional responses of astrocytes, the predominant glial cell type in the central nervous system. In this study, we asked whether focal ischemia would affect astrocytic Ca(2+) signaling, a characteristic form of excitability in this cell type. In vivo Ca(2+) imaging of cortical astrocytes was performed using two-photon (2-P) microscopy during the acute phase of photothrombosis-induced ischemia, initiated by green-light illumination of circulating Rose Bengal. Although whisker-evoked potentials were reduced by over 90% within minutes of photothrombosis, astrocytes in the ischemic core remained structurally intact for a few hours. In vivo Ca(2+) imaging showed an increase in transient Ca(2+) signals in astrocytes within 20 min of ischemia. These Ca(2+) signals were synchronized and propagated as waves through the glial network. Pharmacological manipulations demonstrated that these Ca(2+) signals depended on activation of metabotropic glutamate receptor 5 (mGluR5) and the metabotropic gamma-aminobutyric acid receptor (GABA(B)R), but not on P2 purinergic or A1 adenosine receptors. Selective inhibition of Ca(2+) signaling in astrocytes with BAPTA significantly reduced the infarct volume, demonstrating that the enhanced astrocytic Ca(2+) signal contributes to neuronal damage, presumably through Ca(2+)-dependent release of glial glutamate. Because astrocytes perform multiple functions in close communication with neurons and the vasculature, the ischemia-induced increase in astrocytic Ca(2+) signaling may represent an initial attempt by these cells to communicate with neurons or to provide feedback regulation to the vasculature.


Subject(s)
Astrocytes/metabolism , Brain Ischemia/metabolism , Brain/metabolism , Calcium Signaling , Animals , Brain/drug effects , Brain Ischemia/chemically induced , Brain Ischemia/pathology , Calcium/metabolism , Calcium Signaling/drug effects , Cell Death/drug effects , Chelating Agents/pharmacology , Egtazic Acid/analogs & derivatives , Egtazic Acid/pharmacology , Evoked Potentials, Somatosensory , Light , Male , Mice , Neurons/metabolism , Neurons/pathology , Neuroprotective Agents/pharmacology , Receptor, Adenosine A1/metabolism , Receptor, Metabotropic Glutamate 5 , Receptors, GABA-B/metabolism , Receptors, Metabotropic Glutamate/metabolism , Receptors, Purinergic P2/metabolism , Rose Bengal , Vibrissae