Results 1 - 15 of 15
1.
Sci Total Environ ; 904: 166673, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-37659539

ABSTRACT

In this study, we explored the impact of RDS particle size on the migration dynamics of RDS and naphthalene through rigorous wash-off experiments. The results showed that smaller RDS particles had higher mobility in stormwater runoff; in contrast, RDS particles larger than 150 µm showed migration ratios below 2 %, suggesting that naphthalene adsorbed on larger RDS primarily migrated in dissolved form. Furthermore, we investigated the migration behaviors of RDS and naphthalene under varied conditions, including rainfall intensity, rainfall duration, and naphthalene concentration. Greater rainfall intensity promoted naphthalene release from RDS, while long rainfall duration (≥10 min) impeded the migration velocities of RDS and naphthalene (≤2.91 %/5 min for RDS and ≤3.32 %/5 min for the corresponding naphthalene). Additionally, higher naphthalene concentrations in RDS diminished the migration ratios of dissolved naphthalene. Notably, the maximum uptake of naphthalene on RDS was 6.02 mg/g according to the Langmuir adsorption isotherm. The adsorption of naphthalene on RDS is governed primarily by physical adsorption, as demonstrated by successive desorption experiments, which showed a desorption rate of up to 87.32 %. Advanced characterizations, including XPS, FTIR, and Raman spectra, further confirmed the physical nature of the adsorption process. These findings may aid understanding of the migration behavior of other pollutants in urban surface particulates.
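The reported maximum uptake comes straight out of the Langmuir model, which can be sketched in a few lines. The affinity constant `k` below is a hypothetical placeholder (the abstract reports only the 6.02 mg/g maximum):

```python
def langmuir_uptake(c, q_max=6.02, k=1.0):
    """Langmuir isotherm: equilibrium uptake q (mg/g) at equilibrium
    concentration c (mg/L). q_max is the reported maximum uptake of
    naphthalene on RDS; k is a hypothetical affinity constant."""
    return q_max * k * c / (1.0 + k * c)
```

The uptake rises roughly linearly at low concentration and saturates at `q_max` as the adsorption sites fill, which is the signature of monolayer physical adsorption.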

2.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 14337-14352, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37738203

ABSTRACT

Continual learning (CL) aims to learn from a non-stationary data distribution without forgetting previous knowledge. The effectiveness of existing approaches that rely on memory replay can decrease over time, as the model tends to overfit the stored examples; as a result, the model's ability to generalize is significantly constrained. Additionally, these methods often overlook the inherent uncertainty in the memory data distribution, which differs significantly from the distribution of all previous data examples. To overcome these issues, we propose a principled memory evolution framework that dynamically adjusts the memory data distribution. The evolution is achieved with distributionally robust optimization (DRO), which makes the memory buffer increasingly difficult to memorize. We consider two types of constraints in DRO: f-divergence and Wasserstein ball constraints. For the f-divergence constraint, we derive a family of methods that evolve the memory buffer data in the continuous probability measure space via Wasserstein gradient flow (WGF). For the Wasserstein ball constraint, we solve it directly in Euclidean space. Extensive experiments on existing benchmarks demonstrate the effectiveness of the proposed methods for alleviating forgetting. As a by-product of the proposed framework, our method is more robust to adversarial examples than the compared CL methods.
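For intuition, the Wasserstein-ball variant can be pictured as projected gradient ascent in Euclidean space: each stored example is pushed toward higher loss, then projected back into a ball around its original location. This is an illustrative sketch, not the paper's implementation; the loss, radius, and step sizes are all assumptions:

```python
import numpy as np

def evolve_memory(x0, grad_loss, radius=0.5, lr=0.1, steps=20):
    """Make a stored example harder to memorize: gradient ascent on
    the loss, projected back into an L2 ball of the given radius
    around the original memory point (a Wasserstein-ball constraint
    handled directly in Euclidean space)."""
    x = x0.copy()
    for _ in range(steps):
        x = x + lr * grad_loss(x)        # ascend the loss
        delta = x - x0
        norm = np.linalg.norm(delta)
        if norm > radius:                # project back into the ball
            x = x0 + delta * (radius / norm)
    return x
```

With a toy quadratic loss centered at a model parameter `w`, the evolved point moves away from `w` (higher loss) while staying within `radius` of its original position.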

3.
Int J Mol Sci ; 24(12)2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37373208

ABSTRACT

The kidney contains numerous mitochondria in proximal tubular cells that provide energy for tubular secretion and reabsorption. Mitochondrial injury and the consequent excessive production of reactive oxygen species (ROS) can cause tubular damage and play a major role in the pathogenesis of kidney diseases, including diabetic nephropathy. Accordingly, bioactive compounds that protect renal tubular mitochondria from ROS are desirable. Here, we report 3,5-dihydroxy-4-methoxybenzyl alcohol (DHMBA), isolated from the Pacific oyster (Crassostrea gigas), as a potentially useful compound. In human renal tubular HK-2 cells, DHMBA significantly mitigated the cytotoxicity induced by the ROS inducer L-buthionine-(S,R)-sulfoximine (BSO). DHMBA reduced mitochondrial ROS production and subsequently regulated mitochondrial homeostasis, including mitochondrial biogenesis, fusion/fission balance, and mitophagy; DHMBA also enhanced mitochondrial respiration in BSO-treated cells. These findings highlight the potential of DHMBA to protect renal tubular mitochondrial function against oxidative stress.


Subject(s)
Antioxidants , Crassostrea , Animals , Humans , Antioxidants/pharmacology , Antioxidants/metabolism , Reactive Oxygen Species/metabolism , Oxidative Stress , Kidney Tubules , Ethanol/metabolism , Mitochondria/metabolism
4.
Data Augment Label Imperfections (2022) ; 13567: 112-122, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36383493

ABSTRACT

This paper aims to identify uncommon cardiothoracic diseases and patterns on chest X-ray images. Training a machine learning model to classify rare diseases with multi-label indications is challenging without sufficient labeled training samples. Our model leverages information from common diseases and adapts to perform on less common mentions. We propose multi-label few-shot learning (FSL) schemes including a neighborhood component analysis loss, generation of additional samples via distribution calibration, and fine-tuning based on a multi-label classification loss. We exploit the fact that widely adopted nearest-neighbor-based FSL schemes such as ProtoNet induce Voronoi diagrams in feature space. In our method, the Voronoi diagrams in feature space generated by the multi-label schemes are combined into our geometric DeepVoro Multi-label ensemble, whose improved performance on multi-label few-shot classification is demonstrated in our experiments. The code is publicly available at https://github.com/Saurabh7/Few-shot-learning-multilabel-cxray.
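The Voronoi view is concrete: a prototype classifier assigns each query to its nearest class prototype, which partitions feature space into Voronoi cells, one per class. A minimal ProtoNet-style sketch (the toy features and labels are made up for illustration):

```python
import numpy as np

def prototypes(features, labels):
    """Class prototypes: the mean embedding of each class's support examples."""
    classes = sorted(set(labels))
    protos = np.array([features[np.array(labels) == c].mean(axis=0)
                       for c in classes])
    return classes, protos

def classify(query, classes, protos):
    """Nearest-prototype assignment -- equivalently, the Voronoi cell
    of feature space that the query falls into."""
    d = np.linalg.norm(protos - query, axis=1)
    return classes[int(np.argmin(d))]
```

The decision boundaries are the perpendicular bisectors between prototypes, which is exactly the Voronoi-diagram structure the ensemble builds on.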

5.
Molecules ; 27(9)2022 May 08.
Article in English | MEDLINE | ID: mdl-35566372

ABSTRACT

Humans are exposed to numerous compounds daily, some of which have adverse effects on health. Computational approaches for modeling toxicological data in conjunction with machine learning algorithms have gained popularity over the last few years. Machine learning approaches have been used to predict toxicity-related biological activities from chemical structure descriptors; however, toxicity-related proteomic features have not been fully investigated. In this study, we construct a computational pipeline using machine learning models to predict the protein features most responsible for the toxicity of compounds taken from the Tox21 dataset; the pipeline is implemented within the multiscale Computational Analysis of Novel Drug Opportunities (CANDO) therapeutic discovery platform. Tox21 is a highly imbalanced dataset consisting of twelve in vitro assays, seven from the nuclear receptor (NR) signaling pathway and five from the stress response (SR) pathway, covering more than 10,000 compounds. For the machine learning model, we employed a random forest combined with the Synthetic Minority Oversampling Technique and Edited Nearest Neighbor method (SMOTE+ENN), a resampling approach that balances the activity class distribution. Within the NR and SR pathways, the aryl hydrocarbon receptor (NR-AhR) and mitochondrial membrane potential (SR-MMP) activities were two of the top-performing of the twelve toxicity endpoints, with AUCROCs of 0.90 and 0.92, respectively. The top extracted features for evaluating compound toxicity were analyzed for enrichment to highlight the implicated biological pathways and proteins. We validated our enrichment results for AhR activity with a thorough literature search; this case study showed that the selected enriched pathways and proteins from our computational pipeline are not only correlated with AhR toxicity but also form a cascading upstream/downstream arrangement. Our work elucidates significant relationships between protein-compound interactions computed using CANDO and the associated biological pathways to which the proteins belong for twelve toxicity endpoints. This study uses machine learning not only to predict and understand toxicity but also to elucidate therapeutic mechanisms at a proteomic level for a variety of toxicity endpoints.
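The SMOTE half of the resampling step can be sketched in a few lines: each synthetic minority example is interpolated between a minority point and one of its minority-class nearest neighbors (the ENN cleaning step, which removes samples misclassified by their neighbors, is omitted here). An illustrative sketch, not the imbalanced-learn implementation:

```python
import numpy as np

def smote_sample(minority, rng):
    """Draw one synthetic minority example: pick a random minority
    point, find its nearest minority-class neighbour, and interpolate
    uniformly between the two."""
    i = rng.integers(len(minority))
    x = minority[i]
    d = np.linalg.norm(minority - x, axis=1)
    d[i] = np.inf                      # exclude the point itself
    nn = minority[int(np.argmin(d))]
    lam = rng.random()                 # interpolation factor in [0, 1)
    return x + lam * (nn - x)
```

Because synthetic points lie on segments between real minority examples, they densify the minority region without duplicating samples exactly, which is what lets the random forest see a balanced class distribution.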


Subject(s)
Machine Learning , Proteomics , Algorithms , Drug Discovery/methods , Humans , Proteins
6.
IEEE Trans Med Imaging ; 40(9): 2343-2353, 2021 09.
Article in English | MEDLINE | ID: mdl-33939610

ABSTRACT

The important cues for realistic lung nodule synthesis include diversity in shape and background, controllability of semantic feature levels, and overall CT image quality. To incorporate these cues as multiple learning targets, we introduce a Multi-Target Co-Guided Adversarial Mechanism that utilizes foreground and background masks to guide the nodule shape and lung tissues, and takes advantage of the CT lung and mediastinal windows as the guidance for spiculation and texture control, respectively. Further, we propose a Multi-Target Co-Guided Synthesizing Network with a joint loss function to realize the co-guidance of image generation and semantic feature learning. The proposed network contains a Mask-Guided Generative Adversarial Sub-Network (MGGAN) and a Window-Guided Semantic Learning Sub-Network (WGSLN). The MGGAN generates the initial synthesis from the combined foreground and background masks, guiding the generation of the nodule shape and background tissues. Meanwhile, the WGSLN controls the semantic features and refines the synthesis quality by transforming the initial synthesis into the CT lung and mediastinal windows and performing spiculation and texture learning simultaneously. We validated our method using a quantitative analysis of authenticity under the Fréchet Inception Score, and the results show its state-of-the-art performance. We also evaluated our method for data augmentation in malignancy-level prediction on the LIDC-IDRI database, where it improved the accuracy of VGG-16 by 5.6%. The experimental results confirm the effectiveness of the proposed method.
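The mask guidance can be illustrated by its simplest special case: compositing a synthesized nodule into background lung tissue under a binary foreground mask, with the background mask preserving the surrounding tissue. A toy sketch (plain arrays stand in for CT patches; the actual sub-network is adversarially trained, which is not shown):

```python
import numpy as np

def mask_composite(nodule, tissue, mask):
    """Foreground mask guides where the synthesized nodule appears;
    the background mask (1 - mask) preserves surrounding lung tissue."""
    return mask * nodule + (1.0 - mask) * tissue
```

In the full network the generator learns this separation implicitly rather than by hard compositing, so nodule boundaries blend with the background instead of being pasted in.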


Subject(s)
Lung Neoplasms , Databases, Factual , Humans , Lung/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Tomography, X-Ray Computed
7.
Mach Learn Med Imaging ; 12966: 110-119, 2021 Sep.
Article in English | MEDLINE | ID: mdl-35647616

ABSTRACT

Self-supervised learning provides an opportunity to explore unlabeled chest X-rays and their associated free-text reports accumulated in clinical routine without manual supervision. This paper proposes a Joint Image Text Representation Learning Network (JoImTeRNet) for pre-training on chest X-ray images and their radiology reports. The model is pre-trained for visual-textual matching at both the global image-sentence level and the local image region-word level, with both levels bidirectionally constrained by cross-entropy-based and ranking-based triplet matching losses. Region-word matching is computed with an attention mechanism, without direct supervision of the mapping. The pre-trained multi-modal representation learning paves the way for downstream tasks involving image and/or text encoding. We demonstrate the quality of the learned representations through cross-modality retrieval and multi-label classification on two datasets: OpenI-IU and MIMIC-CXR. Our code is available at https://github.com/mshaikh2/JoImTeR_MLMI_2021.
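The ranking-based triplet matching loss has a standard hinge form: an image embedding (anchor) should be closer to its matching report (positive) than to a mismatched report (negative) by at least a margin. A minimal sketch with Euclidean distance; the margin value is an assumption, and the real model applies this bidirectionally in both embedding spaces:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge ranking loss: zero once the matching pair is closer
    than the mismatched pair by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, margin + d_pos - d_neg)
```

When the negative is already far enough away, the loss vanishes and the triplet contributes no gradient, so training focuses on hard mismatches.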

8.
IEEE Trans Med Imaging ; 40(12): 3969, 2021 12.
Article in English | MEDLINE | ID: mdl-34982669

ABSTRACT

In the above article [1], Figs. 5 and 6 were published incorrectly. The correct figures are given below.

9.
Article in English | MEDLINE | ID: mdl-29623248

ABSTRACT

Interstitial lung diseases (ILD) involve several abnormal imaging patterns observable in computed tomography (CT) images. Accurate classification of these patterns plays a significant role in precise clinical decision making about the extent and nature of the disease, and is therefore important for developing automated pulmonary computer-aided detection systems. Conventionally, this task relies on experts' manual identification of regions of interest (ROIs) as a prerequisite to diagnosing potential diseases. This protocol is time consuming and inhibits fully automatic assessment. In this paper, we present a new method to classify ILD imaging patterns on CT images; the main difference is that the proposed algorithm uses the entire image as a holistic input. By circumventing the prerequisite of manually input ROIs, our problem set-up is significantly more difficult than previous work, but better addresses the clinical workflow. Qualitative and quantitative results using a publicly available ILD database demonstrate state-of-the-art classification accuracy under the patch-based classification setting and show the potential of predicting the ILD type from holistic images.

10.
Med Image Anal ; 46: 229-243, 2018 05.
Article in English | MEDLINE | ID: mdl-29627687

ABSTRACT

Segmentation, denoising, and partial volume correction (PVC) are three major processes in the quantification of uptake regions in post-reconstruction PET images. These problems are conventionally addressed in independent steps. In this study, we hypothesize that the three processes are dependent; therefore, solving them jointly can provide optimal support for quantification of PET images. To achieve this, we utilize interactions among the processes when designing solutions for each challenge. We also demonstrate that segmentation can help denoising and PVC by locally constraining the smoothness and correction criteria. For denoising, we adapt the generalized Anscombe transformation to Gaussianize the multiplicative noise, followed by a new adaptive smoothing algorithm called regional mean denoising. For PVC, we propose a volume-consistency-based iterative voxel-based correction algorithm in which denoised and delineated PET images precisely guide the correction process during each iteration. For PET image segmentation, we use an affinity propagation (AP)-based iterative clustering method that facilitates the integration of the PVC and denoising algorithms into the delineation process. Qualitative and quantitative results obtained from phantom, clinical, and pre-clinical data show that the proposed framework provides an improved, joint solution for segmentation, denoising, and partial volume correction.
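The Gaussianizing step rests on the Anscombe transform, which maps approximately Poisson-distributed intensities to values with near-constant unit variance so that Gaussian denoisers apply; the paper uses the generalized form for mixed Poisson-Gaussian noise. A sketch of the classic Poisson version and its algebraic inverse (the exact unbiased inverse used in practice is more involved):

```python
import math

def anscombe(x):
    """Variance-stabilizing transform for Poisson noise: after the
    transform, the noise variance is approximately constant (~1)."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Algebraic inverse of the Anscombe transform
    (biased at low counts; unbiased inverses exist)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

Denoising is then performed in the transformed domain, and the result is mapped back with the inverse transform.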


Subject(s)
Image Enhancement/methods , Positron-Emission Tomography/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Animals , Artifacts , Humans , Magnetic Resonance Imaging , Neoplasms/diagnostic imaging , Phantoms, Imaging , Positron Emission Tomography Computed Tomography/methods , Rabbits , Radiopharmaceuticals , Reproducibility of Results , Sensitivity and Specificity , Signal-To-Noise Ratio , Tuberculosis, Pulmonary/diagnostic imaging
11.
Tomography ; 3(2): 114-122, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28856247

ABSTRACT

We present a new image quality assessment method for determining whether reducing radiation dose impairs the quality of computed tomography (CT) images in qualitative and quantitative clinical analysis tasks. In this Institutional Review Board-exempt study, we reviewed 50 patients (male, 22; female, 28) who underwent reduced-dose CT scanning at the first follow-up after standard-dose multiphase CT scanning. Scans were for surveillance of von Hippel-Lindau disease (N = 26) and renal cell carcinoma (N = 10). We investigated density, morphometric, and structural differences between scans at both the tissue level (fat, bone) and the organ level (liver, heart, spleen, lung). To quantify structural variations caused by image quality differences, we propose using the following metrics: Dice similarity coefficient, structural similarity index, Hausdorff distance, gradient magnitude similarity deviation, and weighted spectral distance. The Pearson correlation coefficient and Welch two-sample t-test were used for quantitative comparison of organ morphometry and for comparing tissue density distributions, respectively. For qualitative evaluation, the two-sided Kendall tau test was used to assess agreement among readers. Both qualitative and quantitative evaluations were designed to examine the significance of image differences for clinical tasks. Qualitative judgment served as an overall assessment, whereas detailed quantification of structural consistency, intensity homogeneity, and texture similarity revealed more accurate and global difference estimates. Qualitative and quantitative results indicated no significant image quality degradation. Our study concludes that low(er)-dose CT scans can be used routinely, as they show no significant loss of quantitative image information compared with standard-dose CT scans.
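Of the structural metrics listed, the Dice similarity coefficient is the simplest: twice the overlap of two segmentation masks divided by their combined size. A minimal sketch on binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two binary masks:
    2*|A intersect B| / (|A| + |B|); 1.0 means identical masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:          # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A Dice score near 1 between segmentations of the standard-dose and reduced-dose scans indicates that organ structure is preserved despite the dose reduction.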

12.
IEEE Trans Med Imaging ; 35(5): 1285-98, 2016 05.
Article in English | MEDLINE | ID: mdl-26886976

ABSTRACT

Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet remains a challenge in the medical imaging domain. There are currently three major techniques for successfully applying CNNs to medical image classification: training a CNN from scratch, using off-the-shelf features from a pre-trained CNN, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important but previously understudied factors in applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures; the studied models contain 5 thousand to 160 million parameters and vary in their numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet models (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems: thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection and report the first five-fold cross-validation classification results for predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis, and insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.
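The off-the-shelf-features strategy can be sketched at toy scale: keep a pre-trained backbone frozen and train only a new task head on its extracted features. A hypothetical numpy sketch (a linear head fit by gradient descent on squared error; real CADe fine-tuning updates some or all CNN layers in a deep-learning framework):

```python
import numpy as np

def finetune_head(W_backbone, X, y, lr=0.1, steps=200):
    """Transfer-learning sketch: frozen ReLU backbone features,
    with a new linear head trained from scratch on the target task."""
    feats = np.maximum(X @ W_backbone, 0.0)   # backbone weights stay fixed
    w = np.zeros(feats.shape[1])
    for _ in range(steps):
        grad = feats.T @ (feats @ w - y) / len(X)
        w -= lr * grad                        # only the head is updated
    return w
```

Freezing the backbone is the cheapest point on the fine-tuning spectrum; the paper's experiments compare it against updating deeper layers as well.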


Subject(s)
Diagnosis, Computer-Assisted/methods , Neural Networks, Computer , Databases, Factual , Humans , Image Interpretation, Computer-Assisted , Lung Diseases, Interstitial/diagnostic imaging , Lymph Nodes/diagnostic imaging , Reproducibility of Results
13.
Inf Process Med Imaging ; 23: 184-95, 2013.
Article in English | MEDLINE | ID: mdl-24683968

ABSTRACT

We introduce a novel algorithm for segmenting high-resolution CT images of the left ventricle (LV), particularly the papillary muscles and the trabeculae. High-quality segmentations of these structures are necessary to better understand the anatomical function and geometrical properties of the LV. These fine structures, however, are extremely challenging to capture due to their delicate and complex geometry and topology. Our algorithm computes the potentially missing topological structures of a given initial segmentation. Using techniques from computational topology, e.g., persistent homology, the algorithm finds topological handles that are likely to be true signal. To further increase accuracy, these proposals are scored by the saliency and confidence of a trained classifier. Handles with high scores are restored in the final segmentation, leading to high-quality segmentations of these complex structures.
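The core computation can be illustrated in one dimension: in 0-dimensional sublevel-set persistence, each local minimum births a connected component, and when two components merge at a saddle the younger one dies, yielding a (birth, death) pair whose persistence scores how salient the feature is. An illustrative union-find sketch (the paper works with handles, i.e. 1-dimensional topology, in 3D):

```python
def persistence_pairs_1d(f):
    """0-dimensional sublevel-set persistence of a 1D function:
    returns finite (birth, death) pairs; the global minimum's
    component never dies and produces no pair."""
    n = len(f)
    order = sorted(range(n), key=lambda i: f[i])
    parent = [-1] * n          # -1 marks a point not yet added
    birth = [None] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    pairs = []
    for i in order:            # sweep values from low to high
        parent[i] = i
        birth[i] = f[i]
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] != -1:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # merging kills the younger (higher-birth) component
                    young, old = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                    if birth[young] < f[i]:  # skip zero-persistence pairs
                        pairs.append((birth[young], f[i]))
                    parent[young] = old
    return pairs
```

Handles with high persistence (death minus birth) are the ones likely to be true anatomical signal rather than noise, which is exactly the filtering the classifier then refines.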


Subject(s)
Heart Ventricles/diagnostic imaging , Imaging, Three-Dimensional/methods , Papillary Muscles/diagnostic imaging , Pattern Recognition, Automated/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Subtraction Technique , Tomography, X-Ray Computed/methods , Algorithms , Humans , Radiographic Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
14.
Med Image Comput Comput Assist Interv ; 15(Pt 2): 387-94, 2012.
Article in English | MEDLINE | ID: mdl-23286072

ABSTRACT

Image segmentation plays a crucial role in many medical imaging applications by automatically locating the regions of interest. Supervised-learning-based segmentation methods typically require a large set of accurately labeled training data, but the labeling process is tedious, time consuming, and sometimes unnecessary. We propose a robust logistic regression algorithm that handles label outliers, so that doctors do not need to spend time precisely labeling images for the training set. To validate its effectiveness and efficiency, we conducted carefully designed experiments on cervigram image segmentation in the presence of label outliers. Experimental results show that the proposed robust logistic regression algorithms achieve superior performance compared with previous methods, which validates their benefits.
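The idea can be sketched with a simple trimming variant: at each gradient step, drop the fraction of samples with the largest loss, since mislabeled examples tend to incur the largest losses once the model starts fitting the clean majority. This is an illustrative stand-in, not the paper's algorithm; the `trim` fraction, learning rate, and toy data are all assumptions:

```python
import numpy as np

def robust_logreg(X, y, trim=0.1, lr=0.5, steps=200):
    """Logistic regression that, at each step, ignores the `trim`
    fraction of samples with the largest loss -- a simple trimming
    strategy for tolerating label outliers."""
    w = np.zeros(X.shape[1])
    k = int(len(X) * (1 - trim))          # samples kept per step
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        keep = np.argsort(loss)[:k]       # drop the highest-loss samples
        grad = X[keep].T @ (p[keep] - y[keep]) / k
        w -= lr * grad
    return w
```

On a 1D toy set with one flipped label, the trimmed fit still recovers the correct decision boundary, whereas plain logistic regression is dragged toward the outlier.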


Subject(s)
Algorithms , Cervix Uteri/pathology , Documentation/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Female , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
15.
Med Image Comput Comput Assist Interv ; 14(Pt 1): 468-75, 2011.
Article in English | MEDLINE | ID: mdl-22003651

ABSTRACT

In this paper, we present a method to simulate and visualize blood flow through the human heart, using the reconstructed 4D motion of the endocardial surface of the left ventricle as boundary conditions. The reconstruction captures the motion of the full 3D surfaces of the complex features, such as the papillary muscles and the ventricular trabeculae. We use visualizations of the flow field to view the interactions between the blood and the trabeculae in far more detail than has been achieved previously, which promises to give a better understanding of cardiac flow. Finally, we use our simulation results to compare the blood flow within one healthy heart and two diseased hearts.


Subject(s)
Image Processing, Computer-Assisted/methods , Myocardium/pathology , Tomography, X-Ray Computed/methods , Algorithms , Blood Flow Velocity , Computer Simulation , Diastole , Heart/diagnostic imaging , Heart/physiology , Heart Ventricles/pathology , Humans , Imaging, Three-Dimensional/methods , Reproducibility of Results , Systole