Results 1 - 14 of 14
1.
Chemosphere ; 352: 141374, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38342144

ABSTRACT

Despite the widespread occurrence of regolith-hosted rare earth elements (REEs) across South China, the spatial distribution of REEs in soils and the factors that control it remain largely uncertain. This knowledge gap impedes the exploration of regolith-hosted REE deposits and the assessment of the environmental risks associated with REEs. To address this issue, 180 soil samples were collected from Meizhou City, Guangdong Province, a region known for its high abundance of regolith-hosted REEs. The correlations between REE enrichment/fractionation and various factors, namely topography, climate conditions, land use, and landform, were then analysed using the geo-detector method. The results revealed a highly uneven spatial distribution of REEs and their fractionation features, with some regions displaying distinct spatial patterns. Elevation was the dominant factor influencing this distribution and showed strong correlations with the concentrations of REEs, light REEs (LREEs) and heavy REEs (HREEs); the LREE/HREE ratio; and the positive Ce anomaly (δCe). The negative Eu anomaly (δEu) correlated well with rock type. The enrichment and fractionation of REEs indicated a coupling among the abovementioned factors. For REE enrichment, areas with elevations of 138-148 m, precipitation levels of 1553-1574 mm, annual average land surface temperatures of 30.4-30.5 °C, leaf area index values of 22-29 and surface cutting degrees of 21.5-29.9 m showed the highest average abundance within each type (scope) of the predominant factors. These findings highlight the key factors affecting REE distribution, thereby aiding the efficient utilization of regolith-hosted REE resources and the evaluation of their environmental risks.
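The geo-detector analysis mentioned above rests on a simple variance decomposition: a factor's explanatory power q is one minus the ratio of within-stratum variance to total variance. A minimal sketch, assuming the standard q-statistic formulation (the function and toy data below are illustrative, not the paper's):

```python
import numpy as np

def geodetector_q(values, strata):
    """Geo-detector q-statistic: the share of the variance of `values`
    (e.g. a soil REE concentration) explained by a categorical factor
    `strata` (e.g. an elevation class): q = 1 - SSW / SST."""
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    ssw = sum(len(g) * g.var()                 # within-stratum variance
              for h in np.unique(strata)
              for g in [values[strata == h]])
    return 1.0 - ssw / (len(values) * values.var())

# Toy data: concentration differs sharply between two elevation classes,
# so the factor explains almost all of the variance (q near 1).
q = geodetector_q([10, 11, 9, 30, 31, 29], ["low"] * 3 + ["high"] * 3)
```

A q of 0 means the factor explains none of the spatial variance; a q near 1, as here, means stratifying by that factor captures almost all of it.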


Subject(s)
Metals, Rare Earth , Soil Pollutants , Metals, Rare Earth/analysis , Soil , Soil Pollutants/analysis , China , Plant Leaves/chemistry
2.
Article in English | MEDLINE | ID: mdl-37995168

ABSTRACT

Inspired by the diversity of biological neurons, quadratic artificial neurons can play an important role in deep learning models. The type of quadratic neuron of interest here replaces the inner-product operation in the conventional neuron with a quadratic function. Despite promising results so far achieved by networks of quadratic neurons, important issues remain unaddressed. Theoretically, the superior expressivity of a quadratic network over either a conventional network or a conventional network with quadratic activation has not been fully elucidated, which leaves the use of quadratic networks insufficiently grounded. In practice, although a quadratic network can be trained via generic backpropagation, it is subject to a higher risk of collapse than its conventional counterpart. To address these issues, we first apply spline theory and a measure from algebraic geometry to derive two theorems demonstrating that a quadratic network has better model expressivity than a conventional network with or without quadratic activation. Then, we propose an effective training strategy, referred to as referenced linear initialization (ReLinear), to stabilize the training of a quadratic network and thereby unleash its full potential. Comprehensive experiments on popular datasets support our findings and confirm the performance of quadratic deep learning. Our code is available at https://github.com/FengleiFan/ReLinear.
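The quadratic neuron and the ReLinear idea can be sketched in a few lines. This assumes a commonly used quadratic form, (wr·x + br)(wg·x + bg) + wb·(x∘x) + c; the key point of referenced linear initialization is that starting the quadratic branches inert makes the neuron initially identical to a conventional linear neuron. A hedged sketch, not the authors' released code (see the repository above for that):

```python
import numpy as np

rng = np.random.default_rng(0)

def quadratic_neuron(x, wr, br, wg, bg, wb, c):
    # Assumed quadratic form: (wr.x + br) * (wg.x + bg) + wb.(x*x) + c
    return (wr @ x + br) * (wg @ x + bg) + wb @ (x * x) + c

d = 4
x = rng.standard_normal(d)
wr, br = rng.standard_normal(d), 0.1

# ReLinear-style initialization: the quadratic branches start out inert
# (wg = 0, bg = 1, wb = 0, c = 0), so the neuron initially computes
# exactly wr.x + br; training then grows the quadratic terms gradually.
wg, bg, wb, c = np.zeros(d), 1.0, np.zeros(d), 0.0

assert np.isclose(quadratic_neuron(x, wr, br, wg, bg, wb, c), wr @ x + br)
```

Training would then update the quadratic parameters with a smaller learning rate than the linear ones, which is what stabilizes the network against collapse.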

3.
Article in English | MEDLINE | ID: mdl-37279121

ABSTRACT

The Retinex model is one of the most representative and effective methods for low-light image enhancement. However, it does not explicitly tackle noise and therefore yields unsatisfactory enhancement results. In recent years, deep learning models have been widely used in low-light image enhancement owing to their excellent performance, but these methods have two limitations. First, desirable performance can only be achieved when a large amount of labeled data is available, and it is not easy to curate massive low-/normal-light paired data. Second, deep learning models are notoriously black boxes; it is difficult to explain their inner workings and understand their behaviors. In this article, using a sequential Retinex decomposition strategy, we design a plug-and-play framework based on Retinex theory for simultaneous image enhancement and noise removal. We then integrate a convolutional neural network (CNN)-based denoiser into the proposed framework to generate the reflectance component. The final image is enhanced by integrating the illumination and reflectance with gamma correction. The proposed plug-and-play framework facilitates both post hoc and ad hoc interpretability. Extensive experiments on different datasets demonstrate that our framework outperforms state-of-the-art methods in both image enhancement and denoising.
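The recomposition step described above (gamma-corrected illumination multiplied by reflectance) can be illustrated with a crude, non-learned stand-in: a local mean as the illumination estimate. Everything here is a simplified assumption for illustration, not the paper's CNN-based framework:

```python
import numpy as np

def enhance(img, gamma=2.2, eps=1e-6):
    """Toy Retinex-style enhancement of a [0,1] grayscale image: estimate
    illumination L with a 3x3 box blur, take reflectance R = S / L, apply
    gamma correction to L only, and recombine as L**(1/gamma) * R."""
    p = np.pad(img, 1, mode="edge")
    illum = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
    illum = np.clip(illum, eps, 1.0)   # keep the division well-defined
    reflect = img / illum
    return np.clip(illum ** (1.0 / gamma) * reflect, 0.0, 1.0)

out = enhance(np.full((4, 4), 0.1))   # a uniformly dark image is brightened
```

In the paper's framework the crude blur is replaced by a sequential decomposition and the reflectance is produced by a plug-and-play denoiser, but the final gamma-corrected recombination follows the same pattern.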

4.
IEEE Trans Med Imaging ; 42(6): 1590-1602, 2023 06.
Article in English | MEDLINE | ID: mdl-37015446

ABSTRACT

Image denoising is a prerequisite for downstream tasks in many fields. Low-dose and photon-counting computed tomography (CT) denoising can optimize diagnostic performance at a minimized radiation dose. Supervised deep denoising methods are popular but require paired clean and noisy samples that are often unavailable in practice. Limited by the independent-noise assumption, current self-supervised denoising methods cannot process the correlated noise found in CT images. Here we propose a first-of-its-kind similarity-based self-supervised deep denoising approach, referred to as Noise2Sim, that works in a nonlocal and nonlinear fashion to suppress not only independent but also correlated noise. Theoretically, Noise2Sim is asymptotically equivalent to supervised learning methods under mild conditions. Experimentally, Noise2Sim recovers intrinsic features from noisy low-dose CT and photon-counting CT images as effectively as, or even better than, supervised learning methods on practical datasets, visually, quantitatively and statistically. Noise2Sim is a general self-supervised denoising approach with great potential in diverse applications.
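The core Noise2Sim idea, as described above, is to train on pairs of similar noisy samples rather than clean/noisy pairs: if two patches share an underlying signal but carry different noise realizations, regressing one onto the other approximates supervision by the clean signal. A hedged sketch of the pair-construction step only (nearest neighbor by L2 distance; the actual method's similarity search and training loop are more involved):

```python
import numpy as np

def build_similar_pairs(patches):
    """For each noisy patch, pick its most similar *other* patch and use
    the pair as (input, target) for training a denoiser."""
    P = np.asarray(patches, dtype=float).reshape(len(patches), -1)
    d = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)  # pairwise squared L2
    np.fill_diagonal(d, np.inf)                         # exclude self-pairs
    nn = d.argmin(axis=1)                               # nearest other patch
    return P, P[nn]                                     # inputs, targets

# Two near-identical patches pair with each other; the outlier pairs with
# whichever patch is least dissimilar.
inputs, targets = build_similar_pairs([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
```

A denoising network trained to map `inputs` to `targets` never sees clean data, which is what makes the approach self-supervised.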


Subject(s)
Deep Learning , Signal-To-Noise Ratio , Tomography, X-Ray Computed/methods , Photons , Image Processing, Computer-Assisted/methods
5.
Phys Med Biol ; 68(6)2023 03 15.
Article in English | MEDLINE | ID: mdl-36854190

ABSTRACT

Objective. Low-dose computed tomography (LDCT) denoising is an important problem in CT research. Compared with normal-dose CT, LDCT images suffer from severe noise and artifacts. In many recent studies, vision transformers have shown feature representation ability superior to that of convolutional neural networks (CNNs). However, unlike for CNNs, the potential of vision transformers in LDCT denoising has so far been little explored. This paper aims to further explore the power of transformers for the LDCT denoising problem. Approach. We propose a Convolution-free Token2Token Dilated Vision Transformer (CTformer) for LDCT denoising. The CTformer uses a more powerful token rearrangement to encompass local contextual information and thus avoids convolution. It also dilates and shifts feature maps to capture longer-range interactions. We interpret the CTformer by statically inspecting patterns of its internal attention maps and dynamically tracing the hierarchical attention flow with an explanatory graph. Furthermore, an overlapped inference mechanism is employed to effectively eliminate the boundary artifacts that are common in encoder-decoder denoising models. Main results. Experimental results on the Mayo dataset suggest that the CTformer outperforms state-of-the-art denoising methods with low computational overhead. Significance. The proposed model delivers excellent denoising performance on LDCT. Moreover, its low computational cost and interpretability make the CTformer promising for clinical applications.
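The overlapped inference mechanism mentioned above can be sketched generically: run the model on overlapping tiles and average the overlapping predictions, so the seams that non-overlapping tiling leaves at tile boundaries are suppressed. This is a generic sliding-window sketch, not the CTformer implementation:

```python
import numpy as np

def overlapped_inference(img, model, tile=8, stride=4):
    """Run `model` on overlapping tiles of `img`, accumulating predictions
    and per-pixel counts, then average where tiles overlap."""
    H, W = img.shape
    out = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for i in range(0, H - tile + 1, stride):
        for j in range(0, W - tile + 1, stride):
            out[i:i + tile, j:j + tile] += model(img[i:i + tile, j:j + tile])
            cnt[i:i + tile, j:j + tile] += 1.0
    return out / cnt   # assumes tile/stride evenly cover the image

# With an identity "denoiser" the reconstruction is exact, confirming the
# tiling and averaging bookkeeping.
img = np.arange(256.0).reshape(16, 16)
recon = overlapped_inference(img, lambda t: t)
```

In practice `model` would be the trained denoiser, and the stride/tile sizes trade off seam suppression against inference cost.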


Subject(s)
Neural Networks, Computer , Tomography, X-Ray Computed , Artifacts , Image Processing, Computer-Assisted/methods , Signal-To-Noise Ratio , Tomography, X-Ray Computed/methods , Tomography, X-Ray Computed/standards , Humans
6.
Article in English | MEDLINE | ID: mdl-35786562

ABSTRACT

Recent years have witnessed an increasing interest in the correspondence between infinitely wide networks and Gaussian processes. Despite the effectiveness and elegance of the current neural network Gaussian process theory, to the best of our knowledge, all the neural network Gaussian processes (NNGPs) are essentially induced by increasing width. However, in the era of deep learning, what concerns us more regarding a neural network is its depth as well as how depth impacts the behaviors of a network. Inspired by a width-depth symmetry consideration, we use a shortcut network to show that increasing the depth of a neural network can also give rise to a Gaussian process, which is a valuable addition to the existing theory and contributes to revealing the true picture of deep learning. Beyond the proposed Gaussian process by depth, we theoretically characterize its uniform tightness property and the smallest eigenvalue of the Gaussian process kernel. These characterizations can not only enhance our understanding of the proposed depth-induced Gaussian process but also pave the way for future applications. Lastly, we examine the performance of the proposed Gaussian process by regression experiments on two benchmark datasets.

7.
IEEE Trans Radiat Plasma Med Sci ; 6(6): 656-666, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35865007

ABSTRACT

Deep neural network-based methods have achieved promising results for CT metal artifact reduction (MAR), most of which use many synthesized paired images for supervised learning. As synthesized metal artifacts in CT images may not accurately reflect their clinical counterparts, an artifact disentanglement network (ADN) was proposed to learn directly from unpaired clinical images, producing promising results on clinical datasets. However, because the discriminator can only judge whether large regions semantically look artifact-free or artifact-affected, it is difficult for ADN to recover small structural details of artifact-affected CT images based on adversarial losses alone, without sufficient constraints. To overcome the ill-posedness of this problem, here we propose a low-dimensional manifold (LDM) constrained disentanglement network (DN), leveraging the image characteristic that the patch manifold of CT images is generally low-dimensional. Specifically, we design an LDM-DN learning algorithm to empower the disentanglement network by optimizing the synergistic loss functions used in ADN while constraining the recovered images to lie on a low-dimensional patch manifold. Moreover, learning from both paired and unpaired data, an efficient hybrid optimization scheme is proposed to further improve MAR performance on clinical datasets. Extensive experiments demonstrate that the proposed LDM-DN approach consistently improves MAR performance in paired and/or unpaired learning settings, outperforming competing methods on synthesized and clinical datasets.

8.
IEEE Trans Radiat Plasma Med Sci ; 5(6): 741-760, 2021 Nov.
Article in English | MEDLINE | ID: mdl-35573928

ABSTRACT

Deep learning, as represented by deep neural networks (DNNs), has recently achieved great success in many important areas dealing with text, images, videos, graphs, and so on. However, the black-box nature of DNNs has become one of the primary obstacles to their wide adoption in mission-critical applications such as medical diagnosis and therapy. Because of the huge potential of deep learning, increasing the interpretability of deep neural networks has attracted much research attention. In this paper, we propose a simple but comprehensive taxonomy for interpretability, systematically review recent studies on improving the interpretability of neural networks, describe applications of interpretability in medicine, and discuss possible future research directions, such as those relating to fuzzy logic and brain science.

9.
Neural Netw ; 124: 383-392, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32062373

ABSTRACT

Recently, deep learning has achieved huge successes in many important applications. In our previous studies, we proposed quadratic/second-order neurons and deep quadratic neural networks. In a quadratic neuron, the inner product between a data vector and the corresponding weights in a conventional neuron is replaced with a quadratic function. The resultant quadratic neuron enjoys an enhanced expressive capability over the conventional neuron. However, how quadratic neurons improve the expressive capability of a deep quadratic network has not yet been studied, ideally in relation to that of a conventional neural network. Specifically, we ask four basic questions in this paper: (1) For the one-hidden-layer network structure, is there any function that a quadratic network can approximate much more efficiently than a conventional network? (2) For the same multi-layer network structure, is there any function that can be expressed by a quadratic network but cannot be expressed by conventional neurons in the same structure? (3) Does a quadratic network give new insight into universal approximation? (4) To approximate the same class of functions with the same error bound, could a quantized quadratic network need fewer weights than a quantized conventional network? Our main contributions are four interconnected theorems shedding light upon these questions and demonstrating the merits of a quadratic network in terms of expressive efficiency, unique representational capability, compact architecture and computational capacity, respectively.


Subject(s)
Deep Learning
10.
IEEE Trans Med Imaging ; 39(6): 2035-2050, 2020 06.
Article in English | MEDLINE | ID: mdl-31902758

ABSTRACT

Inspired by the complexity and diversity of biological neurons, our group proposed quadratic neurons by replacing the inner product in current artificial neurons with a quadratic operation on the input data, thereby enhancing the capability of individual neurons. Along this direction, we are motivated to evaluate the power of quadratic neurons in popular network architectures, simulating human-like learning in the form of "quadratic-neuron-based deep learning". Our prior theoretical studies have shown important merits of quadratic neurons and networks in representation, efficiency, and interpretability. In this paper, we use quadratic neurons to construct an encoder-decoder structure, referred to as the quadratic autoencoder, and apply it to low-dose CT denoising. Experimental results on the Mayo low-dose CT dataset demonstrate the utility and robustness of the quadratic autoencoder in terms of image denoising and model efficiency. To the best of our knowledge, this is the first time that a deep learning approach has been implemented with a new type of neuron, demonstrating significant potential in the medical imaging field.


Subject(s)
Neurons , Tomography, X-Ray Computed , Humans
11.
Int J Numer Method Biomed Eng ; 34(5): e2956, 2018 05.
Article in English | MEDLINE | ID: mdl-29277960

ABSTRACT

The artificial neural network is a popular framework in machine learning. To empower individual neurons, we recently suggested that the current type of neuron could be upgraded to a second-order counterpart, in which the linear operation between the inputs to a neuron and the associated weights is replaced with a nonlinear quadratic operation. A single second-order neuron already has strong nonlinear modeling ability, for example implementing basic fuzzy logic operations. In this paper, we develop a general backpropagation algorithm to train networks consisting of second-order neurons. Numerical studies are performed to verify the generalized backpropagation algorithm.
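The generalized backpropagation idea can be illustrated on the smallest possible case: a single 1-D second-order neuron y = a·x² + w·x + b trained by gradient descent, where the extra quadratic weight lets one neuron fit y = x² exactly, which no single first-order (linear) neuron can represent. This toy is illustrative only; the paper derives the general algorithm for full networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit y = x^2 with one second-order neuron a*x^2 + w*x + b, applying the
# chain rule exactly as a generalized backpropagation step would.
x = rng.uniform(-1.0, 1.0, 64)
t = x ** 2                                # target function
a, w, b, lr = 0.0, 0.0, 0.0, 0.3
for _ in range(2000):
    y = a * x * x + w * x + b
    g = 2.0 * (y - t) / len(x)            # dL/dy of the mean squared error
    a -= lr * float((g * x * x).sum())    # dL/da: gradient through x^2 term
    w -= lr * float((g * x).sum())        # dL/dw: gradient through linear term
    b -= lr * float(g.sum())              # dL/db: gradient through bias
mse = float(((a * x * x + w * x + b - t) ** 2).mean())
```

The loss drives (a, w, b) to essentially (1, 0, 0), recovering the target exactly; a first-order neuron would plateau at a nonzero error on this task.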


Subject(s)
Neural Networks, Computer , Algorithms , Fuzzy Logic , Machine Learning
12.
Article in English | MEDLINE | ID: mdl-28749579

ABSTRACT

In machine learning, the artificial neural network is the mainstream approach. Such a network consists of many neurons of the same type, characterized by two features: (1) an inner product between an input vector and a matching weight vector of trainable parameters, and (2) a nonlinear excitation function. Here, we investigate the possibility of replacing the inner product with a quadratic function of the input vector, thereby upgrading the first-order neuron to a second-order neuron, empowering individual neurons and facilitating the optimization of neural networks. Numerical examples are provided to illustrate the feasibility and merits of second-order neurons, and further topics are discussed.


Subject(s)
Machine Learning , Neural Networks, Computer , Blood Pressure/physiology , Fuzzy Logic , Humans , Oxygen Consumption
13.
Sensors (Basel) ; 12(2): 1846-62, 2012.
Article in English | MEDLINE | ID: mdl-22438741

ABSTRACT

Impervious surface area (ISA) is considered an indicator of environmental change and is an important input parameter for hydrological cycle simulation, water management and area pollution assessment. The Pearl River Delta (PRD), the third most important economic district of China, is chosen in this paper to extract ISA information from Landsat images of 1998, 2003 and 2008 using a linear spectral unmixing method, and to monitor impervious surface change by analyzing the multi-temporal Landsat-derived fractional impervious surface. The results were as follows: (1) the ISA in the PRD increased by 79.09% from 1998 to 2003 and by 26.88% from 2003 to 2008; (2) the spatial distribution of ISA was described according to the 1998 and 2003 percentages: most of the medium- and high-percentage ISA was located in the northwest and southeast of the delta, with medium-percentage ISA mainly in city interiors and high-percentage ISA mainly in the suburbs around cities; (3) the direction and trend of high-percentage ISA expansion were examined to understand urban change in the delta. High-percentage ISA moved from the inner cities to the urban fringe during 1998-2003, and during 2003-2008 it moved, through a mixture of leapfrog and gradual expansion, to suburban areas far from the urban cores. The analysis of expansion directions showed that high-percentage ISA moved outward from the centre line of the Pearl River across the delta, while high-percentage ISA on both shores of the Pearl River Estuary moved toward the river; (4) combining ISA change with social conditions, the driving forces were analyzed in detail. ISA change was clearly and closely related to the economic development of the region over the past ten years. Contemporaneous major sport events (the 16th Asian Games in Guangzhou and the 26th Summer Universiade in Shenzhen) and government policies also promoted the growth of ISA, while topographic features such as the National Nature Reserves of China restricted and shaped its expansion. Overall, this paper extracted ISA for a major region of the PRD; the temporal and spatial analyses of PRD ISA demonstrate the drastic changes occurring in developed areas of China. These results are important and valuable for land use management, ecological protection and policy making.
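The linear spectral unmixing step can be sketched as a constrained least-squares problem: each pixel's spectrum is modeled as a non-negative, sum-to-one combination of endmember spectra, and the impervious fraction is read off the solution. The endmember spectra below are hypothetical, and the weighted sum-to-one row is one common way to impose the constraint, not necessarily the paper's exact solver:

```python
import numpy as np

def unmix(pixel, endmembers, weight=100.0):
    """Solve pixel ≈ E @ f for fractions f, with a heavily weighted
    sum-to-one constraint row appended to the system, then clip any
    small negative fractions and renormalize."""
    E = np.asarray(endmembers, dtype=float).T            # bands x endmembers
    A = np.vstack([E, weight * np.ones(E.shape[1])])     # constraint row
    y = np.append(np.asarray(pixel, dtype=float), weight)
    f, *_ = np.linalg.lstsq(A, y, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()

# Toy 3-band example with hypothetical impervious / vegetation / water
# endmember spectra; the pixel is a 50/50 impervious-vegetation mixture.
ems = [[0.40, 0.50, 0.60],    # impervious
       [0.10, 0.60, 0.20],    # vegetation
       [0.05, 0.05, 0.02]]    # water
mix = 0.5 * np.array(ems[0]) + 0.5 * np.array(ems[1])
frac = unmix(mix, ems)
```

Applying this per pixel to a Landsat scene yields the fractional impervious surface maps whose differences across 1998, 2003 and 2008 the paper analyzes.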


Subject(s)
Algorithms , Geographic Information Systems , Image Interpretation, Computer-Assisted/methods , Remote Sensing Technology/methods , Rivers/chemistry , Spacecraft , China
14.
Environ Monit Assess ; 137(1-3): 127-47, 2008 Feb.
Article in English | MEDLINE | ID: mdl-17564805

ABSTRACT

Land use/land cover (LULC) has a profound impact on the economy, society and environment, especially in rapidly developing areas, so timely monitoring and prediction of LULC change are crucial. Currently, the integration of Geographical Information System (GIS) and Remote Sensing (RS) methods is one of the most important approaches for detecting LULC change; it includes image processing (such as geometric rectification and supervised classification), change detection (post-classification comparison), GIS-based spatial analysis, and Markov chain and cellular automata (CA) models. The core corridor of the Pearl River Delta (PRD) was selected for studying LULC change in this paper because the area contributed 78.31% (1998) to 81.4% (2003) of the Gross Domestic Product (GDP) of the whole PRD. The temporal and spatial LULC changes from 1998 to 2003 were detected from RS data. Urban expansion over the next 5 and 10 years was then predicted temporally with a Markov chain model and spatially with a simple cellular automata model. Finally, urban expansion and farmland loss were discussed against the background of China's urban expansion and cropland loss during 1990-2000. The results showed: (1) urban area expanded by 8.91% during 1998-2003, from 169,078.32 to 184,146.48 ha; (2) farmland decreased by 5.94%, from 312,069.06 to 293,539.95 ha; (3) much farmland was converted to urban or development area, while forest and grassland were correspondingly converted to farmland; (4) the spatial prediction showed that the urban area will be further enlarged, with expansion directed along the existing urban area and transportation lines.
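The Markov-chain part of the prediction can be sketched as repeated multiplication of the current land-cover shares by a transition matrix: one matrix power per 5-year step, matching the 5- and 10-year horizons above. The classes and probabilities below are hypothetical, and the CA model, which allocates the predicted change spatially, is omitted:

```python
import numpy as np

classes = ["urban", "farmland", "forest"]          # hypothetical classes
P = np.array([[0.98, 0.01, 0.01],   # urban rarely converts back
              [0.10, 0.85, 0.05],   # farmland -> urban is the main loss
              [0.02, 0.08, 0.90]])  # some forest converts to farmland
state = np.array([0.25, 0.45, 0.30])               # shares in the base year

five_years = state @ np.linalg.matrix_power(P, 1)  # one 5-year step
ten_years  = state @ np.linalg.matrix_power(P, 2)  # two 5-year steps
```

In practice the transition matrix is estimated from the two observed classified maps (here 1998 and 2003) by cross-tabulating class changes, and the projected shares then constrain how much area the CA model allocates to each class.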


Subject(s)
Agriculture , Ecosystem , Environmental Monitoring , China , Geographic Information Systems , Markov Chains , Photography , Rivers , Spacecraft