Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-38466603

ABSTRACT

Analysis of 3-D texture is indispensable for various tasks, such as retrieval, segmentation, classification, and inspection of sculptures, knit fabrics, and biological tissues. A 3-D texture represents a locally repeated surface variation (SV) that is independent of the overall shape of the surface and can be determined from the local neighborhood and its characteristics. Existing methods mostly employ computer vision techniques that analyze a 3-D mesh globally, derive features, and then utilize them for classification or retrieval tasks. While several traditional and learning-based methods have been proposed in the literature, only a few have addressed 3-D texture analysis, and none so far has considered unsupervised schemes. This article proposes an original framework for the unsupervised segmentation of 3-D texture on the mesh manifold. The problem is approached as a binary surface segmentation task in which the mesh surface is partitioned into textured and nontextured regions without prior annotation. The proposed method comprises a mutual transformer-based system consisting of a label generator (LG) and a label cleaner (LC). Both models take geometric image representations of the surface mesh facets and label them as texture or nontexture using an iterative mutual learning scheme. Extensive experiments on three publicly available datasets with diverse texture patterns demonstrate that the proposed framework outperforms standard and state-of-the-art unsupervised techniques and performs reasonably well compared to supervised methods.
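The core of the method is the iterative mutual learning loop between the LG and LC. Below is a minimal PyTorch sketch of such a loop, written only to make the idea concrete: the FacetClassifier layout, optimizer settings, and pseudo-label exchange are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of an LG/LC mutual-learning loop. Module names and
# training details are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class FacetClassifier(nn.Module):
    """Transformer encoder that labels geometric-image facet features as texture/non-texture."""
    def __init__(self, dim=64, heads=4, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(dim, 2)  # texture vs. non-texture

    def forward(self, x):              # x: (batch, facets, dim)
        return self.head(self.encoder(x))

label_generator = FacetClassifier()    # LG: proposes pseudo-labels
label_cleaner = FacetClassifier()      # LC: refines them
opt_lg = torch.optim.Adam(label_generator.parameters(), lr=1e-4)
opt_lc = torch.optim.Adam(label_cleaner.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

def mutual_step(facet_features):
    # LG labels the facets; LC is trained to agree with (and clean) them.
    with torch.no_grad():
        pseudo = label_generator(facet_features).argmax(-1)
    lc_logits = label_cleaner(facet_features)
    loss_lc = ce(lc_logits.flatten(0, 1), pseudo.flatten())
    opt_lc.zero_grad(); loss_lc.backward(); opt_lc.step()

    # LC's cleaned labels supervise LG in turn, closing the mutual loop.
    with torch.no_grad():
        cleaned = label_cleaner(facet_features).argmax(-1)
    lg_logits = label_generator(facet_features)
    loss_lg = ce(lg_logits.flatten(0, 1), cleaned.flatten())
    opt_lg.zero_grad(); loss_lg.backward(); opt_lg.step()
```

In this sketch each model is alternately frozen to produce pseudo-labels that supervise the other, which is one common way to realize mutual learning without ground-truth annotations.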

2.
IEEE/ACM Trans Comput Biol Bioinform ; 20(4): 2420-2433, 2023.
Article in English | MEDLINE | ID: mdl-35849664

ABSTRACT

Multimodal medical images are widely used by clinicians and physicians to analyze and retrieve complementary information from high-resolution images in a non-invasive manner. Loss of image resolution adversely affects the overall performance of medical image interpretation. Deep learning-based single image super resolution (SISR) algorithms have revolutionized the overall diagnosis framework by continually improving the architectural components and training strategies of convolutional neural networks (CNN) applied to low-resolution images. However, existing work falls short in two ways: i) the SR output exhibits poor texture detail and often has blurred edges; ii) most models have been developed for a single modality and hence require modification to adapt to a new one. This work addresses (i) by proposing a generative adversarial network (GAN) with deep multi-attention modules to learn high-frequency information from low-frequency data. Existing GAN-based approaches have yielded good SR results; however, the texture details of their SR output have been experimentally confirmed to be deficient, particularly for medical images. The integration of the wavelet transform (WT) and GANs in our proposed SR model addresses this limitation concerning textons. While the WT divides the LR image into multiple frequency bands, the transferred GAN uses multi-attention and upsample blocks to predict the high-frequency components. Additionally, we present a learning method for training domain-specific classifiers as perceptual loss functions; combining the multi-attention GAN loss with this perceptual loss yields efficient and reliable performance. Applying the same model to medical images from diverse modalities is challenging; our work addresses (ii) by training and evaluating on several modalities via transfer learning. Using two medical datasets, we validate our proposed SR network against existing state-of-the-art approaches and achieve promising results in terms of the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR).
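The pipeline pairs a wavelet front end with a GAN trained under a combined loss. The sketch below, using PyWavelets and PyTorch, shows one plausible arrangement; the sub-band split, loss terms, and weights are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of the wavelet front end and a combined generator loss.
# Loss weights and feature extractors are illustrative assumptions.
import numpy as np
import pywt
import torch
import torch.nn.functional as F

def wavelet_bands(lr_image: np.ndarray):
    """Single-level 2-D DWT -> (LL, LH, HL, HH) frequency sub-bands."""
    ll, (lh, hl, hh) = pywt.dwt2(lr_image, "haar")
    return ll, lh, hl, hh

def generator_loss(disc_fake, sr_feats, hr_feats, sr, hr,
                   w_adv=1e-3, w_perc=1.0, w_pix=1.0):
    # Adversarial term: push the discriminator to accept SR outputs.
    adv = F.binary_cross_entropy_with_logits(disc_fake, torch.ones_like(disc_fake))
    # Perceptual term: feature distance under a domain-specific classifier.
    perc = F.l1_loss(sr_feats, hr_feats)
    # Pixel term: keeps low-frequency content anchored to the ground truth.
    pix = F.l1_loss(sr, hr)
    return w_adv * adv + w_perc * perc + w_pix * pix
```

The LL band carries the coarse structure while LH/HL/HH isolate the high-frequency detail that the multi-attention generator is asked to reconstruct.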

3.
JMIR Med Inform ; 9(8): e29433, 2021 Aug 02.
Article in English | MEDLINE | ID: mdl-34338648

ABSTRACT

BACKGROUND: Foodborne disease is a common threat to human health worldwide, leading to millions of deaths every year. Thus, accurate prediction of foodborne disease risk is urgent and of great importance for public health management. OBJECTIVE: We aimed to design a spatial-temporal risk prediction model suitable for predicting foodborne disease risks in various regions, to provide guidance for the prevention and control of foodborne diseases. METHODS: We designed a novel end-to-end framework to predict foodborne disease risk using a multigraph structural long short-term memory neural network, which uses an encoder-decoder to achieve multistep prediction. In particular, to capture multiple spatial correlations, we divided regions by administrative area and constructed adjacency graphs with metrics that included region proximity, historical data similarity, regional function similarity, and exposure-food similarity. We also integrated an attention mechanism in both the spatial and temporal dimensions, as well as external factors, to refine prediction accuracy. We validated our model with a long-term real-world foodborne disease data set comprising data from 2015 to 2019 from multiple provinces in China. RESULTS: Our model achieved F1 scores of 0.822, 0.679, 0.709, and 0.720 for single-month forecasts for the provinces of Beijing, Zhejiang, Shanxi, and Hebei, respectively, and its highest F1 score was 20% higher than the best result of the other models. The experimental results clearly demonstrate that our approach outperforms other state-of-the-art models by a margin. CONCLUSIONS: The spatial-temporal risk prediction model takes into account the spatial-temporal characteristics of foodborne disease data and accurately determines future spatial-temporal disease risks, thereby providing support for the prevention and risk assessment of foodborne disease.
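The multigraph construction is the part of the method most amenable to a short sketch. The following Python fragment shows one plausible way to build the four adjacency graphs named above; the similarity measures, k-nearest-neighbor sparsification, and input shapes are illustrative assumptions, not the authors' exact construction.

```python
# Illustrative construction of the four region graphs; the similarity
# functions and sparsification are assumptions, not the paper's method.
import numpy as np

def knn_graph(score: np.ndarray, k: int = 3) -> np.ndarray:
    """Keep each region's k strongest neighbors; returns a 0/1 adjacency."""
    adj = np.zeros_like(score)
    for i in range(score.shape[0]):
        nbrs = np.argsort(score[i])[::-1][:k + 1]  # +1 allows for self-match
        adj[i, nbrs] = 1.0
    np.fill_diagonal(adj, 0.0)                     # drop self-loops
    return adj

def build_multigraph(coords, history, functions, foods, k=3):
    # One adjacency per spatial correlation used by the model.
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    proximity = knn_graph(-dist, k)                  # nearer -> stronger
    hist_sim = knn_graph(np.corrcoef(history), k)    # similar case curves
    func_sim = knn_graph(functions @ functions.T, k) # regional function profile
    food_sim = knn_graph(foods @ foods.T, k)         # exposure-food profile
    return [proximity, hist_sim, func_sim, food_sim]
```

Each adjacency would then drive one graph branch of the multigraph LSTM encoder-decoder.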

4.
Curr Med Imaging ; 17(1): 64-72, 2021.
Article in English | MEDLINE | ID: mdl-32101132

ABSTRACT

BACKGROUND: The brain is the most complex organ of the human body, with millions of connections and activations. Electromagnetic signals are generated inside the brain when a mental or physical task is performed. These signals excite a group of neurons within a particular lobe, depending on the nature of the task. To localize this activity, machine learning (ML) techniques are used in conjunction with a neuroimaging technique (M/EEG, fMRI, or PET). Various ML techniques for brain source localization have been proposed in the literature; among them, the most common are minimum norm estimation (MNE), low-resolution brain electromagnetic tomography (LORETA), and Bayesian-framework-based multiple sparse priors (MSP). AIMS: In this research work, EEG is used as the neuroimaging technique. METHODS: EEG data are synthetically generated at an SNR of 5 dB. Afterwards, the ML techniques are applied to estimate the active sources. Each dataset is run for multiple trials (>40). Performance is analyzed using free energy and localization error as indicators. Furthermore, MSP is applied with a varying number of patches to observe their impact on source localization. RESULTS: We observe that with an increased number of patches, the sources are localized with greater precision and accuracy, as expressed in terms of free energy and localization error, respectively. CONCLUSION: Optimizing the number of patches within the Bayesian framework produces improved results in terms of free energy and localization error.
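Of the three techniques compared, MNE has a compact closed form, S_hat = L^T (L L^T + lambda*I)^(-1) Y, so it is easy to sketch end to end. The numpy fragment below simulates EEG at roughly 5 dB SNR, matching the study's noise level, and applies MNE; the leadfield, regularization constant, and source configuration are illustrative assumptions, not the study's simulation setup.

```python
# Minimal numpy sketch of minimum norm estimation (MNE) on synthetic EEG.
# Leadfield, regularizer, and source layout are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources, n_times = 64, 500, 100

L = rng.standard_normal((n_sensors, n_sources))  # leadfield (forward model)
S_true = np.zeros((n_sources, n_times))
S_true[42] = np.sin(np.linspace(0, 4 * np.pi, n_times))  # one active source

# Simulate sensor data at roughly SNR = 5 dB by scaling additive noise.
clean = L @ S_true
noise = rng.standard_normal(clean.shape)
noise *= np.linalg.norm(clean) / (np.linalg.norm(noise) * 10 ** (5 / 20))
Y = clean + noise

# MNE inverse: S_hat = L^T (L L^T + lambda*I)^(-1) Y
lam = 0.1 * np.trace(L @ L.T) / n_sensors
S_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), Y)

# Localization error here reduces to checking which source dominates.
peak = np.argmax(np.linalg.norm(S_hat, axis=1))
print(f"estimated most active source: {peak} (true: 42)")
```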


Subject(s)
Brain Mapping, Electroencephalography, Bayes Theorem, Brain/diagnostic imaging, Humans, Machine Learning