Results 1 - 6 of 6
1.
Leuk Res ; 122: 106950, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36152502

ABSTRACT

In biomedical image analysis, segmentation of cell nuclei from microscopic images is a highly challenging research problem. In computer-assisted health care systems, segmented microscopic cells are used by many biological researchers for the early prediction of various diseases. Multiple myeloma is one such disease, also termed plasma cell cancer. Segmentation of the nucleus and the cell is a critical step in multiple myeloma detection. In this work, we have designed two modules: one recognizes the nucleus of myeloma cells with a deep IEMD neural network, and the other differentiates the cell, i.e., the cytoplasm. The different IMFs provide detailed frequency components of an image, which are used for feature extraction and significantly improve performance. We also propose a new algorithm for counting myeloma-affected plasma cells; this algorithm for counting overgrown plasma cells within the myeloid tissue has been implemented using the Python TensorFlow framework. Experimental outcomes on the SegPC dataset substantiate that the proposed deep learning approach outperforms other competitive methods in myeloma recognition and detection. The results indicate that the proposed image segmentation mechanism can recognize multiple myeloma with superior accuracy. Early detection of multiple myeloma increases the chances of curing patients.
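The abstract does not specify the counting algorithm itself; a minimal sketch of cell counting over a binary nucleus mask is connected-component labeling with a small-area noise filter. The 4-connectivity choice and the `min_area` parameter here are illustrative assumptions, not the paper's method:

```python
from collections import deque

def count_cells(mask, min_area=1):
    """Count connected foreground components (4-connectivity) in a
    binary mask given as a list of 0/1 rows; components smaller than
    min_area pixels are treated as noise and ignored."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood-fill this component, measuring its area
                area, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    area += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count
```

In practice the mask would come from the segmentation network's output; any thresholded probability map in the same row-list format would work with this counter.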


Subject(s)
Multiple Myeloma , Humans , Multiple Myeloma/diagnostic imaging , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Algorithms , Cell Nucleus
2.
IEEE Trans Image Process ; 31: 2027-2039, 2022.
Article in English | MEDLINE | ID: mdl-35167450

ABSTRACT

Quality assessment of 3D-synthesized images has traditionally been based on detecting specific categories of distortion such as stretching, black holes, and blurring. However, such approaches have limitations in detecting all the distortions present in 3D-synthesized images, which affects their performance. This work proposes an algorithm to efficiently detect the distortions and subsequently evaluate the perceptual quality of 3D-synthesized images. The process of generating 3D-synthesized images produces a shift of a few pixels between the reference and the 3D-synthesized image, so the two are not properly aligned. To address this, we propose applying a morphological operation (opening) to the residual image to reduce the perceptually unimportant information between the reference and the distorted 3D-synthesized image. The residual image suppresses perceptually unimportant information and highlights the geometric distortions that significantly affect the overall quality of 3D-synthesized images. We use the information present in the residual image to quantify a perceptual quality measure and name this the Perceptually Unimportant Information Reduction (PU-IR) algorithm. At the same time, the residual image cannot capture minor structural and geometric distortions because of the erosion operation involved. To address this, we extract perceptually important deep features from the pre-trained VGG-16 architecture on the Laplacian pyramid. The distortions in 3D-synthesized images occur in patches, and the human visual system perceives even small levels of these distortions. With this in view, to compare these deep features between the reference and distorted images, we propose using cosine similarity and name this the Deep Feature extraction and comparison using Cosine Similarity (DF-CS) algorithm. Cosine similarity is based on the directional agreement of the deep features rather than the magnitude of their difference.
Finally, pooling is done to obtain the objective quality score by simply multiplying the outputs of the PU-IR and DF-CS algorithms. Our source code is available online: https://github.com/sadbhawnathakur/3D-Image-Quality-Assessment.
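As a rough illustration of the DF-CS comparison and the multiplicative pooling described above, assuming the deep features have been flattened into plain vectors (the VGG-16 extraction itself is not reproduced here):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity dot(a, b) / (|a| * |b|): measures directional
    agreement of two feature vectors, not the magnitude of their
    difference."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def pooled_quality(pu_ir_score, df_cs_score):
    """Pooling by simple multiplication of the two partial scores,
    as stated in the abstract."""
    return pu_ir_score * df_cs_score
```

Identical feature vectors give similarity 1.0 regardless of scale, which is the property motivating cosine similarity over a raw difference norm.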


Subject(s)
Algorithms , Imaging, Three-Dimensional , Humans
3.
IEEE Trans Image Process ; 31: 1737-1750, 2022.
Article in English | MEDLINE | ID: mdl-35100114

ABSTRACT

Existing Quality Assessment (QA) algorithms focus on identifying "black holes" to assess the perceptual quality of 3D-synthesized views. However, advancements in rendering and inpainting techniques have made black-hole artifacts nearly obsolete. Further, 3D-synthesized views frequently suffer from stretching artifacts caused by occlusion, which in turn affect perceptual quality. Existing QA algorithms are inefficient at identifying these artifacts, as seen from their performance on the IETR dataset. We found, empirically, that there is a relationship between the number of blocks with stretching artifacts in a view and its overall perceptual quality. Building on this observation, we propose a Convolutional Neural Network (CNN)-based algorithm that identifies the blocks with stretching artifacts and incorporates their count to predict the quality of 3D-synthesized views. To address the small size of the existing 3D-synthesized-views dataset, we collect images from other related datasets to increase the sample size and improve generalization while training the proposed CNN-based algorithm. The proposed algorithm identifies blocks with stretching distortions and subsequently fuses them to predict perceptual quality without a reference, achieving improved performance compared to existing no-reference QA algorithms that are not trained on the IETR dataset. The proposed algorithm can also identify blocks with stretching artifacts efficiently, which can further be used in downstream applications to improve the quality of 3D views. Our source code is available online: https://github.com/sadbhawnathakur/3D-Image-Quality-Assessment.
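The CNN itself is not given in the abstract; the surrounding block-counting logic it feeds can be sketched as below, with `is_stretched` standing in for the trained per-block classifier (any callable mapping a block to True/False):

```python
def stretched_block_count(image, block_size, is_stretched):
    """Slide a non-overlapping block grid over a 2-D image (given as
    a list of rows) and count the blocks flagged by the classifier.
    The count is the feature the abstract relates to overall
    perceptual quality."""
    h, w = len(image), len(image[0])
    count = 0
    for y in range(0, h - block_size + 1, block_size):
        for x in range(0, w - block_size + 1, block_size):
            block = [row[x:x + block_size]
                     for row in image[y:y + block_size]]
            if is_stretched(block):
                count += 1
    return count
```

A no-reference quality predictor would then map this count (possibly with other block-level features) to a quality score; that regression step is not sketched here.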

4.
IEEE J Transl Eng Health Med ; 7: 1800309, 2019.
Article in English | MEDLINE | ID: mdl-31281739

ABSTRACT

In this paper, a novel context-dependent, fuzzy-set-associated, statistical-model-based intensity inhomogeneity correction technique for magnetic resonance images (MRI) is proposed. The observed MRI is considered to be affected by intensity inhomogeneity, which is assumed to be a multiplicative quantity. In the proposed scheme, intensity inhomogeneity correction and MRI segmentation are treated as a joint task, and the maximum a posteriori probability (MAP) estimation principle is explored to solve this problem. A fuzzy-set-associated Gibbs Markov random field (MRF) is used to model the spatio-contextual information of an MRI. It is observed that the MAP estimate of the MRF model does not yield good results with any local searching strategy, as the search gets trapped in local optima. Hence, we exploit a variable neighborhood search (VNS)-based iterative global convergence criterion for MRF-MAP estimation. The effectiveness of the proposed scheme is established by testing it on different MRIs. Three performance evaluation measures are used to compare the proposed scheme against existing state-of-the-art techniques, and the simulation results establish its effectiveness.
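The multiplicative assumption stated above (observed intensity = true intensity x bias field) implies a simple elementwise correction once the bias field has been estimated; the estimation itself, which is the core of the paper, is not reproduced in this sketch:

```python
def correct_inhomogeneity(observed, bias):
    """Under the multiplicative model observed = true * bias, the
    corrected image is the elementwise quotient observed / bias.
    Both arguments are 2-D lists of floats of the same shape; bias
    values are assumed strictly positive."""
    return [[o / b for o, b in zip(obs_row, bias_row)]
            for obs_row, bias_row in zip(observed, bias)]
```

In the joint scheme described, the bias field and the segmentation would be refined together under the MAP criterion rather than corrected in one pass.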

5.
Magn Reson Imaging ; 34(9): 1292-1304, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27477599

ABSTRACT

In this article, a statistical-fusion-based segmentation technique is proposed to identify different abnormalities in magnetic resonance images (MRI). The proposed scheme follows seed selection, region growing and merging, and fusion of multiple image segments. In this process, an image is first divided into a number of blocks, and for each block we compute the phase component of the Fourier transform. The phase component of each block reflects the gray-level variation within the block but is highly correlated across blocks. Hence, a singular value decomposition (SVD) is applied to generate a singular value for each block. A thresholding procedure is then applied to these singular values to identify edgy and smooth regions, and seed points are selected for segmentation. For each seed point we perform a binary segmentation of the complete MRI, so with all seed points we obtain an equal number of binary images. A parcel-based statistical fusion process is used to fuse all the binary images into multiple segments. The effectiveness of the proposed scheme is tested on identifying different abnormalities: prostatic carcinoma, tuberculous granulomas, and intracranial neoplasm (brain tumor). The proposed technique is validated by comparing its results against seven state-of-the-art techniques using six performance evaluation measures.


Subject(s)
Brain Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Image Processing, Computer-Assisted/statistics & numerical data , Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/statistics & numerical data , Prostatic Neoplasms/diagnostic imaging , Tuberculoma/diagnostic imaging , Algorithms , Fourier Analysis , Humans , Male
6.
IEEE Trans Image Process ; 22(8): 3087-96, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23715521

ABSTRACT

In this paper, a spatio-contextual unsupervised change detection technique for multitemporal, multispectral remote sensing images is proposed. The technique uses a Gibbs Markov random field (GMRF) to model the spatial regularity between neighboring pixels of the multitemporal difference image. The difference image is generated by change vector analysis applied to images acquired over the same geographical area at different times. The change detection problem is solved using the maximum a posteriori probability (MAP) estimation principle. The MAP estimator of the GMRF used to model the difference image is exponential in nature; thus a modified Hopfield-type neural network (HTNN) is exploited for estimating the MAP. In the considered Hopfield-type network, a single neuron is assigned to each pixel of the difference image and is assumed to be connected only to its neighbors. Initial values of the neurons are set by histogram thresholding, and an expectation-maximization algorithm is used to estimate the GMRF model parameters. Experiments are carried out on three multispectral, multitemporal remote sensing images. Results of the proposed change detection scheme are compared with those of a manual trial-and-error technique, an automatic change detection scheme based on the GMRF model and the iterated conditional mode algorithm, a context-sensitive change detection scheme based on an HTNN, and a scheme based on the GMRF model and a graph-cut algorithm. The comparison points out that the proposed method provides more accurate change detection maps than the other methods.
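Change vector analysis, as used above to build the difference image, reduces to a per-pixel Euclidean norm of the spectral change vector across bands; a minimal sketch assuming co-registered images given as band-major lists of 2-D arrays:

```python
import math

def change_vector_magnitude(img_t1, img_t2):
    """Difference image by change vector analysis: at each pixel,
    the Euclidean norm of the vector of per-band intensity changes
    between the two acquisition dates. Both inputs are lists of
    equally sized 2-D lists, one per spectral band."""
    bands = len(img_t1)
    h, w = len(img_t1[0]), len(img_t1[0][0])
    return [[math.sqrt(sum((img_t2[b][y][x] - img_t1[b][y][x]) ** 2
                           for b in range(bands)))
             for x in range(w)]
            for y in range(h)]
```

The GMRF/HTNN machinery of the paper then labels this magnitude image into changed and unchanged pixels; that stage is not sketched here.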


Subject(s)
Algorithms , Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Pattern Recognition, Automated/methods , Remote Sensing Technology/methods , Subtraction Technique , Data Interpretation, Statistical , Image Enhancement/methods , Markov Chains , Reproducibility of Results , Sensitivity and Specificity , Systems Integration