1.
Diagn Pathol ; 17(1): 38, 2022 Apr 19.
Article in English | MEDLINE | ID: mdl-35436941

ABSTRACT

BACKGROUND: Nuclei classification, segmentation, and detection from pathological images are challenging tasks due to cellular heterogeneity in Whole Slide Images (WSI). METHODS: In this work, we propose advanced DCNN models for nuclei classification, segmentation, and detection tasks. The Densely Connected Neural Network (DCNN) and Densely Connected Recurrent Convolutional Network (DCRN) models are applied for the nuclei classification tasks. The Recurrent Residual U-Net (R2U-Net) and an R2U-Net-based regression model named the University of Dayton Net (UD-Net) are applied for the nuclei segmentation and detection tasks, respectively. The experiments are conducted on publicly available datasets: the Routine Colon Cancer (RCC) dataset for classification and detection, and the Nuclei Segmentation Challenge 2018 dataset for segmentation. The experimental results were evaluated with five-fold cross-validation, and the average testing results are compared against existing approaches in terms of precision, recall, Dice Coefficient (DC), Mean Squared Error (MSE), F1-score, and overall testing accuracy at both the pixel and cell level. RESULTS: The results demonstrate around 2.6% and 1.7% higher F1-scores for the nuclei classification and detection tasks, respectively, when compared to a recently published DCNN-based method. For nuclei segmentation, the R2U-Net achieves around 91.90% average testing accuracy in terms of DC, which is around 1.54% higher than the U-Net model. CONCLUSION: The proposed methods demonstrate robustness, with better quantitative and qualitative results across three different tasks for analyzing WSI.
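The Dice Coefficient (DC) used above to score segmentation masks is straightforward to compute. A minimal sketch on binary NumPy masks (the array values below are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Coefficient: 2*|A ∩ B| / (|A| + |B|) on binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])  # predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])  # ground-truth mask
print(round(dice_coefficient(a, b), 3))  # → 0.667
```

A DC of 1.0 means perfect overlap; the small `eps` term guards against division by zero when both masks are empty.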


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Cell Nucleus , Humans , Image Processing, Computer-Assisted/methods
2.
Diagnostics (Basel) ; 12(1)2022 Jan 05.
Article in English | MEDLINE | ID: mdl-35054287

ABSTRACT

Diabetes and high blood pressure are the primary causes of Chronic Kidney Disease (CKD). Glomerular Filtration Rate (GFR) and kidney damage markers are used by researchers around the world to identify CKD as a condition that leads to reduced renal function over time. A person with CKD has a higher risk of premature death. Diagnosing the different diseases linked to CKD early enough to prevent the disease is a difficult task for doctors. This research presents a novel deep learning model for the early detection and prediction of CKD, with the objective of creating a deep neural network and comparing its performance to that of other contemporary machine learning techniques. In the experiments, all missing values in the database were replaced with the average of the associated features. After that, the neural network's optimal parameters were fixed by running multiple trials. The most important features were selected by Recursive Feature Elimination (RFE): Hemoglobin, Specific Gravity, Serum Creatinine, Red Blood Cell Count, Albumin, Packed Cell Volume, and Hypertension were found to be the key features. The selected features were passed to machine learning models for classification. The proposed deep neural model outperformed the other five classifiers (Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Logistic Regression, Random Forest, and Naive Bayes) by achieving 100% accuracy. The proposed approach could be a useful tool for nephrologists in detecting CKD.
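Recursive Feature Elimination, as used above, repeatedly fits a model and discards the weakest feature until the desired number remains. A minimal sketch using a plain least-squares linear model as the ranking estimator; the paper's actual estimator and the CKD data are not reproduced here, so the synthetic data below is purely illustrative:

```python
import numpy as np

def rfe(X, y, n_keep):
    """Recursive Feature Elimination sketch: repeatedly fit a
    least-squares linear model and drop the feature with the
    smallest |coefficient| until n_keep features remain."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep.pop(int(np.argmin(np.abs(w))))  # eliminate weakest feature
    return keep

# Synthetic data: only features 2 and 5 actually drive the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 2] - 2.0 * X[:, 5] + 0.1 * rng.normal(size=200)

sel = rfe(X, y, 2)
print(sorted(sel))  # → [2, 5]
```

In practice features should be on comparable scales before ranking by coefficient magnitude (here they are all standard normal, so no scaling is needed).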

3.
Opt Express ; 28(25): 38419-38443, 2020 Dec 07.
Article in English | MEDLINE | ID: mdl-33379654

ABSTRACT

Division-of-focal-plane (DoFP), or integrated microgrid, polarimeters typically consist of a 2 × 2 mosaic of linear polarization filters overlaid upon a focal plane array sensor, and obtain temporally synchronized polarized intensity measurements across a scene, similar in concept to a Bayer color filter array camera. However, the resulting estimated polarimetric images suffer a loss in resolution and can be plagued by aliasing due to the spatially modulated microgrid measurement strategy. Demosaicing strategies have been proposed that attempt to minimize these effects, but they leave some level of residual artifacts. In this work we propose a conditional generative adversarial network (cGAN) approach to the microgrid demosaicing problem. We evaluate the performance of our approach against full-resolution division-of-time polarimeter data, and compare it against both traditional and recent microgrid demosaicing methods. We apply these demosaicing strategies to both real and simulated visible microgrid imagery and provide an objective criterion for evaluating their performance. We demonstrate that the proposed cGAN approach yields estimated Stokes imagery that is comparable to full-resolution ground-truth imagery from both a quantitative and a qualitative perspective.
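The resolution loss described above follows directly from the 2 × 2 mosaic: each polarization channel occupies only a quarter of the sensor's pixels. A small sketch of splitting a microgrid frame into its channels; the 0°/45°/135°/90° layout assumed below is illustrative, as actual layouts vary by sensor:

```python
import numpy as np

def split_microgrid(img):
    """Split a DoFP microgrid image into its four polarization
    channels. Assumed 2x2 layout: 0° and 45° on the first row,
    135° and 90° on the second (an assumption; layouts vary)."""
    return {
        "0":   img[0::2, 0::2],
        "45":  img[0::2, 1::2],
        "135": img[1::2, 0::2],
        "90":  img[1::2, 1::2],
    }

img = np.arange(16.0).reshape(4, 4)  # toy 4x4 sensor frame
ch = split_microgrid(img)
print(ch["0"].shape)  # each channel has half the resolution per axis: (2, 2)
```

Demosaicing methods (and the cGAN above) try to recover full-resolution estimates of all four channels from this spatially subsampled measurement.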

4.
J Med Imaging (Bellingham) ; 6(1): 014006, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30944843

ABSTRACT

Deep learning (DL)-based semantic segmentation methods have provided state-of-the-art performance in the past few years. More specifically, these techniques have been successfully applied in medical image classification, segmentation, and detection tasks. One DL technique, U-Net, has become one of the most popular for these applications. We propose a recurrent U-Net model and a recurrent residual U-Net model, named RU-Net and R2U-Net, respectively. The proposed models utilize the power of U-Net, residual networks, and recurrent convolutional neural networks. There are several advantages to using these architectures for segmentation tasks. First, a residual unit helps when training deep architectures. Second, feature accumulation with recurrent residual convolutional layers ensures better feature representation for segmentation tasks. Third, they allow us to design better U-Net architectures with the same number of network parameters and better performance for medical image segmentation. The proposed models are tested on three benchmark tasks: blood vessel segmentation in retinal images, skin cancer segmentation, and lung lesion segmentation. The experimental results show superior performance on segmentation tasks compared to equivalent models, including the fully convolutional encoder-decoder network SegNet, U-Net, and residual U-Net.
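The recurrent residual idea can be sketched in one dimension: the same convolution is applied recurrently, accumulating features over a few time steps, and the block adds its input back through a residual skip. This toy substitutes a 1-D convolution for the model's 2-D conv layers and illustrates only the block structure, not the authors' implementation:

```python
import numpy as np

def conv1d(x, k):
    # 'same'-size 1-D convolution standing in for a conv layer
    return np.convolve(x, k, mode="same")

def recurrent_residual_unit(x, k, steps=2):
    """Sketch of an R2U-Net-style building block: the convolution is
    applied recurrently over `steps` iterations, accumulating features
    in the hidden state h, and the output adds the input back
    (residual connection)."""
    h = np.zeros_like(x)
    for _ in range(steps):
        h = np.maximum(conv1d(x + h, k), 0.0)  # ReLU(conv(input + state))
    return x + h  # residual skip connection

x = np.linspace(-1.0, 1.0, 8)          # toy feature vector
k = np.array([0.25, 0.5, 0.25])        # toy smoothing kernel
y = recurrent_residual_unit(x, k)
print(y.shape)  # → (8,)
```

Because the hidden state is ReLU-activated, the accumulated features are nonnegative, and the residual output never falls below the input, which makes the skip connection easy to see in this toy.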

5.
J Digit Imaging ; 32(4): 605-617, 2019 08.
Article in English | MEDLINE | ID: mdl-30756265

ABSTRACT

The Deep Convolutional Neural Network (DCNN) is one of the most powerful and successful deep learning approaches. DCNNs have already provided superior performance in different modalities of medical imaging including breast cancer classification, segmentation, and detection. Breast cancer is one of the most common and dangerous cancers impacting women worldwide. In this paper, we have proposed a method for breast cancer classification with the Inception Recurrent Residual Convolutional Neural Network (IRRCNN) model. The IRRCNN is a powerful DCNN model that combines the strength of the Inception Network (Inception-v4), the Residual Network (ResNet), and the Recurrent Convolutional Neural Network (RCNN). The IRRCNN shows superior performance against equivalent Inception Networks, Residual Networks, and RCNNs for object recognition tasks. In this paper, the IRRCNN approach is applied for breast cancer classification on two publicly available datasets including BreakHis and Breast Cancer (BC) classification challenge 2015. The experimental results are compared against the existing machine learning and deep learning-based approaches with respect to image-based, patch-based, image-level, and patient-level classification. The IRRCNN model provides superior classification performance in terms of sensitivity, area under the curve (AUC), the ROC curve, and global accuracy compared to existing approaches for both datasets.


Subject(s)
Breast Neoplasms/diagnostic imaging , Mammography/methods , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Breast/diagnostic imaging , Female , Humans , Machine Learning
6.
Comput Intell Neurosci ; 2018: 6747098, 2018.
Article in English | MEDLINE | ID: mdl-30224913

ABSTRACT

In spite of advances in object recognition technology, handwritten Bangla character recognition (HBCR) remains largely unsolved due to the presence of many ambiguous handwritten characters and excessively cursive Bangla handwriting. Even many advanced existing methods do not achieve satisfactory HBCR performance in practice. In this paper, a set of state-of-the-art deep convolutional neural networks (DCNNs) is discussed and their performance on HBCR is systematically evaluated. The main advantage of DCNN approaches is that they can extract discriminative features from raw data and represent them with a high degree of invariance to object distortions. The experimental results show the superior performance of DCNN models compared with other popular object recognition approaches, which implies that DCNNs can be a good candidate for building an automatic HBCR system for practical applications.


Subject(s)
Handwriting , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Pattern Recognition, Automated/methods , Humans , Machine Learning
7.
Sensors (Basel) ; 12(4): 5116-33, 2012.
Article in English | MEDLINE | ID: mdl-22666078

ABSTRACT

One of the most critical issues in Wireless Sensor Networks (WSNs) is the deployment of a limited number of sensors so as to achieve maximum coverage of a terrain. Optimal sensor deployment, which minimizes the energy consumed, the communication time, and the manpower needed to maintain the network, has attracted growing interest over the last decade. Most studies in the literature today target two-dimensional (2D) surfaces; however, real-world sensor deployments often arise in three-dimensional (3D) environments. In this paper, a guided wavelet transform (WT) based deployment strategy (WTDS) for 3D terrains is proposed, in which the sensor movements are carried out within the mutation phase of a genetic algorithm (GA). The proposed algorithm aims to maximize the Quality of Coverage (QoC) of a WSN by deploying a limited number of sensors on a 3D surface, utilizing a probabilistic sensing model and Bresenham's line of sight (LOS) algorithm. The method is novel to the literature; its performance is compared with the Delaunay Triangulation (DT) method as well as a standard genetic algorithm based method, and the results reveal that the proposed method is more powerful and more successful for sensor deployment on 3D terrains.
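A Quality of Coverage objective under a probabilistic sensing model can be sketched as follows. The exponential-decay detection probability and the `alpha` parameter are assumptions for illustration; the paper's exact sensing model, the LOS visibility test, and the 3D terrain are omitted from this 2D toy:

```python
import math

def detection_prob(sensor, point, r, alpha=0.5):
    """Assumed probabilistic sensing model: detection probability
    decays exponentially with distance and is zero beyond range r."""
    d = math.dist(sensor, point)
    return 0.0 if d > r else math.exp(-alpha * d)

def coverage(sensors, grid, r):
    """QoC sketch: average probability that each grid point is
    detected by at least one sensor (complement of all-miss)."""
    total = 0.0
    for p in grid:
        miss = 1.0
        for s in sensors:
            miss *= 1.0 - detection_prob(s, p, r)
        total += 1.0 - miss
    return total / len(grid)

grid = [(x, y) for x in range(5) for y in range(5)]
c = coverage([(2, 2)], grid, r=3.0)
print(round(c, 3))
```

A GA-based strategy like the one above would treat sensor positions as the genome and use such a coverage score as the fitness function, moving sensors during mutation to increase it.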

8.
IEEE Trans Image Process ; 18(6): 1314-25, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19366643

ABSTRACT

A feature selection technique, along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system, is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher-dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations, and temperature-induced variations that affect visual and thermal face recognition techniques. The AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure greatly improves recognition accuracy for both the visual and thermal images compared to conventional techniques. A decision-level fusion methodology is also presented which, along with the feature selection procedure, outperforms various other face recognition techniques in terms of recognition accuracy.


Subject(s)
Face , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Algorithms , Artificial Intelligence , Humans , Photography , Principal Component Analysis/methods , ROC Curve , Thermography
9.
Neural Netw ; 22(1): 91-9, 2009 Jan.
Article in English | MEDLINE | ID: mdl-18995987

ABSTRACT

In this paper, we propose the concept of a manifold of color perception, based on the empirical observation that the center-surround properties of images in a perceptually similar environment define a manifold in a high-dimensional space. Such a manifold representation can be learned using a novel recurrent neural network based learning algorithm. Unlike the conventional recurrent neural network model, in which memory is stored at attractive fixed points at discrete locations in the state space, the dynamics of the proposed learning algorithm represent memory as a nonlinear line of attraction. The region of convergence around the nonlinear line is defined by the statistical characteristics of the training data. The learned manifold can then be used as a basis for color correction of images whose color perception differs from the learned one. Experimental results show that the proposed recurrent neural network learning algorithm successfully color-balances lighting variations in images captured in different environments.


Subject(s)
Algorithms , Artificial Intelligence , Color Vision , Memory , Neural Networks, Computer , Pattern Recognition, Automated , Association Learning/physiology , Color , Color Vision/physiology
10.
IEEE Trans Neural Netw ; 17(1): 246-50, 2006 Jan.
Article in English | MEDLINE | ID: mdl-16526493

ABSTRACT

We propose a linear attractor network based on the observation that similar patterns form a pipeline in the state space, which can be used for pattern association. To model the pipeline in the state space, we present a learning algorithm using a recurrent neural network. A least-squares estimation approach utilizing the interdependency between neurons defines the dynamics of the network. The region of convergence around the line of attraction is defined based on the statistical characteristics of the input patterns. Performance of the learning algorithm is evaluated by conducting several experiments in benchmark problems, and it is observed that the new technique is suitable for multiple-valued pattern association.
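The line-of-attraction idea can be illustrated with a least-squares fit: a set of similar patterns is summarized by a line in the state space, and recall projects a probe pattern onto that line, so any point on the line is a fixed point of the update. This sketch uses an SVD-based line fit and a single synchronous projection step; it is an illustration of the concept, not the authors' network or learning rule:

```python
import numpy as np

def fit_line_attractor(patterns):
    """Least-squares fit of a line of attraction through similar
    patterns: the mean plus the dominant direction from an SVD."""
    mu = patterns.mean(axis=0)
    _, _, vt = np.linalg.svd(patterns - mu)
    return mu, vt[0]  # a point on the line and its unit direction

def recall(x, mu, d):
    # One synchronous update: project the probe onto the attractor line
    return mu + np.dot(x - mu, d) * d

# Similar patterns lying near a line through the origin (toy data).
rng = np.random.default_rng(1)
t = rng.uniform(-1, 1, size=(50, 1))
patterns = t * np.array([1.0, 2.0, -1.0]) + 0.01 * rng.normal(size=(50, 3))

mu, d = fit_line_attractor(patterns)
probe = np.array([0.5, 1.0, -0.5]) + 0.2   # noisy probe pattern
out = recall(probe, mu, d)
print(np.allclose(recall(out, mu, d), out))  # → True: points on the line are fixed
```

The contrast with a classical point-attractor memory is that an entire continuum of patterns along the line is stored, which is what makes the scheme suitable for multiple-valued pattern association.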
