1.
Sci Rep ; 13(1): 5737, 2023 Apr 07.
Article in English | MEDLINE | ID: mdl-37029181

ABSTRACT

Metallographic images, often called microstructures, contain important information about metals, such as strength, toughness, ductility, and corrosion resistance, which is used to choose the proper material for various engineering applications. By understanding the microstructure, one can determine the behaviour of a component made of a particular metal and predict its failure under certain conditions. Image segmentation is a powerful technique for determining morphological features of the microstructure, such as volume fraction, inclusion morphology, voids, and crystal orientations, which are key factors in determining the physical properties of a metal. Automatic microstructure characterization using image processing is therefore useful for industrial applications, which currently adopt deep learning-based segmentation models. In this paper, we propose a metallographic image segmentation method using an ensemble of modified U-Nets. Three U-Net models with the same architecture are separately fed with color-transformed images (RGB, HSV, and YUV). We augment the U-Net with dilated convolutions and attention mechanisms to obtain finer-grained features, and then apply a sum-rule-based ensemble to the outputs of the three U-Net models to obtain the final prediction mask. We achieve a mean intersection over union (IoU) score of 0.677 on a publicly available standard dataset, MetalDAM, and show that the proposed method obtains results comparable to state-of-the-art methods with fewer model parameters. The source code of the proposed work can be found at https://github.com/mb16biswas/attention-unet.
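The sum-rule fusion described above can be sketched as follows: the per-pixel class probabilities from the three U-Nets are summed (equivalently, averaged) and the final mask is the per-pixel argmax. This is a minimal illustration of the fusion step only, not the authors' full pipeline; the toy probability maps are invented for demonstration.

```python
import numpy as np

def sum_rule_ensemble(prob_maps):
    """Average per-pixel class probabilities from several segmentation
    models, then take the argmax to obtain the final prediction mask."""
    stacked = np.stack(prob_maps, axis=0)   # (n_models, H, W, n_classes)
    mean_probs = stacked.mean(axis=0)       # (H, W, n_classes)
    return mean_probs.argmax(axis=-1)       # (H, W) integer mask

# Toy example: three 2x2 probability maps over 2 classes,
# standing in for the RGB-, HSV- and YUV-fed U-Net outputs
m1 = np.array([[[0.9, 0.1], [0.4, 0.6]],
               [[0.2, 0.8], [0.7, 0.3]]])
m2 = np.array([[[0.8, 0.2], [0.6, 0.4]],
               [[0.3, 0.7], [0.6, 0.4]]])
m3 = np.array([[[0.7, 0.3], [0.3, 0.7]],
               [[0.1, 0.9], [0.8, 0.2]]])
mask = sum_rule_ensemble([m1, m2, m3])      # -> [[0, 1], [1, 0]]
```

Because the sum rule averages calibrated probabilities rather than hard labels, a model that is confidently right can outvote two models that are weakly wrong, which is one reason it is a popular fusion baseline.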

2.
Diagnostics (Basel) ; 12(5)2022 May 08.
Article in English | MEDLINE | ID: mdl-35626328

ABSTRACT

Parkinson's Disease (PD) is a progressive central nervous system disorder caused by neural degeneration, mainly in the substantia nigra of the brain. It is responsible for the decline of various motor functions due to the loss of dopamine-producing neurons. Tremor in the hands is usually the initial symptom, followed by rigidity, bradykinesia, postural instability, and impaired balance. Proper diagnosis and preventive treatment can help patients improve their quality of life. We propose an ensemble of Deep Learning (DL) models to predict Parkinson's disease using DaTscan images. Initially, we use four DL models, namely VGG16, ResNet50, Inception-V3, and Xception, to classify Parkinson's disease. In the next stage, we apply a fuzzy-logic-based fusion ensemble approach to enhance the overall result of the classification. The proposed model is assessed on a publicly available database provided by the Parkinson's Progression Markers Initiative (PPMI). The recognition accuracy, precision, sensitivity, specificity, and F1-score achieved by the proposed model are 98.45%, 98.84%, 98.84%, 97.67%, and 98.84%, respectively, which are higher than those of the individual models. We have also developed a Graphical User Interface (GUI)-based software tool for public use that instantly detects all classes using Magnetic Resonance Imaging (MRI) with reasonable accuracy. The proposed method offers better performance than other state-of-the-art methods in detecting PD, and the developed GUI-based software tool can play a significant role in detecting the disease in real time.
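The abstract does not spell out the fuzzy fusion rule, so the sketch below shows one common fuzzy-style fusion as an assumption: each model's softmax output is treated as a fuzzy membership of the sample in each class, memberships are combined with the product t-norm, and the class with the highest fused score wins. The four example output vectors are hypothetical.

```python
import numpy as np

def fuzzy_fusion(prob_vectors):
    """Illustrative fuzzy-style fusion: combine per-class membership
    degrees from several classifiers with the product t-norm and
    return the class with the highest fused score."""
    fused = np.prod(np.stack(prob_vectors, axis=0), axis=0)
    return int(fused.argmax()), fused

# Hypothetical outputs of the four base models for a 2-class
# (PD vs. control) decision on one DaTscan image
outputs = [
    np.array([0.85, 0.15]),  # e.g. VGG16
    np.array([0.70, 0.30]),  # e.g. ResNet50
    np.array([0.60, 0.40]),  # e.g. Inception-V3
    np.array([0.90, 0.10]),  # e.g. Xception
]
label, scores = fuzzy_fusion(outputs)  # label -> 0
```

The product t-norm is stricter than averaging: a single model assigning a class near-zero membership effectively vetoes that class, which is the behaviour fuzzy fusion schemes typically exploit.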

3.
Comput Methods Programs Biomed ; 219: 106776, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35398621

ABSTRACT

BACKGROUND AND OBJECTIVE: Cervical cancer is one of the leading causes of death in women. As with any other disease, early detection and treatment of cervical cancer with the best possible medical advice are paramount to minimizing its after-effects. Pap smear images are one of the most effective ways to detect the presence of this type of cancer. This article proposes a fuzzy distance-based ensemble of deep learning models for cervical cancer detection in Pap smear images. METHODS: We employ three transfer learning models for this task: Inception V3, MobileNet V2, and Inception ResNet V2, with additional layers to learn data-specific features. To aggregate the outcomes of these models, we propose a novel ensemble method based on minimizing the error between the observed values and the ground truth. For samples with multiple predictions, we first take three distance measures, i.e., Euclidean, Manhattan (city-block), and cosine, for each class from its corresponding best possible solution. We then defuzzify these distance measures using the product rule to calculate the final predictions. RESULTS: In the current experiments, Inception V3, MobileNet V2, and Inception ResNet V2 achieve 95.30%, 93.92%, and 96.44% accuracy, respectively, when run individually. After applying the proposed ensemble technique, the performance reaches 96.96%, which is higher than that of the individual models. CONCLUSION: Experimental outcomes on three publicly available datasets show that the proposed model presents competitive results compared to state-of-the-art methods. The proposed approach provides an end-to-end classification technique for detecting cervical cancer from Pap smear images, which may help medical professionals treat cervical cancer more effectively and thus increase the overall efficiency of the testing process.
The source code of the proposed work can be found at github.com/rishavpramanik/CervicalFuzzyDistanceEnsemble.
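The distance-based fusion in the METHODS section can be sketched as follows: for each class, measure how far each model's probability vector lies from that class's ideal one-hot vector under the Euclidean, Manhattan, and cosine distances, combine the three accumulated distances with the product rule, and predict the class with the smallest combined score. This is a reading of the abstract, not the repository's exact implementation, and the three example prediction vectors are invented.

```python
import numpy as np

def distance_product_ensemble(prob_vectors):
    """For each class, sum Euclidean, Manhattan and cosine distances of
    every model's probability vector from that class's one-hot 'best
    possible solution', combine the three distance totals with the
    product rule, and return the class with the smallest score."""
    n_classes = len(prob_vectors[0])
    scores = []
    for c in range(n_classes):
        ideal = np.zeros(n_classes)
        ideal[c] = 1.0
        euc = man = cos = 0.0
        for p in prob_vectors:
            p = np.asarray(p, dtype=float)
            euc += np.linalg.norm(p - ideal)
            man += np.abs(p - ideal).sum()
            cos += 1.0 - (p @ ideal) / (np.linalg.norm(p) * np.linalg.norm(ideal))
        scores.append(euc * man * cos)  # product rule over the three measures
    return int(np.argmin(scores))

# Hypothetical outputs of the three transfer-learning models
# for a 2-class (abnormal vs. normal) Pap smear decision
preds = [[0.8, 0.2], [0.6, 0.4], [0.9, 0.1]]
label = distance_product_ensemble(preds)  # -> 0
```

Using distances to an ideal solution rather than raw confidences lets the ensemble reward agreement in direction as well as magnitude, since the cosine term is scale-invariant while the Euclidean and Manhattan terms are not.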


Subject(s)
Uterine Cervical Neoplasms , Early Detection of Cancer , Female , Humans , Papanicolaou Test , Uterine Cervical Neoplasms/diagnostic imaging , Vaginal Smears
4.
Comput Biol Med ; 141: 105027, 2022 02.
Article in English | MEDLINE | ID: mdl-34799076

ABSTRACT

Breast cancer is one of the deadliest diseases in women, and its incidence is growing at an alarming rate. However, early detection of this disease can be life-saving. The rapid development of deep learning techniques has generated a great deal of interest in the medical imaging field, and researchers around the world are working on breast cancer detection methods using medical imaging. In the present work, we propose a two-stage model for breast cancer detection using thermographic images. First, features are extracted from the images using a deep learning model, VGG16. To select the optimal subset of features, we use a meta-heuristic called the Dragonfly Algorithm (DA) in the second stage. To improve the performance of the DA, a memory-based version of the algorithm is proposed using the Grunwald-Letnikov (GL) method. The proposed two-stage framework has been evaluated on a publicly available standard dataset, DMR-IR. The proposed model efficiently filters out non-essential features, achieving 100% diagnostic accuracy on this dataset with 82% fewer features than the VGG16 model.
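The abstract does not give the GL update formula, so the sketch below shows one common way Grunwald-Letnikov memory is grafted onto a swarm-style position update: the new position blends the last few positions with the fractional binomial coefficients w_k = (-1)^(k+1) * C(alpha, k) before adding the algorithm's step term. The fractional order alpha, memory depth m, and the `memory_update` helper are assumptions for illustration, not the paper's exact formulation.

```python
def gl_memory_weights(alpha, m):
    """First m Grunwald-Letnikov coefficients w_k = (-1)^(k+1) * C(alpha, k).
    For 0 < alpha < 1 they are positive and decay quickly, so older
    positions contribute progressively less to the update."""
    weights = []
    coeff = 1.0
    for k in range(1, m + 1):
        coeff *= (alpha - (k - 1)) / k  # builds C(alpha, k) incrementally
        weights.append(((-1) ** (k + 1)) * coeff)
    return weights

def memory_update(history, step, alpha=0.6, m=4):
    """Hypothetical memory-based position update: blend the last m
    positions (newest first) with GL weights, then add the step term
    the underlying optimiser would normally apply."""
    w = gl_memory_weights(alpha, min(m, len(history)))
    blended = sum(wk * xk for wk, xk in zip(w, reversed(history)))
    return blended + step

# For alpha = 0.6: w1 = 0.6, w2 = 0.6*0.4/2 = 0.12, ...
print(gl_memory_weights(0.6, 2))
```

The appeal of the GL term is that it replaces the single-step inertia of a plain update with a fading record of the whole trajectory, which can damp the oscillations meta-heuristics like the DA are prone to near an optimum.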


Subject(s)
Breast Neoplasms , Algorithms , Breast Neoplasms/diagnostic imaging , Female , Humans , Thermography
5.
Appl Intell (Dordr) ; 51(12): 8985-9000, 2021.
Article in English | MEDLINE | ID: mdl-34764594

ABSTRACT

The rapid spread of coronavirus disease has become one of the worst disruptive disasters of the century around the globe. To fight the spread of this virus, clinical image analysis of chest computed tomography (CT) images can play an important role in accurate diagnosis. In the present work, a bi-modular hybrid model is proposed to detect COVID-19 from chest CT images. In the first module, we use a Convolutional Neural Network (CNN) architecture to extract features from the chest CT images. In the second module, we use a bi-stage feature selection (FS) approach to find the most relevant features for the prediction of COVID and non-COVID cases. In the first stage of FS, we apply a guided FS methodology employing two filter methods, Mutual Information (MI) and Relief-F, for the initial screening of the features obtained from the CNN model. In the second stage, the Dragonfly Algorithm (DA) is used for the further selection of the most relevant features. The final feature set is used to classify COVID-19 and non-COVID chest CT images with a Support Vector Machine (SVM) classifier. The proposed model has been tested on two open-access datasets, SARS-CoV-2 CT and COVID-CT, and shows substantial prediction rates of 98.39% and 90.0% on these datasets, respectively. The proposed model has also been compared with several past works on the prediction of COVID-19 cases. The supporting code is available at: https://github.com/Soumyajit-Saha/A-Bi-Stage-Feature-Selection-on-Covid-19-Dataset.
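The first filter stage can be illustrated with a minimal mutual-information ranking: score each (discretised) CNN feature by its MI with the labels and keep the top-ranked indices for the wrapper stage. This sketch shows MI only; the paper also uses Relief-F at this stage, and the toy data are invented.

```python
import numpy as np

def mutual_information(x, y):
    """MI (in nats) between a discrete feature column x and labels y."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def filter_stage(X, y, keep):
    """Stage 1 of a bi-stage FS pipeline: rank features by MI with the
    labels and return the indices of the `keep` best ones (sorted).
    The survivors would then go to a wrapper stage such as the DA."""
    scores = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
    return sorted(np.argsort(scores)[::-1][:keep].tolist())

# Toy data: feature 0 perfectly tracks the label, feature 1 is noise
X = np.array([[0, 1], [0, 0], [1, 1], [1, 0]])
y = np.array([0, 0, 1, 1])
print(filter_stage(X, y, 1))  # -> [0]
```

Running a cheap filter first shrinks the search space so that the expensive wrapper stage (here, the DA plus SVM evaluations) only explores combinations of features that are individually informative.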

6.
Sci Rep ; 11(1): 20696, 2021 10 19.
Article in English | MEDLINE | ID: mdl-34667253

ABSTRACT

The analysis of human facial expressions from thermal images captured by Infrared Thermal Imaging (IRTI) cameras has recently gained importance over images captured by standard cameras using light in the visible spectrum. This is because infrared cameras work well in low-light conditions, and the infrared spectrum captures the thermal distribution, which is very useful for building systems such as robot-interaction systems, quantifying cognitive responses from facial expressions, disease control, etc. In this paper, a deep learning model called IRFacExNet (InfraRed Facial Expression Network) is proposed for facial expression recognition (FER) from infrared images. It uses two building blocks, a Residual unit and a Transformation unit, which extract dominant, expression-specific features from the input images. The extracted features help to detect the emotion of the subjects under consideration accurately. The snapshot ensemble technique is adopted with a cosine annealing learning rate scheduler to improve the overall performance. The performance of the proposed model has been evaluated on a publicly available dataset, the IRDatabase developed by RWTH Aachen University. The facial expressions present in the dataset are fear, anger, contempt, disgust, happiness, neutral, sadness, and surprise. The proposed model achieves 88.43% recognition accuracy, better than some state-of-the-art methods considered here for comparison, and provides a robust framework for accurate expression detection in the absence of visible light.
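The snapshot-ensemble idea above can be sketched without any training framework: the learning rate follows a cosine curve within each restart cycle, and model weights are saved ("snapshotted") at the end of each cycle, where the rate has annealed to its minimum. The cycle length, rate bounds, and `snapshot_points` helper are illustrative assumptions, not values from the paper.

```python
import math

def cosine_annealing_lr(lr_max, lr_min, t, cycle_len):
    """Learning rate at step t within a restart cycle of length cycle_len:
    starts at lr_max, follows half a cosine, and ends near lr_min."""
    t_cur = t % cycle_len
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t_cur / cycle_len))

def snapshot_points(total_steps, n_snapshots):
    """Steps at which snapshot weights would be saved: the last step of
    each cycle, where the annealed model sits in a local minimum."""
    cycle = total_steps // n_snapshots
    return [cycle * (i + 1) - 1 for i in range(n_snapshots)]

print(cosine_annealing_lr(0.1, 0.0, 0, 100))   # start of a cycle: lr_max
print(snapshot_points(300, 3))                  # -> [99, 199, 299]
```

Averaging or voting over the snapshots then gives an ensemble of several converged models for the training cost of one, which is what makes the technique attractive for a single-dataset FER setting.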


Subject(s)
Facial Recognition/physiology , Cognition/physiology , Deep Learning , Emotions/physiology , Facial Expression , Female , Humans , Spectrophotometry, Infrared/methods