Results 1 - 12 of 12
1.
IEEE Trans Med Imaging ; 42(12): 3987-4000, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37768798

ABSTRACT

Polyps are very common abnormalities in human gastrointestinal regions. Their early diagnosis may help in reducing the risk of colorectal cancer. Vision-based computer-aided diagnostic systems automatically identify polyp regions to assist surgeons in their removal. Due to their varying shape, color, size, texture, and unclear boundaries, polyp segmentation in images is a challenging problem. Existing deep learning segmentation models mostly rely on convolutional neural networks, which have certain limitations in learning the diversity of visual patterns at different spatial locations and fail to capture inter-feature dependencies. Vision transformer models have also been deployed for polyp segmentation due to their powerful global feature extraction capabilities, but they too must be supplemented by convolution layers to learn contextual local information. In the present paper, a polyp segmentation model, CoInNet, is proposed with a novel feature extraction mechanism that leverages the strengths of convolution and involution operations and learns to highlight polyp regions in images by considering the relationship between different feature maps through a statistical feature attention unit. To further aid the network in learning polyp boundaries, an anomaly boundary approximation module is introduced that uses recursively fed feature fusion to refine segmentation results. Remarkably, even tiny polyps occupying only 0.01% of an image area can be precisely segmented by CoInNet. This is crucial for clinical applications, as small polyps can easily be overlooked even in manual examination due to the voluminous size of wireless capsule endoscopy videos. CoInNet outperforms thirteen state-of-the-art methods on five benchmark polyp segmentation datasets.
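The involution operation referenced above inverts convolution's design: kernels are generated on the fly for each spatial location and shared across channel groups, rather than being spatially shared and channel-specific. Since the abstract does not give CoInNet's architecture, the following is only a minimal PyTorch sketch of a generic involution layer (the group count, reduction ratio, and kernel-generation branch are assumptions), not the paper's implementation.

```python
import torch
import torch.nn as nn

class Involution2d(nn.Module):
    """Minimal involution layer: kernels are generated per spatial location
    and shared across channel groups, the inverse of convolution's
    spatially shared, channel-specific kernels."""
    def __init__(self, channels, kernel_size=3, groups=4, reduction=4):
        super().__init__()
        self.k, self.g = kernel_size, groups
        # Kernel-generation branch: one k*k kernel per group per pixel.
        self.reduce = nn.Conv2d(channels, channels // reduction, 1)
        self.span = nn.Conv2d(channels // reduction, kernel_size**2 * groups, 1)
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        kernels = self.span(self.reduce(x))                   # (B, k*k*G, H, W)
        kernels = kernels.view(b, self.g, 1, self.k**2, h, w)
        patches = self.unfold(x)                              # (B, C*k*k, H*W)
        patches = patches.view(b, self.g, c // self.g, self.k**2, h, w)
        out = (kernels * patches).sum(dim=3)                  # weighted sum over k*k
        return out.view(b, c, h, w)

x = torch.randn(1, 16, 32, 32)
print(Involution2d(16)(x).shape)  # torch.Size([1, 16, 32, 32])
```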


Subject(s)
Capsule Endoscopy , Surgeons , Humans , Neural Networks, Computer , Image Processing, Computer-Assisted
2.
Appl Soft Comput ; 125: 109109, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35693544

ABSTRACT

The COVID-19 pandemic has posed an unprecedented threat to the global public health system, primarily infecting the airway epithelial cells in the respiratory tract. Chest X-ray (CXR) is widely available, faster, and less expensive; it is therefore preferred for monitoring the lungs in COVID-19 diagnosis over other techniques such as molecular tests, antigen tests, antibody tests, and chest computed tomography (CT). As the pandemic continues to reveal the limitations of our current ecosystems, researchers are coming together to share their knowledge and experience to develop new systems to tackle it. In this work, an end-to-end IoT infrastructure is designed and built to diagnose patients remotely in the case of a pandemic, limiting COVID-19 dissemination while also improving measurement science. The proposed framework comprises six steps. In the last step, a model is designed to interpret CXR images and intelligently measure the severity of COVID-19 lung infections using a novel deep neural network (DNN). The proposed DNN employs multi-scale sampling filters to extract reliable and noise-invariant features from a variety of image patches. Experiments are conducted on five publicly available databases, including COVIDx, COVID-19 Radiography, COVID-XRay-5K, COVID-19-CXR, and COVIDchestxray, with classification accuracies of 96.01%, 99.62%, 99.22%, 98.83%, and 100%, and testing times of 0.541, 0.692, 1.28, 0.461, and 0.202 s, respectively. The obtained results show that the proposed model surpasses fourteen baseline techniques. As a result, the newly developed model could be utilized to evaluate treatment efficacy, particularly in remote locations.
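A common way to realize "multi-scale sampling filters" is a block of parallel convolutions with different receptive fields whose outputs are concatenated. The paper's exact filter bank is not described in the abstract, so the kernel sizes and channel split below are assumptions; this is a sketch of the general idea, not the published DNN.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Illustrative multi-scale sampling block: parallel convolutions with
    different receptive fields, concatenated channel-wise."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 3
        self.b3 = nn.Conv2d(in_ch, branch_ch, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, 5, padding=2)
        self.b7 = nn.Conv2d(in_ch, out_ch - 2 * branch_ch, 7, padding=3)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1))

cxr = torch.randn(1, 1, 224, 224)          # a single-channel CXR tensor
print(MultiScaleBlock(1, 48)(cxr).shape)   # torch.Size([1, 48, 224, 224])
```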

3.
Biomedicines ; 10(4)2022 Apr 12.
Article in English | MEDLINE | ID: mdl-35453637

ABSTRACT

This work analyses the results of research on the predisposition to genetic hematological risks associated with secondary polyglobulia. The subjects of the study were selected based on shared laboratory markers and basic clinical symptoms. JAK2 (Janus kinase 2) mutation negativity represented the common genetic marker of the subjects in the sample of interest; a negative JAK2 mutation hypothetically excluded the presence of an autonomous myeloproliferative disease at the time of detection. The parameters studied in this work focused mainly on thrombotic, immunological, metabolic, and cardiovascular risks. The final goal of the work was to identify the most significant key markers for the diagnosis of high-risk patients and to exclude the less important or merely complementary markers, which often represent a superfluous economic burden for healthcare institutions. These research results are applicable as a clinical guideline for the effective diagnosis of selected parameters that demonstrated high sensitivity and specificity. According to the results obtained in the present research, groups with a high incidence of mutations were evaluated as being at higher risk for polycythemia vera. It was not possible to determine clearly which of the patients examined had a higher risk of developing the disease, as different combinations of mutations could manifest as different symptoms of the disease. In general, the entire study group was at risk for manifestations of polycythemia vera without a clear diagnosis. The group with less than 20% incidence appeared to be clinically insignificant for polycythemia vera testing, offering a potential saving in the cost of mutation testing. On the other hand, the JAK2 V617F parameter (a somatic mutation of JAK2) from this group should be investigated, as it clearly excludes or confirms polycythemia vera as the primary disease.

4.
Comput Biol Med ; 145: 105420, 2022 06.
Article in English | MEDLINE | ID: mdl-35390744

ABSTRACT

Depression, or major depressive disorder, is characterized by persistent sadness and a sense of worthlessness, as well as a loss of interest in pleasurable activities, leading to a variety of physical and emotional problems. It is a worldwide illness that affects millions of people and should be detected at an early stage to prevent negative effects on an individual's life. Electroencephalography (EEG) is a non-invasive technique for detecting depression that analyses brain signals to determine the current mental state of depressed subjects. In this study, we propose a method for automatic feature extraction to detect depression by first constructing a graph from the dataset, where the nodes represent the subjects and the edge weights, obtained using the Euclidean distance, reflect the relationship between them. The Node2vec algorithmic framework is then used to compute feature representations for nodes in the graph in the form of node embeddings, ensuring that similar nodes in the graph remain close in the embedding space. These node embeddings act as useful features that can be used directly by classification algorithms to determine whether a subject is depressed, thus reducing the effort required for manual handcrafted feature extraction. To combine the features collected from the multiple channels of the EEG data, three fusion strategies are proposed: graph-level fusion, feature-level fusion, and decision-level fusion. The proposed method is tested on three publicly available datasets with 3, 20, and 128 channels, respectively, and compared to five state-of-the-art methods. The results show that the proposed method detects depression effectively, with a peak accuracy of 0.933 in decision-level fusion, the highest among the compared methods.
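The pipeline described above maps naturally onto networkx plus the node2vec package: build a weighted graph over subjects, embed the nodes, and hand the embeddings to an off-the-shelf classifier. A minimal sketch on synthetic data follows; the similarity transform of the Euclidean distance, the walk parameters, and the random-forest classifier are all assumptions, since the abstract does not fix them.

```python
import numpy as np
import networkx as nx
from node2vec import Node2Vec
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 64))      # per-subject EEG feature vectors (synthetic)
y = rng.integers(0, 2, size=30)    # 0 = healthy, 1 = depressed (synthetic)

# Fully connected graph over subjects; the abstract weights edges by Euclidean
# distance, so we use an inverse-distance similarity (an assumption) so that
# closer subjects get heavier edges for the random walks.
G = nx.Graph()
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        d = np.linalg.norm(X[i] - X[j])
        G.add_edge(i, j, weight=1.0 / (1.0 + d))

# node2vec: biased random walks + skip-gram to embed each subject node.
n2v = Node2Vec(G, dimensions=16, walk_length=20, num_walks=50, workers=1, quiet=True)
model = n2v.fit(window=5, min_count=1)
emb = np.array([model.wv[str(n)] for n in G.nodes()])

clf = RandomForestClassifier(random_state=0).fit(emb, y)  # classifier is an assumption
print("train accuracy:", clf.score(emb, y))
```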


Subject(s)
Brain-Computer Interfaces , Depressive Disorder, Major , Algorithms , Depression/diagnosis , Depressive Disorder, Major/diagnosis , Electroencephalography , Humans
5.
Comput Biol Med ; 137: 104789, 2021 10.
Article in English | MEDLINE | ID: mdl-34455302

ABSTRACT

Wireless capsule endoscopy (WCE) is one of the most efficient methods for the examination of gastrointestinal tracts. Computer-aided intelligent diagnostic tools alleviate the challenges faced during manual inspection of long WCE videos. Several approaches have been proposed in the literature for the automatic detection and localization of anomalies in WCE images. Some of them focus on specific anomalies such as bleeding, polyps, and lesions; however, relatively few generic methods have been proposed to detect all of these common anomalies simultaneously. In this paper, a deep convolutional neural network (CNN)-based model, 'WCENet', is proposed for anomaly detection and localization in WCE images. The model works in two phases. In the first phase, a simple and efficient attention-based CNN classifies an image into one of four categories: polyp, vascular, inflammatory, or normal. If the image is classified into one of the abnormal categories, it is processed in the second phase for anomaly localization. A fusion of Grad-CAM++ and a custom SegNet is used for anomalous region segmentation in the abnormal image. The WCENet classifier attains an accuracy of 98% and an area under the receiver operating characteristic curve of 99%. The WCENet segmentation model obtains a frequency-weighted intersection over union of 81% and an average Dice score of 56% on the KID dataset, and outperforms nine state-of-the-art conventional machine learning and deep learning models on the same dataset. The proposed model demonstrates potential for clinical applications.
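A rough sketch of the two-phase control flow follows, using a stock ResNet-18 in place of the paper's attention-based CNN and the pytorch-grad-cam package for Grad-CAM++; the thresholded heatmap is only a crude stand-in for the Grad-CAM++/SegNet fusion, whose details the abstract does not give.

```python
import torch
import numpy as np
from torchvision.models import resnet18
from pytorch_grad_cam import GradCAMPlusPlus
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

CLASSES = ["polyp", "vascular", "inflammatory", "normal"]

# Phase 1: classify the frame. ResNet-18 stands in for the paper's
# attention-based CNN, whose architecture is not specified in the abstract.
model = resnet18(num_classes=len(CLASSES)).eval()
frame = torch.randn(1, 3, 224, 224)           # synthetic WCE frame
pred = model(frame).argmax(1).item()

# Phase 2: only abnormal frames are localized. Grad-CAM++ gives a coarse
# anomaly heatmap; in the paper it is fused with a custom SegNet's output.
if CLASSES[pred] != "normal":
    cam = GradCAMPlusPlus(model=model, target_layers=[model.layer4[-1]])
    heatmap = cam(input_tensor=frame, targets=[ClassifierOutputTarget(pred)])[0]
    mask = (heatmap > 0.5).astype(np.uint8)   # crude stand-in for the fusion
    print(CLASSES[pred], "region pixels:", int(mask.sum()))
else:
    print("normal frame, no localization")
```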


Subject(s)
Capsule Endoscopy , Algorithms , Image Processing, Computer-Assisted , Machine Learning , Neural Networks, Computer , ROC Curve
6.
Entropy (Basel) ; 22(10)2020 Oct 19.
Article in English | MEDLINE | ID: mdl-33286942

ABSTRACT

The detection and localization of image regions that attract immediate human visual attention is currently an active area of research in computer vision. The capability to automatically identify and segment such salient image regions has immediate consequences for applications in computer vision, computer graphics, and multimedia. A large number of salient object detection (SOD) methods have been devised to effectively mimic the capability of the human visual system to detect salient regions in images. These methods can be broadly divided into two categories based on their feature engineering mechanism: conventional and deep learning-based. In this survey, most of the influential advances in image-based SOD from both the conventional and deep learning-based categories are reviewed in detail. Relevant saliency modeling trends, with key issues, core techniques, and the scope for future research work, are discussed in the context of difficulties often faced in salient object detection. Results are presented for various challenging cases on some large-scale public datasets. The different metrics considered for assessing the performance of state-of-the-art salient object detection models are also covered. Some future directions for SOD are presented towards the end.

7.
Comput Biol Med ; 127: 104094, 2020 12.
Article in English | MEDLINE | ID: mdl-33152668

ABSTRACT

One of the most recent non-invasive technologies for examining the gastrointestinal tract is wireless capsule endoscopy (WCE). As there are thousands of endoscopic images in an 8-15 h long video, an evaluator has to pay constant attention for a relatively long time (60-120 min). The possibility that pathological findings appear in only a few images (each displayed for evaluation for just a few seconds) therefore carries a significant risk of missing the pathology, with all the negative consequences for the patient. Hence, manually reviewing a video to identify abnormal images is not only a tedious and time-consuming task that overwhelms human attention but is also error-prone. In this paper, a method is proposed for the automatic detection of abnormal WCE images. The differential box counting method is used to extract the fractal dimension (FD) of WCE images, and a random-forest-based ensemble classifier is used to identify abnormal frames. The FD is a well-known technique for extracting features related to texture, smoothness, and roughness. In this paper, FDs are extracted from pixel blocks of WCE images and fed to the classifier to identify images with abnormalities. To determine a suitable pixel-block size for FD feature extraction, various block sizes are considered and fed separately into six frequently used classifiers, and the block size of 7×7 is empirically found to give the best performance. The selection of the random forest ensemble classifier is made through the same empirical study. The performance of the proposed method is evaluated on two datasets containing WCE frames. Results demonstrate that the proposed method outperforms some of the state-of-the-art methods, with AUCs of 85% and 99% on Dataset-I and Dataset-II, respectively.
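The differential box counting step can be written compactly: tile a patch into s × s boxes, count ceil(max/h) - ceil(min/h) + 1 intensity boxes per tile, and take the FD as the slope of log N_r versus log(1/r). The sketch below applies this to non-overlapping 7×7 blocks, the paper's best setting; the scale set, the handling of edge tiles, and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dbc_fractal_dimension(patch, scales=(2, 3)):
    """Differential box counting on a grayscale patch (values 0-255).
    For each scale s the patch is tiled into s x s boxes; the count for a
    tile is ceil(max/h) - ceil(min/h) + 1, with box height h = s * 256 / M
    for an M x M patch. FD is the slope of log N_r vs log(1/r)."""
    M = patch.shape[0]
    log_nr, log_inv_r = [], []
    for s in scales:
        h = s * 256.0 / M
        n_boxes = 0
        for i in range(0, M - s + 1, s):        # edge tiles are dropped
            for j in range(0, M - s + 1, s):
                tile = patch[i:i + s, j:j + s]
                n_boxes += np.ceil(tile.max() / h) - np.ceil(tile.min() / h) + 1
        log_nr.append(np.log(n_boxes))
        log_inv_r.append(np.log(M / s))
    return np.polyfit(log_inv_r, log_nr, 1)[0]  # least-squares slope

def frame_features(img, block=7):
    """FD of every non-overlapping 7x7 block, the paper's best setting."""
    H, W = img.shape
    return [dbc_fractal_dimension(img[r:r + block, c:c + block])
            for r in range(0, H - block + 1, block)
            for c in range(0, W - block + 1, block)]

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(40, 28, 28))   # synthetic WCE frames
labels = rng.integers(0, 2, size=40)               # 0 = normal, 1 = abnormal
X = np.array([frame_features(f) for f in frames])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```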


Subject(s)
Capsule Endoscopy , Fractals , Gastrointestinal Tract , Humans
8.
Appl Opt ; 59(22): 6593, 2020 Aug 01.
Article in English | MEDLINE | ID: mdl-32749359

ABSTRACT

This publisher's note amends information in the Funding section of Appl. Opt. 59, 5642 (2020), https://doi.org/10.1364/AO.391234.

9.
Comput Math Methods Med ; 2020: 8303465, 2020.
Article in English | MEDLINE | ID: mdl-32831902

ABSTRACT

Human emotion recognition has been a major field of research in recent decades owing to its noteworthy academic and industrial applications. However, most state-of-the-art methods identify emotions by analyzing facial images, while emotion recognition using electroencephalogram (EEG) signals has received less attention. The advantage of using EEG signals is that they can capture real emotion. However, very few EEG signal databases are publicly available for affective computing. In this work, we present a database consisting of EEG signals of 44 volunteers, 23 of whom are female. A 32-channel CLARITY EEG traveler sensor is used to record four emotional states, namely happy, fear, sad, and neutral, by showing subjects 12 videos, with three videos devoted to each emotion. Participants are mapped to the emotion that they felt after watching each video. The recorded EEG signals are then used to classify the four types of emotions based on the discrete wavelet transform and an extreme learning machine (ELM), to report an initial benchmark classification performance. The ELM algorithm is used for channel selection followed by subband selection. The proposed method performs best when features are captured from the gamma subband of the FP1-F7 channel, with 94.72% accuracy. The presented database will be made available to researchers for affective recognition applications.
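The feature-extraction and classification pair described above is easy to sketch: DWT subband statistics per channel, then an extreme learning machine, i.e. a random untrained hidden layer whose output weights are solved in closed form. The wavelet, decomposition level, subband statistics, and hidden-layer size below are assumptions; the abstract fixes only the overall DWT + ELM design.

```python
import numpy as np
import pywt

def dwt_subband_features(signal, wavelet="db4", level=4):
    """Decompose an EEG channel with the DWT and keep simple statistics
    per subband. The paper's best features come from the gamma band of
    channel FP1-F7; the wavelet and level here are assumptions."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([f(c) for c in coeffs for f in (np.mean, np.std)])

class ELM:
    """Minimal extreme learning machine: a random, untrained hidden layer
    followed by an output layer solved in closed form via the pseudoinverse."""
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        T = np.eye(y.max() + 1)[y]            # one-hot targets
        self.beta = np.linalg.pinv(H) @ T     # closed-form output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(1)

rng = np.random.default_rng(1)
signals = rng.normal(size=(44, 512))   # one channel per subject (synthetic)
labels = rng.integers(0, 4, size=44)   # happy / fear / sad / neutral
X = np.array([dwt_subband_features(s) for s in signals])
elm = ELM().fit(X, labels)
print("train accuracy:", (elm.predict(X) == labels).mean())
```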


Subject(s)
Algorithms , Electroencephalography/methods , Emotions/classification , Benchmarking , Brain/anatomy & histology , Brain/physiology , Brain Waves/physiology , Computational Biology , Databases, Factual , Electroencephalography/statistics & numerical data , Emotions/physiology , Female , Humans , Machine Learning , Male , Mathematical Concepts , Neural Networks, Computer , Photic Stimulation , Video Recording
10.
Appl Opt ; 59(19): 5642-5655, 2020 Jul 01.
Article in English | MEDLINE | ID: mdl-32609685

ABSTRACT

Multi-focus image fusion is defined as "the combination of a group of partially focused images of the same scene with the objective of producing a fully focused image." Transform-domain image fusion methods normally preserve the textures and edges in the blended image, but many are translation variant. Translation-invariant transforms produce approximation and detail images of the same size, which makes it more convenient to devise fusion rules. In this work, a translation-invariant multi-focus image fusion approach using the à-trous wavelet transform is introduced, which uses the fractal dimension as a clarity measure for the approximation coefficients and Otsu's threshold to fuse the detail coefficients. The subjective assessment of the proposed method is carried out against the fusion results of nine state-of-the-art methods, while eight fusion quality metrics are considered for the objective assessment. The results of the subjective and objective assessments on grayscale and color multi-focus image pairs illustrate that the proposed method is competitive with, and even better than, some of the existing methods.
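The à-trous wavelet transform used here decomposes an image into full-resolution detail planes by repeatedly smoothing with an increasingly dilated kernel and subtracting; that undecimated structure is what makes it translation invariant. Below is a minimal sketch of the decomposition with the customary B3-spline kernel (the kernel choice and level count are assumptions); the FD and Otsu fusion rules themselves are not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve

def a_trous_decompose(img, levels=3):
    """À-trous (undecimated) wavelet decomposition with the B3-spline kernel.
    Each level smooths with a dilated kernel; the detail plane is the
    difference between successive approximations, all at full resolution."""
    b3 = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    approx, details = img.astype(float), []
    for lv in range(levels):
        # Dilate the kernel by inserting 2**lv - 1 zeros between taps.
        k = np.zeros(4 * 2**lv + 1)
        k[:: 2**lv] = b3
        smooth = convolve(convolve(approx, k[None, :], mode="mirror"),
                          k[:, None], mode="mirror")
        details.append(approx - smooth)
        approx = smooth
    return approx, details      # img == approx + sum(details)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
approx, details = a_trous_decompose(img)
print(np.allclose(img, approx + sum(details)))  # True: perfect reconstruction
```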

11.
Article in English | MEDLINE | ID: mdl-29078042

ABSTRACT

New image fusion rules for multimodal medical images are proposed in this work. The fusion rules are defined by the random forest learning algorithm and a translation-invariant à-trous wavelet transform (AWT). The proposed method comprises three steps. First, the source images are decomposed into approximation and detail coefficients using the AWT. Second, a random forest is used to choose pixels from the approximation and detail coefficients to form the approximation and detail coefficients of the fused image. Lastly, the inverse AWT is applied to reconstruct the fused image. All experiments have been performed on 198 slices of both computed tomography and positron emission tomography images of a patient. A traditional fusion method based on the Mallat wavelet transform has also been implemented on these slices. A new image fusion performance measure is presented alongside four existing measures, which helps to compare the performance of the two pixel-level fusion methods. The experimental results clearly indicate that the proposed method outperforms the traditional method in terms of visual and quantitative quality and that the new measure is meaningful.
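Conceptually, the random forest here acts as a learned, per-pixel selection rule over matched coefficient planes. The sketch below shows only that selection step on synthetic planes; the features (local absolute activity in a 3×3 window) and the proxy training labels are assumptions, as the abstract does not specify how the forest is trained.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def local_activity(plane, i, j):
    """Simple per-pixel features: mean and max absolute value in a 3x3 window."""
    win = plane[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    return [np.abs(win).mean(), np.abs(win).max()]

rng = np.random.default_rng(0)
ct, pet = rng.normal(size=(32, 32)), rng.normal(size=(32, 32))  # synthetic planes

X, y = [], []
for i in range(32):
    for j in range(32):
        X.append(local_activity(ct, i, j) + local_activity(pet, i, j))
        y.append(int(abs(pet[i, j]) > abs(ct[i, j])))   # proxy "ground truth"

# The forest decides, per pixel, which source's coefficient enters the fused plane.
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pick_pet = rf.predict(X).reshape(32, 32).astype(bool)
fused = np.where(pick_pet, pet, ct)        # fused coefficient plane
print("pixels taken from PET:", int(pick_pet.sum()))
```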


Subject(s)
Positron Emission Tomography Computed Tomography/methods , Wavelet Analysis , Algorithms , Humans , Tomography, X-Ray Computed
12.
Comput Intell Neurosci ; 2012: 261089, 2012.
Article in English | MEDLINE | ID: mdl-22924035

ABSTRACT

Thermal infrared (IR) images capture changes in the temperature distribution over facial muscles and blood vessels. These temperature changes can be regarded as texture features of the images. A comparative study of two face recognition methods working in the thermal spectrum is carried out in this paper. In the first approach, the training and test images are processed with the Haar wavelet transform, and the LL band and the average of the LH/HL/HH band subimages are created for each face image. A total confidence matrix is then formed for each face image by taking a weighted sum of the corresponding pixel values of the LL band and the average band. In the second approach, based on local binary patterns (LBP), each face image in the training and test datasets is divided into 161 subimages, each of size 8 × 8 pixels. LBP features are extracted from each such subimage and concatenated. PCA is performed separately on each individual feature set for dimensionality reduction. Finally, two different classifiers, namely a multilayer feed-forward neural network and a minimum distance classifier, are used to classify the face images. The experiments have been performed on a database created in our own laboratory and on the Terravic Facial IR Database.
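The LBP branch described above is straightforward to sketch with scikit-image: compute an LBP map, tile it into 8 × 8 subimages, histogram each tile, concatenate, then reduce with PCA. The face size, LBP variant, and histogram binning below are assumptions; a 112 × 92 face yields 14 × 11 = 154 tiles rather than the paper's 161, which implies a different input size.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA

def lbp_face_features(face, block=8, P=8, R=1):
    """Tile the LBP map into 8x8-pixel subimages, histogram each tile with
    uniform-pattern bins, and concatenate the normalized histograms."""
    lbp = local_binary_pattern(face, P, R, method="uniform")
    n_bins = P + 2                        # uniform patterns + one catch-all bin
    feats = []
    H, W = face.shape
    for r in range(0, H - block + 1, block):
        for c in range(0, W - block + 1, block):
            tile = lbp[r:r + block, c:c + block]
            hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
            feats.append(hist / hist.sum())
    return np.concatenate(feats)

rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(20, 112, 92)).astype(np.uint8)  # synthetic IR faces
X = np.array([lbp_face_features(f) for f in faces])
X_red = PCA(n_components=10).fit_transform(X)  # dimensionality reduction, as in the paper
print(X.shape, "->", X_red.shape)
```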


Subject(s)
Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Wavelet Analysis , Algorithms , Body Temperature , Databases, Factual , Face , Humans , Image Enhancement/methods