Results 1 - 5 of 5

1.
Diagnostics (Basel) ; 13(13)2023 Jun 24.
Article in English | MEDLINE | ID: mdl-37443550

ABSTRACT

Echocardiography is one of the imaging modalities most often used to assess heart anatomy and function. Left ventricular ejection fraction (LVEF) is an important clinical variable estimated from echocardiography via measurement of left ventricle (LV) parameters. Significant inter-observer and intra-observer variability is seen when cardiologists quantify LVEF from large volumes of echocardiography data. Machine learning algorithms can analyze such extensive datasets and identify intricate patterns of cardiac structure and function that even highly skilled observers might overlook, paving the way for computer-assisted diagnostics in this field. In this study, LV segmentation is performed on echocardiogram data, followed by clinically motivated feature extraction from the segmented left ventricle. The extracted features are then analyzed with both neural networks and traditional machine learning algorithms to estimate LVEF. The results indicate that applying machine learning techniques to the extracted LV features yields higher accuracy than Simpson's method for estimating LVEF. The evaluations are performed on a publicly available echocardiogram dataset, EchoNet-Dynamic. The best results are obtained when DeepLab, a convolutional neural network architecture, is used for LV segmentation together with long short-term memory (LSTM) networks for LVEF regression, achieving a Dice similarity coefficient of 0.92 and a mean absolute error of 5.736%.
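
The pipeline described above (per-frame LV features feeding a recurrent regressor) could be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' code: the two geometric features (mask area and vertical extent), the network sizes, and the dummy inputs are all assumptions.

```python
# Hypothetical sketch: per-frame LV mask features -> LSTM -> LVEF estimate (PyTorch).
import numpy as np
import torch
import torch.nn as nn

def frame_features(mask: np.ndarray) -> np.ndarray:
    """Two simple geometric features from one binary LV mask: area and vertical extent."""
    area = float(mask.sum())
    ys, _ = np.nonzero(mask)
    length = float(ys.max() - ys.min() + 1) if ys.size else 0.0
    return np.array([area, length], dtype=np.float32)

class LVEFRegressor(nn.Module):
    """LSTM over the per-frame feature sequence, regressing a single LVEF value."""
    def __init__(self, n_features=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, frames, n_features)
        _, (h, _) = self.lstm(x)             # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)  # (batch,) predicted LVEF

# Dummy usage: 4 clips, 32 frames each, 2 features per frame.
model = LVEFRegressor()
pred_lvef = model(torch.randn(4, 32, 2))
```

In practice such a regressor would be trained with an L1 or MSE loss against ground-truth ejection fractions, e.g., those provided with EchoNet-Dynamic.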

2.
IEEE Trans Cybern ; 51(9): 4515-4527, 2021 Sep.
Article in English | MEDLINE | ID: mdl-31880579

ABSTRACT

This article presents an efficient fingerprint identification system that performs an initial classification to reduce the search space, followed by minutiae-neighbor-based feature encoding and matching. Current state-of-the-art fingerprint classification methods use a deep convolutional neural network (DCNN) to assign a confidence to the classification prediction, and based on this prediction the input fingerprint is matched only against the subset of the database that belongs to the predicted class. For DCNNs, it can be observed that as the architectures deepen, the deepest layers of the network learn more abstract information from the input images, which results in higher prediction accuracy. The downside, however, is that DCNNs are data-hungry and require large amounts of annotated (labeled) data to learn generalized network parameters for the deeper layers. In this article, a shallow multifeature view CNN (SMV-CNN) fingerprint classifier is proposed that extracts: 1) fine-grained features from the input image and 2) abstract features from explicitly derived representations obtained from the input image. The multifeature views are fed to a fully connected neural network (NN) to compute a global classification prediction. The classification results show that the SMV-CNN achieves an improvement of 2.8% over a baseline CNN consisting of a single grayscale view on an open-source database. Moreover, compared with the state-of-the-art residual network (ResNet-50) image classification model, the proposed method performs comparably while being less complex and more efficient to train. Classification-based fingerprint identification reduces the search space by over 50% without degrading identification accuracy.
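
The two-branch idea (a fine-grained grayscale view plus an explicitly derived view, fused in a fully connected classifier) might look roughly like the sketch below. It is not the paper's SMV-CNN: the layer counts, channel widths, choice of derived view, and the five-class output are assumptions for illustration only.

```python
# Hypothetical sketch of a shallow two-view fingerprint classifier (PyTorch).
import torch
import torch.nn as nn

def shallow_branch():
    """A deliberately shallow convolutional feature extractor for one view."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(8), nn.Flatten(),
    )

class TwoViewCNN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.gray_view = shallow_branch()      # raw grayscale fingerprint
        self.derived_view = shallow_branch()   # explicitly derived representation (assumed)
        self.classifier = nn.Sequential(
            nn.Linear(2 * 32 * 8 * 8, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, gray, derived):
        fused = torch.cat([self.gray_view(gray), self.derived_view(derived)], dim=1)
        return self.classifier(fused)          # class logits used to shrink the search space

# Dummy usage: the predicted class selects which database subset to match minutiae against.
model = TwoViewCNN()
logits = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
```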


Subject(s)
Neural Networks, Computer
3.
Multimed Tools Appl ; : 1-26, 2021 Dec 27.
Article in English | MEDLINE | ID: mdl-34975282

ABSTRACT

Smart video surveillance helps to build a more robust smart city environment. Cameras at varied angles act as smart sensors, collect visual data from the smart city environment, and transmit it for further visual analysis. The transmitted visual data must be of high quality for efficient analysis, which is challenging when transmitting video over low-bandwidth communication channels. In the latest smart surveillance cameras, high-quality video transmission is maintained through video encoding techniques such as High Efficiency Video Coding (HEVC). However, these coding techniques still offer limited capabilities, and the demand for high-quality encoding of salient regions such as pedestrians, vehicles, cyclists/motorcyclists, and roads in video surveillance systems is still not met. This work contributes an efficient salient-region-based surveillance framework for smart cities. The proposed framework integrates a deep learning-based video surveillance technique that extracts salient regions from a video frame without information loss and then encodes the frame at reduced size. We applied this approach to diverse smart city case-study environments to test the applicability of the framework. The proposed work achieves a bitrate saving of 56.92%, a peak signal-to-noise ratio gain of 5.35 dB, and salient-region segmentation accuracies of 92% and 96% on two different benchmark datasets. Consequently, the generation of less computationally demanding region-based video data makes the framework well suited to improving surveillance solutions in smart cities.
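
One way to realize salient-region-based encoding is to turn the segmentation mask into a per-block quantization-parameter (QP) offset map that favors salient coding tree units. The sketch below is only an illustration under stated assumptions, not the proposed framework: the 64x64 block size, the offset values, and the rule that any salient pixel marks a block are placeholders, and the hand-off of the map to an actual HEVC encoder is encoder-specific and omitted.

```python
# Hypothetical sketch: salient-region mask -> per-block QP offset map (lower QP = higher quality).
import numpy as np

def qp_offset_map(salient_mask: np.ndarray, block=64, roi_offset=-6, bg_offset=+6):
    """salient_mask: HxW binary array from the segmentation network (1 = salient)."""
    h, w = salient_mask.shape
    rows, cols = int(np.ceil(h / block)), int(np.ceil(w / block))
    offsets = np.full((rows, cols), bg_offset, dtype=np.int8)
    for r in range(rows):
        for c in range(cols):
            blk = salient_mask[r * block:(r + 1) * block, c * block:(c + 1) * block]
            if blk.any():                      # any salient pixel -> treat block as ROI
                offsets[r, c] = roi_offset
    return offsets                             # one QP offset per 64x64 coding tree unit

mask = np.zeros((720, 1280), dtype=np.uint8)
mask[300:420, 500:620] = 1                     # e.g., a detected pedestrian
print(qp_offset_map(mask).shape)               # (12, 20)
```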

4.
IEEE Access ; 8: 22812-22825, 2020.
Article in English | MEDLINE | ID: mdl-32391238

ABSTRACT

Tuberculosis (TB) is an infectious disease that can lead to death if left untreated. TB detection involves extracting complex manifestation features such as lung cavities, air-space consolidation, endobronchial spread, and pleural effusions from chest X-rays (CXRs). A deep learning approach, the convolutional neural network (CNN), has the ability to learn such complex features from CXR images. The main problem is that a CNN classifying CXRs through a softmax layer does not account for uncertainty; it fails to represent the true probability of CXRs and to differentiate confusing cases during TB detection. This paper presents a solution for TB identification using a Bayesian convolutional neural network (B-CNN), which handles uncertain cases that show low discernibility between TB-manifested and non-TB CXRs. The proposed B-CNN-based TB identification methodology is evaluated on two TB benchmark datasets, Montgomery and Shenzhen. For training and testing of the proposed scheme, we used the Google Colab platform, which provides an NVIDIA Tesla K80 GPU with 12 GB of VRAM, a single-core 2.3 GHz Xeon processor, 12 GB of RAM, and 320 GB of disk space. The B-CNN achieves 96.42% and 86.46% accuracy on the two datasets, respectively, compared with state-of-the-art machine learning and CNN approaches. Moreover, the B-CNN validates its results by filtering out CXRs as confusion cases when the variance of the B-CNN's predicted outputs exceeds a certain threshold. The results demonstrate the superiority of the B-CNN over its counterparts for identifying TB and non-TB CXR samples in terms of accuracy, variance in the predicted probabilities, and model uncertainty.
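
A common way to approximate a Bayesian CNN is Monte Carlo dropout: keep dropout active at test time, run several stochastic forward passes, and use the variance of the predicted probabilities to flag confusion cases, which is the filtering behavior the abstract describes. The sketch below illustrates only that idea; the toy architecture, the number of passes, and the variance threshold are assumptions and not the paper's B-CNN.

```python
# Hypothetical sketch: Monte Carlo dropout for uncertainty-aware TB/non-TB classification (PyTorch).
import torch
import torch.nn as nn

class DropoutCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                      nn.MaxPool2d(2), nn.Dropout2d(0.5),
                                      nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.classifier = nn.Sequential(nn.Dropout(0.5), nn.Linear(16 * 4 * 4, 2))

    def forward(self, x):
        return self.classifier(self.features(x))

def predict_with_uncertainty(model, x, passes=20, var_threshold=0.05):
    model.train()                              # keep dropout active during inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(passes)])
    mean, var = probs.mean(0), probs.var(0)    # per-class mean and variance over passes
    uncertain = var.max(dim=1).values > var_threshold   # flag "confusion cases"
    return mean.argmax(dim=1), uncertain

# Dummy usage on a batch of 3 single-channel CXRs.
model = DropoutCNN()
pred, needs_review = predict_with_uncertainty(model, torch.randn(3, 1, 224, 224))
```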

5.
J Digit Imaging ; 32(6): 1027-1043, 2019 12.
Article in English | MEDLINE | ID: mdl-30980262

ABSTRACT

Surgical telementoring systems have gained considerable interest, especially in remote locations. However, bandwidth constraints have been the primary bottleneck for efficient telementoring systems. This study aims to establish an efficient surgical telementoring system in which a qualified surgeon (mentor) provides real-time guidance and technical assistance for surgical procedures to the on-site physician (surgeon). High Efficiency Video Coding (HEVC/H.265)-based video compression has shown promising results for telementoring applications. However, there is a trade-off between the bandwidth resources required for video transmission and the quality of the video received by the remote surgeon. To compress and transmit real-time surgical videos efficiently, a hybrid lossless-lossy approach is proposed in which the surgical incision region is coded at high quality, whereas the background region is coded at lower quality based on its distance from the surgical incision region. State-of-the-art deep learning (DL) architectures for semantic segmentation could be used to extract the surgical incision region; however, their computational complexity is high, resulting in long training and inference times. Because encoding time is crucial for telementoring systems, very deep architectures are not suitable for surgical incision extraction. In this study, we propose a shallow convolutional neural network (S-CNN)-based segmentation approach that consists of an encoder network only for surgical region extraction. The segmentation performance of the S-CNN is compared with that of a state-of-the-art image segmentation network (SegNet), and the results demonstrate the effectiveness of the proposed network. The proposed telementoring system is efficient and explicitly exploits the physiological nature of the human visual system to encode the video, providing good overall visual quality in the location of surgery. The proposed S-CNN-based segmentation achieved a pixel accuracy of 97% and a mean intersection-over-union of 79%. Similarly, HEVC experiments showed that the proposed surgical-region-based encoding scheme achieved an average bitrate reduction of 88.8% at high-quality settings compared with default full-frame HEVC encoding. The average gain in encoding performance (signal-to-noise ratio) of the proposed algorithm is 11.5 dB in the surgical region. The bitrate saving and visual quality of the proposed optimal bit allocation scheme are compared with a mean-shift segmentation-based coding scheme for a fair comparison. The results show that the proposed scheme maintains high visual quality in the surgical incision region while achieving a good bitrate saving. Based on these comparisons and results, the proposed encoding algorithm can be considered an efficient and effective solution for surgical telementoring over low-bandwidth networks.
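
The distance-based quality grading described above can be illustrated by banding pixels according to their distance from the segmented incision region; each band would then be mapped to a quantization level inside the encoder. This is a sketch under assumed band widths, not the authors' bit-allocation scheme.

```python
# Hypothetical sketch: grade quality bands by distance from the segmented incision region.
import numpy as np
from scipy.ndimage import distance_transform_edt

def quality_bands(incision_mask: np.ndarray, band_px=(32, 96)):
    """incision_mask: HxW binary output of the segmentation network (1 = incision)."""
    dist = distance_transform_edt(incision_mask == 0)   # distance to nearest incision pixel
    bands = np.digitize(dist, bins=band_px)              # 0 = incision/near, higher = farther
    return bands                                          # each band maps to a QP in the encoder

mask = np.zeros((480, 640), dtype=np.uint8)
mask[200:240, 300:360] = 1                                # segmented incision region
bands = quality_bands(mask)
print(np.unique(bands))                                   # e.g., [0 1 2]
```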


Subject(s)
Mentoring/methods , Neural Networks, Computer , Surgical Procedures, Operative/methods , Telemedicine/methods , Video Recording , Humans