Results 1 - 19 of 19
1.
Sci Rep ; 14(1): 4947, 2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38418484

ABSTRACT

The Internet of Things (IoT) paves the way for modern smart industrial applications and cities. A Trusted Authority acts as the sole control point for monitoring and maintaining communications between IoT devices and the infrastructure. Communication between IoT devices belonging to different trusted entities is established by generating security certificates, but establishing trust this way for every device in a smart city application can be prohibitively expensive. To address this, a secure group authentication scheme that creates trust among a group of IoT devices owned by several entities has been proposed. Most existing authentication techniques are designed for individual device authentication and are merely reused for group authentication; the Dickson polynomial based secure group authentication scheme, in contrast, is a dedicated group solution. The secret keys used in the proposed scheme are generated using the Dickson polynomial, which enables a group to authenticate without generating excessive network traffic overhead. Blockchain technology is employed to enable secure, efficient, and fast data transfer among the IoT devices of each group deployed at different places. The proposed scheme is resistant to replay, man-in-the-middle, tampering, side channel, signature forgery, impersonation, and ephemeral secret leakage attacks; to accomplish this, a hardware-based physically unclonable function has been incorporated. The implementation was carried out in Python and deployed and tested on the blockchain using the Ethereum Goerli testnet framework. 
Performance analysis against various benchmarks shows that the proposed framework outperforms its counterparts across several metrics, with better performance in terms of computation, communication, storage, and latency.
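The Dickson recurrence the scheme builds on is simple to state: D_0(x, a) = 2, D_1(x, a) = x, and D_n(x, a) = x·D_{n-1}(x, a) − a·D_{n-2}(x, a). A minimal sketch of that recurrence (not the paper's actual key-derivation code):

```python
def dickson(n, x, a):
    """Dickson polynomial D_n(x, a) via the recurrence
    D_0 = 2, D_1 = x, D_n = x * D_{n-1} - a * D_{n-2}."""
    if n == 0:
        return 2
    if n == 1:
        return x
    prev, curr = 2, x
    for _ in range(2, n + 1):
        prev, curr = curr, x * curr - a * prev
    return curr
```

The property that makes shared-key derivation possible is the composition identity D_m(D_n(x, a), a^n) = D_{mn}(x, a); for a = 1 this reads D_m(D_n(x, 1), 1) = D_{mn}(x, 1), so two parties applying their own indices in either order reach the same value.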

2.
PeerJ Comput Sci ; 9: e1355, 2023.
Article in English | MEDLINE | ID: mdl-37346503

ABSTRACT

The modern era is defined by innovative technology and improvements in intelligent machinery, transportation facilities, emergency systems, and educational services, which make it difficult to comprehend a scene, analyze crowds, and observe individuals. This article recommends an organized method for an e-learning-based multiobject tracking and prediction framework for crowd data via a multilayer perceptron, taking e-learning crowd data as input and covering both usual and abnormal actions and activities. After segmentation with superpixels and fuzzy c-means, we used fused dense optical flow and gradient patches for feature extraction, and applied a compressive tracking algorithm and a Taylor series predictive tracking approach for multiobject tracking. The next step is to compute the mean, variance, speed, and frame occupancy utilized for trajectory extraction. To reduce data complexity and aid optimization, we applied T-distributed stochastic neighbor embedding (t-SNE). To predict normal and abnormal actions in e-learning crowd data, we used a multilayer perceptron (MLP) to classify the numerous classes. We used three crowd activity datasets, the University of California San Diego pedestrian (UCSD-Ped), ShanghaiTech, and Indian Institute of Technology Bombay (IITB) corridor datasets, for experimental evaluation on human and nonhuman-based videos. We achieved accuracies of 87.00% on UCSD-Ped, 85.75% on ShanghaiTech, and 88.00% on the IITB corridor dataset.
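The per-trajectory statistics named above (mean, variance, speed, frame occupancy) can be sketched as follows; the exact definitions, in particular the occupancy measure, are assumptions, since the abstract does not specify them:

```python
from statistics import mean, pvariance

def trajectory_features(points, frame_area):
    """points: [(x, y), ...] positions of one tracked object over a window.
    Returns illustrative mean position, per-axis variance, average speed
    (displacement per frame), and a frame-occupancy ratio."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    # speed: Euclidean displacement between consecutive frames
    speeds = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
              for (x1, y1), (x2, y2) in zip(points, points[1:])]
    # occupancy here is a hypothetical stand-in: observations per unit area
    occupancy = len(points) / frame_area
    return {"mean": (mean(xs), mean(ys)),
            "variance": (pvariance(xs), pvariance(ys)),
            "speed": mean(speeds) if speeds else 0.0,
            "occupancy": occupancy}
```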

3.
Cluster Comput ; 26(2): 1253-1266, 2023.
Article in English | MEDLINE | ID: mdl-36349064

ABSTRACT

Affective computing is one of the central studies for achieving advanced human-computer interaction and is a popular research direction in artificial intelligence for smart healthcare frameworks. In recent years, the use of electroencephalograms (EEGs) to analyze human emotional states has become a hot spot in the field of emotion recognition. However, the EEG is a non-stationary, non-linear signal that is sensitive to interference from other physiological signals and external factors. Traditional emotion recognition methods suffer from complex algorithm structures and low recognition precision. In this article, based on an in-depth analysis of EEG signals, we study emotion recognition methods in the following respects. First, this study used the DEAP dataset and the excitement (arousal) model; the relevant frequency band of the original signal was selected with a Butterworth filter, and the data were then scaled to a common range using min-max normalization. We also performed hybrid experiments on sliding windows and overlaps to obtain an optimal combination for feature computation, and applied the Discrete Wavelet Transform (DWT) to extract features from the preprocessed EEG data. Finally, a pre-trained k-Nearest Neighbor (kNN) machine learning model was used for recognition and classification, and different combinations of DWT and kNN parameters were tested and fitted. After 10-fold cross-validation, the precision reached 86.4%. Compared to state-of-the-art research, this method has higher recognition accuracy than conventional methods while maintaining a simple structure and a high speed of operation.
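The two preprocessing and feature steps named above can be sketched minimally: min-max normalization and a one-level DWT. The Haar wavelet here is an assumption, since the abstract does not name the mother wavelet used:

```python
from math import sqrt

def min_max(signal, lo=0.0, hi=1.0):
    """Rescale a signal so its samples span [lo, hi]."""
    mn, mx = min(signal), max(signal)
    return [lo + (v - mn) * (hi - lo) / (mx - mn) for v in signal]

def haar_dwt(signal):
    """One-level Haar DWT over an even-length signal:
    returns (approximation, detail) coefficient lists."""
    approx = [(signal[i] + signal[i + 1]) / sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail
```

Multi-level decompositions, as typically used for EEG band features, just reapply `haar_dwt` to the approximation coefficients.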

4.
PeerJ Comput Sci ; 8: e1105, 2022.
Article in English | MEDLINE | ID: mdl-36262158

ABSTRACT

Human locomotion is an important topic of discussion among researchers, and predicting human motion using multiple techniques and algorithms has always been a motivating subject. Different methods have shown the ability to recognize simple motion patterns; however, predicting the dynamics of complex locomotion patterns is still immature. Therefore, this article proposes unique methods, including a calibration-based filter algorithm and kinematic-static pattern identification, for predicting such complex activities from fused signals. Different types of signals are extracted from benchmarked datasets and pre-processed using a novel calibration-based filter for inertial signals along with a Bessel filter for physiological signals. Next, sliding overlapped windows are utilized to obtain motion patterns defined over time, and a polynomial probability distribution is used to decide the nature of each motion pattern. For kinematic-static pattern-based feature extraction, time- and probability-domain features are extracted over the physical action dataset (PAD) and the growing old together validation (GOTOV) dataset. The features are then optimized using quadratic discriminant analysis and orthogonal fuzzy neighborhood discriminant analysis. Manifold regularization algorithms have also been applied to assess the performance of the proposed prediction system. For the physical action dataset, we achieved an accuracy rate of 82.50% for patterned signals, while for the GOTOV dataset we achieved 81.90%. As a result, the proposed system outperformed other state-of-the-art models in the literature.
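The sliding overlapped windows step can be sketched as follows; the window size and overlap are tunable parameters, and the values in the test are illustrative only:

```python
def sliding_windows(samples, size, overlap):
    """Split a signal into overlapping windows of `size` samples, where
    consecutive windows share `overlap` samples. Trailing samples that
    do not fill a whole window are dropped."""
    step = size - overlap
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]
```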

5.
Sensors (Basel) ; 22(18)2022 Sep 08.
Article in English | MEDLINE | ID: mdl-36146134

ABSTRACT

Resource-constrained Consumer Internet of Things (CIoT) devices are controlled through gateway devices (e.g., smartphones, computers) connected to Mobile Edge Computing (MEC) servers or a cloud regulated by a third party. Recently, Machine Learning (ML) has been widely used in automation, consumer behavior analysis, device quality improvement, and more. Typical ML predicts by analyzing customers' raw data in a centralized system, which raises security and privacy issues such as data leakage, privacy violation, and a single point of failure. To overcome these problems, Federated Learning (FL) was developed as an initial solution to provide services without sharing personal data. In FL, a centralized aggregator averages the clients' updates into a global model used for the next round of training. However, the centralized aggregator raises the same issues: a single point of control can leak the updated model and interrupt the entire process. Additionally, research claims that data can be retrieved from model parameters. Beyond that, since the Gateway (GW) device has full access to the raw data, it can also threaten the entire ecosystem. This research contributes a blockchain-controlled, edge-intelligence federated learning framework providing a distributed learning platform for CIoT. The federated learning platform allows collaborative learning over users' locally held data, and the blockchain network replaces the centralized aggregator and ensures secure participation of gateway devices in the ecosystem. Furthermore, blockchain is trustless, immutable, and anonymous, encouraging CIoT end users to participate. We evaluated the framework and the federated learning outcomes using the well-known Stanford Cars dataset. Experimental results prove the effectiveness of the proposed framework.


Subject(s)
Blockchain , Internet of Things , Computer Security , Ecosystem , Privacy
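The aggregation step that the blockchain network takes over from the centralized aggregator is, in standard FL, a size-weighted average of client parameter updates (FedAvg). A sketch of just that rule; the on-chain replacement itself is out of scope for a few lines:

```python
def fed_avg(client_weights, client_sizes):
    """FedAvg: weighted average of per-client parameter vectors.
    client_weights: list of equal-length parameter lists, one per client.
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(len(client_weights[0]))]
```

The returned vector becomes the global model broadcast for the next training round.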
6.
Comput Math Methods Med ; 2022: 2858845, 2022.
Article in English | MEDLINE | ID: mdl-35813426

ABSTRACT

Brain cancer is a rare and deadly disease with a slim chance of survival, and one of the most important tasks for neurologists and radiologists is to detect brain tumors early. Recent work claims that computer-aided diagnosis systems can detect brain tumors by employing magnetic resonance imaging (MRI) as a supporting technology. In this study, we propose transfer learning approaches for a deep learning model to detect malignant tumors, such as glioblastoma, from MRI scans. This paper presents a deep learning-based approach for brain tumor identification and classification using the state-of-the-art object detection framework YOLO (You Only Look Once). YOLOv5 is a recent object detection technique that requires less computational architecture than competing models. The study used the BraTS 2021 dataset from the RSNA-MICCAI brain tumor radiogenomic classification challenge, with images annotated using Make Sense, an online AI-assisted labeling tool. The preprocessed data were then divided into training and testing sets for the model. The YOLOv5 model achieved a precision of 88 percent. Finally, the model was tested across the whole dataset, and we conclude that it is able to detect brain tumors successfully.


Subject(s)
Brain Neoplasms , Deep Learning , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Diagnosis, Computer-Assisted/methods , Humans , Magnetic Resonance Imaging/methods , Neural Networks, Computer
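Detection precision for a model like YOLOv5 is typically scored by intersection-over-union (IoU) between predicted and ground-truth boxes; a minimal sketch, not tied to the paper's evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes,
    the overlap measure commonly used to score object detections."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection counts as a true positive when its IoU with a ground-truth box exceeds a chosen threshold (0.5 is a common default), which is how precision figures like the 88% above are usually computed.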
7.
Healthcare (Basel) ; 10(7)2022 Jul 13.
Article in English | MEDLINE | ID: mdl-35885819

ABSTRACT

Nowadays, healthcare is a prime need of every human being, and clinical datasets play an important role in developing intelligent healthcare systems for monitoring people's health. Real-world datasets are often inherently class-imbalanced, and clinical datasets also suffer from this problem; imbalanced class distributions pose several issues in the training of classifiers, which consequently suffer from low accuracy, precision, and recall and a high degree of misclassification. We performed a brief literature review on the class-imbalanced learning scenario. This study carries out an empirical performance evaluation of six classifiers, namely Decision Tree, k-Nearest Neighbor, Logistic Regression, Artificial Neural Network, Support Vector Machine, and Gaussian Naïve Bayes, over five imbalanced clinical datasets (Breast Cancer Disease, Coronary Heart Disease, Indian Liver Patient, Pima Indians Diabetes Database, and Chronic Kidney Disease), with respect to seven class-balancing techniques, namely undersampling, random oversampling, SMOTE, ADASYN, SVM-SMOTE, SMOTEENN, and SMOTETomek. In addition, the appropriate explanations for the superiority of particular classifiers and data-balancing techniques are explored. Furthermore, we discuss possible recommendations on how to tackle class-imbalanced datasets when training different supervised machine learning methods. Result analysis demonstrates that the SMOTEENN balancing method often performed best across all six classifiers and all five clinical datasets, while the other six balancing techniques performed roughly equally but moderately worse than SMOTEENN.
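The core step the SMOTE-family oversamplers above share is interpolation between a minority-class sample and one of its minority-class neighbors. A sketch of that single step, with the neighbor search omitted:

```python
import random

def smote_sample(x, neighbor, rng=random):
    """Generate one synthetic minority sample on the line segment between a
    minority point and one of its minority-class nearest neighbors."""
    gap = rng.random()  # uniform in [0, 1)
    return [xi + gap * (ni - xi) for xi, ni in zip(x, neighbor)]
```

Variants differ in where they apply this step: ADASYN biases sampling toward harder examples, SVM-SMOTE works near the decision boundary, and SMOTEENN / SMOTETomek add a cleaning pass after oversampling.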

8.
Comput Intell Neurosci ; 2022: 8512469, 2022.
Article in English | MEDLINE | ID: mdl-35665292

ABSTRACT

In today's world, diabetic retinopathy is a very severe health issue affecting people of many age groups. Due to high blood sugar levels, the minuscule blood vessels in the retina can be damaged quickly, which may further lead to retinal detachment and sometimes even to glaucoma and blindness. If diabetic retinopathy can be diagnosed at an early stage, many affected people will not lose their vision, and lives can be saved. Several machine learning and deep learning methods have been applied to the available diabetic retinopathy datasets, but they were unable to provide better accuracy in preprocessing and in optimizing the classification and feature extraction process. To overcome these feature extraction and optimization issues in existing systems, we considered the Diabetic Retinopathy Debrecen dataset from the UCI machine learning repository and designed a deep learning model with principal component analysis (PCA) for dimensionality reduction; to extract the most important features, the Harris hawks optimization algorithm is further used to optimize the classification and feature extraction process. The results of the deep learning model in terms of specificity, precision, accuracy, and recall are very satisfactory compared to existing systems.


Subject(s)
Deep Learning , Diabetes Mellitus , Diabetic Retinopathy , Falconiformes , Algorithms , Animals , Birds , Diabetic Retinopathy/diagnosis , Humans , Machine Learning , Retina
9.
Comput Intell Neurosci ; 2022: 3098604, 2022.
Article in English | MEDLINE | ID: mdl-35755731

ABSTRACT

When it comes to conveying sentiments and thoughts, facial expressions are quite effective. For human-computer collaboration, data-driven animation, and communication between humans and robots to be successful, the capacity to recognize emotional states from facial expressions must be developed and implemented. Recently published studies have found that deep learning is becoming increasingly popular in image categorization; as a result, increasingly substantial efforts have been made in recent years to solve facial expression recognition (FER) using convolutional neural networks (CNNs). In this novel FER technique based on activations, optimizations, and regularization parameters, facial expressions are acquired from databases such as CK+ and JAFFE. The model recognizes happiness, sadness, surprise, fear, anger, disgust, and neutrality. The performance of the model was evaluated across a variety of settings of activation, optimization, and regularization, as well as other hyperparameters, as detailed in this study. In experiments, the FER technique recognized emotions best when combining the Adam optimizer, a Softmax output, and a dropout ratio of 0.1 to 0.2. It outperforms current FER techniques that rely on handcrafted features and a single channel, and shows superior network performance compared to the present state-of-the-art techniques.


Subject(s)
Facial Expression , Facial Recognition , Anger , Emotions/physiology , Humans , Neural Networks, Computer
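Of the hyperparameters tuned above, the Softmax output layer has an exact definition that is easy to state; a numerically stable stdlib sketch:

```python
from math import exp

def softmax(logits):
    """Map raw class scores to a probability distribution.
    Subtracting the max logit first avoids overflow in exp()."""
    m = max(logits)
    exps = [exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

Dropout, by contrast, is a training-time regularizer: each unit is zeroed with the chosen probability (0.1 to 0.2 above) on every forward pass, and is left untouched at inference.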
10.
Sensors (Basel) ; 22(7)2022 Mar 25.
Article in English | MEDLINE | ID: mdl-35408126

ABSTRACT

Unlike 2-dimensional (2D) images, direct 3-dimensional (3D) point cloud processing using deep neural network architectures is challenging, mainly due to the lack of explicit neighbor relationships. Many researchers attempt to remedy this by performing an additional voxelization preprocessing step. However, this adds computational overhead and introduces quantization error, limiting an accurate estimate of the underlying structure of objects that appear in the scene. To this end, in this article, we propose a deep network that can directly consume raw unstructured point clouds to perform object classification and part segmentation. In particular, a Deep Feature Transformation Network (DFT-Net) has been proposed, consisting of a cascading combination of edge convolutions and a feature transformation layer that captures local geometric features by preserving neighborhood relationships among the points. The proposed network builds a graph in which the edges are dynamically and independently calculated on each layer. To achieve object classification and part segmentation, we ensure point order invariance while conducting network training simultaneously. The evaluation of the proposed network has been carried out on two standard benchmark datasets for object classification and part segmentation, and the results were comparable to or better than existing state-of-the-art methodologies. The overall score obtained using the proposed DFT-Net is significantly improved compared to the state-of-the-art methods on the ModelNet40 dataset for object categorization.

11.
Diagnostics (Basel) ; 12(3)2022 Mar 17.
Article in English | MEDLINE | ID: mdl-35328279

ABSTRACT

A skin lesion is a portion of skin that exhibits abnormal growth compared to the surrounding skin. The ISIC 2018 lesion dataset has seven classes; a miniature version of it is also available with only two classes, malignant and benign. Malignant tumors are cancerous and have the ability to multiply and spread throughout the body at a much faster rate, while benign tumors are non-cancerous. Early detection of a cancerous skin lesion is crucial for the survival of the patient. Deep learning and machine learning models play an essential role in the detection of skin lesions, but due to image occlusions and imbalanced datasets, accuracies have been compromised so far. In this paper, we introduce an interpretable method for the non-invasive diagnosis of melanoma skin cancer using deep learning and ensemble stacking of machine learning models. The dataset used to train the classifier models contains balanced images of benign and malignant skin moles. Hand-crafted features are used to train the machine learning base models (logistic regression, SVM, random forest, KNN, and gradient boosting machine), and the predictions of these base models are used to train a level-one stacking model using cross-validation on the training set. Deep learning models (MobileNet, Xception, ResNet50, ResNet50V2, and DenseNet121), pre-trained on ImageNet, were used for transfer learning, and the classifier was evaluated for each model. The deep learning models were then ensembled in different combinations and assessed. Furthermore, Shapley additive explanations (SHAP) are used to construct an interpretability approach that generates heatmaps identifying the parts of an image most suggestive of the illness. This allows dermatologists to understand the results of our model in a way that makes sense to them. 
For evaluation, we calculated the accuracy, F1-score, Cohen's kappa, confusion matrix, and ROC curves and identified the best model for classifying skin lesions.

12.
Multimed Syst ; 28(4): 1175-1187, 2022.
Article in English | MEDLINE | ID: mdl-34075280

ABSTRACT

In recent times, COVID-19 infections have increased exponentially while only a restricted number of rapid testing kits exist. Several studies have reported COVID-19 diagnosis models based on chest X-ray images, but diagnosing COVID-19 patients from chest X-rays is a tedious process, as the bilateral modifications are considered an ill-posed problem. This paper presents a new metaheuristic-based fusion model for COVID-19 diagnosis using chest X-ray images, comprising preprocessing, feature extraction, and classification stages. Initially, the Wiener filtering (WF) technique is used for image preprocessing. Then, fusion-based feature extraction takes place by incorporating the gray level co-occurrence matrix (GLCM), gray level run length matrix (GLRM), and local binary patterns (LBP). Afterward, the salp swarm algorithm (SSA) selects the optimal feature subset. Finally, an artificial neural network (ANN) is applied as the classifier to distinguish infected from healthy patients. The proposed model's performance has been assessed on a chest X-ray image dataset, and the results are examined under diverse aspects. The obtained results confirm the presented model's superior performance over state-of-the-art methods.
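Of the fused texture descriptors above, the local binary pattern (LBP) is the simplest to sketch. This is the basic 3x3 variant; the clockwise neighbor ordering is a common convention, not something the abstract specifies:

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbors against
    the center pixel and read them clockwise from the top-left as a byte."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= c:          # neighbor at least as bright as center -> bit set
            code |= 1 << bit
    return code
```

The LBP feature for a whole image is typically the histogram of these codes over all 3x3 neighborhoods.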

13.
Multimed Syst ; 28(4): 1275-1288, 2022.
Article in English | MEDLINE | ID: mdl-33897112

ABSTRACT

Classification of human emotions based on electroencephalography (EEG) is a very popular topic nowadays in the provision of human health care and well-being. Fast and effective emotion recognition can play an important role in understanding a patient's emotions and in monitoring stress levels in real time. Due to the noisy and non-linear nature of the EEG signal, it is still difficult to understand emotions, and the signal can generate large feature vectors. In this article, we propose an efficient spatial feature extraction and feature selection method with a short processing time. The raw EEG signal is first divided into a smaller set of intrinsic mode functions (IMFs) using the empirical mode decomposition proposed in our work, known as intensive multivariate empirical mode decomposition (iMEMD). Spatio-temporal analysis is performed with the Complex Continuous Wavelet Transform (CCWT) to collect all the information in the time and frequency domains. The multiple-model extraction method uses three deep neural networks (DNNs) to extract features and fuse them into a combined feature vector. To overcome the computational burden, we propose a method based on differential entropy and mutual information, which further reduces feature size by selecting high-quality features and pooling the k-means results to produce lower-dimensional qualitative feature vectors. The system seems complex, but once the network is trained with this model, real-time application testing and validation with good classification performance is fast. The proposed attribute selection method is validated with two publicly available benchmark datasets, SEED and DEAP. This method is less expensive to compute than more modern emotion recognition methods, provides real-time emotion analysis, and offers good classification accuracy.
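The differential entropy used above for feature selection has a closed form when a frequency band is modeled as Gaussian, a standard assumption in EEG work; the abstract does not state which estimator the authors actually use:

```python
from math import log, pi, e

def gaussian_diff_entropy(variance):
    """Differential entropy of a Gaussian signal with the given variance:
    h = 0.5 * ln(2 * pi * e * sigma^2). A compact per-band EEG feature."""
    return 0.5 * log(2 * pi * e * variance)
```

Higher-variance (more active) bands thus receive higher entropy, which is why the feature discriminates well between emotional states.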

14.
Big Data ; 2021 Dec 13.
Article in English | MEDLINE | ID: mdl-34898266

ABSTRACT

There is a drastic increase in Internet usage across the globe, thanks to mobile phone penetration. This extreme Internet usage generates huge volumes of data, in other terms, big data, and security and privacy are the main issues to be considered in big data management. Hence, in this article, Attribute-based Adaptive Homomorphic Encryption (AAHE) is developed to enhance the security of big data. In the proposed methodology, Oppositional Based Black Widow Optimization (OBWO) is introduced to select the optimal key parameters for the AAHE method; by considering an oppositional function, the convergence of Black Widow Optimization (BWO) is enhanced. The proposed methodology has three processes, namely setup, encryption, and decryption. The methodology was evaluated with non-abelian rings and the homomorphism process in ciphertext format, and it is also utilized to improve one-way security related to the conjugacy examination problem. Homomorphic encryption is then applied to secure the big data. The study considered two big datasets, an adult dataset and an anonymous Microsoft web dataset, to validate the proposed methodology. Using performance metrics such as encryption time, decryption time, key size, processing time, and downloading and uploading time, the proposed method was evaluated and compared against conventional cryptography techniques such as Rivest-Shamir-Adleman (RSA) and Elliptic Curve Cryptography (ECC). The key generation process was also compared against conventional methods such as BWO, Particle Swarm Optimization (PSO), and the Firefly Algorithm (FA). The results establish that the proposed method is superior to the compared methods and can be applied in real time in the near future.
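The abstract does not specify the AAHE construction, but the homomorphic property such schemes rely on can be illustrated with textbook RSA (one of the baselines above), which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. A toy sketch with deliberately tiny, insecure keys, not the paper's scheme:

```python
# Toy textbook-RSA demo of the multiplicative homomorphic property:
# Enc(a) * Enc(b) mod n decrypts to a * b (mod n). Illustration only.
p, q = 61, 53
n = p * q                    # modulus, 3233
phi = (p - 1) * (q - 1)      # 3120
e_key = 17                   # public exponent
d_key = pow(e_key, -1, phi)  # private exponent (modular inverse, Python 3.8+)

def enc(m):
    """Encrypt an integer message m < n."""
    return pow(m, e_key, n)

def dec(c):
    """Decrypt a ciphertext."""
    return pow(c, d_key, n)
```

Homomorphic encryption generalizes this idea so that chosen computations run directly on ciphertexts, which is what allows big data to be processed without exposing the plaintext.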

15.
J Healthc Eng ; 2021: 5513679, 2021.
Article in English | MEDLINE | ID: mdl-34194681

ABSTRACT

The world is experiencing an unprecedented crisis due to the coronavirus disease (COVID-19) outbreak, which has affected nearly 216 countries and territories across the globe. Since the pandemic outbreak, there has been growing interest in computational model-based diagnostic technologies to support the screening and diagnosis of COVID-19 cases using medical imaging such as chest X-ray (CXR) scans. Initial studies discovered that patients infected with COVID-19 show abnormalities in their CXR images that represent specific radiological patterns, yet detecting these patterns is challenging and time-consuming even for skilled radiologists. In this study, we propose a novel convolutional neural network- (CNN-) based deep learning fusion framework using the transfer learning concept, where parameters (weights) from different models are combined into a single model to extract features from images, which are then fed to a custom classifier for prediction. We use gradient-weighted class activation mapping to visualize the infected areas of CXR images. Furthermore, we provide feature representation through visualization to gain a deeper understanding of the class separability of the studied models with respect to COVID-19 detection. Cross-validation studies are used to assess the performance of the proposed models using open-access datasets containing healthy, COVID-19, and other pneumonia infected CXR images. Evaluation results show that the best performing fusion model can attain a classification accuracy of 95.49% with a high level of sensitivity and specificity.


Subject(s)
COVID-19/diagnostic imaging , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Humans , Lung/diagnostic imaging , SARS-CoV-2 , Sensitivity and Specificity
16.
J Healthc Eng ; 2021: 3277988, 2021.
Article in English | MEDLINE | ID: mdl-34150188

ABSTRACT

The world has been facing the COVID-19 pandemic since December 2019, and timely, efficient diagnosis of COVID-19 suspected patients plays a significant role in medical treatment. Deep transfer learning-based automated COVID-19 diagnosis on chest X-rays is needed to counter the outbreak. This work proposes a real-time Internet of Things (IoT) framework for early diagnosis of suspected COVID-19 patients using ensemble deep transfer learning. The proposed framework offers real-time communication and diagnosis of COVID-19 suspected cases. It ensembles four deep learning models: InceptionResNetV2, ResNet152V2, VGG16, and DenseNet201. Medical sensors are utilized to obtain the chest X-ray modalities, and the infection is diagnosed using the deep ensemble model stored on a cloud server. The proposed deep ensemble model is compared with six well-known transfer learning models over the chest X-ray dataset. Comparative analysis revealed that the proposed model can help radiologists diagnose COVID-19 suspected patients efficiently and in a timely manner.


Subject(s)
Artificial Intelligence , COVID-19 Testing , COVID-19/diagnosis , Internet of Things , SARS-CoV-2 , Brazil , China , Computer Simulation , Computer Systems , Databases, Factual , Deep Learning , Diagnosis, Computer-Assisted , Humans , Pattern Recognition, Automated , Radiography, Thoracic , United States , X-Rays
17.
Pattern Recognit ; 113: 107700, 2021 May.
Article in English | MEDLINE | ID: mdl-33100403

ABSTRACT

Various AI functionalities such as pattern recognition and prediction can effectively be used to diagnose (recognize) and predict coronavirus disease 2019 (COVID-19) infections and propose timely response (remedial action) to minimize the spread and impact of the virus. Motivated by this, an AI system based on deep meta learning has been proposed in this research to accelerate analysis of chest X-ray (CXR) images in automatic detection of COVID-19 cases. We present a synergistic approach to integrate contrastive learning with a fine-tuned pre-trained ConvNet encoder to capture unbiased feature representations and leverage a Siamese network for final classification of COVID-19 cases. We validate the effectiveness of our proposed model using two publicly available datasets comprising images from normal, COVID-19 and other pneumonia infected categories. Our model achieves 95.6% accuracy and AUC of 0.97 in diagnosing COVID-19 from CXR images even with a limited number of training samples.

18.
Comput Intell Neurosci ; 2021: 7615106, 2021.
Article in English | MEDLINE | ID: mdl-34976044

ABSTRACT

During the past two decades, many remote sensing image fusion techniques have been designed to improve the spatial resolution of low-spatial-resolution multispectral bands. The main objective is to fuse the low-resolution multispectral (MS) image and the high-spatial-resolution panchromatic (PAN) image to obtain a fused image with high spatial and spectral information. Recently, many artificial intelligence-based deep learning models have been designed to fuse remote sensing images, but these models do not consider the inherent image distribution difference between MS and PAN images. Therefore, the obtained fused images may suffer from gradient and color distortion problems. To overcome these problems, this paper proposes an efficient artificial intelligence-based deep transfer learning model. The Inception-ResNet-v2 model is improved by using a color-aware perceptual loss (CPL). The obtained fused images are further improved by using gradient channel prior as a postprocessing step, which preserves the color and gradient information. Extensive experiments were carried out on benchmark datasets. Performance analysis shows that the proposed model preserves color and gradient information in the fused remote sensing images more efficiently than the existing models.


Subject(s)
Artificial Intelligence , Remote Sensing Technology
19.
Sustain Cities Soc ; 64: 102582, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33178557

ABSTRACT

Sustainable smart city initiatives around the world have recently had a great impact on the lives of citizens and brought significant changes to society. More precisely, data-driven smart applications that efficiently manage sparse resources are offering a futuristic vision of smart, efficient, and secure city operations. However, the ongoing COVID-19 pandemic has revealed the limitations of existing smart city deployments; hence, the development of systems and architectures capable of providing fast and effective mechanisms to limit further spread of the virus has become paramount. An active surveillance system capable of monitoring and enforcing social distancing between people can effectively slow the spread of this deadly virus. In this paper, we propose a data-driven deep learning-based framework for the sustainable development of a smart city, offering a timely response to combat the COVID-19 pandemic through mass video surveillance. To implement social distancing monitoring, we used three deep learning-based real-time object detection models to detect people in videos captured with a monocular camera. We validated the performance of our system using a real-world video surveillance dataset for effective deployment.
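Once the detector has located people, the social-distancing check reduces to thresholding pairwise distances between them. A sketch over ground-plane centroids; the calibration from pixel coordinates to real-world units is assumed to have been done already:

```python
from itertools import combinations
from math import dist

def distancing_violations(centroids, min_dist):
    """Return index pairs of detected people standing closer than min_dist.
    centroids: ground-plane (x, y) positions output by the person detector."""
    return [(i, j)
            for (i, a), (j, b) in combinations(enumerate(centroids), 2)
            if dist(a, b) < min_dist]
```

In a live system the flagged pairs would be highlighted on the video feed or counted over time to measure compliance.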
