1.
Sci Rep ; 14(1): 14976, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38951646

ABSTRACT

Software-defined networking (SDN) is a pioneering network paradigm that strategically decouples the control plane from the data and management planes, thereby streamlining network administration. SDN's centralized network management makes configuring access control list (ACL) policies easier, which is important as these policies frequently change due to network application needs and topology modifications. Consequently, such changes may trigger modifications at the SDN controller. In response, the controller performs computational tasks to generate updated flow rules in accordance with the modified ACL policies and installs the flow rules at the data plane. Existing research has investigated reactive flow rule installation, in which changes in ACL policies result in packet violations and network inefficiencies. Network management becomes difficult due to deleting inconsistent flow rules and computing new flow rules per the modified ACL policies. The proposed solution efficiently handles ACL policy changes by automatically detecting a policy change, detecting and deleting the resulting inconsistent flow rules along with their cached copies at the controller, and adding new flow rules at the data plane. A comprehensive analysis of both proactive and reactive mechanisms in SDN is carried out to achieve this. To facilitate the evaluation of these mechanisms, the ACL policies are modeled using a 5-tuple structure comprising Source, Destination, Protocol, Ports, and Action. The resulting policies are then translated into a policy implementation file and transmitted to the controller. Subsequently, the controller utilizes the network topology and the ACL policies to calculate the necessary flow rules and caches these flow rules in a hash table in addition to installing them at the switches. The proposed solution is simulated in the Mininet emulator using a set of ACL policies, hosts, and switches.
The results are presented by varying the ACL policy at different time instances, inter-packet delay and flow timeout value. The simulation results show that the reactive flow rule installation performs better than the proactive mechanism with respect to network throughput, packet violations, successful packet delivery, normalized overhead, policy change detection time and end-to-end delay. The proposed solution, designed to be directly used on SDN controllers that support the Pyretic language, provides a flexible and efficient approach for flow rule installation. The proposed mechanism can be employed to facilitate network administrators in implementing ACL policies. It may also be integrated with network monitoring and debugging tools to analyze the effectiveness of the policy change mechanism.
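The 5-tuple policy model and the controller-side hash-table cache described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's Pyretic-based implementation; all names (`AclPolicy`, `apply_policy_change`) and the string form of a flow rule are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical 5-tuple ACL policy model: Source, Destination, Protocol, Ports, Action.
@dataclass(frozen=True)
class AclPolicy:
    src: str
    dst: str
    proto: str
    port: int
    action: str  # "allow" or "deny"

def diff_policies(old, new):
    """Return (removed, added) policies between two ACL snapshots."""
    old_set, new_set = set(old), set(new)
    return old_set - new_set, new_set - old_set

# Controller-side hash-table cache of installed flow rules, keyed by policy.
flow_cache = {}

def apply_policy_change(old, new):
    removed, added = diff_policies(old, new)
    for p in removed:                      # delete now-inconsistent flow rules
        flow_cache.pop(p, None)
    for p in added:                        # compute and cache new flow rules
        flow_cache[p] = f"match({p.src}->{p.dst}:{p.port}/{p.proto}) -> {p.action}"
    return removed, added

old = [AclPolicy("10.0.0.1", "10.0.0.2", "tcp", 80, "allow")]
new = [AclPolicy("10.0.0.1", "10.0.0.2", "tcp", 80, "deny")]
apply_policy_change([], old)               # initial install
removed, added = apply_policy_change(old, new)
print(len(removed), len(added), len(flow_cache))
```

Changing a policy's Action thus removes one stale rule and caches one new rule, mirroring the detect-delete-add cycle the abstract describes.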

2.
Data Brief ; 54: 110458, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38711739

ABSTRACT

This paper presents a dataset comprising 700 video sequences encoded in the two most popular video formats (codecs) of today, H.264 and H.265 (HEVC). Six reference sequences were encoded under different quality profiles, including several bitrates and resolutions, and were affected by various packet loss rates. Subsequently, the image quality of encoded video sequences was assessed by subjective, as well as objective, evaluation. Therefore, the enclosed spreadsheet contains results of both assessment approaches in a form of MOS (Mean Opinion Score) delivered by the absolute category ranking (ACR) procedure, SSIM (Structural Similarity Index Measure) and VMAF (Video Multimethod Assessment Fusion). All assessments are available for each test sequence. This allows a comprehensive evaluation of coding efficiency under different test scenarios without the necessity of real observers or a secure laboratory environment, as recommended by the ITU (International Telecommunication Union). As there is currently no standardized mapping function between the results of subjective and objective methods, this dataset can also be used to design and verify experimental machine learning algorithms that contribute to solving the relevant research issues.
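A first step toward the subjective-to-objective mapping functions the dataset is intended for is measuring how well an objective metric tracks MOS. A minimal sketch, using made-up scores (not values from the enclosed spreadsheet):

```python
from math import sqrt

def pearson(x, y):
    # Pearson correlation between subjective (MOS) and objective (e.g. VMAF) scores
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (made-up) scores for five sequences
mos  = [4.5, 3.8, 2.1, 1.5, 4.9]
vmaf = [92.0, 81.0, 45.0, 30.0, 96.0]
print(round(pearson(mos, vmaf), 3))
```

A high correlation would justify fitting a regression or machine learning model from VMAF/SSIM to MOS, which is the research use the dataset targets.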

3.
Heliyon ; 10(8): e29410, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38644823

ABSTRACT

Currently, the Internet of Things (IoT) generates a huge amount of traffic data in communication and information technology. The diversification and integration of IoT applications and terminals make IoT vulnerable to intrusion attacks. Therefore, it is necessary to develop an efficient Intrusion Detection System (IDS) that guarantees the reliability, integrity, and security of IoT systems. Intrusion detection is considered a challenging task because of inappropriate features existing in the input data and the slow training process. In order to address these issues, effective metaheuristic-based feature selection and deep learning techniques are developed for enhancing the IDS. Osprey Optimization Algorithm (OOA)-based feature selection is proposed for selecting the most informative features from the input, which leads to an effective differentiation between normal and attack network traffic. Moreover, the traditional sigmoid and tangent activation functions are replaced with the Exponential Linear Unit (ELU) activation function to propose a modified Bi-directional Long Short-Term Memory (Bi-LSTM). The modified Bi-LSTM is used for classifying the types of intrusion attacks. The ELU activation function mitigates vanishing gradients during back-propagation and leads to faster learning. This research is evaluated on three different datasets: N-BaIoT, the Canadian Institute for Cybersecurity Intrusion Detection Dataset 2017 (CICIDS-2017), and ToN-IoT. The empirical investigation shows that the proposed framework obtains impressive detection accuracies of 99.98%, 99.97% and 99.88% on the N-BaIoT, CICIDS-2017, and ToN-IoT datasets, respectively. Compared to peer frameworks, this framework obtains high detection accuracy with better interpretability and reduced processing time.
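The gradient behaviour behind the ELU substitution can be illustrated directly: sigmoid gradients saturate toward zero for large-magnitude inputs, whereas ELU keeps a unit gradient for positive inputs and a non-zero gradient for negative ones. A minimal sketch (not the paper's Bi-LSTM):

```python
from math import exp

def elu(x, alpha=1.0):
    # ELU: identity for x > 0, smooth exponential saturation toward -alpha for x <= 0
    return x if x > 0 else alpha * (exp(x) - 1.0)

def elu_grad(x, alpha=1.0):
    # Derivative of ELU: 1 for x > 0, alpha * exp(x) otherwise (never exactly zero)
    return 1.0 if x > 0 else alpha * exp(x)

def sigmoid_grad(x):
    # Derivative of the logistic sigmoid: saturates for large |x|
    s = 1.0 / (1.0 + exp(-x))
    return s * (1.0 - s)

print(round(sigmoid_grad(5.0), 4), elu_grad(5.0))
```

At x = 5 the sigmoid gradient has already collapsed below 0.01 while the ELU gradient is still 1, which is the mechanism behind the faster training the abstract reports.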

4.
PLoS One ; 19(3): e0299127, 2024.
Article in English | MEDLINE | ID: mdl-38536782

ABSTRACT

Depression is a serious mental health disorder affecting millions of individuals worldwide. Timely and precise recognition of depression is vital for appropriate intervention and effective treatment. Electroencephalography (EEG) has surfaced as a promising tool for inspecting the neural correlates of depression and, therefore, has the potential to contribute effectively to the diagnosis of depression. This study presents an EEG-based mental depressive disorder detection mechanism using a publicly available EEG dataset called the Multi-modal Open Dataset for Mental-disorder Analysis (MODMA). This study uses EEG data acquired from 55 participants using 3 electrodes in the resting-state condition. Twelve temporal-domain features are extracted from the EEG data by creating non-overlapping windows of 10 seconds, which are presented to a novel feature selection mechanism. The feature selection algorithm selects the optimum chunk of attributes with the highest discriminative power to classify patients with mental depressive disorder and healthy controls. The selected EEG attributes are classified using three different classification algorithms, i.e., Best-First (BF) Tree, k-nearest neighbor (KNN), and AdaBoost. The highest classification accuracy of 96.36% is achieved using the BF-Tree with a feature vector length of 12. The proposed mental depressive classification scheme outperforms existing state-of-the-art depression classification schemes in terms of the number of electrodes used for EEG recording, feature vector length, and the achieved classification accuracy. The proposed framework could be used in psychiatric settings, providing valuable support to psychiatrists.
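The windowing step described above (non-overlapping 10-second windows, followed by temporal-domain feature extraction) can be sketched as follows. The sampling rate and the particular features shown are assumptions for illustration, not the study's exact twelve features:

```python
def windows(signal, fs, seconds=10):
    """Split a 1-D EEG channel into non-overlapping windows of `seconds` length."""
    step = fs * seconds
    return [signal[i:i + step] for i in range(0, len(signal) - step + 1, step)]

def temporal_features(w):
    """A few simple time-domain features per window (illustrative subset)."""
    n = len(w)
    mean = sum(w) / n
    var = sum((x - mean) ** 2 for x in w) / n
    ptp = max(w) - min(w)   # peak-to-peak amplitude
    return [mean, var, ptp]

fs = 250                                                 # assumed sampling rate (Hz)
sig = [((i * 37) % 101) / 100 for i in range(fs * 35)]   # 35 s of synthetic data
ws = windows(sig, fs)
print(len(ws), len(temporal_features(ws[0])))
```

35 seconds of signal yields three complete 10-second windows; the trailing 5 seconds are discarded, which is the usual convention for non-overlapping windowing.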


Subject(s)
Depression , Support Vector Machine , Humans , Depression/diagnosis , Algorithms , Electroencephalography , Machine Learning
5.
Data Brief ; 53: 110125, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38370917

ABSTRACT

The Cattle Biometrics Dataset is the result of a rigorous process of data collection, encompassing a wide range of cattle photographs obtained from publicly accessible cattle markets and farms. The dataset contains a comprehensive collection of more than 8,000 annotated samples derived from several cattle breeds. This dataset represents a valuable asset for conducting research in the field of biometric recognition. The diversity of cattle in this context includes a range of ages, genders, breeds, and environmental conditions. Every photograph, taken with cameras of varying quality, is thoroughly annotated, with special attention given to the muzzle of the cattle, which is considered an excellent biometric characteristic. In addition to its obvious practical benefits, this dataset possesses significant potential for extensive reuse. Within the domain of computer vision, it serves as a catalyst for algorithmic advancements, whereas in the agricultural sector, it augments practices related to cattle management. It is also well suited to the construction and experimentation of machine learning models, especially in the context of transfer learning. Interdisciplinary collaboration is actively encouraged, facilitating the advancement of knowledge at the intersections of agriculture, computer science, and data science. The Cattle Biometrics Dataset represents a valuable resource that has the potential to stimulate significant advancements in various academic disciplines, fostering groundbreaking research and innovation.

6.
Sci Rep ; 14(1): 69, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38167902

ABSTRACT

Pakistan falls significantly below the recommended forest coverage level of 20 to 30 percent of total area, with less than 6 percent of its land under forest cover. This deficiency is primarily attributed to illicit deforestation for wood and charcoal, coupled with a failure to embrace advanced techniques for forest estimation, monitoring, and supervision. Remote sensing techniques leveraging Sentinel-2 satellite images were employed. Both single-layer stacked images and temporal layer stacked images from various dates were utilized for forest classification. The application of an artificial neural network (ANN) supervised classification algorithm yielded notable results. Using a single-layer stacked image from Sentinel-2, an impressive 91.37% training overall accuracy and 0.865 kappa coefficient were achieved, along with 93.77% testing overall accuracy and a 0.902 kappa coefficient. The temporal layer stacked image approach demonstrated even better results, yielding 98.07% overall training accuracy, 97.75% overall testing accuracy, and kappa coefficients of 0.970 and 0.965, respectively. The random forest (RF) algorithm, when applied, achieved 99.12% overall training accuracy, 92.90% testing accuracy, and kappa coefficients of 0.986 and 0.882. Notably, with the temporal layer stacked image of the Sentinel-2 satellite, the RF algorithm reached exceptional performance with 99.79% training accuracy, 96.98% validation accuracy, and kappa coefficients of 0.996 and 0.954. In terms of forest cover estimation, the ANN algorithm identified 31.07% total forest coverage in the District Abbottabad region. In comparison, the RF algorithm recorded a slightly higher 31.17% of the total forested area. This research highlights the potential of advanced remote sensing techniques and machine learning algorithms in improving forest cover assessment and monitoring strategies.
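The kappa coefficients reported above measure classification agreement corrected for chance. A minimal sketch of Cohen's kappa from a confusion matrix, with illustrative (not the study's) counts:

```python
def kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows = reference, cols = predicted)."""
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n               # observed agreement
    pe = sum(sum(row) * sum(col)
             for row, col in zip(cm, zip(*cm))) / (n * n)        # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative 2-class matrix (forest vs non-forest)
cm = [[90, 10],
      [5, 95]]
print(round(kappa(cm), 3))
```

Here 92.5% raw accuracy reduces to a kappa of 0.85 once the 50% chance agreement of this balanced two-class problem is discounted, which is why the paper reports kappa alongside overall accuracy.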

7.
Sci Rep ; 13(1): 7422, 2023 May 08.
Article in English | MEDLINE | ID: mdl-37156887

ABSTRACT

The wide availability of easy-to-access content on social media, along with advanced tools and inexpensive computing infrastructure, has made it very easy for people to produce deep fakes that spread disinformation and hoaxes. This rapid advancement can cause panic and chaos, as anyone can easily create propaganda using these technologies. Hence, a robust system to differentiate between real and fake content has become crucial in this age of social media. This paper proposes an automated method to classify deep fake images by employing Deep Learning and Machine Learning based methodologies. Traditional Machine Learning (ML) based systems employing handcrafted feature extraction fail to capture complex patterns that are poorly understood or cannot easily be represented using simple features. These systems cannot generalize well to unseen data. Moreover, these systems are sensitive to noise or variations in the data, which can reduce their performance. These problems can limit their usefulness in real-world applications where the data constantly evolves. The proposed framework initially performs an Error Level Analysis of the image to determine if the image has been modified. The image is then supplied to Convolutional Neural Networks for deep feature extraction. The resultant feature vectors are then classified via Support Vector Machines and K-Nearest Neighbors after hyper-parameter optimization. The proposed method achieved the highest accuracy of 89.5% via Residual Network and K-Nearest Neighbor. The results prove the efficiency and robustness of the proposed technique; hence, it can be used to detect deep fake images and reduce the potential threat of slander and propaganda.

8.
PLoS One ; 18(2): e0271897, 2023.
Article in English | MEDLINE | ID: mdl-36735648

ABSTRACT

In view of the challenges faced by organizations and departments concerned with agricultural capacity observations, we collected in-situ data consisting of diverse crops (more than 11 consumable vegetation types) in our pilot region of Harichand Charsadda, Khyber Pakhtunkhwa (KP), Pakistan. Our proposed Long Short-Term Memory based Deep Neural Network model was trained for land cover land use statistics generation using the acquired ground truth data, for a synergy between Planet-Scope Dove and the European Space Agency's Sentinel-2. A total of four bands from both Sentinel-2 and Planet-Scope, including Red, Green, Near-Infrared (NIR) and Normalised Difference Vegetation Index (NDVI), were used for classification. Using a short temporal frame of Sentinel-2 comprising five date images, we propose a realistic and implementable procedure for generating accurate crop statistics using remote sensing. Our self-collected dataset consists of a total of 107,899 pixels, split 70%/30% for training and testing of the model, respectively. The collected data is in the shape of field parcels, which were further split into training, validation, and test sets to avoid spatial auto-correlation. To ensure quality and accuracy, 15% of the training data was held out for validation and 15% for testing. Prediction was also performed with our trained model, and visual analysis of the area from the image showed significant results. Furthermore, a comparison is performed between the Sentinel-2 time series alone and the fused Planet-Scope and Sentinel-2 time-series datasets. The results show a weighted average accuracy of 93% for the Sentinel-2 time series and 97% for the fused Planet-Scope and Sentinel-2 time series.
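Of the four bands used above, NDVI is the only derived one; it is computed per pixel from the Red and NIR bands. A minimal sketch with illustrative reflectance values:

```python
def ndvi(nir, red, eps=1e-9):
    """Per-pixel NDVI = (NIR - Red) / (NIR + Red); eps guards against division by zero."""
    return [(n - r) / (n + r + eps) for n, r in zip(nir, red)]

# Illustrative reflectances: healthy vegetation reflects strongly in NIR, weakly in Red
nir = [0.50, 0.45, 0.10]
red = [0.08, 0.10, 0.09]
vals = ndvi(nir, red)
print([round(v, 2) for v in vals])
```

The first two pixels (NDVI above 0.6) behave like vegetation, while the third (near zero) behaves like bare soil or water, which is why NDVI is a standard input band for crop classification.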


Subject(s)
Crops, Agricultural , Satellite Imagery , Satellite Imagery/methods , Memory, Short-Term , Planets , Neural Networks, Computer
9.
PLoS One ; 18(2): e0275653, 2023.
Article in English | MEDLINE | ID: mdl-36758037

ABSTRACT

Deep learning based data-driven methods with multi-sensor spectro-temporal data are widely used for pattern identification and land-cover classification in the remote sensing domain. However, choosing the right tuning for deep learning models is extremely important, as different parameter settings can alter the performance of the model. In our research work, we have evaluated the performance of the Convolutional Long Short-Term Memory (ConvLSTM) deep learning technique over various hyper-parameter settings for an imbalanced dataset, and the setting with the highest performance is utilized for land-cover classification. The parameters considered for experimentation are batch size, number of layers in the ConvLSTM model, and number of filters in each layer of the ConvLSTM. Experiments have also been conducted on an LSTM model for comparison using the same hyper-parameters. It has been found that the two-layer ConvLSTM model with 16 filters and a batch size of 128 outperforms the other settings, with an overall validation accuracy of 97.71%. The accuracy achieved for the LSTM is 93.9% for training and 92.7% for testing.


Subject(s)
Memory, Long-Term , Neural Networks, Computer
10.
Sensors (Basel) ; 22(18)2022 Sep 13.
Article in English | MEDLINE | ID: mdl-36146254

ABSTRACT

Fog computing is one of the major components of future 6G networks. It can provide fast computing of different application-related tasks and improve system reliability due to better decision-making. Parallel offloading, in which a task is split into several sub-tasks and transmitted to different fog nodes for parallel computation, is a promising concept in task offloading. Parallel offloading suffers from challenges such as sub-task splitting and mapping of sub-tasks to the fog nodes. In this paper, we propose a novel many-to-one matching-based algorithm for the allocation of sub-tasks to fog nodes. We develop preference profiles for IoT nodes and fog nodes to reduce the task computation delay. We also propose a technique to address the externalities problem in the matching algorithm that is caused by the dynamic preference profiles. Furthermore, a detailed evaluation of the proposed technique is presented to show the benefits of each feature of the algorithm. Simulation results show that the proposed matching-based offloading technique outperforms other available techniques from the literature and improves task latency by 52% at high task loads.
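The many-to-one matching the abstract describes can be sketched with a deferred-acceptance loop in which sub-tasks propose to fog nodes in preference order and over-capacity nodes evict their least-preferred sub-task. This is a generic matching sketch under assumed preference profiles, not the paper's algorithm with its externality handling:

```python
def match(task_prefs, node_prefs, capacity):
    """Many-to-one deferred acceptance: sub-tasks propose to fog nodes.
    task_prefs[t] = ordered list of nodes t prefers (e.g. by expected delay)
    node_prefs[n] = ordered list of tasks n prefers
    capacity[n]   = how many sub-tasks node n can compute in parallel
    """
    assigned = {n: [] for n in node_prefs}
    nxt = {t: 0 for t in task_prefs}          # next node each task will propose to
    free = list(task_prefs)
    while free:
        t = free.pop()
        if nxt[t] >= len(task_prefs[t]):
            continue                          # t exhausted its list; stays unmatched
        n = task_prefs[t][nxt[t]]
        nxt[t] += 1
        assigned[n].append(t)
        if len(assigned[n]) > capacity[n]:
            # evict the least-preferred sub-task currently at node n
            worst = max(assigned[n], key=node_prefs[n].index)
            assigned[n].remove(worst)
            free.append(worst)
    return assigned

task_prefs = {"t1": ["f1", "f2"], "t2": ["f1", "f2"], "t3": ["f1", "f2"]}
node_prefs = {"f1": ["t1", "t2", "t3"], "f2": ["t3", "t2", "t1"]}
capacity = {"f1": 2, "f2": 1}
print(match(task_prefs, node_prefs, capacity))
```

All three sub-tasks first propose to f1; its capacity of two forces the eviction of t3, which then lands on f2, producing a stable allocation that respects both sides' preferences.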


Subject(s)
Algorithms , Computer Simulation , Reproducibility of Results
11.
Comput Intell Neurosci ; 2022: 5012962, 2022.
Article in English | MEDLINE | ID: mdl-35875731

ABSTRACT

COVID-19 has depleted healthcare systems around the world. Extreme conditions must be identified as soon as possible so that services and treatment can be deployed and intensified. Many biomarkers are being investigated in order to track the patient's condition. Unfortunately, these may overlap with the symptoms of other diseases, making it more difficult for a specialist to diagnose or predict the severity of the case. This research develops a Smart Healthcare System for Severity Prediction and Critical Tasks Management (SHSSP-CTM) for COVID-19 patients. On the one hand, a machine learning (ML) model is projected to predict the severity of COVID-19 disease. On the other hand, a multi-agent system is proposed to prioritize patients according to the seriousness of their COVID-19 condition and then provide complete network management from the edge to the cloud. Clinical data, including Internet of Medical Things (IoMT) sensors and Electronic Health Record (EHR) data of 78 patients from one hospital in the Wasit Governorate, Iraq, were used in this study. Different data sources are fused to generate a new feature pattern. Also, data mining techniques such as normalization and feature selection are applied. Two models, specifically logistic regression (LR) and random forest (RF), are used as baseline severity predictive models. A multi-agent algorithm (MAA), consisting of a personal agent (PA) and a fog node agent (FNA), is used to control the prioritization process of COVID-19 patients. The highest prediction result is achieved based on data fusion and selected features, where all examined classifiers observe a significant increase in accuracy. Furthermore, compared with state-of-the-art methods, the RF model showed a high and balanced prediction performance with 86% accuracy, 85.7% F-score, 87.2% precision, and 86% recall. In addition, compared to the cloud, the MAA showed very significant performance: resource usage was 66% in the proposed model versus 34% in the traditional cloud, delay was 19% in the proposed model versus 81% in the cloud, and consumed energy was 31% in the proposed model versus 69% in the cloud. The findings of this study will allow for the early detection of three severity cases, lowering mortality rates.


Subject(s)
COVID-19 , Internet of Things , Algorithms , Delivery of Health Care , Humans
12.
Sensors (Basel) ; 22(13)2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35808473

ABSTRACT

The calculation of the average sideways acceleration, based on speed and angular velocity on small roundabouts for a vehicle of up to 3.5 t gross vehicle mass, is described in this paper. Calculations of the turning radius are derived from angular velocity and an automatic selection of events, based on the lateral acceleration of the coefficient of variation within a defined time window. The calculation of the turning radius based on speed and angular velocity yields almost identical results to the calculation of the turning radius by the three-point method using GPS coordinates, as described in previous research. This means that the calculation of the turning radius, derived from the speed of GNSS/INS dual-antenna sensor and gyroscope data, yields similar results to those from the computation of the turning radius derived from the coordinates of a GNSS/INS dual-antenna sensor. The research results can be used in the development of sensors to improve road safety.
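The two quantities at the core of this and the related vehicle-dynamics entry follow from kinematics: the turning radius is speed divided by angular velocity, and the lateral (centripetal) acceleration is their product. A minimal sketch with illustrative values:

```python
def turning_radius(v, omega):
    """Turning radius from speed v (m/s) and angular velocity omega (rad/s): R = v / omega."""
    return v / omega

def lateral_acceleration(v, omega):
    """Lateral (centripetal) acceleration: a = v^2 / R = v * omega (m/s^2)."""
    return v * omega

v, omega = 8.0, 0.4           # e.g. 28.8 km/h through a small roundabout
R = turning_radius(v, omega)
a = lateral_acceleration(v, omega)
print(R, round(a / 9.81, 2))  # radius in metres, acceleration in g
```

At 8 m/s and 0.4 rad/s the radius is 20 m and the lateral acceleration about 0.33 g, which is within the 5-70 m radius range for which the paper's models are reported valid.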

13.
Sci Rep ; 12(1): 7898, 2022 05 12.
Article in English | MEDLINE | ID: mdl-35551266

ABSTRACT

This paper aims to describe and evaluate the proposed calibration model based on a neural network for post-processing of two essential meteorological parameters, namely near-surface air temperature (2 m) and 24 h accumulated precipitation. The main idea behind this work is to improve short-term (up to 3 days) forecasts delivered by the global numerical weather prediction (NWP) model run by the ECMWF (European Centre for Medium-Range Weather Forecasts). In comparison to existing local weather models, which typically provide forecasts for limited geographic areas (e.g., within one country) but are more accurate, the ECMWF model offers a prediction of weather phenomena across the world. Another significant benefit of this global NWP model is that, through several well-known online applications, its forecasts are freely available, while local model outputs are often paid. Our proposed ECMWF-enhancing model uses a combination of raw ECMWF data and additional input parameters we have identified as useful for ECMWF error estimation and its subsequent correction. The ground truth data used for the training phase of our model consist of real observations from weather stations located in 10 cities across two European countries. The results obtained from cross-validation indicate that our parametric model outperforms the accuracy of a standard ECMWF prediction and gets closer to the forecast precision of the local NWP models.


Subject(s)
Deep Learning , Forecasting , Meteorology , Temperature , Weather
14.
Sensors (Basel) ; 22(6)2022 Mar 16.
Article in English | MEDLINE | ID: mdl-35336468

ABSTRACT

In this article, we address the determination of the turning radius and of the lateral acceleration acting on a vehicle of up to 3.5 t gross vehicle mass (GVM) and its cargo in curves, based on turning radius and speed. A Global Navigation Satellite System with Inertial Navigation System (GNSS/INS) dual-antenna sensor is used to measure acceleration, speed, and vehicle position in order to determine the turning radius and derive the proper formula for the average lateral acceleration acting on the vehicle and cargo. Two methods for automatic selection of events were applied, based on a stable lateral acceleration value and on the mean square error (MSE) of turning radiuses. The models for calculating the turning radius are valid for radii within 5-70 m for both methods of automatic event selection, with mean root mean square errors (RMSE) of 1.88 m and 1.32 m. The models for calculating lateral acceleration are valid with mean RMSE of 0.022 g and 0.016 g for the two methods, respectively. The results of the paper may be applied in the planning and implementation of packing and cargo-securing procedures to calculate the average lateral acceleration acting on vehicle and cargo based on turning radius and speed for vehicles up to 3.5 t GVM. The results can potentially be applied to the deployment of autonomous vehicles in solutions grouped under the term Logistics 4.0.


Subject(s)
Acceleration , Radius , Cell Communication
15.
Sensors (Basel) ; 22(4)2022 Feb 09.
Article in English | MEDLINE | ID: mdl-35214219

ABSTRACT

The paradigm of dynamic shared access aims to provide flexible spectrum usage. Recently, the Federal Communications Commission (FCC) proposed a new dynamic spectrum management framework for sharing the 3.5 GHz (3550-3700 MHz) federal band, called the citizen broadband radio service (CBRS) band, which is governed by a spectrum access system (SAS). It is the responsibility of the SAS to manage the set of CBRS-SAS users. Users are classified into three tiers: incumbent access (IA) users, priority access license (PAL) users and general authorized access (GAA) users. In this article, a dynamic channel assignment algorithm for PAL and GAA users is designed with the goal of maximizing the transmission rate and minimizing the total cost of GAA users accessing PAL reserved channels. We propose a new mathematical model based on multi-objective optimization for the selection of PAL operators and the allocation of idle PAL reserved channels to GAA users, considering the diversity of PAL reserved channels' attributes and the diversification of GAA users' business needs. The proposed model is estimated and validated on various performance metrics through extensive simulations and compared with existing algorithms such as the Hungarian algorithm, the auction algorithm and the Gale-Shapley algorithm. The results indicate that the overall transmission rate, net cost and data-rate per unit cost remain the same in comparison to the classical Hungarian method and the auction algorithm. However, the improved model solves the resource allocation problem approximately four times faster with better load management, which validates the efficiency of our model.
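The baseline assignment problem the abstract compares against (Hungarian and auction algorithms) pairs each GAA user with one channel to maximize total rate. A tiny brute-force sketch of that objective, with made-up rates; the Hungarian algorithm finds the same optimum in polynomial time:

```python
from itertools import permutations

def best_assignment(rate):
    """Exhaustive search for the one-to-one channel assignment maximizing total rate.
    rate[u][c] = transmission rate of GAA user u on PAL reserved channel c.
    Only for illustrating the objective on a tiny instance; the Hungarian
    algorithm solves the same problem in O(n^3)."""
    n = len(rate)
    best, best_perm = -1.0, None
    for perm in permutations(range(n)):        # perm[u] = channel given to user u
        total = sum(rate[u][perm[u]] for u in range(n))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm

# Illustrative rate matrix (Mbps) for three users on three idle channels
rate = [[3.0, 1.0, 2.0],
        [1.0, 4.0, 1.5],
        [2.0, 2.5, 5.0]]
total, perm = best_assignment(rate)
print(total, perm)
```

The paper's contribution layers cost and business-need criteria on top of this single objective, which is why it is formulated as multi-objective optimization rather than a plain assignment problem.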

16.
Sensors (Basel) ; 21(19)2021 Sep 29.
Article in English | MEDLINE | ID: mdl-34640821

ABSTRACT

The widespread development of wireless technologies and the advancements in multimedia communication have brought about a positive impact on the performance of wireless transceivers. We investigate the performance of our three-stage turbo-detected system using state-of-the-art high efficiency video coding (HEVC), also known as the H.265 video standard. The system makes use of sphere packing (SP) modulation with the combinational gain technique of layered steered space-time codes (LSSTC). The proposed three-stage system is simulated for the correlated Rayleigh fading channel, and the bit-error rate (BER) curve obtained after simulation is free of any floor formation. The system employs low-complexity source-bit coding (SBC) for protecting the H.265 coded stream. An intermediate recursive unity-rate code (URC) with an infinite impulse response is employed as an inner precoder. More specifically, the URC assists in the prevention of the BER floor by distributing the information across the decoders. There is an observable gain in the BER and peak signal-to-noise ratio (PSNR) performances with increasing values of the minimum Hamming distance (dH,min) in the three-stage system. Convergence analysis of the proposed system is investigated through an extrinsic information transfer (EXIT) chart. Our proposed system demonstrates a performance gain of about 22 dB over the benchmarker utilizing LSSTC-SP for iterative source-channel detection but without exploiting the optimized SBC schemes.

17.
Sensors (Basel) ; 21(16)2021 Aug 13.
Article in English | MEDLINE | ID: mdl-34450901

ABSTRACT

The introduction of 5G, with excessively high speeds and ever-advancing cellular device capabilities, has increased the demand for high data rate wireless multimedia communication. Data compression, transmission robustness and error resilience are introduced to meet today's increased demand for high data rates. An innovative approach is to come up with a unique setup of source bit codes (SBCs) that ensures convergence, so that joint source-channel coding (JSCC) correspondingly results in a lower bit error ratio (BER). The soft-bit assisted source and channel codes are optimized jointly for optimum convergence. Source bit codes assisted by iterative detection are used with a rate-1 precoder for performance evaluation of the above-mentioned scheme of transmitting data-partitioned (DP) H.264/AVC frames from the source through a narrowband correlated Rayleigh fading channel. A novel approach using sphere packing (SP) modulation aided differential space time spreading (DSTS) in combination with SBC is designed for video transmission to cope with channel fading. Furthermore, the effects of SBCs with different Hamming distances d(H,min) but similar coding rates on objective video quality, such as peak signal-to-noise ratio (PSNR), and on the overall bit error ratio (BER) are explored. EXtrinsic Information Transfer (EXIT) charts are used for analysis of the convergence behavior of SBC and its iterative scheme. Specifically, the experiments exhibit that the proposed error protection scheme with SBC d(H,min) = 6 outperforms SBCs having the same code rate but d(H,min) = 3 by 3 dB at a PSNR degradation of 1 dB. Furthermore, simulation results show that a gain of 27 dB Eb/N0 is achieved with an SBC of code rate 1/3 compared to the benchmark rate-1 SBC codes.
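The d(H,min) parameter compared above is the minimum pairwise Hamming distance of the source bit code, which determines how many bit errors the code can detect or correct. A minimal sketch on a toy code (illustrative, not the paper's SBC):

```python
from itertools import combinations

def hamming(a, b):
    """Number of positions in which two equal-length codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def d_min(code):
    """Minimum pairwise Hamming distance d_H,min of a block code."""
    return min(hamming(a, b) for a, b in combinations(code, 2))

# Toy code with d_H,min = 3: can detect 2-bit errors, correct 1-bit errors
code = ["000000", "111000", "000111", "111111"]
print(d_min(code))
```

A code with d(H,min) = 6 can correct twice as many bit errors per codeword as one with d(H,min) = 3 at the same rate, which is consistent with the 3 dB advantage the abstract reports.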

18.
Entropy (Basel) ; 23(5)2021 May 01.
Article in English | MEDLINE | ID: mdl-34062751

ABSTRACT

This article investigates the performance of various sophisticated channel coding and transmission schemes for achieving reliable transmission of a highly compressed video stream. Novel error protection schemes, including a Non-Convergent Coding (NCC) scheme, Non-Convergent Coding assisted with Differential Space Time Spreading (DSTS) and Sphere Packing (SP) modulation (NCDSTS-SP), and Convergent Coding assisted with DSTS and SP modulation (CDSTS-SP), are analyzed using Bit Error Ratio (BER) and Peak Signal to Noise Ratio (PSNR) performance metrics. Furthermore, error reduction is achieved using a sophisticated transceiver comprising the SP modulation technique assisted by Differential Space Time Spreading. The performance of iterative Soft Bit Source Decoding (SBSD) in combination with channel codes is analyzed using various error protection setups while allocating a consistent overall bit-rate budget. Additionally, the iterative behavior of the SBSD-assisted RSC decoder is analyzed with the aid of an Extrinsic Information Transfer (EXIT) chart in order to analyze the achievable turbo cliff of the iterative decoding process. The subjective and objective video quality performance of the proposed error protection schemes is analyzed while employing the H.264 advanced video coding and H.265 high efficiency video coding standards, utilizing diverse video sequences with different resolution, motion and dynamism. It was observed that in the presence of a noisy channel, low-resolution videos outperform their high-resolution counterparts. Furthermore, it was observed that video sequences with low motion content and dynamism outperform those with high motion content and dynamism.
More specifically, it is observed that, while utilizing the H.265 video coding standard, the Non-Convergent Coding scheme assisted with DSTS and SP modulation and an enhanced transmission mechanism results in an Eb/N0 gain of 20 dB with reference to the Non-Convergent Coding and transmission mechanism at the objective PSNR value of 42 dB; both schemes employed an identical code rate. Furthermore, the Convergent Coding mechanism assisted with DSTS and SP modulation achieved superior performance with reference to its equivalent-rate Non-Convergent counterpart, with a performance gain of 16 dB at the objective PSNR value of 42 dB. Moreover, it is observed that the maximum PSNR achievable with the H.265 video coding standard is 45 dB, a gain of 3 dB with reference to the identical code rate H.264 coding scheme.
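The Eb/N0 gains quoted above are differences on a logarithmic power scale, so a 20 dB gain means the required energy per bit drops by a factor of 100. A small helper illustrating the conversion (names assumed, not from the paper):

```python
import math

def db_to_linear(x_db):
    """Convert a power ratio from decibels to linear scale."""
    return 10.0 ** (x_db / 10.0)

def linear_to_db(x):
    """Convert a linear power ratio to decibels."""
    return 10.0 * math.log10(x)

# A 20 dB Eb/N0 gain corresponds to a 100x reduction in the
# energy per bit needed to reach the same PSNR target.
```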

19.
Entropy (Basel) ; 23(2)2021 Feb 18.
Article in English | MEDLINE | ID: mdl-33670499

ABSTRACT

The reliable transmission of multimedia information coded through highly compression-efficient encoders is a challenging task. This article presents the iterative convergence performance of IrRegular Convolutional Codes (IRCCs) with the aid of multidimensional Sphere Packing (SP) modulation assisted Differential Space Time Spreading codes (IRCC-SP-DSTS) for the transmission of an H.264/Advanced Video Coding (AVC) compressed video stream. Three different regular and irregular error protection schemes are presented. In the Regular Error Protection (REP) scheme, all partitions of the video sequence are protected equally with a rate-3/4 IRCC. In Irregular Error Protection scheme-1 (IREP-1), the H.264/AVC partitions are prioritized in the order A, B and C, whereas in Irregular Error Protection scheme-2 (IREP-2) they are prioritized in the order B, A and C. The performance of the iterative paradigm of an inner IRCC and outer rate-1 precoder is analyzed by the EXtrinsic Information Transfer (EXIT) chart, and the Quality of Experience (QoE) performance of the proposed mechanism is evaluated using the Bit Error Rate (BER) metric and the Peak Signal to Noise Ratio (PSNR)-based objective quality metric. More specifically, it is concluded that the proposed IREP-2 scheme exhibits an Eb/N0 gain of 1 dB with reference to IREP-1 and an Eb/N0 gain of 0.6 dB with reference to the REP scheme at a PSNR degradation of 1 dB.

20.
Sensors (Basel) ; 21(3)2021 Jan 24.
Article in English | MEDLINE | ID: mdl-33498805

ABSTRACT

The increasing popularity of using wireless devices to handle routine tasks has increased the demand for incorporating multiple-input-multiple-output (MIMO) technology to utilize limited bandwidth efficiently. The comparatively large space available at the base station (BS) makes it straightforward to exploit MIMO technology's useful properties. From the mobile handset point of view, however, the limited space means complex procedures are required to increase the number of active antenna elements. In this paper, to address such issues, a four-element MIMO dual-band, dual-diversity dipole antenna is proposed for 5G-enabled handsets. The proposed antenna design relies on space diversity as well as pattern diversity to provide acceptable MIMO performance. The proposed dipole antenna operates simultaneously in the 3.6 and 4.7 GHz sub-6 GHz bands. The usefulness of the proposed 4×4 MIMO dipole antenna has been verified by comparing the simulated and measured results using a fabricated version of the proposed antenna. A specific absorption rate (SAR) analysis has been carried out using the CST Voxel model (a heterogeneous biological human head), which shows that the maximum SAR value for 10 g of head tissue is well below the permitted value of 2.0 W/kg. The total efficiency of each antenna element in this structure is -2.88, -3.12, -1.92 and -2.45 dB at 3.6 GHz, and -1.61, -2.19, -1.72 and -1.18 dB at 4.7 GHz, respectively. The isolation, the envelope correlation coefficient (ECC) between adjacent ports and the loss in capacity are all below the standard margins, making the structure appropriate for MIMO applications. The effect of the handgrip and the housing box on the total antenna efficiency is analyzed, and only a 5% variation is observed, owing to the careful placement of the antenna elements.
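The ECC figure quoted above is commonly estimated from the measured S-parameters of an antenna pair. A minimal sketch of the standard two-port formula (function name assumed; valid only under the lossless-antenna assumption, so it is an approximation for real handset antennas):

```python
import numpy as np

def ecc_from_s_params(s11, s21, s12, s22):
    """Envelope correlation coefficient of a two-port antenna pair,
    computed from complex S-parameters (lossless-antenna assumption)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * \
          (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den
```

Low coupling (small |S12|, |S21|) drives the numerator toward zero, which is why good port isolation yields the low ECC reported for this design.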
