Results 1 - 20 of 29
1.
PeerJ Comput Sci ; 9: e1656, 2023.
Article in English | MEDLINE | ID: mdl-38077568

ABSTRACT

Background: Software process improvement (SPI) is an indispensable phenomenon in the evolution of a software development company that adopts global software development (GSD) or in-house development. Several software development companies not only adhere to in-house development but also adopt the GSD paradigm; both development approaches are of paramount significance because of their respective advantages. Many studies have identified SPI success factors for companies that opt for in-house development, but less attention has been paid to SPI success factors in the GSD environment for large-scale software companies. Factors that contribute to the SPI success of small and medium-sized companies have been identified, but large-scale companies have been overlooked. This research aims to identify the SPI success factors of both development approaches (GSD and in-house) for large-scale software companies. Methods: Two systematic literature reviews were performed, and an industrial survey was conducted to detect additional SPI success factors for both development environments. In the subsequent step, a comparison was made to find similar SPI success factors in both development environments. Lastly, another industrial survey was conducted to compare the common SPI success factors of GSD and in-house software development in large-scale companies, to reveal which SPI success factor carries more value in which development environment. For this purpose, parametric (Pearson correlation) and non-parametric (Kendall's tau correlation and Spearman correlation) tests were performed. Results: Seventeen common SPI success factors were identified. The pinpointed common success factors expedite and contribute to SPI in both environments in the case of large-scale companies.
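As an illustration of the correlation tests named above, here is a minimal pure-Python sketch of Pearson's r and Spearman's rho applied to hypothetical factor ratings; the data, and computing the coefficients by hand rather than with a statistics package, are assumptions for illustration (Kendall's tau follows the same rank-based pattern):

```python
import math

def pearson(x, y):
    # Pearson's r: covariance normalized by the product of standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(v):
    # 1-based average ranks; ties receive the mean of their positions.
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho is Pearson's r applied to the rank-transformed data.
    return pearson(ranks(x), ranks(y))

# Hypothetical expert ratings of one SPI success factor in the GSD
# and in-house environments.
gsd = [4, 5, 3, 4, 5, 2]
inhouse = [3, 5, 2, 4, 4, 2]
print(round(pearson(gsd, inhouse), 3))   # → 0.895
print(round(spearman(gsd, inhouse), 3))  # → 0.909
```

A high value of either coefficient indicates that experts in both environments rank the factor similarly.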

2.
Multimed Tools Appl ; : 1-51, 2023 Feb 24.
Article in English | MEDLINE | ID: mdl-36855614

ABSTRACT

Because mobile technology and the usage of mobile devices have evolved swiftly and radically, several training centers have started to offer mobile training (m-training) via mobile devices. Thus, designing suitable m-training course content for training employees via mobile device applications has become an important professional development issue, allowing employees to obtain knowledge and improve their skills in the rapidly changing mobile environment. Previous studies have identified challenges in this domain. One important challenge is that no solid theoretical framework serves as a foundation to provide instructional design guidelines for interactive m-training course content that motivates and attracts trainees to the training process via mobile devices. This study proposes a framework for designing interactive m-training course content using mobile augmented reality (MAR). A mixed-methods approach was adopted. Key elements were extracted from the literature to create an initial framework. The framework was then validated through expert interviews and tested by trainees. This integration allowed us to evaluate and confirm the validity of the proposed framework. The framework follows a systematic approach guided by six key elements and offers a clear instructional design guideline checklist to ensure the design quality of interactive m-training course content. This study contributes to knowledge by establishing a framework as a theoretical foundation for designing interactive m-training course content. Additionally, it supports the m-training domain by assisting trainers and designers in creating interactive m-training courses to train employees, thus increasing their engagement in m-training. Recommendations for future studies are proposed.

3.
Sensors (Basel) ; 23(6)2023 Mar 11.
Article in English | MEDLINE | ID: mdl-36991755

ABSTRACT

The exponentially growing concern about cyber-attacks on extremely dense underwater sensor networks (UWSNs) and the evolution of the UWSN digital threat landscape have brought novel research challenges and issues. Primarily, varied protocol evaluation under advanced persistent threats is now indispensable yet very challenging. This research implements an active attack on the Adaptive Mobility of Courier Nodes in Threshold-optimized Depth-based Routing (AMCTD) protocol. A variety of attacker nodes were employed in diverse scenarios to thoroughly assess the performance of the AMCTD protocol. The protocol was exhaustively evaluated both with and without active attacks using benchmark evaluation metrics such as end-to-end delay, throughput, transmission loss, number of active nodes, and energy tax. The preliminary findings show that an active attack drastically lowers the AMCTD protocol's performance (i.e., it reduces the number of active nodes by up to 10%, reduces throughput by up to 6%, increases transmission loss by 7%, raises energy tax by 25%, and increases end-to-end delay by 20%).

4.
Sensors (Basel) ; 23(6)2023 Mar 16.
Article in English | MEDLINE | ID: mdl-36991903

ABSTRACT

The exponential growth in the number of smart devices connected to the Internet of Things (IoT), associated with various IoT-based smart applications and services, raises interoperability challenges. Service-oriented architecture for IoT (SOA-IoT) solutions have been introduced to deal with these interoperability challenges by integrating web services into sensor networks via IoT-optimized gateways to fill the gap between devices, networks, and access terminals. The main aim of service composition is to transform user requirements into a composite service execution. Different methods have been used to perform service composition, which have been classified as trust-based and non-trust-based; existing studies in this field report that trust-based approaches outperform non-trust-based ones. Trust-based service composition approaches use the trust and reputation system as a brain to select appropriate service providers (SPs) for the service composition plan. The trust and reputation system computes each candidate SP's trust value and selects the SP with the highest trust value for the service composition plan. The trust system computes the trust value from the self-observation of the service requestor (SR) and the recommendations of other service consumers (SCs). Several experimental solutions have been proposed to deal with trust-based service composition in the IoT; however, a formal method for trust-based service composition in the IoT is lacking. In this study, we used a formal method, higher-order logic (HOL), to represent the components of trust-based service management in the IoT and to verify the different behaviors in the trust system and the trust value computation processes. Our findings showed that the presence of malicious nodes performing trust attacks leads to biased trust value computation, which results in inappropriate SP selection during the service composition. The formal analysis has given us clear insight and a complete understanding, which will assist in the development of a robust trust system.
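The trust value computation described above combines self-observation with peer recommendations; a toy sketch of such an aggregation, in which the weighting scheme and all numeric values are illustrative assumptions rather than the paper's HOL model, shows how bad-mouthing recommendations bias the result:

```python
def trust_value(self_obs, recommendations, w=0.6):
    # Weighted aggregation of the SR's own observation (weight w) and the
    # mean of SC recommendations (weight 1 - w); values lie in [0, 1].
    if recommendations:
        rec = sum(recommendations) / len(recommendations)
    else:
        rec = self_obs  # no recommendations: fall back to self-observation
    return w * self_obs + (1 - w) * rec

# Honest SCs versus the same SCs plus two bad-mouthing attackers.
honest = trust_value(0.9, [0.85, 0.9, 0.88])
attacked = trust_value(0.9, [0.85, 0.9, 0.88, 0.1, 0.1])
print(round(honest, 3))    # → 0.891
print(round(attacked, 3))  # → 0.766
```

The drop from 0.891 to 0.766 illustrates the biased trust value that can push a trustworthy SP out of the composition plan.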

5.
Sensors (Basel) ; 22(24)2022 Dec 12.
Article in English | MEDLINE | ID: mdl-36560104

ABSTRACT

Travel time prediction is essential to intelligent transportation systems, directly affecting smart cities and autonomous vehicles. Accurately predicting traffic based on heterogeneous factors is highly beneficial but remains a challenging problem. The literature shows significant performance improvements when traditional machine learning and deep learning models are combined using an ensemble learning approach. This research mainly contributes by proposing an ensemble learning model based on hybridized feature spaces obtained from a bidirectional long short-term memory module and a bidirectional gated recurrent unit, followed by support vector regression to produce the final travel time prediction. The proposed approach consists of three stages: initially, six state-of-the-art deep learning models are applied to traffic data obtained from sensors. Then the feature spaces and decision scores (outputs) of the model with the highest performance are fused to obtain hybridized deep feature spaces. Finally, a support vector regressor is applied to the hybridized feature spaces to get the final travel time prediction. The performance of our proposed heterogeneous ensemble using test data showed significant improvements compared to the baseline techniques in terms of the root mean square error (53.87±3.50), mean absolute error (12.22±1.35), and the coefficient of determination (0.99784±0.00019). The results demonstrated that the hybridized deep feature space concept could produce more stable and superior results than the other baseline techniques.
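The three evaluation metrics reported above are standard; a small sketch showing how they are computed from observed and predicted travel times (the numbers are made up for illustration, not the paper's data):

```python
import math

def rmse(y, p):
    # Root mean square error.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, p)) / len(y))

def mae(y, p):
    # Mean absolute error.
    return sum(abs(a - b) for a, b in zip(y, p)) / len(y)

def r2(y, p):
    # Coefficient of determination: 1 - SSE / SST.
    mean = sum(y) / len(y)
    sse = sum((a - b) ** 2 for a, b in zip(y, p))
    sst = sum((a - mean) ** 2 for a in y)
    return 1 - sse / sst

# Hypothetical travel times (seconds) and model predictions.
y_true = [300.0, 420.0, 380.0, 510.0]
y_pred = [310.0, 400.0, 390.0, 500.0]
print(round(rmse(y_true, y_pred), 3))  # → 13.229
print(round(mae(y_true, y_pred), 3))   # → 12.5
print(round(r2(y_true, y_pred), 3))    # → 0.969
```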


Subject(s)
Machine Learning , Time Factors
6.
Sensors (Basel) ; 22(19)2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36236583

ABSTRACT

Automatic modulation recognition (AMR) is used in various domains, from general-purpose communication to many military applications, thanks to the growing popularity of the Internet of Things (IoT) and related communication technologies. In this research article, we propose an innovative idea of combining the classical mathematical technique of computing linear combinations (LCs) of cumulants with a genetic algorithm (GA) to create super-cumulants. These super-cumulants are further used to classify five digital modulation schemes on fading channels using the K-nearest neighbor (KNN) classifier. Our proposed classifier significantly improves the percentage recognition accuracy at lower SNRs when using smaller sample sizes. A comparison with existing techniques demonstrates the superiority of our proposed classifier.
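The GA-derived super-cumulants are specific to the paper, but the underlying cumulant features are standard in AMR; a sketch of a sample estimate of the fourth-order cumulant C40, whose theoretical value separates BPSK (-2) from QPSK (-1) on noise-free, unit-energy symbols (the symbol lists below are hypothetical ideal constellations, not channel data):

```python
def c40(x):
    # Sample estimate of the fourth-order cumulant C40 = M40 - 3*M20^2
    # for a zero-mean complex baseband signal x.
    n = len(x)
    m20 = sum(v * v for v in x) / n
    m40 = sum(v ** 4 for v in x) / n
    return m40 - 3 * m20 * m20

# Ideal unit-energy constellations repeated to form a symbol stream.
bpsk = [1 + 0j, -1 + 0j] * 50
s = 2 ** -0.5
qpsk = [s + s * 1j, s - s * 1j, -s + s * 1j, -s - s * 1j] * 25
print(c40(bpsk).real)  # theoretical value: -2 for BPSK
print(c40(qpsk).real)  # theoretical value: -1 for QPSK
```

On fading channels the estimates deviate from these ideals, which is what makes classifiers such as KNN on (combinations of) cumulant features useful.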


Subject(s)
Algorithms , Cluster Analysis , Mathematics
7.
Sensors (Basel) ; 22(17)2022 Aug 27.
Article in English | MEDLINE | ID: mdl-36080922

ABSTRACT

Nowadays, Human Activity Recognition (HAR) is widely used in a variety of domains, and vision- and sensor-based data enable cutting-edge technologies to detect, recognize, and monitor human activities. Several reviews and surveys on HAR have already been published, but due to the constantly growing literature, the status of HAR literature needed to be updated. Hence, this review aims to provide insights on the current state of the literature on HAR published since 2018. The ninety-five articles reviewed in this study are classified to highlight application areas, data sources, techniques, and open research challenges in HAR. The majority of existing research appears to have concentrated on daily living activities, followed by user activities based on individual and group-based activities. However, there is little literature on detecting real-time activities such as suspicious activity, surveillance, and healthcare. A major portion of existing studies used Closed-Circuit Television (CCTV) videos and mobile sensor data. Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), and Support Vector Machines (SVM) are the most prominent techniques utilized for HAR in the literature reviewed. Lastly, the limitations and open challenges that need to be addressed are discussed.


Subject(s)
Human Activities , Neural Networks, Computer , Activities of Daily Living , Humans , Monitoring, Physiologic , Support Vector Machine
8.
IEEE Access ; 10: 35094-35105, 2022.
Article in English | MEDLINE | ID: mdl-35582498

ABSTRACT

In the current era, data is growing exponentially due to advancements in smart devices. Data scientists apply a variety of learning-based techniques to identify underlying patterns in medical data to address various health-related issues. In this context, automated disease detection has now become a central concern in medical science. Such approaches can reduce the mortality rate through accurate and timely diagnosis. COVID-19 is a recently emerged viral disease that has spread all over the world and is affecting millions of people. Many countries are facing a shortage of testing kits, vaccines, and other resources due to significant and rapid growth in cases. In order to accelerate the testing process, scientists around the world have sought to create novel methods for the detection of the virus. In this paper, we propose a hybrid deep learning model based on a convolutional neural network (CNN) and a gated recurrent unit (GRU) to detect the viral disease from chest X-rays (CXRs). In the proposed model, a CNN is used to extract features, and a GRU is used as a classifier. The model has been trained on 424 CXR images with 3 classes (COVID-19, Pneumonia, and Normal). The proposed model achieves encouraging results of 0.96, 0.96, and 0.95 in terms of precision, recall, and F1-score, respectively. These findings indicate how deep learning can significantly contribute to the early detection of COVID-19 in patients through the analysis of X-ray scans. Such indications can pave the way to mitigate the impact of the disease. We believe that this model can be an effective tool for medical practitioners for early diagnosis.
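The reported precision, recall, and F1-score follow their usual per-class definitions; a sketch over hypothetical labels (not the paper's data) shows the computation:

```python
def prf(y_true, y_pred, cls):
    # Precision, recall, and F1-score for one class, from label lists.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical predictions over the three classes used in the paper.
truth = ["covid", "covid", "normal", "pneumonia", "normal", "covid"]
pred = ["covid", "normal", "normal", "pneumonia", "normal", "covid"]
print(prf(truth, pred, "covid"))  # → (1.0, 0.6666666666666666, 0.8)
```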

9.
JMIR Res Protoc ; 11(1): e27935, 2022 Jan 28.
Article in English | MEDLINE | ID: mdl-35089146

ABSTRACT

BACKGROUND: Walking recovery post stroke can be slow and incomplete. Determining effective stroke rehabilitation frequency requires the assessment of neuroplasticity changes. Neurobiological signals from the electroencephalogram (EEG) can measure neuroplasticity through incremental changes in these signals after rehabilitation. However, the changes seen with different frequencies of rehabilitation require further investigation. It is hypothesized that the association between the incremental changes in EEG signals and improved functional outcome measure scores is greater at a higher rehabilitation frequency, implying enhanced neuroplasticity changes. OBJECTIVE: The purpose of this study is to identify the changes in the neurobiological signals from EEG, to associate these with functional outcome measure scores, and to compare their associations across different therapy frequencies for gait rehabilitation among subacute stroke individuals. METHODS: A randomized, single-blinded, controlled study among patients with subacute stroke will be conducted with two groups: an intervention group (IG) and a control group (CG). Each participant in the IG and CG will receive therapy sessions three times a week (high frequency) and once a week (low frequency), respectively, for a total of 12 consecutive weeks. Each session will last for an hour with strengthening, balance, and gait training. The main variables to be assessed are the 6-Minute Walk Test (6MWT), Motor Assessment Scale (MAS), Berg Balance Scale (BBS), Modified Barthel Index (MBI), and quantitative EEG indices in the form of the delta-to-alpha ratio (DAR) and the delta-plus-theta to alpha-plus-beta ratio (DTABR). These will be measured at preintervention (R0) and postintervention (R1). Key analyses are to determine the changes in the 6MWT, MAS, BBS, MBI, DAR, and DTABR at R0 and R1 for the CG and IG. The changes in the DAR and DTABR will be analyzed for association with the changes in the 6MWT, MAS, BBS, and MBI to measure neuroplasticity changes for both the CG and IG. RESULTS: We have recruited 18 participants so far. We expect to publish our results in early 2023. CONCLUSIONS: These associations are expected to be positive in both groups, with a higher correlation in the IG compared to the CG, reflecting enhanced neuroplasticity changes and an objective evaluation of the dose-response relationship. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/27935.
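The two quantitative EEG indices above are simple ratios of spectral band powers; a minimal sketch with hypothetical band-power values (in practice the powers would come from spectral analysis of the EEG recordings):

```python
def dar(delta, alpha):
    # Delta-to-alpha ratio from absolute band powers.
    return delta / alpha

def dtabr(delta, theta, alpha, beta):
    # Delta-plus-theta to alpha-plus-beta ratio.
    return (delta + theta) / (alpha + beta)

# Hypothetical band powers (uV^2) for one channel at preintervention.
d, t, a, b = 40.0, 20.0, 10.0, 10.0
print(dar(d, a))          # → 4.0
print(dtabr(d, t, a, b))  # → 3.0
```

Decreases in these ratios after rehabilitation (relatively less slow-wave power) are the incremental changes the protocol proposes to correlate with the functional outcome scores.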

10.
Expert Syst ; 39(3): e12823, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34898799

ABSTRACT

Currently, many deep learning models are being used to classify COVID-19 and normal cases from chest X-rays. However, the available data (X-rays) for COVID-19 are too limited to train a robust deep learning model. Researchers have used data augmentation techniques to tackle this issue by increasing the number of samples through flipping, translation, and rotation. However, by adopting this strategy, the model's ability to learn high-dimensional features for a given problem is compromised; hence, there is a high risk of overfitting. In this paper, we used a deep convolutional generative adversarial network (DCGAN) to address this issue, which generates synthetic images for all the classes (Normal, Pneumonia, and COVID-19). To validate whether the generated images are accurate, we used the k-means clustering technique with three clusters (Normal, Pneumonia, and COVID-19). We only selected the X-ray images classified into the correct clusters for training. In this way, we formed a synthetic dataset with three classes. The generated dataset was then fed to EfficientNetB4 for training. The experiments achieved promising results of 95% in terms of area under the curve (AUC). To validate that our network has learned discriminative features associated with the lungs in the X-rays, we used the Grad-CAM technique to visualize the underlying pattern, which leads the network to its final decision.
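The cluster-based filtering step can be sketched with a toy one-dimensional k-means; the real pipeline would cluster image feature vectors, so the scalar features, the three well-separated groups, and the filtering rule below are illustrative assumptions only:

```python
def kmeans_1d(xs, centers, iters=50):
    # Lloyd's algorithm on scalar features: assign each point to its
    # nearest center, then move each center to the mean of its members.
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [min(range(len(centers)), key=lambda j: (x - centers[j]) ** 2)
                  for x in xs]
        for j in range(len(centers)):
            members = [x for x, l in zip(xs, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

# Hypothetical per-image features for generated samples, plus the class
# (0=Normal, 1=Pneumonia, 2=COVID-19) each image was generated to represent.
feats = [0.1, 0.2, -0.1, 5.0, 5.2, 4.9, 10.1, 9.8, 10.0]
intended = [0, 0, 0, 1, 1, 1, 2, 2, 2]
centers, labels = kmeans_1d(feats, [0.0, 5.0, 10.0])
# Keep only images that land in the cluster matching their intended class.
kept = [x for x, l, c in zip(feats, labels, intended) if l == c]
print(len(kept))  # → 9
```

A generated image whose features fall into the wrong cluster would be discarded before training.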

11.
Sensors (Basel) ; 21(22)2021 Nov 09.
Article in English | MEDLINE | ID: mdl-34833507

ABSTRACT

Effective communication in vehicular networks depends on the scheduling of wireless channel resources. There are two types of channel resource scheduling in 3GPP Release 14: (1) scheduling controlled by the eNodeB and (2) distributed scheduling carried out by every vehicle, known as Autonomous Resource Selection (ARS). The most suitable resource scheduling for vehicle safety applications is the ARS mechanism. ARS includes (a) counter selection (i.e., specifying the number of subsequent transmissions) and (b) resource reselection (specifying the reuse of the same resource after counter expiry). ARS is a decentralized approach for resource selection. Therefore, resource collisions can occur during the initial selection, where multiple vehicles might select the same resource, resulting in packet loss. ARS is not adaptive to vehicle density and employs a uniform random selection probability approach for counter selection and reselection. As a result, it can prevent some vehicles from transmitting in a congested vehicular network. To this end, this paper presents Truly Autonomous Resource Selection (TARS) for vehicular networks. TARS treats resource allocation as a problem of locally detecting the resources selected at neighbor vehicles to avoid resource collisions. The paper also models the behavior of counter selection and resource block reselection on resource collisions using a Discrete Time Markov Chain (DTMC). Observations from the model are used to propose a fair policy of counter selection and resource reselection in ARS. The simulation of the proposed TARS mechanism showed better performance in terms of resource collision probability and packet delivery ratio when compared with the LTE Mode 4 standard and with a competing approach proposed by Jianhua He et al.
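The initial-selection collision described above is a birthday-problem effect: independent uniform choices over a finite resource pool collide surprisingly often. A small Monte Carlo sketch (the vehicle and resource counts are arbitrary illustrations, not taken from the 3GPP configuration):

```python
import random

def collision_probability(vehicles, resources, trials=20000, seed=1):
    # Monte Carlo estimate of the probability that at least two vehicles
    # autonomously pick the same resource in one selection window.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        picks = [rng.randrange(resources) for _ in range(vehicles)]
        if len(set(picks)) < vehicles:
            hits += 1
    return hits / trials

# 10 vehicles choosing among 100 resources; the analytic birthday-problem
# value is about 0.37, so collisions are far from rare.
print(collision_probability(10, 100))
```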


Subject(s)
Computer Simulation
12.
Sensors (Basel) ; 20(18)2020 Sep 21.
Article in English | MEDLINE | ID: mdl-32967124

ABSTRACT

The domain of underwater wireless sensor networks (UWSNs) has received a lot of attention recently due to its advanced capabilities in ocean surveillance, marine monitoring, and application deployment for detecting underwater targets. However, the literature has not compiled the state of the art along this direction to discover the recent advancements fuelled by underwater sensor technologies. Hence, this paper offers the newest analysis of the available evidence by reviewing studies from the past five years on various aspects that support network activities and applications in UWSN environments. This work was motivated by the need for robust and flexible solutions that can satisfy the requirements for the rapid development of underwater wireless sensor networks. This paper identifies the key requirements for achieving essential services as well as common platforms for UWSNs. It also contributes a taxonomy of the critical elements in UWSNs by devising a classification of architectural elements, communications, routing protocols and standards, security, and applications of UWSNs. Finally, the major challenges that remain open are presented as a guide for future research directions.

13.
PLoS One ; 14(10): e0222759, 2019.
Article in English | MEDLINE | ID: mdl-31577809

ABSTRACT

This paper presents the Hybrid Scalable-Minimized-Butterfly-Fat-Tree (H-SMBFT) topology for on-chip communication. The main aspects of this work are the description of the architectural design and characteristics as well as a comparative analysis against two established indirect topologies, namely the Butterfly-Fat-Tree (BFT) and the Scalable-Minimized-Butterfly-Fat-Tree (SMBFT). Simulation results demonstrate that the proposed topology outperforms its predecessors in terms of performance, area, and power dissipation. Specifically, it improves the link interconnectivity between routing levels, such that the number of required links is reduced. This results in reduced router complexity and shortened routing paths between any pair of communicating nodes in the network. Moreover, simulation results under synthetic as well as real-world embedded application workloads reveal that H-SMBFT can reduce the average latency by up to 35.63% and 17.36% compared to BFT and SMBFT, respectively. In addition, the power dissipation of the network can be reduced by up to 33.82% and 19.45%, while energy consumption can be improved by up to 32.91% and 16.83% compared to BFT and SMBFT, respectively.


Subject(s)
Algorithms , Computer Communication Networks , Electric Power Supplies , Computer Simulation
14.
Cyberpsychol Behav Soc Netw ; 22(7): 433-450, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31074639

ABSTRACT

Social media has taken an important place in the routine life of people. Every single second, users from all over the world share interests, emotions, and other useful information, which leads to the generation of huge volumes of user-generated data. Profiling users by extracting attribute information from social media data has been gaining importance with the increasing user-generated content over social media platforms. Meeting users' satisfaction levels for information collection is becoming more challenging because of the noise generated by explosively increasing online data, which affects the process of information collection. Social profiling is an emerging approach to overcome the challenges faced in meeting users' demands by introducing the concept of personalized search while taking into consideration user profiles generated using social network data. This study reviews and classifies research inferring users' social profile attributes from social media data as individual and group profiling. The existing techniques, along with the utilized data sources, limitations, and challenges, are highlighted. The prominent approaches adopted include machine learning, ontology, and fuzzy logic. Social media data from Twitter and Facebook have been used by most of the studies to infer the social attributes of users. The studies show that user social attributes, including age, gender, home location, wellness, emotion, opinion, relation, and influence, still need to be explored. This review gives researchers insights into the current state of the literature and the challenges in inferring user profile attributes using social media data.


Subject(s)
Data Collection/methods , Social Identification , Social Media , Female , Humans , Male , Personal Satisfaction
15.
Sensors (Basel) ; 19(1)2019 Jan 04.
Article in English | MEDLINE | ID: mdl-30621241

ABSTRACT

Multivariate data sets are common in various application areas, such as wireless sensor networks (WSNs) and DNA analysis. A robust mechanism is required to compute their similarity indexes regardless of the environment and problem domain. This study describes the usefulness of a non-metric-based approach (i.e., longest common subsequence) in computing similarity indexes. Several non-metric-based algorithms are available in the literature; the most robust and reliable one is the dynamic programming-based technique. However, dynamic programming-based techniques are considered inefficient, particularly in the context of multivariate data sets. Furthermore, the classical approaches are not powerful enough in scenarios with multivariate data sets, sensor data, or when the similarity indexes are extremely high or low. To address this issue, we propose an efficient algorithm to measure the similarity indexes of multivariate data sets using a non-metric-based methodology. The proposed algorithm performs exceptionally well on numerous multivariate data sets compared with the classical dynamic programming-based algorithms. The performance of the algorithms is evaluated on the basis of several benchmark data sets and a dynamic multivariate data set obtained from a WSN deployed in the Ghulam Ishaq Khan (GIK) Institute of Engineering Sciences and Technology. Our evaluation suggests that the proposed algorithm can be approximately 39.9% more efficient than its counterparts for various data sets in terms of computational time.
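The longest common subsequence measure at the core of the paper can be illustrated with the classical dynamic-programming recurrence; the symbol strings and the normalization below are hypothetical stand-ins for quantized sensor readings, and the paper's own algorithm improves on exactly this baseline:

```python
def lcs_length(a, b):
    # Classical O(len(a)*len(b)) dynamic-programming computation of the
    # longest common subsequence length, keeping only one table row.
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0]
        for j, y in enumerate(b, 1):
            if x == y:
                cur.append(prev[j - 1] + 1)
            else:
                cur.append(max(prev[j], cur[-1]))
        prev = cur
    return prev[-1]

def similarity_index(a, b):
    # A simple LCS-based similarity index normalized to [0, 1].
    return lcs_length(a, b) / max(len(a), len(b))

# Two hypothetical univariate readings, quantized to symbols.
s1 = "ABCBDAB"
s2 = "BDCABA"
print(lcs_length(s1, s2))                  # → 4  (e.g., "BCBA")
print(round(similarity_index(s1, s2), 3))  # → 0.571
```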

16.
PLoS One ; 12(4): e0174715, 2017.
Article in English | MEDLINE | ID: mdl-28384312

ABSTRACT

Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance, and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on the HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method shows reasonable CPU utilization results. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. This paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and interoperability.


Subject(s)
Computer Communication Networks , Software , Algorithms , Cluster Analysis , Internet , Reproducibility of Results
17.
PLoS One ; 11(9): e0161340, 2016.
Article in English | MEDLINE | ID: mdl-27658194

ABSTRACT

A wireless sensor network (WSN) comprises small sensor nodes with limited energy capabilities. The power constraints of WSNs necessitate efficient energy utilization to extend the overall lifetime of these networks. We propose a distance-based and low-energy adaptive clustering (DISCPLN) protocol to address the green issue of efficient energy utilization in WSNs. We also enhance our proposed protocol into the multi-hop-DISCPLN protocol to increase the lifetime of the network in terms of high throughput with minimum delay time and packet loss. We also propose the mobile-DISCPLN protocol to maintain the stability of the network. The modelling and comparison of these protocols with their corresponding benchmarks exhibit promising results.

18.
Appl Opt ; 54(1): 37-45, 2015 Jan 01.
Article in English | MEDLINE | ID: mdl-25967004

ABSTRACT

Lens system design is an important factor in image quality. The main aspect of the lens system design methodology is the optimization procedure. Since optimization is a complex, nonlinear task, soft computing optimization algorithms can be used. There are many tools that can be employed to measure optical performance, but the spot diagram is the most useful. The spot diagram gives an indication of the image of a point object. In this paper, the spot size radius is considered an optimization criterion. An intelligent soft computing scheme, support vector machines (SVMs) coupled with the firefly algorithm (FFA), is implemented. The performance of the proposed estimators is confirmed with simulation results. The results of the proposed SVM-FFA model have been compared with support vector regression (SVR), artificial neural networks, and genetic programming methods. The results show that the SVM-FFA model performs more accurately than the other methodologies. Therefore, SVM-FFA can be used as an efficient soft computing technique in the optimization of lens system designs.
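The SVM part of the scheme is library territory, but the firefly algorithm itself is compact; a minimal sketch minimizing a convex stand-in for the spot-size objective (the population size, damping schedule, search range, and toy objective are all assumptions for illustration, not the paper's configuration):

```python
import math
import random

def firefly_minimize(f, dim, n=15, iters=60, alpha=0.2, beta0=1.0,
                     gamma=1.0, seed=3):
    # Minimal firefly algorithm: each firefly moves toward every brighter
    # (lower-cost) one, with attractiveness decaying with squared distance
    # and a randomization term that is damped over the iterations.
    rng = random.Random(seed)
    pop = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    cost[i] = f(pop[i])
        alpha *= 0.95  # damp the random walk as the swarm converges
    return min(pop, key=f)

# Convex stand-in for the spot-size radius as a function of two lens
# parameters; the true objective would come from ray tracing.
best = firefly_minimize(lambda v: v[0] ** 2 + v[1] ** 2, dim=2)
print(best)
```

In the paper's scheme the FFA tunes the SVM hyperparameters rather than the lens parameters directly; the mechanics of the swarm update are the same.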

19.
PLoS One ; 10(1): e0115324, 2015.
Article in English | MEDLINE | ID: mdl-25602616

ABSTRACT

Wireless sensor networks (WSNs) are ubiquitous and pervasive, and therefore highly susceptible to a number of security attacks. The Denial of Service (DoS) attack is considered the most dominant and major threat to WSNs, and the wormhole attack represents one of its potential forms. Crafting a wormhole attack is comparatively simple, though its detection is nontrivial. By contrast, extant wormhole defense methods need both specialized hardware and strong assumptions to defend against static and dynamic wormhole attacks. This paper introduces a novel scheme to detect wormhole attacks in a geographic routing protocol (DWGRP). The main contribution of this paper is to detect malicious nodes and select the best and most reliable neighbors based on a pairwise key pre-distribution technique and the beacon packet. Moreover, this novel technique is not subject to any specific assumption, requirement, or specialized hardware, such as a precisely synchronized clock. The proposed detection method is validated by comparison with several related techniques in the literature, such as Received Signal Strength (RSS), Authentication of Nodes Scheme (ANS), Wormhole Detection using Hound Packet (WHOP), and Wormhole Detection with Neighborhood Information (WDI), using the NS-2 simulator. The analysis of the simulations shows promising results with a low False Detection Rate (FDR) in geographic routing protocols.
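The pairwise key pre-distribution idea the detection builds on can be sketched briefly: each node is pre-loaded with a random key ring drawn from a global pool, and a beacon from a claimed neighbor that shares no key ID can be treated as suspicious (the pool and ring sizes below are arbitrary, not the paper's parameters):

```python
import random

def make_ring(pool_size, ring_size, rng):
    # Each node is pre-loaded with a random subset of the global key pool.
    return set(rng.sample(range(pool_size), ring_size))

def shares_key(ring_a, ring_b):
    # Two neighbors can establish a pairwise key iff their rings intersect;
    # a beacon advertising no common key ID fails this check.
    return bool(ring_a & ring_b)

rng = random.Random(7)
node_a = make_ring(1000, 80, rng)
node_b = make_ring(1000, 80, rng)
# For an 80-key ring over a 1000-key pool, two honest nodes almost
# certainly share at least one key.
print(shares_key(node_a, node_b))
```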


Subject(s)
Computer Communication Networks , Computer Security , Models, Theoretical , Wireless Technology , Algorithms
20.
ScientificWorldJournal ; 2014: 269357, 2014.
Article in English | MEDLINE | ID: mdl-25121114

ABSTRACT

Cloud computing is a significant shift of computational paradigm where computing as a utility and storing data remotely have great potential. Enterprises and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy due to the data owners' lack of control and physical possession of the data. To address this issue, researchers have focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are only applicable to static archive data and cannot audit dynamically updated outsourced data. We propose an effectual RDA technique based on algebraic signature properties for cloud storage systems and also present a new data structure capable of efficiently supporting dynamic data operations such as append, insert, modify, and delete. Moreover, this data structure empowers our method to be applicable to large-scale data with minimum computation cost. A comparative analysis with state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of the computation and communication overhead on the auditor and server.
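The algebraic-signature property that makes such auditing cheap is linearity: the signature of a blockwise combination equals the combination of the signatures, so an auditor can check aggregated proofs without holding the file blocks. A toy sketch over a prime field (production schemes operate in a Galois field GF(2^w) on real file blocks, so the field, the element g, and the block values here are illustrative assumptions):

```python
P = 2 ** 31 - 1  # a Mersenne prime; real schemes work in GF(2^w)
G = 7            # the fixed field element g used by the signature

def signature(blocks):
    # Algebraic signature: sig(B) = sum_i B[i] * g^i  (mod P).
    return sum(b * pow(G, i, P) for i, b in enumerate(blocks)) % P

# Linearity: sig(f1 + f2) == sig(f1) + sig(f2) blockwise.
f1 = [11, 22, 33]
f2 = [5, 6, 7]
combined = [(a + b) % P for a, b in zip(f1, f2)]
assert signature(combined) == (signature(f1) + signature(f2)) % P
print(signature(f1))  # → 1782  (11 + 22*7 + 33*49)
```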


Subject(s)
Algorithms , Computer Security , Information Management/methods , Information Storage and Retrieval/methods , Models, Theoretical , Research Design , Computer Simulation