Results 1 - 20 of 852
1.
PeerJ Comput Sci ; 10: e2128, 2024.
Article in English | MEDLINE | ID: mdl-38983206

ABSTRACT

Fog computing has emerged as a prospective paradigm to address the computational requirements of IoT applications, extending the capabilities of cloud computing to the network edge. Task scheduling is pivotal in enhancing energy efficiency, optimizing resource utilization and ensuring the timely execution of tasks within fog computing environments. This article presents a comprehensive review of the advancements in task scheduling methodologies for fog computing systems, covering priority-based, greedy heuristics, metaheuristics, learning-based, hybrid heuristics, and nature-inspired heuristic approaches. Through a systematic analysis of relevant literature, we highlight the strengths and limitations of each approach and identify key challenges facing fog computing task scheduling, including dynamic environments, heterogeneity, scalability, resource constraints, security concerns, and algorithm transparency. Furthermore, we propose future research directions to address these challenges, including the integration of machine learning techniques for real-time adaptation, leveraging federated learning for collaborative scheduling, developing resource-aware and energy-efficient algorithms, incorporating security-aware techniques, and advancing explainable AI methodologies. By addressing these challenges and pursuing these research directions, we aim to facilitate the development of more robust, adaptable, and efficient task-scheduling solutions for fog computing environments, ultimately fostering trust, security, and sustainability in fog computing systems and facilitating their widespread adoption across diverse applications and domains.
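As a concrete illustration of the simplest family surveyed here, the sketch below (ours, not taken from any reviewed paper) implements a priority-based greedy heuristic: tasks are sorted by deadline and each is placed on the fog node that becomes available earliest. All task and node fields are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    deadline: float                        # seconds; earlier deadline = higher priority
    name: str = field(compare=False)
    length: float = field(compare=False)   # execution time on a reference node

def schedule(tasks, node_free_at):
    """Earliest-deadline-first placement on the earliest-available fog node."""
    plan = []
    for task in sorted(tasks):                          # priority order
        node = min(node_free_at, key=node_free_at.get)  # earliest-free node
        start = node_free_at[node]
        node_free_at[node] = start + task.length        # node busy until task ends
        plan.append((task.name, node, start, start + task.length))
    return plan

print(schedule([Task(5.0, "t1", 2.0), Task(3.0, "t2", 1.0)],
               {"fog-a": 0.0, "fog-b": 0.5}))
```

Metaheuristic and learning-based schedulers replace the two greedy choices above (task ordering and node selection) with searched or learned policies.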

2.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38941113

ABSTRACT

This study describes the development of a resource module that is part of a learning platform named "NIGMS Sandbox for Cloud-based Learning" (https://github.com/NIGMS/NIGMS-Sandbox). The overall genesis of the Sandbox is described in the editorial NIGMS Sandbox at the beginning of this Supplement. This module delivers learning materials on de novo transcriptome assembly using Nextflow in an interactive format that uses appropriate cloud resources for data access and analysis. Cloud computing is a powerful new means by which biomedical researchers can access resources and capacity that were previously either unattainable or prohibitively expensive. To take advantage of these resources, however, the biomedical research community needs new skills and knowledge. We present here a cloud-based training module, developed in conjunction with Google Cloud, Deloitte Consulting, and the NIH STRIDES Program, that uses the biological problem of de novo transcriptome assembly to demonstrate and teach the concepts of computational workflows (using Nextflow) and cost- and resource-efficient use of Cloud services (using Google Cloud Platform). Our work highlights the reduced necessity of on-site computing resources and the accessibility of cloud-based infrastructure for bioinformatics applications.
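Since the module teaches workflow execution rather than a library API, a minimal way to reproduce its pattern is to drive Nextflow from Python. The pipeline name, profile, and `--reads` parameter below are placeholders, not the module's actual interface.

```python
import subprocess

# Placeholder pipeline and parameters; only `nextflow run`, `-profile`,
# and `-work-dir` are standard Nextflow CLI options.
cmd = [
    "nextflow", "run", "example-org/denovo-assembly",    # hypothetical pipeline
    "-profile", "gcp",                                   # assumed Google Cloud profile name
    "-work-dir", "gs://my-bucket/work",                  # stage work files in Cloud Storage
    "--reads", "gs://my-bucket/reads/*_{1,2}.fastq.gz",  # hypothetical pipeline parameter
]
subprocess.run(cmd, check=True)   # raises CalledProcessError if the run fails
```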


Subject(s)
Cloud Computing , Transcriptome , Computational Biology/methods , Computational Biology/education , Software , Humans , Gene Expression Profiling/methods , Internet
3.
Sci Rep ; 14(1): 14668, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38918484

ABSTRACT

A growing dependence on real-time positioning apps for navigation, safety, and location-based services necessitates a deep understanding of latency challenges within cloud-based Global Navigation Satellite System (GNSS) solutions. This study analyses a GNSS real-time positioning app on smartphones that utilizes cloud computing for positioning data delivery. The study investigates and quantifies diverse latency contributors throughout the system architecture, including GNSS signal acquisition, data transmission, cloud processing, and result dissemination. Controlled experiments and real-world scenarios are employed to assess the influence of network conditions, device capabilities, and cloud server load on overall positioning latency. Findings highlight system bottlenecks and their relative contributions to latency. Additionally, practical recommendations are presented for developers and cloud service providers to mitigate these challenges and guarantee an optimal user experience for real-time positioning applications. This study not only elucidates the complex interplay of factors affecting GNSS app latency, but also paves the way for future advancements in cloud-based positioning solutions, ensuring the accuracy and timeliness critical for safety-critical and emerging applications.
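The latency decomposition described above can be reproduced with simple stage-level instrumentation; the sketch below is a generic harness with placeholder stages and budgets, not the study's measurement code.

```python
import time

def timed(stage, fn, budget, *args):
    """Run one pipeline stage, record elapsed wall-clock time, warn on overrun."""
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    print(f"{stage:<20}{elapsed*1000:8.1f} ms"
          + ("  <-- over budget" if elapsed > budget else ""))
    return result

# Placeholder stage functions standing in for the real pipeline.
acquire = lambda: time.sleep(0.05) or b"raw"           # GNSS signal acquisition
upload  = lambda raw: time.sleep(0.02) or raw          # data transmission
process = lambda raw: time.sleep(0.03) or {"lat": 0}   # cloud processing
deliver = lambda fix: time.sleep(0.01) or fix          # result dissemination

fix = timed("acquisition", acquire, 0.10)
fix = timed("transmission", upload, 0.05, fix)
fix = timed("cloud processing", process, 0.05, fix)
fix = timed("dissemination", deliver, 0.02, fix)
```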

4.
JMIR Biomed Eng ; 9: e50175, 2024 Mar 06.
Article in English | MEDLINE | ID: mdl-38875671

ABSTRACT

BACKGROUND: The increasing adoption of telehealth Internet of Things (IoT) devices in health care informatics has led to concerns about energy use and data processing efficiency. OBJECTIVE: This paper introduces an innovative model that integrates telehealth IoT devices with a fog and cloud computing-based platform, aiming to enhance energy efficiency in telehealth IoT systems. METHODS: The proposed model incorporates adaptive energy-saving strategies, localized fog nodes, and a hybrid cloud infrastructure. Simulation analyses were conducted to assess the model's effectiveness in reducing energy consumption and enhancing data processing efficiency. RESULTS: Simulation results demonstrated energy savings, with a 2% reduction in energy consumption achieved through the adaptive energy-saving strategies; simulations used sample sizes of 10 to 40, lending statistical robustness to the findings. CONCLUSIONS: The proposed model successfully addresses energy and data processing challenges in telehealth IoT scenarios. By integrating fog computing for local processing and a hybrid cloud infrastructure, substantial energy savings are achieved. Ongoing research will focus on refining the energy conservation model and exploring additional functional enhancements for broader applicability in health care and industrial contexts.
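One plausible reading of an adaptive energy-saving strategy is duty-cycled sensing; the sketch below backs off the sampling interval while readings are stable and snaps back on change. All thresholds are our illustrative assumptions, not the paper's parameters.

```python
class AdaptiveSampler:
    """Duty-cycles a telehealth sensor: doubles the sampling interval while
    consecutive readings stay within a tolerance, resets it on change."""
    def __init__(self, base=1.0, max_interval=64.0, tol=0.5):
        self.base, self.max_interval, self.tol = base, max_interval, tol
        self.interval, self.last = base, None

    def update(self, reading):
        if self.last is not None and abs(reading - self.last) <= self.tol:
            self.interval = min(self.interval * 2, self.max_interval)  # stable: back off
        else:
            self.interval = self.base                                  # change: full rate
        self.last = reading
        return self.interval

s = AdaptiveSampler()
print([s.update(r) for r in [36.6, 36.6, 36.7, 38.2, 38.2]])
# -> [1.0, 2.0, 4.0, 1.0, 2.0]: the jump to 38.2 restores full-rate sampling
```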

5.
PeerJ Comput Sci ; 10: e2024, 2024.
Article in English | MEDLINE | ID: mdl-38855254

ABSTRACT

With cutting-edge advancements in computer vision, facial expression recognition (FER) is an active research area due to its broad practical applications, and it has been utilized in various fields, including education, advertising and marketing, entertainment and gaming, health, and transportation. FER-based systems are rapidly evolving in response to new challenges, and significant research has been conducted on both basic and compound facial expressions of emotions; however, measuring emotions remains challenging. Motivated by these recent advancements and challenges, in this article we discuss the basics of FER and its architectural elements, FER applications and use cases, leading global companies employing FER, and the interconnection between FER, the Internet of Things (IoT), and cloud computing, and we summarize in depth the open challenges facing FER technologies and future directions, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method. We close with conclusions and thoughts on future work. By overcoming the challenges and pursuing the directions identified in this study, researchers can advance the discipline of facial expression recognition.

6.
Network ; : 1-26, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829364

ABSTRACT

Load-balancing methods distribute a dynamic workload evenly among all nodes, such as hosts or VMs; load balancing in the cloud is also known as Load Balancing as a Service (LBaaS). In this research work, the load is balanced through Virtual Machine (VM) migration carried out by the proposed Sail Jellyfish Optimization (SJFO), formed by combining the Sail Fish Optimizer (SFO) and the Jellyfish Search (JS) optimizer. In the cloud model, many Physical Machines (PMs) are present, each comprising many VMs. Each VM runs many tasks, which depend on parameters such as Central Processing Unit (CPU), memory, Million Instructions per Second (MIPS), capacity, total number of processing entities, and bandwidth. The load is predicted by a Deep Recurrent Neural Network (DRNN) and compared with a threshold value, and VM migration is performed based on the predicted values. The performance of SJFO-VM is analysed using metrics such as capacity, load, and resource utilization. The proposed method shows better performance, with a higher capacity of 0.598, a lower load of 0.089, and a lower resource utilization of 0.257.
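The predict-compare-migrate loop can be summarized in a few lines; below, a dictionary of per-PM load forecasts stands in for the DRNN output, and the threshold value is an illustrative assumption.

```python
def plan_vm_migrations(pm_load, pm_vms, threshold=0.8):
    """Move one VM off each PM whose predicted load exceeds the threshold,
    onto the least-loaded PM."""
    moves = []
    for pm in sorted(pm_load, key=pm_load.get, reverse=True):  # most loaded first
        if pm_load[pm] > threshold and pm_vms[pm]:
            dest = min(pm_load, key=pm_load.get)   # least-loaded PM as target
            vm = pm_vms[pm].pop()                  # candidate VM to migrate
            pm_vms[dest].append(vm)
            moves.append((vm, pm, dest))
    return moves

print(plan_vm_migrations({"pm1": 0.95, "pm2": 0.30},
                         {"pm1": ["vm-a", "vm-b"], "pm2": []}))
# -> [('vm-b', 'pm1', 'pm2')]
```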

7.
Ecol Evol ; 14(6): e11341, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826171

ABSTRACT

To address our climate emergency, "we must rapidly, radically reshape society"-Johnson & Wilkinson, All We Can Save. In science, reshaping requires formidable technical (cloud, coding, reproducibility) and cultural shifts (mindsets, hybrid collaboration, inclusion). We are a group of cross-government and academic scientists that are exploring better ways of working and not being too entrenched in our bureaucracies to do better science, support colleagues, and change the culture at our organizations. We share much-needed success stories and action for what we can all do to reshape science as part of the Open Science movement and 2023 Year of Open Science.

8.
Network ; : 1-22, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38934441

ABSTRACT

Cloud services are among the most quickly developing technologies, and load balancing is recognized as a fundamental challenge for achieving energy efficiency. The primary function of load balancing is to deliver optimal services by spreading the load over multiple resources, while fault tolerance improves the reliability and accessibility of the network. In this paper, a hybrid Deep Learning-based load balancing algorithm is developed. Initially, tasks are allocated to all VMs in a round-robin manner. A Deep Embedding Cluster (DEC) then uses the Central Processing Unit (CPU), bandwidth, memory, processing elements, and frequency scaling factors to determine whether a VM is overloaded or underloaded. Tasks running on overloaded VMs are evaluated and reassigned to underloaded VMs for cloud load balancing. In addition, a Deep Q Recurrent Neural Network (DQRNN) is proposed to balance the load based on factors such as supply, demand, capacity, load, resource utilization, and fault tolerance. The effectiveness of this model is assessed by load, capacity, resource consumption, and success rate, achieving values of 0.147, 0.726, 0.527, and 0.895, respectively.
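The initial round-robin allocation is straightforward to state precisely; the sketch below shows that step only, with the DEC classification and DQRNN rebalancing out of scope.

```python
from itertools import cycle

def round_robin(tasks, vms):
    """Initial allocation: deal tasks across VMs in fixed rotation."""
    assignment = {vm: [] for vm in vms}
    for task, vm in zip(tasks, cycle(vms)):
        assignment[vm].append(task)
    return assignment

print(round_robin(["t1", "t2", "t3", "t4", "t5"], ["vm1", "vm2"]))
# -> {'vm1': ['t1', 't3', 't5'], 'vm2': ['t2', 't4']}
```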

9.
F1000Res ; 13: 203, 2024.
Article in English | MEDLINE | ID: mdl-38868668

ABSTRACT

Converged computing is an emerging area of computing that brings together the best of both worlds for high performance computing (HPC) and cloud-native communities. The economic influence of cloud computing and the need for workflow portability, flexibility, and manageability are driving this emergence. Navigating the uncharted territory and building an effective space for both HPC and cloud require collaborative technological development and research. In this work, we focus on developing components for the converged workload manager, the central component of batch workflows running in any environment. From the cloud we base our work on Kubernetes, the de facto standard batch workload orchestrator. From HPC the orchestrator counterpart is Flux Framework, a fully hierarchical resource management and graph-based scheduler with a modular architecture that supports sophisticated scheduling and job management. Bringing these managers together consists of implementing Flux inside of Kubernetes, enabling hierarchical resource management and scheduling that scales without burdening the Kubernetes scheduler. This paper introduces the Flux Operator - an on-demand HPC workload manager deployed in Kubernetes. Our work describes design decisions, mapping components between environments, and experimental features. We perform experiments that compare application performance when deployed by the Flux Operator and the MPI Operator and present the results. Finally, we review remaining challenges and describe our vision of the future for improved technological innovation and collaboration through converged computing.
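Deploying the Flux Operator amounts to creating its custom resource in Kubernetes. The sketch below does so with the official Python client; the MiniCluster group/version and spec fields follow the project's published examples as best recalled and should be verified against the repository.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

# MiniCluster manifest; group/version, spec fields, and image are assumptions
# drawn from the Flux Operator's examples, not a definitive schema.
minicluster = {
    "apiVersion": "flux-framework.org/v1alpha1",
    "kind": "MiniCluster",
    "metadata": {"name": "demo", "namespace": "flux-operator"},
    "spec": {
        "size": 4,  # number of Flux broker pods in the MiniCluster
        "containers": [{"image": "ghcr.io/flux-framework/flux-restful-api",
                        "command": "hostname"}],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="flux-framework.org", version="v1alpha1",
    namespace="flux-operator", plural="miniclusters", body=minicluster,
)
```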


Subject(s)
Cloud Computing , Workload , Workflow
10.
Sensors (Basel) ; 24(10)2024 May 16.
Article in English | MEDLINE | ID: mdl-38794018

ABSTRACT

This paper explores the development of a smart Structural Health Monitoring (SHM) platform tailored for long-span bridge monitoring, using the Forth Road Bridge (FRB) as a case study. It discusses the selection of smart sensors available for real-time monitoring, the formulation of an effective data strategy encompassing the collection, processing, management, analysis, and visualization of monitoring data sets to support decision-making, and the establishment of a cost-effective and intelligent sensor network aligned with the objectives set through comprehensive communication with asset owners. Due to the high data rates and dense sensor installations, conventional processing techniques are inadequate for fulfilling monitoring functionalities and ensuring security. Cloud computing has emerged as a widely adopted solution for processing and storing vast monitoring data sets. Drawing from the authors' experience in implementing long-span bridge monitoring systems in the UK and China, this paper compares the advantages and limitations of employing cloud computing for long-span bridge monitoring. Furthermore, it explores strategies for developing a robust data strategy and leveraging artificial intelligence (AI) and digital twin (DT) technologies to extract relevant information or patterns regarding asset health conditions. This information is then visualized through the interaction between physical and virtual worlds, facilitating timely and informed decision-making in managing critical road transport infrastructure.

11.
Sensors (Basel) ; 24(10)2024 May 17.
Article in English | MEDLINE | ID: mdl-38794035

ABSTRACT

When resource demand increases and decreases rapidly, container clusters in the cloud environment must adjust the number of containers in a timely manner to ensure service quality, making resource load prediction a prominent challenge accompanying the widespread adoption of cloud computing. A novel cloud computing load prediction method, the Double-channel residual Self-attention Temporal convolutional Network with Weight adaptive updating (DSTNW), has been proposed to make the response of the container cluster more rapid and accurate. A Double-channel Temporal Convolution Network model (DTN) has been developed to capture long-term sequence dependencies and enhance feature extraction capabilities when the model handles long load sequences, with double-channel dilated causal convolution adopted in place of the single-channel dilated causal convolution. A residual temporal self-attention mechanism (SM) has been proposed to improve the performance of the network and focus on features with significant contributions from the DTN. The DTN and SM jointly constitute a double-channel residual self-attention temporal convolutional network (DSTN). In addition, by evaluating the accuracy of single and stacked DSTNs, an adaptive weight strategy has been proposed to assign corresponding weights to the single and stacked DSTNs, respectively. The experimental results show that the developed method has outstanding prediction performance for cloud computing in comparison with state-of-the-art methods, achieving average improvements of 24.16% and 30.48% on the Container and Google datasets, respectively.
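The paper's core building block, dilated causal convolution, is compact to express; the sketch below shows one such layer and a two-channel residual combination in PyTorch, as a loose analogue of the double-channel idea rather than the DSTNW architecture itself.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """One dilated causal convolution: left-pad so the output at time t
    only sees inputs at times <= t."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # pad the past, not the future
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

# Two parallel channels with different dilations, summed with a residual.
x = torch.randn(8, 16, 100)
block_a, block_b = CausalConv1d(16, dilation=1), CausalConv1d(16, dilation=2)
y = x + torch.relu(block_a(x)) + torch.relu(block_b(x))
print(y.shape)   # torch.Size([8, 16, 100]): causal padding preserves length
```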

12.
Sensors (Basel) ; 24(10)2024 May 19.
Article in English | MEDLINE | ID: mdl-38794080

ABSTRACT

This paper presents a novel approach to the challenges of self-adaptive privacy in cloud computing environments (CCE). Under the Cloud-InSPiRe project, the aim is to provide an interdisciplinary framework and a beta-version tool for self-adaptive privacy design, focusing on the integration of technical measures with social needs. To that end, a pilot taxonomy that aligns technical, infrastructural, and social requirements is proposed, informed by two supplementary surveys focusing on users' privacy needs and developers' perspectives on self-adaptive privacy. By integrating users' social identity-based practices and developers' insights, the taxonomy aims to provide clear guidance for developers, ensuring compliance with regulatory standards and fostering a user-centric approach to self-adaptive privacy design tailored to diverse user groups, ultimately enhancing satisfaction and confidence in cloud services.

13.
Network ; : 1-28, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38771196

ABSTRACT

Effective management of data is a major issue in Distributed File Systems (DFS) such as the cloud. This issue is handled by replicating files in an effective manner, which minimizes data access time and increases data availability. This paper devises a Fractional Social Optimization Algorithm (FSOA) for replica management together with load balancing in a cloud DFS, with balancing the workload of the DFS as the main objective. Chunk creation is performed by partitioning each file into a number of chunks using Deep Fuzzy Clustering (DFC), and the chunks are then assigned to Virtual Machines (VMs) in a round-robin manner. Load balancing is performed with the proposed FSOA, considering objectives such as resource use, energy consumption, and migration cost. The FSOA is formulated by uniting the Social Optimization Algorithm (SOA) and Fractional Calculus (FC), and replica management in the DFS is likewise carried out using the proposed FSOA under the same objectives. The FSOA achieves the smallest load of 0.299, smallest cost of 0.395, smallest energy consumption of 0.510, smallest overhead of 0.358, and smallest throughput of 0.537.
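FSOA, like other metaheuristics, ultimately minimizes a weighted multi-objective fitness; the sketch below shows such a fitness over the three stated objectives, with the weights and candidate values being illustrative assumptions rather than the paper's formulation.

```python
def fitness(solution, w=(0.4, 0.3, 0.3)):
    """Weighted objective an FSOA-style optimizer would minimize: resource use,
    energy consumption, and migration cost, each normalized to [0, 1]."""
    resource_use, energy, migration_cost = solution
    return w[0] * resource_use + w[1] * energy + w[2] * migration_cost

# Candidate placements produced by the search; the best one is kept.
candidates = [(0.6, 0.5, 0.1), (0.4, 0.7, 0.3), (0.5, 0.4, 0.2)]
best = min(candidates, key=fitness)
print(best, round(fitness(best), 3))   # (0.5, 0.4, 0.2) 0.38
```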

14.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732863

ABSTRACT

Cloud computing has become a major component of the modern IT ecosystem. A key contributor to this has been the development of Infrastructure as a Service (IaaS) architecture, in which users' virtual machines (VMs) are run on the service provider's physical infrastructure, making it possible to become independent of the need to purchase one's own physical machines (PMs). One of the main aspects to consider when designing such systems is achieving the optimal utilization of individual resources, such as processor, RAM, disk, and available bandwidth. In response to these challenges, the authors developed an analytical model (the ARU method) to determine the average utilization levels of the aforementioned resources. The effectiveness of the proposed analytical model was evaluated by comparing the results obtained by utilizing the model with those obtained by conducting a digital simulation of the operation of a cloud system according to the IaaS paradigm. The results show the effectiveness of the model regardless of the structure of the emerging requests, the variability of the capacity of individual resources, and the number of physical machines in the system. This translates into the applicability of the model in the design process of cloud systems.
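The quantity the ARU method predicts analytically can also be measured empirically; the sketch below computes the average per-resource utilization across PMs from made-up monitoring samples, as a point of comparison rather than the analytical model itself.

```python
import numpy as np

# Rows: physical machines; columns: CPU, RAM, disk, bandwidth utilization
# in [0, 1]. Values are synthetic placeholders.
usage = np.array([
    [0.72, 0.55, 0.31, 0.40],
    [0.64, 0.80, 0.25, 0.52],
    [0.90, 0.60, 0.45, 0.38],
])

for name, mean in zip(["CPU", "RAM", "disk", "bandwidth"], usage.mean(axis=0)):
    print(f"average {name} utilization: {mean:.2%}")
```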

15.
Res Sq ; 2024 May 03.
Article in English | MEDLINE | ID: mdl-38746269

ABSTRACT

Rapid advances in medical imaging Artificial Intelligence (AI) offer unprecedented opportunities for automatic analysis and extraction of data from large imaging collections. The computational demands of such modern AI tools may be difficult to satisfy with the capabilities available on premises. Cloud computing offers the promise of economical access and extreme scalability, yet few studies examine the price/performance tradeoffs of using the cloud, in particular for medical image analysis tasks. We investigate the use of cloud-provisioned compute resources for AI-based curation of the National Lung Screening Trial (NLST) Computed Tomography (CT) images available from the National Cancer Institute (NCI) Imaging Data Commons (IDC). We evaluated NCI Cancer Research Data Commons (CRDC) Cloud Resources, the Terra (FireCloud) and Seven Bridges-Cancer Genomics Cloud (SB-CGC) platforms, to perform automatic image segmentation with TotalSegmentator and pyradiomics feature extraction for a large cohort containing >126,000 CT volumes from >26,000 patients. Utilizing >21,000 Virtual Machines (VMs) over the course of the computation, we completed the analysis in under 9 hours, compared to the estimated 522 days that would be needed on a single workstation. The total cost of utilizing the cloud for this analysis was $1,011.05. Our contributions include: 1) an evaluation of the numerous tradeoffs towards optimizing the use of cloud resources for large-scale image analysis; 2) CloudSegmentator, an open source reproducible implementation of the developed workflows, which can be reused and extended; 3) practical recommendations for utilizing the cloud for large-scale medical image computing tasks. We also share the results of the analysis: a total of 9,565,554 segmentations of anatomic structures and the accompanying radiomics features in IDC as of release v18.
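The per-volume work in the described workflow reduces to two tool invocations; the sketch below strings them together using the two packages' documented Python APIs, with placeholder paths, and is not the CloudSegmentator implementation itself.

```python
from totalsegmentator.python_api import totalsegmentator
from radiomics import featureextractor

# One CT volume from the cohort; paths are placeholders, and the call
# signatures should be checked against the versions pinned in CloudSegmentator.
ct_path, seg_dir = "ct_volume.nii.gz", "segmentations"
totalsegmentator(ct_path, seg_dir)   # writes one mask file per anatomic structure

extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute(ct_path, f"{seg_dir}/liver.nii.gz")  # one organ mask
print({k: v for k, v in features.items() if k.startswith("original_firstorder")})
```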

16.
Sci Total Environ ; 931: 172678, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38677415

ABSTRACT

Reservoirs play an important role in water security, flood risk, hydropower, and the natural flow regime. This study derives a novel dataset with a long-term daily water balance (reservoir volume, inflow, outflow, evaporation, and precipitation) of headwater reservoirs and their storage dynamics across the globe. The data are generated using cloud computing infrastructure and the high-resolution distributed hydrological model wflow_sbm. Model results are validated against Earth-observed surface water area and in-situ measured reservoir volumes and show overall good model performance. Simulated headwater reservoir storage indicates that 19.4-24.4 % of the reservoirs had a significant decrease in storage, a change driven mainly by a decrease in reservoir inflow and an increase in evaporation. Deployment on a Kubernetes cloud environment together with reproducible workflows shows that these kinds of simulations and analyses can be conducted in less than a day.
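The daily water balance underlying the dataset follows directly from its stated terms; the sketch below accumulates storage as dV = inflow - outflow - evaporation + precipitation over synthetic placeholder fluxes.

```python
import pandas as pd

# Daily fluxes in m^3/day; values are synthetic placeholders.
df = pd.DataFrame({
    "inflow":        [1200, 1100, 900, 800],
    "outflow":       [1000, 1000, 1000, 900],
    "evaporation":   [50, 60, 70, 65],
    "precipitation": [20, 0, 0, 10],
})

# Accumulate daily storage changes from an assumed initial volume V0.
v0 = 50_000
df["storage"] = v0 + (df.inflow - df.outflow
                      - df.evaporation + df.precipitation).cumsum()
print(df)
```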

17.
bioRxiv ; 2024 Mar 16.
Article in English | MEDLINE | ID: mdl-38559152

ABSTRACT

As large-scale biobanks provide increasing access to deep phenotyping and genomic data, genome-wide association studies (GWAS) are rapidly uncovering the genetic architecture behind various complex traits and diseases. GWAS publications typically make their summary-level data (GWAS summary statistics) publicly available, enabling further exploration of genetic overlaps between phenotypes gathered from different studies and cohorts. However, systematically analyzing high-dimensional GWAS summary statistics for thousands of phenotypes can be both logistically challenging and computationally demanding. In this paper, we introduce BIGA (https://bigagwas.org/), a website that aims to offer unified data analysis pipelines and processed data resources for cross-trait genetic architecture analyses using GWAS summary statistics. We have developed a framework to implement statistical genetics tools on a cloud computing platform, combined with extensive curated GWAS data resources. Through BIGA, users can upload data, submit jobs, and share results, providing the research community with a convenient tool for consolidating GWAS data and generating new insights.
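A typical first step in any such pipeline is cleaning and harmonizing summary statistics; the sketch below applies common sanity filters to a toy table whose column names follow community conventions, not BIGA's actual schema.

```python
import pandas as pd

# Toy stand-in for one trait's GWAS summary statistics.
sumstats = pd.DataFrame({
    "SNP":  ["rs1", "rs2", "rs2", "rs3", "rs4"],
    "BETA": [0.02, -0.15, -0.15, 0.40, 0.01],
    "SE":   [0.01, 0.05, 0.05, 0.08, None],
    "P":    [0.045, 1e-6, 1e-6, 3e-7, 0.9],
})

sumstats = sumstats.dropna(subset=["BETA", "SE", "P"])        # complete rows only
sumstats = sumstats[(sumstats.P > 0) & (sumstats.P <= 1)]     # valid p-values
sumstats = sumstats[~sumstats.duplicated("SNP", keep=False)]  # drop duplicated SNPs
sumstats["Z"] = sumstats.BETA / sumstats.SE                   # z-scores for downstream tools
print(sumstats)
```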

18.
Heliyon ; 10(7): e28109, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38560228

ABSTRACT

The Internet of Vehicles (IoV) emerges as a pivotal extension of the Internet of Things (IoT), specifically geared towards transforming the automotive landscape. In this evolving ecosystem, the demand for a seamless end-to-end system becomes paramount for enhancing operational efficiency and safety. Hence, this study introduces an innovative method for real-time driver identification by integrating cloud computing with deep learning. Utilizing the integrated capabilities of Google Cloud, ThingsBoard, and Apache Kafka, the developed solution tailored for IoV technology is adept at managing real-time data collection, processing, prediction, and visualization, with resilience against sensor data anomalies. The proposed driver identification approach combines Convolutional Neural Networks (CNN) with multi-head self-attention. The proposed model is validated on two datasets, the Security dataset and a collected dataset, and the results show that it surpasses previous works by achieving an accuracy and F1 score of 99.95%. Even when challenged with data anomalies, the model maintains a high accuracy of 96.2%. By achieving accurate driver identification, the proposed end-to-end IoV system can aid in optimizing fleet management, vehicle security, personalized driving experiences, insurance, and risk assessment, emphasizing its potential for road safety and more effective transport management.
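The described model combines a CNN front-end with multi-head self-attention; the sketch below is a loose PyTorch analogue with assumed layer sizes and input shapes, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DriverId(nn.Module):
    """CNN over windows of driving-sensor features, followed by multi-head
    self-attention and mean pooling into per-driver logits."""
    def __init__(self, n_features=16, n_drivers=10, d_model=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, d_model, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, n_drivers)

    def forward(self, x):              # x: (batch, time, features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, time, d_model)
        h, _ = self.attn(h, h, h)                          # self-attention over time
        return self.head(h.mean(dim=1))                    # pooled logits per driver

logits = DriverId()(torch.randn(8, 60, 16))
print(logits.shape)   # torch.Size([8, 10])
```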

19.
Heliyon ; 10(7): e29044, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38601602

ABSTRACT

Cloud computing has emerged as a transformative force in healthcare and biomedical sciences, offering scalable, on-demand resources for managing vast amounts of data. This review explores the integration of cloud computing within these fields, highlighting its pivotal role in enhancing data management, security, and accessibility. We examine the application of cloud computing in various healthcare domains, including electronic medical records, telemedicine, and personalized patient care, as well as its impact on bioinformatics research, particularly in genomics, proteomics, and metabolomics. The review also addresses the challenges and ethical considerations associated with cloud-based healthcare solutions, such as data privacy and cybersecurity. By providing a comprehensive overview, we aim to assist readers in understanding the significance of cloud computing in modern medical applications and its potential to revolutionize both patient care and biomedical research.

20.
Front Big Data ; 7: 1349116, 2024.
Article in English | MEDLINE | ID: mdl-38638340

ABSTRACT

With the rapid growth of information and communication technologies, governments worldwide are embracing digital transformation to enhance service delivery and governance practices. In the rapidly evolving landscape of information technology (IT), secure data management stands as a cornerstone for organizations aiming to safeguard sensitive information. Robust data modeling techniques are pivotal in structuring and organizing data, ensuring its integrity, and facilitating efficient retrieval and analysis. As the world increasingly emphasizes sustainability, integrating eco-friendly practices into data management processes becomes imperative. This study focuses on the specific context of Pakistan and investigates the potential of cloud computing in advancing e-governance capabilities. Cloud computing offers scalability, cost efficiency, and enhanced data security, making it an ideal technology for digital transformation. Through an extensive literature review, analysis of case studies, and interviews with stakeholders, this research explores the current state of e-governance in Pakistan, identifies the challenges faced, and proposes a framework for leveraging cloud computing to overcome these challenges. The findings reveal that cloud computing can significantly enhance the accessibility, scalability, and cost-effectiveness of e-governance services, thereby improving citizen engagement and satisfaction. This study provides valuable insights for policymakers, government agencies, and researchers interested in the digital transformation of e-governance in Pakistan and offers a roadmap for leveraging cloud computing technologies in similar contexts. The findings contribute to the growing body of knowledge on e-governance and cloud computing, supporting the advancement of digital governance practices globally. This research identifies monitoring parameters necessary to establish a sustainable e-governance system incorporating big data and cloud computing. The proposed framework, Monitoring and Assessment System using Cloud (MASC), is validated through secondary data analysis and successfully fulfills the research objectives. By leveraging big data and cloud computing, governments can revolutionize their digital governance practices, driving transformative changes and enhancing efficiency and effectiveness in public administration.
