Results 1 - 20 of 33
1.
Entropy (Basel) ; 26(5)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38785643

ABSTRACT

In this paper, the problem of joint transmission and computation resource allocation for a multi-user probabilistic semantic communication (PSC) network is investigated. In the considered model, users employ semantic information extraction techniques to compress their large-sized data before transmitting them to a multi-antenna base station (BS). Our model represents large-sized data through substantial knowledge graphs, utilizing shared probability graphs between the users and the BS for efficient semantic compression. The resource allocation problem is formulated as an optimization problem with the objective of maximizing the sum of the equivalent rate of all users, considering the total power budget and semantic resource limit constraints. The computation load considered in the PSC network is formulated as a non-smooth piecewise function with respect to the semantic compression ratio. To tackle this non-convex non-smooth optimization challenge, a three-stage algorithm is proposed, where the solutions for the received beamforming matrix of the BS, the transmit power of each user, and the semantic compression ratio of each user are obtained stage by stage. The numerical results validate the effectiveness of our proposed scheme.
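
A minimal sketch of the kind of non-smooth piecewise computation load described above, as a function of the semantic compression ratio; the breakpoints and slopes below are hypothetical and are not taken from the paper:

```python
def computation_load(rho):
    """Hypothetical non-smooth piecewise-linear computation load C(rho):
    compressing more aggressively (smaller rho) costs disproportionately more.
    Breakpoints and slopes are illustrative only."""
    assert 0.0 < rho <= 1.0
    if rho >= 0.6:                        # light compression: cheap segment
        return 2.0 * (1.0 - rho)
    if rho >= 0.3:                        # moderate compression: steeper segment
        return 0.8 + 5.0 * (0.6 - rho)
    return 2.3 + 12.0 * (0.3 - rho)       # aggressive compression: steepest segment

# The kinks at rho = 0.6 and rho = 0.3 make the per-user constraint non-smooth,
# which is why a stage-wise solver (rather than a single convex program) is needed.
for rho in (1.0, 0.8, 0.6, 0.45, 0.3, 0.1):
    print(f"rho={rho:.2f}  load={computation_load(rho):.2f}")
```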

2.
IEEE Trans Cybern ; 54(2): 1223-1235, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38117628

ABSTRACT

The distributed subgradient (DSG) method is a widely used algorithm for coping with large-scale distributed optimization problems in machine-learning applications. Most existing works on DSG focus on ideal communication between cooperative agents, where the shared information between agents is exact and perfect. This assumption, however, can lead to potential privacy concerns and is not feasible when wireless transmission links are of poor quality. To meet this challenge, a common approach is to quantize the data locally before transmission, which avoids exposure of raw data and significantly reduces the size of the data. Compared with perfect data, quantization poses fundamental challenges to maintaining data accuracy, which further impacts the convergence of the algorithms. To overcome this problem, we propose a DSG method with random quantization and flexible weights and provide comprehensive results on the convergence of the algorithm for (strongly/weakly) convex objective functions. We also derive the upper bounds on the convergence rates in terms of the quantization error, the distortion, the step sizes, and the number of network agents. Our analysis extends the existing results, for which special cases of step sizes and convex objective functions are considered, to general conclusions on weakly convex cases. Numerical simulations are conducted in convex and weakly convex settings to support our theoretical results.
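
A minimal sketch of distributed subgradient descent with unbiased random quantization, using a toy quadratic objective, a hypothetical doubly stochastic mixing matrix, and a diminishing step size; it is not the paper's exact algorithm or weight schedule:

```python
import math
import random

random.seed(0)

def stochastic_quantize(x, step=0.1):
    """Unbiased random quantization to a grid of spacing `step`:
    round up with probability proportional to the residual, so E[q(x)] = x."""
    lower = step * math.floor(x / step)
    return lower + step if random.random() < (x - lower) / step else lower

# Agent i holds the local objective f_i(x) = 0.5 * (x - a[i])^2 (subgradient x - a[i]);
# the global minimizer of the sum is mean(a).
a = [1.0, 3.0, -2.0, 4.0]
n = len(a)
W = [[0.5 if i == j else 0.5 / (n - 1) for j in range(n)] for i in range(n)]  # doubly stochastic
x = [0.0] * n

for k in range(1, 1001):
    gamma = 1.0 / k                                  # diminishing step size
    q = [stochastic_quantize(xi) for xi in x]        # only quantized states are shared
    x = [sum(W[i][j] * q[j] for j in range(n)) - gamma * (x[i] - a[i]) for i in range(n)]

print("agent states:", [round(v, 2) for v in x], " target:", sum(a) / n)
```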

3.
Proc Natl Acad Sci U S A ; 121(1): e2313171120, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-38147553

ABSTRACT

Networks allow us to describe a wide range of interaction phenomena that occur in complex systems arising in such diverse fields of knowledge as neuroscience, engineering, ecology, finance, and social sciences. Until very recently, the primary focus of network models and tools has been on describing the pairwise relationships between system entities. However, increasingly more studies indicate that polyadic or higher-order group relationships among multiple network entities may be the key toward better understanding of the intrinsic mechanisms behind the functionality of complex systems. Such group interactions can be, in turn, described in a holistic manner by simplicial complexes of graphs. Inspired by these recently emerging results on the utility of the simplicial geometry of complex networks for contagion propagation and armed with a large-scale synthetic social contact network (also known as a digital twin) of the population in the U.S. state of Virginia, in this paper, we aim to glean insights into the role of higher-order social interactions and the associated varying social group determinants on COVID-19 propagation and mitigation measures.


Subject(s)
COVID-19 , Epidemics , Humans , COVID-19/epidemiology , Virginia
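
A toy update step of a simplicial (higher-order) contagion, illustrating how group interactions through 2-simplices (triangles) add an infection channel beyond pairwise edges; the model and rates below are illustrative and not the paper's calibrated digital-twin simulation:

```python
import random

def simplicial_sis_step(infected, edges, triangles, beta=0.1, beta_tri=0.4, mu=0.2):
    """One SIS-style step of a toy higher-order contagion: pairwise spread along
    edges with prob. beta, plus group spread into a susceptible member of a
    triangle whose other two members are both infected, with prob. beta_tri."""
    newly = set()
    for u, v in edges:                                   # pairwise channel
        if (u in infected) != (v in infected):
            target = v if u in infected else u
            if random.random() < beta:
                newly.add(target)
    for tri in triangles:                                # higher-order channel
        for node in tri:
            others = [x for x in tri if x != node]
            if node not in infected and all(o in infected for o in others):
                if random.random() < beta_tri:
                    newly.add(node)
    recovered = {v for v in infected if random.random() < mu}
    return (infected - recovered) | newly

# Tiny example: one triangle plus a pendant node.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
triangles = [(0, 1, 2)]
state = {0, 1}
for _ in range(20):
    state = simplicial_sis_step(state, edges, triangles)
print("infected after 20 steps:", state)
```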
4.
Proc Natl Acad Sci U S A ; 120(48): e2305227120, 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-37983514

ABSTRACT

Disease surveillance systems provide early warnings of disease outbreaks before they become public health emergencies. However, pandemic containment remains challenging due to the complex immunity landscape created by multiple variants. Genomic surveillance is critical for detecting novel variants with diverse characteristics and importation/emergence times. Yet, a systematic study incorporating genomic monitoring, situation assessment, and intervention strategies is lacking in the literature. We formulate an integrated computational modeling framework to study a realistic course of action based on sequencing, analysis, and response. We study the effects of the second variant's importation time, its infectiousness advantage, and its cross-infection level on the novel variant's detection time, and the resulting intervention scenarios to contain epidemics driven by two-variant dynamics. Our results illustrate the limitation in the intervention's effectiveness due to the variants' competing dynamics and provide the following insights: i) There is a set of importation times that yields the worst detection time for the second variant, which depends on the first variant's basic reproductive number; ii) When the second variant is imported relatively early with respect to the first variant, the cross-infection level does not impact the detection time of the second variant. We found that, depending on the target metric, the best outcomes are attained under different intervention regimes. Our results emphasize the importance of sustained enforcement of non-pharmaceutical interventions in preventing epidemic resurgence due to the importation/emergence of novel variants. We also discuss how our methods can be used to study when a novel variant emerges within a population.


Subject(s)
COVID-19 , Pandemics , Humans , Pandemics/prevention & control , Public Health , Disease Outbreaks/prevention & control , Genomics
5.
Entropy (Basel) ; 25(8)2023 Aug 21.
Article in English | MEDLINE | ID: mdl-37628272

ABSTRACT

Unique digital circuit outputs, considered as physical unclonable function (PUF) circuit outputs, can facilitate a secure and reliable secret key agreement. To tackle noise and high correlations between the PUF circuit outputs, transform coding methods combined with scalar quantizers are typically applied to extract the uncorrelated bit sequences reliably. In this paper, we create realistic models for these transformed outputs by fitting truncated distributions to them. We also show that the state-of-the-art models are inadequate to guarantee a target reliability level for all PUF outputs, which also means that secrecy cannot be guaranteed. Therefore, we introduce a quality of security parameter to control the percentage of the PUF circuit outputs for which a target security level can be guaranteed. By applying the finite-length information theory results to a public ring oscillator output dataset, we illustrate that security guarantees can be provided for each bit extracted from any PUF device by eliminating only a small subset of PUF circuit outputs. Furthermore, we conversely show that it is not possible to provide reliability or security guarantees without eliminating any PUF circuit output. Our holistic methods and analyses can be applied to any PUF type, as well as any biometric secrecy system, with continuous-valued outputs to extract secret keys with low hardware complexity.
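
A toy illustration (not the paper's finite-length analysis) of the underlying idea: discarding the small subset of transformed PUF outputs closest to the quantizer boundary sharply reduces the bit-error rate seen during key reconstruction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transformed PUF outputs: zero-mean values whose sign encodes one bit.
outputs = rng.normal(0.0, 1.0, size=5000)
noise_sigma = 0.2                      # re-measurement noise during reconstruction
threshold = 0.5                        # outputs closer than this to zero are discarded

kept = np.abs(outputs) > threshold
bits_enroll = (outputs[kept] > 0).astype(int)

# Simulate reconstruction with measurement noise and count bit errors.
noisy = outputs[kept] + rng.normal(0.0, noise_sigma, size=int(kept.sum()))
bits_reconstruct = (noisy > 0).astype(int)

print("kept fraction of outputs:", round(float(kept.mean()), 3))
print("bit error rate on kept outputs:", float(np.mean(bits_enroll != bits_reconstruct)))
```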

6.
Phys Rev E ; 108(1-1): 014306, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37583147

ABSTRACT

Masks have remained an important mitigation strategy in the fight against COVID-19 due to their ability to prevent the transmission of respiratory droplets between individuals. In this work, we provide a comprehensive quantitative analysis of the impact of mask-wearing. To this end, we propose a novel agent-based model of viral spread on networks where agents may either wear no mask or wear one of several types of masks with different properties (e.g., cloth or surgical). We derive analytical expressions for three key epidemiological quantities: the probability of emergence, the epidemic threshold, and the expected epidemic size. In particular, we show how these quantities depend on the structure of the contact network, the viral transmission dynamics, and the distribution of the different types of masks within the population. Through extensive simulations, we then investigate the impact of different mask allocations within the population and the tradeoffs between the outward efficiency and inward efficiency of the masks. Interestingly, we find that masks with high outward efficiency and low inward efficiency are most useful for controlling the spread in the early stages of an epidemic, while masks with high inward efficiency but low outward efficiency are most useful in reducing the size of an already large spread. Last, we study whether degree-based mask allocation is more effective than random allocation in reducing the probability of an epidemic as well as the epidemic size. The results echo previous findings that mitigation strategies should differ based on the stage of the spreading process, focusing on source control before the epidemic emerges and on self-protection after its emergence.


Subject(s)
COVID-19 , Epidemics , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Epidemics/prevention & control
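
A toy agent-based sketch of the mask model's core mechanism: transmission across an edge is attenuated by the source's outward mask efficiency and the target's inward efficiency. The graph, parameters, and allocation below are hypothetical, not the paper's analytical setup:

```python
import random

def simulate(n=1000, avg_deg=8, T=0.3, mask_frac=0.5, eff_out=0.7, eff_in=0.3, seed=1):
    """Toy discrete-time SIR spread on a random graph where a fraction of agents
    wears a mask; transmission probability across an edge is T scaled down by the
    source's outward and the target's inward filtering. Returns the final epidemic size."""
    random.seed(seed)
    p = avg_deg / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    mask = [random.random() < mask_frac for _ in range(n)]
    state = ["S"] * n
    state[0] = "I"
    frontier, infected_total = [0], 1
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if state[v] == "S":
                    t = T * ((1 - eff_out) if mask[u] else 1.0) * ((1 - eff_in) if mask[v] else 1.0)
                    if random.random() < t:
                        state[v] = "I"
                        nxt.append(v)
                        infected_total += 1
            state[u] = "R"
        frontier = nxt
    return infected_total / n

print("size with source-control masks (high outward):", simulate(eff_out=0.7, eff_in=0.3))
print("size with self-protection masks (high inward):", simulate(eff_out=0.3, eff_in=0.7))
```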
7.
Entropy (Basel) ; 25(7)2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37509992

ABSTRACT

In video streaming applications, especially during live streaming events, video traffic can account for a significant portion of the network traffic and can lead to severe network congestion. For such applications, multicast provides an efficient means to deliver the same content to a large number of users simultaneously. However, in multicast, if the base station transmits content at rates higher than what can be decoded by users with the worst channels, these users will experience outages. This makes the multicast system's performance dependent on the weakest users in the system. Interestingly, video streams can tolerate some packet loss without a significant degradation in the quality experienced by the users. This property can be leveraged to improve the multicast system's performance by reducing the dependence of the multicast transmissions on the weakest users. In this work, we design a loss-tolerant video multicasting system that allows for some controlled packet loss while satisfying the quality requirements of the users. In particular, we solve the resource allocation problem in a multimedia broadcast multicast services (MBMS) system by transforming it into the problem of stabilizing a virtual queuing system. We propose two loss-optimal policies and demonstrate their effectiveness using numerical examples with realistic traffic patterns from real video streams. It is shown that the proposed policies are able to keep the loss encountered by every user below its tolerable loss. The proposed policies are also able to achieve a significantly lower peak SNR degradation than the existing schemes.
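
A toy sketch of the virtual-queue idea: each user keeps a "loss debt" queue that grows whenever its packet loss exceeds its tolerable fraction, and keeping every queue bounded enforces the long-run loss constraint. The policy, rates, and tolerances below are hypothetical, not the proposed MBMS policies:

```python
import random

random.seed(0)

users = {"u1": 0.05, "u2": 0.10, "u3": 0.02}         # tolerable loss fractions (hypothetical)
Q = {u: 0.0 for u in users}                          # virtual loss queues

for slot in range(5000):
    chan = {u: random.uniform(1.0, 6.0) for u in users}   # per-slot decodable rate (fading)
    # Illustrative policy: transmit at the highest rate every user with a positive
    # queue can still decode; if all queues are empty, use the median user's rate,
    # deliberately letting the weakest user drop the packet.
    positive = [u for u in users if Q[u] > 0]
    rate = min(chan[u] for u in positive) if positive else sorted(chan.values())[len(users) // 2]
    for u, tol in users.items():
        lost = 1.0 if chan[u] < rate else 0.0
        Q[u] = max(Q[u] + lost - tol, 0.0)           # bounded queue => long-run loss <= tol

print("virtual queues after 5000 slots:", {u: round(q, 2) for u, q in Q.items()})
```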

8.
iScience ; 26(7): 107194, 2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37456856

ABSTRACT

Despite the world's relentless efforts to achieve the United Nations' sustainable energy target by 2030, the current pace of progress is insufficient to reach the objective. Continuous support and development across various domains of the energy sector are required to achieve sustainability targets. This article focuses on the potential of dynamic operating limits to drive the world's sustainability efforts, specifically in addressing critical challenges of distribution networks of the power system by progressively setting the nodal limits on active and reactive power injection into the distribution network based on data-driven computer simulation. While the importance of dynamic operating limits has recently been recognized, their crucial role in the residential energy sustainability sector, which requires a significant push to provide universal energy access by 2030, has not been adequately investigated. This perspective explains the fundamental concepts and benefits of dynamic operating limits in encouraging the adoption of distributed renewable energy resources in the residential sector to support the United Nations' sustainable energy objective. Additionally, we discuss the limitations of computing these limits and applying them to the electricity network, and some motivational models that can encourage electricity customers to come forward to address the challenges. Finally, we explore new research and implementation prospects for designing comprehensive, dependable, accountable, and complementary dynamic operating limit programs to accelerate the attainment of sustainable energy targets.

9.
Proc Natl Acad Sci U S A ; 120(24): e2302245120, 2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37289806

ABSTRACT

A key scientific challenge during the outbreak of novel infectious diseases is to predict how the course of the epidemic changes under countermeasures that limit interaction in the population. Most epidemiological models do not consider the role of mutations and heterogeneity in the type of contact events. However, pathogens have the capacity to mutate in response to changing environments, especially caused by the increase in population immunity to existing strains, and the emergence of new pathogen strains poses a continued threat to public health. Further, in the light of differing transmission risks in different congregate settings (e.g., schools and offices), different mitigation strategies may need to be adopted to control the spread of infection. We analyze a multilayer multistrain model by simultaneously accounting for i) pathways for mutations in the pathogen leading to the emergence of new pathogen strains, and ii) differing transmission risks in different settings, modeled as network layers. Assuming complete cross-immunity among strains, namely, recovery from any infection prevents infection with any other (an assumption that will need to be relaxed to deal with COVID-19 or influenza), we derive the key epidemiological parameters for the multilayer multistrain framework. We demonstrate that reductions to existing models that discount heterogeneity in either the strain or the network layers may lead to incorrect predictions. Our results highlight that the impact of imposing/lifting mitigation measures concerning different contact network layers (e.g., school closures or work-from-home policies) should be evaluated in connection with their effect on the likelihood of the emergence of new strains.


Subject(s)
COVID-19 , Epidemics , Influenza, Human , Humans , COVID-19/epidemiology , COVID-19/genetics , Disease Outbreaks , Influenza, Human/epidemiology , Influenza, Human/genetics , Mutation
10.
Article in English | MEDLINE | ID: mdl-37021855

ABSTRACT

Data-driven approaches are promising for addressing the modeling issues of modern power electronics-based power systems, owing to their black-box nature. Frequency-domain analysis has been applied to address the emerging small-signal oscillation issues caused by converter control interactions. However, the frequency-domain model of a power electronic system is linearized around a specific operating condition. It thus requires measuring or identifying frequency-domain models repeatedly at many operating points (OPs), due to the wide operating range of power systems, which imposes a significant computation and data burden. This article addresses this challenge by developing a deep learning approach that uses multilayer feedforward neural networks (FNNs) to train a frequency-domain impedance model of power electronic systems that is continuous in the OP. In contrast to prior neural network designs that rely on trial-and-error and a sufficiently large data size, this article proposes to design the FNN based on latent features of power electronic systems, i.e., the number of system poles and zeros. To further investigate the impacts of data quantity and quality, learning procedures from a small dataset are developed, and K-medoids clustering based on dynamic time warping is used to reveal insights into multivariable sensitivity, which helps improve the data quality. The proposed approaches for FNN design and learning have been shown to be simple, effective, and optimal in case studies on a power electronic converter, and future prospects for its industrial applications are also discussed.
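
A minimal sketch of the idea of learning an impedance model that is continuous in the operating point: a small feedforward network is fit to samples of a hypothetical one-pole/one-zero impedance whose pole drifts with the operating point. The system, network size, and data are illustrative, not the article's converter or design rules:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def impedance_mag(freq, op):
    """Hypothetical ground truth: |Z(jw)| of a one-zero/one-pole system whose
    pole location drifts with the operating point op in [0, 1]."""
    w = 2 * np.pi * freq
    pole = 2 * np.pi * (50 + 40 * op)
    zero = 2 * np.pi * 400
    return np.abs(1j * w + zero) / np.abs(1j * w + pole)

# Training samples over frequency and operating point.
freqs = rng.uniform(1, 1000, size=4000)
ops = rng.uniform(0.0, 1.0, size=4000)
X = np.column_stack([np.log10(freqs), ops])
y = 20 * np.log10(impedance_mag(freqs, ops))         # impedance magnitude in dB

# Small FNN; in the article's spirit, its size would be tied to the number of
# latent poles and zeros rather than chosen by trial and error.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

query = np.array([[np.log10(120.0), 0.37]])          # operating point never measured directly
print("predicted |Z| (dB):", round(float(model.predict(query)[0]), 2),
      " true (dB):", round(20 * float(np.log10(impedance_mag(120.0, 0.37))), 2))
```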

11.
Entropy (Basel) ; 25(3)2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36981281

ABSTRACT

It is anticipated that future communication systems will involve the use of new technologies, requiring high-speed computations using large amounts of data, in order to take advantage of data-driven methods for improving services and providing reliability and other benefits [...].

12.
Proc Natl Acad Sci U S A ; 119(42): e2205772119, 2022 10 18.
Article in English | MEDLINE | ID: mdl-36215503

ABSTRACT

The power grid is going through significant changes with the introduction of renewable energy sources and the incorporation of smart grid technologies. These rapid advancements necessitate new models and analyses to keep up with the various emergent phenomena they induce. A major prerequisite of such work is the acquisition of well-constructed and accurate network datasets for the power grid infrastructure. In this paper, we propose a robust, scalable framework to synthesize power distribution networks that resemble their physical counterparts for a given region. We use openly available information about interdependent road and building infrastructures to construct the networks. In contrast to prior work based on network statistics, we incorporate engineering and economic constraints to create the networks. Additionally, we provide a framework to create ensembles of power distribution networks to generate multiple possible instances of the network for a given region. The comprehensive dataset consists of nodes with attributes, such as geocoordinates; type of node (residence, transformer, or substation); and edges with attributes, such as geometry, type of line (feeder lines, primary or secondary), and line parameters. For validation, we provide detailed comparisons of the generated networks with actual distribution networks. The generated datasets represent realistic test systems (as compared with standard test cases published by Institute of Electrical and Electronics Engineers (IEEE)) that can be used by network scientists to analyze complex events in power grids and to perform detailed sensitivity and statistical analyses over ensembles of networks.


Subject(s)
Electric Power Supplies
14.
Proc Natl Acad Sci U S A ; 119(24): e2202235119, 2022 Jun 14.
Article in English | MEDLINE | ID: mdl-35687669

ABSTRACT

Entanglement-assisted concatenated quantum codes (EACQCs), constructed by concatenating two quantum codes, are proposed. These EACQCs show significant advantages over standard concatenated quantum codes (CQCs). First, we prove that, unlike standard CQCs, EACQCs can beat the nondegenerate Hamming bound for entanglement-assisted quantum error-correction codes (EAQECCs). Second, we construct families of EACQCs with parameters better than the best-known standard quantum error-correction codes (QECCs) and EAQECCs. Moreover, these EACQCs require very few Einstein-Podolsky-Rosen (EPR) pairs to begin with. Finally, it is shown that EACQCs make entanglement-assisted quantum communication possible even if the ebits are noisy. Furthermore, EACQCs can outperform CQCs in entanglement fidelity over depolarizing channels if the ebits are less noisy than the qubits. We show that the error-probability threshold of EACQCs is larger than that of CQCs when the error rate of the ebits is sufficiently lower than that of the qubits. Specifically, we derive a high threshold of 47% when the error probability of the preshared entanglement is 1% of that of the qubits.

15.
Entropy (Basel) ; 24(5)2022 Apr 30.
Article in English | MEDLINE | ID: mdl-35626522

ABSTRACT

Fifth generation mobile communication systems (5G) have to accommodate both Ultra-Reliable Low-Latency Communication (URLLC) and enhanced Mobile Broadband (eMBB) services. While eMBB applications support high data rates, URLLC services aim at guaranteeing low-latencies and high-reliabilities. eMBB and URLLC services are scheduled on the same frequency band, where the different latency requirements of the communications render their coexistence challenging. In this survey, we review, from an information theoretic perspective, coding schemes that simultaneously accommodate URLLC and eMBB transmissions and show that they outperform traditional scheduling approaches. Various communication scenarios are considered, including point-to-point channels, broadcast channels, interference networks, cellular models, and cloud radio access networks (C-RANs). The main focus is on the set of rate pairs that can simultaneously be achieved for URLLC and eMBB messages, which captures well the tension between the two types of communications. We also discuss finite-blocklength results where the measure of interest is the set of error probability pairs that can simultaneously be achieved in the two communication regimes.

16.
Proc Natl Acad Sci U S A ; 119(4)2022 01 25.
Article in English | MEDLINE | ID: mdl-35046025

ABSTRACT

The ongoing COVID-19 pandemic underscores the importance of developing reliable forecasts that would allow decision makers to devise appropriate response strategies. Despite much recent research on the topic, epidemic forecasting remains poorly understood. Researchers have attributed the difficulty of forecasting contagion dynamics to a multitude of factors, including complex behavioral responses, uncertainty in data, the stochastic nature of the underlying process, and the high sensitivity of the disease parameters to changes in the environment. We offer a rigorous explanation of the difficulty of short-term forecasting on networked populations using ideas from computational complexity. Specifically, we show that several forecasting problems (e.g., the probability that at least a given number of people will get infected at a given time and the probability that the number of infections will reach a peak at a given time) are computationally intractable. For instance, efficient solvability of such problems would imply that the number of satisfying assignments of an arbitrary Boolean formula in conjunctive normal form can be computed efficiently, violating a widely believed hypothesis in computational complexity. This intractability result holds even under the ideal situation, where all the disease parameters are known and are assumed to be insensitive to changes in the environment. From a computational complexity viewpoint, our results, which show that contagion dynamics become unpredictable for both macroscopic and individual properties, bring out some fundamental difficulties of predicting disease parameters. On the positive side, we develop efficient algorithms or approximation algorithms for restricted versions of forecasting problems.


Subject(s)
Epidemiological Models , Forecasting/methods , Algorithms , COVID-19/epidemiology , COVID-19/prevention & control , COVID-19/transmission , Humans , Probability , SARS-CoV-2 , Time Factors
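
A toy brute-force illustration of why such forecasting questions are expensive: even under a simplified independent-transmission (bond percolation) model, computing the exact probability that at least k nodes are ever infected requires summing over every subset of edges that transmit, i.e., 2^|E| terms. The network and parameters are hypothetical:

```python
import itertools

edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]   # tiny contact network
p_trans, source, k = 0.3, 0, 3                     # transmission prob., seed node, target count

def infected_count(live_edges):
    """Number of nodes reachable from the source through edges that transmitted."""
    seen, stack = {source}, [source]
    while stack:
        u = stack.pop()
        for a, b in live_edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in seen:
                    seen.add(y)
                    stack.append(y)
    return len(seen)

prob = 0.0
for outcome in itertools.product([0, 1], repeat=len(edges)):   # 2^|E| transmission patterns
    live = [e for e, bit in zip(edges, outcome) if bit]
    weight = 1.0
    for bit in outcome:
        weight *= p_trans if bit else (1 - p_trans)
    if infected_count(live) >= k:
        prob += weight

print("P(at least", k, "nodes infected) =", round(prob, 4))
```

Doubling the number of edges squares the number of terms, which is the exponential blowup that the intractability results formalize.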
17.
Entropy (Basel) ; 23(11)2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34828111

ABSTRACT

In this paper, the optimization of network performance to support the deployment of federated learning (FL) is investigated. In particular, in the considered model, each user trains a machine learning (ML) model on its own dataset and then transmits its ML parameters to a base station (BS), which aggregates the ML parameters to obtain a global ML model and transmits it back to each user. Due to limited radio frequency (RF) resources, the number of users that can participate in FL is restricted. Meanwhile, each user uploading and downloading the FL parameters may increase communication costs, thus reducing the number of participating users. To this end, we propose to introduce visible light communication (VLC) as a supplement to RF and to use compression methods to reduce the resources needed to transmit FL parameters over wireless links, so as to further improve the communication efficiency while simultaneously optimizing the wireless network through user selection and resource allocation. This user selection and bandwidth allocation problem is formulated as an optimization problem whose goal is to minimize the training loss of FL. We first use a model compression method to reduce the size of the FL model parameters that are transmitted over wireless links. Then, the optimization problem is separated into two subproblems. The first subproblem is a user selection problem with a given bandwidth allocation, which is solved by a traversal algorithm. The second subproblem is a bandwidth allocation problem with a given user selection, which is solved by a numerical method. The ultimate user selection and bandwidth allocation are obtained by iteratively compressing the model and solving these two subproblems. Simulation results show that the proposed FL algorithm can improve the accuracy of object recognition by up to 16.7% and increase the number of selected users by up to 68.7%, compared to a conventional FL algorithm using only RF.
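
A toy sketch of the two subproblems: for a candidate user subset, bandwidth is split so that all selected users finish uploading their compressed models at the same time, and subsets are traversed from largest to smallest until the latency deadline is met. All sizes, rates, and the deadline are hypothetical, and the true objective (FL training loss) is replaced here by "select as many users as possible":

```python
import itertools

model_mbit = {"u1": 4.0, "u2": 2.5, "u3": 6.0, "u4": 3.0}    # compressed model sizes (Mbit)
rate_per_mhz = {"u1": 2.0, "u2": 1.0, "u3": 3.5, "u4": 1.5}  # spectral efficiency (Mbit/s/MHz)
B, deadline = 10.0, 0.6                                       # total bandwidth (MHz), deadline (s)

best = None
for r in range(len(model_mbit), 0, -1):                       # prefer larger subsets
    for subset in itertools.combinations(model_mbit, r):      # traversal over user subsets
        # Bandwidth subproblem: with the subset fixed, an equal-finish-time split
        # (bandwidth proportional to size/rate) minimizes the slowest upload.
        weights = {u: model_mbit[u] / rate_per_mhz[u] for u in subset}
        total = sum(weights.values())
        bw = {u: B * weights[u] / total for u in subset}
        slowest = max(model_mbit[u] / (bw[u] * rate_per_mhz[u]) for u in subset)
        if slowest <= deadline:
            best = (subset, {u: round(b, 2) for u, b in bw.items()}, round(slowest, 3))
            break
    if best:
        break

print("selected users, bandwidth split (MHz), upload time (s):", best)
```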

18.
iScience ; 24(11): 103278, 2021 Nov 19.
Article in English | MEDLINE | ID: mdl-34755098

ABSTRACT

Despite extensive research in the past five years and several successfully completed and ongoing pilot projects, regulators are still reluctant to implement peer-to-peer trading at a large scale in today's electricity market. The reason could partly be attributed to the perceived disadvantage of current market participants, such as retailers, due to their exclusion from market participation, a fundamental property of decentralized peer-to-peer trading. As a consequence, there has recently been growing pressure from energy service providers in favor of retailers' participation in peer-to-peer trading. However, the role of retailers in the peer-to-peer market is yet to be established, as no existing study has challenged this fundamental premise of decentralized trading. In this context, this perspective takes the first step in discussing the feasibility of retailers' involvement in the peer-to-peer market. In doing so, we identify key characteristics of retail-based and peer-to-peer electricity markets and discuss our viewpoint on how to incorporate a single retailer into a peer-to-peer market without compromising the fundamental decision-making characteristics of both markets. Finally, we give an example of a hypothetical business model to demonstrate how a retailer can be part of a peer-to-peer market with a promise of collective benefits for the participants.

19.
Entropy (Basel) ; 23(7)2021 Jul 19.
Article in English | MEDLINE | ID: mdl-34356457

ABSTRACT

Short-packet transmission has attracted considerable attention due to its potential to achieve ultralow latency in automated driving, telesurgery, the Industrial Internet of Things (IIoT), and other applications emerging in the coming era of Sixth-Generation (6G) wireless networks. In 6G systems, a paradigm-shifting infrastructure is anticipated to provide seamless coverage by integrating low-Earth orbit (LEO) satellite networks, which enable long-distance wireless relaying. However, how to efficiently transmit short packets over a sizeable spatial scale remains an open problem. In this paper, we are interested in low-latency short-packet transmissions between two distant nodes, in which neither propagation delay nor propagation loss can be ignored. Decode-and-forward (DF) relays can be deployed to regenerate packets reliably during their delivery over a long distance, thereby reducing the signal-to-noise ratio (SNR) loss. However, they also cause decoding delay in each hop, the sum of which may become large and cannot be ignored given the stringent latency constraints. This paper presents an optimal relay deployment to minimize the error probability while meeting both the latency and transmission power constraints. Based on an asymptotic analysis, a theoretical performance bound for distant short-packet transmission is also characterized by the optimal distance-latency-reliability tradeoff, which is expected to provide insights into designing integrated LEO satellite communications in 6G.
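
A toy numerical sketch of the deployment tradeoff: adding DF hops shortens each hop (raising per-hop SNR and lowering error) but adds per-hop decoding delay against a latency budget. The link model, constants, and per-hop error proxy below are hypothetical, not the paper's finite-blocklength analysis:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

D = 2000e3                  # end-to-end distance (m), illustrative long relayed link
alpha, snr_ref = 2.0, 1e13  # path-loss exponent and reference gain (illustrative)
prop_speed, t_decode, latency_budget = 3e8, 2e-3, 30e-3   # m/s, s per hop, s

def end_to_end(n_hops):
    d = D / n_hops
    snr = snr_ref / d ** alpha
    eps_hop = q_func(math.sqrt(snr))              # crude per-hop error proxy
    eps = 1 - (1 - eps_hop) ** n_hops             # fails if any hop fails
    latency = D / prop_speed + n_hops * t_decode  # propagation + per-hop decoding delay
    return eps, latency

for n in range(1, 13):
    eps, lat = end_to_end(n)
    tag = "ok" if lat <= latency_budget else "violates latency budget"
    print(f"{n:2d} hops  error={eps:.2e}  latency={lat * 1e3:.1f} ms  {tag}")
```

Under these made-up numbers, the error probability drops rapidly with more hops while the latency budget caps how many hops can be deployed, which is the distance-latency-reliability tension the paper optimizes.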

20.
Entropy (Basel) ; 23(8)2021 Jul 27.
Article in English | MEDLINE | ID: mdl-34441100

ABSTRACT

Lightweight session key agreement schemes are expected to play a central role in building Internet of Things (IoT) security in sixth-generation (6G) networks. A well-established approach deriving from the physical layer is secret key generation (SKG) from shared randomness (in the form of wireless fading coefficients). However, although practical, SKG schemes have been shown to be vulnerable to active attacks during the initial "advantage distillation" phase, during which estimates of the fading coefficients are obtained at the legitimate users. In fact, by injecting carefully designed signals during this phase, a man-in-the-middle (MiM) attack could manipulate and control part of the reconciled bits and thus render SKG vulnerable to brute-force attacks. Alternatively, a denial-of-service attack can be mounted by a reactive jammer. In this paper, we investigate the impact of injection and jamming attacks during advantage distillation in a multiple-input multiple-output (MIMO) system. First, we show that a MiM attack can be mounted as long as the attacker has one extra antenna with respect to the legitimate users, and we propose a pilot randomization scheme that allows the legitimate users to successfully reduce the injection attack to a less harmful jamming attack. Second, by taking a game-theoretic approach, we evaluate the optimal strategies available to the legitimate users in the presence of reactive jammers.
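
A toy sketch of the advantage-distillation phase in the absence of an attacker: Alice and Bob quantize noisy estimates of the same reciprocal fading coefficients and measure the raw-bit disagreement that reconciliation must later correct. The channel model and noise levels are hypothetical, and the injection/jamming attacks and pilot randomization are not modeled here:

```python
import numpy as np

rng = np.random.default_rng(0)

n_coeff = 4096
h = rng.normal(0.0, 1.0, n_coeff)               # shared reciprocal fading coefficients
h_alice = h + rng.normal(0.0, 0.3, n_coeff)     # Alice's noisy estimate
h_bob = h + rng.normal(0.0, 0.3, n_coeff)       # Bob's noisy estimate

bits_alice = (h_alice > 0).astype(int)          # 1-bit sign quantization per coefficient
bits_bob = (h_bob > 0).astype(int)

disagreement = float(np.mean(bits_alice != bits_bob))
print("raw key disagreement rate before reconciliation:", round(disagreement, 4))
```

An injection attack would add an attacker-controlled bias to these estimates during this phase; the pilot randomization scheme studied in the paper aims to strip that control so that only a jamming-like disturbance remains.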
