Results 1 - 2 of 2
1.
J Environ Manage ; 345: 118587, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37442038

ABSTRACT

This empirical study examines the impact of environmental regulations on carbon productivity under varying conditions, using panel data from Chinese provinces for 2011-2019. Prior research has reported inconsistent results regarding the relationships between these variables. We developed a spatial Durbin model (SDM) and tested the non-linear effects of environmental regulation on carbon productivity from a spatial-linkage perspective. The results demonstrate a U-shaped curve representing the local-neighborhood effect of environmental regulations on carbon productivity. This curve is further decomposed into two components: the average direct effect (ADE) and the average indirect effect (AIE). The findings also indicate that green technological progress and pollution transfer act as moderating factors shaping the U-shaped curve: green technological progress steepens the curve, whereas pollution transfer flattens it. Based on these findings, we propose three recommendations for the formulation of environmental regulation policies.


Subject(s)
Carbon , Environmental Pollution , Carbon/analysis , Environmental Pollution/analysis , Technology , Efficiency , Carbon Dioxide/analysis , China , Economic Development
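The U-shaped relationship reported in this abstract is the kind of non-linearity typically tested by including a quadratic regulation term in the regression. The following is a minimal illustrative sketch on simulated data, not the paper's spatial Durbin model or its actual provincial dataset; all variable names and values are hypothetical.

```python
# Hypothetical sketch: detecting a U-shaped (quadratic) effect with OLS.
# The data below are simulated; the paper's real model is a spatial Durbin
# model on Chinese provincial panel data, which is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
n = 300
regulation = rng.uniform(0, 4, n)  # illustrative regulation intensity

# Simulate a true U-shaped relation: productivity falls, then rises.
carbon_productivity = (1.0 - 1.5 * regulation + 0.5 * regulation**2
                       + rng.normal(0, 0.1, n))

# Design matrix with intercept, linear, and quadratic terms.
X = np.column_stack([np.ones(n), regulation, regulation**2])
beta, *_ = np.linalg.lstsq(X, carbon_productivity, rcond=None)

# A U shape requires a negative linear and a positive quadratic
# coefficient, with the turning point inside the observed range.
turning_point = -beta[1] / (2 * beta[2])
u_shaped = (beta[1] < 0 < beta[2]) and (0 < turning_point < 4)
print(u_shaped)
```

A steeper U (the green-technology moderation the abstract describes) corresponds to a larger quadratic coefficient; a flatter U (the pollution-transfer moderation) to a smaller one.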
2.
Proc Natl Acad Sci U S A ; 117(48): 30079-30087, 2020 12 01.
Article in English | MEDLINE | ID: mdl-32817541

ABSTRACT

The combination of reinforcement learning with deep learning is a promising approach to tackle important sequential decision-making problems that are currently intractable. One obstacle to overcome is the amount of data needed by learning systems of this type. In this article, we propose to address this issue through a divide-and-conquer approach. We argue that complex decision problems can be naturally decomposed into multiple tasks that unfold in sequence or in parallel. By associating each task with a reward function, this problem decomposition can be seamlessly accommodated within the standard reinforcement-learning formalism. The specific way we do so is through a generalization of two fundamental operations in reinforcement learning: policy improvement and policy evaluation. The generalized versions of these operations allow one to leverage the solution of some tasks to speed up the solution of others. If the reward function of a task can be well approximated as a linear combination of the reward functions of tasks previously solved, we can reduce a reinforcement-learning problem to a simpler linear-regression problem. When this is not the case, the agent can still exploit the task solutions by using them to interact with and learn about the environment. Both strategies considerably reduce the amount of data needed to solve a reinforcement-learning problem.
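The linear-regression reduction mentioned in the abstract can be sketched as an ordinary least-squares fit: given the rewards of previously solved tasks evaluated on sampled transitions, find weights so their combination approximates the new task's reward. This is an illustrative toy on synthetic data, not the authors' implementation; the task rewards and weights below are made up.

```python
# Hypothetical sketch of the linear-regression step: approximate a new
# task's reward as a weighted sum of the rewards of previously solved
# tasks. All data here are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 500

# Rewards of three previously solved tasks on sampled transitions
# (one column per task).
r_old = rng.normal(size=(n_samples, 3))

# A new task whose reward is (approximately) a linear combination
# of the old rewards, plus small noise.
true_w = np.array([0.7, -0.2, 1.1])
r_new = r_old @ true_w + rng.normal(0, 0.01, n_samples)

# Solve the least-squares problem r_new ≈ r_old @ w.
w, *_ = np.linalg.lstsq(r_old, r_new, rcond=None)
print(np.round(w, 2))
```

When the fit is good, the recovered weights let the agent combine the old tasks' solutions directly instead of learning the new task from scratch; when the residual is large, the abstract's second strategy applies and the old solutions are used only to gather experience.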
