Results 1 - 3 of 3
1.
PLoS One; 19(6): e0304467, 2024.
Article in English | MEDLINE | ID: mdl-38905256

ABSTRACT

The security crowd-testing regulatory mechanism is a vital means of promoting collaborative vulnerability disclosure. However, existing regulatory mechanisms do not account for multi-agent responsibility boundaries or stakeholders' conflicts of interest, which leads to their dysfunction. Unlike previous research, which examines the motivations and constraints of ethical hackers' vulnerability disclosure behaviors from a legal perspective, this paper takes a managerial perspective and constructs an evolutionary game model of Security Response Centers (SRCs), security researchers, and the government to propose regulatory mechanisms that promote tripartite collaborative vulnerability disclosure. The results show that the higher the initial willingness of the three parties to choose the collaborative strategy, the faster the system evolves to a stable state. Regarding the government's incentive mechanism, establishing reward and punishment mechanisms based on effective thresholds is essential; notably, the government has an incentive to adopt such mechanisms only if it receives sufficient regulatory benefits. To further facilitate collaborative disclosure, SRCs should establish both punishment and trust mechanisms. In addition, publicity and training mechanisms for security researchers should be introduced to reduce the revenue they can earn from illegal participation, which promotes the healthy development of security crowd-testing. These findings contribute to improving SRCs' service quality, guiding security researchers' legal participation, enhancing the government's regulatory effectiveness, and ultimately establishing a multi-party collaborative vulnerability disclosure system.


Subject(s)
Game Theory, Humans, Disclosure, Cooperative Behavior, Security Measures, Punishment/psychology
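
The tripartite evolutionary game described in the abstract above lends itself to a replicator-dynamics simulation. The sketch below is a minimal illustration of the first reported finding (higher initial willingness speeds convergence to the all-collaborative state); the payoff coefficients and thresholds are invented placeholders, not parameters taken from the paper.

```python
# Minimal replicator-dynamics sketch of a tripartite game: x, y, z are the
# probabilities that SRCs, security researchers, and the government choose
# their collaborative/regulatory strategy. All payoff coefficients below are
# illustrative placeholders, not values from the paper.

def replicator_step(x, y, z, dt=0.01):
    # Assumed payoff advantages of cooperating, each increasing in the other
    # parties' willingness (reward/punishment effects folded into the constants).
    dx = x * (1 - x) * (0.5 * y + 0.8 * z - 0.2)   # SRCs
    dy = y * (1 - y) * (0.6 * x + 0.7 * z - 0.25)  # security researchers
    dz = z * (1 - z) * (0.9 * x + 0.4 * y - 0.15)  # government
    return x + dt * dx, y + dt * dy, z + dt * dz

def evolve(x, y, z, steps=50000, tol=1e-6):
    for t in range(steps):
        nx, ny, nz = replicator_step(x, y, z)
        if max(abs(nx - x), abs(ny - y), abs(nz - z)) < tol:
            return (nx, ny, nz), t
        x, y, z = nx, ny, nz
    return (x, y, z), steps

# Higher initial willingness -> fewer steps to settle on the collaborative state.
for init in (0.3, 0.6, 0.9):
    state, n = evolve(init, init, init)
    print(f"init={init:.1f} -> state={tuple(round(s, 3) for s in state)}, steps={n}")
```

With these assumed payoffs the system converges to full collaboration from each starting point, and the step count drops as the initial willingness rises, mirroring the qualitative claim in the abstract.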
2.
Math Biosci Eng; 20(11): 19012-19039, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-38052589

ABSTRACT

Various regulatory mechanisms exist to coordinate vulnerability disclosure behaviors during crowdsourced cybersecurity testing. However, when regulatory effectiveness is unclear, enterprises cannot obtain sufficient vulnerability information, third-party crowdsourced cybersecurity testing platforms fail to provide trusted services, and the government lacks strong credibility. We construct a tripartite evolutionary game model to analyze how the equilibrium of {legal disclosure, active operation, strict regulation} evolves and to reveal the impact of three regulatory mechanisms. We find that the participants' positive behaviors constitute a stable state. Higher initial willingness accelerates convergence to the system's evolutionarily stable state, and this equilibrium holds only if the government's regulatory benefits are sufficiently high. Regarding the punishment mechanism, increased punishment of enterprises causes them to adopt positive behaviors faster, while the opposite holds for platforms; increased punishment of platforms drives both participants to adopt positive behaviors faster. Concerning the subsidy mechanism, increased subsidies to enterprises cause them to adopt legal disclosure behaviors faster, while platforms remain unresponsive; increased subsidies to platforms motivate both players to choose their positive behaviors. In terms of the collaborative disclosure mechanism, excessive collaboration costs reduce platforms' willingness to operate actively, which in turn weakens enterprises' incentives to disclose vulnerabilities legally. These findings guide the government in establishing suitable mechanisms to regulate participants' behavior and promote the healthy development of the cybersecurity crowdsourcing industry.


Subject(s)
Crowdsourcing, Humans, Disclosure, Biological Evolution, Computer Security, Health Status, China
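
As a companion to the abstract above, the sketch below illustrates one of the reported comparative statics: a larger fine on passively operating platforms shortens the time both platforms and enterprises need to settle on their positive behaviors. The payoff forms and the `platform_fine` parameter are illustrative assumptions, not the paper's parameterization.

```python
# Sketch of a punishment-mechanism comparison: replicator dynamics for
# enterprises (x), platforms (y), and the government (z), with a hypothetical
# fine `platform_fine` levied on passive platforms when regulation is strict.
# Payoff terms are illustrative placeholders, not the paper's model.

def steps_to_converge(platform_fine, init=0.4, dt=0.01, tol=1e-6, max_steps=50000):
    x = y = z = init  # shares choosing legal disclosure / active operation / strict regulation
    for t in range(max_steps):
        # Assumed expected-payoff advantages of the positive strategies:
        gx = 0.6 * y + 0.5 * z - 0.3            # enterprises
        gy = 0.5 * x + z * platform_fine - 0.4  # platforms: the fine only bites under regulation
        gz = 0.4 * x + 0.4 * y - 0.2            # government
        dx = dt * x * (1 - x) * gx
        dy = dt * y * (1 - y) * gy
        dz = dt * z * (1 - z) * gz
        if max(abs(dx), abs(dy), abs(dz)) < tol:
            return t
        x, y, z = x + dx, y + dy, z + dz
    return max_steps

# A larger fine on passive platforms should shorten the time the whole system
# needs to reach the positive-behavior equilibrium.
for fine in (0.5, 1.0, 2.0):
    print(f"platform_fine={fine:.1f} -> convergence after {steps_to_converge(fine)} steps")
```

Because the enterprises' payoff advantage depends on the platforms' behavior, speeding up the platforms also speeds up the enterprises, which is the spillover effect the abstract describes.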
3.
PLoS One; 18(2): e0281314, 2023.
Article in English | MEDLINE | ID: mdl-36745656

ABSTRACT

This study investigates the effects of information sharing and a deferral option on a firm's information security investment strategies, considering the strategic interaction between a firm and an attacker. We find that (1) information sharing decreases a firm's security investment rate; (2) if a deferral decision is possible, the firm decreases its immediate investment and avoids non-investment; (3) after information sharing, the probability that the firm defers its decision increases for low-benefit information (SL) but decreases for high-benefit information (SH); (4) when information sharing accuracy is low, the firm defers its decision only for a fraction of SL, whereas when accuracy is high, it defers for all SL and a fraction of SH; and (5) information sharing improves the effect of the deferral decision when accuracy is low but weakens it when accuracy is high. These results contradict the literature, in which information sharing reduces a firm's uncertainty about cybersecurity investment and decreases the deferral options associated with investment.


Subject(s)
Investments, Uncertainty
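
The deferral logic in the abstract above can be pictured as a posterior update followed by a payoff comparison. The sketch below is a back-of-the-envelope illustration only: the symmetric-accuracy signal model, the payoff numbers, and the fixed option value of deferring are assumptions, and the paper's actual game with an attacker is richer.

```python
# Illustrative invest-now vs. defer comparison under a shared signal about
# whether the threat information is high-benefit (SH) or low-benefit (SL).
# All numbers and the simple Bayesian signal model are assumptions, not the
# paper's model.

def posterior_high(prior_high, accuracy, signal_is_high):
    """P(state is high-benefit | shared signal), for a symmetric-accuracy signal."""
    p_signal = (accuracy * prior_high + (1 - accuracy) * (1 - prior_high)
                if signal_is_high else
                (1 - accuracy) * prior_high + accuracy * (1 - prior_high))
    p_joint = (accuracy if signal_is_high else (1 - accuracy)) * prior_high
    return p_joint / p_signal

def firm_choice(prior_high=0.5, accuracy=0.7, signal_is_high=False,
                invest_cost=1.0, benefit_high=2.0, benefit_low=0.6, defer_value=0.3):
    """Compare the expected payoff of investing immediately with deferring."""
    p = posterior_high(prior_high, accuracy, signal_is_high)
    invest_now = p * benefit_high + (1 - p) * benefit_low - invest_cost
    # Deferring keeps a fixed option value here (a stand-in for learning more later).
    return ("invest now" if invest_now >= defer_value else "defer",
            round(invest_now, 3), defer_value)

# How the shared signal and its accuracy shift the posterior, and hence the
# invest-now payoff relative to the option value of waiting.
for acc in (0.6, 0.9):
    for sig_high in (False, True):
        label = "SH" if sig_high else "SL"
        decision, v_now, v_defer = firm_choice(accuracy=acc, signal_is_high=sig_high)
        print(f"accuracy={acc:.1f} signal={label}: invest_now={v_now} vs defer={v_defer} -> {decision}")
```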