Results 1 - 20 of 9,530
1.
Euro Surveill ; 29(38)2024 Sep.
Article in English | MEDLINE | ID: mdl-39301744

ABSTRACT

BACKGROUND: The wide application of machine learning (ML) holds great potential to improve public health by supporting data analysis that informs policy and practice. Its application, however, is often hampered by data fragmentation across organisations and strict regulation by the General Data Protection Regulation (GDPR). Federated learning (FL), as a decentralised approach to ML, has received considerable interest as a means to overcome the fragmentation of data, but it is yet unclear to what extent this approach complies with the GDPR. AIM: Our aim was to understand the potential data protection implications of the use of federated learning for public health purposes. METHODS: Building upon semi-structured interviews (n = 14) and a panel discussion (n = 5) with key opinion leaders in Europe, including both FL and GDPR experts, we explored how GDPR principles would apply to the implementation of FL within public health. RESULTS: While this study found that FL offers substantial benefits such as data minimisation, storage limitation and effective mitigation of many of the privacy risks of sharing personal data, it also identified various challenges. These challenges mostly relate to the increased difficulty of checking data at the source and the limited understanding of potential adverse outcomes of the technology. CONCLUSION: Since FL is still in its early phase and under rapid development, it is expected that knowledge of its impracticalities will increase rapidly, potentially addressing remaining challenges. In the meantime, this study reflects on the potential of FL to align with data protection objectives and offers guidance on GDPR compliance.
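
[Editor's note] As a minimal, hedged illustration of the decentralised ML idea discussed in this abstract, the sketch below shows federated averaging (FedAvg): each site trains locally and only model parameters, never patient-level data, are exchanged and averaged. The names and sample sizes are illustrative and not taken from the study.

```python
# Illustrative FedAvg sketch: average per-site model weights, weighted by site size.
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-site parameter lists (each a list of np.ndarrays)."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Example: two hospitals with different cohort sizes contribute one weight matrix each.
site_a = [np.ones((2, 2))]
site_b = [np.zeros((2, 2))]
print(fed_avg([site_a, site_b], client_sizes=[300, 100]))  # -> 0.75 everywhere
```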


Subject(s)
Public Health , Humans , Europe , Qualitative Research , Machine Learning , Computer Security , Information Dissemination
2.
Sci Rep ; 14(1): 21532, 2024 09 15.
Article in English | MEDLINE | ID: mdl-39278954

ABSTRACT

Advances in technology, particularly the Internet of Things (IoT), are making remote medical care and observation possible, but effective and secure retrieval of healthcare information remains complex. IoT systems have restricted resources, so achieving efficient and secure acquisition of healthcare information is difficult. The idea of smart healthcare has developed in diverse regions, where small-scale implementations of medical facilities are evaluated. In IoT-aided medical devices, the security of the IoT systems and the related information is essential; edge computing is a significant framework that addresses their processing and computational issues. Edge computing is inexpensive and offers low-latency information assistance by enhancing the computation and transmission speed of IoT systems in the medical sector. The main intention of this work is to design a secure framework for edge computing in IoT-enabled healthcare systems using heuristic-based authentication and Named Data Networking (NDN). There are three layers in the proposed model. In the first layer, many IoT devices are connected together and, using cluster head formation, patients transmit their data to the edge cloud layer. The edge cloud layer is responsible for the storage and computing resources needed to rapidly cache and provide medical data. The patient layer applies a new heuristic-based sanitization algorithm called Revised Position of Cat Swarm Optimization (RPCSO) with NDN to hide sensitive data that should not be leaked to unauthorized users. The authentication procedure is formulated as a multi-objective key generation problem with constraints such as hiding failure rate, information preservation rate, and degree of modification. The data from the edge cloud layer is then transferred to the user layer, where optimal key generation with NDN-based restoration is adopted, achieving efficient and secure medical data retrieval. The framework is evaluated quantitatively on diverse healthcare datasets from the University of California (UCI) and Kaggle repositories, and the experimental analysis shows the superior performance of the proposed model in terms of latency and cost when compared to existing solutions. The proposed model is compared against existing algorithms such as Cat Swarm Optimization (CSO), the Osprey Optimization Algorithm (OOA), Mexican Axolotl Optimization (MAO), and the Single Candidate Optimizer (SCO). Similarly, cryptographic methods such as Rivest-Shamir-Adleman (RSA), the Advanced Encryption Standard (AES), Elliptic Curve Cryptography (ECC), and Data Sanitization and Restoration (DSR) are applied and compared with RPCSO. The results are compared on the basis of the best, worst, mean, median, and standard deviation. The proposed RPCSO outperforms all other models, with values of 0.018069361, 0.50564046, 0.112643119, 0.018069361, 0.156968355 and 0.283597992, 0.467442652, 0.32920734, 0.328581887, 0.063687386 for dataset 1 and dataset 2, respectively.


Subject(s)
Cloud Computing , Computer Security , Internet of Things , Humans , Heuristics , Algorithms , Delivery of Health Care , Computer Communication Networks
3.
J Med Syst ; 48(1): 90, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39298041

ABSTRACT

IT has made significant progress in various fields over the past few years, with many industries transitioning from paper-based to electronic media. However, sharing electronic medical records remains a long-term challenge, particularly when patients are in emergency situations, making it difficult to access and control their medical information. Previous studies have proposed permissioned blockchains with limited participants or mechanisms that allow emergency medical information sharing with pre-designated participants. However, permissioned blockchains require prior participation by medical institutions, and limiting sharing entities restricts the number of potential partners. This means that sharing medical information with local emergency doctors becomes impossible if a patient is unconscious and far away from home, such as when traveling abroad. To tackle this challenge, we propose an emergency access control system for a global electronic medical information system that can be shared using a public blockchain, allowing anyone to participate. Our proposed system assumes that the patient wears a pendant with tamper-proof and biometric authentication capabilities. In the event of unconsciousness, emergency doctors can perform biometrics on behalf of the patient, allowing the family doctor to share health records with the emergency doctor through a secure channel that uses the Diffie-Hellman (DH) key exchange protocol. The pendant's biometric authentication function prevents unauthorized use if it is stolen, and we have measured the fee for using the public blockchain, demonstrating that the proposed system is practical.
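
[Editor's note] A minimal sketch of the Diffie-Hellman style key agreement referenced in this abstract, using X25519 from the Python 'cryptography' package; the participant names and channel label are illustrative, not the paper's actual protocol.

```python
# Two parties derive the same shared secret without transmitting it.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Family doctor and emergency doctor each generate an ephemeral key pair.
family_priv = X25519PrivateKey.generate()
emergency_priv = X25519PrivateKey.generate()

# Each side combines its own private key with the peer's public key ...
shared_family = family_priv.exchange(emergency_priv.public_key())
shared_emergency = emergency_priv.exchange(family_priv.public_key())
assert shared_family == shared_emergency  # both sides hold the same secret

# ... and derives a symmetric session key for the secure channel.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"emergency-ehr-channel").derive(shared_family)
```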


Subject(s)
Blockchain , Computer Security , Electronic Health Records , Humans , Electronic Health Records/organization & administration , Confidentiality , Health Information Exchange
4.
PLoS One ; 19(9): e0309743, 2024.
Article in English | MEDLINE | ID: mdl-39298389

ABSTRACT

The unauthorized replication and distribution of digital images pose significant challenges to copyright protection. While existing solutions incorporate blockchain-based techniques such as perceptual hashing and digital watermarking, they lack large-scale experimental validation and a dedicated blockchain consensus protocol for image copyright management. This paper introduces DRPChain, a novel digital image copyright management system that addresses these issues. DRPChain employs an efficient cropping-resistant robust image hashing algorithm to defend against 14 common image attacks, demonstrating an 85% success rate in watermark extraction, 10% higher than the original scheme. Moreover, the paper designs the K-Raft consensus algorithm tailored for image copyright protection. Comparative experiments with Raft and benchmarking against PoW and PBFT algorithms show that K-Raft reduces block error rates by 2%, improves efficiency by 300 ms compared to Raft, and exhibits superior efficiency, decentralization, and throughput compared to PoW and PBFT. These advantages make K-Raft more suitable for digital image copyright protection. This research contributes valuable insights into using blockchain technology for digital copyright protection, providing a solid foundation for future exploration.
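
[Editor's note] The sketch below illustrates the general idea of perceptual image hashing mentioned above with a simple "difference hash" (dHash); the paper's cropping-resistant robust hash is more elaborate, and this toy version only shows how visually similar images map to nearby hashes.

```python
# Toy perceptual hash: resize, grayscale, compare adjacent pixels to get 64 bits.
from PIL import Image
import numpy as np

def dhash(path, size=8):
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = np.asarray(img, dtype=np.int16)
    bits = (px[:, 1:] > px[:, :-1]).flatten()        # gradient sign per pixel pair
    return "".join("1" if b else "0" for b in bits)   # 64-bit hash string

def hamming(h1, h2):
    # Small distance between hashes suggests the images are near-duplicates.
    return sum(a != b for a, b in zip(h1, h2))
```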


Subject(s)
Algorithms , Blockchain , Computer Security , Copyright , Image Processing, Computer-Assisted/methods
5.
PLoS One ; 19(9): e0309809, 2024.
Article in English | MEDLINE | ID: mdl-39255289

ABSTRACT

More and more attention has been paid to computer security, and its vulnerabilities urgently need more sensitive solutions. Because the data in most vulnerability libraries is incomplete, it is difficult to obtain the pre-permissions and post-permissions of vulnerabilities and to construct vulnerability exploitation chains, so vulnerabilities cannot be responded to in time. Therefore, a vulnerability extraction and prediction method based on an improved information gain algorithm is proposed. Considering the accuracy and response speed of deep neural networks, a deep neural network is adopted as the basic framework. The dropout method effectively reduces overfitting in the case of incomplete data, thus improving the ability to extract and predict vulnerabilities. Experiments confirmed that the F1 score and recall of the improved method reached 0.972 and 0.968, respectively. Compared to the function-fingerprint vulnerability detection method and the K-nearest neighbor algorithm, its convergence is better, and its response time of 0.12 seconds is excellent. To ensure the reliability and validity of the proposed method in the face of missing data, a mask test was used to verify both: the false negative rate was 0.3% and the false positive rate was 0.6%. The prediction accuracy of this method for pre-permissions reached 97.9%, and it can adapt more actively to the evolution of permissions, allowing it to cope with practical challenges. In this way, companies can detect and discover vulnerabilities earlier, and during security repair the method can effectively improve repair speed and reduce response time. The prediction accuracy for post-permissions reaches 96.8%, indicating that this method can significantly improve the speed and efficiency of vulnerability response and strengthen the understanding and construction of vulnerability exploitation chains. Predicting post-permissions can reduce the attack surface of a vulnerability, thus reducing the risk of breach, speeding up vulnerability detection, and ensuring the timely implementation of security measures. This model can be applied to public network security and application security scenarios in the field of computer security, as well as to personal computer security and enterprise cloud server security. In addition, the model can be used to analyze attack paths and security gaps after security incidents. However, the prediction of post-permissions is susceptible to dynamic environments and relies heavily on updated guidance from security policy rules. This method can improve the accuracy of vulnerability extraction and prediction, quickly identify and respond to security vulnerabilities, shorten the window of vulnerability exploitation, effectively reduce security risks, and improve the overall network security defense capability. Through the application of this model, the frequency of security vulnerability incidents is effectively reduced and vulnerability repair time is shortened.
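
[Editor's note] As a rough sketch of the information-gain step named in this abstract, the snippet below ranks toy vulnerability-description features by mutual information (information gain) before any classifier is applied; the paper's improved variant and deep neural network are not reproduced, and all data and labels are invented for illustration.

```python
# Rank bag-of-words features of vulnerability descriptions by information gain.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

descriptions = ["buffer overflow in parser", "sql injection in login form",
                "use after free in renderer", "cross site scripting in search"]
labels = [1, 0, 1, 0]  # toy labels, e.g. 1 = leads to permission escalation

vec = CountVectorizer()
X = vec.fit_transform(descriptions)
gain = mutual_info_classif(X, labels, discrete_features=True)

# Highest-gain terms would be kept as inputs to the downstream predictor.
ranked = sorted(zip(vec.get_feature_names_out(), gain), key=lambda t: -t[1])
print(ranked[:5])
```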


Subject(s)
Algorithms , Computer Security , Neural Networks, Computer , Reproducibility of Results , Humans
6.
PLoS One ; 19(9): e0310407, 2024.
Article in English | MEDLINE | ID: mdl-39292723

ABSTRACT

The recent global outbreaks of infectious diseases such as COVID-19, yellow fever, and Ebola have highlighted the critical need for robust health data management systems that can rapidly adapt to and mitigate public health emergencies. In contrast to traditional systems, this study introduces an innovative blockchain-based Electronic Health Record (EHR) access control mechanism that effectively safeguards patient data integrity and privacy. The proposed approach uniquely integrates granular data access control mechanism within a blockchain framework, ensuring that patient data is only accessible to explicitly authorized users and thereby enhancing patient consent and privacy. This system addresses key challenges in healthcare data management, including preventing unauthorized access and overcoming the inefficiencies inherent in traditional access mechanisms. Since the latency is a sensitive factor in healthcare data management, the simulations of the proposed model reveal substantial improvements over existing benchmarks in terms of reduced computing overhead, increased throughput, minimized latency, and strengthened overall security. By demonstrating these advantages, the study contributes significantly to the evolution of health data management, offering a scalable, secure solution that prioritizes patient autonomy and privacy in an increasingly digital healthcare landscape.


Subject(s)
Blockchain , COVID-19 , Electronic Health Records , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Computer Security , SARS-CoV-2 , Privacy , Confidentiality , Communicable Diseases/epidemiology
7.
Stud Health Technol Inform ; 317: 11-19, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234702

ABSTRACT

BACKGROUND: In the context of the telematics infrastructure, new data usage regulations, and the growing potential of artificial intelligence, cloud computing plays a key role in driving the digitalization in the German hospital sector. METHODS: Against this background, the study aims to develop and validate a scale for assessing the cloud readiness of German hospitals. It uses the TPOM (Technology, People, Organization, Macro-Environment) framework to create a scoring system. A survey involving 110 Chief Information Officers (CIOs) from German hospitals was conducted, followed by an exploratory factor analysis and reliability testing to refine the items, resulting in a final set of 30 items. RESULTS: The analysis confirmed the statistical robustness and identified key factors contributing to cloud readiness. These include IT security in the dimension "technology", collaborative research and acceptance for the need to make high quality data available in the dimension "people", scalability of IT resources in the dimension "organization", and legal aspects in the dimension "macroenvironment". The macroenvironment dimension emerged as particularly stable, highlighting the critical role of regulatory compliance in the healthcare sector. CONCLUSION: The findings suggest a certain degree of cloud readiness among German hospitals, with potential for improvement in all four dimensions. Systemically, legal requirements and a challenging political environment are top concerns for CIOs, impacting their cloud readiness.
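
[Editor's note] The snippet below illustrates the two analysis steps named in this abstract, exploratory factor analysis and reliability testing, on toy Likert-style survey data; the item count, factor count, and subscale split are assumptions, not the study's actual 30-item instrument.

```python
# Illustrative EFA plus Cronbach's alpha on simulated survey responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(110, 30)).astype(float)   # 110 CIOs x 30 items

fa = FactorAnalysis(n_components=4, random_state=0).fit(responses)
loadings = fa.components_.T                                     # items x factors

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability of a set of items (columns)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print(round(cronbach_alpha(responses[:, :8]), 2))  # reliability of a hypothetical subscale
```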


Subject(s)
Cloud Computing , Germany , Hospitals , Computer Security , Humans , Surveys and Questionnaires
8.
Stud Health Technol Inform ; 317: 85-93, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234710

ABSTRACT

INTRODUCTION: With the establishment of the Data Sharing Framework (DSF) as a distributed business process engine in German research networks, it is becoming increasingly important to coordinate authentication, authorization, and role information between peer-to-peer network components. This information is provided in the form of an allowlist. This paper presents a concept and implementation of an Allowlist Management Application. STATE OF THE ART: In research networks using the DSF, allowlists were initially generated manually. CONCEPT: The Allowlist Management Application provides comprehensive tool support for the participating organizations and the administrators of the Allowlist Management Application. It automates the process of creating and distributing allowlists and additionally reduces errors associated with manual entries. In addition, security is improved through extensive validation of entries and enforcing review of requested changes by implementing a four-eyes principle. IMPLEMENTATION: Our implementation serves as a preliminary development for the complete automation of onboarding and allowlist management processes using established frontend and backend frameworks. The application has been deployed in the Medical Informatics Initiative and the Network University Medicine with over 40 participating organizations. LESSONS LEARNED: We learned the need for user guidance, unstructured communication in a structured tool, generalizability, and checks to ensure that the tool's outputs have actually been applied.


Subject(s)
Information Dissemination , Germany , Computer Security , Humans
9.
Stud Health Technol Inform ; 317: 59-66, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234707

ABSTRACT

INTRODUCTION: One of the goals of the German Medical Informatics Initiative (MII) is to support research projects that require medical data from multiple sites. The data integration centers (DIC) at university medical centers in Germany provide patient data via FHIR® in compliance with the MII core data set (CDS). Data protection requirements and other legal bases for processing favor decentralized processing of the relevant data in the DICs and the subsequent exchange of aggregated results for cross-site evaluation. METHODS: Requirements from clinical experts were obtained in the context of the MII use case INTERPOLAR. A software architecture was then developed, modeled using 3LGM2, and finally implemented and published in a GitHub repository. RESULTS: With the CDS tool chain, we have created software components for decentralized processing on the basis of the MII CDS. The CDS tool chain requires access to a local FHIR endpoint and transfers the data to an SQL database. This database is accessed by the DataProcessor component, which performs calculations with the help of rules (input repo) and writes the results back to the database. The CDS tool chain also has a frontend module (REDCap), which is used to display the output data and calculated results and allows verification, evaluation, comments and other responses. This feedback is also persisted in the database and is available for further use, analysis or data sharing in the future. DISCUSSION: Other solutions are conceivable. Our solution utilizes the advantages of an SQL database, which enables flexible and direct processing of the stored data using established analysis methods. Due to the modularization, adjustments can be made so that it can be used in other projects. We are planning further developments to support pseudonymization and data sharing. Initial experience is being gathered; an evaluation is planned.
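
[Editor's note] A hedged sketch of the general pattern described here, reading resources from a local FHIR endpoint and persisting them to an SQL database for later rule-based processing. The endpoint URL and table layout are assumptions for illustration, not the MII CDS tool chain itself.

```python
# Pull a FHIR Bundle from a local endpoint and store each resource in SQLite.
import json, sqlite3, requests

FHIR_BASE = "http://localhost:8080/fhir"   # assumed local FHIR endpoint

bundle = requests.get(f"{FHIR_BASE}/Observation", params={"_count": 100}).json()

con = sqlite3.connect("cds.db")
con.execute("CREATE TABLE IF NOT EXISTS observation (id TEXT PRIMARY KEY, resource TEXT)")
for entry in bundle.get("entry", []):
    res = entry["resource"]
    con.execute("INSERT OR REPLACE INTO observation VALUES (?, ?)",
                (res["id"], json.dumps(res)))
con.commit()
# A separate processor component could now run SQL queries or rules over this table.
```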


Subject(s)
Software , Germany , Electronic Health Records , Humans , Medical Informatics , Computer Security , Datasets as Topic
10.
Stud Health Technol Inform ; 317: 171-179, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234720

ABSTRACT

INTRODUCTION: The German Medical Text Project (GeMTeX) is one of the largest infrastructure efforts targeting German-language clinical documents. We here introduce the architecture of the de-identification pipeline of GeMTeX. METHODS: This pipeline comprises the export of raw clinical documents from the local hospital information system, the import into the annotation platform INCEpTION, fully automatic pre-tagging with protected health information (PHI) items by the Averbis Health Discovery pipeline, a manual curation step of these pre-annotated data, and, finally, the automatic replacement of PHI items with type-conformant substitutes. This design was implemented in a pilot study involving six annotators and two curators each at the Data Integration Centers of the University Hospitals Leipzig and Erlangen. RESULTS: As a proof of concept, the publicly available Graz Synthetic Text Clinical Corpus (GRASSCO) was enhanced with PHI annotations in an annotation campaign for which reasonable inter-annotator agreement values of Krippendorff's α ≈ 0.97 can be reported. CONCLUSION: These curated 1.4 K PHI annotations are released as open-source data constituting the first publicly available German clinical language text corpus with PHI metadata.
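
[Editor's note] The snippet below shows how an inter-annotator agreement value like the Krippendorff's α reported above can be computed, assuming the open-source Python 'krippendorff' package; the annotation matrix and label codes are toy values, not the GRASSCO campaign data.

```python
# Agreement between annotators on categorical PHI labels.
import numpy as np
import krippendorff

# Rows = annotators, columns = annotated spans; np.nan marks items an annotator skipped.
# Codes: 1 = NAME, 2 = DATE, 3 = LOCATION
ratings = np.array([[1, 2, 3, 1, np.nan],
                    [1, 2, 3, 1, 2],
                    [1, 2, 3, np.nan, 2]])

alpha = krippendorff.alpha(reliability_data=ratings, level_of_measurement="nominal")
print(round(alpha, 3))
```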


Subject(s)
Electronic Health Records , Pilot Projects , Germany , Natural Language Processing , Confidentiality , Humans , Computer Security
11.
Stud Health Technol Inform ; 317: 75-84, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234709

ABSTRACT

INTRODUCTION: Medical research studies which involve electronic data capture of sensitive data about human subjects need to manage medical and identifying participant data in a secure manner. To protect the identity of data subjects, an independent trusted third party should be responsible for pseudonymization and management of the identifying data. METHODS: We have developed a web-based integrated solution that combines REDCap as an electronic data capture system with the trusted third party software tools of the University Medicine Greifswald, which provides study personnel with a single user interface for both clinical data entry and management of identities, pseudonyms and informed consents. RESULTS: Integration of the two platforms enables a seamless workflow of registering new participants, entering identifying and consent information, and generating pseudonyms in the trusted third party system, with subsequent capturing of medical data in the electronic data capture system, while maintaining strict separation of medical and identifying data in the two independently managed systems. CONCLUSION: Our solution enables a time-efficient data entry workflow, provides a high level of data protection by minimizing visibility of identifying information and pseudonym lists, and avoids errors introduced by manual transfer of pseudonyms between separate systems.
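
[Editor's note] A minimal sketch of how a trusted third party might derive stable pseudonyms while keeping identifying data separate from medical data. This is a generic illustration under assumed names, not the University Medicine Greifswald trusted third party tools or their REDCap integration.

```python
# Keyed, deterministic pseudonyms: same person -> same pseudonym,
# but unlinkable to the identity without the trusted third party's key.
import hmac, hashlib, secrets

TTP_SECRET = secrets.token_bytes(32)   # held only by the trusted third party

def pseudonymize(identifying_record: str) -> str:
    digest = hmac.new(TTP_SECRET, identifying_record.encode(), hashlib.sha256).hexdigest()
    return "PSN-" + digest[:12].upper()

print(pseudonymize("Doe, Jane, 1980-05-17"))   # e.g. 'PSN-3F7A...'
# Only the pseudonym would accompany medical data in the electronic data capture system.
```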


Subject(s)
Biomedical Research , Computer Security , Confidentiality , Software , Informed Consent , Anonyms and Pseudonyms , Humans , Electronic Health Records , Systems Integration , User-Computer Interface
12.
Stud Health Technol Inform ; 317: 270-279, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234731

ABSTRACT

INTRODUCTION: A modern approach to ensuring privacy when sharing datasets is the use of synthetic data generation methods, which often claim to outperform classic anonymization techniques in the trade-off between data utility and privacy. Recently, it was demonstrated that various deep learning-based approaches are able to generate useful synthesized datasets, often based on domain-specific analyses. However, evaluating the privacy implications of releasing synthetic data remains a challenging problem, especially when the goal is to conform with data protection guidelines. METHODS: Therefore, the recent privacy risk quantification framework Anonymeter has been built for evaluating multiple possible vulnerabilities, which are specifically based on privacy risks that are considered by the European Data Protection Board, i.e. singling out, linkability, and attribute inference. This framework was applied to a synthetic data generation study from the epidemiological domain, where the synthesization replicates time and age trends previously found in data collected during the DONALD cohort study (1312 participants, 16 time points). The conducted privacy analyses are presented, which place a focus on the vulnerability of outliers. RESULTS: The resulting privacy scores are discussed, which vary greatly between the different types of attacks. CONCLUSION: Challenges encountered during their implementation and during the interpretation of their results are highlighted, and it is concluded that privacy risk assessment for synthetic data remains an open problem.
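
[Editor's note] As a simplified stand-in for the linkability risk discussed above (not the Anonymeter framework itself), the sketch below checks how often a synthetic record's nearest neighbour points back to the original record it was derived from; data and scales are invented.

```python
# Naive linkability check between synthetic and original records.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
original = rng.normal(size=(200, 4))                                  # original cohort records
synthetic = original + rng.normal(scale=0.05, size=original.shape)    # overly faithful synthesis

nn = NearestNeighbors(n_neighbors=1).fit(original)
_, idx = nn.kneighbors(synthetic)
link_rate = np.mean(idx.ravel() == np.arange(len(synthetic)))
print(f"fraction of synthetic records linking to their source: {link_rate:.2f}")
# A high fraction would indicate that the synthesis leaks individual-level information.
```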


Subject(s)
Computer Security , Risk Assessment , Humans , Longitudinal Studies , Confidentiality , Privacy
13.
Stud Health Technol Inform ; 317: 244-250, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234728

ABSTRACT

INTRODUCTION: Secure Multi-Party Computation (SMPC) offers a powerful tool for collaborative healthcare research while preserving patient data privacy. STATE OF THE ART: However, existing SMPC frameworks often require separate executions for each desired computation and measurement period, limiting user flexibility. CONCEPT: This research explores the potential of a client-driven metaprotocol for the Federated Secure Computing (FSC) framework and its SImple Multiparty ComputatiON (SIMON) protocol as a step towards more flexible SMPC solutions. IMPLEMENTATION: This client-driven metaprotocol empowers users to specify and execute multiple calculations across diverse measurement periods within a single client-side code execution. This eliminates the need for repeated code executions and streamlines the analysis process. The metaprotocol offers a user-friendly interface, enabling researchers with limited cryptography expertise to leverage the power of SMPC for complex healthcare analyses. LESSONS LEARNED: We evaluate the performance of the client-driven metaprotocol against a baseline iterative approach. Our evaluation demonstrates performance improvements compared to traditional iterative approaches, making this metaprotocol a valuable tool for advancing secure and efficient collaborative healthcare research.
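
[Editor's note] The toy sketch below illustrates the additive secret-sharing idea underlying SMPC, each site splits its value into random shares so no single party learns another's input, yet the sum is recoverable. It shows the principle only, not the FSC/SIMON protocol or the client-driven metaprotocol.

```python
# Secure sum via additive secret sharing over a prime field.
import secrets

PRIME = 2**61 - 1

def share(value, n_parties=3):
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

hospital_counts = [120, 75, 42]                       # each hospital's private count
all_shares = [share(v) for v in hospital_counts]

# Party i sums the i-th share of every hospital; only these partial sums are exchanged.
partial_sums = [sum(s[i] for s in all_shares) % PRIME for i in range(3)]
print(sum(partial_sums) % PRIME)                      # 237, without revealing any single input
```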


Subject(s)
Computer Security , Humans , Confidentiality
14.
Stud Health Technol Inform ; 317: 261-269, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39234730

ABSTRACT

INTRODUCTION: Retrieving comprehensible rule-based knowledge from medical data by machine learning is a beneficial task, e.g., for automating the process of creating a decision support system. While this has recently been studied by means of exception-tolerant hierarchical knowledge bases (i.e., knowledge bases, where rule-based knowledge is represented on several levels of abstraction), privacy concerns have not been addressed extensively in this context yet. However, privacy plays an important role, especially for medical applications. METHODS: When parts of the original dataset can be restored from a learned knowledge base, there may be a practically and legally relevant risk of re-identification for individuals. In this paper, we study privacy issues of exception-tolerant hierarchical knowledge bases which are learned from data. We propose approaches for determining and eliminating privacy issues of the learned knowledge bases. RESULTS: We present results for synthetic as well as for real world datasets. CONCLUSION: The results show that our approach effectively prevents privacy breaches while only moderately decreasing the inference quality.


Subject(s)
Confidentiality , Knowledge Bases , Machine Learning , Humans , Computer Security , Privacy , Electronic Health Records
15.
PLoS One ; 19(9): e0309919, 2024.
Article in English | MEDLINE | ID: mdl-39240999

ABSTRACT

In location-based services (LBS), private information retrieval (PIR) is an efficient strategy for preserving personal privacy. However, schemes based on the traditional strategy of information indexing are often criticized for their processing time and are ineffective in preserving the attribute privacy of the user. To address these two weaknesses, this paper proposes a PIR scheme based on ciphertext-policy attribute-based encryption (CP-ABE) for preserving personal privacy in LBS (a location privacy preservation scheme with CP-ABE-based PIR, LPPCAP for short). In this scheme, query and feedback are encrypted with secure two-party computation between the user and the LBS server, so that no personal privacy is violated and the processing time for encrypting the retrieved information is decreased. In addition, the scheme can also preserve the attribute privacy of users, such as query frequency and movement patterns. Finally, we analyze the availability and privacy of the proposed scheme and present several groups of comparison experiments, so that the effectiveness and usability of the proposed scheme can be verified both theoretically and practically while preserving the quality of service.
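
[Editor's note] For background on the PIR concept referenced here, the sketch below shows the classic two-server XOR construction, in which neither server learns which record was requested; the paper's CP-ABE-based scheme is different and is not reproduced.

```python
# Two-server XOR PIR: the client sends complementary bit masks to two non-colluding servers.
import secrets
from functools import reduce

database = [b"cafe", b"bank", b"park", b"gym "]        # each server holds a full copy
want = 2                                               # client privately wants index 2

mask1 = [secrets.randbelow(2) for _ in database]       # random selection vector for server 1
mask2 = mask1.copy()
mask2[want] ^= 1                                       # flipped only at the target, for server 2

def answer(bits):
    """Each server XORs together the records selected by its mask."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                  [rec for rec, bit in zip(database, bits) if bit], b"\x00" * 4)

# XOR of the two answers cancels everything except the requested record.
record = bytes(x ^ y for x, y in zip(answer(mask1), answer(mask2)))
print(record)                                          # b'park'
```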


Subject(s)
Computer Security , Privacy , Humans , Information Storage and Retrieval/methods , Algorithms , Confidentiality
16.
Ethics Hum Res ; 46(5): 13-25, 2024.
Article in English | MEDLINE | ID: mdl-39277876

ABSTRACT

Drawing on the authors' own ethnographic research, this article discusses the importance of developing polymedia literacy as a key step toward ethical online research on social networking sites (SNS). Polymedia literacy entails the ability to critically analyze the vast landscape of SNS, their affordances, and users' social motivations for choosing specific SNS for their interactions. Internet researchers face several ethical challenges, including issues of informed consent, "public" and "private" online spaces, and data protection. Even when research ethics committees waive the need for a formal ethics approval process, researchers of online spaces need to ensure that their studies are conducted and presented in an ethical and responsible manner. This is particularly important in research contexts that pertain to vulnerable populations in online communities.


Subject(s)
Anthropology, Cultural , Informed Consent , Social Networking , Humans , Informed Consent/ethics , Anthropology, Cultural/ethics , Ethics, Research , Internet , Social Media/ethics , Ethics Committees, Research , Computer Security/ethics
17.
BMC Med Inform Decis Mak ; 24(1): 260, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39285411

ABSTRACT

BACKGROUND: Graded diagnosis and treatment, referral, and expert consultations between medical institutions all require cross-domain access to patient medical information to support doctors' treatment decisions, leading to an increase in cross-domain access among the various medical institutions within a medical consortium. However, patient medical information is sensitive and private, and it is essential to control doctors' cross-domain access to reduce the risk of leakage. Access control is a continuous, long-term process: it first requires verification of the legitimacy of user identities, together with the selection and management of control policies. After verifying user identity and access permissions, it is also necessary to monitor unauthorized operations. The scope of access control therefore includes authentication, implementation of control policies, and security auditing. Unlike existing work, which focuses on authentication and the implementation of control strategies, this article focuses on control based on security auditing of access logs for doctors who have already been authorized to access medical resources. This paper designs a blockchain-based intelligent cross-domain access log recording system for doctors, which is used to record, query and analyze doctors' cross-domain access behavior after authorization. Through DBSCAN clustering analysis of doctors' cross-domain access logs, we identify abnormal cross-domain access and build a penalty function to dynamically control doctors' cross-domain access, thereby reducing the risk of data breaches. Finally, comparative analysis and experiments show that the proposed cross-domain access control model for medical consortia, based on DBSCAN and a penalty function, effectively controls the cross-domain access behavior of doctors in the various institutions of a medical consortium and is feasible for doctors' cross-domain access control.
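
[Editor's note] The sketch below illustrates, under invented features and thresholds, the access-log analysis this abstract describes: cluster cross-domain access events with DBSCAN, treat noise points (label -1) as anomalies, and feed them into a simple dynamic penalty score. It is not the paper's actual model.

```python
# Flag anomalous access events and accumulate a penalty score per doctor.
import numpy as np
from sklearn.cluster import DBSCAN

# Toy features per access event: [accesses per hour, distinct institutions touched]
log_features = np.array([[3, 1], [4, 1], [2, 1], [3, 2], [40, 9], [4, 1]])

labels = DBSCAN(eps=2.0, min_samples=2).fit_predict(log_features)
anomalies = labels == -1                               # DBSCAN noise points = suspicious events

def penalty(prev_score, is_anomalous, decay=0.9, hit=5.0):
    """Anomalous accesses raise the score, normal ones let it decay."""
    return prev_score * decay + (hit if is_anomalous else 0.0)

score = 0.0
for a in anomalies:
    score = penalty(score, a)
print(labels, round(score, 2))   # a high score could trigger revocation of cross-domain access
```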


Subject(s)
Computer Security , Humans , Computer Security/standards , Blockchain
18.
PLoS One ; 19(9): e0308807, 2024.
Article in English | MEDLINE | ID: mdl-39283894

ABSTRACT

Information hiding in images has gained popularity. As image steganography gains relevance, techniques for detecting hidden messages have emerged. Statistical steganalysis mechanisms detect the presence of hidden secret messages in images, rendering images a prime target for cyber-attacks. Also, studies examining image steganography techniques are limited. This paper aims to fill the existing gap in the literature on image steganography schemes capable of resisting statistical steganalysis attacks by providing a comprehensive systematic literature review. This will ensure image steganography researchers and data protection practitioners are updated on current trends in information security assurance mechanisms. The study sampled 125 articles from the ACM Digital Library, IEEE Xplore, ScienceDirect, and Wiley. Using PRISMA, articles were synthesized and analyzed using quantitative and qualitative methods. A comprehensive discussion of image steganography techniques in terms of their robustness against well-known universal statistical steganalysis attacks, including Regular-Singular (RS) and Chi-Square (X2), is provided. Trends in publication, techniques and methods, performance evaluation metrics, and security impacts are discussed. Extensive comparisons were drawn among existing techniques to evaluate their merits and limitations. It was observed that Generative Adversarial Networks dominate image steganography techniques and have become the preferred method among scholars within the domain. Artificial intelligence-powered algorithms, including machine learning, deep learning, convolutional neural networks, and genetic algorithms, are increasingly dominating image steganography research as they enhance security. The implication is that previously preferred traditional techniques such as LSB algorithms are receiving less attention. Future research may consider emerging technologies like blockchain, artificial neural networks, and biometric and facial recognition technologies to improve the robustness and security capabilities of image steganography applications.
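
[Editor's note] For readers unfamiliar with the traditional LSB schemes this review says are being displaced, the sketch below shows minimal least-significant-bit embedding; it is exactly the uniform bit pattern that statistical steganalysis such as the chi-square attack exploits. Data and message are illustrative.

```python
# Classic LSB embedding: overwrite the least significant bit of each pixel with a message bit.
import numpy as np

def lsb_embed(cover, message_bits):
    stego = cover.flatten().copy()
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit             # clear LSB, then set it to the message bit
    return stego.reshape(cover.shape)

cover = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = lsb_embed(cover, bits)

recovered = [int(b & 1) for b in stego.flatten()[:len(bits)]]
assert recovered == bits                               # extraction is just reading the LSBs back
```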


Subject(s)
Computer Security , Humans , Algorithms , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
19.
Sci Rep ; 14(1): 21340, 2024 09 12.
Article in English | MEDLINE | ID: mdl-39266648

ABSTRACT

Digital image steganography serves as a technology facilitating covert communication through digital images by subtly incorporating secret data into a cover image. This practice poses a potential threat, as criminals exploit steganography to transmit illicit content, thereby jeopardizing information security. Consequently, it becomes imperative to implement defensive strategies against steganographic techniques. This paper proposes a novel defense mechanism termed "image vaccine" to safeguard digital images from steganography. The process of "vaccinating" an image renders it immune to steganographic manipulation. Notably, when criminals attempt to embed secret data into vaccinated images, the presence of such hidden information can be detected with a 100% probability, ensuring the consistent identification of stego images. This proactive approach enables the interception of stego image transmission, thereby neutralizing covert communication channels.


Subject(s)
Computer Security , Humans , Image Processing, Computer-Assisted/methods
20.
PLoS One ; 19(9): e0308265, 2024.
Article in English | MEDLINE | ID: mdl-39240910

ABSTRACT

Steganography, the use of algorithms to embed secret information in a carrier image, is widely used in the field of information transmission, but steganalysis tools can easily identify images produced with traditional steganographic algorithms. Steganography without embedding (SWE) can effectively resist detection by steganalysis tools by mapping noise onto secret information and generating secret images from secret noise. However, most SWE schemes still suffer from small steganographic capacity and difficulty in extracting the data. To address these problems, this paper proposes image steganography without embedding carrier secret information. The objective of this approach is to enhance the capacity of secret information and the accuracy of secret information extraction in order to improve the performance of secure network communication. The proposed technique exploits the carrier characteristics to generate a carrier secret tensor, which improves the accuracy of information extraction while ensuring the accuracy of secret information extraction. Furthermore, the Wasserstein distance is employed as a constraint for the discriminator, and weight clipping is introduced to enhance the secret information capacity and extraction accuracy. Experimental results show that the proposed method can improve the data extraction accuracy by 10.03% at a capacity of 2304 bits, which verifies the effectiveness and universality of the method. The research introduces a new intelligent steganographic secure communication model for networks, which can improve the information capacity and extraction accuracy of image steganography without embedding.
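
[Editor's note] A hedged PyTorch sketch of the Wasserstein-critic-with-weight-clipping constraint mentioned in this abstract; the critic architecture, dimensions, and training data are placeholders, and the paper's generator and extraction networks are not shown.

```python
# One Wasserstein critic update with weight clipping (WGAN-style constraint).
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(64, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def critic_step(real, fake, clip=0.01):
    opt.zero_grad()
    # Wasserstein critic loss: raise scores of real samples, lower scores of generated ones.
    loss = -(critic(real).mean() - critic(fake).mean())
    loss.backward()
    opt.step()
    for p in critic.parameters():          # weight clipping keeps the critic (roughly) Lipschitz
        p.data.clamp_(-clip, clip)
    return loss.item()

real, fake = torch.randn(32, 64), torch.randn(32, 64)
print(critic_step(real, fake))
```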


Subject(s)
Algorithms , Computer Communication Networks , Computer Security , Image Processing, Computer-Assisted/methods