Results 1 - 20 of 3,766
1.
Neural Netw ; 178: 106490, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38968777

ABSTRACT

A model inversion attack (MIA) reconstructs confidential training data from a target deep learning model. Most existing methods assume the adversary holds an auxiliary dataset whose distribution is similar to that of the private dataset. However, this assumption does not always hold in real-world scenarios. Since the private dataset is unknown, domain divergence between the auxiliary dataset and the private dataset is inevitable. In this paper, we use the term Cross Domain Model Inversion Attack to describe this distribution-divergence scenario in MIA. When the distributions of the private and auxiliary images diverge, the distributions of their feature vectors differ as well, and the prediction vectors output for the auxiliary images are frequently misclassified, which makes the inversion attack difficult to perform. We address both the feature vector inversion task and the prediction vector inversion task in this cross-domain setting. For feature vector inversion, we propose Domain Alignment MIA (DA-MIA). While performing the reconstruction task, DA-MIA adversarially aligns the feature vectors of auxiliary images with those of private images to mitigate the domain divergence between them, so that semantically meaningful images can be reconstructed. For prediction vector inversion, we further introduce an auxiliary classifier and propose Domain Alignment MIA with Auxiliary Classifier (DA-MIA-AC). The auxiliary classifier is pretrained on the auxiliary dataset and fine-tuned during the adversarial training stage, which resolves the misclassification problem caused by domain divergence and allows the images to be reconstructed correctly. Extensive experiments demonstrate the advantages of our methods: DA-MIA improves the SSIM score of the reconstructed images by up to 191%, and DA-MIA-AC increases the classification accuracy of the reconstructed images from 9.18% to 81.32% in the Cross Domain Model Inversion Attack setting.
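
To make the adversarial alignment idea concrete, the sketch below (a minimal illustration, not the authors' DA-MIA code; the adapter, discriminator, and feature dimension are assumptions) trains a small adapter so that a domain discriminator cannot distinguish adapted auxiliary feature vectors from private-domain feature vectors.

```python
# Minimal, illustrative sketch (not the authors' DA-MIA code) of adversarial feature
# alignment: an adapter maps auxiliary-domain feature vectors so that a domain
# discriminator cannot tell them apart from private-domain features.
# All module names and sizes are assumptions for illustration.
import torch
import torch.nn as nn

FEAT_DIM = 512

adapter = nn.Sequential(nn.Linear(FEAT_DIM, FEAT_DIM), nn.ReLU(),
                        nn.Linear(FEAT_DIM, FEAT_DIM))
discriminator = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.ReLU(),
                              nn.Linear(256, 1))

opt_a = torch.optim.Adam(adapter.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def alignment_step(aux_feats, priv_feats):
    """One adversarial step: D learns to separate domains, the adapter learns to fool D."""
    # update discriminator
    opt_d.zero_grad()
    d_loss = bce(discriminator(priv_feats), torch.ones(len(priv_feats), 1)) + \
             bce(discriminator(adapter(aux_feats).detach()), torch.zeros(len(aux_feats), 1))
    d_loss.backward()
    opt_d.step()
    # update adapter so adapted auxiliary features look like private ones
    opt_a.zero_grad()
    a_loss = bce(discriminator(adapter(aux_feats)), torch.ones(len(aux_feats), 1))
    a_loss.backward()
    opt_a.step()
    return d_loss.item(), a_loss.item()

# toy usage with random stand-ins for feature vectors
aux = torch.randn(64, FEAT_DIM)
priv = torch.randn(64, FEAT_DIM) + 1.0   # shifted to mimic domain divergence
for _ in range(5):
    print(alignment_step(aux, priv))
```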

2.
J Forensic Sci ; 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38978157

ABSTRACT

During an investigation using Forensic Investigative Genetic Genealogy, which is a novel approach for solving violent crimes and identifying human remains, reference testing-when law enforcement requests a DNA sample from a person in a partially constructed family tree-is sometimes used when an investigation has stalled. Because the people considered for a reference test have not opted in to allow law enforcement to use their DNA profile in this way, reference testing is viewed by many as an invasion of privacy and by some as unethical. We generalize an existing mathematical optimization model of the genealogy process by incorporating the option of reference testing. Using simulated versions of 17 DNA Doe Project cases, we find that reference testing can solve cases more quickly (although many reference tests are required to substantially hasten the investigative process), but only rarely (<1%) solves cases that cannot otherwise be solved. Through a mixture of mathematical and computational analysis, we find that the most desirable people to test are at the bottom of a path descending from an ancestral couple that is most likely to be related to the target. We also characterize the rare cases where reference testing is necessary for solving the case: when there is only one descending path from an ancestral couple, which precludes the possibility of identifying an intersection (e.g., marriage) between two descendants of two different ancestral couples.

3.
Comput Biol Med ; 179: 108792, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964242

ABSTRACT

BACKGROUND AND OBJECTIVE: Concerns about patient privacy have limited the application of medical deep learning models in certain real-world scenarios. Differential privacy (DP) can alleviate this problem by injecting random noise into the model. However, naively applying DP to medical models does not achieve a satisfactory balance between privacy and utility, owing to the high dimensionality of medical models and the limited number of labeled samples. METHODS: This work proposes DP-SSLoRA, a privacy-preserving classification model for medical images that combines differential privacy with self-supervised low-rank adaptation. A self-supervised pre-training method is used to obtain enhanced representations from unlabeled, publicly available medical data. A low-rank decomposition method is then employed to mitigate the impact of differentially private noise and is combined with the pre-trained features to perform classification on private datasets. RESULTS: In classification experiments on three real chest X-ray datasets, DP-SSLoRA achieves good performance with strong privacy guarantees. Under a privacy budget of ε = 2, it attains an AUC of 0.942 on RSNA, 0.9658 on Covid-QU-mini, and 0.9886 on Chest X-ray 15k. CONCLUSION: Extensive experiments on real chest X-ray datasets show that DP-SSLoRA can achieve satisfactory performance with stronger privacy guarantees. This study provides guidance for privacy-preserving research in the medical field. Source code is publicly available online: https://github.com/oneheartforone/DP-SSLoRA.
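
As a rough illustration of the two ingredients the abstract combines, the following hedged sketch (not the released DP-SSLoRA code; layer sizes, clipping bound, and noise multiplier are assumptions) shows a LoRA-style linear layer with a frozen base weight and a DP-SGD-style update with per-sample gradient clipping and Gaussian noise applied only to the low-rank parameters.

```python
# Minimal sketch (assumptions, not the DP-SSLoRA release) of a LoRA layer plus a
# DP-SGD-style step: only the low-rank matrices are trainable, per-sample gradients
# are clipped, and calibrated Gaussian noise is added before the update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():        # frozen pretrained weight
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scale

def dp_step(model, loss_fn, xb, yb, lr=0.1, clip=1.0, noise_mult=1.0):
    """Per-sample gradient clipping + Gaussian noise on the trainable (LoRA) params."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):                    # microbatch of one sample
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
        factor = min(1.0, clip / (norm.item() + 1e-12))
        for s, p in zip(summed, params):
            s += p.grad * factor
    with torch.no_grad():
        for s, p in zip(summed, params):
            s += torch.randn_like(s) * noise_mult * clip   # Gaussian noise scaled to the clip bound
            p -= lr * s / len(xb)

# toy usage
model = LoRALinear(nn.Linear(32, 2))
x, y = torch.randn(16, 32), torch.randint(0, 2, (16,))
dp_step(model, nn.CrossEntropyLoss(), x, y)
```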

4.
Comput Biol Med ; 179: 108734, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964243

ABSTRACT

Artificial intelligence (AI) has played a vital role in computer-aided drug design (CADD). This development has been further accelerated by the increasing use of machine learning (ML), mainly deep learning (DL), and by advances in computing hardware and software. As a result, initial doubts about the application of AI in drug discovery have been dispelled, leading to significant benefits in medicinal chemistry. At the same time, it is crucial to recognize that AI is still in its infancy and faces limitations that need to be addressed to harness its full potential in drug discovery. Notable limitations include insufficient, unlabeled, and non-uniform data; the resemblance of some AI-generated molecules to existing molecules; the lack of adequate benchmarks; hurdles to data sharing related to intellectual property rights (IPRs); a poor understanding of the underlying biology; a focus on proxy data and ligands; and the lack of holistic methods for representing input molecular structures that would avoid manual feature engineering of input molecules. The major component of AI infrastructure is input data, as most of the success of AI-driven efforts to improve drug discovery depends on the quality and quantity of the data used to train and test AI algorithms, among other factors. Moreover, data-hungry DL approaches may fail to live up to their promise without sufficient data. The current literature suggests several methods that, to a certain extent, handle low-data settings effectively and improve the output of AI models in drug discovery: transfer learning (TL), active learning (AL), single- or one-shot learning (OSL), multi-task learning (MTL), data augmentation (DA), data synthesis (DS), and others. A further method, federated learning (FL), enables proprietary data to be used on a common platform to train ML models without compromising data privacy. In this review, we compare and discuss these methods, their recent applications, and their limitations in modeling small-molecule data to improve the output of AI methods in drug discovery. The article also summarizes some other novel methods for handling inadequate data.

5.
PeerJ Comput Sci ; 10: e2137, 2024.
Article in English | MEDLINE | ID: mdl-38983222

ABSTRACT

The topic of privacy-preserving collaborative filtering is gaining increasing attention. Nevertheless, privacy-preserving collaborative filtering techniques are vulnerable to shilling, or profile injection, attacks, so identifying counterfeit profiles is crucial for these systems to succeed. Various techniques have been devised to identify intrusion patterns and prevent them from infiltrating the system. However, these strategies are designed for collaborative filtering algorithms that do not prioritize privacy, and there is a scarcity of research on identifying shilling attacks in privacy-preserving recommender systems. This work presents a novel technique for identifying shilling attacks in privacy-preserving collaborative filtering systems. We employ an ant colony clustering detection method to identify and eliminate fake profiles created by six widely recognized shilling attacks on compromised data. The objective is to gather the fraudulent profiles into a specific cluster and separate this cluster from the system. Empirical experiments are conducted on real data, and the findings demonstrate that the proposed strategy effectively eliminates fraudulent profiles in privacy-preserving collaborative filtering.
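
The cluster-and-discard idea can be illustrated with a much simpler stand-in than the paper's ant colony clustering; the sketch below uses k-means from scikit-learn on synthetic rating profiles, and the flagging rule and data are purely illustrative assumptions.

```python
# Simplified illustration of the cluster-and-discard idea described in the abstract.
# The paper uses ant colony clustering; this sketch substitutes k-means from scikit-learn
# to isolate a suspicious cluster of injected profiles. Synthetic data and the
# low-variance flagging rule are assumptions for illustration only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
genuine = rng.integers(1, 6, size=(200, 50)).astype(float)        # diverse ratings 1-5
shills = np.full((20, 50), 5.0) + rng.normal(0, 0.05, (20, 50))   # near-identical push profiles
profiles = np.vstack([genuine, shills])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)

# Injected profiles are typically far more similar to each other than genuine ones,
# so flag the cluster with the smaller within-cluster variance and remove it.
variances = [profiles[labels == k].var() for k in (0, 1)]
suspect = int(np.argmin(variances))
kept = profiles[labels != suspect]
print(f"removed {np.sum(labels == suspect)} suspected profiles, kept {len(kept)}")
```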

6.
Pharm Stat ; 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38973072

ABSTRACT

Cox regression and Kaplan-Meier estimation are often needed in clinical research, and this requires access to individual patient data (IPD). However, IPD cannot always be shared because of privacy or proprietary restrictions, which complicates such estimation. We propose a method that generates pseudodata replacing the IPD by sharing only non-disclosive aggregates such as IPD marginal moments and a correlation matrix. These aggregates are collected by a central computer and input as parameters to a Gaussian copula (GC) that generates the pseudodata. Survival inferences are computed on the pseudodata as if it were the IPD. Using practical examples, we demonstrate the utility of the method via the amount of IPD inferential content recoverable by the GC. We compare the GC to a summary-based meta-analysis and to an IPD bootstrap distributed across several centers; other pseudodata approaches are also considered. In our empirical results, the GC approximates the utility of the IPD bootstrap, although it may yield more conservative inferences and may have limitations in subgroup analyses. Overall, the GC avoids many legal problems related to IPD privacy or property while enabling approximation of common IPD survival analyses that are otherwise difficult to conduct. Sharing more IPD aggregates than is currently practiced could facilitate "second purpose" research and relax concerns regarding IPD access.
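
A minimal sketch of the Gaussian-copula mechanism, under assumptions not taken from the paper (illustrative marginal families and aggregate values), is shown below: correlated standard normals drawn from the shared correlation matrix are mapped to uniforms and then pushed through marginal quantile functions fitted from the shared moments.

```python
# Hedged sketch (not the authors' implementation) of Gaussian-copula pseudodata:
# correlated standard normals are drawn using the shared correlation matrix, mapped to
# uniforms with the normal CDF, then transformed by marginal distributions fitted from
# the shared moments. The marginal families chosen here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1000

# non-disclosive aggregates a center might share
corr = np.array([[1.0, 0.3], [0.3, 1.0]])          # correlation: age vs. survival time
age_mean, age_sd = 62.0, 11.0                      # marginal moments of age
time_mean = 4.5                                    # mean survival time (years)

# Gaussian copula: correlated normals -> uniforms -> marginal quantile functions
z = rng.multivariate_normal(np.zeros(2), corr, size=n)
u = stats.norm.cdf(z)
age = stats.norm.ppf(u[:, 0], loc=age_mean, scale=age_sd)
time = stats.expon.ppf(u[:, 1], scale=time_mean)   # exponential marginal, for illustration

pseudodata = np.column_stack([age, time])
print(np.corrcoef(pseudodata, rowvar=False))       # dependence is approximately preserved
```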

7.
Sci Rep ; 14(1): 15589, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38971879

ABSTRACT

Federated learning (FL) has emerged as a significant method for developing machine learning models across multiple devices without centralized data collection. Candidemia, a critical but rare disease in ICUs, poses challenges in early detection and treatment. The goal of this study is to develop a privacy-preserving federated learning framework for predicting candidemia in ICU patients. This approach aims to enhance the accuracy of antifungal drug prescriptions and patient outcomes. This study involved the creation of four predictive FL models for candidemia using data from ICU patients across three hospitals in China. The models were designed to prioritize patient privacy while aggregating learnings across different sites. A unique ensemble feature selection strategy was implemented, combining the strengths of XGBoost's feature importance and statistical test p values. This strategy aimed to optimize the selection of relevant features for accurate predictions. The federated learning models demonstrated significant improvements over locally trained models, with a 9% increase in the area under the curve (AUC) and a 24% rise in true positive ratio (TPR). Notably, the FL models excelled in the combined TPR + TNR metric, which is critical for feature selection in candidemia prediction. The ensemble feature selection method proved more efficient than previous approaches, achieving comparable performance. The study successfully developed a set of federated learning models that significantly enhance the prediction of candidemia in ICU patients. By leveraging a novel feature selection method and maintaining patient privacy, the models provide a robust framework for improved clinical decision-making in the treatment of candidemia.
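
The ensemble feature-selection step can be sketched as follows; the combination rule, synthetic data, and hyperparameters are assumptions for illustration rather than the study's exact procedure.

```python
# Hedged sketch of an ensemble feature-selection step of the kind the abstract describes:
# rank features by XGBoost importance and by univariate-test p-values, then keep the
# features with the best combined rank. The combination rule and synthetic data are
# assumptions, not the study's exact procedure.
import numpy as np
from scipy.stats import rankdata
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, n_features=30, n_informative=8, random_state=0)

importance = XGBClassifier(n_estimators=200, max_depth=3).fit(X, y).feature_importances_
_, pvalues = f_classif(X, y)            # univariate ANOVA F-test p-values

# lower combined rank = better: high importance and small p-value
combined = rankdata(-importance) + rankdata(pvalues)
selected = np.argsort(combined)[:10]
print("selected feature indices:", selected)
```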


Subject(s)
Candidemia , Intensive Care Units , Machine Learning , Humans , Candidemia/drug therapy , Candidemia/diagnosis , Antifungal Agents/therapeutic use , China , Male , Female , Delivery of Health Care
8.
J Law Med ; 31(2): 258-272, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38963246

ABSTRACT

This section explores the challenges involved in translating genomic research into genomic medicine. A number of priorities have been identified in the Australian National Health Genomics Framework for addressing these challenges. Responsible collection, storage, use and management of genomic data is one of these priorities, and is the primary theme of this section. The recent release of Genomical, an Australian data-sharing platform, is used as a case study to illustrate the type of assistance that can be provided to the health care sector in addressing this priority. The section first describes the National Framework and other drivers involved in the move towards genomic medicine. The section then examines key ethical, legal and social factors at play in genomics, with particular focus on privacy and consent. Finally, the section examines how Genomical is being used to help ensure that the move towards genomic medicine is ethically, legally and socially sound and that it optimises advances in both genomic and information technology.


Subject(s)
Genomics , Information Dissemination , Humans , Genomics/legislation & jurisprudence , Genomics/ethics , Australia , Information Dissemination/legislation & jurisprudence , Information Dissemination/ethics , Informed Consent/legislation & jurisprudence , Genetic Privacy/legislation & jurisprudence , Confidentiality/legislation & jurisprudence
9.
J Law Med ; 31(2): 370-385, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38963251

ABSTRACT

Terminating a pregnancy is now lawful in all Australian jurisdictions, although on diverse bases. While abortions have not been subject to the same degree of heated debate in Australia as elsewhere, protests aimed at persuading women not to have a termination of their pregnancy have occurred outside abortion service providers in the past. Over the last decade, this has led to the introduction of laws setting out so-called safe access zones around provider premises. Anti-abortion protests are prohibited within a specific distance from abortion services and infringements attract criminal liability. As safe access zone laws prevent protesters from expressing their views in certain spaces, the question arises as to the laws' compliance with protesters' human rights. This article analyses this by considering the human rights compliance of the Queensland ban in light of Queensland human rights legislation. It concludes that the imposed prohibition of anti-abortion protests near abortion clinics is compatible with human rights.


Subject(s)
Abortion, Induced , Human Rights , Humans , Female , Human Rights/legislation & jurisprudence , Pregnancy , Australia , Abortion, Induced/legislation & jurisprudence , Health Services Accessibility/legislation & jurisprudence , Abortion, Legal/legislation & jurisprudence
10.
J Law Med Ethics ; 52(S1): 70-74, 2024.
Article in English | MEDLINE | ID: mdl-38995251

ABSTRACT

Here, we analyze the public health implications of recent legal developments - including privacy legislation, intergovernmental data exchange, and artificial intelligence governance - with a view toward the future of public health informatics and the potential of diverse data to inform public health actions and drive population health outcomes.


Subject(s)
Artificial Intelligence , Humans , Artificial Intelligence/legislation & jurisprudence , United States , Confidentiality/legislation & jurisprudence , Public Health Informatics/legislation & jurisprudence , Public Health/legislation & jurisprudence , Privacy/legislation & jurisprudence
11.
Int J Med Inform ; 190: 105549, 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39018707

ABSTRACT

INTRODUCTION AND PURPOSE: We present the needs, design, development, implementation, and accessibility of a crafted experimental PACS (ePACS) system that securely stores images and is efficient and easy to use for AI processing, specifically tailored for research scenarios, including phantom, animal, and human studies as well as quality assurance (QA) exams. The ePACS system plays a crucial role in any medical imaging department that handles non-care-profile studies, such as protocol adjustments and dummy runs. By effectively segregating non-care-profile studies from healthcare assistance, the ePACS helps prevent errors in clinical practice and strengthens storage security. METHODS AND RESULTS: The developed ePACS system follows best practices for management, maintenance, access, long-term storage and backups, regulatory audits, and economic aspects. Key aspects of the system include data flows designed around data security and privacy, access control with levels based on user profiles, internal data management policies, a standardized architecture, infrastructure and application monitoring and traceability, and periodic backup policies. A new tool called DicomStudiesQA has been developed to standardize the analysis of DICOM studies. The tool automatically identifies, extracts, and renames series using a consistent nomenclature. It also detects corrupted images and merges dynamic series that were initially split, streamlining post-processing. DISCUSSION AND CONCLUSIONS: The developed ePACS system has been successfully implemented in both hospital and research environments, showcasing its transformative nature and the challenging yet crucial transfer of knowledge to industry. This underscores the practicality and real-world applicability of our approach and its significant impact on the field of experimental radiology.
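
The abstract's description of DicomStudiesQA suggests a workflow like the hedged pydicom sketch below; the naming convention, folder layout, and corruption check are assumptions, since the tool itself is not detailed here.

```python
# Illustrative sketch only (DicomStudiesQA itself is not detailed in the abstract):
# group DICOM files by series, derive a consistent series name, and flag files whose
# pixel data cannot be decoded. Paths and the naming convention are assumptions.
from collections import defaultdict
from pathlib import Path
import pydicom

def organize_study(study_dir: str):
    series = defaultdict(list)
    corrupted = []
    for path in Path(study_dir).rglob("*.dcm"):
        try:
            ds = pydicom.dcmread(path)
            _ = ds.pixel_array                      # force decode to catch corrupt images
        except Exception:
            corrupted.append(path)
            continue
        series[ds.SeriesInstanceUID].append((path, ds))

    renamed = {}
    for uid, items in series.items():
        ds = items[0][1]
        desc = str(getattr(ds, "SeriesDescription", "unknown")).strip().replace(" ", "_")
        name = f"{ds.Modality}_{desc}_{ds.SeriesNumber}"   # consistent nomenclature (assumed)
        renamed[name] = [p for p, _ in items]
    return renamed, corrupted

if __name__ == "__main__":
    renamed, corrupted = organize_study("/data/ePACS/study_001")   # hypothetical path
    print({k: len(v) for k, v in renamed.items()}, f"{len(corrupted)} corrupted files")
```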

12.
Cureus ; 16(6): e62443, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39011215

ABSTRACT

Artificial intelligence (AI) and machine learning (ML) technologies are revolutionizing health care by offering unprecedented opportunities to enhance patient care, optimize clinical workflows, and advance medical research. However, the integration of AI and ML into healthcare systems raises significant ethical considerations that must be carefully addressed to ensure responsible and equitable deployment. This comprehensive review explored the multifaceted ethical considerations surrounding the use of AI and ML in health care, including privacy and data security, algorithmic bias, transparency, clinical validation, and professional responsibility. By critically examining these ethical dimensions, stakeholders can navigate the ethical complexities of AI and ML integration in health care, while safeguarding patient welfare and upholding ethical principles. By embracing ethical best practices and fostering collaboration across interdisciplinary teams, the healthcare community can harness the full potential of AI and ML technologies to usher in a new era of personalized data-driven health care that prioritizes patient well-being and equity.

13.
Cas Lek Cesk ; 163(3): 106-114, 2024.
Article in English | MEDLINE | ID: mdl-38981731

ABSTRACT

Telemedicine, defined as the practice of delivering healthcare services remotely using information and communications technologies, raises a plethora of ethical considerations. As telemedicine evolves, its ethical dimensions play an increasingly pivotal role in balancing the benefits of advanced technologies, ensuring responsible healthcare practices within telemedicine environments, and safeguarding patient rights. Healthcare providers, patients, policymakers, and technology developers involved in telemedicine encounter numerous ethical challenges that need to be addressed. Key ethical topics include prioritizing the protection of patient rights and privacy, which entails ensuring equitable access to remote healthcare services and maintaining the doctor-patient relationship in virtual settings. Additional areas of focus encompass data security concerns and the quality of healthcare delivery, underscoring the importance of upholding ethical standards in the digital realm. A critical examination of these ethical dimensions highlights the necessity of establishing binding ethical guidelines and legal regulations. These measures could assist stakeholders in formulating effective strategies and methodologies to navigate the complex telemedicine landscape, ensuring adherence to the highest ethical standards and promoting patient welfare. A balanced approach to telemedicine ethics should integrate the benefits of telemedicine with proactive measures to address emerging ethical challenges and should be grounded in a well-prepared and respected ethical framework.


Subject(s)
Telemedicine , Telemedicine/ethics , Humans , Patient Rights/ethics , Confidentiality/ethics , Computer Security/ethics , Physician-Patient Relations/ethics
14.
IEEE Trans Inf Forensics Secur ; 19: 5751-5766, 2024.
Article in English | MEDLINE | ID: mdl-38993695

ABSTRACT

Conducting secure computations to protect against malicious adversaries is an emerging field of research. Current models designed for malicious security typically necessitate the involvement of two or more servers in an honest-majority setting. Among privacy-preserving data mining techniques, significant attention has been focused on the classification problem. Logistic regression is a well-established classification model renowned for its strong performance. We introduce a novel matrix encryption method to build a maliciously secure logistic regression model. Our scheme involves only a single semi-honest server and is resilient to malicious data providers that may deviate arbitrarily from the scheme. The d-transformation ensures that our scheme achieves indistinguishability (i.e., no adversary can determine, in polynomial time, which of the plaintexts corresponds to a given ciphertext in a chosen-plaintext attack). Malicious activities of data providers can be detected in the verification stage. A lossy compression method is implemented to minimize communication costs while incurring only negligible degradation in accuracy. Experiments illustrate that our scheme is highly efficient for analyzing large-scale datasets and achieves accuracy similar to that of non-private models. The proposed scheme outperforms other maliciously secure frameworks in terms of computation and communication costs.
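
The abstract does not specify the matrix encryption construction, so the sketch below illustrates only the general idea of matrix-masked logistic regression with a secret random orthogonal transform of the feature space; it makes no claim to malicious security and is not the paper's scheme.

```python
# Deliberately simplified illustration of matrix-masked logistic regression, NOT the
# paper's maliciously secure scheme: the data provider multiplies the feature matrix by
# a secret random orthogonal matrix, the server fits a model on the masked data, and the
# provider rotates the learned coefficients back. No security guarantee is claimed here.
import numpy as np
from scipy.stats import ortho_group
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)

R = ortho_group.rvs(dim=X.shape[1], random_state=1)   # provider's secret orthogonal matrix
X_masked = X @ R                                      # what the server actually sees

server_model = LogisticRegression(max_iter=1000).fit(X_masked, y)

# provider maps coefficients back to the original feature space: X @ (R @ w') == X @ w
w_recovered = R @ server_model.coef_.ravel()
direct = LogisticRegression(max_iter=1000).fit(X, y)
print(np.allclose(w_recovered, direct.coef_.ravel(), atol=1e-2))   # same model, up to solver tolerance
```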

15.
Sci Rep ; 14(1): 16223, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39003319

ABSTRACT

Advancements in cloud computing, flying ad hoc networks, wireless sensor networks, artificial intelligence, big data, fifth-generation (5G) mobile networks, and the Internet of Things have led to the development of smart cities. Owing to their massive interconnectedness, high volumes of data are collected and exchanged over the public internet, and the exchanged messages are therefore susceptible to numerous security and privacy threats across these open public channels. Although many security techniques have been designed to address this issue, most remain vulnerable to attacks, while some deploy computationally intensive cryptographic operations such as bilinear pairings and blockchain. In this paper, we leverage biometrics, error-correcting codes, and fuzzy commitment schemes to develop a secure and energy-efficient authentication scheme for smart cities. This is informed by the fact that biometric data is cumbersome to reproduce, so attacks such as side-channeling are thwarted. We formally analyze the security of our protocol using Burrows-Abadi-Needham (BAN) logic, which shows that our scheme achieves strong mutual authentication among the communicating entities. The semantic analysis of our protocol shows that it mitigates attacks such as de-synchronization, eavesdropping, session hijacking, forgery, and side-channeling. In addition, its formal security analysis demonstrates that it is secure under the Canetti-Krawczyk attack model. In terms of performance, our scheme reduces computation overheads by 20.7% and is thus the most efficient among the state-of-the-art protocols.
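
A fuzzy commitment scheme can be sketched in a few lines; the version below is a toy illustration (a repetition code stands in for a production error-correcting code, and random bits stand in for biometric features) of how a noisy biometric reading can still release the committed key.

```python
# Toy fuzzy-commitment sketch under simplifying assumptions: a repetition code replaces
# a production error-correcting code, and random bits replace real biometric features.
# It shows the mechanism the abstract relies on: a noisy biometric reading releases the
# committed key as long as the bit errors stay within the code's correction capacity.
import hashlib
import secrets

REP = 9          # repetition factor; majority vote corrects up to 4 flipped bits per block
KEY_BITS = 16

def encode(key_bits):                 # repetition code: repeat each key bit REP times
    return [b for b in key_bits for _ in range(REP)]

def decode(codeword):                 # majority vote per block of REP bits
    return [int(sum(codeword[i:i + REP]) > REP // 2)
            for i in range(0, len(codeword), REP)]

def commit(key_bits, biometric_bits):
    witness = [c ^ b for c, b in zip(encode(key_bits), biometric_bits)]
    digest = hashlib.sha256(bytes(key_bits)).hexdigest()
    return digest, witness            # stored values; the biometric itself is not stored

def open_commitment(digest, witness, biometric_bits):
    key_guess = decode([w ^ b for w, b in zip(witness, biometric_bits)])
    return hashlib.sha256(bytes(key_guess)).hexdigest() == digest, key_guess

# enrollment with a random key and a simulated biometric template
key = [secrets.randbelow(2) for _ in range(KEY_BITS)]
template = [secrets.randbelow(2) for _ in range(KEY_BITS * REP)]
digest, witness = commit(key, template)

# verification with a noisy reading of the same biometric (a few flipped bits)
noisy = template.copy()
for i in range(0, len(noisy), 40):
    noisy[i] ^= 1
print(open_commitment(digest, witness, noisy)[0])   # True: key released despite the noise
```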

16.
Patterns (N Y) ; 5(6): 101006, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-39005485

ABSTRACT

For healthcare datasets, it is often impossible to combine data samples from multiple sites due to ethical, privacy, or logistical concerns. Federated learning allows for the utilization of powerful machine learning algorithms without requiring the pooling of data. Healthcare data have many simultaneous challenges, such as highly siloed data, class imbalance, missing data, distribution shifts, and non-standardized variables, that require new methodologies to address. Federated learning adds significant methodological complexity to conventional centralized machine learning, requiring distributed optimization, communication between nodes, aggregation of models, and redistribution of models. In this systematic review, we consider all papers on Scopus published between January 2015 and February 2023 that describe new federated learning methodologies for addressing challenges with healthcare data. We reviewed 89 papers meeting these criteria. Significant systemic issues were identified throughout the literature, compromising many methodologies reviewed. We give detailed recommendations to help improve methodology development for federated learning in healthcare.
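
For readers unfamiliar with the workflow being reviewed, a bare-bones federated-averaging (FedAvg) sketch follows; the linear model and synthetic per-site data are illustrative assumptions.

```python
# Bare-bones federated-averaging (FedAvg) sketch to make the reviewed workflow concrete:
# each site trains locally, only model weights travel, and a central aggregator averages
# them (weighted by site size) before redistribution. Model and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def make_site(n):                          # each site's private data stays local
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.05, epochs=20):
    for _ in range(epochs):                # local gradient descent on least squares
        w = w - lr * (2 / len(y)) * X.T @ (X @ w - y)
    return w

sites = [make_site(n) for n in (50, 200, 80)]
global_w = np.zeros(3)
for round_ in range(10):
    local_ws = [local_update(global_w.copy(), X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites])
    global_w = np.average(local_ws, axis=0, weights=sizes)   # aggregation step
print(global_w)                            # approaches [2.0, -1.0, 0.5]
```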

17.
Healthcare (Basel) ; 12(13)2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38998897

ABSTRACT

BACKGROUND: With the rapid improvement in healthcare technologies, the security and privacy of the most sensitive data are at risk. Patient privacy has many components, even when data are in electronic format. Although patient privacy has extensively been discussed in the literature, there is no study that has presented all components of patient privacy. METHODS: This study presents a complete assessment framework, develops an inventory as an assessment tool, and examines the reliability and validity of the inventory. The study was carried out in three phases: conceptual framework development, inventory development, and an evaluation case study. Fuzzy conjoint analysis was used in the evaluation to deal with subjectivity and ambiguity. As a result of the evaluation, the case study institution was given a patient privacy maturity level between 1 and 5, where 1 is the worst and 5 is the best. RESULTS: The case study evaluated the largest hospital in Turkey, which employs 800 nurses. Half of the nurses, 400, participated in the study. According to the literature, healthcare institutions do not invest enough in protecting patients' privacy, and the results of the study support this finding. The institution's maturity level was 2, which is poor. CONCLUSIONS: This study measured privacy maturity with many assessment components. The result of the assessment explains to patients and the public whether their data are secure or not. With the implementation of this maturity level, patients have an idea about which institution to choose, and the public can infer the reliability of institutions in terms of patient privacy.

18.
Radiother Oncol ; 198: 110419, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38969106

ABSTRACT

OBJECTIVES: This work aims to explore the impact of multicenter data heterogeneity on the performance of deep learning brain metastases (BM) autosegmentation, and to assess the efficacy of an incremental transfer learning technique, namely learning without forgetting (LWF), for improving model generalizability without sharing raw data. MATERIALS AND METHODS: A total of six BM datasets from University Hospital Erlangen (UKER), University Hospital Zurich (USZ), Stanford, UCSF, New York University (NYU), and the BraTS Challenge 2023 were used. First, the performance of the DeepMedic network for BM autosegmentation was established for exclusive single-center training and for mixed multicenter training, respectively. Subsequently, privacy-preserving bilateral collaboration was evaluated, in which a pretrained model is shared with another center for further training using transfer learning (TL), either with or without LWF. RESULTS: For single-center training, average F1 scores of BM detection range from 0.625 (NYU) to 0.876 (UKER) on the respective single-center test data. Mixed multicenter training notably improves F1 scores at Stanford and NYU, with negligible improvement at other centers. When the UKER pretrained model is applied to USZ, LWF achieves a higher average F1 score (0.839) than naive TL (0.570) and single-center training (0.688) on combined UKER and USZ test data. Naive TL improves sensitivity and contouring accuracy but compromises precision, whereas LWF demonstrates commendable sensitivity, precision, and contouring accuracy. Similar performance is observed when the model is applied to Stanford. CONCLUSION: Data heterogeneity (e.g., variations in metastasis density, spatial distribution, and image spatial resolution across centers) leads to varying BM autosegmentation performance, posing challenges to model generalizability. LWF is a promising approach to peer-to-peer privacy-preserving model training.
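
A minimal sketch of the LWF idea, with a toy network standing in for DeepMedic and assumed hyperparameters, is given below: the fine-tuned model is penalized for drifting away from the frozen pretrained model's outputs while it learns from the new center's labels.

```python
# Minimal PyTorch sketch of learning without forgetting (LWF): when fine-tuning a
# pretrained network at a new center, add a distillation term that keeps the new model's
# outputs close to the frozen pretrained model's outputs, so earlier-center behavior is
# retained. The tiny network, temperature, and weighting are illustrative assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def lwf_loss(student_logits, teacher_logits, targets, T=2.0, lam=1.0):
    task = F.cross_entropy(student_logits, targets)              # loss on new-center labels
    distill = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                       F.softmax(teacher_logits / T, dim=1),
                       reduction="batchmean") * T * T            # keep pretrained behavior
    return task + lam * distill

pretrained = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))  # first-center model
teacher = copy.deepcopy(pretrained).eval()
for p in teacher.parameters():
    p.requires_grad = False
student = pretrained                                             # fine-tuned at the new center

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(32, 64)                                          # stand-in for new-center data
y = torch.randint(0, 2, (32,))
for _ in range(100):
    opt.zero_grad()
    with torch.no_grad():
        t_logits = teacher(x)
    loss = lwf_loss(student(x), t_logits, y)
    loss.backward()
    opt.step()
print(loss.item())
```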

19.
Sci Rep ; 14(1): 15763, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982129

ABSTRACT

The timely identification of autism spectrum disorder (ASD) in children is imperative to prevent potential challenges as they grow. When autism-related data are shared for accurate diagnosis, safeguarding their security and privacy is a paramount concern to fend off unauthorized access, modification, or theft during transmission. Researchers have devised diverse security and privacy models and frameworks, most of which leverage proprietary algorithms or adapt existing ones to address data leakage. However, conventional anonymization methods, although effective in the sanitization process, have proved inadequate for the restoration process, and despite numerous scholarly contributions aimed at refining restoration, its accuracy remains notably deficient. To address these problems, this paper presents a novel approach to data restoration for sanitized sensitive autism datasets with improved performance. In a prior study, we constructed an optimal key for the sanitization process using the proposed Enhanced Combined PSO-GWO framework; this key was used to conceal sensitive autism data in the database and thereby avoid information leakage. In the present research, the same key is employed during the data restoration process to enhance the accuracy of recovering the original data. The study therefore enhances the security and privacy of ASD data restoration by utilizing an optimal key produced via the Enhanced Combined PSO-GWO framework. Compared with existing meta-heuristic algorithms, the simulation results of the autism data restoration experiments demonstrate highly competitive accuracies of 99.90%, 99.60%, 99.50%, 99.25%, and 99.70%. Among the four datasets used, the method performs best on the 30-month autism children dataset, where it outperforms other existing methods.
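
The key-based sanitize/restore mechanics (though not the paper's PSO-GWO key optimization, which is its actual contribution) can be illustrated as below; the field values and masking function are hypothetical.

```python
# Illustration of key-based sanitize/restore mechanics only; the Enhanced Combined
# PSO-GWO search for an optimal key, which is the paper's contribution, is not reproduced.
# Sensitive numeric fields are masked with a keyed pseudorandom offset and restored
# exactly with the same key. Field values and the masking function are hypothetical.
import numpy as np

def keyed_mask(key: int, shape, scale=10.0):
    return np.random.default_rng(key).uniform(-scale, scale, size=shape)

def sanitize(records: np.ndarray, key: int) -> np.ndarray:
    return records + keyed_mask(key, records.shape)      # conceal sensitive values

def restore(sanitized: np.ndarray, key: int) -> np.ndarray:
    return sanitized - keyed_mask(key, sanitized.shape)  # exact recovery with the same key

# hypothetical sensitive screening scores for three children
original = np.array([[14.0, 3.0], [9.0, 5.0], [17.0, 2.0]])
key = 982_451_653                                         # in the paper, chosen by PSO-GWO
released = sanitize(original, key)
recovered = restore(released, key)
print(np.allclose(recovered, original))                   # True: lossless restoration
```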


Subject(s)
Algorithms , Autism Spectrum Disorder , Databases, Factual , Humans , Autistic Disorder/diagnosis , Computer Security , Child , Privacy
20.
Per Med ; 21(3): 163-166, 2024.
Article in English | MEDLINE | ID: mdl-38963136

ABSTRACT

In the transformative landscape of healthcare, personalized medicine emerges as a pivotal shift, harnessing genetic, environmental, and lifestyle data to tailor medical treatments for better outcomes and cost efficiency. Central to its success are public engagement and consent to share health data amid rising data privacy concerns. To investigate European public opinion on this paradigm, we conducted a comprehensive cross-sectional survey capturing the general public's views on personalized medicine and data-sharing modalities, including digital tools and electronic records. The survey was distributed in eight major European Union countries, and the results aim to guide future policymaking and trust-building measures for secure health data exchange. This article delineates our methodological approach; survey findings will be reported in subsequent publications.


Subject(s)
Genetic Testing , Information Dissemination , Precision Medicine , Public Opinion , Humans , Precision Medicine/methods , Genetic Testing/methods , Information Dissemination/methods , Cross-Sectional Studies , Surveys and Questionnaires , Europe , Male , Female , Adult , Middle Aged , Electronic Health Records , Aged