Results 1 - 20 of 51
1.
Entropy (Basel) ; 26(3)2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38539746

ABSTRACT

Studies of collective motion have heretofore been dominated by a thermodynamic perspective in which the emergent "flocked" phases are analyzed in terms of their time-averaged orientational and spatial properties. Studies that attempt to scrutinize the dynamical processes that spontaneously drive the formation of these flocks from initially random configurations are far more rare, perhaps owing to the fact that said processes occur far from the eventual long-time steady state of the system and thus lie outside the scope of traditional statistical mechanics. For systems whose dynamics are simulated numerically, the nonstationary distribution of system configurations can be sampled at different time points, and the time evolution of the average structural properties of the system can be quantified. In this paper, we employ this strategy to characterize the spatial dynamics of the standard Vicsek flocking model using two correlation functions common to condensed matter physics. We demonstrate, for modest system sizes with 800 to 2000 agents, that the self-assembly dynamics can be characterized by three distinct and disparate time scales that we associate with the corresponding physical processes of clustering (compaction), relaxing (expansion), and mixing (rearrangement). We further show that the behavior of these correlation functions can be used to reliably distinguish between phenomenologically similar models with different underlying interactions and, in some cases, even provide a direct measurement of key model parameters.
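The standard Vicsek model discussed above has a compact update rule: each agent moves at constant speed and, at every step, re-aligns its heading to the average heading of all neighbors within a fixed radius, plus uniform angular noise. A minimal stdlib-Python sketch of that rule (the parameters and the polar order parameter diagnostic are illustrative choices, not the paper's settings or its correlation functions):

```python
import math, random

def vicsek_step(pos, theta, L, r, eta, v):
    """One update of the standard Vicsek model on a periodic box of side L."""
    n = len(pos)
    new_theta = []
    for i in range(n):
        # sum headings of all agents within radius r (minimum-image distances)
        sx = sy = 0.0
        for j in range(n):
            dx = (pos[j][0] - pos[i][0] + L / 2) % L - L / 2
            dy = (pos[j][1] - pos[i][1] + L / 2) % L - L / 2
            if dx * dx + dy * dy <= r * r:
                sx += math.cos(theta[j]); sy += math.sin(theta[j])
        # align to the local mean direction plus uniform angular noise
        new_theta.append(math.atan2(sy, sx) + random.uniform(-eta / 2, eta / 2))
    new_pos = [((x + v * math.cos(t)) % L, (y + v * math.sin(t)) % L)
               for (x, y), t in zip(pos, new_theta)]
    return new_pos, new_theta

def polar_order(theta):
    """Magnitude of the mean heading vector: 1 = perfect flock, ~0 = disorder."""
    n = len(theta)
    return math.hypot(sum(math.cos(t) for t in theta),
                      sum(math.sin(t) for t in theta)) / n

random.seed(0)
N, L, r, eta, v = 100, 5.0, 1.0, 0.3, 0.1
pos = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(N)]
theta = [random.uniform(-math.pi, math.pi) for _ in range(N)]
for _ in range(100):
    pos, theta = vicsek_step(pos, theta, L, r, eta, v)
order = polar_order(theta)
```

Sampling `order` (or pair correlations) at successive time points, rather than only at steady state, is the nonstationary strategy the abstract describes.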

2.
Curr Diabetes Rev ; 2024 Jan 04.
Article in English | MEDLINE | ID: mdl-38178670

ABSTRACT

BACKGROUND: This article focuses on extracting a standard feature set for predicting the complications of diabetes mellitus by systematically reviewing the literature. It is conducted and reported by following the guidelines of PRISMA, a well-known systematic review and meta-analysis method. The research articles included in this study are extracted using the search engine "Web of Science" over eight years. The most common complications of diabetes, diabetic neuropathy, retinopathy, nephropathy, and cardiovascular diseases are considered in the study. METHOD: The features used to predict the complications are identified and categorised by scrutinising the standards of electronic health records. RESULT: Overall, 102 research articles have been reviewed, resulting in 59 frequent features being identified. Nineteen attributes are recognised as a standard in all four considered complications, which are age, gender, ethnicity, weight, height, BMI, smoking history, HbA1c, SBP, eGFR, DBP, HDL, LDL, total cholesterol, triglyceride, use of insulin, duration of diabetes, family history of CVD, and diabetes. The existence of a well-accepted and updated feature set for health analytics models to predict the complications of diabetes mellitus is a vital and contemporary requirement. A widely accepted feature set is beneficial for benchmarking the risk factors of complications of diabetes. CONCLUSION: This study is a thorough literature review to provide a clear state of the art for academicians, clinicians, and other stakeholders regarding the risk factors and their importance.

3.
Clin Exp Ophthalmol ; 51(8): 764-774, 2023 11.
Article in English | MEDLINE | ID: mdl-37885379

ABSTRACT

BACKGROUND: Ophthalmic clinic non-attendance in New Zealand is associated with poorer health outcomes, marked inequities and costs NZD$30 million per annum. Initiatives to improve attendance typically involve expensive and ineffective brute-force strategies. The aim was to develop machine learning models to accurately predict ophthalmic clinic non-attendance. METHODS: This multicentre, retrospective observational study developed and validated predictive models of clinic non-attendance. Attendance data for 3.1 million appointments from all New Zealand government-funded ophthalmology clinics from 2009 to 2018 were aggregated for analysis. Repeated ten-fold cross-validation was used to train and optimise XGBoost and logistic regression models on several demographic and clinic-related variables. Models developed using the entire training set were compared with those restricted to regional subsets of the data. RESULTS: In the testing data set from 2019, there were 407 574 appointments (median [range] age, 66 [0-105] years; 210 365 [51.6%] female) with a non-attendance rate of 5.7% (n = 23 309 missed appointments). XGBoost models trained on each region's data achieved the highest mean AUROC of 0.764 (SD 0.058) and mean AUPRC of 0.157 (SD 0.072). XGBoost performed better than logistic regression (mean AUROC = 0.756, p = 0.002). Training individual XGBoost models for each region led to better performance than training a single model on the complete nationwide dataset (mean AUROC = 0.754, p = 0.04). CONCLUSION: Machine learning algorithms can predict ophthalmic clinic non-attendance with relatively basic demographic and clinic data. These findings suggest further research examining implementation of such algorithms in scheduling systems or public health interventions may be useful.
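The logistic-regression baseline used in the study above can be sketched from scratch; everything below is synthetic (hypothetical features such as standardized age and prior missed appointments, made-up coefficients), shown only to illustrate fitting a binary non-attendance model and scoring it with AUROC:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logreg(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient logistic regression: bias + one weight per feature."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            g = p - yi                      # gradient of the log-loss
            w[0] -= lr * g
            for k, xj in enumerate(xi):
                w[k + 1] -= lr * g * xj
    return w

def auroc(scores, labels):
    """AUROC via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

random.seed(1)
# synthetic data: standardized age and count of prior missed appointments
X, y = [], []
for _ in range(400):
    age, prior = random.gauss(0, 1), random.gauss(0, 1)
    p_miss = sigmoid(-1.5 + 1.2 * prior - 0.8 * age)   # assumed generating rule
    X.append([age, prior]); y.append(1 if random.random() < p_miss else 0)
w = fit_logreg(X, y)
scores = [sigmoid(w[0] + w[1] * a + w[2] * p) for a, p in X]
train_auc = auroc(scores, y)
```

The study's gradient-boosted models (XGBoost) follow the same score-then-rank evaluation, just with a more expressive learner.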


Subject(s)
Ambulatory Care Facilities , Appointments and Schedules , Humans , Female , Aged , Male , Retrospective Studies , Machine Learning , Algorithms
4.
J Healthc Manag ; 67(5): 380-402, 2022.
Article in English | MEDLINE | ID: mdl-36074701

ABSTRACT

GOAL: Moral distress literature is firmly rooted in the nursing and clinician experience, with a paucity of literature that considers the extent to which moral distress affects clinical and administrative healthcare leaders. Moreover, the little evidence that has been collected on this phenomenon has not been systematically mapped to identify key areas for both theoretical and practical elaboration. We conducted a scoping review to frame our understanding of this largely unexplored dynamic of moral distress and better situate our existing knowledge of moral distress and leadership. METHODS: Using moral distress theory as our conceptual framework, we evaluated recent literature on moral distress and leadership to understand how prior studies have conceptualized the effects of moral distress. Our search yielded 1,640 total abstracts. Further screening with the PRISMA process resulted in 72 included articles. PRINCIPAL FINDINGS: Our scoping review found that leaders, not just their employees, personally experience moral distress. In addition, we identified an important role for leaders and organizations in addressing the theoretical conceptualization and practical effects of moral distress. PRACTICAL APPLICATIONS: Although moral distress is unlikely to ever be eliminated, the literature in this review points to a singular need for organizational responses that are intended to intervene at the level of the organization itself, not just at the individual level. Best practices require creating stronger organizational cultures that are designed to mitigate moral distress. This can be achieved through transparency and alignment of personal, professional, and organizational values.


Subject(s)
Organizational Culture , Stress, Psychological , Delivery of Health Care , Humans , Leadership , Morals
5.
Environ Sci Technol ; 56(18): 13189-13199, 2022 09 20.
Article in English | MEDLINE | ID: mdl-36055240

ABSTRACT

Per- and polyfluoroalkyl substances (PFAS) are pervasive environmental contaminants, and their relative stability and high bioaccumulation potential create a challenging risk assessment problem. Zebrafish (Danio rerio) data, in principle, can be synthesized within a quantitative adverse outcome pathway (qAOP) framework to link molecular activity with individual or population level hazards. However, even as qAOP models are still in their infancy, there is a need to link internal dose and toxicity endpoints in a more rigorous way to further not only qAOP models but adverse outcome pathway frameworks in general. We address this problem by suggesting refinements to the current state of toxicokinetic modeling for the early development zebrafish exposed to PFAS up to 120 h post-fertilization. Our approach describes two key physiological transformation phenomena of the developing zebrafish: dynamic volume of an individual and dynamic hatching of a population. We then explore two different modeling strategies to describe the mass transfer, with one strategy relying on classical kinetic rates and the other incorporating mechanisms of membrane transport and adsorption/binding potential. Moving forward, we discuss the challenges of extending this model in both timeframe and chemical class, in conjunction with providing a conceptual framework for its integration with ongoing qAOP modeling efforts.
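The "classical kinetic rates" strategy mentioned above reduces, in its simplest form, to a one-compartment uptake/elimination balance. A hedged sketch, not the paper's model: first-order uptake and elimination with the abstract's "dynamic volume" idea represented as a linearly growing volume, integrated by Euler's method with hypothetical rate constants:

```python
def simulate(ku, ke, c_water, v0, growth, t_end, dt=0.01):
    """Euler integration of a one-compartment uptake/elimination model in which
    the organism's volume grows linearly during development (dilution effect)."""
    t, amount = 0.0, 0.0
    conc = []
    while t < t_end:
        v = v0 * (1.0 + growth * t)                  # dynamic volume
        c = amount / v                               # internal concentration
        amount += (ku * c_water - ke * c) * v * dt   # mass balance over dt
        conc.append(c)
        t += dt
    return conc

# hypothetical rates: at fixed volume the steady state would be ku*c_water/ke = 4.0;
# growth dilution holds the simulated concentration slightly below that value
conc = simulate(ku=2.0, ke=0.5, c_water=1.0, v0=1.0, growth=0.05, t_end=50.0)
```

The paper's second strategy replaces these lumped rates with explicit membrane-transport and binding terms; the mass-balance skeleton stays the same.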


Subject(s)
Fluorocarbons , Water Pollutants, Chemical , Animals , Fluorocarbons/toxicity , Kinetics , Toxicokinetics , Water Pollutants, Chemical/metabolism , Water Pollutants, Chemical/toxicity , Zebrafish/metabolism
7.
Proc Natl Acad Sci U S A ; 119(15): e2113561119, 2022 04 12.
Article in English | MEDLINE | ID: mdl-35394862

ABSTRACT

Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this year showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
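One common way to build a multimodel ensemble from quantile-format forecasts, as collected by hubs of this kind, is to take the median of the models' values at each quantile level. This sketch does not reproduce the Hub's exact combination rules; the model values below are made up:

```python
from statistics import median

def ensemble(forecasts):
    """Combine per-model quantile forecasts by taking, at each quantile level,
    the median of the models' predicted values."""
    return {q: median(f[q] for f in forecasts) for q in forecasts[0]}

# three hypothetical models' quantile forecasts of weekly incident deaths
models = [
    {0.025: 90, 0.5: 120, 0.975: 160},
    {0.025: 70, 0.5: 100, 0.975: 220},
    {0.025: 95, 0.5: 140, 0.975: 190},
]
combined = ensemble(models)   # {0.025: 90, 0.5: 120, 0.975: 190}
```

Per-quantile medians are robust to a single badly miscalibrated model, which is one reason ensembles of this type tend to be the most consistently accurate submission.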


Subject(s)
COVID-19 , COVID-19/mortality , Data Accuracy , Forecasting , Humans , Pandemics , Probability , Public Health/trends , United States/epidemiology
8.
Comput Biol Med ; 145: 105388, 2022 06.
Article in English | MEDLINE | ID: mdl-35349798

ABSTRACT

BACKGROUND AND OBJECTIVE: Diabetes mellitus manifests as prolonged elevated blood glucose levels resulting from impaired insulin production. Such high glucose levels over a long period of time damage multiple internal organs. To mitigate this condition, researchers and engineers have developed the closed loop artificial pancreas consisting of a continuous glucose monitor and an insulin pump connected via a microcontroller or smartphone. A problem, however, is how to accurately predict short term future glucose levels in order to exert efficient glucose-level control. Much work in the literature focuses on minimizing prediction error as a key metric and therefore pursues complex prediction methods such as deep learning. Such an approach neglects other important and significant design issues such as method complexity (impacting interpretability and safety), hardware requirements for low-power devices such as the insulin pump, the required amount of input data for training (potentially rendering the method infeasible for new patients), and the fact that very small improvements in accuracy may not have significant clinical benefit. METHODS: We propose a novel low-complexity, explainable blood glucose prediction method derived from the Intel P6 branch predictor algorithm. We use Meta-Differential Evolution to determine predictor parameters on training data splits of the benchmark datasets we use. A comparison is made between our new algorithm and a state-of-the-art deep-learning method for blood glucose level prediction. RESULTS: To evaluate the new method, the Blood Glucose Level Prediction Challenge benchmark dataset is utilised. On the official test data split after training, the state-of-the-art deep learning method predicted glucose levels 30 min ahead of current time with 96.3% of predicted glucose levels having relative error less than 30% (which is equivalent to the safe zone of the Surveillance Error Grid). Our simpler, interpretable approach prolonged the prediction horizon by another 5 min with 95.8% of predicted glucose levels of all patients having relative error less than 30%. CONCLUSIONS: When considering predictive performance as assessed using the Blood Glucose Level Prediction Challenge benchmark dataset and Surveillance Error Grid metrics, we found that the new algorithm delivered comparable predictive accuracy performance, while operating only on the glucose-level signal with considerably less computational complexity.
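The Intel P6 branch predictor the method is derived from is built on a table of 2-bit saturating counters indexed by recent outcome history. The toy below shows that underlying mechanism on a binary rise/fall signal; it is an illustration of the P6-style predictor, not the paper's algorithm, and the learnable alternating trace is invented:

```python
class TwoBitPredictor:
    """Table of 2-bit saturating counters indexed by the last h binary outcomes,
    in the style of the Intel P6 branch predictor."""
    def __init__(self, history_bits=4):
        self.h = history_bits
        self.table = [1] * (1 << history_bits)   # counters in 0..3, start weakly "fall"
        self.history = 0                          # recent outcomes packed into an int

    def predict(self):
        return self.table[self.history] >= 2      # True = predict "rising"

    def update(self, outcome):                    # outcome: True if the level rose
        c = self.table[self.history]
        self.table[self.history] = min(3, c + 1) if outcome else max(0, c - 1)
        self.history = ((self.history << 1) | int(outcome)) & ((1 << self.h) - 1)

# toy trace: a strictly alternating rise/fall pattern is learnable from history
p = TwoBitPredictor()
trace = [i % 2 == 0 for i in range(200)]
hits = 0
for outcome in trace:
    hits += p.predict() == outcome
    p.update(outcome)
accuracy = hits / len(trace)
```

After a short warm-up the counters for the two recurring history states saturate and the predictor is nearly always right, which conveys why such a table is cheap enough for low-power pump hardware.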


Subject(s)
Blood Glucose Self-Monitoring , Diabetes Mellitus, Type 1 , Algorithms , Blood Glucose , Humans , Insulin
9.
Sci Rep ; 11(1): 10875, 2021 05 25.
Article in English | MEDLINE | ID: mdl-34035322

ABSTRACT

The SARS-CoV-2 virus is responsible for the novel coronavirus disease 2019 (COVID-19), which has spread to populations throughout the continental United States. Most state and local governments have adopted some level of "social distancing" policy, but infections have continued to spread despite these efforts. Absent a vaccine, authorities have few other tools by which to mitigate further spread of the virus. This raises the question of how effective social policy really is at reducing new infections that, left alone, could potentially overwhelm the existing hospitalization capacity of many states. We developed a mathematical model that captures correlations between some state-level "social distancing" policies and infection kinetics for all U.S. states, and used it to illustrate the link between social policy decisions, disease dynamics, and an effective reproduction number that changes over time, for case studies of Massachusetts, New Jersey, and Washington states. In general, our findings indicate that the potential for second waves of infection, which result after reopening states without an increase to immunity, can be mitigated by a return of social distancing policies as soon as possible after the waves are detected.
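The link between a policy-driven contact rate and a time-varying effective reproduction number can be seen in a minimal SIR model with a step change in beta. This is a generic textbook sketch under invented parameters, not the paper's fitted state-level model:

```python
def sir(beta_of_t, gamma, s0, i0, days, dt=0.1):
    """Euler-integrated SIR model with a time-dependent contact rate beta(t)."""
    s, i, r, t = s0, i0, 0.0, 0.0
    infected = []
    while t < days:
        n = s + i + r
        new_inf = beta_of_t(t) * s * i / n   # incidence rate
        rec = gamma * i                      # recovery rate
        s -= new_inf * dt
        i += (new_inf - rec) * dt
        r += rec * dt
        infected.append(i)
        t += dt
    return infected

# hypothetical scenario: distancing cuts the contact rate from 0.5 to 0.1 on day 30,
# driving the effective reproduction number R_t = beta(t)/gamma * S/N below 1
beta = lambda t: 0.5 if t < 30 else 0.1
curve = sir(beta, gamma=0.2, s0=99_990.0, i0=10.0, days=120)
```

Reversing the step (raising beta again after reopening) is what produces the second waves the abstract discusses, and reinstating the lower beta promptly after detection suppresses them.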


Subject(s)
COVID-19/epidemiology , Health Policy , COVID-19/pathology , COVID-19/virology , Databases, Factual , Humans , Massachusetts/epidemiology , New Jersey/epidemiology , Physical Distancing , Public Policy , SARS-CoV-2/isolation & purification , Washington/epidemiology
10.
Phys Rev E ; 103(4-1): 042417, 2021 Apr.
Article in English | MEDLINE | ID: mdl-34005977

ABSTRACT

Establishing formal mathematical analogies between disparate physical systems can be a powerful tool, allowing for the well studied behavior of one system to be directly translated into predictions about the behavior of another that may be harder to probe. In this paper we lay the foundation for such an analogy between the macroscale electrodynamics of simple magnetic circuits and the microscale chemical kinetics of transcriptional regulation in cells. By artificially allowing the inductor coils of the former to elastically expand under the action of their Lorentz pressure, we introduce nonlinearities into the system that we interpret through the lens of our analogy as a schematic model for the impact of crosstalk on the rates of gene expression near steady state. Synthetic plasmids introduced into a cell must compete for a finite pool of metabolic and enzymatic resources against a maelstrom of crisscrossing biological processes, and our theory makes sensible predictions about how this noisy background might impact the expression profiles of synthetic constructs without explicitly modeling the kinetics of numerous interconnected regulatory interactions. We conclude the paper with a discussion of how our theory might be expanded to a broader class of plasmid circuits and how our predictions might be tested experimentally.


Subject(s)
Models, Biological , Gene Regulatory Networks , Kinetics , Signal Transduction
11.
PLoS One ; 16(1): e0245094, 2021.
Article in English | MEDLINE | ID: mdl-33439904

ABSTRACT

The transcriptional network determines a cell's internal state by regulating protein expression in response to changes in the local environment. Due to the interconnected nature of this network, information encoded in the abundance of various proteins will often propagate across chains of noisy intermediate signaling events. The data-processing inequality (DPI) leads us to expect that this intracellular game of "telephone" should degrade this type of signal, with longer chains losing successively more information to noise. However, a previous modeling effort predicted that because the steps of these signaling cascades do not truly represent independent stages of data processing, the limits of the DPI could seemingly be surpassed, and the amount of transmitted information could actually increase with chain length. What that work did not examine was whether this regime of growing information transmission was attainable by a signaling system constrained by the mechanistic details of more complex protein-binding kinetics. Here we address this knowledge gap through the lens of information theory by examining a model that explicitly accounts for the binding of each transcription factor to DNA. We analyze this model by comparing stochastic simulations of the fully nonlinear kinetics to simulations constrained by the linear response approximations that displayed a regime of growing information. Our simulations show that even when molecular binding is considered, there remains a regime wherein the transmitted information can grow with cascade length, but ends after a critical number of links determined by the kinetic parameter values. This inflection point marks where correlations decay in response to an oversaturation of binding sites, screening informative transcription factor fluctuations from further propagation down the chain where they eventually become indistinguishable from the surrounding levels of noise.
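The data-processing-inequality baseline the paper contrasts against can be computed exactly for the simplest case of truly independent stages: a chain of binary symmetric channels, where composing channels only degrades information. This sketch shows the monotone decay that the non-independent signaling cascades in the paper can, within a regime, evade:

```python
import math

def bsc_chain_mi(eps, length):
    """Exact mutual information I(X; Y_k), in bits, for X ~ Bernoulli(1/2) sent
    through `length` independent binary symmetric channels with flip prob eps."""
    # k composed BSCs are equivalent to one BSC with this effective flip probability:
    p = 0.5 * (1.0 - (1.0 - 2.0 * eps) ** length)
    h = -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)   # binary entropy
    return 1.0 - h

# information about X surviving after 1..5 independent noisy steps
mis = [bsc_chain_mi(0.1, k) for k in range(1, 6)]
```

For independent stages `mis` is strictly decreasing in chain length, which is exactly the "telephone" degradation the DPI guarantees; the cited modeling work shows correlated intermediate steps need not obey this monotonicity.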


Subject(s)
Gene Expression Regulation , Gene Regulatory Networks , Models, Biological , Signal Transduction , Animals , Humans , Kinetics
12.
PLoS One ; 15(11): e0241664, 2020.
Article in English | MEDLINE | ID: mdl-33253235

ABSTRACT

RNA aptamers are relatively short nucleic acid sequences that bind targets with high affinity, and when combined with a riboswitch that initiates translation of a fluorescent reporter protein, can be used as a biosensor for chemical detection in various types of media. These processes span target binding at the molecular scale to fluorescence detection at the macroscale and involve a number of intermediate rate-limiting physical changes (e.g., molecular conformation change) and biochemical changes (e.g., reaction velocity) that together complicate assay design. Here we describe a mathematical model developed to aid environmental detection of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) using the DsRed fluorescent reporter protein; the model is general enough to potentially predict fluorescence from a broad range of water-soluble chemicals given the values of just a few kinetic rate constants as input. If we expose a riboswitch test population of Escherichia coli bacteria to a chemical dissolved in media, then the model predicts an empirically distinct, power-law relationship between the exposure concentration and the elapsed time of exposure. This relationship can be used to deduce an exposure time that meets or exceeds the optical threshold of a fluorescence detection device and inform new biosensor designs.
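A power-law relation between exposure concentration and detection time, of the general form t = a * c^(-b), can be calibrated by a linear fit in log-log space. The constants and calibration points below are synthetic, chosen only to show the fitting step, not values from the paper:

```python
import math

def fit_power_law(concs, times):
    """Least-squares fit of t = a * c**(-b) via linear regression in log-log space."""
    xs = [math.log(c) for c in concs]
    ys = [math.log(t) for t in times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope   # (a, b); log t = log a - b log c

# synthetic calibration points drawn from a known law t = 20 * c**(-0.7)
concs = [0.5, 1.0, 2.0, 4.0, 8.0]
times = [20.0 * c ** (-0.7) for c in concs]
a, b = fit_power_law(concs, times)
```

Once fitted, inverting the law gives the minimum exposure time at a given concentration that clears a detector's optical threshold, which is the design use the abstract describes.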


Subject(s)
Aptamers, Nucleotide/chemistry , Riboswitch , Triazines/chemistry , Biosensing Techniques
13.
Phys Rev E ; 101(2-1): 022412, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32168619

ABSTRACT

Gene drives offer unprecedented control over the fate of natural ecosystems by leveraging non-Mendelian inheritance mechanisms to proliferate synthetic genes across wild populations. However, these benefits are offset by a need to avoid the potentially disastrous consequences of unintended ecological interactions. The efficacy of many gene-editing drives has been brought into question due to predictions that they will inevitably be thwarted by the emergence of drive-resistant mutations, but these predictions derive largely from models of large or infinite populations that cannot be driven to extinction faster than mutations can fixate. To address this issue, we characterize the impact of a simple, meiotic gene drive on a small, homeostatic population whose genotypic composition may vary due to the stochasticity inherent in natural mating events (e.g., partner choice, number of offspring) or the genetic inheritance process (e.g., mutation rate, gene drive fitness). To determine whether the ultimate genotypic fate of such a population is sensitive to such stochastic fluctuations, we compare the results of two dynamical models: a deterministic model that attempts to predict how the genetics of an average population evolve over successive generations, and an agent-based model that examines how stable these predictions are to fluctuations. We find that, even on average, our stochastic model makes qualitatively distinct predictions from those of the deterministic model, and we identify the source of these discrepancies as a dynamic instability that arises at short times, when genetic diversity is maximized as a consequence of the gene drive's rapid proliferation. While we ultimately conclude that extinction can only beat out the fixation of drive-resistant mutations over a limited region of parameter space, the reason for this is more complex than previously understood, which could open new avenues for engineered gene drives to circumvent this weakness.


Subject(s)
Gene Drive Technology , Homeostasis/genetics , Meiosis/genetics , Models, Genetic
14.
J Diabetes Sci Technol ; 14(5): 878-882, 2020 09.
Article in English | MEDLINE | ID: mdl-31876179

ABSTRACT

Digital innovations have led to an explosion of data in healthcare, driving processes of democratization and foreshadowing the end of the paternalistic era of medicine and the inception of a new epoch characterized by patient-centered care. We illustrate that the "do it yourself" (DIY) automated insulin delivery (AID) innovation in diabetes care is a leading example of democratization of medicine, as evidenced by its application to the three pillars of democratization in healthcare (intelligent computing; sharing of information; and privacy, security, and safety) outlined by Stanford, but also within a broader context of democratization. The heuristic algorithms integral to DIY AID have been developed and refined by human intelligence and demonstrate intelligent computing. We deliver examples of research in artificial pancreas technology which actively pursues the use of machine learning representative of artificial intelligence (AI) and also explore alternate approaches to AI within the DIY AID example. Sharing of information symbolizes the core philosophy behind the success of the DIY AID evolution. We examine data sharing for algorithm development and refinement, for sharing of the open-source algorithm codes online, for peer-to-peer support, and sharing with medical and scientific communities. DIY AID systems have no regulatory approval, raising safety concerns as well as medico-legal and ethical implications for healthcare professionals. Other privacy and security factors are also discussed. Democratization of healthcare promises better health access for all. We recognize the limitations of DIY AID as it exists presently; however, we believe it has great potential.


Subject(s)
Blood Glucose/drug effects , Diabetes Mellitus, Type 1/drug therapy , Glycemic Control , Hypoglycemic Agents/administration & dosage , Insulin Infusion Systems , Insulin/administration & dosage , Pancreas, Artificial , Patient Participation , Artificial Intelligence , Biomarkers/blood , Blood Glucose/metabolism , Blood Glucose Self-Monitoring , Computer Security , Diabetes Mellitus, Type 1/blood , Diabetes Mellitus, Type 1/diagnosis , Diffusion of Innovation , Glycemic Control/adverse effects , Humans , Hypoglycemic Agents/adverse effects , Insulin/adverse effects , Insulin Infusion Systems/adverse effects , Monitoring, Ambulatory , Pancreas, Artificial/adverse effects , Patient Safety , Predictive Value of Tests , Treatment Outcome
15.
PLoS One ; 14(12): e0226687, 2019.
Article in English | MEDLINE | ID: mdl-31877201

ABSTRACT

Large scale biological responses are inherently uncertain, in part as a consequence of noisy systems that do not respond deterministically to perturbations and measurement errors inherent to technological limitations. As a result, they are computationally difficult to model and current approaches are notoriously slow and computationally intensive (multiscale stochastic models), fail to capture the effects of noise across a system (chemical kinetic models), or fail to provide sufficient biological fidelity because of broad simplifying assumptions (stochastic differential equations). We use a new approach to modeling multiscale stationary biological processes that embraces the noise found in experimental data to provide estimates of the parameter uncertainties and the potential mis-specification of models. Our approach models the mean stationary response at each biological level given a particular expected response relationship, capturing variation around this mean using conditional Monte Carlo sampling that is statistically consistent with training data. A conditional probability distribution associated with a biological response can be reconstructed using this method for a subset of input values, which overcomes the parameter identification problem. Our approach could be applied in addition to dynamical modeling methods (see above) to predict uncertain biological responses over experimental time scales. To illustrate this point, we apply the approach to a test case in which we model the variation associated with measurements at multiple scales of organization across a reproduction-related Adverse Outcome Pathway described for teleosts.


Subject(s)
Computer Simulation , Cyprinidae/physiology , Models, Biological , Algorithms , Animals , Female , Monte Carlo Method , Reproduction , Stochastic Processes
16.
PLoS One ; 14(12): e0225613, 2019.
Article in English | MEDLINE | ID: mdl-31790464

ABSTRACT

Techniques using machine learning for short-term blood glucose level prediction in patients with Type 1 Diabetes are investigated. This problem is significant for the development of effective artificial pancreas technology so accurate alerts (e.g. hypoglycemia alarms) and other forecasts can be generated. It is shown that two factors must be considered when selecting the best machine learning technique for blood glucose level regression: (i) the regression model performance metrics being used to select the model, and (ii) the preprocessing techniques required to account for the imbalanced time spent by patients in different portions of the glycemic range. Using standard benchmark data, it is demonstrated that different regression model/preprocessing technique combinations exhibit different accuracies depending on the glycemic subrange under consideration. Therefore, technique selection depends on the type of alert required. Specific findings are that a linear Support Vector Regression-based model, trained with normal as well as polynomial features, is best for blood glucose level forecasting in the normal and hyperglycemic ranges while a Multilayer Perceptron trained on oversampled data is ideal for predictions in the hypoglycemic range.


Subject(s)
Blood Glucose Self-Monitoring/methods , Blood Glucose/analysis , Diabetes Mellitus, Type 1/drug therapy , Hypoglycemia/diagnosis , Support Vector Machine , Blood Glucose Self-Monitoring/instrumentation , Datasets as Topic , Diabetes Mellitus, Type 1/blood , Forecasting , Humans , Hypoglycemia/blood , Hypoglycemia/chemically induced , Hypoglycemic Agents/administration & dosage , Hypoglycemic Agents/adverse effects , Insulin/administration & dosage , Insulin/adverse effects , Laboratory Critical Values , Pancreas, Artificial , Self Medication/adverse effects
17.
J Gen Appl Microbiol ; 65(3): 145-150, 2019 Jul 19.
Article in English | MEDLINE | ID: mdl-30700648

ABSTRACT

Explosives such as hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) are common contaminants found in soil and groundwater at military facilities worldwide, but large-scale monitoring of these contaminants at low concentrations is difficult. Biosensors that incorporate aptamers with high affinity and specificity for a target are a novel way of detecting these compounds. This work describes novel riboswitch-based biosensors for detecting RDX. The performance of the RDX riboswitch was characterized in Escherichia coli using a range of RDX concentrations from 0 to 44 µmol l⁻¹. Fluorescence was induced at RDX concentrations as low as 0.44 µmol l⁻¹. The presence of 4.4 µmol l⁻¹ RDX induced an 8-fold increase in fluorescence and higher concentrations did not induce a statistically significant increase in response.


Subject(s)
Biosensing Techniques/methods , Environmental Monitoring/methods , Environmental Pollutants/analysis , Explosive Agents/analysis , Triazines/analysis , Aptamers, Nucleotide/chemistry , Aptamers, Nucleotide/genetics , Escherichia coli/genetics , Escherichia coli/metabolism , Luminescent Measurements , Luminescent Proteins/genetics , Luminescent Proteins/metabolism , Riboswitch/genetics
18.
Artif Intell Med ; 97: 204-214, 2019 06.
Article in English | MEDLINE | ID: mdl-30797633

ABSTRACT

Neural networks are powerful tools used widely for building cancer prediction models from microarray data. We review the most recently proposed models to highlight the roles of neural networks in predicting cancer from gene expression data. We identified articles published between 2013 and 2018 in scientific databases using keywords such as cancer classification, cancer analysis, cancer prediction, cancer clustering and microarray data. Analyzing the studies reveals that neural network methods have been either used for filtering (data engineering) the gene expressions in a prior step to prediction; predicting the existence of cancer, cancer type or the survivability risk; or for clustering unlabeled samples. This paper also discusses some practical issues that can be considered when building a neural network-based cancer prediction model. Results indicate that the functionality of the neural network determines its general architecture. However, the decision on the number of hidden layers, neurons, hyperparameters and learning algorithm is made using trial-and-error techniques.


Subject(s)
Neoplasms/pathology , Neural Networks, Computer , Algorithms , Cluster Analysis , Humans , Neoplasms/classification , Surveys and Questionnaires
19.
N Z Med J ; 131(1485): 19-26, 2018 11 09.
Article in English | MEDLINE | ID: mdl-30408815

ABSTRACT

AIM: To examine the practices used by New Zealand's 20 district health boards (DHBs) to protect patient privacy when patient information is used for research, and particularly practices for de-identifying information. METHOD: An e-mailed questionnaire survey, using New Zealand's Official Information Act to request information on the policies and practices of each DHB. RESULTS: 19/20 DHBs (95%) responded to the survey, one of which reported that it did not provide patient information for research. 18/18 (100%) of the DHBs that reported providing patient information for research required the project to have ethics approval. 18/18 (100%) of the DHBs that offered patient data for research also required individual patient consent and/or de-identification of the information before it was used for research. 14/18 DHBs (78%) de-identified data before releasing it for research, 8/18 DHBs (44%) sought individual patient consent before releasing data for research, and 5/18 (28%) used both methods. Other measures to protect privacy included confidentiality agreements, encryption and cybersecurity procedures. CONCLUSION: Our findings show DHBs self-report that they have sufficient measures in place to protect privacy when patient information is used for research. However, these measures need to be continuously evaluated against rapidly evolving international practices and technological developments.


Subject(s)
Confidentiality , Data Analysis , Health Services Research , Organizational Policy , Advisory Committees , Computer Security , Electronic Health Records , Humans , Informed Consent , New Zealand , Surveys and Questionnaires
20.
BMC Syst Biol ; 12(1): 81, 2018 08 07.
Article in English | MEDLINE | ID: mdl-30086736

ABSTRACT

BACKGROUND: A challenge of in vitro to in vivo extrapolation (IVIVE) is to predict the physical state of organisms exposed to chemicals in the environment from in vitro exposure assay data. Although toxicokinetic modeling approaches promise to bridge in vitro screening data with in vivo effects, they are often encumbered by a need for redesign or re-parameterization when applied to different tissues or chemicals. RESULTS: We demonstrate a parameterization of reverse toxicokinetic (rTK) models developed for the adult zebrafish (Danio rerio) based upon particle swarm optimization (PSO) of the chemical uptake and degradation rates that predict bioconcentration factors (BCF) for a broad range of chemicals. PSO reveals a relationship between chemical uptake and decomposition parameter values that predicts chemical-specific BCF values with moderate statistical agreement to a limited yet diverse chemical dataset, and all without a need to retrain the model to new data. CONCLUSIONS: The presented model requires only the octanol-water partitioning ratio to predict BCFs to a fidelity consistent with existing QSAR models. This success invites re-evaluation of the modeling assumptions; specifically, it suggests that chemical uptake into arterial blood may be limited by transport across gill membranes (diffusion) rather than by counter-current flow between gill lamellae (convection). Therefore, more detailed molecular modeling of aquatic respiration may further improve predictive accuracy of the rTK approach.
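Particle swarm optimization, as used above to fit uptake and degradation rates, is itself a compact algorithm: each particle tracks its own best position and is pulled toward both it and the swarm's global best. A minimal generic sketch; the quadratic objective and bounds are stand-ins for the paper's BCF misfit function:

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm minimization of f over box bounds [(lo, hi), ...]."""
    random.seed(42)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy objective standing in for the BCF misfit: minimum at (2, -1)
best, best_val = pso(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2,
                     [(-5.0, 5.0), (-5.0, 5.0)])
```

In the rTK setting, `f` would score the mismatch between predicted and measured BCF values across the chemical dataset, with the particle coordinates being the uptake and decomposition rate constants.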


Subject(s)
Models, Biological , Zebrafish/metabolism , Animals , Biological Transport , Toxicokinetics