Results 1 - 20 of 68
1.
BMC Bioinformatics ; 24(1): 331, 2023 Sep 04.
Article in English | MEDLINE | ID: mdl-37667175

ABSTRACT

BACKGROUND: Over the past several decades, metrics have been defined to assess the quality of various types of models and to compare their performance depending on their capacity to explain the variance found in real-life data. However, available validation methods are mostly designed for statistical regressions rather than for mechanistic models. To our knowledge, in the latter case there are no consensus standards, for instance for the validation of predictions against real-world data given the variability and uncertainty of the data. In this work, we focus on the prediction of time-to-event curves, using as an application example a mechanistic model of non-small cell lung cancer. We designed four empirical methods to assess both model performance and reliability of predictions: two methods based on bootstrapped versions of nonparametric statistical tests, the log-rank and the combined weighted log-rank (MaxCombo) tests, and two methods based on bootstrapped prediction intervals, referred to here as raw coverage and the juncture metric. We also introduced the notion of observation time uncertainty to take into consideration the real-life delay between the moment an event happens and the moment it is observed and reported. RESULTS: We highlight the advantages and disadvantages of these methods according to their application context. We show that the context of use of the model has an impact on the model validation process. Using several validation metrics highlighted the model's limited ability to predict the evolution of the disease across all mutation populations at once, and showed that it performed better when making specific predictions in the target mutation populations. Choosing and using a single metric could have led to an erroneous validation of the model and its context of use. CONCLUSIONS: With this work, we stress the importance of choosing metrics judiciously and show how a combination of metrics can be more relevant when validating a given model and its predictions within a specific context of use. We also show that the reliability of the results depends both on the metric and on the statistical comparisons, and that the conditions of application and the type of available information need to be taken into account to choose the best validation strategy.


Subject(s)
Adenocarcinoma of Lung , Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Humans , Carcinoma, Non-Small-Cell Lung/genetics , Reproducibility of Results , Uncertainty , Lung Neoplasms/genetics , Adenocarcinoma of Lung/genetics , ErbB Receptors/genetics
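
The bootstrapped log-rank comparison described above can be sketched in a few lines. The snippet below is a minimal illustration using the lifelines package: it resamples an observed cohort with replacement and counts how often the log-rank test rejects equality with a model-simulated cohort. The data, sample sizes, and significance threshold are synthetic placeholders, not the authors' actual model outputs or their MaxCombo implementation.

# Minimal sketch of a bootstrapped log-rank comparison between an observed
# cohort and a model-predicted (simulated) cohort. Data below are synthetic
# placeholders; in practice they would come from the trial and the model.
import numpy as np
from lifelines.statistics import logrank_test  # pip install lifelines

rng = np.random.default_rng(42)

# Synthetic stand-ins: exponential event times, administrative censoring at t = 24.
obs_time = np.minimum(rng.exponential(12.0, 200), 24.0)
obs_event = (obs_time < 24.0).astype(int)
pred_time = np.minimum(rng.exponential(13.0, 1000), 24.0)
pred_event = (pred_time < 24.0).astype(int)

n_boot, alpha, rejections = 500, 0.05, 0
for _ in range(n_boot):
    # Resample the observed cohort with replacement to propagate data uncertainty.
    idx = rng.integers(0, len(obs_time), len(obs_time))
    res = logrank_test(obs_time[idx], pred_time,
                       event_observed_A=obs_event[idx], event_observed_B=pred_event)
    rejections += res.p_value < alpha

print(f"Bootstrap rejection rate: {rejections / n_boot:.2f}")
# A low rejection rate suggests the predicted and observed curves are compatible.
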
2.
Nat Commun ; 13(1): 1980, 2022 04 13.
Article in English | MEDLINE | ID: mdl-35418135

ABSTRACT

Respiratory disease trials are profoundly affected by non-pharmaceutical interventions (NPIs) against COVID-19 because they perturb the existing regular patterns of all seasonal viral epidemics. To address trial design under such uncertainty, we developed an epidemiological model of respiratory tract infection (RTI) coupled to a mechanistic description of viral RTI episodes. We explored the impact of reduced viral transmission (mimicking NPIs) using a virtual population and in silico trials of the bacterial lysate OM-85 as prophylaxis for RTI. Ratio-based efficacy metrics are only affected under strict lockdown, whereas the absolute benefit is already reduced with intermediate NPIs (e.g., mask-wearing). Consequently, despite NPIs, trials may meet their relative efficacy endpoints (provided recruitment hurdles can be overcome) but are difficult to assess with respect to clinical relevance. These results argue for reporting a variety of metrics for benefit assessment, for using adaptive trial designs, and for adapting statistical analyses. They also call into question eligibility criteria that are misaligned with the actual disease burden.


Subject(s)
COVID-19 , Respiration Disorders , Respiratory Tract Infections , Virus Diseases , COVID-19/prevention & control , Clinical Trials as Topic , Communicable Disease Control/methods , Humans , Respiratory Tract Infections/epidemiology , SARS-CoV-2 , Virus Diseases/epidemiology
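
The contrast between ratio-based and absolute efficacy metrics under reduced transmission can be illustrated with a toy calculation. The sketch below assumes a fixed relative efficacy of prophylaxis and simply rescales the baseline RTI incidence to mimic NPIs of increasing stringency; all figures are invented for illustration and are unrelated to the OM-85 trial model.

# Toy illustration of why ratio-based efficacy can survive NPIs while the
# absolute benefit shrinks. Numbers are arbitrary assumptions, not trial data.
def trial_metrics(control_rate, relative_efficacy):
    treated_rate = control_rate * (1 - relative_efficacy)
    return {
        "relative_risk": treated_rate / control_rate,        # ratio-based metric
        "absolute_reduction": control_rate - treated_rate,   # absolute benefit
    }

baseline = 2.0    # mean RTI episodes per patient per season (assumed)
efficacy = 0.25   # assumed 25% relative reduction with prophylaxis

for npi, scaling in [("no NPI", 1.0), ("intermediate NPI", 0.5), ("strict lockdown", 0.1)]:
    m = trial_metrics(baseline * scaling, efficacy)
    print(f"{npi:18s} RR={m['relative_risk']:.2f}  "
          f"absolute reduction={m['absolute_reduction']:.2f} episodes/patient")
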
3.
Front Med Technol ; 4: 810315, 2022.
Article in English | MEDLINE | ID: mdl-35281671

ABSTRACT

Health technology assessment (HTA) aims to be a systematic, transparent, unbiased synthesis of the clinical efficacy, safety, and value of medical products (MPs) that helps policymakers, payers, clinicians, and industry make informed decisions. The evidence available for HTA has gaps that impede timely prediction of individual long-term effects in real clinical practice. In addition, appraisal of an MP needs cross-stakeholder communication and engagement. Both aspects may benefit from extended use of modeling and simulation. Modeling is used in HTA for data synthesis and health-economic projections. In parallel, regulatory consideration of model-informed drug development (MIDD) has brought attention to mechanistic modeling techniques that could in fact be relevant for HTA. Their ability to extrapolate and generate personalized predictions makes mechanistic MIDD approaches suitable for supporting the translation of clinical trial data into real-world evidence. In this perspective, we therefore discuss concrete examples of how mechanistic models could address HTA-related questions. We shed light on different stakeholders' contributions and needs in the appraisal phase and suggest how mechanistic modeling strategies and reporting can contribute to this effort. Several barriers still separate the HTA space from the clinical development space with regard to modeling: the lack of a model validation framework adapted to the decision-making process, inconsistent and unclear support by stakeholders, limited generalizable use cases, and the absence of appropriate incentives. To address this challenge, we suggest intensifying the collaboration between competent authorities, drug developers, and modelers, with the aim of making mechanistic models central to the generation, synthesis, and appraisal of HTA evidence so that the totality of mechanistic and clinical evidence can be leveraged by all relevant stakeholders.

4.
PLoS One ; 16(10): e0258093, 2021.
Article in English | MEDLINE | ID: mdl-34634062

ABSTRACT

In order to propose a more precise definition and explore how to reduce ethical losses in randomized controlled clinical trials (RCTs), we set out to identify trial participants who do not contribute to demonstrating that the treatment in the experimental arm is superior to that in the control arm. RCTs emerged in the middle of the last century as the gold standard for assessing efficacy and became the cornerstone of the value of new therapies, yet their ethical grounds remain a matter of debate. We introduce the concept of unnecessary participants in RCTs, defined as the sum of non-informative participants and non-responders. Non-informative participants carry no information with respect to the efficacy measured in the trial, in contrast to responders, who carry all the information required to conclude on the treatment's efficacy. Non-responders experience the event whether or not they receive the experimental treatment. Unnecessary participants carry the burden of having to participate in a clinical trial without benefiting from it, which may include experiencing side effects. These unnecessary participants therefore carry the ethical loss that is inherent to the RCT methodology. By contrast, responders to the experimental treatment account for its entire measured efficacy in the RCT. Starting from the proportions observed in a real placebo-controlled trial from the literature, we carried out simulations of RCTs in which the proportion of responders was progressively increased up to 100%. We show that the number of unnecessary participants decreases steadily until the RCT's ethical loss reaches a minimum. In parallel, the trial sample size decreases (and presumably its cost as well), while the trial's statistical power increases, as shown by the increase in the chi-square statistic comparing the event rates between the two arms. We therefore expect that increasing the proportion of responders in RCTs would contribute to making them more ethically acceptable, with fewer false-negative outcomes.


Subject(s)
Randomized Controlled Trials as Topic/ethics
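
The relationship between the proportion of responders and the chi-square statistic mentioned in the abstract can be sketched as follows. The toy calculation below builds a 2x2 table of expected event counts for a fixed arm size and lets the responder fraction grow; the arm size and event rates are arbitrary assumptions, not the values of the placebo-controlled trial used in the paper.

# Toy sketch: as the proportion of responders in the experimental arm grows,
# the chi-square comparing event rates between arms increases (greater power),
# so a smaller trial would suffice. Figures are illustrative assumptions only.
import numpy as np
from scipy.stats import chi2_contingency

n_per_arm = 500            # assumed arm size
control_event_rate = 0.40  # assumed event rate without the experimental treatment

for responder_fraction in (0.1, 0.3, 0.5, 0.8, 1.0):
    # Responders avoid the event when treated; non-responders have it regardless.
    exp_event_rate = control_event_rate * (1 - responder_fraction)
    table = np.array([
        [control_event_rate * n_per_arm, (1 - control_event_rate) * n_per_arm],
        [exp_event_rate * n_per_arm, (1 - exp_event_rate) * n_per_arm],
    ])
    chi2, p, _, _ = chi2_contingency(table)
    print(f"responders={responder_fraction:.0%}  chi2={chi2:.1f}  p={p:.2e}")
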
5.
CPT Pharmacometrics Syst Pharmacol ; 10(8): 804-825, 2021 08.
Article in English | MEDLINE | ID: mdl-34102034

ABSTRACT

The value of in silico methods in drug development and evaluation has been demonstrated repeatedly and convincingly. While their benefits are now unanimously recognized, international standards for their evaluation, accepted by all stakeholders involved, have yet to be established. In this white paper, we propose a risk-informed framework for evaluating the credibility of mechanistic models. To properly frame the proposed verification and validation activities, concepts such as context of use, regulatory impact, and risk-based analysis are discussed. To ensure common understanding among all stakeholders, an overview is provided of the relevant in silico terminology used throughout this paper. To illustrate the feasibility of the proposed approach, we apply it to three real case examples in the context of drug development, using a credibility matrix currently being tested as a quick-start tool by regulators. Altogether, this white paper provides a practical approach to model evaluation, applicable in both scientific and regulatory evaluation contexts.


Subject(s)
Computer Simulation , Drug Development/methods , Models, Theoretical , Drug Development/legislation & jurisprudence , Humans , Risk Assessment/methods , Terminology as Topic
7.
Drugs Real World Outcomes ; 6(3): 125-132, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31359347

ABSTRACT

BACKGROUND: Randomised, double-blind clinical trial methodology minimises bias in the measurement of treatment efficacy. However, most phase III trials in non-orphan diseases do not include individuals representative of the population to whom the efficacy findings will be applied in the real world. Thus, a translation process must be used to infer effectiveness for these populations. Current conventional translation processes are not formalised and do not have a clear theoretical or practical basis. There is a growing need for accurate translation, both for public health considerations and to support the shift towards personalised medicine. OBJECTIVE: Our objective was to assess, using conventional methods, the results of translating efficacy data into population effectiveness for two simulated clinical trials of two drugs in three populations. METHODS: We simulated three populations, two drugs with different efficacies, and two trials with different sampling protocols. RESULTS: With few exceptions, current translation methods do not result in accurate predictions of population effectiveness. The reason for this failure is the non-linearity of the translation method. One consequence of this inaccuracy is that pharmacoeconomic and postmarketing surveillance studies based on the direct use of clinical trial efficacy metrics are flawed. CONCLUSION: There is a clear need to develop and validate functional and relevant approaches for translating clinical trial efficacy to the real-world setting.
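
The non-linearity that defeats naive translation can be illustrated with a toy effect model. In the sketch below, applying the trial-average relative risk to a lower-risk target population gives a different answer than averaging individual benefits; the effect-model function and the risk distributions are assumptions chosen only to make the point.

# Toy sketch of why translating trial efficacy to a target population is
# non-linear: applying the trial-average relative risk to everyone is not
# the same as averaging individual benefits. All numbers are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def treated_risk(rc):
    # Hypothetical non-linear effect model linking untreated risk (Rc) to
    # treated risk (Rt); a real model would be fitted, not assumed.
    return rc - 0.3 * rc * (1 - rc)

trial_rc = rng.beta(2, 5, 2000)    # baseline risks of trial participants
target_rc = rng.beta(2, 12, 2000)  # lower-risk real-world population

trial_rr = treated_risk(trial_rc).mean() / trial_rc.mean()  # trial-level RR

naive = target_rc.mean() * (1 - trial_rr)                   # "translate the RR"
true = (target_rc - treated_risk(target_rc)).mean()         # average individual benefit
print(f"naive absolute benefit: {naive:.4f}   model-based: {true:.4f}")
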

8.
Transplantation ; 100(9): 1803-14, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27257997

ABSTRACT

Chronic lung allograft dysfunction (CLAD) is the major limitation to long-term survival after lung transplantation. Chronic lung allograft dysfunction manifests as bronchiolitis obliterans syndrome or the more recently described restrictive allograft syndrome. Although numerous risk factors have been identified so far, the physiopathological mechanisms of CLAD remain poorly understood. We investigate here the immune mechanisms involved in the development of CLAD after lung transplantation. We explore the innate and adaptive immune reactions induced by the allograft itself or by the environment, and how they lead to allograft dysfunction. Because the current literature suggests that bronchiolitis obliterans syndrome and restrictive allograft syndrome are two distinct entities, we focus on the specific factors behind each syndrome. Chronic lung allograft dysfunction is a multifactorial disease that so far remains irreversible and unpredictable. We therefore conclude by discussing the potential of a systems-biology approach to predict its occurrence and to better understand its underlying mechanisms.


Subject(s)
Bronchiolitis Obliterans/immunology , Lung Transplantation/adverse effects , Lung/immunology , Lung/surgery , Adaptive Immunity , Allografts , Animals , Bronchiolitis Obliterans/diagnosis , Bronchiolitis Obliterans/mortality , Bronchiolitis Obliterans/physiopathology , Chronic Disease , Graft Survival , Humans , Immunity, Innate , Lung/physiopathology , Lung Transplantation/mortality , Risk Factors , Syndrome , Systems Biology , Time Factors , Treatment Outcome
9.
PLoS One ; 10(11): e0140793, 2015.
Article in English | MEDLINE | ID: mdl-26529507

ABSTRACT

OBJECTIVE: To examine the performance of an alternative strategy for deciding when to initiate blood pressure-lowering drugs, called Proportional Benefit (PB). It selects candidates while addressing the inequity induced by the high-risk approach, since it distributes the gains proportionally to the burden of disease across genders and age groups. STUDY DESIGN AND SETTING: Mild hypertensives from a Realistic Virtual Population, stratified by gender and 10-year age class (range 35-64 years), received simulated treatment over 10 years according to the PB strategy or the 2007 ESH/ESC guidelines (ESH/ESC). Primary outcomes were the relative life-year gain (ratio of life-years gained to years of potential life lost) and the number needed to treat to gain a life-year. A sensitivity analysis was performed to assess the impact on these outcomes of the changes introduced by the 2013 ESH/ESC guidelines. RESULTS: The 2007 ESH/ESC relative life-year gains by age class were 2%, 10%, and 14% in men and 0%, 2%, and 11% in women; this gradient was abolished by the PB strategy (relative gain of 10% in all categories) while preserving the same overall gain in life-years. The redistribution of benefits improved the profile of residual events in younger individuals compared with the 2007 ESH/ESC guidelines. The PB strategy was more efficient (NNT = 131) than the 2013 ESH/ESC guidelines, whatever the level of evidence of the scenario adopted (NNT = 139 with the evidence-based scenario and NNT = 179 with the opinion-based scenario), although the 2007 ESH/ESC guidelines remained the most efficient strategy (NNT = 114). CONCLUSION: The Proportional Benefit strategy provides the first response yet proposed to the inequity of resource use inherent in treating the highest-risk people. It occupies an intermediate position with regard to the efficiency expected from applying historical and current ESH/ESC hypertension guidelines. Our approach makes it possible to adapt recommendations to the risk profile and resources of a particular country.


Subject(s)
Antihypertensive Agents/therapeutic use , Cardiovascular Diseases/prevention & control , Hypertension/drug therapy , Adult , Age Factors , Antihypertensive Agents/pharmacology , Blood Pressure/drug effects , Cost-Benefit Analysis , Female , Humans , Hypertension/physiopathology , Male , Middle Aged , Practice Guidelines as Topic , Risk Factors , Sex Factors
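
For readers unfamiliar with the two outcome metrics, the small sketch below computes them for one hypothetical gender/age stratum, under our reading that the NNT reported here is the number of treated persons per life-year gained; the input figures are placeholders, not study results.

# Minimal sketch of the two outcome metrics named above, computed per
# gender/age stratum. Figures are made-up placeholders, not study results.
def stratum_metrics(n_treated, life_years_gained, potential_life_years_lost):
    return {
        # Life-years gained relative to the stratum's burden of disease.
        "relative_gain": life_years_gained / potential_life_years_lost,
        # Persons who must be treated (over the horizon) to gain one life-year.
        "nnt_per_life_year": n_treated / life_years_gained,
    }

example = stratum_metrics(n_treated=10_000,
                          life_years_gained=80.0,
                          potential_life_years_lost=800.0)
print(example)  # {'relative_gain': 0.1, 'nnt_per_life_year': 125.0}
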
10.
C R Biol ; 338(10): 635-42, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26276539

ABSTRACT

Sepsis is defined as a syndrome combining a systemic inflammatory response with a documented infection. It may progress to more serious forms such as septic shock, following the failure of one or more organs and the emergence of hemodynamic defects. Assuming that the emergence of serious septic syndromes may be partially explained by the early loss of regulation of the inflammatory response, we compared, from a transcriptomic perspective, the biological mechanisms expressed during an induced systemic inflammatory response with those expressed during severe septic syndromes. Using open-access transcriptomic databases, we first studied the kinetics of an induced inflammatory response. Functional analysis then helped us identify biological mechanisms, such as the mTOR signaling pathway, that discriminate pathological cases of sepsis from non-pathological cases (i.e., artificially induced SIRS).


Subject(s)
Endotoxemia/complications , Gene Expression Profiling , Gene Expression Regulation , Sepsis/complications , Systemic Inflammatory Response Syndrome/genetics , Transcriptome , Adaptive Immunity/genetics , Adult , Apoptosis/genetics , Datasets as Topic , Endotoxemia/chemically induced , Endotoxemia/genetics , Humans , Microarray Analysis , Multigene Family , Postoperative Complications/genetics , Principal Component Analysis , Prospective Studies , Sepsis/genetics , Signal Transduction , Systemic Inflammatory Response Syndrome/etiology , TOR Serine-Threonine Kinases/physiology , Transcription Factors/metabolism , Transcription, Genetic
11.
C R Biol ; 337(12): 661-78, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25433558

ABSTRACT

Target identification aims to identify biomolecules whose function should be therapeutically altered to cure the pathology under consideration. An algorithm for in silico target identification using Boolean network attractors is proposed. It assumes that attractors correspond to phenotypes produced by the modeled biological network. It identifies target combinations that allow disturbed networks to avoid attractors associated with pathological phenotypes. The algorithm is tested on a Boolean model of the mammalian cell cycle, and its applications are illustrated on a Boolean model of Fanconi anemia. The results show that the algorithm returns target combinations able to remove attractors associated with pathological phenotypes and thus succeeds in performing the proposed in silico target identification. However, as with any in silico evidence, there is a bridge to cross between theory and practice. Nevertheless, the algorithm is expected to be of interest for target identification.


Subject(s)
Computer Simulation/statistics & numerical data , Drug Discovery/methods , High-Throughput Screening Assays/instrumentation , Neural Networks, Computer , Algorithms , Breast Neoplasms/genetics , Cell Cycle/physiology , Cell Physiological Phenomena , DNA Repair , Fanconi Anemia/genetics , Female , Humans , Models, Biological , Phenotype
12.
J R Soc Interface ; 11(100): 20140867, 2014 Nov 06.
Article in English | MEDLINE | ID: mdl-25209407

ABSTRACT

Healthcare authorities must make difficult decisions about how to spend limited budgets on the interventions that guarantee the best cost-efficacy ratio. We propose a novel approach to treatment decision-making, OMES (in French, Objectif thérapeutique, Modèle Effet Seuil; in English, Therapeutic Objective-Threshold-Effect Model, TOTEM). This approach takes into account results from clinical trials, adjusted for the patients' characteristics, in treatment decision-making. We compared OMES with the French clinical practice guidelines (CPGs) for the management of dyslipidemia with statins in a computer-generated realistic virtual population representing the adult French population, in terms of the number of all-cause deaths avoided (number of avoided events, NAE) under treatment and the individual absolute benefit. The total budget was fixed at the annual amount reimbursed by the French social security system for statins. With the CPGs, the NAE was 292 for an annual cost of €122.54 million, compared with 443 with OMES. For a fixed NAE, OMES reduced costs by 50% (€60.53 million per year). These results demonstrate that OMES is at least as good as, and even better than, the standard CPGs when applied to the same population. Hence the OMES approach is a practical, useful alternative that can help overcome the limitations of treatment decision-making based solely on CPGs.


Subject(s)
Dyslipidemias , Hydroxymethylglutaryl-CoA Reductase Inhibitors/economics , Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use , Models, Biological , Models, Econometric , Adult , Clinical Trials as Topic , Computer Simulation , Costs and Cost Analysis , Dyslipidemias/diagnosis , Dyslipidemias/drug therapy , Dyslipidemias/economics , Female , France , Humans , Male , Practice Guidelines as Topic
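
The budget-constrained comparison can be caricatured as follows: treat according to a guideline-like threshold, or spend a fixed envelope on the individuals with the highest predicted absolute benefit. Everything in the sketch below (risk model, relative risk reduction, costs, budget) is an invented assumption, not the OMES model or French reimbursement data.

# Toy sketch of the comparison: guideline-threshold treatment vs. ranking
# individuals by predicted absolute benefit under a fixed budget.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
ldl = rng.normal(1.4, 0.4, n)                  # g/L, synthetic population
risk10y = np.clip(0.02 + 0.08 * (ldl - 1.0) + rng.normal(0, 0.02, n), 0.001, 0.6)
abs_benefit = risk10y * 0.25                   # assumed 25% relative risk reduction
annual_cost = 300.0                            # €/patient/year, assumed
budget = 12e6                                  # fixed annual envelope, assumed

# Guideline-like rule: treat everyone above an LDL threshold.
guideline = ldl > 1.6
# Benefit-ranking rule: spend the budget on the highest predicted benefits first.
capacity = int(budget // annual_cost)
ranked = np.argsort(-abs_benefit)[:capacity]

print("guideline: treated", int(guideline.sum()),
      "events avoided", round(float(abs_benefit[guideline].sum())))
print("benefit-ranked: treated", capacity,
      "events avoided", round(float(abs_benefit[ranked].sum())))
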
13.
Therapie ; 69(3): 235-7, 2014.
Article in English | MEDLINE | ID: mdl-24927505

ABSTRACT

Reimbursement of drugs by public or private insurance systems is increasingly problematic, including in supposedly "rich" countries. There is an international consensus on the benefit of health technology assessment for clarifying collective reimbursement decisions, and this includes taking the target population of the new drug into account. The authors discuss the urgent need for better quantification of the target population, which must include both a qualitative description of this population and a scientific extrapolation of it, the latter certainly being the most challenging problem.


Subject(s)
Insurance, Health, Reimbursement/economics , Insurance, Pharmaceutical Services/economics , Reimbursement Mechanisms/economics , Decision Making , Humans , Technology Assessment, Biomedical/methods
15.
J Pers Med ; 3(3): 177-90, 2013 Aug 15.
Article in English | MEDLINE | ID: mdl-25562651

ABSTRACT

The effect model law states that a natural relationship exists between the frequency (observation) or probability (prediction) of a morbid event without any treatment and the frequency or probability of the same event with a treatment. This relationship is called the effect model. It applies to a single individual, to individuals within a population, or to groups; in the latter case, frequencies or probabilities are averages over the group. The relationship is specific to a therapy, a disease or an event, and a period of observation. If a single disease is expressed through several distinct events, a treatment will be characterized by as many effect models. Empirical evidence, simulations with models of diseases and therapies and virtual populations, as well as theoretical derivation, support the existence of the law. The effect model can be estimated through statistical fitting or mathematical modelling. It enables prediction of the (absolute) benefit of a treatment for a given patient and thus constitutes the theoretical basis for the design of practical tools for personalized medicine.
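
Below is a minimal sketch of estimating an effect model and using it for individual prediction, under the simplest possible assumption of a linear relationship between Rt and Rc; the (Rc, Rt) pairs are fabricated for illustration.

# Minimal sketch of estimating an effect model Rt = f(Rc) from group-level
# data and using it to predict a patient's absolute benefit. The pairs below
# are fabricated; a real application would use trial or model outputs.
import numpy as np

# (Rc, Rt): event probability without vs. with treatment, per group.
rc = np.array([0.05, 0.10, 0.20, 0.30, 0.40])
rt = np.array([0.04, 0.07, 0.15, 0.24, 0.33])

slope, intercept = np.polyfit(rc, rt, 1)       # simple linear effect model

def absolute_benefit(rc_patient):
    """Predicted risk reduction for an individual with untreated risk rc_patient."""
    return rc_patient - (intercept + slope * rc_patient)

print(f"Rt ~ {intercept:.3f} + {slope:.3f} * Rc")
print(f"Predicted absolute benefit at Rc=0.25: {absolute_benefit(0.25):.3f}")
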

16.
PLoS Med ; 9(4): e1001204, 2012.
Article in English | MEDLINE | ID: mdl-22509138

ABSTRACT

BACKGROUND: The UK Prospective Diabetes Study showed that metformin decreases mortality compared with diet alone in overweight patients with type 2 diabetes mellitus. Since then, it has been the first-line treatment in overweight patients with type 2 diabetes. However, metformin-sulphonylurea bitherapy may increase mortality. METHODS AND FINDINGS: This meta-analysis of randomised controlled trials evaluated the effect of metformin (in studies of metformin versus diet alone, versus placebo, and versus no treatment; metformin as an add-on therapy; and metformin withdrawal) on cardiovascular morbidity or mortality in patients with type 2 diabetes. We searched Medline, Embase, and the Cochrane database. Primary end points were all-cause mortality and cardiovascular death. Secondary end points included all myocardial infarctions, all strokes, congestive heart failure, peripheral vascular disease, leg amputations, and microvascular complications. Thirteen randomised controlled trials (13,110 patients) were retrieved; 9,560 patients were given metformin, and 3,550 patients were given conventional treatment or placebo. Metformin did not significantly affect the primary outcomes: all-cause mortality, risk ratio (RR) = 0.99 (95% CI: 0.75 to 1.31), and cardiovascular mortality, RR = 1.05 (95% CI: 0.67 to 1.64). The secondary outcomes were also unaffected by metformin treatment: all myocardial infarctions, RR = 0.90 (95% CI: 0.74 to 1.09); all strokes, RR = 0.76 (95% CI: 0.51 to 1.14); heart failure, RR = 1.03 (95% CI: 0.67 to 1.59); peripheral vascular disease, RR = 0.90 (95% CI: 0.46 to 1.78); leg amputations, RR = 1.04 (95% CI: 0.44 to 2.44); and microvascular complications, RR = 0.83 (95% CI: 0.59 to 1.17). For all-cause mortality and cardiovascular mortality, there was significant heterogeneity when including the UK Prospective Diabetes Study subgroups (I² = 41% and 59%). There was significant interaction with sulphonylurea as a concomitant treatment for myocardial infarction (p = 0.10 and 0.02, respectively). CONCLUSIONS: Although metformin is considered the gold standard, its benefit/risk ratio remains uncertain. We cannot exclude a 25% reduction or a 31% increase in all-cause mortality, nor a 33% reduction or a 64% increase in cardiovascular mortality. Further studies are needed to clarify this situation.


Subject(s)
Diabetes Mellitus, Type 2/drug therapy , Diabetic Angiopathies/mortality , Metformin/therapeutic use , Outcome Assessment, Health Care , Diabetes Mellitus, Type 2/complications , Diabetes Mellitus, Type 2/mortality , Humans , Overweight/complications , Overweight/mortality , Sulfonylurea Compounds/adverse effects
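
The pooled risk ratios quoted above come from standard meta-analytic computations. The sketch below shows a fixed-effect inverse-variance pooling of log risk ratios with Cochran's Q and I²; the three trials are fabricated examples, not the thirteen trials included in this meta-analysis.

# Minimal sketch of inverse-variance pooling of risk ratios, the standard
# computation behind such meta-analytic estimates. Fabricated example trials.
import numpy as np

# Each row: events_treated, n_treated, events_control, n_control
trials = np.array([
    [30, 500, 35, 480],
    [12, 200, 10, 210],
    [55, 900, 60, 850],
], dtype=float)

e1, n1, e0, n0 = trials.T
log_rr = np.log((e1 / n1) / (e0 / n0))
se = np.sqrt(1 / e1 - 1 / n1 + 1 / e0 - 1 / n0)     # SE of log RR
w = 1 / se**2                                        # fixed-effect weights

pooled = np.sum(w * log_rr) / np.sum(w)
ci = pooled + np.array([-1.96, 1.96]) / np.sqrt(np.sum(w))
q = np.sum(w * (log_rr - pooled) ** 2)               # Cochran's Q
i2 = max(0.0, (q - (len(trials) - 1)) / q) * 100     # I² heterogeneity

print(f"pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(ci[0]):.2f} to {np.exp(ci[1]):.2f}), I² = {i2:.0f}%")
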
17.
PLoS One ; 6(3): e17508, 2011 Mar 03.
Article in English | MEDLINE | ID: mdl-21408615

ABSTRACT

BACKGROUND: Predicting the public health impact of a preventive strategy provides valuable support for decision-making. International guidelines for hypertension management have introduced the level of absolute cardiovascular risk into the definition of the treatment target population. The public health impact of implementing such a recommendation has not been measured. METHODOLOGY/PRINCIPAL FINDINGS: We assessed the efficiency of three treatment scenarios based on historical and current versions of practice guidelines in a Realistic Virtual Population representative of the French population aged 35 to 64 years: 1) BP ≥ 160/95 mm Hg; 2) BP ≥ 140/90 mm Hg; and 3) BP ≥ 140/90 mm Hg plus increased CVD risk. We compared eligibility following the ESC guidelines with the recently observed proportion of treated individuals among hypertensives reported by the Etude Nationale Nutrition Santé survey. Lowering the threshold defining hypertension multiplied the number of eligible individuals by 2.5. Applying the cardiovascular risk rule reduced this number significantly, to less than one quarter of hypertensive women under 55 years and less than one third of hypertensive men under 45 years of age; this was the most efficient strategy. Compared with the simulated application of the guidelines, men of all ages were undertreated (between 32 and 60%), as were women over 55 years (70%). By contrast, younger women were over-treated (over 200%). CONCLUSION: Using global CVD risk to decide on treatment is more efficient than using the blood pressure level alone. However, lack of screening rather than guideline application seems to explain the low prescription rates among hypertensive individuals in France. The multidimensional analyses required to obtain these results are possible only with databases at the individual level: realistic virtual populations should become the gold standard for assessing the impact of public health policies at the national level.


Subject(s)
Computer Simulation , Hypertension/drug therapy , Hypertension/epidemiology , Internationality , Adult , Antihypertensive Agents/administration & dosage , Antihypertensive Agents/pharmacology , Antihypertensive Agents/therapeutic use , Blood Pressure/drug effects , Drug Prescriptions , Female , France/epidemiology , Health Plan Implementation , Health Planning Guidelines , Humans , Hypertension/physiopathology , Male , Middle Aged , Prevalence , Risk Factors
18.
Per Med ; 8(5): 581-586, 2011 Sep.
Article in English | MEDLINE | ID: mdl-29793254

ABSTRACT

Although personalized medicine has been a subject of research and debate in recent years, it has been underused in medical practice, except in some cancers. We believe that the gap between the potential of personalized medicine and its use in daily medical practice is mainly explained by the lack of an appropriate tool to facilitate the use of biomarker values in a doctor's decision-making process. We propose that the effect model could form the basis of such a tool.

19.
Acta Biotheor ; 58(2-3): 171-90, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20665072

ABSTRACT

Ischemic stroke involves numerous and complex pathophysiological mechanisms, including blood flow reduction, ionic exchanges, spreading depressions, and cell death through necrosis or apoptosis. We used a mathematical model based on these phenomena to study the influence of the intensity and duration of ischemia on the final size of the infarcted area. The model relies on a set of ordinary and partial differential equations. After a sensitivity study, the model was used to carry out in silico experiments under various ischemic conditions. The simulation results show that the proportion of apoptotic cells increases when the intensity of ischemia decreases, which contributes to the model's validation. They also show that the influence of ischemia duration on infarct size is more complicated: they suggest that reperfusion is beneficial when performed early after stroke onset but may be inefficacious or even deleterious when performed later. This aggravation could be explained by depolarisation waves, which might continue to spread ischemic damage, and by a speeding up of the apoptotic process leading to cell death. The effect of reperfusion on cell death through these two phenomena needs to be further studied in order to develop new therapeutic strategies for stroke patients.


Subject(s)
Brain Ischemia/pathology , Brain Ischemia/physiopathology , Models, Neurological , Stroke/pathology , Stroke/physiopathology , Algorithms , Apoptosis , Brain Infarction/pathology , Brain Infarction/physiopathology , Cerebrovascular Circulation , Humans , Models, Cardiovascular , Necrosis , Regional Blood Flow , Time Factors
20.
Philos Trans A Math Phys Eng Sci ; 367(1908): 4699-716, 2009 Dec 13.
Article in English | MEDLINE | ID: mdl-19884176

ABSTRACT

The inflammatory process during stroke consists of the activation of resident brain microglia and the recruitment of leucocytes, namely neutrophils and monocytes/macrophages. During inflammation, microglial cells, neutrophils, and macrophages secrete inflammatory cytokines and chemokines and phagocytize dead cells. The recruitment of blood cells (neutrophils and macrophages) is mediated by leucocyte-endothelium interactions and, more specifically, by cell adhesion molecules. A mathematical model is proposed to represent the dynamics of various brain cells and of immune cells (neutrophils and macrophages). This model is based on a set of six ordinary differential equations and explores the beneficial and deleterious effects of inflammation, namely phagocytosis by immune cells and the release of pro-inflammatory mediators and nitric oxide (NO), respectively. The results of our simulations are qualitatively consistent with those observed in in vivo experiments and suggest that an increase in phagocytosis could contribute to an increase in the percentage of living cells. Inhibiting the production of cytokines and NO and blocking neutrophil and macrophage infiltration into the brain parenchyma also improved brain cell survival. This approach may help to explore the respective contributions of the beneficial and deleterious roles of the inflammatory process in stroke and to study various therapeutic strategies to reduce stroke damage.


Subject(s)
Inflammation/immunology , Microglia/immunology , Models, Immunological , Stroke/immunology , Brain/cytology , Brain/immunology , Computer Simulation , Cytokines/immunology , Encephalitis/immunology , Humans , Macrophages/immunology , Neutrophils/immunology
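
The kind of ODE system described above can be prototyped quickly with a standard solver. The sketch below is a deliberately reduced three-variable caricature (not the authors' six-equation model): debris-driven mediators damage live cells while recruited immune cells clear debris, so increasing the phagocytosis rate raises final cell survival.

# Toy reduced ODE sketch in the spirit of such models (a 3-variable caricature,
# not the authors' six-equation system): dead cells are phagocytosed by immune
# cells, which are recruited by mediators that themselves damage live cells.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k_damage=0.05, k_phag=0.4, k_recruit=0.6, k_clear=0.3):
    live, dead, immune = y
    cytokine = k_damage * dead                 # proxy: mediators scale with debris
    d_live = -cytokine * live                  # deleterious effect of inflammation
    d_dead = cytokine * live - k_phag * immune * dead   # beneficial phagocytosis
    d_immune = k_recruit * cytokine - k_clear * immune  # recruitment vs. clearance
    return [d_live, d_dead, d_immune]

sol = solve_ivp(rhs, (0, 72), y0=[0.9, 0.1, 0.0], dense_output=True)
print(f"fraction of living cells at 72 h: {sol.y[0, -1]:.2f}")
# Increasing k_phag (more phagocytosis) raises the final fraction of living
# cells, mirroring the qualitative behaviour reported in the abstract.
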