Results 1 - 17 of 17
1.
JAMA Netw Open; 6(7): e2324977, 2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37505498

ABSTRACT

Importance: The development of oncology drugs is expensive and beset by a high attrition rate. Analysis of the costs and causes of translational failure may help to reduce attrition and permit the more appropriate use of resources to reduce mortality from cancer. Objective: To analyze the causes of failure and expenses incurred in clinical trials of novel oncology drugs, with the example of insulin-like growth factor-1 receptor (IGF-1R) inhibitors, none of which was approved for use in oncology practice. Design, Setting, and Participants: In this cross-sectional study, inhibitors of the IGF-1R and their clinical trials for use in oncology practice between January 1, 2000, and July 31, 2021, were identified by searching PubMed and ClinicalTrials.gov. A proprietary commercial database was interrogated to provide expenses incurred in these trials. If data were not available, estimates were made of expenses using mean values from the proprietary database. A search revealed studies of the effects of IGF-1R inhibitors in preclinical in vivo assays, permitting calculation of the percentage of tumor growth inhibition. Archival data on the clinical trials of IGF-1R inhibitors and proprietary estimates of their expenses were examined, together with an analysis of preclinical data on IGF-1R inhibitors obtained from the published literature. Main Outcomes and Measures: Expenses associated with research and development of IGF-1R inhibitors. Results: Sixteen inhibitors of IGF-1R studied in 183 clinical trials were found. None of the trials, in a wide range of tumor types, showed efficacy permitting drug approval. More than 12 000 patients entered trials of IGF-1R inhibitors in oncology indications in 2003 to 2021. These trials incurred aggregate research and development expenses estimated at between $1.6 billion and $2.3 billion. Analysis of the results of preclinical in vivo assays of IGF-1R inhibitors that supported subsequent clinical investigations showed mixed activity and protocols that poorly reflected the treatment of advanced metastatic tumors in humans. Conclusions and Relevance: Failed drug development in oncology incurs substantial expense. At an industry level, an estimated $50 billion to $60 billion is spent annually on failed oncology trials. Improved target validation and more appropriate preclinical models are required to reduce attrition, with more attention to decision-making before launching clinical trials. A more appropriate use of resources may better reduce cancer mortality.
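The preclinical analysis rests on percentage tumor growth inhibition (%TGI) computed from in vivo assay data. A minimal sketch of one common form of that calculation (the formula variant and the tumor volumes below are illustrative assumptions, not values from the study):

```python
def tumor_growth_inhibition(control_start, control_end, treated_start, treated_end):
    """%TGI = 100 * (1 - treated growth / control growth)."""
    control_growth = control_end - control_start
    treated_growth = treated_end - treated_start
    return 100.0 * (1.0 - treated_growth / control_growth)

# Hypothetical xenograft volumes in mm^3
tgi = tumor_growth_inhibition(control_start=100, control_end=900,
                              treated_start=100, treated_end=300)
print(f"{tgi:.0f}% TGI")  # → 75% TGI
```

A %TGI near 100 indicates near-complete suppression of growth relative to control; the study's point is that assays showing only mixed %TGI still went on to support clinical trials.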


Subject(s)
Insulin-Like Growth Factor I; Neoplasms; Humans; Cross-Sectional Studies; Insulin-Like Growth Factor I/antagonists & inhibitors; Neoplasms/drug therapy
3.
Europace; 25(5), 2023 May 19.
Article in English | MEDLINE | ID: mdl-36942430

ABSTRACT

While sudden cardiac death (SCD) in hypertrophic cardiomyopathy (HCM) is due to arrhythmias, the guidelines for prediction of SCD are based solely on non-electrophysiological methods. This study aims to stimulate thinking about whether the interests of patients with HCM are better served by using current, 'risk factor', methods of prediction or by further development of electrophysiological methods to determine arrhythmic risk. Five published predictive studies of SCD in HCM, which contain sufficient data to permit analysis, were analysed to compute receiver operating characteristics together with their confidence bounds to compare their formal prediction either by bootstrapping or Monte Carlo analysis. Four are based on clinical risk factors, one with additional MRI analysis, and were regarded as exemplars of the risk factor approach. The other used an electrophysiological method and directly compared this method to risk factors in the same patients. Prediction methods that use conventional clinical risk factors and MRI have low predictive capacities that will only detect 50-60% of patients at risk with a 15-30% false positive rate [area under the curve (AUC) = ∼0.7], while the electrophysiological method detects 90% of events with a 20% false positive rate (AUC = ∼0.89). Given improved understanding of complex arrhythmogenesis, arrhythmic SCD is likely to be more accurately predictable using electrophysiologically based approaches as opposed to current guidelines and should drive further development of electrophysiologically based methods.
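The AUC values quoted here (≈0.7 for risk factors vs ≈0.89 for the electrophysiological method) summarise how well a score separates patients with and without events: the empirical AUC is the probability that a randomly chosen event case outscores a randomly chosen non-event case. A minimal sketch with invented risk scores:

```python
def empirical_auc(event_scores, nonevent_scores):
    """Probability a random event case outscores a random non-event case (ties count half)."""
    wins = 0.0
    for e in event_scores:
        for n in nonevent_scores:
            if e > n:
                wins += 1.0
            elif e == n:
                wins += 0.5
    return wins / (len(event_scores) * len(nonevent_scores))

# Hypothetical risk scores: higher should mean higher risk of sudden cardiac death
events = [0.9, 0.8, 0.7, 0.4]
nonevents = [0.6, 0.5, 0.3, 0.2, 0.1]
print(empirical_auc(events, nonevents))  # → 0.9
```

An AUC of 0.5 means the score is no better than chance; 1.0 means perfect separation of event and non-event patients.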


Subject(s)
Arrhythmias, Cardiac; Cardiomyopathy, Hypertrophic; Humans; Arrhythmias, Cardiac/diagnosis; Arrhythmias, Cardiac/complications; Risk Factors; Cardiomyopathy, Hypertrophic/complications; Cardiomyopathy, Hypertrophic/diagnosis; Death, Sudden, Cardiac/etiology; Death, Sudden, Cardiac/prevention & control; ROC Curve
6.
Commun Med (Lond); 2(1): 154, 2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36473994

ABSTRACT

BACKGROUND: Conventional preclinical models often miss drug toxicities, meaning the harm these drugs pose to humans is only realized in clinical trials or when they make it to market. This has caused the pharmaceutical industry to waste considerable time and resources developing drugs destined to fail. Organ-on-a-Chip technology has the potential to improve success in drug development pipelines, as it can recapitulate organ-level pathophysiology and clinical responses; however, systematic and quantitative evaluations of Organ-Chips' predictive value have not yet been reported. METHODS: 870 Liver-Chips were analyzed to determine their ability to predict drug-induced liver injury caused by small molecules identified as benchmarks by the Innovation and Quality consortium, which has published guidelines defining criteria for qualifying preclinical models. An economic analysis was also performed to measure the value Liver-Chips could offer if they were broadly adopted in supporting toxicity-related decisions as part of preclinical development workflows. RESULTS: Here, we show that the Liver-Chip met the qualification guidelines across a blinded set of 27 known hepatotoxic and non-toxic drugs with a sensitivity of 87% and a specificity of 100%. We also show that this level of performance could generate over $3 billion annually for the pharmaceutical industry through increased small-molecule R&D productivity. CONCLUSIONS: The results of this study show how incorporating predictive Organ-Chips into drug development workflows could substantially improve drug discovery and development, allowing manufacturers to bring safer, more effective medicines to market in less time and at lower costs.
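The reported 87% sensitivity and 100% specificity reduce to simple ratios over confusion counts. A minimal sketch (the split of the 27 benchmark drugs into toxic and non-toxic below is an assumption for illustration, not the study's actual breakdown):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split of the 27 benchmark drugs: 15 hepatotoxic (13 flagged),
# 12 non-toxic (none flagged)
sens, spec = sensitivity_specificity(tp=13, fn=2, tn=12, fp=0)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # → sensitivity 87%, specificity 100%
```

Specificity of 100% means no false alarms on the non-toxic drugs, which matters commercially because false positives kill viable candidates.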


Drug development is lengthy and costly, as it relies on laboratory models that fail to predict human reactions to potential drugs. Because of this, toxic drugs sometimes go on to harm humans when they reach clinical trials or once they are in the marketplace. Organ-on-a-Chip technology involves growing cells on small devices to mimic organs of the body, such as the liver. Organ-Chips could potentially help identify toxicities earlier, but there is limited research into how well they predict these effects compared to conventional models. In this study, we analyzed 870 Liver-Chips to determine how well they predict drug-induced liver injury, a common cause of drug failure, and found that Liver-Chips outperformed conventional models. These results suggest that widespread acceptance of Organ-Chips could decrease drug attrition, help minimize harm to patients, and generate billions in revenue for the pharmaceutical industry.

7.
Nat Rev Drug Discov; 21(12): 915-931, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36195754

ABSTRACT

Successful drug discovery is like finding oases of safety and efficacy in chemical and biological deserts. Screens in disease models, and other decision tools used in drug research and development (R&D), point towards oases when they score therapeutic candidates in a way that correlates with clinical utility in humans. Otherwise, they probably lead in the wrong direction. This line of thought can be quantified by using decision theory, in which 'predictive validity' is the correlation coefficient between the output of a decision tool and clinical utility across therapeutic candidates. Analyses based on this approach reveal that the detectability of good candidates is extremely sensitive to predictive validity, because the deserts are big and oases small. Both history and decision theory suggest that predictive validity is under-managed in drug R&D, not least because it is so hard to measure before projects succeed or fail later in the process. This article explains the influence of predictive validity on R&D productivity and discusses methods to evaluate and improve it, with the aim of supporting the application of more effective decision tools and catalysing investment in their creation.
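The core idea can be sketched as a toy simulation: model each candidate's true clinical utility and its decision-tool score as correlated Gaussians (the correlation coefficient playing the role of predictive validity), advance the top-scoring candidates, and count how many are genuinely good. All parameters here are illustrative assumptions, not the article's:

```python
import random

def hit_rate(rho, n_candidates=100_000, top_frac=0.001, good_frac=0.01, seed=0):
    """Fraction of tool-selected candidates whose true utility is in the top good_frac."""
    rng = random.Random(seed)
    cands = []
    for _ in range(n_candidates):
        u = rng.gauss(0, 1)                                    # true clinical utility
        s = rho * u + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)  # tool score; corr(u, s) = rho
        cands.append((s, u))
    utilities = sorted(u for _, u in cands)
    good_cut = utilities[int((1 - good_frac) * n_candidates)]  # "good" = top 1% utility
    n_selected = int(top_frac * n_candidates)                  # tool advances its top 0.1%
    selected = sorted(cands, reverse=True)[:n_selected]
    return sum(u >= good_cut for _, u in selected) / n_selected

# Detection of rare good candidates is very sensitive to predictive validity
for rho in (0.3, 0.6, 0.9):
    print(rho, hit_rate(rho))
```

With positives this rare, the fraction of genuinely good candidates among those selected rises steeply with the correlation coefficient, which is the sense in which detectability is "extremely sensitive to predictive validity".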


Subject(s)
Drug Discovery; Efficiency; Humans; Drug Discovery/methods
8.
Drug Discov Today; 27(11): 103333, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36007753

ABSTRACT

Research and development (R&D) outsourcing offers some obvious productivity benefits (e.g., access to new technology, variabilised costs, risk sharing, etc.). However, recent work in economics points to a productivity headwind at the level of the innovation ecosystem. The market for technologies with economies of scope and knowledge spillovers (those with the biggest impact on industry economics and social welfare) has structural features that allow customers to capture a disproportionate share of economic value and transfer a disproportionate share of economic risk to technology providers, even though the providers aim to maximise profit. This reduces the incentives to invest in new ventures that specialise in the most promising early-stage projects. Therefore, near-term gains from R&D outsourcing can be offset by slower innovation in the long run.

9.
Pediatrics; 147(2), 2021 Feb.
Article in English | MEDLINE | ID: mdl-33468598

ABSTRACT

BACKGROUND AND OBJECTIVES: Professional interpretation for patients with limited English proficiency remains underused. Understanding predictors of use is crucial for intervention. We sought to identify factors associated with professional interpreter use during pediatric emergency department (ED) visits. METHODS: We video recorded ED visits for a subset of participants (n = 50; 20% of the total sample) in a randomized trial of telephone versus video interpretation for Spanish-speaking limited English proficiency families. Medical communication events were coded for duration, health professional type, interpreter (none, ad hoc, or professional), and content. With communication event as the unit of analysis, associations between professional interpreter use and assigned interpreter modality, health professional type, and communication content were assessed with multivariate random-effects logistic regression, clustered on the patient. RESULTS: We analyzed 312 communication events from 50 ED visits (28 telephone arm, 22 video arm). Professional interpretation was used for 36% of communications overall, most often for detailed histories (89%) and least often for procedures (11%) and medication administrations (8%). Speaker type, communication content, and duration were all significantly associated with professional interpreter use. Assignment to video interpretation was associated with significantly increased use of professional interpretation for communication with providers (adjusted odds ratio 2.7; 95% confidence interval: 1.1-7.0). CONCLUSIONS: Professional interpreter use was inconsistent over the course of an ED visit, even for patients enrolled in an interpretation study. Assignment to video rather than telephone interpretation led to greater use of professional interpretation among physicians and nurse practitioners but not nurses and other staff.


Subject(s)
Allied Health Personnel/trends; Emergency Service, Hospital/trends; Hospitals, Pediatric/trends; Limited English Proficiency; Translating; Video Recording/trends; Child; Communication Barriers; Female; Forecasting; Humans; Interviews as Topic/methods; Male; Nurse Practitioners/trends; Physicians/trends; Video Recording/methods
11.
Acad Pediatr; 18(8): 935-943, 2018.
Article in English | MEDLINE | ID: mdl-30048713

ABSTRACT

OBJECTIVE: Families with limited English proficiency (LEP) experience communication barriers and are at risk for adverse events after discharge from the pediatric emergency department (ED). We sought to describe the characteristics of ED discharge communication for LEP families and to assess whether the use of a professional interpreter was associated with provider communication quality during ED discharge. METHODS: Transcripts of video-recorded ED visits for Spanish-speaking LEP families were obtained from a larger study comparing professional interpretation modalities in a freestanding children's hospital. Caregiver-provider communication interactions that included discharge education were analyzed for content and for the techniques that providers used to assess caregiver comprehension. Regression analysis was used to assess for an association between professional interpreter use and discharge education content or assessment of caregiver comprehension. RESULTS: We analyzed 101 discharge communication interactions from 47 LEP patient visits; 31% of communications did not use professional interpretation. Although most patients (70%) received complete discharge education content, only 65% received instructions on medication dosing, and only 55% were given return precautions. Thirteen percent of the patient visits included an open-ended question to assess caregiver comprehension, and none included teach-back. Professional interpreter use was associated with greater odds of complete discharge education content (odds ratio [OR], 7.1; 95% confidence interval [CI], 1.4-37.0) and high-quality provider assessment of caregiver comprehension (OR, 6.1; 95% CI, 2.3-15.9). CONCLUSIONS: Professional interpreter use is associated with superior provider discharge communication behaviors. This study identifies clear areas for improving discharge communication, which may improve safety and outcomes for LEP children discharged from the ED.


Subject(s)
Communication Barriers; Communication; Emergency Service, Hospital; Parents/education; Patient Discharge; Translating; Child; Child, Preschool; Female; Humans; Infant; Male; Patient Education as Topic
12.
BMJ Open; 7(5): e013497, 2017 Jun 06.
Article in English | MEDLINE | ID: mdl-28588106

ABSTRACT

OBJECTIVES: To assess the evidence for price-based alcohol policy interventions to determine whether minimum unit pricing (MUP) is likely to be effective. DESIGN: Systematic review and assessment of studies according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, against the Bradford Hill criteria for causality. Three electronic databases were searched from inception to February 2017. Additional articles were found through hand searching and grey literature searches. CRITERIA FOR SELECTING STUDIES: We included any study design that reported on the effect of price-based interventions on alcohol consumption or alcohol-related morbidity, mortality and wider harms. Studies reporting on the effects of taxation or affordability and studies that only investigated price elasticity of demand were beyond the scope of this review. Studies with any conflict of interest were excluded. All studies were appraised for methodological quality. RESULTS: Of 517 studies assessed, 33 studies were included: 26 peer-reviewed research studies and seven from the grey literature. All nine of the Bradford Hill criteria were met, although different types of study satisfied different criteria. For example, modelling studies complied with the consistency and specificity criteria, time series analyses demonstrated the temporality and experiment criteria, and the analogy criterion was fulfilled by comparing the findings with the wider literature on taxation and affordability. CONCLUSIONS: Overall, the Bradford Hill criteria for causality were satisfied. There was very little evidence that minimum alcohol prices are not associated with consumption or subsequent harms. However, the overall quality of the evidence was variable, a large proportion of the evidence base has been produced by a small number of research teams, and the quantitative uncertainty in many estimates or forecasts is often poorly communicated outside the academic literature. Nonetheless, price-based alcohol policy interventions such as MUP are likely to reduce alcohol consumption, alcohol-related morbidity and mortality.


Subject(s)
Alcohol Drinking/economics; Alcohol-Related Disorders/mortality; Alcoholic Beverages/economics; Costs and Cost Analysis/standards; Models, Theoretical; Public Policy/economics; Alcohol Drinking/epidemiology; Causality; Humans; Randomized Controlled Trials as Topic; Taxes
13.
PLoS One; 11(2): e0147215, 2016.
Article in English | MEDLINE | ID: mdl-26863229

ABSTRACT

A striking contrast runs through the last 60 years of biopharmaceutical discovery, research, and development. Huge scientific and technological gains should have increased the quality of academic science and raised industrial R&D efficiency. However, academia faces a "reproducibility crisis"; inflation-adjusted industrial R&D costs per novel drug increased nearly 100 fold between 1950 and 2010; and drugs are more likely to fail in clinical development today than in the 1970s. The contrast is explicable only if powerful headwinds reversed the gains and/or if many "gains" have proved illusory. However, discussions of reproducibility and R&D productivity rarely address this point explicitly. The main objectives of the primary research in this paper are: (a) to provide quantitatively and historically plausible explanations of the contrast; and (b) identify factors to which R&D efficiency is sensitive. We present a quantitative decision-theoretic model of the R&D process. The model represents therapeutic candidates (e.g., putative drug targets, molecules in a screening library, etc.) within a "measurement space", with candidates' positions determined by their performance on a variety of assays (e.g., binding affinity, toxicity, in vivo efficacy, etc.) whose results correlate to a greater or lesser degree. We apply decision rules to segment the space, and assess the probability of correct R&D decisions. We find that when searching for rare positives (e.g., candidates that will successfully complete clinical development), changes in the predictive validity of screening and disease models that many people working in drug discovery would regard as small and/or unknowable (i.e., a 0.1 absolute change in correlation coefficient between model output and clinical outcomes in man) can offset large (e.g., 10 fold, even 100 fold) changes in models' brute-force efficiency. We also show how validity and reproducibility correlate across a population of simulated screening and disease models. We hypothesize that screening and disease models with high predictive validity are more likely to yield good answers and good treatments, so tend to render themselves and their diseases academically and commercially redundant. Perhaps there has also been too much enthusiasm for reductionist molecular models which have insufficient predictive validity. Thus we hypothesize that the average predictive validity of the stock of academically and industrially "interesting" screening and disease models has declined over time, with even small falls able to offset large gains in scientific knowledge and brute-force efficiency. The rate of creation of valid screening and disease models may be the major constraint on R&D productivity.
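The decision-rule framing can be illustrated with a small Monte Carlo: a candidate is "good" if its true utility clears one threshold, and it "passes the screen" if its correlated model output clears another; the true- and false-positive rates of that rule then follow directly. Parameters below are illustrative assumptions, not the paper's:

```python
import random

def screen_rates(rho, good_cut=2.0, pass_cut=2.0, n=100_000, seed=1):
    """True/false positive rates of a threshold decision rule when the model
    output correlates with true utility at coefficient rho."""
    rng = random.Random(seed)
    tp = fp = good = bad = 0
    for _ in range(n):
        u = rng.gauss(0, 1)                                    # true utility
        s = rho * u + (1 - rho ** 2) ** 0.5 * rng.gauss(0, 1)  # model output
        if u > good_cut:
            good += 1
            tp += s > pass_cut
        else:
            bad += 1
            fp += s > pass_cut
    return tp / good, fp / bad

# TPR climbs steeply with predictive validity at a fixed screening threshold
for rho in (0.3, 0.6, 0.9):
    tpr, fpr = screen_rates(rho)
    print(f"rho={rho}: TPR={tpr:.2f}, FPR={fpr:.3f}")
```

With good candidates rare (here, the top ~2% of utility), a modest drop in correlation sharply cuts the chance that a screen passes the candidates that matter.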


Subject(s)
Biopharmaceutics/trends; Decision Theory; Drug Discovery; Biopharmaceutics/methods; Cost-Benefit Analysis; Drug Discovery/economics; Efficiency; False Positive Reactions; High-Throughput Screening Assays; Humans; Models, Theoretical; Quality Control; Reproducibility of Results; Research
14.
Ther Innov Regul Sci; 49(3): 415-424, 2015 May.
Article in English | MEDLINE | ID: mdl-30222401

ABSTRACT

In recent years, concern has been growing that traditional research and development models in the life sciences are unsustainable. Productivity, especially in pharmaceuticals, has plummeted, and too many of the products emerging from increasingly lengthy and costly clinical development offer marginal benefit to patients. Although the phenomenon is global, there are specific and important features of European life sciences that impede the translation of an ever more penetrating understanding of biology into effective treatments. This article analyzes these issues in the context of European biopharmaceutical innovation, describes the actions that Europe is already taking, and suggests what more needs to be done.

15.
Nat Rev Drug Discov; 11(3): 191-200, 2012 Mar 01.
Article in English | MEDLINE | ID: mdl-22378269

ABSTRACT

The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms. There have been many proposed solutions to the problem of declining R&D efficiency. However, their apparent lack of impact so far and the contrast between improving inputs and declining output in terms of the number of new drugs make it sensible to ask whether the underlying problems have been correctly diagnosed. Here, we discuss four factors that we consider to be primary causes, which we call the 'better than the Beatles' problem; the 'cautious regulator' problem; the 'throw money at it' tendency; and the 'basic research-brute force' bias. Our aim is to provoke a more systematic analysis of the causes of the decline in R&D efficiency.
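The headline decline is simple compound halving, which is easy to check:

```python
def fold_decline(years, halving_period_years):
    """Fold-change after repeated halving: 2 ** (years / halving period)."""
    return 2 ** (years / halving_period_years)

print(round(fold_decline(60, 9)))    # 1950-2010 at halving every 9 years -> ~102-fold
print(round(fold_decline(60, 9.5)))  # the quoted ~80-fold implies a slightly longer period
```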


Subject(s)
Drug Industry/standards; Efficiency, Organizational/standards; Pharmaceutical Preparations; Research/standards; Animals; Drug Delivery Systems/standards; Drug Delivery Systems/trends; Drug Industry/trends; Efficiency, Organizational/trends; Humans; Pharmaceutical Preparations/administration & dosage; Research/trends
16.
Neuroreport; 14(7): 1045-50, 2003 May 23.
Article in English | MEDLINE | ID: mdl-12802200

ABSTRACT

To test the hypothesis that correlated neuronal activity serves as the neuronal code for visual feature binding, we applied information theory techniques to multiunit activity recorded from pairs of V1 recording sites in anaesthetised cats while presenting either single or separate bar stimuli. We quantified the roles of firing rates of individual channels and of cross-correlations between recording sites in encoding of visual information. Between 89 and 96% of the information was carried by firing rates; correlations contributed 4-11% extra information. The distribution across the population of either correlation strength or correlation information did not co-vary systematically with changes in perception predicted by Gestalt psychology. These results suggest that firing rates, rather than correlations, are the main element of the population code for feature binding in primary visual cortex.
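The percentages above come from an information-theoretic decomposition; the basic quantity is the mutual information between stimulus and response. For discretised data it can be computed straight from a joint count table; a minimal sketch with an invented table (not the study's data):

```python
import math

def mutual_information(joint_counts):
    """I(S;R) in bits from a table of joint stimulus-response counts."""
    total = sum(sum(row) for row in joint_counts)
    p_s = [sum(row) / total for row in joint_counts]
    p_r = [sum(col) / total for col in zip(*joint_counts)]
    mi = 0.0
    for i, row in enumerate(joint_counts):
        for j, count in enumerate(row):
            if count:
                p_sr = count / total
                mi += p_sr * math.log2(p_sr / (p_s[i] * p_r[j]))
    return mi

# Toy example: two stimuli (rows) x two response bins (columns)
print(round(mutual_information([[40, 10], [10, 40]]), 3))  # → 0.278
```

Estimating the extra information carried by cross-correlations requires comparing such estimates with and without the correlation structure, which is the harder part of the study's analysis.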


Subject(s)
Brain Mapping/methods; Photic Stimulation/methods; Visual Cortex/physiology; Action Potentials/physiology; Animals; Cats
17.
Proc Natl Acad Sci U S A; 99(16): 10494-9, 2002 Aug 06.
Article in English | MEDLINE | ID: mdl-12097644

ABSTRACT

The absolute diversity of prokaryotes is widely held to be unknown and unknowable at any scale in any environment. However, it is not necessary to count every species in a community to estimate the number of different taxa therein. It is sufficient to estimate the area under the species abundance curve for that environment. Log-normal species abundance curves are thought to characterize communities, such as bacteria, which exhibit highly dynamic and random growth. Thus, we are able to show that the diversity of prokaryotic communities may be related to the ratio of two measurable variables: the total number of individuals in the community and the abundance of the most abundant members of that community. We assume that either the least abundant species has an abundance of 1 or Preston's canonical hypothesis is valid. Consequently, we can estimate the bacterial diversity on a small scale (oceans 160 per ml; soil 6,400-38,000 per g; sewage works 70 per ml). We are also able to speculate about diversity at a larger scale; thus, the entire bacterial diversity of the sea may be unlikely to exceed 2 × 10^6, while a ton of soil could contain 4 × 10^6 different taxa. These are preliminary estimates that may change as we gain a greater understanding of the nature of prokaryotic species abundance curves. Nevertheless, it is evident that local and global prokaryotic diversity can be understood through species abundance curves and purely experimental approaches to solving this conundrum will be fruitless.
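The method's key move, estimating the area under a log-normal species abundance curve rather than counting species, can be sketched in the forward direction: assume a modal height and spread (in log2-abundance octaves), then sum species and individuals across octaves. All parameters below are illustrative, not the paper's fitted values:

```python
import math

def lognormal_sac(s0, sigma_octaves, n_octaves):
    """Sum a Gaussian (in log2 abundance) species abundance curve across octaves.
    Returns (total species, total individuals)."""
    mode = n_octaves / 2  # curve symmetric between abundance 1 and 2**n_octaves
    total_species = 0.0
    total_individuals = 0.0
    for octave in range(n_octaves + 1):
        species_here = s0 * math.exp(-((octave - mode) ** 2) / (2 * sigma_octaves ** 2))
        total_species += species_here
        total_individuals += species_here * 2 ** octave  # ~2**octave individuals per species
    return total_species, total_individuals

# Illustrative community: 1000 species in the modal octave, spread of 4 octaves,
# most abundant taxon at ~2**20 individuals
s_t, n_t = lognormal_sac(s0=1000, sigma_octaves=4, n_octaves=20)
print(f"taxa ~{s_t:.0f}, individuals ~{n_t:.2e}")
```

The paper inverts this relationship: observing the total number of individuals and the abundance of the most abundant taxon constrains the curve, and hence the total diversity.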


Subject(s)
Bacteria/classification; Genetic Variation; RNA, Bacterial/classification; RNA, Ribosomal, 16S/classification; Soil Microbiology; Water Microbiology; Bacteria/genetics; Mathematical Computing; Oceans and Seas; Prokaryotic Cells