4.
J Hosp Infect ; 106(4): 765-773, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32916212

ABSTRACT

BACKGROUND: Healthcare-acquired infections (HAIs) cause substantial morbidity and mortality. Copper appears to have strong antimicrobial properties under laboratory conditions. AIM: To examine the potential effect of copper treatment of commonly touched surfaces in healthcare facilities. METHODS: Controlled trials comparing copper-treated surfaces (furniture or bed linens) in hospital rooms with standard rooms were included in this systematic review. Two reviewers independently screened retrieved articles, extracted data, and assessed the risk of bias of included studies. The primary outcome was the occurrence of HAIs. FINDINGS: In total, 638 records were screened, and seven studies comprising 12,362 patients were included. All included studies were judged to be at high risk of bias in two or more of the seven domains. All seven studies reported the effect of various copper-treated surfaces on HAIs. Overall, this review found low-quality evidence of potential clinical importance that copper-treated hard surfaces and/or bed linens and clothes reduced HAIs by 27% (risk ratio 0.73, 95% confidence interval 0.57-0.94; I² = 44%, P = 0.01). CONCLUSION: Given the clinical and economic costs of HAIs, the potentially protective effect of copper treatment appears to be important. The current evidence is insufficient to make a strong positive recommendation. However, it would appear worthwhile and urgent to conduct larger publicly funded clinical trials into the impact of copper treatment.
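The headline 27% figure follows directly from the pooled risk ratio; a minimal sketch of the conversion, using only the estimates quoted above (the function name and output wording are ours):

```python
def rr_to_reduction(rr: float, ci_low: float, ci_high: float) -> str:
    """Express a pooled risk ratio as a relative risk reduction.

    A risk ratio of 0.73 means the intervention arm's event rate is 73%
    of the control arm's, i.e. a 27% relative reduction; CI bounds
    convert the same way (upper RR bound -> lower reduction bound).
    """
    pct = (1 - rr) * 100
    return (f"{pct:.0f}% reduction "
            f"(95% CI {(1 - ci_high) * 100:.0f}% to {(1 - ci_low) * 100:.0f}%)")

print(rr_to_reduction(0.73, 0.57, 0.94))  # 27% reduction (95% CI 6% to 43%)
```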


Subject(s)
Copper/pharmacology, Cross Infection/prevention & control, Bedding and Linens, Delivery of Health Care, Health Facilities, Humans
5.
J Hum Nutr Diet ; 30(5): 655-664, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28150402

ABSTRACT

Despite the significance placed on lifestyle interventions for obesity management, most weight loss is followed by weight regain. Psychological concepts of habitual behaviour and automaticity have been suggested as plausible explanations for this overwhelming lack of long-term weight loss success. Interventions that focus on changing an individual's behaviour are not usually successful at changing an individual's habits because they do not incorporate the strategies required to break unhealthy habits and/or form new healthy habits. A narrative review was conducted describing the theory behind habit formation in relation to weight regain and evaluating the effectiveness of using habits as tools to maintain weight loss. Three specific habit-based weight loss programmes are described: '10 Top Tips', 'Do Something Different' and 'Transforming Your Life'. Participants in these interventions achieved significant weight loss compared with a control group or other conventional interventions. Habit-based interventions show promising results in sustaining behaviour change. Weight loss maintenance may benefit from incorporating habit-focused strategies and should be investigated further.


Subject(s)
Body Weight Maintenance, Habits, Health Behavior, Weight Loss, Diet/psychology, Eating/psychology, Exercise, Humans, Intention, Life Style, Obesity/psychology, Obesity/therapy, Overweight/psychology, Overweight/therapy, Randomized Controlled Trials as Topic
6.
Gesundheitswesen ; 78(3): 175-88, 2016 Mar.
Article in German | MEDLINE | ID: mdl-26824401

ABSTRACT

Without a complete published description of interventions, clinicians and patients cannot reliably implement interventions that are shown to be useful, and other researchers cannot replicate or build on research findings. The quality of description of interventions in publications, however, is remarkably poor. To improve the completeness of reporting, and ultimately the replicability, of interventions, an international group of experts and stakeholders developed the Template for Intervention Description and Replication (TIDieR) checklist and guide. The process involved a literature review for relevant checklists and research, a Delphi survey of an international panel of experts to guide item selection, and a face-to-face panel meeting. The resultant 12-item TIDieR checklist (brief name, why, what (materials), what (procedure), who provided, how, where, when and how much, tailoring, modifications, how well (planned), how well (actually carried out)) is an extension of the CONSORT 2010 statement (item 5) and the SPIRIT 2013 statement (item 11). While the emphasis of the checklist is on trials, the guidance is intended to apply across all evaluative study designs. This paper presents the TIDieR checklist and guide, with a detailed explanation of each item, and examples of good reporting. The TIDieR checklist and guide should improve the reporting of interventions and make it easier for authors to structure the accounts of their interventions, reviewers and editors to assess the descriptions, and readers to use the information.


Subject(s)
Checklist/standards, Disease Management, Documentation/standards, Guideline Adherence/standards, Outcome Assessment, Health Care/standards, Records/standards, Algorithms, Evidence-Based Medicine, Forms and Records Control/standards, Germany, Practice Guidelines as Topic
8.
J Hum Hypertens ; 28(2): 123-7, 2014 Feb.
Article in English | MEDLINE | ID: mdl-23823583

ABSTRACT

Although self-monitoring of blood pressure is common among people with hypertension, little is known about how general practitioners (GPs) use such readings. This survey aimed to ascertain the current views and practice of UK primary care physicians regarding self-monitoring. An internet-based survey of UK GPs was undertaken using a provider of internet services to UK doctors. The hyperlink to the survey was opened by 928 doctors, and 625 (67%) GPs completed the questionnaire. Of these, 557 (90%) reported having patients who self-monitor, 191 (34%) had a monitor that they lend to patients, 171 (31%) provided training in self-monitoring for their patients and 52 (9%) offered training to other GPs. Three hundred and sixty-seven GPs (66%) recommended at least two readings per day, and 416 (75%) recommended at least 4 days of monitoring at a time. One hundred and eighty (32%) adjusted self-monitored readings to take account of lower pressures in out-of-office settings, and 10/5 mm Hg was the most common adjustment factor used. Self-monitoring of blood pressure was widespread among the patients of responding GPs. Although the majority used appropriate schedules of measurement, some GPs suggested much more frequent home measurement than is usually recommended. Furthermore, interpretation of home blood pressure was suboptimal, with only a minority recognising that values for diagnosis and on-treatment targets are lower than those for clinic measurement. Subsequent national guidance may improve this situation but will require adequate implementation.
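The 10/5 mm Hg adjustment works by simple subtraction from a clinic threshold; a minimal sketch (the function and the 140/90 example threshold are ours for illustration, and guideline adjustment values may differ):

```python
def home_equivalent(clinic_sbp: int, clinic_dbp: int,
                    adj_sbp: int = 10, adj_dbp: int = 5) -> tuple:
    """Convert a clinic BP threshold to a home-monitoring equivalent.

    Defaults use the 10/5 mm Hg adjustment most commonly applied by
    the surveyed GPs.
    """
    return clinic_sbp - adj_sbp, clinic_dbp - adj_dbp

# A 140/90 mm Hg clinic threshold would correspond to 130/85 mm Hg at home
print(home_equivalent(140, 90))  # (130, 85)
```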


Subject(s)
Blood Pressure Determination/methods, Blood Pressure, Hypertension/diagnosis, Practice Patterns, Physicians', Primary Health Care, Self Care, Attitude of Health Personnel, Blood Pressure Determination/standards, Female, Health Care Surveys, Health Knowledge, Attitudes, Practice, Humans, Hypertension/physiopathology, Internet, Male, Patient Education as Topic, Practice Patterns, Physicians'/standards, Predictive Value of Tests, Reproducibility of Results, Self Care/standards, Surveys and Questionnaires, United Kingdom
11.
Health Technol Assess ; 16(29): 1-271, iii-iv, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22687263

ABSTRACT

OBJECTIVES: To determine effective and efficient monitoring criteria for ocular hypertension [raised intraocular pressure (IOP)] through (i) identification and validation of glaucoma risk prediction models; and (ii) development of models to determine optimal surveillance pathways. DESIGN: A discrete event simulation economic modelling evaluation. Data from systematic reviews of risk prediction models and agreement between tonometers, secondary analyses of existing datasets (to validate identified risk models and determine optimal monitoring criteria) and public preferences were used to structure and populate the economic model. SETTING: Primary and secondary care. PARTICIPANTS: Adults with ocular hypertension (IOP > 21 mmHg) and the public (surveillance preferences). INTERVENTIONS: We compared five pathways: two based on National Institute for Health and Clinical Excellence (NICE) guidelines with monitoring interval and treatment depending on initial risk stratification, 'NICE intensive' (4-monthly to annual monitoring) and 'NICE conservative' (6-monthly to biennial monitoring); two pathways, differing in location (hospital and community), with monitoring biennially and treatment initiated for a ≥ 6% 5-year glaucoma risk; and a 'treat all' pathway involving treatment with a prostaglandin analogue if IOP > 21 mmHg and IOP measured annually in the community. MAIN OUTCOME MEASURES: Glaucoma cases detected; tonometer agreement; public preferences; costs; willingness to pay and quality-adjusted life-years (QALYs). RESULTS: The best available glaucoma risk prediction model estimated the 5-year risk based on age and ocular predictors (IOP, central corneal thickness, optic nerve damage and index of visual field status). Taking the average of two IOP readings by tonometry, true change was detectable at two years. Sizeable measurement variability was noted between tonometers. There was a general public preference for monitoring; good communication and understanding of the process predicted service value. 'Treat all' was the least costly and 'NICE intensive' the most costly pathway. Biennial monitoring reduced the number of cases of glaucoma conversion compared with a 'treat all' pathway and provided more QALYs, but the incremental cost-effectiveness ratio (ICER) was considerably more than £30,000. The 'NICE intensive' pathway also avoided glaucoma conversion, but NICE-based pathways were either dominated (more costly and less effective) by biennial hospital monitoring or had ICERs > £30,000. Results were not sensitive to the risk threshold for initiating surveillance but were sensitive to the risk threshold for initiating treatment, NHS costs and treatment adherence. LIMITATIONS: Optimal monitoring intervals were based on IOP data. There were insufficient data to determine the optimal frequency of measurement of the visual field or optic nerve head for identification of glaucoma. The economic modelling took a 20-year time horizon, which may be insufficient to capture long-term benefits. Sensitivity analyses may not fully capture the uncertainty surrounding parameter estimates. CONCLUSIONS: For confirmed ocular hypertension, findings suggest that there is no clear benefit from intensive monitoring. Consideration of the patient experience is important. A cohort study is recommended to provide data to refine the glaucoma risk prediction model, determine the optimum type and frequency of serial glaucoma tests and estimate costs and patient preferences for monitoring and treatment.
FUNDING: The National Institute for Health Research Health Technology Assessment Programme.
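The pathway comparisons rest on the standard incremental cost-effectiveness calculation; a sketch against the £30,000/QALY willingness-to-pay threshold cited above, with made-up per-patient totals rather than the study's estimates:

```python
def icer(cost_a: float, qaly_a: float, cost_b: float, qaly_b: float):
    """Incremental cost-effectiveness ratio of pathway A versus B (GBP/QALY).

    Returns None when A yields no QALY gain, in which case A is either
    dominated (also costlier) or simply cheaper; the ratio is then not
    meaningful.
    """
    d_cost, d_qaly = cost_a - cost_b, qaly_a - qaly_b
    if d_qaly <= 0:
        return None
    return d_cost / d_qaly

THRESHOLD = 30_000  # GBP per QALY, as used in the evaluation above

# Hypothetical totals: a monitoring pathway vs a 'treat all' pathway
ratio = icer(cost_a=2_400, qaly_a=14.32, cost_b=1_900, qaly_b=14.31)
print(f"ICER = £{ratio:,.0f}/QALY; under threshold: {ratio < THRESHOLD}")
```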


Subject(s)
Antihypertensive Agents/economics, Antihypertensive Agents/therapeutic use, Glaucoma, Open-Angle/prevention & control, Ocular Hypertension/drug therapy, Ocular Hypertension/economics, Administration, Ophthalmic, Age Factors, Antihypertensive Agents/administration & dosage, Cohort Studies, Cost-Benefit Analysis, Humans, Intraocular Pressure, Mass Screening, Models, Theoretical, Ocular Hypertension/epidemiology, Quality-Adjusted Life Years, Randomized Controlled Trials as Topic, Risk Assessment
12.
Ann Oncol ; 23(5): 1250-1253, 2012 May.
Article in English | MEDLINE | ID: mdl-21948815

ABSTRACT

BACKGROUND: To identify the optimal interval for repeat prostate-specific antigen (PSA) testing to screen for prostate cancer in healthy adults. PATIENTS AND METHODS: A retrospective cohort study was conducted on 7332 healthy males without prostate cancer at baseline from 2005 to 2008. Participants underwent annual health checkups including PSA testing at the Center for Preventive Medicine in Japan. Participants with high PSA (≥ 4.0 ng/ml) underwent further examination for prostate cancer. A subgroup analysis was conducted by age group (<50 years, ≥ 50 years). RESULTS: Mean age was 50 years. Mean PSA at baseline was 1.2 ng/ml. In the ≥ 50 years group, for those with initial PSA of <1.0, 1.0-1.9, 2.0-2.9, and 3.0-3.9 ng/ml at baseline, the 3-year cumulative incidence of prostate cancer was 0%, 0.1%, 0.3%, and 5.7%, respectively. No prostate cancer was identified in those <50 years, regardless of PSA level. CONCLUSIONS: If PSA screening is recommended, males >50 years with PSA of 3.0-3.9 ng/ml at baseline should undergo rescreening at 2 years. For men with PSA <3.0 ng/ml, PSA rescreening at intervals of ≥ 3 years is appropriate. PSA screening may not be indicated in males <50 years of age.
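The conclusions amount to a simple rescreening rule; a sketch under the abstract's thresholds (the function name and return strings are ours, and this is an illustration of the reported findings, not clinical guidance):

```python
def psa_rescreen_interval(age: int, psa_ng_ml: float) -> str:
    """Rescreening interval implied by the cohort's findings."""
    if age < 50:
        return "screening may not be indicated"
    if psa_ng_ml >= 4.0:
        return "further examination for prostate cancer"
    if psa_ng_ml >= 3.0:
        return "rescreen at 2 years"
    return "rescreen at >= 3-year intervals"

print(psa_rescreen_interval(age=55, psa_ng_ml=3.4))  # rescreen at 2 years
```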


Subject(s)
Carcinoma/diagnosis, Diagnostic Techniques, Endocrine/statistics & numerical data, Mass Screening/statistics & numerical data, Prostate-Specific Antigen/analysis, Prostatic Neoplasms/diagnosis, Adult, Carcinoma/blood, Cohort Studies, Diagnostic Techniques, Endocrine/standards, Humans, Male, Mass Screening/methods, Mass Screening/standards, Middle Aged, Prostate-Specific Antigen/blood, Prostatic Neoplasms/blood, Prostatic Neoplasms/prevention & control, Retrospective Studies, Time Factors
13.
J Hum Hypertens ; 26(9): 540-6, 2012 Sep.
Article in English | MEDLINE | ID: mdl-21814284

ABSTRACT

Blood pressure (BP) screening is important to identify those at risk of cardiovascular disease, but there are few data on the appropriate screening interval. We aimed to evaluate the optimal interval and the best measure for BP re-screening by estimating the long-term, true change variance ('signal') and the short-term, within-person variance ('noise'). The study was a cohort study, conducted from 2005 to 2008, of healthy Japanese adults not taking antihypertensive medication at baseline, in a teaching hospital. We measured systolic BP (SBP) and diastolic BP (DBP) annually, and calculated pulse pressure (PP) and mean arterial pressure (MAP). A total of 15,055 individuals (51% male) with a mean age of 49 years had annual check-ups. The short-term coefficient of variation was lowest for MAP at 5.2%, followed by SBP (5.7%) and DBP (5.8%), and highest for PP (12%). After 3 years, the 'signal' of true BP change equaled the 'noise' of BP measurement for SBP and MAP only; the signal was larger for those with higher initial BPs. SBP or MAP therefore appears to be the better screening measure. The optimal re-screening interval appears to be 3 years or more for those with SBP <130 mm Hg and 2 years for those with SBP ≥ 130 mm Hg.
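The signal-versus-noise logic can be made concrete: the within-person 'noise' SD comes from the short-term coefficient of variation, and if true change accumulates like a random walk its 'signal' SD grows with the square root of time. A sketch using the 5.7% SBP coefficient of variation reported above; the 120 mm Hg mean and the per-year true-change SD are hypothetical placeholders:

```python
def years_until_signal_equals_noise(mean_bp: float, short_term_cv: float,
                                    true_change_sd_per_year: float) -> float:
    """Years until the SD of true change matches measurement noise.

    Assumes true change accumulates as a random walk, so its variance
    grows linearly with time: signal_sd(t) = sd_per_year * sqrt(t).
    """
    noise_sd = short_term_cv * mean_bp  # within-person SD in mm Hg
    return (noise_sd / true_change_sd_per_year) ** 2

# CV 5.7% for SBP (from the abstract); 120 mm Hg mean and a 4 mm Hg
# per-sqrt(year) true-change SD are illustrative values only.
print(f"{years_until_signal_equals_noise(120, 0.057, 4.0):.1f} years")  # 2.9 years
```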


Subject(s)
Blood Pressure Determination/methods, Blood Pressure Determination/standards, Adult, Aged, Aged, 80 and over, Cohort Studies, Female, Humans, Hypertension/diagnosis, Male, Mass Screening/methods, Middle Aged, Reproducibility of Results, Young Adult
14.
Heart ; 97(9): 689-97, 2011 May.
Article in English | MEDLINE | ID: mdl-21474616

ABSTRACT

OBJECTIVE: To compare the strengths and limitations of cardiovascular risk scores available for clinicians in assessing the global (absolute) risk of cardiovascular disease. DESIGN: Review of cardiovascular risk scores. DATA SOURCES: Medline (1966 to May 2009) using a mixture of MeSH terms and free text for the keywords 'cardiovascular', 'risk prediction' and 'cohort studies'. ELIGIBILITY CRITERIA FOR SELECTING STUDIES: A study was eligible if it fulfilled the following criteria: (1) it was a cohort study of adults in the general population with no prior history of cardiovascular disease and not restricted by a disease condition; (2) the primary objective was the development of a cardiovascular risk score/equation that predicted an individual's absolute cardiovascular risk in 5-10 years; (3) the score could be used by a clinician to calculate the risk for an individual patient. RESULTS: 21 risk scores from 18 papers were identified from 3,536 papers. Cohort size ranged from 4,372 participants (SHS) to 1,591,209 records (QRISK2). More than half of the cardiovascular risk scores (11) were from studies with recruitment starting after 1980. Definitions and methods for measuring risk predictors and outcomes varied widely between scores. Fourteen cardiovascular risk scores reported data on prior treatment, but this was mainly limited to antihypertensive treatment. Only two studies reported prior use of lipid-lowering agents. None reported on prior use of platelet inhibitors or data on treatment drop-ins. CONCLUSIONS: The use of risk-factor-modifying drugs (for example, statins) and disease-modifying medication (for example, platelet inhibitors) was not accounted for. In addition, none of the risk scores addressed the effect of treatment drop-ins, that is, treatment started during the study period. Ideally, a risk score should be derived from a population free from treatment. The lack of accounting for treatment effects, and the wide variation in study characteristics, predictors and outcomes, causes difficulties in using cardiovascular risk scores for clinical treatment decisions.


Subject(s)
Cardiovascular Diseases/etiology, Adult, Aged, Cardiovascular Agents/therapeutic use, Cardiovascular Diseases/drug therapy, Cohort Studies, Female, Humans, Male, Middle Aged, Risk Assessment, Risk Factors, Treatment Outcome
15.
Allergy ; 66(5): 588-95, 2011 May.
Article in English | MEDLINE | ID: mdl-21241318

ABSTRACT

This is the third and last article in the series about the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to grading the quality of evidence and the strength of recommendations in clinical practice guidelines and its application in the field of allergy. We describe the factors that influence the strength of recommendations about the use of diagnostic, preventive and therapeutic interventions: the balance of desirable and undesirable consequences, the quality of a body of evidence related to a decision, patients' values and preferences, and considerations of resource use. We provide examples from two recently developed guidelines in the field of allergy that applied the GRADE approach. The main advantages of this approach are the focus on patient-important outcomes, explicit consideration of patients' values and preferences, the systematic approach to collecting the evidence, the clear separation of the concepts of quality of evidence and strength of recommendations, and transparent reporting of the decision process. The focus on transparency facilitates understanding and implementation and should empower patients, clinicians and other health care professionals to make informed choices.


Subject(s)
Evidence-Based Medicine/standards, Practice Guidelines as Topic/standards, Humans, Needs Assessment
16.
Diabetologia ; 54(2): 280-90, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21052978

ABSTRACT

AIMS/HYPOTHESIS: Fenofibrate caused an acute, sustained plasma creatinine increase in the Fenofibrate Intervention and Event Lowering in Diabetes (FIELD) and Action to Control Cardiovascular Risk in Diabetes (ACCORD) studies. We assessed fenofibrate's renal effects overall and in a FIELD washout sub-study. METHODS: Type 2 diabetic patients (n = 9,795) aged 50 to 75 years were randomly assigned to fenofibrate (n = 4,895) or placebo (n = 4,900) for 5 years, after a 6-week fenofibrate run-in. Albuminuria (urinary albumin/creatinine ratio measured at baseline, year 2 and close-out) and estimated GFR, measured four- to six-monthly according to the Modification of Diet in Renal Disease Study equation, were pre-specified endpoints. Plasma creatinine was re-measured 8 weeks after treatment cessation at close-out (washout sub-study, n = 661). Analysis was by intention-to-treat. RESULTS: During fenofibrate run-in, plasma creatinine increased by 10.0 µmol/l (p < 0.001), but quickly reversed on placebo assignment. It remained higher on fenofibrate than on placebo, but the chronic rise was slower (1.62 vs 1.89 µmol/l annually, p = 0.01), with less estimated GFR loss (1.19 vs 2.03 ml min⁻¹ 1.73 m⁻² annually, p < 0.001). After washout, estimated GFR had fallen less from baseline on fenofibrate (1.9 ml min⁻¹ 1.73 m⁻², p = 0.065) than on placebo (6.9 ml min⁻¹ 1.73 m⁻², p < 0.001), sparing 5.0 ml min⁻¹ 1.73 m⁻² (95% CI 2.3-7.7, p < 0.001). Greater preservation of estimated GFR with fenofibrate was observed with baseline hypertriacylglycerolaemia (n = 169 vs 491 without), alone or combined with low HDL-cholesterol (n = 140 vs 520 without), and with reductions of ≥ 0.48 mmol/l in triacylglycerol over the active run-in period (pre-randomisation) (n = 356 vs 303 without). Fenofibrate reduced urine albumin concentrations, and hence the albumin/creatinine ratio, by 24% vs 11% (p < 0.001; mean difference 14% [95% CI 9-18]), with 14% less progression and 18% more albuminuria regression (p < 0.001) than in participants on placebo. End-stage renal event frequency was similar (n = 21 vs 26, p = 0.48). CONCLUSIONS/INTERPRETATION: Fenofibrate reduced albuminuria and slowed estimated GFR loss over 5 years, despite initially and reversibly increasing plasma creatinine. Fenofibrate may delay albuminuria and GFR impairment in type 2 diabetes patients. Confirmatory studies are merited. TRIAL REGISTRATION: ISRCTN64783481.
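Estimated GFR here refers to the MDRD Study equation; a minimal sketch of the widely used 4-variable form with the 175 coefficient for standardised creatinine, noting that the abstract does not state which re-expression the trial applied:

```python
def egfr_mdrd(creatinine_mg_dl: float, age: int,
              female: bool, black: bool) -> float:
    """4-variable MDRD Study estimated GFR in ml min⁻¹ 1.73 m⁻²."""
    egfr = 175 * creatinine_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# 100 µmol/l plasma creatinine = 100 / 88.4 mg/dl, for a 62-year-old man
print(f"{egfr_mdrd(100 / 88.4, 62, female=False, black=False):.0f}")  # ~66
```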


Subject(s)
Diabetes Mellitus, Type 2/blood, Diabetes Mellitus, Type 2/drug therapy, Fenofibrate/therapeutic use, Hypolipidemic Agents/therapeutic use, Aged, Creatinine/blood, Female, Glomerular Filtration Rate/drug effects, Humans, Male, Middle Aged
17.
Health Technol Assess ; 13(60): 1-160, 2009 Dec.
Article in English | MEDLINE | ID: mdl-20003824

ABSTRACT

OBJECTIVE: To determine the diagnostic performance and cost-effectiveness of colour vision testing (CVT) to identify and monitor the progression of diabetic retinopathy (DR). DATA SOURCES: Major electronic databases including MEDLINE, EMBASE, Cumulative Index to Nursing and Allied Health Literature, and Cochrane Database of Systematic Reviews were searched from inception to September 2008. REVIEW METHODS: A systematic review of the evidence was carried out according to standard methods. An online survey of National Screening Programme for Diabetic Retinopathy (NSPDR) clinical leads and programme managers assessed the diagnostic tools used routinely by local centres and their views on future research priorities. A decision tree and Markov model was developed to estimate the incremental costs and effects of adding CVT to the current NSPDR. RESULTS: In total, 25 studies on CVT met the inclusion criteria for the review, including 18 presenting 2 × 2 diagnostic accuracy data. The quality of studies and reporting was generally poor. Automated or computerised CVTs reported variable sensitivities (63-97%) and specificities (71-95%). One study reported good diagnostic accuracy estimates for computerised CVT plus retinal photography for detection of sight-threatening DR, but it included few cases of retinopathy in total. Results for pseudoisochromatic plates, anomaloscopes and colour arrangement tests were largely inadequate for DR screening, with Youden indices (sensitivity + specificity - 100%) close to zero. No studies were located that addressed patient preferences relating to CVT for DR. Retinal photography is universally employed as the primary method for retinal screening by centres responding to the online survey; none used CVT. The review of the economic evaluation literature found no previous studies describing the cost and effects of any type of CVT. Our economic evaluation suggested that adding CVT to the current national screening programme could be cost-effective if it adequately increases sensitivity and is relatively inexpensive. The deterministic base-case analysis indicated that the cost per quality-adjusted life-year gained may be £6,364 and £12,432 for type 1 and type 2 diabetes, respectively. However, probabilistic sensitivity analysis highlighted the substantial probability that CVT is not diagnostically accurate enough to be either an effective or a cost-effective addition to current screening methods. The results of the economic model should be treated with caution as the model is based on only one small study. CONCLUSIONS: There is insufficient evidence to support the use of CVT alone, or in combination with retinal photography, as a method for screening for retinopathy in patients with diabetes. Better quality diagnostic accuracy studies directly comparing the incremental value of CVT in addition to retinal photography are needed before drawing conclusions on cost-effectiveness. The most frequently cited preference for future research was the use of optical coherence tomography for the detection of clinically significant macular oedema.
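The Youden index quoted above is simple to compute from the extracted 2 × 2 data; a sketch on the proportion scale (J = sensitivity + specificity - 1, equivalent to the percentage form in the abstract), with hypothetical counts:

```python
def youden_index(tp: int, fp: int, fn: int, tn: int) -> float:
    """Youden index J = sensitivity + specificity - 1 from a 2x2 table.

    J near 0 means the test barely beats chance, as reported above for
    several pseudoisochromatic plate and colour arrangement tests.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

# Hypothetical counts for a test with 63% sensitivity, 71% specificity
print(f"J = {youden_index(tp=63, fp=29, fn=37, tn=71):.2f}")  # J = 0.34
```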


Subject(s)
Color Vision Defects/diagnosis, Diabetic Retinopathy/physiopathology, Diagnostic Tests, Routine/economics, Diagnostic Tests, Routine/standards, Adolescent, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Reproducibility of Results, Young Adult
18.
Diabetologia ; 52(10): 1990-2000, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19644668

ABSTRACT

AIMS/HYPOTHESIS: We compared the effect of biphasic, basal or prandial insulin regimens on glucose control, clinical outcomes and adverse events in people with type 2 diabetes. METHODS: We searched the Cochrane Library, MEDLINE, EMBASE and major American and European conference abstracts for randomised controlled trials up to October 2008. A systematic review and meta-analyses were performed. RESULTS: Twenty-two trials that randomised 4,379 patients were included. Seven trials reported both starting insulin dose and titration schedules. Hypoglycaemia definitions and glucose targets varied. Meta-analyses were performed pooling data from insulin-naive patients. Greater HbA1c reductions were seen with biphasic and prandial insulin, compared with basal insulin, of 0.45% (95% CI 0.19-0.70, p = 0.0006) and 0.45% (95% CI 0.16-0.73, p = 0.002), respectively, but with lesser reductions of fasting glucose of 0.93 mmol/l (95% CI 0.21-1.65, p = 0.01) and 2.20 mmol/l (95% CI 1.70-2.70, p < 0.00001), respectively. Larger insulin doses at study end were reported in biphasic and prandial arms compared with basal arms. No studies found differences in major hypoglycaemic events, but minor hypoglycaemic events for prandial and biphasic insulin were inconsistently reported as either higher than or equivalent to basal insulin. Greater weight gain was seen with prandial compared with basal insulin (1.86 kg, 95% CI 0.80-2.92, p = 0.0006). CONCLUSIONS/INTERPRETATION: Greater HbA1c reduction may be obtained in type 2 diabetes when insulin is initiated using biphasic or prandial insulin rather than a basal regimen, but with an unquantified risk of hypoglycaemia. Studies with longer follow-up are required to determine the clinical relevance of this finding.
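The pooled HbA1c differences follow the usual fixed-effect inverse-variance method; a sketch with made-up per-trial mean differences and standard errors, not the review's data:

```python
import math

def pool_fixed_effect(estimates):
    """Inverse-variance fixed-effect pooling of (mean difference, SE) pairs."""
    weights = [1 / se ** 2 for _, se in estimates]
    pooled = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical HbA1c differences (%) for biphasic vs basal, per trial
trials = [(-0.50, 0.20), (-0.30, 0.15), (-0.60, 0.25)]
est, lo, hi = pool_fixed_effect(trials)
print(f"pooled difference {est:.2f}% (95% CI {lo:.2f} to {hi:.2f})")
```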


Subject(s)
Diabetes Mellitus, Type 2/drug therapy, Hypoglycemic Agents/therapeutic use, Insulin/therapeutic use, Clinical Trials as Topic, Diabetes Mellitus, Type 2/metabolism, Glycated Hemoglobin/metabolism, Humans
19.
Health Technol Assess ; 13(32): 1-207, iii, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19586584

ABSTRACT

OBJECTIVES: To assess the accuracy in diagnosing heart failure of clinical features and potential primary care investigations, and to perform a decision analysis to test the impact of plausible diagnostic strategies on costs and diagnostic yield in the UK health-care setting. DATA SOURCES: MEDLINE and CINAHL were searched from inception to 7 July 2006. 'Grey literature' databases and conference proceedings were searched and authors of relevant studies contacted for data that could not be extracted from the published papers. REVIEW METHODS: A systematic review of the clinical evidence was carried out according to standard methods. Individual patient data (IPD) analysis was performed on nine studies, and a logistic regression model to predict heart failure was developed on one of the data sets and validated on the other data sets. Cost-effectiveness modelling was based on a decision tree that compared different plausible investigation strategies. RESULTS: Dyspnoea was the only symptom or sign with high sensitivity (89%), but it had poor specificity (51%). Clinical features with relatively high specificity included history of myocardial infarction (89%), orthopnoea (89%), oedema (72%), elevated jugular venous pressure (70%), cardiomegaly (85%), added heart sounds (99%), lung crepitations (81%) and hepatomegaly (97%). However, the sensitivity of these features was low, ranging from 11% (added heart sounds) to 53% (oedema). Electrocardiography (ECG), B-type natriuretic peptides (BNP) and N-terminal pro-B-type natriuretic peptides (NT-proBNP) all had high sensitivities (89%, 93% and 93% respectively). Chest X-ray was moderately specific (76-83%) but insensitive (67-68%). BNP was more accurate than ECG, with a relative diagnostic odds ratio of ECG/BNP of 0.32 (95% CI 0.12-0.87). There was no difference between the diagnostic accuracy of BNP and NT-proBNP. A model based upon simple clinical features and BNP derived from one data set was found to have good validity when applied to other data sets. A model substituting ECG for BNP was less predictive. From this a simple clinical rule was developed: in a patient presenting with symptoms such as breathlessness in whom heart failure is suspected, refer directly to echocardiography if the patient has a history of myocardial infarction or basal crepitations or is a male with ankle oedema; otherwise, carry out a BNP test and refer for echocardiography depending on the results of the test. On the basis of the cost-effectiveness analysis carried out, such a decision rule is likely to be considered cost-effective to the NHS in terms of cost per additional case detected. The cost-effectiveness analysis further suggested that, if likely benefit to the patient in terms of improved life expectancy is taken into account, the optimum strategy would be to refer all patients with symptoms suggestive of heart failure directly for echocardiography. CONCLUSIONS: The analysis suggests the need for important changes to the NICE recommendations. First, BNP (or NT-proBNP) should be recommended over ECG and, second, some patients should be referred straight for echocardiography without undergoing any preliminary investigation. Future work should include evaluation of the clinical rule described above in clinical practice.
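The clinical rule in the conclusions translates directly into code; a sketch in which the BNP cut-off is a placeholder, since the abstract does not report the threshold used:

```python
from typing import Optional

def hf_referral(history_of_mi: bool, basal_crepitations: bool,
                male: bool, ankle_oedema: bool,
                bnp_pg_ml: Optional[float] = None,
                bnp_cutoff: float = 100.0) -> str:
    """Referral rule from the review's conclusions (BNP cut-off illustrative)."""
    if history_of_mi or basal_crepitations or (male and ankle_oedema):
        return "refer directly to echocardiography"
    if bnp_pg_ml is None:
        return "carry out a BNP test first"
    if bnp_pg_ml >= bnp_cutoff:
        return "refer for echocardiography"
    return "heart failure less likely; consider alternative diagnoses"

print(hf_referral(history_of_mi=False, basal_crepitations=True,
                  male=False, ankle_oedema=False))
# -> refer directly to echocardiography
```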


Subject(s)
Heart Failure/diagnosis, Heart Function Tests/methods, Natriuretic Peptide, Brain/analysis, Primary Health Care/methods, Aged, Aged, 80 and over, Diagnosis, Differential, Female, Heart Failure/metabolism, Humans, Male, Middle Aged, Practice Guidelines as Topic, State Medicine
20.
Allergy ; 64(8): 1109-16, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19489757

ABSTRACT

The GRADE approach to grading the quality of evidence and strength of recommendations provides a comprehensive and transparent approach for developing clinical recommendations about using diagnostic tests or diagnostic strategies. Although grading the quality of evidence and strength of recommendations about using tests shares the logic of grading recommendations for treatment, it presents unique challenges. Guideline panels and clinicians should be alert to these special challenges when using the evidence about the accuracy of tests as the basis for clinical decisions. In the GRADE system, valid diagnostic accuracy studies can provide high quality evidence of test accuracy. However, such studies often provide only low quality evidence for the development of recommendations about diagnostic testing, as test accuracy is, at best, a surrogate for patient-important outcomes. Inferring from data on accuracy that using a test improves outcomes that are important to patients requires the availability of an effective treatment, improved patients' wellbeing through prognostic information, or, by excluding an ominous diagnosis, reduction of anxiety and the opportunity for an earlier search for an alternative diagnosis for which beneficial treatment may be available. Assessing the directness of evidence supporting the use of a diagnostic test requires judgments about the relationship between test results and patient-important consequences. Well-designed and conducted studies of allergy tests, in parallel with efforts to evaluate allergy treatments critically, will encourage improved guideline development for allergic diseases.


Subject(s)
Diagnostic Tests, Routine/standards, Evidence-Based Medicine, Hypersensitivity/diagnosis, Practice Guidelines as Topic/standards, Diagnosis, Differential, Humans, Quality Assurance, Health Care, Sensitivity and Specificity