Results 1 - 20 of 143
1.
Br J Surg ; 106(8): 1026-1034, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31134619

ABSTRACT

BACKGROUND: Patients undergoing amputation of the lower extremity for the complications of peripheral artery disease and/or diabetes are at risk of treatment failure and the need for reamputation at a higher level. The aim of this study was to develop a patient-specific reamputation risk prediction model. METHODS: Patients with incident unilateral transmetatarsal, transtibial or transfemoral amputation between 2004 and 2014 secondary to diabetes and/or peripheral artery disease, and who survived 12 months after amputation, were identified using Veterans Health Administration databases. Procedure codes and natural language processing were used to define subsequent ipsilateral reamputation at the same or higher level. Stepdown logistic regression was used to develop the prediction model, which was then assessed for calibration and discrimination using goodness-of-fit testing, the area under the receiver operating characteristic curve (AUC) and the discrimination slope. RESULTS: Some 5260 patients were identified, of whom 1283 (24·4 per cent) underwent ipsilateral reamputation in the 12 months after initial amputation. Crude reamputation risks were 40·3, 25·9 and 9·7 per cent in the transmetatarsal, transtibial and transfemoral groups respectively. The final prediction model included 11 predictors (amputation level, sex, smoking, alcohol, rest pain, use of outpatient anticoagulants, diabetes, chronic obstructive pulmonary disease, white blood cell count, kidney failure and previous revascularization), along with four interaction terms. Evaluation of the prediction characteristics indicated good model calibration with goodness-of-fit testing, good discrimination (AUC 0·72) and a discrimination slope of 11·2 per cent. CONCLUSION: A prediction model was developed to calculate individual risk of primary healing failure and the need for reamputation surgery at each amputation level. This model may assist clinical decision-making regarding amputation-level selection.
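The discrimination statistics reported above are straightforward to compute once a logistic model has been fitted. Below is a minimal Python sketch (not the authors' code; the synthetic predictor matrix and event rate are illustrative assumptions) showing how the AUC and the discrimination slope, i.e. the mean predicted risk in cases minus that in non-cases, can be obtained with scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5260, 11))              # stand-in for the 11 predictors
y = (rng.random(5260) < 0.244).astype(int)   # ~24.4 per cent event rate

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]

auc = roc_auc_score(y, p)                    # discrimination (AUC)
slope = p[y == 1].mean() - p[y == 0].mean()  # discrimination slope
print(f"AUC = {auc:.2f}, discrimination slope = {slope:.1%}")
```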


Subject(s)
Amputation, Surgical/statistics & numerical data , Diabetic Angiopathies/epidemiology , Leg/surgery , Peripheral Arterial Disease/complications , Reoperation/statistics & numerical data , Risk Assessment , Aged , Clinical Decision-Making , Diabetic Angiopathies/surgery , Female , Humans , Male , Middle Aged , Models, Statistical , Peripheral Arterial Disease/epidemiology , Risk Factors
2.
Br J Surg ; 106(7): 879-888, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30865292

ABSTRACT

BACKGROUND: Patients who undergo lower extremity amputation secondary to the complications of diabetes or peripheral artery disease have poor long-term survival. Individual-patient, rather than population, survival estimates give patients and surgeons important information for making individualized treatment decisions. METHODS: Patients with peripheral artery disease and/or diabetes undergoing their first unilateral transmetatarsal, transtibial or transfemoral amputation were identified in the Veterans Affairs Surgical Quality Improvement Program (VASQIP) database. Stepdown logistic regression was used to develop a 1-year mortality risk prediction model from a list of 33 candidate predictors using data from three of five Department of Veterans Affairs national geographical regions. External geographical validation was performed using data from the remaining two regions. Calibration and discrimination were assessed in the development and validation samples. RESULTS: The development sample included 5028 patients and the validation sample 2140. The final mortality prediction model (AMPREDICT-Mortality) included amputation level, age, BMI, race, functional status, congestive heart failure, dialysis, blood urea nitrogen level, and white blood cell and platelet counts. The model fit in the validation sample was good. The area under the receiver operating characteristic (ROC) curve for the validation sample was 0·76 and Cox calibration regression indicated excellent calibration (slope 0·96, 95 per cent c.i. 0·85 to 1·06; intercept 0·02, 95 per cent c.i. -0·12 to 0·17). Given the external validation characteristics, the development and validation samples were combined, giving a total sample of 7168. CONCLUSION: The AMPREDICT-Mortality prediction model is a validated parsimonious model that can be used to inform the 1-year mortality risk following non-traumatic lower extremity amputation of patients with peripheral artery disease or diabetes.
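Cox calibration regression, used above to validate AMPREDICT-Mortality in the external sample, regresses the observed outcome on the logit of the predicted probability; a slope near 1 and an intercept near 0 indicate good calibration. A hedged statsmodels sketch follows (the function and variable names are mine, not the paper's).

```python
import numpy as np
import statsmodels.api as sm

def cox_calibration(y_val, p_val):
    """Cox calibration regression: regress the observed binary outcome on
    the logit of the predicted probability in the validation sample. A
    slope near 1 and an intercept near 0 indicate good calibration."""
    p = np.clip(p_val, 1e-6, 1 - 1e-6)       # guard against logit overflow
    lp = np.log(p / (1 - p))                  # linear predictor (logit scale)
    res = sm.Logit(y_val, sm.add_constant(lp)).fit(disp=0)
    intercept, slope = res.params             # compare with the reported 0.02 / 0.96
    return intercept, slope, res.conf_int()   # conf_int() gives the 95% c.i.
```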


Subject(s)
Amputation, Surgical/mortality , Decision Support Techniques , Diabetic Foot/surgery , Lower Extremity/surgery , Peripheral Arterial Disease/surgery , Adult , Aged , Databases, Factual , Diabetic Foot/complications , Diabetic Foot/mortality , Female , Humans , Logistic Models , Lower Extremity/blood supply , Male , Middle Aged , Peripheral Arterial Disease/complications , Peripheral Arterial Disease/mortality , Proportional Hazards Models , ROC Curve , Risk Assessment , Risk Factors , Treatment Outcome
3.
JDR Clin Trans Res ; 3(4): 366-375, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30238061

ABSTRACT

INTRODUCTION: In a randomized controlled trial, the effectiveness of motivational interviewing (MI) combined with enhanced community services (MI + ECS) was compared with ECS alone for reducing dental caries in American Indian children on the Pine Ridge Reservation. The intervention was developed and delivered with extensive tribal collaboration. METHODS: A total of 579 mother-newborn dyads were enrolled and randomized to the MI + ECS and ECS groups. They were followed for 36 mo. Four MI sessions were provided, the first shortly after childbirth and then 6, 12, and 18 mo later. Both groups were exposed to ECS, which included public service announcements through billboards and tribal radio, as well as broad distribution of brochures on behavioral risk factors for early childhood caries (ECC), toothbrushes, and toothpaste. MI impact was measured as decayed, missing, and filled tooth surfaces (dmfs). Secondary outcomes included decayed surfaces, caries prevalence, and maternal oral health knowledge and behaviors. Modified intention-to-treat analyses were conducted. Eighty-eight percent of mothers completed at least 3 of the 4 MI sessions offered. RESULTS: After 3 y, dmfs was not significantly different for the 2 groups (MI + ECS = 10, ECS = 10.38, P = 0.68). In both groups, prevalence of caries experience was 7% to 9% after 1 y, 35% to 36% at 2 y, and 55% to 56% at 3 y. Mean knowledge scores increased by 5.0, 5.3, and 5.9 percentage points at years 1, 2, and 3 in the MI + ECS group and by 1.9, 3.3, and 5.0 percentage points in the ECS group, respectively (P = 0.03). Mean maternal oral health behavior scores were not statistically significantly different between the treatment arms. CONCLUSION: In summary, the MI intervention appeared to improve maternal knowledge but had no effect on oral health behaviors or on the progression of ECC (ClinicalTrials.gov NCT01116726). KNOWLEDGE TRANSFER STATEMENT: The findings of this study suggest that motivational interviewing focusing on parental behaviors may not be as effective as previously hoped for slowing the development of childhood caries in some high-risk groups. Furthermore, social factors may be even more salient determinants of oral health than we previously supposed, perhaps interfering with the capacity to benefit from behavioral strategies that have been useful elsewhere. The improvement of children's oral health in high-risk populations characterized by poverty and multiple related life stresses may require more holistic approaches that address these formidable barriers.

5.
J Dent Res ; 95(11): 1237-44, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27439724

ABSTRACT

The authors tested the effectiveness of a community-based, tribally delivered oral health promotion (OHP) intervention (INT) at reducing caries increment in Navajo children attending Head Start. In a 3-y cluster-randomized trial, the authors developed an OHP INT with Navajo input that was delivered by trained Navajo lay health workers to children attending 52 Navajo Head Start classrooms (26 INT, 26 usual care [UC]). The INT was designed as a highly personalized set of oral health-focused interactions (5 for children and 4 for parents), along with 4 fluoride varnish applications delivered in Head Start during the 2011-2012 and 2012-2013 academic years. The authors evaluated INT impact on decayed, missing, and filled tooth surfaces (dmfs) increment compared with UC. Other outcomes included caries prevalence and caregiver oral health-related knowledge and behaviors. Modified intention-to-treat and per-protocol analyses were conducted. The authors enrolled 1,016 caregiver-child dyads. Baseline mean dmfs/caries prevalence equaled 19.9/86.5% for the INT group and 22.8/90.1% for the UC group, respectively. INT adherence was 53% (i.e., ≥3 child OHP events, ≥1 caregiver OHP event, and ≥3 fluoride varnish applications). After 3 y, dmfs increased in both groups (+12.9 INT vs. +10.8 UC; P = 0.216), as did caries prevalence (86.5% to 96.6% INT vs. 90.1% to 98.2% UC; P = 0.808) in a modified intention-to-treat analysis of 897 caregiver-child dyads receiving 1 y of INT. Caregiver oral health knowledge scores improved in both groups (75.1% to 81.2% INT vs. 73.6% to 79.5% UC; P = 0.369). Caregiver oral health behavior scores improved more rapidly in the INT group versus the UC group (P = 0.006). The dmfs increment was smaller among adherent INT children (+8.9) than among UC children (+10.8; P = 0.028) in a per-protocol analysis. In conclusion, the severity of dental disease in Navajo Head Start children is extreme and difficult to improve. The authors argue that successful approaches to prevention may require even more highly personalized approaches shaped by cultural perspectives and attentive to the social determinants of oral health (ClinicalTrials.gov NCT01116739).


Subject(s)
Health Promotion/methods , Oral Health , Child, Preschool , DMF Index , Dental Caries/epidemiology , Dental Caries/prevention & control , Female , Health Services, Indigenous , Humans , Indians, North American , Male
6.
Health Educ Res ; 31(1): 70-81, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26612050

ABSTRACT

Health literacy is 'the capacity to obtain, process and understand basic health information and services needed to make appropriate health decisions'. Although numerous studies show a link between health literacy and clinical outcomes, little research has examined the association of health literacy with oral health. No large-scale studies have assessed these relationships among American Indians, a population at risk for limited health literacy and oral health problems. This analysis was conducted as part of a clinical trial aimed at reducing dental decay among preschoolers in the Navajo Nation Head Start program. Using baseline data for 1016 parent-child dyads, we examined the association of parental health literacy with parents' oral health knowledge, attitudes, and behavior, as well as indicators of parental and pediatric oral health. More limited health literacy was associated with lower levels of oral health knowledge, more negative oral health attitudes, and lower levels of adherence to recommended oral health behavior. Parents with more limited health literacy also had significantly worse oral health status (OHS) and reported their children to have significantly worse oral health-related quality of life. These results highlight the importance of oral health promotion interventions that are sensitive to the needs of participants with limited health literacy.


Subject(s)
Child Health , Health Literacy , Indians, North American , Oral Health/education , Parents/education , Adult , Aged , Aged, 80 and over , Child, Preschool , Female , Health Knowledge, Attitudes, Practice , Humans , Male , Middle Aged , Young Adult
7.
Ann Intern Med ; 135(10): 847-57, 2001 Nov 20.
Article in English | MEDLINE | ID: mdl-11712875

ABSTRACT

BACKGROUND: Pneumonia is a common postoperative complication associated with substantial morbidity and mortality. OBJECTIVE: To develop and validate a preoperative risk index for predicting postoperative pneumonia. DESIGN: Prospective cohort study with outcome assessment based on chart review. SETTING: 100 Veterans Affairs Medical Centers performing major surgery. PATIENTS: The risk index was developed by using data on 160 805 patients undergoing major noncardiac surgery between 1 September 1997 and 31 August 1999 and was validated by using data on 155 266 patients undergoing surgery between 1 September 1995 and 31 August 1997. Patients with preoperative pneumonia, ventilator dependence, and pneumonia that developed after postoperative respiratory failure were excluded. MEASUREMENTS: Postoperative pneumonia was defined by using the Centers for Disease Control and Prevention definition of nosocomial pneumonia. RESULTS: A total of 2466 patients (1.5%) developed pneumonia, and the 30-day postoperative mortality rate was 21%. A postoperative pneumonia risk index was developed that included type of surgery (abdominal aortic aneurysm repair, thoracic, upper abdominal, neck, vascular, and neurosurgery), age, functional status, weight loss, chronic obstructive pulmonary disease, general anesthesia, impaired sensorium, cerebral vascular accident, blood urea nitrogen level, transfusion, emergency surgery, long-term steroid use, smoking, and alcohol use. Patients were divided into five risk classes by using risk index scores. Pneumonia rates were 0.2% among those with 0 to 15 risk points, 1.2% for those with 16 to 25 risk points, 4.0% for those with 26 to 40 risk points, 9.4% for those with 41 to 55 risk points, and 15.3% for those with more than 55 risk points. The C-statistic was 0.805 for the development cohort and 0.817 for the validation cohort. CONCLUSIONS: The postoperative pneumonia risk index identifies patients at risk for postoperative pneumonia and may be useful in guiding perioperative respiratory care.
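The point-score banding reported in the results maps directly to a small lookup. The helper below is an illustrative sketch (the function name and banding logic are mine; the score thresholds and pneumonia rates are those quoted in the abstract).

```python
def pneumonia_risk_class(points: int) -> tuple[int, float]:
    """Map a postoperative pneumonia risk index score to its risk class
    and the pneumonia rate (per cent) reported for that class."""
    bands = [(15, 1, 0.2), (25, 2, 1.2), (40, 3, 4.0), (55, 4, 9.4)]
    for upper, risk_class, rate in bands:
        if points <= upper:
            return risk_class, rate
    return 5, 15.3  # more than 55 risk points

print(pneumonia_risk_class(30))  # -> (3, 4.0): 26-40 points, 4.0% pneumonia rate
```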


Subject(s)
Pneumonia/diagnosis , Postoperative Complications/diagnosis , Risk Assessment/methods , Aged , Aged, 80 and over , Data Interpretation, Statistical , Female , Health Status , Humans , Male , Mental Health , Middle Aged , Pneumonia/complications , Prospective Studies , Risk Factors
8.
J Vasc Surg ; 34(4): 634-40, 2001 Oct.
Article in English | MEDLINE | ID: mdl-11668317

ABSTRACT

PURPOSE: Racial variation in health care outcomes is an important topic. Risk-adjustment models have not been developed for elective abdominal aortic aneurysm repair (AAA), lower extremity bypass revascularization (LEB), or lower extremity amputation (AMP). Earlier studies examining racial variation in mortality and morbidity from AAA, LEB, or AMP were limited to administrative data. This study determined risk factors for mortality after surgery for vascular disease and assessed whether race is an important risk factor. METHODS: Data in this prospective observational study were obtained from the Department of Veterans Affairs (VA) National Surgical Quality Improvement Program. Detailed demographic and clinical data were collected prospectively from patients' medical records by trained nurse reviewers. Eligible patients were those 18 years and older who underwent elective AAA, LEB, or AMP at one of 44 VA medical centers performing both vascular and cardiac surgery (phase I; October 1991 to December 1993) and at one of these 44 or 79 additional VA medical centers performing vascular but not cardiac surgery (phase II; January 1994 to August 1995). The independent association of several preoperative factors with the 30-day postoperative mortality rate was examined with stepwise logistic regression analysis for AAA, LEB, and AMP. Models were developed in the combined 44 VA medical centers and validated in the 79 VA medical centers. The independent association of race with the 30-day postoperative mortality rate was examined after controlling for important preoperative risk factors for each operation. RESULTS: More than 10,000 surgical operations were examined, and 5, 3, and 10 independent preoperative predictors of 30-day mortality rate were identified for AAA, LEB, and AMP, respectively. The observed mortality rate for patients undergoing AAA was higher (7.2% vs 3.2%; P =.02) in African American patients than in white patients in the 44 VA medical centers, although the differences were not significant in LEB and AMP or at the additional 79 hospitals. After important preoperative risk factors were controlled, there was no difference in 30-day mortality rates between African American patients and white patients. CONCLUSION: We identified several important preoperative risk factors for 30-day mortality rate in three vascular operations. In this study, race was not an independent predictor of mortality.
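Stepwise logistic regression of the kind used here to screen preoperative predictors can be sketched as a backward-elimination loop; the version below is a generic illustration with assumed pandas column inputs, not the study's actual implementation.

```python
import statsmodels.api as sm

def backward_stepwise_logit(df, outcome, predictors, p_remove=0.05):
    """Backward-elimination logistic regression: refit after dropping the
    least significant predictor until every remaining p-value is below
    p_remove. df is a pandas DataFrame with numeric columns; the column
    names and retention threshold are illustrative assumptions."""
    kept = list(predictors)
    while kept:
        res = sm.Logit(df[outcome], sm.add_constant(df[kept])).fit(disp=0)
        pvals = res.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < p_remove:   # everything left is significant
            return res, kept
        kept.remove(worst)
    return None, []
```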


Subject(s)
Amputation, Surgical/mortality , Amputation, Surgical/statistics & numerical data , Aortic Aneurysm, Abdominal/mortality , Aortic Aneurysm, Abdominal/surgery , Black People , Elective Surgical Procedures/mortality , Hospital Mortality , Hospitals, Veterans , Leg/blood supply , Leg/surgery , Vascular Diseases/mortality , Vascular Diseases/surgery , Vascular Surgical Procedures/mortality , White People , Aged , Analysis of Variance , Female , Humans , Logistic Models , Male , Middle Aged , Prospective Studies , Risk Adjustment , Risk Factors , Total Quality Management , United States/epidemiology , United States Department of Veterans Affairs
9.
Ann Surg ; 234(3): 370-82; discussion 382-3, 2001 Sep.
Article in English | MEDLINE | ID: mdl-11524590

ABSTRACT

OBJECTIVE: To determine whether the investment in postgraduate education and training places patients at risk for worse outcomes and higher costs than if medical and surgical care was delivered in nonteaching settings. SUMMARY BACKGROUND DATA: The Veterans Health Administration (VA) plays a major role in the training of medical students, residents, and fellows. METHODS: The database of the VA National Surgical Quality Improvement Program was analyzed for all major noncardiac operations performed during fiscal years 1997, 1998, and 1999. Teaching status of a hospital was determined on the basis of a background and structure questionnaire that was independently verified by a research fellow. Stepwise logistic regression was used to construct separate models predictive of 30-day mortality and morbidity for each of seven surgical specialties and eight operations. Based on these models, a severity index for each patient was calculated. Hierarchical logistic regression models were then created to examine the relationship between teaching versus nonteaching hospitals and 30-day postoperative mortality and morbidity, after adjusting for patient severity. RESULTS: Teaching hospitals performed 81% of the total surgical workload and 90% of the major surgery workload. In most specialties in teaching hospitals, the residents were the primary surgeons in more than 90% of the operations. Compared with nonteaching hospitals, the patient populations in teaching hospitals had a higher prevalence of risk factors, underwent more complex operations, and had longer operation times. Risk-adjusted mortality rates were not different between the teaching and nonteaching hospitals in the specialties and operations studied. The unadjusted complication rate was higher in teaching hospitals in six of seven specialties and four of eight operations. Risk adjustment did not eliminate completely these differences, probably reflecting the relatively poor predictive validity of some of the risk adjustment models for morbidity. Length of stay after major operations was not consistently different between teaching and nonteaching hospitals. CONCLUSION: Compared with nonteaching hospitals, teaching hospitals in the VA perform the majority of complex and high-risk major procedures, with comparable risk-adjusted 30-day mortality rates. Risk-adjusted 30-day morbidity rates in teaching hospitals are higher in some specialties and operations than in nonteaching hospitals. Although this may reflect the weak predictive validity of some of the risk adjustment models for morbidity, it may also represent suboptimal processes and structures of care that are unique to teaching hospitals. Despite good quality of care in teaching hospitals, as evidenced by the 30-day mortality data, efforts should be made to examine further the structures and processes of surgical care prevailing in these hospitals.


Subject(s)
Hospitals, Teaching/standards , Hospitals, Veterans/standards , Surgical Procedures, Operative/standards , Education, Medical, Graduate , Hospitals/standards , Humans , Length of Stay , Models, Theoretical , Postoperative Complications , Regression Analysis , Risk Factors , Surgical Procedures, Operative/mortality , Treatment Outcome
10.
J Vasc Surg ; 34(2): 283-90, 2001 Aug.
Article in English | MEDLINE | ID: mdl-11496281

ABSTRACT

PURPOSE: A noncardiac surgery risk model was used to analyze variations in postoperative mortality and amputation-free survival for older veterans undergoing femorodistal bypass grafting surgery. METHODS: A prospective cohort study was undertaken in 105 Veterans Affairs (VA) hospitals at the time of index operation from 1991 to 1995. Each patient was linked to subsequent hospitalizations, major amputation surgery, and survival through 1999. Logistic regression and proportional hazards models were used to develop risk indices based on risk factors from the VA National Surgical Quality Improvement Program. A total of 4288 male veterans 40 years or older underwent artificial, vein, or in situ bypass grafting surgery at the femoral to tibial level. The main outcome measures were 30-day postoperative mortality and amputation-free survival. RESULTS: Approximately half of all patients had undergone an earlier revascularization or amputation at any level for vascular disease. The 30-day postoperative mortality rate was 2.1% and varied greatly between mortality risk index quartiles (0.6%-5.2%). In a median 44.3 months of follow-up, surviving patients had 17,694 subsequent VA hospitalizations, 1147 patients (26.7%) underwent subsequent major amputation, and 1913 patients (44.6%) died. The overall survival probability was 88% at 1 year and 63% at 5 years; 1- and 5-year (any sided) limb salvage rates were 87% and 74%, respectively, for patients who underwent a femoropopliteal bypass grafting procedure, compared with 77% and 63%, respectively, for patients who underwent a tibial bypass grafting procedure. When amputation and death were combined as end points, amputation-free survival probability rates at 1, 3, and 7.5 years were 74%, 56%, and 29%, respectively. Patients with the best 20% survival risk scores had observed mean survival probability rates 30% higher than patients in the poorest 20% of survival risk. CONCLUSION: Risk indices derived from the preoperative workup may be of use to clinicians in assessing and communicating risk and prognosis. Risk-adjustment of outcomes is critical for evaluating future disease management initiatives for patients with advanced peripheral arterial disease.
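Survival probabilities like the 1-, 3- and 7.5-year amputation-free figures above are Kaplan-Meier estimates of a composite (death or major amputation) endpoint. A minimal lifelines sketch follows; the synthetic follow-up data are invented for illustration, not derived from the study.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
months = rng.exponential(scale=60.0, size=4288)  # synthetic follow-up (months)
event = rng.random(4288) < 0.7                   # composite: death or amputation

kmf = KaplanMeierFitter()
kmf.fit(months, event_observed=event, label="amputation-free survival")
for t in (12, 36, 90):                           # 1, 3 and 7.5 years
    print(t, round(float(kmf.survival_function_at_times(t).iloc[0]), 2))
```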


Subject(s)
Femoral Vein/surgery , Aged , Humans , Male , Middle Aged , Postoperative Complications/epidemiology , Prospective Studies , Regression Analysis , Survival Rate , United States , United States Department of Veterans Affairs , Vascular Surgical Procedures
11.
Surgery ; 130(1): 21-9, 2001 Jul.
Article in English | MEDLINE | ID: mdl-11436008

ABSTRACT

BACKGROUND: A surgical risk model is used to analyze postoperative mortality and late survival for older veterans who underwent above- or below-knee amputations in 119 Veterans Affairs (VA) hospitals from 1991 to 1995. METHODS: Preoperative medical conditions and laboratory values abstracted by the VA National Surgical Quality Improvement Program were linked to subsequent hospitalization and survival through 1999. Logistic regression and proportional hazards models were used to develop risk indexes for postoperative mortality and long-term survival. RESULTS: Thirty-day postoperative mortality was 6.3% for 1909 below-knee and 13.3% for 2152 above-knee amputees. Mortality varied greatly between the lowest-highest risk index quartiles (0.8%-18.4% for below-knee amputation and 2.3%-31.1% for above-knee amputation). Surviving patients had 10,827 subsequent VA hospitalizations during a median 32-month follow-up. Survival probabilities for below- and above-knee amputees were 77% and 59% at 1 year, 57% and 39% at 3 years, and 28% and 20% at 7.5 years. The lowest quartile of survival risk had a 61% five-year survival compared with 14% for the highest-risk quartile. CONCLUSION: A generic surgical risk model can be of use in stratifying prognosis after major amputation. The heavy burden of hospital use by these patients suggests the need for better disease management for this high-risk, high-cost patient population.


Subject(s)
Amputation, Surgical , Leg/surgery , Quality Assurance, Health Care , United States Department of Veterans Affairs , Veterans , Adult , Aged , Amputation, Surgical/mortality , Hospitals, Veterans , Humans , Male , Middle Aged , Patient Readmission/statistics & numerical data , Prognosis , Risk Factors , Survival Analysis , Time Factors , United States
12.
Kidney Int ; 60(1): 300-8, 2001 Jul.
Article in English | MEDLINE | ID: mdl-11422765

ABSTRACT

BACKGROUND: Iron deficiency remains a common cause of hyporesponsiveness to epoetin in hemodialysis patients. However, considerable controversy exists regarding the best strategies for diagnosis and treatment. METHODS: As part of a multicenter randomized clinical trial of intravenous versus subcutaneous administration of epoetin, we made monthly determinations of serum iron, total iron binding capacity, percentage transferrin saturation, and serum ferritin. If a patient had serum ferritin <100 ng/mL or the combination of serum ferritin <400 ng/mL and a transferrin saturation <20%, he/she received parenteral iron, given as iron dextran 100 mg at ten consecutive dialysis sessions. We analyzed parenteral iron use during the trial, the effect of its administration on iron indices and epoetin dose, and the ability of the iron indices to predict a reduction in epoetin dose in response to parenteral iron administration. RESULTS: Eighty-seven percent of the 208 patients required parenteral iron to maintain adequate iron stores at an average dose of 1516 mg over 41.7 weeks, or 36 mg/week. Only two of 180 patients experienced serious reactions to intravenous iron administration. Two thirds of the patients receiving parenteral iron had a decrease in their epoetin requirement of at least 30 U/kg/week compared with 29% of patients who did not receive iron (P = 0.004). The average dose decrease 12 weeks after initiating iron therapy was 1763 U/week. A serum ferritin <200 ng/mL had the best positive predictive value (76%) for predicting a response to parenteral iron administration, but it still had limited clinical utility. CONCLUSIONS: Iron deficiency commonly develops during epoetin therapy, and parenteral iron administration may result in a clinically significant reduction in epoetin dose. The use of transferrin saturation or serum ferritin as an indicator for parenteral iron administration has limited utility.
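The 76% positive predictive value quoted for the ferritin <200 ng/mL cutoff is the share of cutoff-positive patients who truly responded to parenteral iron. A trivial helper makes the arithmetic explicit; the counts in the usage line are invented for illustration, not the trial's 2x2 table.

```python
def positive_predictive_value(true_pos: int, false_pos: int) -> float:
    """PPV = TP / (TP + FP): the probability that a patient flagged by the
    test (here, ferritin <200 ng/mL) actually responds to parenteral iron."""
    return true_pos / (true_pos + false_pos)

print(f"{positive_predictive_value(76, 24):.0%}")  # 76% with these example counts
```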


Subject(s)
Erythropoietin/therapeutic use , Hematinics/therapeutic use , Iron Deficiencies , Iron/blood , Renal Dialysis , Adult , Dose-Response Relationship, Drug , Epoetin Alfa , Erythropoietin/administration & dosage , Female , Ferritins/blood , Hematinics/administration & dosage , Humans , Infusions, Parenteral , Iron/administration & dosage , Iron/therapeutic use , Male , Middle Aged , Predictive Value of Tests , Recombinant Proteins
13.
Med Care ; 39(6): 627-34, 2001 Jun.
Article in English | MEDLINE | ID: mdl-11404645

ABSTRACT

Although well-designed randomized controlled trials (RCT) provide the strongest evidence regarding causation, only relatively recently have they been used by health services researchers to study the organization, delivery, quality, and outcomes of care. More recent yet is the extension of multisite RCTs to health services research. Such studies offer numerous methodological advantages over single-site trials: (1) enhanced external validity; (2) greater statistical power when studying conditions with a low incidence or prevalence, small event rate in the outcome (eg, mortality), and/or large variance in the outcome (eg, health care costs); and (3) rapid recruitment to provide health care organizations and policy makers with timely results. This paper begins by outlining the advantages of multisite RCTs over single-site trials. It then discusses both scientific challenges (ie, standardizing eligibility criteria, defining and standardizing the intervention, defining usual care, standardizing the data collection protocol, blinded outcome assessment, data management and analysis, measuring health care costs) and operational issues (ie, site selection, randomization procedures, patient accrual, maintaining enthusiasm, oversight) posed by multisite RCTs in health services research. Recommendations are offered to health services researchers interested in conducting such studies.


Subject(s)
Health Services Research/methods , Multicenter Studies as Topic , Randomized Controlled Trials as Topic , Cost Control , Health Care Costs , Humans , Outcome Assessment, Health Care , Patient Selection , Research Design , United States
14.
Annu Rev Med ; 52: 275-87, 2001.
Article in English | MEDLINE | ID: mdl-11160779

ABSTRACT

Measures of risk-adjusted outcome are particularly suited for the assessment of the quality of surgical care. The reliability of measures of quality that use surgical outcomes is enhanced by prospective data acquisition and should be adjusted for the preoperative severity of illness. Such measures should be based only on reliable and validated data, and they should apply state-of-the-art analytical methods. The risk-adjusted postoperative mortality rate is useful as a quality measure only in specialties and operations expected to have a high rate of postoperative deaths. Risk-adjusted complications are more common but are limited as a comparative measure of quality by a lack of uniform definitions and data collection mechanisms. In specialties in which the expected postoperative mortality is low, risk-adjusted functional outcomes are promising measures for the assessment of the quality of surgical care. Measures of cost and patient satisfaction should also be incorporated in systems designed to measure the quality and cost-effectiveness of surgical care.


Subject(s)
Quality of Health Care , Risk Adjustment/methods , Surgical Procedures, Operative/adverse effects , Surgical Procedures, Operative/standards , Treatment Outcome , Activities of Daily Living , Cost-Benefit Analysis , Data Collection/methods , Data Interpretation, Statistical , Hospital Mortality , Hospitals, Veterans , Humans , Length of Stay/statistics & numerical data , Morbidity , Patient Satisfaction , Quality Indicators, Health Care , Reproducibility of Results , Severity of Illness Index , Surgical Procedures, Operative/psychology , Survival Analysis , Total Quality Management/organization & administration , United States/epidemiology , United States Department of Veterans Affairs
15.
Ann Thorac Surg ; 72(6): 2026-32, 2001 Dec.
Article in English | MEDLINE | ID: mdl-11789788

ABSTRACT

BACKGROUND: There are limited data to help clinicians identify patients likely to have an improvement in quality of life following coronary artery bypass graft (CABG) surgery. We evaluated the relationship between preoperative health status and changes in quality of life following CABG surgery. METHODS: We evaluated 1,744 patients enrolled in the VA Cooperative Processes, Structures, and Outcomes in Cardiac Surgery study who completed preoperative and 6-month postoperative Short Form-36 (SF-36) surveys. The primary outcome was change in the Mental Component Summary (MCS) and Physical Component Summary (PCS) scores from the SF-36. RESULTS: On average, physical and mental health status improved following the operation. Preoperative health status was the major determinant of change in quality of life following surgery, independent of anginal burden and other clinical characteristics. Patients with MCS scores less than 44 or PCS scores less than 38 were most likely to have an improvement in quality of life. Patients with higher preoperative scores were unlikely to have an improvement in quality of life. CONCLUSIONS: Patients with preoperative health status deficits are likely to have an improvement in their quality of life following CABG surgery. Alternatively, patients with relatively good preoperative health status are unlikely to have a quality of life benefit from surgery and the operation should primarily be performed to improve survival.


Subject(s)
Angina Pectoris/surgery , Coronary Artery Bypass/psychology , Postoperative Complications/psychology , Quality of Life , Activities of Daily Living/psychology , Aged , Angina Pectoris/psychology , Female , Health Status , Humans , Male , Middle Aged , Sick Role , Treatment Outcome
16.
J Clin Epidemiol ; 53(11): 1113-8, 2000 Nov.
Article in English | MEDLINE | ID: mdl-11106884

ABSTRACT

OBJECTIVE: To determine clinical and patient-centered factors predicting non-elective hospital readmissions. DESIGN: Secondary analysis from a randomized clinical trial. CLINICAL SETTING: Nine VA medical centers. PARTICIPANTS: Patients discharged from the medical service with diabetes mellitus, congestive heart failure, and/or chronic obstructive pulmonary disease (COPD). MAIN OUTCOME MEASUREMENT: Non-elective readmission within 90 days. RESULTS: Of 1378 patients discharged, 23.3% were readmitted. After controlling for hospital and intervention status, risk of readmission was increased if the patient had more hospitalizations and emergency room visits in the prior 6 months, higher blood urea nitrogen, lower mental health function, a diagnosis of COPD, and increased satisfaction with access to emergency care assessed on the index hospitalization. CONCLUSIONS: Both clinical and patient-centered factors identifiable at discharge are related to non-elective readmission. These factors identify high-risk patients and provide guidance for future interventions. The relationship of patient satisfaction measures to readmission deserves further study.


Subject(s)
Patient Readmission/statistics & numerical data , Diabetes Mellitus , Health Services Accessibility , Heart Failure , Humans , Lung Diseases, Obstructive , Multivariate Analysis , Patient Satisfaction , Quality of Life , Risk Factors , United States
17.
Health Serv Res ; 35(5 Pt 1): 995-1010, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11130808

ABSTRACT

OBJECTIVE: To explore the contribution of genes and environmental factors to variation in a common measure of self-reported health (i.e., a five-point Likert scale: excellent, very good, good, fair, and poor). DATA SOURCES: Data were analyzed from 4,638 male-male twin pair members of the Vietnam Era Twin (VET) Registry who responded to a 1987 health survey. STUDY DESIGN: Varying models for the relationship between genetic and environmental influences on self-reported health were tested in an attempt to explain the relative contributions of additive genetic, shared and nonshared environmental effects, and health conditions reported since 1975 to perceived health status. DATA COLLECTION: A mail and telephone survey of health was administered in 1987 to VET Registry twins. PRINCIPAL FINDINGS: Variance component estimates under the best-fitting model included a 39.6 percent genetic contribution to self-reported health. In a model which included the effect of health condition, genes accounted for 32.5 percent and health condition accounted for 15.0 percent of the variance in self-reported health. The magnitude of the genetic contribution to perceived health status was not significantly different in a model with or without health condition. CONCLUSIONS: These data suggest over one-third of the variability of self-reported health can be attributed to genes. Since perceived health status is a major predictor of morbidity, mortality, and health services utilization, future analyses should consider the role of heritable influences on traditional health services variables.
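As a simpler stand-in for the maximum-likelihood variance-component models fitted in the paper, the classical Falconer decomposition shows how twin-pair correlations split variance into additive genetic (A), shared (C) and nonshared (E) environmental components. The correlations below are assumptions chosen to reproduce a roughly 40 percent genetic share, not the study's estimates.

```python
def falconer_ace(r_mz: float, r_dz: float) -> tuple[float, float, float]:
    """Falconer's formulas applied to monozygotic (r_mz) and dizygotic
    (r_dz) twin correlations, returning the A, C and E variance shares."""
    a = 2 * (r_mz - r_dz)  # additive genetic share (heritability)
    c = 2 * r_dz - r_mz    # shared environmental share
    e = 1 - r_mz           # nonshared environment (plus measurement error)
    return a, c, e

# Illustrative correlations, not the study's estimates:
print(falconer_ace(r_mz=0.45, r_dz=0.25))  # ≈ (0.40, 0.05, 0.55)
```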


Subject(s)
Environmental Exposure/adverse effects , Genetic Predisposition to Disease/genetics , Health Status , Veterans , Aged , Analysis of Variance , Health Behavior , Health Knowledge, Attitudes, Practice , Health Services/statistics & numerical data , Health Surveys , Humans , Male , Middle Aged , Models, Genetic , Morbidity , Mortality , Predictive Value of Tests , Registries , Risk Factors , Surveys and Questionnaires , United States/epidemiology , Veterans/psychology , Veterans/statistics & numerical data
18.
JAMA ; 284(14): 1806-13, 2000 Oct 11.
Article in English | MEDLINE | ID: mdl-11025833

ABSTRACT

CONTEXT: Numerous studies have demonstrated that hearing aids provide significant benefit for a wide range of sensorineural hearing loss, but no carefully controlled, multicenter clinical trials comparing hearing aid efficacy have been conducted. OBJECTIVE: To compare the benefits provided to patients with sensorineural hearing loss by 3 commonly used hearing aid circuits. DESIGN: Double-blind, 3-period, 3-treatment crossover trial conducted from May 1996 to February 1998. SETTING: Eight audiology laboratories at Department of Veterans Affairs medical centers across the United States. PATIENTS: A sample of 360 patients with bilateral sensorineural hearing loss (mean age, 67.2 years; 57% male; 78.6% white). INTERVENTION: Patients were randomly assigned to 1 of 6 sequences of linear peak clipper (PC), compression limiter (CL), and wide dynamic range compressor (WDRC) hearing aid circuits. All patients wore each of the 3 hearing aids, which were installed in identical casements, for 3 months. MAIN OUTCOME MEASURES: Results of tests of speech recognition, sound quality, and subjective hearing aid benefit, administered at baseline and after each 3-month intervention with and without a hearing aid. At the end of the experiment, patients ranked the 3 hearing aid circuits. RESULTS: Each circuit markedly improved speech recognition, with greater improvement observed for soft and conversationally loud speech (all 52-dB and 62-dB conditions, P

Subject(s)
Hearing Aids , Hearing Loss, Sensorineural/therapy , Adult , Aged , Aged, 80 and over , Auditory Perception , Cross-Over Studies , Double-Blind Method , Female , Hearing Tests , Humans , Male , Middle Aged , Patient Satisfaction
19.
J Am Coll Cardiol ; 36(4): 1152-8, 2000 Oct.
Article in English | MEDLINE | ID: mdl-11028464

ABSTRACT

OBJECTIVES: The goal of this study was to compare long-term survival and valve-related complications between bioprosthetic and mechanical heart valves. BACKGROUND: Different heart valves may have different patient outcomes. METHODS: Five hundred seventy-five patients undergoing single aortic valve replacement (AVR) or mitral valve replacement (MVR) at 13 VA medical centers were randomized to receive a bioprosthetic or mechanical valve. RESULTS: By survival analysis at 15 years, all-cause mortality after AVR was lower with the mechanical valve versus bioprosthesis (66% vs. 79%, p = 0.02) but not after MVR. Primary valve failure occurred mainly in patients <65 years of age (bioprosthesis vs. mechanical, 26% vs. 0%, p < 0.001 for AVR and 44% vs. 4%, p = 0.0001 for MVR), and in patients ≥65 years after AVR, primary valve failure in bioprosthesis versus mechanical valve was 9 ± 6% versus 0%, p = 0.16. Reoperation was significantly higher for bioprosthetic AVR (p = 0.004). Bleeding occurred more frequently in patients with mechanical valve. There were no statistically significant differences for other complications, including thromboembolism and all valve-related complications between the two randomized groups. CONCLUSIONS: At 15 years, patients undergoing AVR had a better survival with a mechanical valve than with a bioprosthetic valve, largely because primary valve failure was virtually absent with mechanical valve. Primary valve failure was greater with bioprosthesis, both for AVR and MVR, and occurred at a much higher rate in those aged <65 years; in those aged ≥65 years, primary valve failure after AVR was not significantly different between bioprosthesis and mechanical valve. Reoperation was more common for AVR with bioprosthesis. Thromboembolism rates were similar in the two valve prostheses, but bleeding was more common with a mechanical valve.


Subject(s)
Aortic Valve , Bioprosthesis , Heart Valve Prosthesis , Mitral Valve , United States Department of Veterans Affairs/statistics & numerical data , Aged , Cause of Death , Follow-Up Studies , Heart Valve Diseases/mortality , Heart Valve Diseases/surgery , Heart Valve Prosthesis Implantation , Humans , Male , Middle Aged , Postoperative Complications , Surveys and Questionnaires , Survival Rate , United States/epidemiology
20.
Diabetes Care ; 23(10): 1478-85, 2000 Oct.
Article in English | MEDLINE | ID: mdl-11023140

ABSTRACT

OBJECTIVE: Microalbuminuria can reflect the progress of microvascular complications and may be predictive of macrovascular disease in type 2 diabetes. The effect of intensive glycemic control on microalbuminuria in patients in the U.S. who have had type 2 diabetes for several years has not previously been evaluated. RESEARCH DESIGN AND METHODS: We randomly assigned 153 male patients to either intensive treatment (INT) (goal HbA1c 7.1%) or to standard treatment (ST) (goal HbA1c 9.1%; P = 0.001), and data were obtained during a 2-year period. Mean duration of known diabetes was 8 years, mean age of the patients was 60 years, and patients were well matched at baseline. We obtained 3-h urine samples for each patient at baseline and annually and defined microalbuminuria as an albumin:creatinine ratio of 0.03-0.30. All patients were treated with insulin and received instructions regarding diet and exercise. Hypertension and dyslipidemia were treated with similar goals in each group. RESULTS: A total of 38% of patients had microalbuminuria at entry and were evenly assigned to both treatment groups. INT retarded the progression of microalbuminuria during the 2-year period: the changes in albumin:creatinine ratio from baseline to 2 years of INT versus ST were 0.045 vs. 0.141, respectively (P = 0.046). Retardation of progressive urinary albumin excretion was most pronounced in those patients who entered the study with microalbuminuria and were randomized to INT. Patients entering with microalbuminuria had a deterioration in creatinine clearance at 2 years regardless of the intensity of glycemic control. In the group entering without microalbuminuria, the subgroup receiving ST had a lower percentage of patients with a macrovascular event (17%) than the subgroup receiving INT (36%) (P = 0.03). Use of ACE inhibitors or calcium-channel blockers was similarly distributed among the groups. CONCLUSIONS: Intensive glycemic control retards microalbuminuria in patients who have had type 2 diabetes for several years but may not lessen the progressive deterioration of glomerular function. Increases in macrovascular event rates in the subgroup entering without albuminuria who received INT remain unexplained but could reflect early worsening, as observed with microvascular disease in the Diabetes Control and Complications Trial.
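The trial's working definition of microalbuminuria (albumin:creatinine ratio 0.03-0.30 from a 3-h urine sample) reduces to a simple range check. The helper below is illustrative only; treating both boundaries as inclusive is an assumption, since the abstract does not specify.

```python
def has_microalbuminuria(albumin_creatinine_ratio: float) -> bool:
    """Trial definition: albumin:creatinine ratio of 0.03-0.30; values
    above 0.30 would suggest overt albuminuria, values below 0.03 none."""
    return 0.03 <= albumin_creatinine_ratio <= 0.30

print(has_microalbuminuria(0.10))  # True
print(has_microalbuminuria(0.01))  # False: below the microalbuminuria range
```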


Subject(s)
Albuminuria , Blood Glucose/metabolism , Diabetes Mellitus, Type 2/therapy , Diabetes Mellitus, Type 2/urine , Insulin/therapeutic use , Adult , Aged , Blood Glucose Self-Monitoring , Creatinine/urine , Diabetes Mellitus, Type 2/blood , Drug Administration Schedule , Exercise , Follow-Up Studies , Glycated Hemoglobin/analysis , Humans , Hypoglycemic Agents/therapeutic use , Male , Middle Aged , Smoking Cessation , Time Factors