1.
Arthritis Rheum ; 57(5): 707-15, 2007 Jun 15.
Article in English | MEDLINE | ID: mdl-17530663

ABSTRACT

OBJECTIVE: To evaluate the American College of Rheumatology (ACR) starter set of quality measures for rheumatoid arthritis (RA) in an actual patient cohort that preceded publication of the quality measures. METHODS: We retrospectively applied the 2006 ACR quality criteria to a prospectively studied cohort of 568 patients with RA treated by 1,932 unique physicians, including 255 different rheumatologists, between 1999 and 2003. Data on performance were obtained from self-report surveys and medical record review within 12 months. RESULTS: At least 1 joint examination was performed in 98% of patients. Patient and physician global assessments were reported for 79% and 74% of patients, respectively. A total of 85% of patients received disease-modifying antirheumatic drugs (DMARDs). DMARD adjustments were made for 50% of patients in whom increasing disease activity was noted at least once and for 64% of patients in whom increasing disease activity was noted during 2 (of 4) 3-month periods within the year. Compared with self-report surveys, medical records substantially underreported performance on quality measures. CONCLUSION: The ACR-endorsed quality measures for RA can be assessed using available data sources. When both self-report and medical record data were used, adherence rates for these measures, which are designed to serve as minimum standards of care, were moderate or high for most measures. Before indicators are used to compare quality across groups, specific strategies for operationalizing the measures and for using accurate data sources to assess adherence should be defined.
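[Editor's note] As an illustration of the adherence-rate calculation this abstract describes, the sketch below (not the authors' code) combines self-report and chart-review flags for a single measure; all field names and data are hypothetical.

```python
# Hedged sketch: estimating adherence to one RA quality measure from two data
# sources, mirroring the finding that charts alone under-report performance.
# Field names and the toy cohort are hypothetical.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    joint_exam_selfreport: bool   # measure documented in the patient survey
    joint_exam_chart: bool        # measure documented in the medical record

def adherence_rate(records, use_both_sources=True):
    """Proportion of patients meeting the measure; with use_both_sources=True
    a patient counts as adherent if either source documents the care process."""
    met = 0
    for r in records:
        if use_both_sources:
            met += r.joint_exam_selfreport or r.joint_exam_chart
        else:
            met += r.joint_exam_chart
    return met / len(records)

cohort = [PatientRecord(True, True), PatientRecord(True, False), PatientRecord(False, False)]
print(adherence_rate(cohort))                          # ~0.67 using either source
print(adherence_rate(cohort, use_both_sources=False))  # ~0.33 using chart review alone
```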


Subject(s)
Arthritis, Rheumatoid/therapy , Outcome and Process Assessment, Health Care/methods , Quality Assurance, Health Care/methods , Rheumatology/standards , Adult , Aged , Antirheumatic Agents/therapeutic use , Arthritis, Rheumatoid/physiopathology , Cohort Studies , Disability Evaluation , Documentation , Female , Health Status , Humans , Joints/physiopathology , Male , Middle Aged , Reproducibility of Results , Retrospective Studies , Rheumatology/methods , Rheumatology/statistics & numerical data , Self-Examination , Severity of Illness Index , Societies, Medical , United States
2.
Arthritis Rheum ; 55(6): 884-91, 2006 Dec 15.
Article in English | MEDLINE | ID: mdl-17139665

ABSTRACT

OBJECTIVE: To construct quality measures with measurement validity and meaning for clinicians. METHODS: We conducted a prospective cohort study of rates of change in disease-modifying antirheumatic drug (DMARD) and/or systemic corticosteroid drug or dose for 568 patients with rheumatoid arthritis (RA) across 6,159 clinical encounters within 12 months to examine how changes in clinical specifications affect adherence. RESULTS: Rates of DMARD change were sensitive to specifications regarding the intensity of disease activity (severe or moderate), the duration of specified disease activity, and the length of the observation period. Over 12 months, the proportions of 377 patients with severe disease activity observed for 1-month, 2-month, and 3-month time blocks who had a change in DMARD drug or dose were 36%, 57%, and 74%, respectively. Over 12 months, a change in DMARD drug or dose was observed for 44%, 50%, and 68% of 377 patients with severe disease within 3 months, 6 months, and 12 months, respectively, of the patient meeting criteria for severe disease activity. A change in DMARD drug or dose was observed for 21%, 23%, and 34% of 149 patients with moderate disease activity within 3, 6, and 12 months, respectively, of the patient meeting criteria for moderate disease activity. CONCLUSION: Rates of pharmacologic interventions for patients with moderate and severe RA disease activity vary substantially by intensity and duration of disease activity and by the length of the period for observing change. Lack of precision in explicit process criteria could substantially mislead comparisons of quality of care across comparison groups.
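[Editor's note] The sketch below illustrates, with invented dates, how the proportion of patients with a DMARD change depends on the length of the observation window after the disease-activity criterion is met, which is the sensitivity this abstract reports; it is not the study's analysis code.

```python
# Hedged sketch: proportion of patients with a DMARD drug/dose change within
# a given window after first meeting a disease-activity criterion.
# Patients and dates are hypothetical; months are approximated as 30 days.
from datetime import date

def changed_within(criteria_date, change_dates, window_days):
    """True if any DMARD change falls within window_days after criteria_date."""
    return any(0 <= (d - criteria_date).days <= window_days for d in change_dates)

def proportion_changed(patients, window_days):
    hits = sum(changed_within(c, ch, window_days) for c, ch in patients)
    return hits / len(patients)

# Each patient: (date criteria met, list of DMARD change dates)
patients = [
    (date(2001, 1, 10), [date(2001, 5, 2)]),
    (date(2001, 2, 1), []),
    (date(2001, 3, 15), [date(2001, 4, 1)]),
]
for months in (3, 6, 12):
    print(months, proportion_changed(patients, months * 30))
```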


Subject(s)
Antirheumatic Agents/therapeutic use , Arthritis, Rheumatoid/drug therapy , Evidence-Based Medicine , Outcome and Process Assessment, Health Care/methods , Rheumatology/standards , Adrenal Cortex Hormones/therapeutic use , Adult , Aged , Arthritis, Rheumatoid/physiopathology , Cohort Studies , Female , Health Status Indicators , Humans , Male , Medical Records , Middle Aged , Prospective Studies , Quality of Health Care , Severity of Illness Index
3.
J Gerontol A Biol Sci Med Sci ; 56(6): M366-72, 2001 Jun.
Article in English | MEDLINE | ID: mdl-11382797

ABSTRACT

BACKGROUND: The Mini-Nutritional Assessment (MNA) is a validated assessment instrument for nutritional problems, but its length limits its usefulness for screening. We sought to develop a screening version of this instrument, the MNA-SF, that retains good diagnostic accuracy. METHODS: We reanalyzed data from France that were used to develop the original MNA and combined these with data collected in Spain and New Mexico. Of the 881 subjects with complete MNA data, 151 were from France, 400 were from Spain, and 330 were from New Mexico. Independent ratings of clinical nutritional status were available for 142 of the French subjects. Overall, 73.8% were community dwelling, and mean age was 76.4 years. Items were chosen for the MNA-SF on the basis of item correlation with the total MNA score and with clinical nutritional status, internal consistency, reliability, completeness, and ease of administration. RESULTS: After testing multiple versions, we identified an optimal six-item MNA-SF with a total score ranging from 0 to 14. The cut-point score for the MNA-SF was calculated using clinical nutritional status as the gold standard (n = 142) and using the total MNA score (n = 881). The MNA-SF was strongly correlated with the total MNA score (r = 0.945). Using an MNA-SF score of ≥ 11 as normal, sensitivity was 97.9%, specificity was 100%, and diagnostic accuracy was 98.7% for predicting undernutrition. CONCLUSIONS: The MNA-SF can identify persons with undernutrition and can be used in a two-step screening process in which persons identified as "at risk" on the MNA-SF receive additional assessment to confirm the diagnosis and plan interventions.
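[Editor's note] The sketch below shows how a screening cut-point of the kind reported here (MNA-SF ≥ 11 treated as normal) and the accompanying diagnostic statistics can be computed against a gold-standard label; the scores and labels are invented, not study data.

```python
# Hedged sketch: applying the MNA-SF cut-point and computing sensitivity,
# specificity, and diagnostic accuracy against a gold standard. Toy data only.
def mnasf_at_risk(score):
    return score < 11  # scores range 0-14; >= 11 is treated as normal here

def diagnostic_stats(scores, gold_undernourished):
    tp = fp = tn = fn = 0
    for s, truth in zip(scores, gold_undernourished):
        pred = mnasf_at_risk(s)
        if pred and truth:
            tp += 1
        elif pred and not truth:
            fp += 1
        elif not pred and truth:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

print(diagnostic_stats([7, 12, 9, 14, 10], [True, False, True, False, False]))
```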


Subject(s)
Geriatrics/methods , Mass Screening , Nutrition Assessment , Nutrition Disorders/diagnosis , Professional Practice , Aged , Discriminant Analysis , Female , Humans , Male
4.
J Gerontol A Biol Sci Med Sci ; 55(6): M317-21, 2000 Jun.
Article in English | MEDLINE | ID: mdl-10843351

ABSTRACT

OBJECTIVES: This randomized controlled trial studied the effects of a low- to moderate-intensity group exercise program on strength, endurance, mobility, and fall rates in fall-prone elderly men with chronic impairments. METHODS: Fifty-nine community-living men (mean age = 74 years) with specific fall risk factors (i.e., leg weakness, impaired gait or balance, previous falls) were randomly assigned to a control group (n = 28) or to a 12-week group exercise program (n = 31). Exercise sessions (90 minutes, three times per week) focused on increasing strength and endurance and improving mobility and balance. Outcome measures included isokinetic strength and endurance, five physical performance measures, and self-reported physical functioning, health perception, activity level, and falls. RESULTS: Exercisers showed significant improvement in measures of endurance and gait. Isokinetic endurance increased 21% for right knee flexion and 26% for extension. Exercisers had a 10% increase (p < .05) in distance walked in six minutes and improved (p < .05) scores on an observational gait scale. Isokinetic strength improved only for right knee flexion. Exercise had no significant effect on hip or ankle strength, balance, self-reported physical functioning, or number of falls. Activity level increased within the exercise group. When fall rates were adjusted for activity level, the exercisers had a lower 3-month fall rate than controls (6 falls/1,000 hours of activity vs 16.2 falls/1,000 hours, p < .05). DISCUSSION: These findings suggest that exercise can improve endurance, strength, gait, and function in chronically impaired, fall-prone elderly persons. In addition, exercise was associated with reduced fall rates once falls were adjusted for level of activity.
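[Editor's note] The activity-adjusted fall rate quoted above (falls per 1,000 hours of activity) is simple arithmetic; the sketch below illustrates it with invented fall counts and activity hours chosen only to land near the reported group rates.

```python
# Hedged sketch: activity-adjusted fall rate, i.e., falls per 1,000 hours of
# activity. The fall counts and activity hours below are invented.
def falls_per_1000_hours(n_falls, activity_hours):
    return 1000.0 * n_falls / activity_hours

print(falls_per_1000_hours(6, 1000))   # 6.0 falls/1,000 hours (illustrative)
print(falls_per_1000_hours(81, 5000))  # 16.2 falls/1,000 hours (illustrative)
```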


Subject(s)
Accidental Falls/prevention & control , Exercise , Accidental Falls/statistics & numerical data , Aged , Gait , Humans , Male , Motor Activity , Muscle, Skeletal/physiology , Peer Group , Postural Balance
5.
J Am Geriatr Soc ; 47(7): 873-8, 1999 Jul.
Article in English | MEDLINE | ID: mdl-10404935

ABSTRACT

OBJECTIVE: To develop and test the effectiveness of a 5-item version of the Geriatric Depression Scale (GDS) in screening for depression in a frail community-dwelling older population. DESIGN: A cross-sectional study. SETTING: A geriatric outpatient clinic at the Sepulveda VA Medical Center, Sepulveda, California. PARTICIPANTS: A total of 74 frail outpatients (98.6% male; mean age 74.6 years) enrolled in an ongoing trial. MEASUREMENTS: Subjects had a comprehensive geriatric assessment that included a structured clinical evaluation for depression with geropsychiatric consultation. A 5-item version of the GDS was created from the 15-item GDS by selecting the items with the highest Pearson chi-square association with the clinical diagnosis of depression. Sensitivity, specificity, diagnostic accuracy, and positive and negative predictive values were calculated for the 15-item GDS and the new 5-item scale. RESULTS: Subjects had a mean GDS score of 6.2 (range 0-15). Clinical evaluation found that 46% of subjects were depressed. The depressed and not depressed groups were similar with regard to demographics, mental status, educational level, and number of chronic medical conditions. Using clinical evaluation as the gold standard for depression, the 5-item GDS (compared with the 15-item GDS results shown in parentheses) had a sensitivity of .97 (.94), specificity of .85 (.83), positive predictive value of .85 (.82), negative predictive value of .97 (.94), and accuracy of .90 (.88) for predicting depression. Significant agreement was found between the depression diagnosis and the 5-item GDS (kappa = 0.81). Several other short forms were tested and are discussed. The mean administration times for the 5- and 15-item GDS were .9 and 2.7 minutes, respectively. CONCLUSIONS: The 5-item GDS was as effective as the 15-item GDS for depression screening in this population, with a marked reduction in administration time. If validated elsewhere, it may prove to be a preferred screening test for depression.
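[Editor's note] The screening statistics reported above (sensitivity, specificity, predictive values, accuracy, kappa) all derive from a 2x2 table of screen result versus clinical diagnosis. The sketch below shows that computation; the cell counts are hypothetical, chosen only to land near the reported figures, and are not the study data.

```python
# Hedged sketch: screening-test metrics from a 2x2 table (screen vs diagnosis).
# Counts are hypothetical illustrations, not the published data.
def screening_metrics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / n
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = acc
    p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
    kappa = (p_o - p_e) / (1 - p_e)
    return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                accuracy=acc, kappa=kappa)

print(screening_metrics(tp=33, fp=6, fn=1, tn=34))
```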


Subject(s)
Depressive Disorder/diagnosis , Frail Elderly , Geriatric Assessment , Interview, Psychological/methods , Mass Screening/methods , Psychiatric Status Rating Scales/standards , Activities of Daily Living , Aged , Aged, 80 and over , Cross-Sectional Studies , Depressive Disorder/classification , Female , Health Status , Humans , Likelihood Functions , Male , Mental Health , Prevalence , Reproducibility of Results , Sensitivity and Specificity , Severity of Illness Index
6.
Aging (Milano) ; 10(6): 479-89, 1998 Dec.
Article in English | MEDLINE | ID: mdl-10078318

ABSTRACT

Managing acute illness is an important aspect of medical care for nursing home residents, but few data are available on the nature of acute illness in this setting. The aims of this study were to determine the incidence, etiologies, risk factors, and outcomes of acute illness in nursing home residents. This was a prospective cohort study of residents at one Veterans Administration nursing home (N = 140). Acute illness episodes were identified prospectively for one year through staff interviews and medical record review. Etiologies of acute illness were determined based on standardized criteria. Subjects were followed for three years to determine hospital utilization, discharge location, and survival. There were 113 acute illness episodes identified (0.59 episodes per subject per month). The most common etiologies were pneumonia (33% of episodes) and urinary tract infection (27%). Significant risk factors for acute illness included anemia, dependence in mobility, and surveillance time (i.e., duration of time monitored for illness episodes) in the nursing home (model chi-square = 27.16, p < 0.001). Subjects who developed acute illness had increased hospital utilization during the first year of follow-up (p = 0.034); they were also less likely to be discharged home by both one year (chi-square = 12.37, p < 0.001) and two years of follow-up (chi-square = 9.45, p = 0.009). When short-stay hospice and respite residents were excluded, subjects who developed acute illness had lower 3-year survival (log-rank = 4.97, p = 0.026), and the rate of acute illness episodes (i.e., number per month monitored) predicted 3-year mortality (Cox proportional hazards, p < 0.001). In conclusion, acute illness is extremely common among nursing home residents and is most often due to infection. The occurrence of acute illness identifies residents who have increased hospital utilization, are less likely to return home, and have decreased long-term survival.
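[Editor's note] The incidence figure above is expressed per subject-month of surveillance; the minimal sketch below shows that person-time denominator with fully invented numbers and is not the study's analysis.

```python
# Hedged sketch: incidence of acute-illness episodes per subject-month of
# surveillance (a person-time rate). All numbers are invented.
def episodes_per_subject_month(n_episodes, total_subject_months):
    return n_episodes / total_subject_months

# e.g., 24 episodes observed across 300 resident-months of surveillance
print(round(episodes_per_subject_month(24, 300), 2))
```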


Subject(s)
Acute Disease , Nursing Homes , Aged , Cohort Studies , Female , Forecasting , Hospitalization , Humans , Incidence , Male , Proportional Hazards Models , Prospective Studies , Resuscitation , Risk Factors , Survival Analysis , United States , United States Department of Veterans Affairs
7.
Aging (Milano) ; 9(1-2): 127-35, 1997.
Article in English | MEDLINE | ID: mdl-9177596

ABSTRACT

The use of an obstacle course to quantify gait, balance and functional mobility in elderly persons, particularly to assess objectively changes following exercise and rehabilitation interventions, has not been extensively developed or tested. In this study, we describe an 18-item obstacle course developed as an outcome measure for an exercise intervention among fall-prone elderly men. Reliability and validity of the obstacle course was tested in a group of 58 community-living elderly men (mean age = 75 years). Each subject's performance was videotaped and timed. The videotapes were scored by a physical therapist and a physician. Inter-rater reliability between the raters was high (Kappa = 0.96, p < 0.0001). Both the obstacle course score and time correlated significantly with gait velocity, a 6-minute walk test, and a performance-oriented instrument of gait and balance. Obstacle course scores showed significant improvement among the most impaired subjects, but not among higher functioning subjects following a 3-month exercise intervention. These results suggest that an obstacle course may be a useful and valid method for measuring outcomes related to mobility tasks in selected elderly populations. Further work is needed to determine in which populations, and for which outcomes, an obstacle course is better than simpler performance-based measures.
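[Editor's note] The inter-rater reliability reported above is a Cohen's kappa between two raters scoring the same videotapes. The sketch below computes kappa from paired item scores; the ratings are invented for illustration.

```python
# Hedged sketch: Cohen's kappa for agreement between two raters scoring the
# same obstacle-course performances. Ratings below are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

pt_scores = [2, 3, 3, 1, 2, 3, 0, 2]  # physical therapist's item scores (toy data)
md_scores = [2, 3, 3, 1, 2, 2, 0, 2]  # physician's item scores (toy data)
print(round(cohens_kappa(pt_scores, md_scores), 2))
```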


Subject(s)
Aged/physiology , Exercise Test/methods , Gait/physiology , Motor Activity/physiology , Postural Balance/physiology , Health Status , Humans , Male , Reproducibility of Results
8.
Aging (Milano) ; 7(3): 212-7, 1995 Jun.
Article in English | MEDLINE | ID: mdl-8547380

ABSTRACT

The randomized controlled trial of the Geriatric Evaluation Unit (GEU) at the Sepulveda Veterans Hospital was the first to document the clinical and cost-effectiveness of hospital-based comprehensive geriatric assessment (CGA). Frail elderly inpatients were assigned randomly to the GEU for CGA, therapy, rehabilitation, and placement (N = 63), or to standard hospital care (N = 60). At one year, GEU patients had much lower mortality (24% vs 48%) and were less likely to have been discharged to a nursing home (NH) (13% vs 30%) or to have spent any time in NHs (27% vs 47%). GEU patients were more likely to improve in personal self-maintenance and morale. Further, controls had substantially more acute-care hospital days, NH days, and hospital readmissions, resulting in higher direct institutional care costs, especially after survival adjustment. Here, we report the results of long-term follow-up. There was a significant survival effect through two years. Despite prolongation of life, there was no indication that quality of life was worse for survivors in the GEU group. In fact, the proportion of persons independent in ≥ 2 ADLs at two years was somewhat higher for GEU patients (0.44) than controls (0.33) (z = 1.27; p = 0.056). By three years, 43% of GEU subjects and 38% of controls were still alive. Over the entire 3-year period, the per capita direct cost difference was not significant, either before or after survival adjustment (unadjusted: $37,091 GEU vs $34,205 control; survival-adjusted: $54,315 GEU vs $63,362 control; p = 0.17).(ABSTRACT TRUNCATED AT 250 WORDS)
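[Editor's note] One common way to form a survival-adjusted cost of the kind compared above is to divide a group's direct institutional costs by the survival time it accrued. The sketch below shows that calculation under this assumption, with entirely invented figures; the study's exact adjustment method may differ.

```python
# Hedged sketch: a survival-adjusted cost comparison, i.e., total direct
# institutional costs divided by years of survival accrued. Figures invented.
def cost_per_survival_year(total_direct_costs, total_years_survived):
    return total_direct_costs / total_years_survived

print(round(cost_per_survival_year(2_300_000, 120)))  # hypothetical group A
print(round(cost_per_survival_year(2_100_000, 95)))   # hypothetical group B
```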


Subject(s)
Geriatric Assessment , Health Care Costs , Health Services/statistics & numerical data , Activities of Daily Living , Aged , Humans , Longitudinal Studies , Nursing Homes/economics , Survival Analysis
9.
J Am Geriatr Soc ; 43(3): 301-7, 1995 Mar.
Article in English | MEDLINE | ID: mdl-7884123

ABSTRACT

OBJECTIVE: To determine patient- and treatment-related factors predictive of health outcomes. DESIGN: Secondary analysis of a randomized trial with 6-month follow-up. After bivariate and three-way analyses in the total sample were used to screen outcome predictors and interactions among baseline variables, multivariate logistic regression was used to model outcomes. SETTING: A county general hospital in central Stockholm, and patients' homes nearby. PATIENTS: Hospital inpatients who were stable for discharge from acute care, had at least one chronic condition, and were dependent in 1 to 5 Katz activities of daily living (ADLs) were included. Subjects (mean age = 81.1 years) were randomized to "team" (n = 150) or "usual care" (n = 99). INTERVENTIONS: Team patients were eligible for in-home primary care by an interdisciplinary team that included a physician, a physical therapist, and 24-hour nursing services, with geriatric consultation where necessary. "Usual-care" patients received standard district nurse-administered services at home upon hospital discharge. MEASUREMENTS: Demographic, functional status, and medical characteristics were measured at randomization. Outcomes included survival and higher ADL, instrumental ADL (IADL), and outdoor ambulation scores. MAIN RESULTS: Multiple medical, social, behavioral, and functional factors were associated with outcomes. Primary cardiac disease, number of prescription drugs, alcohol abstinence, and baseline mental status all influenced 6-month survival. Controlling for other factors, team care improved the likelihood of ambulation independent of personal assistance at follow-up (P = .027), with an estimated 10 patients treated for each 1 who benefited. Further, rehabilitative in-home team care neutralized mortality and functional risk factors (low number of baseline contacts and coresidence) apparent in usual care. CONCLUSIONS: Heterogeneous clinical populations of older patients contain many prevalent characteristics important to outcomes. Secondary analysis of trials, including interactions, identifies treatable and untreatable risks, which program components may be effective, and who benefits.
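[Editor's note] The "10 patients treated for each 1 who benefited" figure is a number needed to treat, the reciprocal of the absolute difference in outcome proportions. The sketch below shows the arithmetic with illustrative proportions; they are not the trial's actual values.

```python
# Hedged sketch: number needed to treat (NNT) from outcome proportions in the
# treated and control groups. Proportions below are illustrative only.
def number_needed_to_treat(p_outcome_treated, p_outcome_control):
    absolute_benefit = p_outcome_treated - p_outcome_control
    return 1.0 / absolute_benefit

# e.g., independent outdoor ambulation in 35% of team-care patients vs 25% of
# usual-care patients implies roughly 10 patients treated per 1 benefiting.
print(round(number_needed_to_treat(0.35, 0.25)))  # 10
```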


Subject(s)
Aftercare/organization & administration , Home Care Services, Hospital-Based/organization & administration , Outcome Assessment, Health Care , Patient Care Team/organization & administration , Aged , Aged, 80 and over , Female , Hospitals, County , Humans , Logistic Models , Male , Multivariate Analysis , Risk Assessment , Sweden
11.
Gerontologist ; 34(5): 652-7, 1994 Oct.
Article in English | MEDLINE | ID: mdl-7959133

ABSTRACT

Geriatric evaluation and management units (GEMs) are designed to improve the functional health and placement of frail elderly hospital inpatients. We surveyed Department of Veterans Affairs (VA) GEMs to describe their care patterns and organization. GEMs meeting consensus standards (n = 46) varied considerably. Hospital, GEM, and patient-admission factors (e.g., hospital psychiatric mix, GEM location, proportion of GEM admissions from nursing homes) predicted length-of-stay, readmission rate, and discharge status. Ongoing monitoring may improve the effectiveness of VA GEMs systemwide.


Subject(s)
Geriatric Assessment , Hospitals, Veterans/organization & administration , Aged , Hospital Units , Humans , United States , United States Department of Veterans Affairs
13.
J Subst Abuse ; 1(4): 417-30, 1989.
Article in English | MEDLINE | ID: mdl-2485289

ABSTRACT

Memory impairments related to alcoholism and to age (young, middle, and old) were examined as a function of educational level (low, high). Factoring of 14 different memory test scores from 93 alcoholics and 73 controls into four components indicated that alcohol-related impairments in verbal memory were observed in adults with low, but not high, levels of education. Similarly, age-related decrements in visual-spatial and verbal memory tasks (Components I and II) affected mainly low-education alcoholics and controls. On these components, age and alcoholism did not interact but were additive. Effects of education were reflected in verbal but not nonverbal tasks (Auditory and Design Recognition, Components III and IV). Neither years of heavy drinking, lifetime consumption, nor abstinence (80% had less than 7 weeks of abstinence) predicted the component scores of alcoholics, whereas age or education accounted for significant variance in the visual-spatial, verbal, and design recognition components.
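[Editor's note] The analysis above reduces a 14-test memory battery to four components. The sketch below illustrates that kind of dimensionality reduction using PCA as a stand-in; the paper's exact factoring method may differ, and the data here are random placeholders.

```python
# Hedged sketch: reducing a battery of 14 memory test scores to 4 components
# (PCA used as a stand-in for the paper's factoring). Data are random.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
scores = rng.normal(size=(166, 14))     # 166 subjects x 14 memory test scores
pca = PCA(n_components=4)
components = pca.fit_transform(scores)  # per-subject component scores
print(components.shape)                 # (166, 4)
print(pca.explained_variance_ratio_)    # variance captured by each component
```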


Subject(s)
Alcohol Amnestic Disorder/diagnosis , Educational Measurement , Mental Recall , Adult , Age Factors , Aged , Alcohol Amnestic Disorder/psychology , Alcohol Amnestic Disorder/rehabilitation , Alcohol Drinking/adverse effects , Alcoholism/psychology , Alcoholism/rehabilitation , Humans , Male , Mental Recall/drug effects , Middle Aged , Neuropsychological Tests , Verbal Learning/drug effects
14.
J Gerontol ; 40(5): 601-4, 1985 Sep.
Article in English | MEDLINE | ID: mdl-4031409

ABSTRACT

Based on a model of inter- and intraitem processes in recognition, this study examined age differences in word recognition (emphasizing interitem elaborative rehearsal) and in design recognition (emphasizing intraitem perceptual processes) over delays of 2, 20, and 200 min. Because repeated exposures should increase intraitem integration, targets were repeated from first to second test halves. Young and old adults showed equivalent accuracy in design recognition and equivalent increases from first to second test halves; the predicted lack of intraitem age differences was supported. Young adults, however, were more accurate than old adults in word recognition, supporting the prediction of age differences in interitem processing. The decline across delays was different for words and designs but was parallel for both age groups. Young and old adults also had equivalent decision criteria and decision speeds.
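[Editor's note] The recognition accuracy and decision criteria discussed above are conventionally summarized with signal-detection measures. The sketch below computes d' and criterion c from hit and false-alarm rates as one standard approach; the rates are invented and this is not necessarily the authors' exact analysis.

```python
# Hedged sketch: signal-detection measures for recognition memory, i.e.,
# sensitivity (d') and response criterion (c) from hit and false-alarm rates.
# Rates below are invented for illustration.
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

print(dprime_and_criterion(0.85, 0.20))
```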


Subject(s)
Aging , Form Perception , Memory , Mental Recall , Visual Perception , Adult , Aged , Analysis of Variance , Humans , Middle Aged , Time Factors