Results 1 - 20 of 62
1.
Am J Occup Ther ; 78(4)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38885526

ABSTRACT

IMPORTANCE: Effective communication skills (CS) are essential for occupational therapists. The Gap-Kalamazoo Communication Skills Assessment Form (GKCSAF) is a standard tool for assessing the CS of medical residents. However, the interrater reliability for the nine CS domain scores ranges from poor to good. The intrarater reliability remains unclear. OBJECTIVE: To examine the inter- and intrarater reliability of the GKCSAF's nine domain scores and total score among occupational therapy interns. DESIGN: Repeated assessments with the GKCSAF. SETTING: Medical center psychiatry department. PARTICIPANTS: Twenty-five interns and 49 clients with mental illness, recruited from August 2020 to December 2021. OUTCOMES AND MEASURES: The transcripts of 50 evaluation interviews between clients and interns were used. Three independent raters assessed each transcript twice, at least 3 mo apart. RESULTS: The GKCSAF demonstrated poor interrater reliability for the nine domain scores (weighted κ = .08-.30) and the total score (intraclass correlation coefficient [ICC] = .22, 95% confidence interval [CI] [.10, .35]). The GKCSAF showed poor to intermediate intrarater reliability for the nine domain scores (weighted κ = .27-.73) and fair reliability for the total score (ICC = .69, 95% CI [.60, .77]). CONCLUSIONS AND RELEVANCE: The GKCSAF demonstrates poor interrater reliability and poor to intermediate intrarater reliability for the nine domain scores. However, it demonstrates fair intrarater reliability in assessing the overall CS performance of occupational therapy interns. Significant variations were observed when different raters assessed the same interns' CS, indicating inconsistencies in ratings. Consequently, it is advisable to conservatively interpret the CS ratings obtained with the GKCSAF. Plain-Language Summary: It is essential for occupational therapists to effectively communicate with clients. The Gap-Kalamazoo Communication Skills Assessment Form (GKCSAF) is a standard tool that is used to assess the communication skills of medical residents. The study authors used the GKCSAF with occupational therapy interns in a medical center psychiatry department to assess how effectively they interviewed clients with mental illness. This study aids occupational therapy personnel in the interpretation of GKCSAF results. The study findings also highlight the importance of developing reliable and standardized measures to assess communication skills in the field of occupational therapy.


Subject(s)
Clinical Competence , Communication , Internship and Residency , Occupational Therapy , Humans , Occupational Therapy/education , Reproducibility of Results , Male , Female , Adult , Observer Variation , Professional-Patient Relations , Mental Disorders/rehabilitation
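The reliability indices reported above (a weighted kappa per domain and an ICC with a 95% confidence interval for the total score) can be reproduced with standard Python tooling. A minimal sketch, assuming two raters' scores are available as arrays; the data, the linear weighting, and the use of the pingouin package are illustrative assumptions, not details taken from the study:

```python
import numpy as np
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal ratings (1-5) of one GKCSAF domain from two raters on ten transcripts.
rater_a = np.array([3, 4, 2, 5, 3, 4, 2, 3, 4, 5])
rater_b = np.array([3, 3, 2, 4, 4, 4, 3, 3, 5, 4])

# Weighted kappa for a single domain (linear weights here; the study's weighting scheme may differ).
print("weighted kappa =", round(cohen_kappa_score(rater_a, rater_b, weights="linear"), 2))

# ICC with 95% CI for hypothetical total scores, in long format (one row per transcript x rater).
totals_a = np.array([36, 40, 28, 45, 35, 41, 30, 33, 42, 44])
totals_b = np.array([34, 37, 30, 41, 38, 40, 33, 32, 44, 41])
long = pd.DataFrame({
    "transcript": list(range(10)) * 2,
    "rater": ["A"] * 10 + ["B"] * 10,
    "total": np.concatenate([totals_a, totals_b]),
})
icc = pg.intraclass_corr(data=long, targets="transcript", raters="rater", ratings="total")
print(icc[["Type", "ICC", "CI95%"]])
```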
2.
Phys Ther ; 104(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38531775

ABSTRACT

OBJECTIVE: The Fugl-Meyer assessment for upper extremity (FMA-UE) is a measure for assessing upper extremity motor function in patients with stroke. However, the considerable administration time of the assessment decreases its feasibility. This study aimed to develop an accumulative assessment system of upper extremity motor function (AAS-UE) based on the FMA-UE to improve administrative efficiency while retaining sufficient psychometric properties. METHODS: The study used secondary data from 3 previous studies with FMA-UE datasets, including 2 follow-up studies of individuals with subacute stroke and 1 test-retest study of individuals with chronic stroke. The AAS-UE adopted deep learning algorithms to use patients' prior information (ie, the FMA-UE scores in previous assessments, the time interval between adjacent assessments, and chronicity of stroke) to select a short, personalized item set for each subsequent assessment and reproduce the patients' full FMA-UE scores. RESULTS: Our data included a total of 682 patients after stroke. The AAS-UE administered 10 different items for each patient. The AAS-UE demonstrated good concurrent validity (r = 0.97-0.99 with the FMA-UE), high test-retest reliability (intra-class correlation coefficient = 0.96), low random measurement error (percentage of minimal detectable change = 15.6%), good group-level responsiveness (standardized response mean = 0.65-1.07), and good individual-level responsiveness (30.5%-53.2% of patients showed significant improvement). These psychometric properties were comparable to those of the FMA-UE. CONCLUSION: The AAS-UE uses an innovative assessment method, which makes good use of patients' prior information to achieve administrative efficiency with good psychometric properties. IMPACT: This study demonstrates a new assessment method to improve administrative efficiency while retaining psychometric properties, especially individual-level responsiveness and random measurement error, by making good use of patients' basic information and medical records.


Subject(s)
Deep Learning , Disability Evaluation , Psychometrics , Stroke Rehabilitation , Upper Extremity , Humans , Upper Extremity/physiopathology , Male , Female , Middle Aged , Reproducibility of Results , Aged , Stroke Rehabilitation/methods , Stroke/physiopathology , Recovery of Function
3.
Am J Occup Ther ; 78(2)2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38271640

ABSTRACT

IMPORTANCE: The machine learning-based Stroke Impact Scale (ML-SIS) is an efficient short-form measure that uses 28 items to provide domain scores comparable to those of the original 59-item Stroke Impact Scale-Third Edition (SIS 3.0). However, its utility is largely unknown because it has not been cross-validated with an independent sample. OBJECTIVE: To examine the ML-SIS's comparability and test-retest reliability with that of the original SIS 3.0 in an independent sample of people with stroke. DESIGN: Comparability was examined with the coefficient of determination (R2), mean absolute error, and root-mean-square error (RMSE). Test-retest reliability was examined using the intraclass correlation coefficient (ICC). SETTING: Five hospitals in Taiwan. PARTICIPANTS: Data of 263 persons with stroke were extracted from a previous study; 144 completed repeated assessments after a 2-wk interval. RESULTS: High R2 (.87-.95) and low mean absolute error or RMSE (about 2.4 and 3.3) of the domain scores, except for the Emotion scores (R2 = .08), supported the comparability of the two measures. Similar ICC values (.39-.87 vs. .46-.87) were found between the two measures, suggesting that the ML-SIS is as reliable as the SIS 3.0. CONCLUSIONS AND RELEVANCE: The ML-SIS provides scores mostly identical to those of the original measure, with similar test-retest reliability, except for the Emotion domain. Thus, it is a promising alternative that can be used to lessen the burden of routine assessments and provide scores comparable to those of the original SIS 3.0. Plain-Language Summary: The machine learning-based Stroke Impact Scale (ML-SIS) is as reliable as the original Stroke Impact Scale-Third Edition, except for the Emotion domain. Thus, the ML-SIS can be used to improve the efficiency of clinical assessments and also relieve the burden on people with stroke who are completing the assessments.


Subject(s)
Stroke Rehabilitation , Stroke , Humans , Reproducibility of Results , Cross-Cultural Comparison , Stroke/psychology , Language
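The comparability indices used above (R², mean absolute error, and RMSE between the ML-SIS and the SIS 3.0 domain scores) are direct one-liners once the paired scores are available. A sketch with invented values standing in for a single domain; none of the numbers reflect the study's data:

```python
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

# Hypothetical domain scores: full 59-item SIS 3.0 vs. scores reproduced from the 28-item ML-SIS.
full_scale = np.array([62.5, 70.0, 55.0, 80.0, 47.5, 90.0, 65.0, 72.5])
short_form = np.array([60.0, 72.5, 57.5, 77.5, 50.0, 87.5, 67.5, 70.0])

r2 = r2_score(full_scale, short_form)
mae = mean_absolute_error(full_scale, short_form)
rmse = np.sqrt(mean_squared_error(full_scale, short_form))
print(f"R2 = {r2:.2f}, MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```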
4.
Arch Phys Med Rehabil ; 104(9): 1432-1438, 2023 09.
Article in English | MEDLINE | ID: mdl-37028696

ABSTRACT

OBJECTIVE: To examine the test-retest reliability, minimal detectable change (MDC), responsiveness, and efficiency of the Computerized Adaptive Test of Social Functioning (Social-CAT) in patients with stroke. DESIGN: Repeated-assessments design. SETTING: A department of rehabilitation of a medical center. PARTICIPANTS: In total, 31 patients with chronic stroke and 65 patients with subacute stroke were recruited. INTERVENTION: Not applicable. MAIN OUTCOME MEASURE: Social-CAT. RESULTS: The Social-CAT showed acceptable test-retest reliability (intraclass correlation coefficient, 0.80) and small random measurement error (MDC%: 18.0%). However, heteroscedasticity was found (r between the means and the absolute change scores: 0.32), so the MDC%-adjusted cut-off score is recommended for determining real improvement. Regarding responsiveness, the Social-CAT showed large differences (Kazis' effect size and standardized response mean: 1.15 and 1.09, respectively) in patients with subacute stroke. Regarding efficiency, the Social-CAT required an average of 5 items and less than 2 minutes for completion. CONCLUSIONS: Our findings indicate that the Social-CAT is a reliable and efficient measure with good test-retest reliability, small random measurement error, and good responsiveness. Thus, the Social-CAT is a useful outcome measure for routine monitoring of the changes in social function of patients with stroke.


Subject(s)
Stroke Rehabilitation , Stroke , Humans , Reproducibility of Results , Social Interaction , Activities of Daily Living , Disability Evaluation
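Several of the statistics above follow short formulas: the heteroscedasticity check is the correlation between the test-retest means and the absolute change scores, Kazis' effect size divides the mean change by the baseline SD, and the standardized response mean divides it by the SD of the change. In the study these were computed on different samples (test-retest vs. admission-discharge data); the sketch below collapses them onto one invented data set purely to show the arithmetic:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical first (t1) and second (t2) Social-CAT scores; purely illustrative values.
t1 = np.array([42.0, 55.0, 38.0, 61.0, 47.0, 50.0, 44.0, 58.0])
t2 = np.array([50.0, 60.0, 47.0, 66.0, 55.0, 59.0, 49.0, 66.0])
change = t2 - t1

# Heteroscedasticity check on a test-retest pair: correlation between the means and |change|.
r, _ = pearsonr((t1 + t2) / 2, np.abs(change))
print(f"r(mean, |change|) = {r:.2f}")

# Responsiveness indices (in the study, computed on admission vs. discharge scores).
kazis_es = change.mean() / t1.std(ddof=1)   # Kazis' ES: mean change / SD of baseline scores
srm = change.mean() / change.std(ddof=1)    # SRM: mean change / SD of change scores
print(f"Kazis' ES = {kazis_es:.2f}, SRM = {srm:.2f}")
```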
5.
Disabil Rehabil ; 45(8): 1398-1404, 2023 04.
Article in English | MEDLINE | ID: mdl-35403536

ABSTRACT

PURPOSE: To compare the test-retest reliability and minimal detectable change (MDC) of the commonly used versions of the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) (the ADAS-Cog-11 (11 items), ADAS-Cog-3 (three items), ADAS-Cog-5-Subset (five items), ADAS-Cog-6-Subset (six items), and ADAS-Rasch (11 items)) in people with dementia. MATERIALS AND METHODS: A repeated-assessments design (2 weeks apart) was used to examine the ADAS-Cog-11, ADAS-Cog-3, ADAS-Cog-5-Subset, ADAS-Cog-6-Subset, and ADAS-Rasch. Participants with dementia were recruited from one hospital, one elder care center, and two day-care centers using convenience sampling. RESULTS: Fifty-two participants finished the assessments twice in two weeks. All versions showed high intraclass correlation coefficients (ICCs) (0.82-0.96), minimal standardized response means (-0.07 to 0.08), and low to acceptable MDC% (9.2-28.6%). The ADAS-Rasch had the highest ICC (0.96) and the lowest MDC%. The ADAS-Cog-3 had an ICC lower than 0.90 (0.82) and the highest MDC% (28.6%). CONCLUSIONS: The ADAS-Rasch seems to be the most reliable version of the ADAS-Cog for group- and individual-level comparisons. The ADAS-Cog-3 may be a better choice for researchers for group-level comparisons because it requires fewer items to achieve acceptable reliability. The ADAS-Cog-11, ADAS-Cog-5-Subset, ADAS-Cog-6-Subset, and ADAS-Rasch could be considered for clinical usage for individual-level comparisons. Implications for rehabilitation: The ADAS-Rasch is the most reliable version of the ADAS-Cog for group- and individual-level comparisons due to its excellent test-retest reliability, lowest random measurement error, and absence of a practice effect. The ADAS-Cog-5-Subset and ADAS-Cog-6-Subset might be good substitutes for the ADAS-Rasch in clinical settings because of their comparable reliability features and superior administration efficiency.


Subject(s)
Alzheimer Disease , Humans , Aged , Alzheimer Disease/diagnosis , Alzheimer Disease/psychology , Neuropsychological Tests , Reproducibility of Results , Cognition
6.
Disabil Rehabil ; 45(22): 3748-3754, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36288467

ABSTRACT

PURPOSE: We examined the unidimensionality and Rasch reliability of both Jenkinson's and MacIsaac's eight-item short versions of the Stroke Impact Scale (SF-SIS), a questionnaire for assessing overall health-related quality of life (HRQOL). METHODS: This study was a secondary data analysis in which 263 persons with stroke completed the SIS. The 263 persons had, on average, a mean age of 60 years, mild stroke, and moderate disability in self-care. The unidimensionality of both versions was examined via Rasch model fit statistics and principal component analysis (PCA) of the residuals; Rasch analysis was also used to estimate the Rasch reliability and person measures. RESULTS: The eight items in both SF-SIS versions met the infit and outfit MNSQ criteria (<1.4 and >0.6), indicating good data-model fit. The PCA showed that no dominant factors existed in the residuals of the items. The person reliability of Jenkinson's and MacIsaac's SF-SIS versions was 0.80 and 0.79, respectively. The Rasch measures (i.e., person measures in logits) ranged from -1.06 to 1.87 in Jenkinson's SF-SIS and -0.82 to 1.88 in MacIsaac's version. CONCLUSIONS: The unidimensionality of both versions was supported. The Rasch measures of both appear valid for representing overall HRQOL levels. Both versions also showed acceptable reliability for research purposes. Implications for rehabilitation: The unidimensionality was justified for both versions (Jenkinson's and MacIsaac's eight-item short versions of the Stroke Impact Scale). The Rasch scores of both versions appear valid for representing overall health-related quality of life. Both versions showed acceptable reliability for research purposes but are not sufficiently reliable for clinical use.
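The fit criteria cited above (infit and outfit mean-square between 0.6 and 1.4) are mean squares of the standardized Rasch residuals. The SF-SIS items are polytomous, so the following is only a shape-of-the-computation sketch using a dichotomous Rasch model and simulated person and item parameters, not the study's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical estimated person abilities and item difficulties, both in logits.
theta = rng.normal(0.0, 1.0, size=200)                          # 200 persons
beta = np.array([-1.0, -0.5, 0.0, 0.3, 0.8, 1.2, 1.5, 2.0])     # 8 items

# Simulate dichotomous responses under the Rasch model (illustration only).
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))     # expected response probabilities
x = (rng.random(p.shape) < p).astype(float)                     # observed 0/1 responses

# Squared standardized residuals and mean-square fit statistics per item.
w = p * (1 - p)                                      # model variance of each response
z2 = (x - p) ** 2 / w                                # squared standardized residuals
outfit = z2.mean(axis=0)                             # unweighted (outlier-sensitive) mean square
infit = ((x - p) ** 2).sum(axis=0) / w.sum(axis=0)   # information-weighted mean square
print("outfit MNSQ:", np.round(outfit, 2))
print("infit  MNSQ:", np.round(infit, 2))
# Items with infit/outfit between roughly 0.6 and 1.4 are treated as fitting the model.
```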

7.
Article in English | MEDLINE | ID: mdl-38163920

ABSTRACT

Patients with schizophrenia tend to have deficits in emotion recognition (ER) that affect their social function. However, the commonly used ER measures appear incomprehensive, unreliable, and invalid, making it difficult to comprehensively evaluate ER. The purpose of this study was to develop the Computerized Emotion Recognition Video Test (CERVT) for evaluating ER ability in patients with schizophrenia. This study was divided into two phases. First, we selected candidate CERVT items/videos of 8 basic emotion domains from a published database. Second, we validated the selected CERVT items using Rasch analysis. In total, 269 patients and 177 healthy adults were recruited to ensure that the participants had diverse abilities. After the removal of 21 misfit (infit or outfit mean square > 1.4) items and adjustment of the item difficulties of the 26 items with severe differential item functioning, the remaining 217 items were finalized as the CERVT items. All the CERVT items showed good model fit with small eigenvalues (≤ 2) based on the residual-based principal components analysis for each domain, supporting the unidimensionality of these items. The 8 domains of the CERVT had good to excellent reliabilities (average Rasch reliabilities = 0.84-0.93). The CERVT contains items of the 8 basic emotions with individualized scores. Moreover, the CERVT showed acceptable reliability and validity, and the scores were not affected by examinees' gender. Thus, the CERVT has the potential to provide a comprehensive, reliable, valid, and gender-unbiased assessment of ER for patients with schizophrenia.

8.
Article in English | MEDLINE | ID: mdl-36078623

ABSTRACT

Various studies have examined the effectiveness of interventions to increase empathy in medical professionals. However, inconsistencies may exist in the definitions, interventions, and assessments of empathy. Inconsistencies jeopardize the internal validity and generalization of the research findings. The main purpose of this study was to examine the internal consistency among the definitions, interventions, and assessments of empathy in medical empathy intervention studies. We also examined the interventions and assessments in terms of the knowledge-attitude-behavior aspects. We conducted a literature search for medical empathy intervention studies with a design of randomized controlled trials and categorized each study according to the dimensions of empathy and knowledge-attitude-behavior aspects. The consistencies among the definitions, interventions, and assessments were calculated. A total of 13 studies were included in this study. No studies were fully consistent in their definitions, interventions, and assessments of empathy. Only four studies were partially consistent. In terms of knowledge-attitude-behavior aspects, four studies were fully consistent, two studies were partially consistent, and seven studies were inconsistent. Most medical empathy intervention studies are inconsistent in their definitions, interventions, and assessments of empathy, as well as the knowledge-attitude-behavior aspects between interventions and assessments. These inconsistencies may have affected the internal validity and generalization of the research results.


Subject(s)
Biomedical Research , Empathy , Randomized Controlled Trials as Topic
9.
Am J Occup Ther ; 76(4)2022 Jul 01.
Article in English | MEDLINE | ID: mdl-35861611

ABSTRACT

IMPORTANCE: Patients with schizophrenia tend to have severe deficits in theory of mind, which may limit their interpretation of others' behaviors and thereby hamper social participation. Commonly used measures of theory of mind assess the ability to understand various social situations (e.g., implied meaning or hinting, faux pas), but these measures do not yield valid, reliable, and gender-unbiased results to inform interventions for managing theory-of-mind deficits. We used understanding of implied meaning, which appears to be a unidimensional construct highly correlated with social competence, as a promising starting point to develop a theory-of-mind assessment. OBJECTIVE: To develop a Rasch-calibrated computerized test of implied meaning. DESIGN: Cross-sectional design. SETTING: Psychiatric hospitals and community. PARTICIPANTS: 344 participants (240 patients with schizophrenia and 104 healthy adults). RESULTS: We initially developed 27 items for the Computerized Implied Meaning Test. After inappropriate items (12 misfit items and 1 gender-biased item) were removed, the remaining 14 items showed acceptable fit to the Rasch model (infit = 0.84-1.16; outfit = 0.65-1.34) and the one-factor model (comparative fit index = .91, standardized root mean square residual = .05, root-mean-square error of approximation = .08). Most patients (81.7%) achieved individual Rasch reliability of ≥.90. Healthy participants performed significantly better on the test than patients with schizophrenia (Cohen's d = 2.5, p < .001). CONCLUSIONS AND RELEVANCE: Our preliminary findings suggest that the Computerized Implied Meaning Test may provide reliable, valid, and gender-unbiased results for patients with schizophrenia. What This Article Adds: We developed a new measure for assessing theory-of-mind ability in patients with schizophrenia that consists of items targeting the understanding of implied meaning. Preliminary findings suggest that the Computerized Implied Meaning Test is reliable, valid, and gender unbiased and may be used in evaluating patients' theory-of-mind deficits and relevant factors.


Subject(s)
Schizophrenia , Adult , Cross-Sectional Studies , Humans , Psychometrics , Reproducibility of Results , Surveys and Questionnaires
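The group contrast reported above (Cohen's d between patients and healthy adults) uses the pooled-standard-deviation formula. A small sketch with simulated Computerized Implied Meaning Test scores; every number and variable name below is invented:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical test scores (in logits) for the two groups; values are made up.
patients = np.random.default_rng(1).normal(-0.5, 1.0, size=60)
controls = np.random.default_rng(2).normal(1.8, 0.9, size=40)

# Independent-samples t-test for the group difference.
t, p = ttest_ind(controls, patients, equal_var=True)

# Cohen's d with the pooled standard deviation.
n1, n2 = len(controls), len(patients)
pooled_sd = np.sqrt(((n1 - 1) * controls.var(ddof=1) + (n2 - 1) * patients.var(ddof=1))
                    / (n1 + n2 - 2))
d = (controls.mean() - patients.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3g}, Cohen's d = {d:.2f}")
```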
10.
BMC Psychiatry ; 21(1): 553, 2021 11 10.
Article in English | MEDLINE | ID: mdl-34758768

ABSTRACT

BACKGROUND: The Performance-based measure of Executive Functions (PEF) with four domains is designed to assess executive functions in people with schizophrenia. The purpose of this study was to examine the test-retest reliability of the PEF administered by the same rater (intra-rater agreement) and by different raters (inter-rater agreement) in people with schizophrenia and to estimate the values of minimal detectable change (MDC) and MDC%. METHODS: Two convenience samples (each sample, n = 60) of people with schizophrenia completed two assessments (two weeks apart). The intraclass correlation coefficient (ICC) was analyzed to examine intra-rater and inter-rater agreements of the test-retest reliability of the PEF. The MDC was calculated through the standard error of measurement. RESULTS: For the intra-rater agreement study, the ICC values of the four domains were 0.88-0.92. The MDC (MDC%) values of the four domains (volition, planning, purposive action, and performance effective) were 13.0 (13.0%), 12.2 (16.4%), 16.2 (16.2%), and 16.3 (18.8%), respectively. For the inter-rater agreement study, the ICC values of the four domains were 0.82-0.89. The MDC (MDC%) values were 15.8 (15.8%), 17.4 (20.0%), 20.9 (20.9%), and 18.6 (18.6%) for the volition, planning, purposive action, and performance effective domains, respectively. CONCLUSIONS: The PEF has good test-retest reliability, including intra-rater and inter-rater agreements, for people with schizophrenia. Clinicians and researchers can use the MDC values to verify whether an individual with schizophrenia shows any real change (improvement or deterioration) between repeated PEF assessments by the same or different raters.


Subject(s)
Executive Function , Schizophrenia , Humans , Reproducibility of Results
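The MDC values above follow the usual chain from the test-retest ICC to the standard error of measurement: SEM = SD x sqrt(1 - ICC), MDC95 = 1.96 x sqrt(2) x SEM, and MDC% = 100 x MDC95 / mean score. A sketch plugging in one of the reported ICCs together with an invented SD and mean (the descriptive statistics are placeholders, not values from the paper):

```python
import numpy as np

icc = 0.90          # one of the reported intra-rater ICCs, used here for illustration
sd_scores = 16.0    # hypothetical SD of the domain scores at the first assessment
mean_score = 78.0   # hypothetical mean of the domain scores

sem = sd_scores * np.sqrt(1 - icc)       # standard error of measurement
mdc95 = 1.96 * np.sqrt(2) * sem          # minimal detectable change at 95% confidence
mdc_pct = 100 * mdc95 / mean_score       # MDC as a percentage of the mean score
print(f"SEM = {sem:.1f}, MDC95 = {mdc95:.1f}, MDC% = {mdc_pct:.1f}%")
# A change between two PEF assessments larger than MDC95 can be read as a real change.
```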
11.
J Affect Disord ; 292: 102-107, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34111689

ABSTRACT

BACKGROUND: Facial emotion recognition deficit (FERD) seems to be an obvious feature of patients with schizophrenia and has great potential for classifying patients and non-patients. The FERD screener was previously developed to distinguish patients from healthy adults. However, an obvious drawback of this screener is that the recommended cut-off scores could enhance either sensitivity or specificity (about 0.92) only, while the other remained at an only acceptable level (about 0.66). Machine learning (ML) algorithms are well known for their feature extraction and data classification abilities, which are promising for improving the discriminative power of screeners. This study aimed to improve the discriminative power of the FERD screener using an ML algorithm. METHODS: The data were extracted from a previous study. Artificial neural networks were generated to estimate the probability of being a patient with schizophrenia or a healthy adult based on the examinee's responses on the FERD screener (168 items). The performance of the ML-FERD screener was examined using a stratified five-fold cross-validation method. RESULTS: Across the five subsets of data, the ML-FERD screener showed extremely high areas under the receiver operating characteristic curve of 0.97-0.99. With the optimized cut-off scores, the average sensitivity and specificity of the ML-FERD screener were 0.90 (0.85-0.93) and 0.93 (0.86-1.00), respectively. LIMITATIONS: The characteristics of the patients were not representative, and their ages were not matched to those of the control group. CONCLUSION: The ML-FERD screener appears to have better discriminative power for classifying patients with schizophrenia and healthy adults than does the FERD screener.


Subject(s)
Facial Recognition , Schizophrenia , Adult , Algorithms , Humans , Machine Learning , Schizophrenia/diagnosis , Sensitivity and Specificity
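The evaluation pipeline described above (an artificial neural network scored by AUC under stratified five-fold cross-validation) maps onto standard scikit-learn components. A hedged sketch with randomly generated 168-item response vectors; the network architecture, sample size, and all variable names are assumptions rather than details from the study:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical FERD screener responses: 168 item scores per examinee, plus a diagnosis label.
X = rng.random((300, 168))
y = rng.integers(0, 2, size=300)     # 1 = schizophrenia, 0 = healthy control (made up)

aucs = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[test_idx])[:, 1]    # estimated probability of being a patient
    aucs.append(roc_auc_score(y[test_idx], prob))

print("AUC per fold:", np.round(aucs, 2), "mean:", round(float(np.mean(aucs)), 2))
```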
12.
Disabil Rehabil ; 43(26): 3757-3763, 2021 12.
Article in English | MEDLINE | ID: mdl-32372705

ABSTRACT

PURPOSE: To examine the relationships among therapist-reported, patient-reported, and objective assessment scores of balance function. METHODS: Inpatients with stroke and occupational therapists were recruited. The objective balance scores were measured using the Balance Computerized Adaptive Testing (Balance CAT) system. The therapist-reported and patient-reported scores were evaluated using a visual analogue scale (VAS) and a Likert-type scale. RESULTS: Eighty-eight patients and 16 therapists participated. The correlations (r = 0.64 and 0.65; R-squared about 0.42 at baseline and follow-up assessments, respectively) between the therapist-reported VAS scores and the Balance CAT system were larger than those (r = 0.31 and 0.21) between the patient-reported VAS scores and the Balance CAT system. Low correlations (r = 0.27 and 0.26 for VAS and Likert-type scores, respectively) were found between the therapist-reported and patient-reported change scores. Low correlations (r = 0.12-0.17) were found between the change scores of the therapist- and patient-reported ratings and those of the Balance CAT system. CONCLUSIONS: The therapists' judgments explained <50% of the variance of the Balance CAT system scores. Neither therapist-reported nor patient-reported change scores reflected the changes demonstrated by the objective assessments. Further studies are warranted to confirm our findings. Implications for rehabilitation: Neither therapist- nor patient-reported balance function and change could effectively reflect the scores resulting from objective assessments. The routine use of objective balance assessments should not be replaced by therapists' subjective judgments. Communication between therapists and patients regarding balance function measured by objective assessments can help patients better understand their balance function and progress.


Subject(s)
Computerized Adaptive Testing , Stroke , Humans , Pain Measurement
13.
J Affect Disord ; 275: 224-229, 2020 10 01.
Article in English | MEDLINE | ID: mdl-32734912

ABSTRACT

BACKGROUND: Schizophrenia is a debilitating mental illness that causes significant disability. However, the lack of evidence for functional decline yields difficulty in distinguishing patients with schizophrenia from healthy adults. Since patients with schizophrenia demonstrate severe facial emotion recognition deficit (FERD), FERD measurement appears to be a promising solution for the aforementioned challenge. We aimed to develop a FERD-based screening tool to differentiate patients with schizophrenia from healthy adults. METHODS: Patients' responses were extracted from a previous study. The most discriminative index was determined by comparing the area under the receiver operating characteristic curve (AUC) of patients' FER scores in 7 domains individually and collectively. The best cut-off score was selected only for the most discriminative index to provide both high sensitivity and specificity (≥ 0.90). RESULTS: The "number of domains failed" showed the highest discriminative value (AUC = 0.92). Since high sensitivity and specificity could not be achieved simultaneously, two sub-optimal cut-off scores were recommended for prospective users. For users prioritizing sensitivity, the "≥ 2 domains failed" index yields high sensitivity (0.96) with modest specificity (0.66). For users targeting specificity, the "≥ 4 domains failed" index achieves high specificity (0.92) with acceptable sensitivity (0.72). LIMITATIONS: Convenience sampling with mild clinical severity and younger healthy adults (< 20 years old) may limit the generalizability. CONCLUSION: The FERD screener seems to be a discriminative tool with adjustable cut-off scores achieving high sensitivity or specificity. Therefore, it may be useful in detecting patients and ruling out adults erroneously suspected of having schizophrenia.


Subject(s)
Facial Recognition , Schizophrenia , Adult , Emotions , Humans , Prospective Studies , Schizophrenia/diagnosis , Sensitivity and Specificity , Young Adult
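Selecting the two recommended cut-offs above amounts to walking the ROC curve of the "number of domains failed" index: take the first threshold that reaches the target sensitivity, and the most lenient threshold that still keeps specificity at the target. A sketch with simulated labels and domain-failure counts; nothing here reproduces the study's data:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(3)

# Hypothetical "number of domains failed" (0-7) and diagnosis labels (1 = schizophrenia).
y = rng.integers(0, 2, size=400)
n_failed = np.clip(rng.poisson(lam=np.where(y == 1, 3.5, 0.8)), 0, 7)

fpr, tpr, thresholds = roc_curve(y, n_failed)
print("AUC =", round(roc_auc_score(y, n_failed), 2))

# Sensitivity-first cut-off: the first threshold whose sensitivity reaches 0.90.
sens_idx = np.argmax(tpr >= 0.90)
print(f"sensitivity-first cut-off: >= {thresholds[sens_idx]:.0f} domains failed "
      f"(sens {tpr[sens_idx]:.2f}, spec {1 - fpr[sens_idx]:.2f})")

# Specificity-first cut-off: the most lenient threshold that still keeps specificity >= 0.90,
# which maximizes sensitivity under that constraint.
spec_idx = np.where(1 - fpr >= 0.90)[0][-1]
print(f"specificity-first cut-off: >= {thresholds[spec_idx]:.0f} domains failed "
      f"(sens {tpr[spec_idx]:.2f}, spec {1 - fpr[spec_idx]:.2f})")
```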
14.
Disabil Rehabil ; 41(22): 2683-2687, 2019 11.
Article in English | MEDLINE | ID: mdl-29954229

ABSTRACT

Purpose: To investigate the responsiveness and predictive validity of the computerized digit vigilance test (C-DVT) in inpatients receiving rehabilitation following stroke. Methods: Forty-nine patients completed the C-DVT and the Barthel Index (BI) after admission to and before discharge from the rehabilitation ward. The standardized response mean (SRM) was used to examine the responsiveness of the C-DVT. We used a paired t-test to determine the statistical significance of the changes in scores on the C-DVT. We estimated the predictive validity of the C-DVT with the Pearson correlation coefficient (r) to investigate the association between the scores of the C-DVT at admission and the scores of the BI at discharge. Results: Our data showed a small SRM (-0.31) and a significant difference (paired t-test, p = 0.034) between the C-DVT scores at admission and discharge. These findings indicate that the C-DVT can appropriately detect changes in sustained attention. In addition, we found a moderate association (r = 0.48) between the scores of the C-DVT at admission and the scores of the BI at discharge, suggesting the sufficient predictive validity of the C-DVT. Conclusions: Our results showed that the C-DVT had adequate responsiveness and sufficient predictive validity in inpatients receiving rehabilitation following stroke. Implications for rehabilitation The computerized digit vigilance test (C-DVT) had adequate responsiveness to be an outcome measure for assessing the sustained attention in inpatients receiving rehabilitation after stroke. The C-DVT had sufficient predictive validity to predict daily function in inpatients receiving rehabilitation after stroke.


Subject(s)
Patient Discharge , Reaction Time , Stroke Rehabilitation/methods , Task Performance and Analysis , Aged , Diagnosis, Computer-Assisted/methods , Female , Humans , Male , Middle Aged , Outcome Assessment, Health Care/methods , Predictive Value of Tests , Reproducibility of Results , Treatment Outcome
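The responsiveness and predictive-validity analyses above reduce to a paired t-test, a standardized response mean, and a Pearson correlation. A minimal sketch with invented admission and discharge data; the variable names, the simulated relationship between completion time and daily function, and all values are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ttest_rel, pearsonr

rng = np.random.default_rng(7)

# Hypothetical C-DVT completion times (seconds) at admission and discharge, plus discharge BI.
cdvt_adm = rng.normal(120, 20, size=49)
cdvt_dis = cdvt_adm - rng.normal(6, 18, size=49)               # faster on average at discharge
bi_dis = 100 - 0.4 * cdvt_adm + rng.normal(0, 15, size=49)     # loosely tied to admission speed

# Responsiveness: paired t-test and standardized response mean of the change scores.
t, p = ttest_rel(cdvt_dis, cdvt_adm)
change = cdvt_dis - cdvt_adm
srm = change.mean() / change.std(ddof=1)
print(f"paired t = {t:.2f}, p = {p:.3f}, SRM = {srm:.2f}")

# Predictive validity: admission C-DVT score vs. discharge Barthel Index score.
r, _ = pearsonr(cdvt_adm, bi_dis)
print(f"r(C-DVT at admission, BI at discharge) = {r:.2f}")
```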
15.
Arch Phys Med Rehabil ; 99(8): 1499-1506, 2018 08.
Article in English | MEDLINE | ID: mdl-29653107

ABSTRACT

OBJECTIVE: To examine the interrater and intrarater reliability of the Balance Computerized Adaptive Test (Balance CAT) in patients with chronic stroke having a wide range of balance functions. DESIGN: Repeated-assessments design (1 wk apart). SETTING: Seven teaching hospitals. PARTICIPANTS: A pooled sample (N=102) including 2 independent groups of outpatients (n=50 for the interrater reliability study; n=52 for the intrarater reliability study) with chronic stroke. INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: Balance CAT. RESULTS: For the interrater reliability study, the values of the intraclass correlation coefficient, minimal detectable change (MDC), and percentage of MDC (MDC%) for the Balance CAT were .84, 1.90, and 31.0%, respectively. For the intrarater reliability study, the values of the intraclass correlation coefficient, MDC, and MDC% ranged from .89 to .91, from 1.14 to 1.26, and from 17.1% to 18.6%, respectively. CONCLUSIONS: The Balance CAT showed sufficient intrarater reliability in patients with chronic stroke having balance functions ranging from sitting with support to independent walking. Although the Balance CAT may have good interrater reliability, we found substantial random measurement error between different raters. Accordingly, if the Balance CAT is used as an outcome measure in clinical or research settings, the same rater should administer it across time points to ensure reliable assessments.


Subject(s)
Postural Balance/physiology , Stroke Rehabilitation , Stroke/physiopathology , Adult , Aged , Computers , Disability Evaluation , Female , Humans , Middle Aged , Reproducibility of Results
16.
Brain Inj ; 32(5): 627-633, 2018.
Article in English | MEDLINE | ID: mdl-29388842

ABSTRACT

OBJECTIVE: To investigate the extent of motor recovery and predict the prognosis of lower extremity (LE) recovery in patients with severe LE paresis after stroke. METHODS: 137 patients with severe LE paresis after stroke were recruited from a local medical centre. Voluntary LE movement was assessed with the LE subscale of the Stroke Rehabilitation Assessment of Movement (STREAM-LE). Univariate and stepwise regression analyses were used to investigate 25 clinical variables (including demographic, neuroimaging, and behavioural variables) to find the predictors of LE recovery. RESULTS: The STREAM-LE at discharge (DCSTREAM-LE) of the participants covered a very wide range (0-19). Specifically, 5.1% of the participants were nearly completely recovered, 11.7% were moderately recovered, 36.5% were slightly recovered, and 46.7% remained severely paralysed. The score of the STREAM-LE at admission (ADSTREAM-LE) and the volume of lesion and oedema were significant predictors of LE movement at discharge, explaining 25.1% of the variance of the DCSTREAM-LE (p < 0.001). CONCLUSIONS: LE motor recovery varied widely in our participants, indicating that patients' recovery might not follow simple rules. The low predictive power (about a quarter of the variance) indicates that LE motor recovery in patients with severe LE paresis after stroke was hardly predictable.


Subject(s)
Movement/physiology , Paresis/etiology , Paresis/rehabilitation , Recovery of Function/physiology , Stroke Rehabilitation , Stroke/complications , Adult , Aged , Aged, 80 and over , Analysis of Variance , Female , Humans , Male , Middle Aged , Regression Analysis , Treatment Outcome
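The prediction step described above can be sketched with ordinary least squares once the retained predictors are known; the full univariate screening and stepwise selection over 25 variables is omitted here. A hypothetical version using the two reported predictors (admission STREAM-LE score and lesion-plus-oedema volume) on simulated data; all values and the assumed coefficient structure are invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)

# Hypothetical data for 137 patients: admission STREAM-LE score, lesion + oedema volume (mL),
# and the discharge STREAM-LE score to be predicted.
ad_stream = rng.integers(0, 6, size=137).astype(float)
lesion_vol = rng.gamma(shape=2.0, scale=30.0, size=137)
dc_stream = np.clip(2.0 + 1.5 * ad_stream - 0.02 * lesion_vol + rng.normal(0, 4, size=137), 0, 20)

# Ordinary least squares with the two predictors (reported in the summary as x1 and x2).
X = sm.add_constant(np.column_stack([ad_stream, lesion_vol]))
model = sm.OLS(dc_stream, X).fit()
print(model.summary())   # coefficients, p-values, and R-squared (proportion of variance explained)
```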
17.
Psychiatry Res ; 260: 199-206, 2018 02.
Article in English | MEDLINE | ID: mdl-29202384

ABSTRACT

We aimed to compare the test-retest agreement, random measurement error, practice effect, and ecological validity of the original and the Tablet-based Symbol Digit Modalities Test (T-SDMT) over five serial assessments, and to examine the concurrent validity of the T-SDMT in patients with schizophrenia. Sixty patients with chronic schizophrenia completed five serial assessments (one week apart) of the SDMT and T-SDMT and one assessment of the Activities of Daily Living Rating Scale III at the first time point. Both measures showed high test-retest agreement and similar levels of random measurement error over the five serial assessments. Moreover, the practice effects of the two measures did not reach a plateau after five serial assessments in young and middle-aged participants. Nevertheless, only the practice effect of the T-SDMT became trivial after the first assessment. Like the SDMT, the T-SDMT had good ecological validity. The T-SDMT also had good concurrent validity with the SDMT. In addition, only the T-SDMT had the discriminative validity to distinguish processing speed between young and middle-aged participants. Compared with the SDMT, the T-SDMT had overall slightly better psychometric properties, so it can be an alternative measure to the SDMT for assessing processing speed in patients with schizophrenia.


Subject(s)
Activities of Daily Living , Cognitive Dysfunction/diagnosis , Computers, Handheld , Diagnosis, Computer-Assisted/standards , Neuropsychological Tests/standards , Psychometrics/standards , Psychomotor Performance/physiology , Schizophrenia/diagnosis , Adult , Cognitive Dysfunction/etiology , Female , Humans , Male , Middle Aged , Psychometrics/instrumentation , Reproducibility of Results , Schizophrenia/complications
18.
Eur J Phys Rehabil Med ; 53(5): 710-718, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28178771

ABSTRACT

BACKGROUND: A lack of evidence on the test-retest reliability and responsiveness limits the utility of the BI-based Supplementary Scales (BI-SS) in both clinical and research settings. AIM: To examine the test-retest reliability and responsiveness of the BI-based Supplementary Scales (BI-SS) in patients with stroke. DESIGN: A repeated-assessments design (1 week apart) was used to examine the test-retest reliability of the BI-SS. For the responsiveness study, the participants were assessed with the BI-SS and BI (treated as an external criterion) at admission to and discharge from rehabilitation wards. SETTING: Seven outpatient rehabilitation units and one inpatient rehabilitation unit. POPULATION: Outpatients with chronic stroke. METHODS: Eighty-four outpatients with chronic stroke participated in the test-retest reliability study. Fifty-seven inpatients completed baseline and follow-up assessments in the responsiveness study. RESULTS: For the test-retest reliability study, the values of the intra-class correlation coefficient and the overall percentage of minimal detectable change for the Ability Scale and Self-perceived Difficulty Scale were 0.97, 12.8%, and 0.78, 35.8%, respectively. For the responsiveness study, the standardized effect size and standardized response mean (representing internal responsiveness) of the Ability Scale and Self-perceived Difficulty Scale were 1.17 and 1.56, and 0.78 and 0.89, respectively. Regarding external responsiveness, the change in score of the Ability Scale had significant and moderate association with that of the BI (r=0.61, P<0.001). The change in score of the Self-perceived Difficulty Scale had non-significant and weak association with that of the BI (r=0.23, P=0.080). CONCLUSIONS: The Ability Scale of the BI-SS has satisfactory test-retest reliability and sufficient responsiveness for patients with stroke. However, the Self-perceived Difficulty Scale of the BI-SS has substantial random measurement error and insufficient external responsiveness, which may affect its utility in clinical settings. CLINICAL REHABILITATION IMPACT: The findings of this study provide empirical evidence of psychometric properties of the BI-SS for assessing ability and self-perceived difficulty of ADL in patients with stroke.


Subject(s)
Activities of Daily Living/psychology , Disability Evaluation , Patient Reported Outcome Measures , Stroke Rehabilitation/methods , Stroke/diagnosis , Aged , Cohort Studies , Female , Humans , Male , Middle Aged , Psychometrics , Recovery of Function/physiology , Reproducibility of Results , Retrospective Studies , Self Efficacy , Stroke/psychology , Stroke/therapy
19.
Medicine (Baltimore) ; 95(31): e4508, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27495103

ABSTRACT

The Brunnstrom recovery stages (BRS) consist of 2 items assessing the poststroke motor function of the upper extremities and 1 item assessing the lower extremities. The 3 items together represent overall motor function. Although the BRS efficiently assesses poststroke motor functions, a lack of rigorous examination of its psychometric properties restricts its utility. We aimed to examine the unidimensionality, Rasch reliability, and responsiveness of the BRS, and to transform the raw sum scores of the BRS into Rasch logit scores once the 3 items fitted the assumptions of the Rasch model. We retrieved medical records of the BRS (N = 1180) from a medical center. We used Rasch analysis to examine the unidimensionality and Rasch reliability of both the upper-extremity items and the 3 overall motor items of the BRS. In addition, to compare their responsiveness for patients (n = 41) assessed with the BRS and the Stroke Rehabilitation Assessment of Movement (STREAM) on admission and at discharge, we calculated the effect size (ES) and standardized response mean (SRM). The upper-extremity items and overall motor items fitted the assumptions of the Rasch model (infit/outfit mean square = 0.57-1.40). The Rasch reliabilities of the upper-extremity items and overall motor items were high (0.91-0.92). The upper-extremity items and overall motor items had adequate responsiveness (ES = 0.35-0.41, SRM = 0.85-0.99), which was comparable to that of the STREAM (ES = 0.43-0.44, SRM = 1.00-1.13). The results of our study support the unidimensionality, Rasch reliability, and responsiveness of the BRS. Moreover, the BRS can be transformed into an interval-level measure, which would be useful to quantify the extent of poststroke motor function, the changes of motor function, and the differences of motor functions in patients with stroke.


Subject(s)
Disability Evaluation , Lower Extremity/physiopathology , Movement/physiology , Recovery of Function/physiology , Stroke/physiopathology , Upper Extremity/physiopathology , Aged , Female , Humans , Male , Middle Aged , Models, Theoretical , Reproducibility of Results , Retrospective Studies
20.
Arch Phys Med Rehabil ; 97(6): 938-46, 2016 06.
Article in English | MEDLINE | ID: mdl-26850566

ABSTRACT

OBJECTIVE: To validate the psychometric properties of the Balance Assessment in Sitting and Standing Positions, including validity (unidimensionality and concurrent validity), reliability (Rasch reliability), and responsiveness (compared with the Postural Assessment Scale for Stroke Patients [PASS]) and to transform the Balance Assessment in Sitting and Standing Positions from an ordinal-level measure into an interval-level measure. DESIGN: Retrospective cross-sectional study. SETTING: Medical records from a medical center. PARTICIPANTS: Patients with stroke (N=1193). INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURES: The 4-item Balance Assessment in Sitting and Standing Positions was used, assessing static sitting balance, dynamic sitting balance, static standing balance, and dynamic standing balance. RESULTS: Data of 1193 patients with stroke were included for Rasch analysis. The 4 items of the Balance Assessment in Sitting and Standing Positions constituted a unidimensional construct (infit/outfit mean square, .75-1.05), had good concurrent validity (r=.70-.90), and had sufficient Rasch reliability (.93). The Balance Assessment in Sitting and Standing Positions had large responsiveness (effect size, 1.20; standardized response mean, 1.51) and was comparable with the PASS (effect size, .90; standardized response mean, 1.32). CONCLUSIONS: The Balance Assessment in Sitting and Standing Positions has sound psychometric properties. The transformed-Rasch scores of the Balance Assessment in Sitting and Standing Positions can be used to identify patients' balance function and detect patients' changes.


Subject(s)
Disability Evaluation , Physical Therapy Modalities/standards , Postural Balance/physiology , Posture/physiology , Stroke Rehabilitation/methods , Aged , Aged, 80 and over , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Psychometrics , Reproducibility of Results , Retrospective Studies