Results 1 - 3 of 3
1.
Clin Child Psychol Psychiatry; 27(4): 953-966, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34875896

ABSTRACT

Current evidence-based treatments for ADHD include medication and behaviour management, but there is widespread consensus that more treatment options are desirable. In an effort to explore adjunct/alternative treatments, this study investigated the associations between dimensions of physical activity (PA) and children's ADHD symptoms and impairment. Although there is increasing support for PA as an adjunct/alternative to existing treatment for ADHD, the interplay of specific dimensions of PA has not been studied. Fifty-one parents of children aged 6-12 years with ADHD completed questionnaires. Hierarchical regression analysis indicated that only some dimensions of PA explained a statistically significant portion of the variance in ADHD symptoms beyond that explained by typical demographic variables. PA dimensions did not account for a statistically significant portion of ADHD impairment. Refining the measurement of how long children have engaged in PA is a key step in generating evidence for PA as an adjunct or alternative treatment for ADHD, and in developing guidelines to manage parental expectations of this treatment for the benefit of their children.
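As a rough illustration of the analytic approach named in this abstract (not the authors' code), the sketch below fits a two-block hierarchical regression in Python with statsmodels: demographic covariates are entered first, PA dimensions second, and the R-squared change is evaluated with a nested-model F-test. The DataFrame and all column names are hypothetical.

```python
# Minimal sketch of a two-block hierarchical OLS regression; all column
# names are hypothetical and not taken from the study.
import pandas as pd
import statsmodels.api as sm

def hierarchical_ols(df: pd.DataFrame, outcome: str, block1: list, block2: list):
    # Keep the same cases in both models so the nested F-test is valid.
    data = df[[outcome] + block1 + block2].dropna()
    y = data[outcome]
    m1 = sm.OLS(y, sm.add_constant(data[block1])).fit()            # covariates only
    m2 = sm.OLS(y, sm.add_constant(data[block1 + block2])).fit()   # covariates + PA dimensions
    f_change, p_change, df_diff = m2.compare_f_test(m1)            # F-test of the R-squared change
    return m1, m2, m2.rsquared - m1.rsquared, f_change, p_change

# Hypothetical usage:
# m1, m2, r2_change, f_chg, p_chg = hierarchical_ols(
#     df, outcome="adhd_symptom_score",
#     block1=["child_age", "child_sex"],
#     block2=["pa_frequency", "pa_intensity", "pa_duration_years"],
# )
```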


Subject(s)
Attention Deficit Disorder with Hyperactivity , Attention Deficit Disorder with Hyperactivity/therapy , Child , Exercise , Humans , Parents , Surveys and Questionnaires
2.
Br J Educ Psychol; 89(3): 441-455, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30883702

ABSTRACT

BACKGROUND AND AIMS: In educational measurement, performance assessments occupy a niche: their true-to-life format affords the measurement of high-level cognitive competencies and the evidence needed to draw inferences about intellectual capital. However, true-to-life formats also introduce myriad complexities and can skew, if not outright distort, the accuracy of inferences. For validating claims about test-takers from performance assessments, collecting evidence about response processes is of sufficient importance that the validation process needs to be labelled a cognitive validation, to ensure that the cognitive is not forgotten in the logic of the validation process. ANALYSIS AND EXAMPLE: Cognitive validation is described as a three-pronged process of (1) identifying the knowledge, skills, and attributes associated with the intellectual capital of interest, (2) selecting and/or developing tasks to elicit that intellectual capital, and (3) collecting substantive empirical evidence of examinee response processes as part of the overall validity argument. This three-pronged process is illustrated using the American Institute of CPAs' (2018) practice analysis, task-based simulations (TBSs), and think-aloud interviews to evaluate claims. CONCLUSIONS: Although cognitive laboratories and think-alouds are used to measure distinct types of response processes as test-takers interact with performance assessments, both methods are among the best for obtaining direct but differential evidence from test-takers. Because collecting this evidence is labour-intensive and costly, many testing programmes do it poorly or not at all. However, for performance assessments to succeed in measuring what they purport to measure, the investment in cognitive validation must be made.


Subject(s)
Competency-Based Education/methods , Educational Measurement/methods , Professional Competence , Academic Performance , Adult , Competency-Based Education/standards , Educational Measurement/standards , Humans , Students , Universities , Young Adult
3.
Med Teach; 34(7): 555-61, 2012.
Article in English | MEDLINE | ID: mdl-22746962

ABSTRACT

BACKGROUND: This study describes the development, implementation and evaluation of a team-based, multi-source method of assessment in which students on a clinical clerkship were provided with feedback on their performance as observed by physicians, residents, nurses, peers, patients and administrators. METHODS: The instrument was developed by reviewing existing assessment items and by obtaining input from assessors and students. Numerical data and written comments provided to students were collected, internal consistency was estimated, and interviews and focus groups were used to determine acceptability to assessors and students. RESULTS: A total of 1068 assessors completed 3501 forms for 127 students. Internal consistency estimates for each assessment form were acceptable (Cronbach's alpha 0.856-0.948). Each student received an average of 188 words of written feedback, distributed across an average of 26 'Areas of Excellence' and 5 'Areas for Improvement'. Interviews revealed that the majority of students and assessors found the method acceptable. CONCLUSIONS: This study demonstrates that a team-based model of assessment based on the principles of multi-source feedback is a feasible and acceptable form of assessment for medical students learning in a clinical clerkship, and that it has some advantages over traditional preceptor-based assessment. Further studies will focus on the strengths and weaknesses of this novel assessment technique.
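For reference, the internal-consistency statistic reported above (Cronbach's alpha) can be computed as in the minimal sketch below, which assumes a hypothetical ratings matrix of shape (n_respondents, n_items); it is not the authors' data or code.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items on the form
    item_vars = items.var(axis=0, ddof=1)           # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)       # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical usage with made-up ratings from 5 assessors on 4 items:
# ratings = np.array([[4, 5, 4, 5], [3, 4, 4, 4], [5, 5, 5, 4], [4, 4, 3, 4], [5, 4, 5, 5]])
# print(round(cronbach_alpha(ratings), 3))
```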


Subject(s)
Clinical Clerkship/organization & administration , Clinical Competence/standards , Educational Measurement/methods , Feedback , Students, Medical , Administrative Personnel , Alberta , Clinical Clerkship/standards , Faculty, Medical , Feasibility Studies , Group Processes , Humans , Models, Educational , Nurses , Observation , Patients , Peer Group , Program Evaluation