Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-38907842

ABSTRACT

Perceptions of evidence-based practices (EBPs) and implementation are inherent drivers of implementation outcomes. Most studies on implementation perceptions have focused on direct service providers, but clients and EBP experts may offer additional meaningful information about implementing EBPs in community settings. EBP providers (n = 21), EBP experts (n = 12), and clients who received EBPs (n = 6) participated in focus groups to ascertain their perceptions of and experiences with EBP implementation, as part of a program evaluation. Thematic analysis indicated that provider and expert perceptions of EBP implementation in community settings converged around themes of implementation supports and training, and client outcomes, along with several subthemes. Client perceptions centered on themes regarding the importance of their personal experiences, their impressions of EBPs, and their recommendations for increasing public awareness and use of EBPs. Findings suggest that the perspectives of EBP providers and experts are closely aligned, focusing on system-level, individual-level, and training issues that impact EBP implementation within a public mental health system. The themes that were important to clients were primarily related to their experiences as recipients of an EBP, which produced insightful recommendations for promoting EBPs in the community.

2.
Adm Policy Ment Health; 49(3): 343-356, 2022 May.
Article in English | MEDLINE | ID: mdl-34537885

ABSTRACT

To capitalize on investments in evidence-based practices, technology is needed to scale up fidelity assessment and supervision. Stakeholder feedback may facilitate adoption of such tools. This evaluation gathered stakeholder feedback and preferences to explore the feasibility of implementing an automated fidelity-scoring supervision tool in community mental health settings. A partially mixed, sequential research method design was used, including focus group discussions with community mental health therapists (n = 18) and clinical leadership (n = 12) to explore typical supervision practices, followed by discussion of an automated fidelity feedback tool embedded in a cloud-based supervision platform. Interpretation of qualitative findings was enhanced through quantitative measures of participants' use of technology and perceptions of acceptability, appropriateness, and feasibility of the tool. Initial perceptions of acceptability, appropriateness, and feasibility of automated fidelity tools were positive and increased after introduction of an automated tool. Standard supervision was described as collaboratively guided and focused on clinical content, self-care, and documentation. Participants highlighted the tool's utility for supervision, training, and professional growth, but questioned its ability to evaluate rapport, cultural responsiveness, and non-verbal communication. Concerns were raised about privacy and the impact of low scores on therapist confidence. Desired features included intervention labeling and transparency about how scores related to session content. Opportunities for asynchronous, remote, and targeted supervision were particularly valued. Stakeholder feedback suggests that automated fidelity measurement could augment supervision practices. Future research should examine the relationships among use of such supervision tools, clinician skill, and client outcomes.


Subject(s)
Artificial Intelligence; Cognitive Behavioral Therapy; Attitude; Cognitive Behavioral Therapy/methods; Focus Groups; Humans; Research Design
3.
Arch Clin Neuropsychol; 35(3): 326-331, 2020 Apr 20.
Article in English | MEDLINE | ID: mdl-32044991

ABSTRACT

OBJECTIVE: To compare neurocognitive scores between the Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) Quick Test (QT) and Online Versions in non-concussed high school athletes. METHODS: A sample of 47 high school athletes completed the ImPACT Online Version pre-season and the ImPACT QT approximately 3 months later. Paired-sample t-tests and Pearson's correlations examined differences and relationships between the ImPACT batteries. RESULTS: The ImPACT QT scores were significantly higher for performance on the Three Letters: Average Counted (p < .001, d = .88), Three Letters: Average Counted Correctly (p < .001, d = .80), Symbol Match: Correct RT Visible (p < .001, d = .72), and Symbol Match: Correct RT Hidden (p = .002, d = .50) subtests. There were significant relationships for the Three Letters: Average Counted (r = .85, p < .001), Three Letters: Average Counted Correctly (r = .82, p < .001), and Symbol Match: Total Correct Hidden (r = .40, p = .006) subtests. CONCLUSIONS: Post-injury evaluation data using the ImPACT QT should be compared to normative reference data, not to pre-season data from the ImPACT Online Version.


Subject(s)
Athletes/psychology; Neuropsychological Tests; Adolescent; Brain Concussion/psychology; Cognition; Female; Humans; Male; Neuropsychological Tests/standards; Schools
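
The comparison reported in entry 3 rests on standard paired-samples statistics: a paired t-test with an accompanying Cohen's d and a Pearson correlation between the two test versions. The sketch below is illustrative only; the score arrays are hypothetical placeholders, not the study's ImPACT subtest data.

```python
# Minimal sketch of the paired-samples analysis described in entry 3.
# The score arrays below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
online_scores = rng.normal(50, 10, size=47)            # ImPACT Online Version (pre-season)
qt_scores = online_scores + rng.normal(5, 8, size=47)  # ImPACT QT (~3 months later)

# Paired-sample t-test: is the mean within-athlete difference nonzero?
t_stat, p_value = stats.ttest_rel(qt_scores, online_scores)

# Cohen's d for paired samples: mean difference divided by the SD of the differences.
diff = qt_scores - online_scores
cohens_d = diff.mean() / diff.std(ddof=1)

# Pearson correlation: do the two batteries rank athletes similarly?
r, r_p = stats.pearsonr(qt_scores, online_scores)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}, r = {r:.2f}")
```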
4.
Arch Clin Neuropsychol; 34(7): 1175-1191, 2019 Oct 24.
Article in English | MEDLINE | ID: mdl-31044243

ABSTRACT

OBJECTIVE: This study examined the test-retest reliability and construct validity of the Action Fluency Test (AFT) as a measure of executive functioning. METHOD: Using a correlational design, 128 healthy college students (M age = 19.24 years, SD = 2.01; M education = 13.29 years, SD = 0.81) completed the AFT and measures of verbal and figural fluency, executive functioning, and other relevant constructs (e.g., vocabulary, working memory, and attention). RESULTS: Coefficients of stability were acceptable for AFT correct words (r = .76; p < .01), but not for errors (r = .41) or perseverations (r = .14). No practice effects were observed upon repeat testing (M interval = 39.21 days). Divergent validity evidence was mixed. AFT scores were unrelated to working memory and perceptual-reasoning abilities; however, correlations with vocabulary (r = .32; p < .01) and information-processing speed (r = .30; p < .01) were greater than associations between AFT scores and executive measures. Regarding convergent validity, AFT scores correlated with other fluency tasks (r values in the .4 range), but correlations with measures of executive functioning were absent or small. Action and letter fluency correlated with measures of attentional control and inhibition; however, these associations were no longer significant after controlling for shared variance with information-processing speed. CONCLUSIONS: Findings are consistent with previous research suggesting that vocabulary and information-processing speed underlie effective fluency performance to a greater extent than executive functioning. The AFT measures unique variance not accounted for by semantic and letter fluency tasks and may therefore be used for a variety of research and clinical purposes.


Subject(s)
Executive Function; Neuropsychological Tests/standards; Students/psychology; Attention; Cognition; Female; Humans; Inhibition, Psychological; Male; Memory, Short-Term; Reproducibility of Results; Semantics; Universities; Verbal Behavior; Vocabulary; Young Adult
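
Entry 4 reports coefficients of stability (test-retest correlations) and notes that fluency-inhibition associations were no longer significant after controlling for shared variance with information-processing speed. The sketch below illustrates those two computations with hypothetical arrays (aft_time1, aft_time2, inhibition, processing_speed); it is not the study's analysis code.

```python
# Minimal sketch of the stability and partial-correlation analyses described in entry 4.
# All arrays are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 128
processing_speed = rng.normal(100, 15, size=n)
aft_time1 = 0.5 * processing_speed + rng.normal(0, 10, size=n)   # AFT correct words, session 1
aft_time2 = 0.9 * aft_time1 + rng.normal(0, 5, size=n)           # AFT correct words, retest
inhibition = 0.4 * processing_speed + rng.normal(0, 12, size=n)  # inhibition measure

# Coefficient of stability: Pearson r between the two testing sessions.
stability, _ = stats.pearsonr(aft_time1, aft_time2)

def partial_corr(x, y, z):
    """Correlation of x and y after removing the variance each shares with z."""
    # Residualize x and y on z with ordinary least squares, then correlate the residuals.
    res_x = x - np.polyval(np.polyfit(z, x, 1), z)
    res_y = y - np.polyval(np.polyfit(z, y, 1), z)
    return stats.pearsonr(res_x, res_y)[0]

# Fluency-inhibition association before and after controlling for processing speed.
zero_order = stats.pearsonr(aft_time1, inhibition)[0]
controlled = partial_corr(aft_time1, inhibition, processing_speed)

print(f"stability r = {stability:.2f}, zero-order r = {zero_order:.2f}, partial r = {controlled:.2f}")
```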