1.
TechTrends ; 67(2): 285-293, 2023.
Article in English | MEDLINE | ID: mdl-36711121

ABSTRACT

Asynchronous discussions are a popular feature in online higher education as they enable instructor-student and student-student interactions at the users' own time and pace. AI-driven discussion platforms are designed to relieve instructors of automatable tasks, e.g., low-stakes grading and post moderation. Our study investigated the validity of an AI-generated score compared to human-driven methods of evaluating student effort and the impact of instructor interaction on students' discussion post quality. A series of within-subjects MANOVAs was conducted on 14,599 discussion posts among over 800 students across four classes to measure post 'curiosity score' (i.e., an AI-generated metric of post quality) and word count. After checking assumptions, one MANOVA was run for each type of instructor interaction: private coaching, public praising, and public featuring. Instructor coaching appears to impact curiosity scores and word count, with later posts being an average of 40 words longer and scoring an average of 15 points higher than the original post that received instructor coaching. AI-driven tools appear to free up time for more creative human interventions, particularly among instructors teaching high-enrollment classes, where a traditional discussion forum is less scalable.
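The pre/post effect described above can be sketched with synthetic data. This is a minimal illustration only: the sample sizes, score scales, and effect magnitudes are hypothetical stand-ins, and a paired t-test per dependent variable is used as a simplified proxy for the study's within-subjects MANOVAs.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 800  # hypothetical number of coached students

# Hypothetical paired measurements: each student's post before and
# after receiving private instructor coaching.
words_before = rng.normal(120, 30, n)
words_after = words_before + rng.normal(40, 20, n)   # ~40 words longer
score_before = rng.normal(60, 10, n)
score_after = score_before + rng.normal(15, 8, n)    # ~15 points higher

# One paired test per dependent variable (word count, curiosity score),
# a simplified stand-in for a single within-subjects MANOVA.
for name, before, after in [("word count", words_before, words_after),
                            ("curiosity score", score_before, score_after)]:
    t, p = ttest_rel(after, before)
    print(f"{name}: mean difference = {np.mean(after - before):.1f}, p = {p:.3g}")
```

With paired data of this size, even modest per-student gains yield very small p-values; the study's MANOVAs additionally account for the correlation between the two dependent variables.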

2.
Educ Technol Res Dev ; 69(1): 35-38, 2021.
Article in English | MEDLINE | ID: mdl-33223780

ABSTRACT

From a design perspective, this paper offers a response to the impact, value, and application of a manuscript published by Philipsen et al. (Improving teacher professional development for online and blended learning: A systematic meta-aggregative review. Educational Technology Research and Development, 67, 1145-1174. 10.1007/s11423-019-09645-8, 2019). Philipsen et al. (2019) reviewed what constitutes an effective teacher professional development (TPD) program for online and blended learning (OBL); our response focuses on the review's value and application in light of the emergency shift to digital teaching during a global pandemic. The paper then examines limitations of previous research on the subject and identifies future research opportunities to investigate components that inform the design of resilient, scalable TPD for OBL.

3.
PLoS One ; 15(1): e0227540, 2020.
Article in English | MEDLINE | ID: mdl-31995580

ABSTRACT

An increasing number of citizen science water monitoring programs are continuously collecting water quality data on streams throughout the United States. When collected under quality assurance protocols, this type of monitoring data can be extremely valuable to scientists and professional agencies, but in some cases it has been of limited use due to concerns about the accuracy of data collected by volunteers. Although a growing body of studies attempts to address these accuracy concerns by comparing volunteer data to professional data, this has rarely been done with large-scale datasets generated by citizen scientists. This study assesses the relative accuracy of volunteer water quality data collected by the Texas Stream Team (TST) citizen science program from 1992-2016 across the State of Texas by comparing it to professional data from corresponding stations during the same period. Because existing data were used, sampling times and protocols were not controlled for; professional and volunteer comparisons were therefore restricted to samples collected at stations within 60 meters of one another and during the same year. Results from the statewide TST dataset, based on 82 separate station/year ANOVAs, demonstrate that large-scale, existing volunteer and professional data with unpaired samples can show agreement of ~80% across all analyzed parameters (DO = 77%, pH = 79%, conductivity = 85%). To assess whether limiting variation within the source datasets increased the level of agreement between volunteers and professionals, data were also analyzed at a local scale. Data from a single partner city, with tighter controls on sampling times and locations and correction of a systematic bias in DO, showed an even greater agreement of 91% overall from 2009-2017 (DO = 91%, pH = 83%, conductivity = 100%).
An experimental sampling dataset was analyzed and yielded similar results, indicating that existing datasets can be as accurate as experimental datasets designed with researcher supervision. Our findings underscore the reliability of large-scale citizen science monitoring datasets already in existence, and their potential value to scientific research and water management programs.
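A single station/year comparison of the kind described above can be sketched as follows. The dissolved-oxygen values, sample sizes, and significance threshold here are hypothetical; the study ran 82 such station/year ANOVAs across parameters and stations.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Hypothetical dissolved-oxygen samples (mg/L) at one station in one
# year; samples are unpaired, as in the statewide comparison.
volunteer = rng.normal(7.8, 0.6, 12)
professional = rng.normal(7.9, 0.5, 10)

# One-way ANOVA for this station/year and parameter; agreement is
# declared when the volunteer and professional means do not differ
# significantly at the chosen threshold.
f_stat, p_value = f_oneway(volunteer, professional)
verdict = "agree" if p_value > 0.05 else "differ"
print(f"F = {f_stat:.2f}, p = {p_value:.3f} -> {verdict}")
```

The reported agreement percentages correspond to the share of such station/year tests in which no significant difference was found.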


Subject(s)
Citizen Science/statistics & numerical data, Environmental Monitoring/statistics & numerical data, Volunteers/statistics & numerical data, Water, Conservation of Natural Resources, Humans