Sci Rep; 14(1): 23285, 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39375385

ABSTRACT

This study evaluates the performance of ChatGPT variants, GPT-3.5 and GPT-4, both with and without prompt engineering, against solely student work and a mixed category containing both student and GPT-4 contributions in university-level physics coding assignments using the Python language. Comparing 50 student submissions to 50 AI-generated submissions across different categories, each marked blindly by three independent markers, we amassed n = 300 data points. Students averaged 91.9% (SE: 0.4), surpassing the highest performing AI submission category, GPT-4 with prompt engineering, which scored 81.1% (SE: 0.8), a statistically significant difference (p = 2.482 × 10⁻¹⁰). Prompt engineering significantly improved scores for both GPT-4 (p = 1.661 × 10⁻⁴) and GPT-3.5 (p = 4.967 × 10⁻⁹). Additionally, the blinded markers were tasked with guessing the authorship of the submissions on a four-point Likert scale from 'Definitely AI' to 'Definitely Human'. They accurately identified the authorship, with 92.1% of the work categorized as 'Definitely Human' being human-authored. Simplifying this to a binary 'AI' or 'Human' categorization resulted in an average accuracy rate of 85.3%. These findings suggest that while AI-generated work closely approaches the quality of university students' work, it often remains detectable by human evaluators.
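The abstract does not state which significance test produced the quoted p-values, so the following is only a minimal sketch of how such a comparison of mark distributions could be run in Python (the language of the assignments studied), assuming an independent two-sample Welch's t-test and using made-up placeholder marks rather than the study's data.

    # Sketch only: hypothetical marks, not the study's dataset; the actual
    # statistical method used by the authors is not specified in the abstract.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Placeholder mark distributions (%) for the two categories being compared
    student_marks = rng.normal(loc=91.9, scale=4.0, size=150)   # illustrative spread
    gpt4_pe_marks = rng.normal(loc=81.1, scale=8.0, size=150)   # illustrative spread

    # Welch's t-test (does not assume equal variances between groups)
    t_stat, p_value = stats.ttest_ind(student_marks, gpt4_pe_marks, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3e}")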


Subjects
Students, Humans, Universities