Results 1 - 2 of 2
1.
PLoS One; 19(5): e0297804, 2024.
Article in English | MEDLINE | ID: mdl-38718042

ABSTRACT

Artificial Intelligence (AI) chatbots have emerged as powerful tools in modern academic work, presenting both opportunities and challenges in the learning landscape. They can provide content and analysis across most academic disciplines, but they differ significantly in the accuracy of their conclusions, the quality of their explanations, and the length of their responses. This study examines four AI chatbots, GPT-3.5, GPT-4, Bard, and LLaMA 2, for accuracy of conclusions and quality of explanations in the context of university-level economics. Using Bloom's taxonomy of cognitive learning complexity as a guiding framework, the study confronts the four chatbots with a standard test of university-level understanding of economics as well as more advanced economics problems. The null hypothesis that all AI chatbots perform equally well on prompts probing understanding of economics is rejected: significant differences are observed across the four chatbots, and these differences widen as the complexity of the economics-related prompts increases. These findings are relevant to both students and educators: students can choose the most appropriate chatbot to better understand economics concepts and reasoning, while educators can design instruction and assessment with an awareness of the support and resources students can access through AI chatbot platforms.
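The abstract does not state which statistical test underlies the rejection of the null hypothesis. As an illustration only, the sketch below shows one conventional way such a comparison could be run: a chi-square test of homogeneity on counts of correct and incorrect conclusions per chatbot. The counts are invented for demonstration and are not the paper's data.

```python
# Illustrative sketch only: the abstract does not report the paper's actual
# test or data, so the counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: chatbots; columns: [correct conclusions, incorrect conclusions]
# on a hypothetical set of 50 economics prompts per model.
counts = [
    [45, 5],   # GPT-4
    [38, 12],  # GPT-3.5
    [33, 17],  # Bard
    [27, 23],  # LLaMA 2
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value would reject the null hypothesis that all chatbots have
# the same probability of producing a correct conclusion.
```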


Subjects
Artificial Intelligence, Humans, Economics, Universities, Students/psychology, Learning, Male, Female
2.
PLoS One; 16(8): e0254340, 2021.
Article in English | MEDLINE | ID: mdl-34347794

ABSTRACT

The COVID-19 pandemic has compelled the majority of schools and universities around the world to switch to remote teaching. One of the greatest challenges in online education is preserving the academic integrity of student assessments. The lack of direct supervision by instructors during final examinations poses a significant risk of academic misconduct. In this paper, we propose a new approach to detecting potential cases of cheating on the final exam using machine learning techniques. We treat the identification of potential cheating cases as an outlier detection problem, using students' continuous assessment results to identify abnormal scores on the final exam. However, unlike a standard outlier detection task in machine learning, student assessment data must be handled with its sequential nature in mind. We address this by applying recurrent neural networks together with anomaly detection algorithms. Numerical experiments on a range of datasets show that the proposed method achieves a remarkably high level of accuracy in detecting cases of cheating on the exam. We believe the proposed method would be an effective tool for academics and administrators interested in preserving the academic integrity of course assessments.
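The abstract names recurrent neural networks combined with anomaly detection but gives no implementation details. The following is a minimal sketch, assuming one plausible formulation: a small LSTM predicts the final-exam score from the sequence of continuous-assessment scores, and students whose actual score sits far above the prediction are flagged. The architecture, threshold, and synthetic data are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: flag final-exam scores that are anomalously high
# relative to what an LSTM predicts from the student's continuous-assessment
# history. All choices below are illustrative assumptions.
import torch
import torch.nn as nn

class ScorePredictor(nn.Module):
    """LSTM mapping a sequence of continuous-assessment scores to a predicted final-exam score."""
    def __init__(self, hidden_size=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                      # x: (batch, seq_len, 1)
        _, (h_n, _) = self.lstm(x)             # h_n: (1, batch, hidden)
        return self.head(h_n[-1]).squeeze(-1)  # (batch,)

def flag_outliers(model, sequences, finals, z_threshold=2.5):
    """Flag students whose final-exam score is unusually far above the model's prediction."""
    model.eval()
    with torch.no_grad():
        preds = model(sequences)
    residuals = finals - preds                          # positive = over-performance
    z = (residuals - residuals.mean()) / residuals.std()
    return (z > z_threshold).nonzero(as_tuple=True)[0]  # indices of flagged students

# Toy usage with synthetic data (scores scaled to [0, 1]).
torch.manual_seed(0)
n_students, seq_len = 200, 6
sequences = torch.rand(n_students, seq_len, 1)
finals = sequences.mean(dim=(1, 2)) + 0.05 * torch.randn(n_students)
finals[0] = sequences[0].mean() + 0.6                   # inject one suspicious jump

model = ScorePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(200):                                    # brief training loop
    optimizer.zero_grad()
    loss = loss_fn(model(sequences), finals)
    loss.backward()
    optimizer.step()

print("Flagged student indices:", flag_outliers(model, sequences, finals).tolist())
```

In this sketch only unusually high residuals are flagged, since the concern is unexpectedly strong final-exam performance relative to a student's continuous-assessment history; a two-sided rule could be used instead if under-performance were also of interest.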


Subjects
Distance Education, Educational Assessment, Fraud, Lie Detection, Machine Learning, Algorithms, COVID-19/epidemiology, Datasets as Topic, Deception, Distance Education/methods, Distance Education/organization & administration, Educational Assessment/methods, Educational Assessment/standards, Humans, Theoretical Models, Pandemics, SARS-CoV-2, Universities