1.
Front Robot AI; 9: 782134, 2022.
Article in English | MEDLINE | ID: mdl-35669290

ABSTRACT

Human-agent teaming (HAT) is becoming more commonplace across industry, military, and consumer settings. Agents are becoming more advanced, more integrated, and more responsible for tasks previously assigned to humans. In addition, human-agent teaming is evolving from a dyadic one-to-one pairing to a one-to-many arrangement, in which the human works with numerous agents to accomplish a task. As capabilities become more advanced and humanlike, the best method for humans and agents to coordinate effectively is still unknown. Current research must therefore shift its focus from how many agents a human can manage to how agents and humans can work together effectively. Levels of autonomy (LOAs), that is, varying degrees of responsibility given to the agents, implemented specifically in the decision-making process, could potentially address some of the issues related to workload, stress, performance, and trust. This study sought to explore the effects of different LOAs on human-machine team coordination, performance, trust, and decision making, alongside assessments of operator workload and stress, in a simulated multi-unmanned aerial vehicle (UAV) intelligence, surveillance, and reconnaissance (ISR) task. The results of the study can be used to identify human factors roadblocks to effective HAT and provide guidance for future HAT designs. Additionally, the unique impacts of LOA and autonomous decision making by agents on trust are explored.
