Results 1 - 11 of 11
1.
Front Psychol ; 15: 1302022, 2024.
Article in English | MEDLINE | ID: mdl-38410408

ABSTRACT

A need exists to better understand how to compose fluid teams: teams that are assembled on short notice from members with little to no familiarity with one another, who come together to carry out a time-limited task, and then disband. Due to the ever-increasing complexity of the modern workplace, demand for these types of fluid teams is growing in task domains such as the military, aviation, healthcare, and industry. The aim of this paper is to review the team composition literature to shed light on composition considerations for forming fluid teams.

2.
Front Psychol ; 15: 1327885, 2024.
Article in English | MEDLINE | ID: mdl-38333066

ABSTRACT

Fluid teams are teams that are rapidly assembled from across disciplines or areas of expertise to address a near-term problem. They are typically composed of individuals who have no prior familiarity with one another, who as a team must begin work immediately, and who disband at the completion of the task. Prior research has noted the challenges posed by this unique type of team context. To date, fluid teams have been understudied, yet their relevance and application in the modern workplace is expanding. This Perspective article presents a concise overview of critical research gaps and opportunities to support selection, training, and workplace design for fluid teams.

3.
Hum Factors ; : 187208231189000, 2023 Jul 17.
Article in English | MEDLINE | ID: mdl-37458319

ABSTRACT

OBJECTIVE: We created and validated a scale to measure perceptions of system trustworthiness. BACKGROUND: Several scales exist in the literature that attempt to assess trustworthiness of system referents. However, existing measures suffer from limitations in their development and validation. The current study sought to develop a scale based on theory and methodological rigor. METHOD: We conducted exploratory and confirmatory factor analyses on data from two online studies to develop the System Trustworthiness Scale (STS). Additional analyses explored the manipulation of the factors and assessed convergent and divergent validity. RESULTS: The exploratory factor analyses resulted in a three-factor solution that represented the theoretical constructs of trustworthiness: performance, purpose, and process. Confirmatory factor analyses confirmed the three-factor solution. In addition, correlation and regression analyses demonstrated the scale's divergent and predictive validity. CONCLUSION: The STS is a psychometrically valid and predictive scale for assessing trustworthiness perceptions of system referents. APPLICATIONS: The STS assesses trustworthiness perceptions of systems. Importantly, the scale differentiates performance, purpose, and process constructs and is adaptable to a variety of system referents.

4.
Appl Ergon ; 106: 103858, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35994948

ABSTRACT

Research on human-robot interaction indicates possible differences in trust toward robots that do not exist in human-human interactions. Research on these differences has traditionally focused on performance degradations. The current study sought to explore differences in human-robot and human-human trust interactions using performance, consideration, and morality trustworthiness manipulations, which are based on the ability/performance, benevolence/purpose, and integrity/process manipulations, respectively, of previous research. We used a mixed factorial hierarchical linear model design to explore the effects of trustworthiness manipulations on trustworthiness perceptions, trust intentions, and trust behaviors in a trust game. We found partner (human versus robot) differences across all three trustworthiness perceptions, indicating that biases toward robots may be more expansive than previously thought. Additionally, there were marginal effects of partner differences on trust intentions. Interestingly, there were no differences between partners on trust behaviors. These results indicate that human biases toward robots may be more complex than the literature has considered.


Subject(s)
Robotics , Humans , Trust , Bias , Beneficence
5.
Hum Factors ; : 187208221145261, 2022 Dec 13.
Article in English | MEDLINE | ID: mdl-36511147

ABSTRACT

OBJECTIVE: The effects of asset degradation on trust in human-swarm interaction were investigated through the lens of system-wide trust theory. BACKGROUND: Researchers have begun investigating contextual features that shape human interactions with robotic swarms-systems comprising assets that coordinate behavior based on their nearest neighbors. Recent work has begun investigating how human trust toward swarms is affected by asset degradation through the lens of system-wide trust theory, but these studies have been marked by several limitations. METHOD: In an online study, the current work manipulated asset degradation and measured trust-relevant criteria in a within-subjects design, addressing the limitations of past work. RESULTS: Controlling for swarm performance (i.e., target acquisition), asset degradation and trust (i.e., reliance intentions) in swarms were negatively related. In addition, as degradation increased, perceptions of swarm cohesion, obstacle avoidance, target acquisition, and terrain exploration efficiency decreased; the latter two findings (coupled with the reliance intentions criterion) support the tenets of system-wide trust theory and replicate and extend past work on the effects of asset degradation on trust in swarms. CONCLUSION: Human-swarm interaction is a context in which system-wide trust is relevant, and future work ought to investigate how to calibrate human trust toward swarm systems. APPLICATIONS: Based on these findings, design professionals should prioritize ways to depict swarm performance and system health so that humans neither abandon trust in systems that are still functional nor over-trust systems that are indeed performing poorly.

6.
Front Psychol ; 13: 797443, 2022.
Article in English | MEDLINE | ID: mdl-35432086

ABSTRACT

Two popular models of trustworthiness have garnered support over the years. One has postulated three aspects of trustworthiness as state-based antecedents to trust. Another has been interpreted to comprise two aspects of trustworthiness. Empirical data show support for both models, and debate remains as to the theoretical and practical reasons researchers may adopt one model over the other. The present research aimed to address this debate by investigating the factor structure of trustworthiness. Taking items from two scales commonly employed to assess trustworthiness, we leveraged structural equation modeling to explore which theoretical model is supported by the data in an organizational trust context. We considered an array of first-order, second-order, and bifactor models. The best-fitting model was a bifactor model comprising one general trustworthiness factor and ability, benevolence, and integrity grouping factors. This model was determined to be essentially unidimensional, though this is qualified by the finding that the grouping variables accounted for significant variance in several organizational outcome criteria. These results suggest that respondents typically employ a general factor when responding to items assessing trustworthiness, and researchers may be better served by treating the construct as unidimensional or engaging in scale parceling in their models to reflect this response tendency more accurately. However, the substantial variance accounted for by the grouping variables in hierarchical regression suggests there may be contexts in which it would be acceptable to consider the theoretical factors of ability, benevolence, and integrity independent of general trustworthiness.

7.
Front Psychol ; 12: 589585, 2021.
Article in English | MEDLINE | ID: mdl-34122209

ABSTRACT

Researchers are beginning to transition from studying human-automation interaction to human-autonomy teaming. This distinction has been highlighted in recent literature, and theoretical reasons why the psychological experience of humans interacting with autonomy may vary and affect subsequent collaboration outcomes are beginning to emerge (de Visser et al., 2018; Wynne and Lyons, 2018). In this review, we take a deep dive into human-autonomy teams (HATs) by explaining the differences between automation and autonomy and by examining the domain of human-human teaming to extrapolate a few core factors that could have relevance for HATs. Notably, these factors involve critical social elements within teams that are central (as argued in this review) for HATs. We conclude by highlighting some research gaps that researchers should strive toward answering, which will ultimately facilitate a more nuanced and complete understanding of HATs in a variety of real-world contexts.

8.
Appl Ergon ; 93: 103350, 2021 May.
Article in English | MEDLINE | ID: mdl-33529968

ABSTRACT

There is sparse research directly investigating the effects of trust manipulations in human-human and human-robot interactions. Moreover, studies on human-human versus human-robot trust have leveraged unusual or low vulnerability contexts to investigate such effects and have focused mostly on robot performance. In the present research, we seek to remedy these limitations and compare trust in human-human versus human-robot collaborations in an augmented and adapted version of the Trust Game. We used a mixed factorial design to examine the effects of trust and trust violations on human-human and human-robot interactions over time with an emphasis on anthropomorphic robots in a social context. We found consistent and significant effects of partner behavior. Specifically, partner distrust behaviors led to participants' lower levels of trustworthiness perceptions, trust intentions, and trust behaviors over time compared to partner trust behaviors. We found no significant effect of partnering with a human versus an anthropomorphic robot over time across the three dependent variables, supporting the computers as social actors (CASA; Nass and Moon, 2000) paradigm. This study demonstrated that there may be instances where the effects of trust violations from an anthropomorphized robot partner are not meaningfully different from those of a human partner in a social context.


Subject(s)
Robotics , Humans , Intention , Interpersonal Relations , Social Environment , Trust
9.
J Psychol ; 153(7): 732-757, 2019.
Article in English | MEDLINE | ID: mdl-31112108

ABSTRACT

The current study investigated the role of trustworthiness perceptions at the individual level and collective efficacy at the team level on team performance in computer-mediated teams using multi-level structural equation modeling (MSEM). It was hypothesized that trustworthiness perceptions and collective efficacy would predict team performance, and that collective efficacy would partially mediate the trustworthiness-performance relationship in computer-mediated teams. Sixty-four teams (five participants each) engaged in a computer-mediated task across two experimental sessions. Trustworthiness measured after session 1, collective efficacy measured after sessions 1 and 2, and team performance measured during sessions 1 and 2 were used to build the MSEM. The half-longitudinal model for assessing mediation was used to examine the influence of trustworthiness perceptions on performance through collective efficacy over time. Results demonstrated support for the hypothesized model, such that trustworthiness perceptions demonstrated indirect effects on performance through collective efficacy. These findings extend past research by identifying an emergent mechanism by which trustworthiness is important for team performance in computer-mediated teams.


Subject(s)
Computers , Group Processes , Trust/psychology , Adolescent , Adult , Female , Humans , Male , Middle Aged , Young Adult
10.
Appl Ergon ; 70: 182-193, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29866310

ABSTRACT

Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature.


Subject(s)
Heuristics , Software/standards , Trust/psychology , Adolescent , Adult , Female , Humans , Male , Middle Aged , Perception , Quality Control , Time Factors , Young Adult
11.
Appetite ; 58(3): 1106-8, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22369957

ABSTRACT

Individual differences in human food neophobia (the reluctance to try novel foods) and food neophilia (the overt willingness to try novel foods) influence the evaluation of tastes and odors, as well as the sampling of such stimuli. Past research also notes an association between food neophobia and PTC sensitivity, body weight, and cephalic phase salivary response. The present study assessed physiological reactions of food neophobics and neophilics to pictures of food and non-food stimuli. Stimulus pictures were presented in random order on a computer screen for a period of 5 min. No significant differences were found between the groups in relation to non-food stimuli. However, pulse, GSR, and respirations were significantly increased in food neophobics when they were presented with pictures of food stimuli. Thus, further evidence is provided to support a physiological component at least partially responsible for differences noted between neophobics and neophilics in sensitivity, psychophysical ratings, and "willingness to try" personality. Such a component may also lead to differences in weight, nutrition, and overall health.


Subject(s)
Diet/psychology , Food Preferences/physiology , Galvanic Skin Response , Heart Rate , Personality , Respiration , Visual Perception , Adult , Cues , Female , Food Preferences/psychology , Humans , Male , Odorants , Phobic Disorders , Pulse , Taste , Young Adult