Results 1 - 4 of 4
1.
Front Psychol ; 11: 1321, 2020.
Article in English | MEDLINE | ID: mdl-32636786

ABSTRACT

This study uses latent semantic analysis (LSA) to explore how prevalent measures of motivation are interpreted across very diverse job types. Building on the Semantic Theory of Survey Response (STSR), we calculate "semantic compliance" as the degree to which an individual's responses follow a semantically predictable pattern. This allows us to examine how context, in the form of job type, influences respondent interpretations of items. In total, 399 respondents from 18 widely different job types (from CEOs through lawyers, priests and artists to sex workers and professional soldiers) self-rated their work motivation on eight commonly applied scales from research on motivation. A second sample served as an external evaluation panel (n = 30) and rated the 18 job types across eight job characteristics. Independent measures of the job types' salary levels were obtained from national statistics. The findings indicate that while job type predicts motivational score levels significantly, semantic compliance as moderated by job type also predicts motivational score levels, usually at a smaller but significant magnitude. Combined, semantic compliance and job type explained up to 41% of the differences in motivational score levels. The variation in semantic compliance was also significantly related to job characteristics as rated by the external panel, and to national income levels. Our findings indicate that people in different contexts interpret items differently, to a degree that substantially affects their score levels. We discuss how future measurements of motivation may improve by taking semantic compliance and the STSR perspective into consideration.
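The "semantic compliance" idea described in this abstract can be sketched in a few lines. This is a hypothetical formulation (the paper's exact computation may differ): each item gets a toy vector standing in for its LSA representation, a respondent's answer to each item is predicted from their other answers weighted by item-text similarity, and compliance is the correlation between predicted and observed answers. All vectors and responses below are invented for illustration.

```python
# Toy sketch of "semantic compliance": predict each response from the
# respondent's other responses, weighted by semantic similarity of the
# item texts, then correlate predictions with observed responses.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def semantic_compliance(responses, item_vectors):
    """Correlate observed responses with similarity-weighted predictions
    built from the respondent's remaining responses."""
    predicted = []
    for j, vj in enumerate(item_vectors):
        num = den = 0.0
        for k, vk in enumerate(item_vectors):
            if k != j:
                w = cosine(vj, vk)
                num += w * responses[k]
                den += abs(w)
        predicted.append(num / den)
    return pearson(responses, predicted)

# Two semantic clusters of items (toy vectors standing in for LSA output).
vectors = [(1.0, 0.0), (0.95, 0.1), (0.0, 1.0), (0.1, 0.95)]
consistent = [5, 5, 1, 1]    # answers that follow the semantic structure
inconsistent = [5, 1, 5, 1]  # answers that cut across it
print(round(semantic_compliance(consistent, vectors), 3))
print(round(semantic_compliance(inconsistent, vectors), 3))
```

On this toy data the semantically consistent answer pattern scores markedly higher than the inconsistent one, which is the contrast the compliance measure is meant to capture.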

2.
PLoS One ; 13(12): e0207643, 2018.
Article in English | MEDLINE | ID: mdl-30517132

ABSTRACT

Research on sensemaking in organisations and on linguistic relativity suggests that speakers of the same language may use this language in different ways to construct social realities at work. We apply the semantic theory of survey response (STSR) to explore such differences in quantitative survey research. Using text analysis algorithms, we have studied how language from three media domains (the business press, PR Newswire and general newspapers) has differential explanatory value for analysing survey responses in leadership research. We projected well-known surveys measuring leadership, motivation and outcomes into large text samples from these three media domains and found significantly different impacts on survey responses. Business press language was best at explaining leadership-related items, PR language was best at explaining organizational results, and "ordinary" newspaper language seemed to explain the relationships among motivation items. These findings shed light on how different public arenas construct organizational realities in different ways, and on the methodological consequences these differences have for research on leadership.
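The corpus-dependence this abstract reports can be illustrated with a minimal stand-in for the projection step: represent each word by its occurrence profile across a corpus's documents, represent an item as the sum of its words' profiles, and compare items by cosine. The same item pair then gets different similarities in different corpora. The corpora and items below are invented for the example, and this bag-of-words scheme is only a rough proxy for the text analysis algorithms the study used.

```python
# Toy illustration of corpus-dependent item similarity: the same two survey
# items are compared inside two different (invented) text corpora.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def word_profiles(corpus):
    """Map each word to its count profile across the corpus documents."""
    vocab = {w for doc in corpus for w in doc.split()}
    return {w: [doc.split().count(w) for doc in corpus] for w in vocab}

def item_similarity(item_a, item_b, corpus):
    """Cosine between items, each summed from its words' corpus profiles."""
    profiles = word_profiles(corpus)
    n = len(corpus)
    vectors = []
    for item in (item_a, item_b):
        vec = [0.0] * n
        for w in item.split():
            for i, x in enumerate(profiles.get(w, [0.0] * n)):
                vec[i] += x
        vectors.append(vec)
    return cosine(*vectors)

business = ["the leader sets strategy and vision",
            "the leader drives company results",
            "the manager motivates the team"]
general = ["the weather was pleasant today",
           "local leader opens community garden",
           "team spirit at the local fair"]

item_a = "my leader communicates a clear vision"
item_b = "my team delivers good results"

print(round(item_similarity(item_a, item_b, business), 3))
print(round(item_similarity(item_a, item_b, general), 3))
```

In the business-flavoured corpus the leadership item and the results item land near each other; in the general corpus they do not, mirroring the domain-specific explanatory value the abstract describes.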


Subject(s)
Algorithms , Communications Media , Leadership , Semantics , Adult , Female , Humans , Language , Linguistics , Male , Middle Aged , Models, Organizational , Motivation , Social Behavior , Surveys and Questionnaires
3.
Behav Res Methods ; 50(6): 2345-2365, 2018 12.
Article in English | MEDLINE | ID: mdl-29330764

ABSTRACT

The traditional understanding of data from Likert scales is that the quantifications involved result from measures of attitude strength. Applying a recently proposed semantic theory of survey response, we claim that survey responses tap two different sources: a mixture of attitudes plus the semantic structure of the survey. Exploring the degree to which individual responses are influenced by semantics, we hypothesized that in many cases, information about attitude strength is actually filtered out as noise in the commonly used correlation matrix. We developed a procedure to separate the semantic influence from attitude strength in individual response patterns, and compared these results to, respectively, the observed sample correlation matrices and the semantic similarity structures arising from text analysis algorithms. This was done with four datasets, comprising a total of 7,787 subjects and 27,461,502 observed item pair responses. As we argued, attitude strength seemed to account for much information about the individual respondents. However, this information did not seem to carry over into the observed sample correlation matrices, which instead converged around the semantic structures offered by the survey items. This is potentially disturbing for the traditional understanding of what survey data represent. We argue that this approach contributes to a better understanding of the cognitive processes involved in survey responses. In turn, this could help us make better use of the data that such methods provide.
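The claim that sample correlation matrices "converge around the semantic structures" can be sketched by correlating the upper triangle of an observed inter-item correlation matrix with the upper triangle of a semantic similarity matrix. Everything below (the similarity values and the response data) is invented for illustration; the paper's procedure for separating semantic influence from attitude strength is more involved.

```python
# Toy comparison of an observed inter-item correlation matrix with a
# semantic similarity matrix, via their upper-triangle entries.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_matrix(responses):
    """Inter-item Pearson correlations from respondents-by-items data."""
    n_items = len(responses[0])
    cols = [[row[i] for row in responses] for i in range(n_items)]
    return [[pearson(cols[i], cols[j]) for j in range(n_items)]
            for i in range(n_items)]

def upper_triangle(m):
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

# Hypothetical semantic similarities for three items (items 1 and 2 close).
semantic = [[1.0, 0.9, 0.1],
            [0.9, 1.0, 0.2],
            [0.1, 0.2, 1.0]]

# Toy responses from six respondents, loosely following that structure.
responses = [[5, 5, 1], [4, 5, 2], [2, 2, 4], [1, 2, 5], [3, 3, 3], [5, 4, 1]]

observed = correlation_matrix(responses)
print(round(pearson(upper_triangle(observed), upper_triangle(semantic)), 2))
```

Even with this tiny sample, the ordering of observed item-pair correlations tracks the ordering of the semantic similarities, which is the convergence the abstract reports at scale.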


Subject(s)
Attitude , Semantics , Surveys and Questionnaires/statistics & numerical data , Algorithms , Correlation of Data , Female , Humans , Individuality , Male , Mental Processes
4.
PLoS One ; 9(9): e106361, 2014.
Article in English | MEDLINE | ID: mdl-25184672

ABSTRACT

Some disciplines in the social sciences rely heavily on collecting survey responses to detect empirical relationships among variables. We explored whether these relationships were a priori predictable from the semantic properties of the survey items, using language processing algorithms that are now available as new research methods. Language processing algorithms were used to calculate the semantic similarity among all items in state-of-the-art surveys from Organisational Behaviour research. These surveys covered areas such as transformational leadership, work motivation and work outcomes. This information was used to explain and predict the response patterns from real subjects. Semantic algorithms explained 60-86% of the variance in the response patterns and allowed remarkably precise prediction of survey responses from humans, except in a personality test. Even the relationships between independent variables and their purported dependent variables were accurately predicted. This raises concern about the empirical nature of data collected through some surveys, if the results are already given a priori by the way subjects are asked. Survey response patterns seem heavily determined by semantics, and language algorithms may reveal these patterns before a survey is administered. This study suggests that semantic algorithms are becoming new tools for the social sciences, opening perspectives on survey responses that prevalent psychometric theory cannot explain.
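A "variance explained" figure like the 60-86% reported here can be obtained by regressing the correlation observed for each item pair on the semantic similarity of that pair's texts and computing R². The sketch below uses invented numbers purely to show the mechanics, not the paper's data or exact model.

```python
# Toy version of "variance explained": regress observed item-pair
# correlations on semantic similarity of the item texts; report R^2.

def r_squared(x, y):
    """R^2 of a simple least-squares line predicting y from x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1.0 - ss_res / ss_tot

# Semantic similarity of each item pair vs. the correlation actually
# observed between responses to that pair (illustrative values only).
semantic_similarity = [0.9, 0.7, 0.6, 0.2, 0.1, 0.1]
observed_correlation = [0.8, 0.6, 0.6, 0.3, 0.1, 0.2]

print(round(r_squared(semantic_similarity, observed_correlation), 2))
```

With similarity computed before any data collection, a high R² of this kind is what grounds the abstract's point that response patterns can be anticipated a priori.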


Subject(s)
Data Collection , Semantics , Social Sciences , Humans , Leadership , Motivation , Natural Language Processing , Social Behavior