Results 1 - 20 of 33
1.
Environ Pollut ; 334: 122094, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37392868

ABSTRACT

Artificial turf (AT) is a surfacing material that simulates natural grass using synthetic, mainly plastic, fibers of different shapes, sizes, and properties. AT has spread beyond sports facilities and today shapes many urban landscapes, from private lawns to rooftops and public venues. Despite concerns regarding the impacts of AT, little is known about the release of AT fibers into the natural environment. Here, for the first time, we specifically investigate the presence of AT fibers in river and ocean waters, major conduits for and final destinations of plastic debris transported by water runoff. Our sampling survey showed that AT fibers - composed mainly of polyethylene and polypropylene - can constitute over 15% of the mesoplastic and macroplastic content, suggesting that AT fibers may contribute significantly to plastic pollution. Up to 20,000 fibers a day flowed down the river, and up to 213,200 fibers per km2 were found floating on the sea surface of nearshore areas. Beyond its impacts on urban biodiversity, urban runoff, heat-island formation, and hazardous chemical leaching, AT is a major source of plastic pollution in natural aquatic environments.


Subject(s)
Environmental Pollutants , Water Pollutants, Chemical , Plastics , Water Pollutants, Chemical/analysis , Cities , Environmental Monitoring , Hot Temperature
2.
Mar Pollut Bull ; 191: 114882, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37054479

ABSTRACT

Systematic seafloor surveys are a highly desirable method of marine litter monitoring, but the high cost of seafloor sampling is a non-trivial obstacle. In the present work, we explore the opportunity provided by artisanal trawl fisheries to obtain systematic data on marine litter in the Gulf of Cadiz between 2019 and 2021. Plastic was the most frequent material, with a prevalence of single-use and fishing-related items. Litter densities decreased with increasing distance from shore, with a seasonal migration of the main litter hotspots. Between the pre-lockdown and post-lockdown stages of the COVID-19 pandemic, marine litter density decreased by 65%, likely related to the decline in tourism and outdoor recreational activities. Continuous collaboration by 33% of the local fleet would imply the removal of hundreds of thousands of items each year. The artisanal trawl fishing sector can thus play a unique role in monitoring marine litter on the seabed.


Subject(s)
COVID-19 , Fisheries , Humans , Environmental Monitoring , Communicable Disease Control , Environmental Pollution , Plastics , Waste Products/analysis
3.
Behav Res Methods ; 55(8): 4369-4381, 2023 12.
Article in English | MEDLINE | ID: mdl-36396834

ABSTRACT

Visual analog scales (VASs) are gaining popularity for collecting responses in computer administration of psychometric tests and surveys. The VAS format consists of a line whose endpoints are labeled with the minimum and maximum values it covers; respondents place a mark at their selected location along it. Adding intermediate marks along the line has traditionally been discouraged, but no empirical evidence has ever been produced to show that their absence is beneficial. We report a study that asked respondents to place marks at pre-selected locations on a 100-unit VAS line, first when it only had numerical labels (0 and 100) at its endpoints and then when intermediate locations (from 0 to 100 in steps of 20) were also labeled. The results show that settings are more accurate and more precise when the VAS line has intermediate tick marks: the average absolute error decreased from 3.02 units without intermediate marks to 0.82 units with them. Provision of intermediate tick marks also substantially reduced inter- and intra-individual variability in accuracy and precision: the standard deviation of the absolute error decreased from 0.87 units without tick marks to 0.25 units with them, and the standard deviation of the signed distance to the target decreased from 1.16 units without tick marks to 0.24 units with them. These results prompt the recommendation that VASs be designed with intermediate tick marks along the length of the line.


Subject(s)
Computers , Humans , Visual Analog Scale , Surveys and Questionnaires , Pain Measurement , Psychometrics
4.
Mar Pollut Bull ; 170: 112622, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34146860

ABSTRACT

Microplastic (MP) patterns in a weakly stratified estuary were investigated using a combined approach of observations and modeling. The study was conducted in the Guadalquivir River Estuary, which is of high environmental value yet significantly altered by human activities, and aims to contribute to understanding and quantifying the land-ocean transport of MPs. Mean MP concentrations in the estuary were 0.041 items m-3, with maximum values up to 0.20 items m-3, in agreement with the range reported in other estuaries. Floating polyethylene MPs were predominant. Relationships between increases in MP concentration and local rainfall events were identified in the middle estuary when there were no significant discharges from the head dam. Modeling results reproduced the observations and revealed the effects of tidal straining, density-driven circulation, and river flow-induced circulation on the net transport. Convergence of these transports favors MP trapping in the vicinity of Doñana National Park, overlapping the location of the Estuarine Turbidity Maximum.


Subject(s)
Estuaries , Water Pollutants, Chemical , Environmental Monitoring , Humans , Microplastics , Plastics , Rivers , Spain , Water Pollutants, Chemical/analysis
5.
Behav Res Methods ; 52(5): 2168-2187, 2020 10.
Article in English | MEDLINE | ID: mdl-32232736

ABSTRACT

Adaptive psychophysical methods are widely used for the quick estimation of percentage points (thresholds) on psychometric functions for two-alternative forced-choice (2AFC) tasks. Their use is supported by numerous simulation studies documenting their performance, which have shown that thresholds can be estimated reasonably well when the founding assumptions of these methods hold. One of these assumptions is that the psychometric function is invariant, but empirical evidence is mounting that human performance in 2AFC tasks needs to be described by two different psychometric functions: one that holds when the test stimulus is presented first in the 2AFC trial and a different one that holds when the test is presented second. The same holds when the two presentations are simultaneous at two spatial locations rather than sequential. We re-evaluated the performance of adaptive methods in the presence of these order effects via simulation studies and an empirical study with human observers. The simulation study showed that adaptive methods severely overestimate thresholds under these conditions, and the empirical study corroborated this finding. These results question the validity of threshold estimates obtained with adaptive methods that incorrectly assume the psychometric function to be invariant with presentation order. Alternative ways in which thresholds can be accurately estimated in the presence of order effects are discussed.
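
As a rough illustration of the problem just described (this is not the authors' simulation code), the R sketch below runs a simple 1-up/2-down staircase on a simulated 2AFC observer whose psychometric function shifts with presentation order; the logistic form, the staircase rule, and all parameter values are assumptions made only for this example. The staircase settles at a level that corresponds to neither of the two order-specific 70.7%-correct points.

```r
# A 1-up/2-down staircase run on a simulated 2AFC observer whose psychometric
# function depends on presentation order (illustrative sketch; not the
# authors' code). The logistic form and all parameter values are assumptions.
set.seed(1)

pc <- function(x, pse, slope) 0.5 + 0.5 / (1 + exp(-slope * (x - pse)))  # 2AFC proportion correct

pse_first  <- 0.0   # function in effect when the test stimulus comes first
pse_second <- 1.0   # shifted function when the test stimulus comes second
slope      <- 2

run_staircase <- function(n_trials = 200, start = 4, step = 0.25) {
  x <- start; streak <- 0; levels <- numeric(n_trials)
  for (t in seq_len(n_trials)) {
    pse <- if (runif(1) < 0.5) pse_first else pse_second   # order randomized per trial
    correct <- runif(1) < pc(x, pse, slope)
    levels[t] <- x
    if (correct) {                       # 1-up/2-down rule: down after 2 correct in a row
      streak <- streak + 1
      if (streak == 2) { x <- x - step; streak <- 0 }
    } else { x <- x + step; streak <- 0 }
  }
  mean(tail(levels, 100))                # crude threshold estimate: mean of the last 100 levels
}

p707 <- function(pse) pse - log(1 / (2 * 0.707 - 1) - 1) / slope   # true 70.7%-correct point
c(mean_staircase_estimate    = mean(replicate(500, run_staircase())),
  true_707_point_test_first  = p707(pse_first),
  true_707_point_test_second = p707(pse_second))
```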


Subject(s)
Psychometrics , Psychophysics , Computer Simulation , Humans
6.
Span J Psychol ; 22: E56, 2019 Dec 23.
Article in English | MEDLINE | ID: mdl-31868158

ABSTRACT

Many areas of research require measuring psychometric functions or their descriptors (thresholds, slopes, etc.). Data for this purpose are collected with psychophysical methods of various types, and the interpretation of results is justified by a model of performance grounded in signal detection theory. Decades of research have shown that psychophysical data display features that are incompatible with this framework, questioning the validity of interpretations obtained under it and revealing that psychophysical performance is more complex than the framework entertains. This paper describes the assumptions and formulation of the conventional framework for the two major classes of psychophysical methods (single- and dual-presentation methods) and presents various lines of empirical evidence with which the framework is inconsistent. An alternative framework is then described and shown to account for all the characteristics that the conventional framework regards as anomalies. This alternative process model explicitly separates the sensory, decisional, and response components of performance and represents them via parameters whose estimation characterizes the corresponding processes. Retrospective and prospective evidence of the validity of the alternative framework is also presented. A formal analysis further reveals that some psychophysical methods and response formats are unsuitable for separating the three components of observed performance. Recommendations are thus given regarding practices that should be avoided and those that should be followed to ensure the interpretability of the psychometric function, or of descriptors (detection threshold, difference limen, point of subjective equality, etc.) obtained with shortcut methods that do not require estimation of psychometric functions.


Subject(s)
Psychological Theory , Psychometrics , Psychophysics , Humans , Psychometrics/methods , Psychometrics/standards , Psychophysics/methods , Psychophysics/standards
7.
Span. j. psychol ; 22: e56.1-e56.30, 2019. graf
Article in English | IBECS | ID: ibc-190207

ABSTRACT

Many areas of research require measuring psychometric functions or their descriptors (thresholds, slopes, etc.). Data for this purpose are collected with psychophysical methods of various types, and the interpretation of results is justified by a model of performance grounded in signal detection theory. Decades of research have shown that psychophysical data display features that are incompatible with this framework, questioning the validity of interpretations obtained under it and revealing that psychophysical performance is more complex than the framework entertains. This paper describes the assumptions and formulation of the conventional framework for the two major classes of psychophysical methods (single- and dual-presentation methods) and presents various lines of empirical evidence with which the framework is inconsistent. An alternative framework is then described and shown to account for all the characteristics that the conventional framework regards as anomalies. This alternative process model explicitly separates the sensory, decisional, and response components of performance and represents them via parameters whose estimation characterizes the corresponding processes. Retrospective and prospective evidence of the validity of the alternative framework is also presented. A formal analysis further reveals that some psychophysical methods and response formats are unsuitable for separating the three components of observed performance. Recommendations are thus given regarding practices that should be avoided and those that should be followed to ensure the interpretability of the psychometric function, or of descriptors (detection threshold, difference limen, point of subjective equality, etc.) obtained with shortcut methods that do not require estimation of psychometric functions.


Subject(s)
Humans , Psychological Theory , Psychometrics , Psychophysics , Psychometrics/methods , Psychometrics/standards , Psychophysics/methods , Psychophysics/standards
8.
Front Psychol ; 8: 1142, 2017.
Article in English | MEDLINE | ID: mdl-28747893

ABSTRACT

Psychophysical data from dual-presentation tasks are often collected with the two-alternative forced-choice (2AFC) response format, asking observers to guess when uncertain. For an analytical description of performance, psychometric functions are then fitted to data aggregated across the two orders/positions in which the stimuli were presented. Yet order effects make aggregated data uninterpretable, and the bias with which observers guess when uncertain precludes separating the sensory from the decisional components of performance. A ternary response format in which observers are also allowed to report indecision should fix these problems, but a comparative analysis with the 2AFC format has never been conducted. In addition, fitting ternary data separated by presentation order poses serious challenges. To address these issues, we extended the indecision model of psychophysical performance to accommodate the ternary, 2AFC, and same-different response formats in detection and discrimination tasks. Relevant issues for parameter estimation are also discussed, along with simulation results that document the superiority of the ternary format. These advantages are demonstrated by fitting the indecision model to published detection and discrimination data collected with the ternary, 2AFC, or same-different formats, which had been analyzed differently in the sources. These examples also show that 2AFC data are unsuitable for testing certain types of hypotheses. MATLAB and R routines written for our purposes are available as Supplementary Material, which should help spread the use of the ternary format for dependable collection and interpretation of psychophysical data.
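
As a schematic illustration of the indecision model just described (this is not the supplementary MATLAB/R code), the R sketch below simulates ternary and forced 2AFC responses from the same observer: a normally distributed decision variable is compared against a central indecision region, and undecided trials are resolved by a biased guess under the 2AFC format. The distributional form and all parameter values are assumptions made for this example.

```r
# Schematic indecision model for a dual-presentation task (illustrative sketch;
# not the routines supplied with the paper). Parameter values are arbitrary.
set.seed(1)

simulate_formats <- function(deltas, n = 5000, sigma = 1, bound = 0.6, guess_second = 0.8) {
  t(sapply(deltas, function(d) {
    D <- rnorm(n, mean = d, sd = sigma)          # percept(second) - percept(first)
    ternary <- ifelse(D > bound, "second", ifelse(D < -bound, "first", "undecided"))
    forced  <- ternary
    und <- ternary == "undecided"                # 2AFC: undecided trials become biased guesses
    forced[und] <- ifelse(runif(sum(und)) < guess_second, "second", "first")
    c(p_second_ternary = mean(ternary == "second"),
      p_undecided      = mean(ternary == "undecided"),
      p_second_2afc    = mean(forced == "second"))
  }))
}

deltas <- seq(-2, 2, by = 0.5)
round(cbind(delta = deltas, simulate_formats(deltas)), 3)
```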

9.
Front Psychol ; 7: 1042, 2016.
Article in English | MEDLINE | ID: mdl-27458424

ABSTRACT

Hoekstra et al. (Psychonomic Bulletin & Review, 2014, 21:1157-1164) surveyed the interpretation of confidence intervals (CIs) by first-year students, master students, and researchers using six items expressing misinterpretations of CIs. They asked respondents to answer all items, computed the number of items endorsed, and concluded that misinterpretation of CIs is robust across groups. Their design may have produced this outcome artifactually, for reasons that we describe. This paper first discusses the two interpretations of CIs and, hence, why misinterpretation cannot be inferred from endorsement of some of the items. Next, a re-analysis of Hoekstra et al.'s data reveals some puzzling differences between first-year and master students that demand further investigation. For that purpose, we designed a replication study with an extended questionnaire including two additional items that express correct interpretations of CIs (to compare endorsement of correct vs. nominally incorrect interpretations), and we asked master students to indicate which items they would have omitted had they had the option (to distinguish deliberate endorsement from uninformed endorsement caused by the forced-response format). Results showed that incognizant first-year students endorsed correct and nominally incorrect items identically, revealing that the two item types are not differentially attractive on the surface; in contrast, master students were distinctly more prone to endorsing correct items once their uninformed responses were removed, although they admitted to nescience more often than might have been expected. Implications for teaching practices are discussed.
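
To make the long-run (frequentist) reading of a CI concrete (this example is ours and is unrelated to the questionnaire items), the R sketch below draws many samples, computes a t-based 95% CI from each, and verifies that close to 95% of those intervals contain the fixed true mean; any single realized interval, by contrast, either contains it or does not.

```r
# Long-run coverage of a 95% CI: illustrative example, unrelated to the survey items.
set.seed(1)
true_mean <- 10; n <- 25; reps <- 10000

covered <- replicate(reps, {
  x  <- rnorm(n, mean = true_mean, sd = 3)
  ci <- t.test(x)$conf.int                # t-based 95% CI from this sample
  ci[1] <= true_mean && true_mean <= ci[2]
})
mean(covered)                             # close to 0.95 by construction
```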

10.
Conscious Cogn ; 37: 16-26, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26261896

ABSTRACT

Temporal-order judgment (TOJ) and simultaneity judgment (SJ) tasks are used to study differences in speed of processing across sensory modalities, stimulus types, or experimental conditions. Matthews and Welch (2015) reported that observed performance in SJ and TOJ tasks is superior when visual stimuli are presented in the left visual field (LVF) rather than the right visual field (RVF), revealing an LVF advantage that presumably reflects attentional influences. Because observed performance reflects the interplay of the perceptual and decisional processes involved in carrying out the tasks, analyses that separate out these influences are needed to determine the origin of the LVF advantage. We re-analyzed the data of Matthews and Welch (2015) using a model of performance in SJ and TOJ tasks that separates out these influences. In these analyses, parameter estimates capturing the operation of perceptual processes did not differ between hemifields, whereas parameter estimates capturing the operation of decisional processes did. In line with other evidence, perceptual processing also did not differ between SJ and TOJ tasks. Thus, the LVF advantage occurs with identical speeds of processing in both visual hemifields. If attention is responsible for the LVF advantage, it does not exert its influence via prior entry.


Subject(s)
Attention/physiology , Judgment/physiology , Models, Psychological , Psychomotor Performance/physiology , Visual Fields/physiology , Visual Perception/physiology , Adult , Humans , Time Factors
11.
Atten Percept Psychophys ; 77(5): 1750-66, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25813739

ABSTRACT

Perception of simultaneity and temporal order is studied with simultaneity judgment (SJ) and temporal-order judgment (TOJ) tasks. In the former, observers report whether the presentation of two stimuli was subjectively simultaneous; in the latter, they report which stimulus was subjectively presented first. SJ and TOJ tasks typically give discrepant results, which has prompted the view that performance is mediated by different processes in each task. We examined these discrepancies through a model that yields psychometric functions whose parameters characterize the timing, decisional, and response processes involved in SJ and TOJ tasks. We analyzed 12 data sets from published studies in which both tasks had been used in within-subjects designs, all of which had reported differences in performance across tasks. Fitting the model jointly to data from both tasks, we tested the hypothesis that common timing processes sustain simultaneity and temporal-order judgments, with differences in performance arising from task-dependent decisional and response processes. The results supported this hypothesis and also showed that the model psychometric functions account for aspects of SJ and TOJ data that classical analyses overlook. Implications for research on the perception of simultaneity and temporal order are discussed.


Subject(s)
Judgment/physiology , Time Perception/physiology , Auditory Perception/physiology , Decision Making/physiology , Humans , Models, Psychological , Psychometrics , Psychomotor Performance/physiology , Visual Perception/physiology
12.
Iperception ; 6(6): 2041669515615735, 2015 Dec.
Article in English | MEDLINE | ID: mdl-27551361

ABSTRACT

Research on asynchronous audiovisual speech perception manipulates experimental conditions to observe their effects on synchrony judgments. Probabilistic models establish a link between the sensory and decisional processes underlying such judgments and the observed data, via interpretable parameters that allow testing hypotheses and making inferences about how experimental manipulations affect those processes. Two models of this type have recently been proposed, one based on independent channels and the other using a Bayesian approach. Both models are fitted here to a common data set, followed by an analysis of the interpretation they provide of how the experimental manipulations affected the processes underlying perceived synchrony. The data consist of synchrony judgments as a function of audiovisual offset in a speech stimulus, under four within-subjects manipulations of the quality of the visual component. The Bayesian model could not accommodate asymmetric data, was rejected by goodness-of-fit statistics for 8 of 16 observers, and was found to be nonidentifiable, which renders its parameter estimates uninterpretable. The independent-channels model captured asymmetric data, was rejected for only 1 of 16 observers, and identified how the sensory and decisional processes mediating asynchronous audiovisual speech perception are affected by manipulations that only alter the quality of the visual component of the speech signal.

13.
Behav Res Methods ; 47(1): 147-61, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24788323

ABSTRACT

Omnibus tests of significance in contingency tables use statistics of the chi-square type. When the null hypothesis is rejected, residual analyses are conducted to identify cells in which observed frequencies differ significantly from expected frequencies. Residual analyses are thus conditioned on a significant omnibus test. Conditional approaches have been shown to substantially alter Type I error rates in cases involving t tests conditional on the results of a test of equality of variances, or tests of regression coefficients conditional on the results of tests of heteroscedasticity. We show that residual analyses conditional on a significant omnibus test are also affected by this problem, yielding Type I error rates that can be up to 6 times larger than nominal rates, depending on the size of the table and the form of the marginal distributions. We explored several unconditional approaches in search of a method that maintains the nominal Type I error rate and found that a bootstrap correction for multiple testing achieves this goal. The validity of this approach is documented for two-way contingency tables in the contexts of tests of independence, tests of homogeneity, and fitting psychometric functions. Computer code in MATLAB and R to conduct these analyses is provided as Supplementary Material.
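
The R sketch below is written in the spirit of the unconditional, bootstrap-corrected residual analysis described above; it is not the Supplementary Material code, and the toy table, the number of bootstrap samples, and the use of adjusted standardized residuals are illustrative choices. Cells are flagged when their residual exceeds a critical value taken from the bootstrap null distribution of the maximum absolute residual, which controls the familywise Type I error rate without conditioning on the omnibus test.

```r
# Unconditional residual analysis with a bootstrap correction for multiple
# testing (illustrative sketch; not the Supplementary Material code).
set.seed(1)

adjusted_residuals <- function(tab) {
  n <- sum(tab); rs <- rowSums(tab); cs <- colSums(tab)
  expected <- outer(rs, cs) / n
  (tab - expected) / sqrt(expected * outer(1 - rs / n, 1 - cs / n))
}

bootstrap_critical <- function(tab, B = 2000, alpha = 0.05) {
  n <- sum(tab)
  p_null <- outer(rowSums(tab), colSums(tab)) / n^2     # independence model fitted to the data
  max_abs <- replicate(B, {
    boot <- matrix(rmultinom(1, n, as.vector(p_null)), nrow = nrow(tab))
    max(abs(adjusted_residuals(boot)))
  })
  quantile(max_abs, 1 - alpha)            # familywise critical value for the residuals
}

tab  <- matrix(c(20, 35, 25, 30, 15, 40), nrow = 2, byrow = TRUE)   # toy 2 x 3 table
res  <- adjusted_residuals(tab)
crit <- bootstrap_critical(tab)
list(residuals = round(res, 2), critical = round(unname(crit), 2), flagged = abs(res) > crit)
```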


Subject(s)
Chi-Square Distribution , Computing Methodologies , Multivariate Analysis , Psychometrics/methods , Biometry , Humans , Reproducibility of Results , Systems Analysis
14.
Behav Res Methods ; 45(4): 972-98, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23572250

ABSTRACT

Research on temporal-order perception uses temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks in their binary SJ2 or ternary SJ3 variants. In all cases, two stimuli are presented with some temporal delay and observers judge their temporal relation. Arbitrary psychometric functions are typically fitted to obtain performance measures such as sensitivity or the point of subjective simultaneity, but the parameters of these functions are uninterpretable. We describe routines in MATLAB and R that fit model-based functions whose parameters are interpretable in terms of the processes underlying temporal-order and simultaneity judgments and responses. These functions arise from an independent-channels model assuming exponentially distributed arrival latencies and a trichotomous decision space. Different routines fit data separately for SJ2, SJ3, and TOJ tasks, jointly for any two tasks, or jointly for all three tasks (for the common case in which two or all three tasks were used with the same stimuli and participants). Additional routines provide bootstrap p-values and confidence intervals for the estimated parameters. A further routine obtains performance measures from the fitted functions. An R package for Windows and the source code of the MATLAB and R routines are available as Supplementary Files.
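
As a rough illustration of the kind of model these routines fit (this is a Monte Carlo sketch, not the analytical functions in the Supplementary Files), the R code below draws exponentially distributed arrival latencies for the two stimuli and maps the arrival-time difference onto the three SJ3 responses through a central resolution boundary; all parameter values are arbitrary and given in milliseconds.

```r
# Monte Carlo version of an independent-channels model with exponentially
# distributed arrival latencies and a trichotomous decision space
# (illustrative sketch; parameter values in ms are arbitrary).
set.seed(1)

sj3_proportions <- function(soa, n = 20000, rate1 = 1/40, rate2 = 1/60,
                            tau1 = 20, tau2 = 30, delta = 50) {
  arrival1 <- 0   + tau1 + rexp(n, rate1)   # stimulus 1 presented at time 0
  arrival2 <- soa + tau2 + rexp(n, rate2)   # stimulus 2 presented at time soa
  d <- arrival2 - arrival1                  # positive: stimulus 1 arrived first
  c(first        = mean(d >  delta),        # "stimulus 1 first" judgments
    simultaneous = mean(abs(d) <= delta),
    second       = mean(d < -delta))        # "stimulus 2 first" judgments
}

soas <- seq(-300, 300, by = 100)
cbind(soa = soas, round(t(sapply(soas, sj3_proportions)), 3))
```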


Subject(s)
Logistic Models , Models, Psychological , Psychometrics/methods , Software , Time Perception/physiology , Humans , Judgment , Research Design
15.
Q J Exp Psychol (Hove) ; 66(2): 319-37, 2013.
Article in English | MEDLINE | ID: mdl-22950887

ABSTRACT

Morgan, Dillenburger, Raphael, and Solomon have shown that observers can use different response strategies when unsure of their answer and can thus voluntarily shift the location of the psychometric function estimated with the method of single stimuli (MSS; sometimes also referred to as the single-interval, two-alternative method). They wondered whether MSS could distinguish response bias from a true perceptual effect that would also shift the location of the psychometric function. We demonstrate theoretically that the inability to distinguish response bias from perceptual effects is an inherent shortcoming of MSS, although a three-response format that also includes an "undecided" option may solve the problem under restrictive assumptions whose validity cannot be tested with MSS data. We also show that a proper two-alternative forced-choice (2AFC) task with the three-response format is free of all these problems, so that bias and perceptual effects can easily be separated out. The use of a three-response 2AFC format is essential to eliminate a confound (response bias) in studies of perceptual effects and, hence, to eliminate a threat to the internal validity of research in this area.


Subject(s)
Bias , Discrimination, Psychological/physiology , Perception/physiology , Psychometrics , Choice Behavior , Humans , Models, Psychological , Psychophysics , Uncertainty
16.
Psychon Bull Rev ; 19(5): 820-46, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22829342

ABSTRACT

Research on the perception of temporal order uses either temporal-order judgment (TOJ) tasks or synchrony judgment (SJ) tasks, in both of which two stimuli are presented with some temporal delay and observers judge their temporal relation. Results generally differ across tasks, raising concerns about whether they measure the same processes. We present a model including sensory and decisional parameters that places these tasks in a common framework and allows studying their implications for observed performance. TOJ tasks imply specific decisional components that explain the discrepancy between results obtained with TOJ and SJ tasks. The model is also tested against published data on audiovisual temporal-order judgments, and the fit is satisfactory, although model parameters are more accurately estimated with SJ tasks. Measures of latent point of subjective simultaneity and latent sensitivity are defined that are invariant across tasks because they isolate the sensory parameters governing observed performance, whereas decisional parameters vary across tasks and account for the observed differences between them. Our analyses concur with other evidence advising against the use of TOJ tasks in research on the perception of temporal order.


Subject(s)
Judgment , Time Perception , Acoustic Stimulation , Auditory Perception , Humans , Models, Psychological , Photic Stimulation , Time Factors , Visual Perception
17.
Front Psychol ; 3: 94, 2012.
Article in English | MEDLINE | ID: mdl-22493586

ABSTRACT

Independent-channels models of the perception of temporal order (also referred to as threshold models or perceptual latency models) have been ruled out because two formal properties of these models (monotonicity and parallelism) are not borne out by data from ternary tasks in which observers must judge whether stimulus A was presented before, after, or simultaneously with stimulus B. These models generally assume that observed responses are authentic indicators of unobservable judgments, but blinks, lapses of attention, or errors in pressing the response keys (often, but not only, caused by time pressure when reaction times are being recorded) may make observers misreport their judgments or simply guess a response. We present an extension of independent-channels models that considers response errors, and we show that the extended model produces psychometric functions that need not satisfy monotonicity and parallelism. The model is illustrated by fitting it to data from a published study in which the ternary task was used. The fitted functions describe very accurately the departures from monotonicity and parallelism shown by the data. These characteristics of empirical data are thus consistent with independent-channels models once response errors are taken into consideration. The implications of these results for the analysis and interpretation of temporal-order judgment data are discussed.
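
A minimal, self-contained R sketch of the response-error extension described here (not the authors' code; the error mechanism and all parameter values are illustrative assumptions): with some small probability the observer misreports the judgment by pressing one of the three response keys at random, so the "before" and "after" response proportions no longer reach 0 and 1 at extreme asynchronies and need not be parallel.

```r
# Response-error extension of an independent-channels model (illustrative sketch).
set.seed(1)

sj3_with_errors <- function(soa, epsilon = 0.06, n = 20000,
                            rate1 = 1/40, rate2 = 1/60, delta = 50) {
  d <- (soa + rexp(n, rate2)) - rexp(n, rate1)          # arrival-time difference
  judgment <- ifelse(d > delta, "first", ifelse(d < -delta, "second", "simultaneous"))
  err <- runif(n) < epsilon                             # lapses / key-press errors
  judgment[err] <- sample(c("first", "simultaneous", "second"), sum(err), replace = TRUE)
  c(first        = mean(judgment == "first"),
    simultaneous = mean(judgment == "simultaneous"),
    second       = mean(judgment == "second"))
}

soas <- c(-400, -200, 0, 200, 400)
cbind(soa = soas, round(t(sapply(soas, sj3_with_errors)), 3))
```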

18.
Span J Psychol ; 14(2): 1023-49, 2011 Nov.
Article in English | MEDLINE | ID: mdl-22059346

ABSTRACT

Solving theoretical or empirical issues sometimes involves establishing the equality of two variables assessed with repeated measures. This defies the logic of null hypothesis significance testing, which aims at assessing evidence against the null hypothesis of equality, not for it. In some contexts, equivalence is assessed through regression analysis by testing for zero intercept and unit slope (or simply for unit slope when the regression is forced through the origin). This paper shows that this approach yields highly inflated Type I error rates under the most common sampling models implied in studies of equivalence. We propose an alternative approach based on omnibus tests of equality of means and variances and on subject-by-subject analyses (where applicable), and we show that these tests have adequate Type I error rates and power. The approach is illustrated with a re-analysis of published data from a signal detection theory experiment in which several hypotheses of equivalence had been tested using only regression analysis. Some further errors and inadequacies of the original analyses are described, and further scrutiny of the data contradicts the conclusions reached through the inadequate application of regression analyses.
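
The R sketch below reproduces the gist of the Type I error problem (it is not the authors' simulation code): two repeated measures are generated to be truly equivalent, so every rejection is a false positive; the regression approach tests for unit slope and zero intercept, while, as one simple unconditional alternative, a paired t test of means is combined with a Pitman-Morgan test of equal variances. The specific alternative tests, the Bonferroni split across the two component tests, and all parameter values are illustrative choices, not necessarily those proposed in the paper.

```r
# Truly equivalent repeated measures: any rejection is a false positive
# (illustrative sketch; not the authors' simulation code).
set.seed(1)
reps <- 2000; n <- 50; alpha <- 0.05

reject_regression <- logical(reps); reject_alternative <- logical(reps)
for (i in seq_len(reps)) {
  truth <- rnorm(n)                       # common true scores
  x <- truth + rnorm(n, sd = 0.5)         # two error-contaminated measures of the
  y <- truth + rnorm(n, sd = 0.5)         #   same quantity: equivalent by construction
  coefs   <- summary(lm(y ~ x))$coefficients
  t_slope <- (coefs["x", "Estimate"] - 1) / coefs["x", "Std. Error"]
  p_slope <- 2 * pt(-abs(t_slope), df = n - 2)          # H0: slope = 1
  p_int   <- coefs["(Intercept)", "Pr(>|t|)"]           # H0: intercept = 0
  reject_regression[i] <- p_slope < alpha / 2 || p_int < alpha / 2
  p_means <- t.test(x, y, paired = TRUE)$p.value        # H0: equal means
  p_vars  <- cor.test(x + y, x - y)$p.value             # Pitman-Morgan: H0 equal variances
  reject_alternative[i] <- p_means < alpha / 2 || p_vars < alpha / 2
}
c(regression_approach = mean(reject_regression),        # far above the nominal 0.05
  means_and_variances = mean(reject_alternative))       # close to the nominal 0.05
```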


Subject(s)
Choice Behavior , Mathematical Computing , Psychological Tests/statistics & numerical data , Psychometrics/statistics & numerical data , Signal Detection, Psychological , Software , Analysis of Variance , Computer Simulation , Humans , Likelihood Functions , Linear Models , Models, Statistical , Probability , Reproducibility of Results
19.
Span. j. psychol ; 14(2): 1023-1049, nov. 2011.
Article in English | IBECS | ID: ibc-91242

ABSTRACT

Solving theoretical or empirical issues sometimes involves establishing the equality of two variables assessed with repeated measures. This defies the logic of null hypothesis significance testing, which aims at assessing evidence against the null hypothesis of equality, not for it. In some contexts, equivalence is assessed through regression analysis by testing for zero intercept and unit slope (or simply for unit slope when the regression is forced through the origin). This paper shows that this approach yields highly inflated Type I error rates under the most common sampling models implied in studies of equivalence. We propose an alternative approach based on omnibus tests of equality of means and variances and on subject-by-subject analyses (where applicable), and we show that these tests have adequate Type I error rates and power. The approach is illustrated with a re-analysis of published data from a signal detection theory experiment in which several hypotheses of equivalence had been tested using only regression analysis. Some further errors and inadequacies of the original analyses are described, and further scrutiny of the data contradicts the conclusions reached through the inadequate application of regression analyses.


Solving theoretical or empirical problems sometimes requires testing the equivalence of two variables using repeated measures. Merely stating this goal challenges the logic underlying statistical hypothesis testing, which is designed to assess the magnitude of the evidence against the null hypothesis and in no way allows assessing the evidence in its favor. In some applied contexts the problem has been addressed using regression methods, testing the hypothesis that the slope is 1 and the hypothesis that the intercept is 0 (or only the former when the regression is forced through the origin). This work shows that this strategy yields empirical Type I error rates far above the nominal rates under any of the sampling models most commonly involved in studies of equivalence. As an alternative, we propose a strategy based both on omnibus tests that include contrasts of means and variances and on subject-by-subject analyses (where the situation permits). A simulation study with these tests shows that the empirical Type I error rate matches the nominal rate and that the power of the tests is adequate. As an illustration, these tests are applied to re-analyze the data from a psychophysical contrast-detection experiment whose authors had analyzed them only through regression, even though all the hypotheses considered implied equivalence with repeated measures. Our re-analysis allows a closer inspection of the data that reveals contradictions between the empirical characteristics of the data and the conclusions drawn through the inadequate application of regression methods. The results of this re-analysis also invalidate the conclusions drawn in the original publication.


Subject(s)
Humans , Male , Female , Multivariate Analysis , Statistical Distributions , Hypothesis-Testing , Psychophysics/methods , Psychophysics/trends , Signal Detection, Psychological/physiology , Logistic Models , Psychophysics/organization & administration , Psychophysics/standards
20.
Atten Percept Psychophys ; 73(7): 2332-52, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21735314

ABSTRACT

Proportion correct in two-alternative forced-choice (2AFC) detection tasks often differs depending on whether the stimulus is presented in the first or in the second interval. Reanalysis of published data reveals that these order effects (or interval bias) are strong and prevalent, refuting the standard difference model of signal detection theory. Order effects are commonly regarded as evidence that observers use an off-center criterion under the difference model with bias. We consider an alternative difference model with indecision, whereby observers are occasionally undecided and guess with some bias toward one of the response options. Whether or not the data show order effects, the two models fit 2AFC data indistinguishably, but they yield meaningfully different estimates of sensory parameters. Under indeterminacy as to which model governs 2AFC performance, parameter estimates are suspect and potentially misleading. The indeterminacy can be circumvented by modifying the response format so that observers can express indecision when needed. Reanalysis of published data collected in this way lends support to the indecision model. We illustrate alternative approaches to fitting psychometric functions under the indecision model and discuss designs for 2AFC experiments that improve the accuracy of parameter estimates, whether or not order effects are apparent in the data.
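
The following R sketch illustrates the model indeterminacy described above (it is our own illustration, not the authors' code): a pair of interval-wise 2AFC proportions correct is generated from the difference model with an off-center criterion, and an indecision model with an assumed guessing bias is then fitted to the same two numbers; both models reproduce the observed interval bias, yet they imply different values of d'. All parameter values are illustrative.

```r
# Two models, same 2AFC data, different d' (illustrative sketch).
pc_biased_criterion <- function(dprime, crit)            # difference model with bias
  c(first  = pnorm((dprime - crit) / sqrt(2)),
    second = pnorm((dprime + crit) / sqrt(2)))

pc_indecision <- function(dprime, bound, guess_first) {  # indecision model
  hi <- pnorm(( bound - dprime) / sqrt(2))
  lo <- pnorm((-bound - dprime) / sqrt(2))
  decided <- 1 - hi                    # correct-and-decided probability, same for both orders
  c(first  = decided + guess_first       * (hi - lo),
    second = decided + (1 - guess_first) * (hi - lo))
}

target      <- pc_biased_criterion(dprime = 1.0, crit = 0.4)   # "observed" proportions, d' = 1
guess_first <- 0.3                                             # assumed guessing bias
loss <- function(par) sum((pc_indecision(par[1], exp(par[2]), guess_first) - target)^2)
fit  <- optim(c(1, 0), loss)                                   # fit d' and the indecision bound

rbind(observed   = target,
      indecision = pc_indecision(fit$par[1], exp(fit$par[2]), guess_first))
c(dprime_difference_model = 1.0,
  dprime_indecision_model = fit$par[1])     # about 1.4 here: same data, different d'
```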


Subject(s)
Choice Behavior , Discrimination, Psychological , Perception , Signal Detection, Psychological , Artifacts , Attention , Differential Threshold , Humans , Models, Statistical , Observer Variation , Psychometrics , Psychophysics , Uncertainty