1.
Neuroimage; 273: 120109, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37059157

ABSTRACT

Deep learning (DL) models find increasing application in mental state decoding, where researchers seek to understand the mapping between mental states (e.g., experiencing anger or joy) and brain activity by identifying the spatial and temporal features of brain activity that allow these states to be accurately identified (i.e., decoded). Once a DL model has been trained to accurately decode a set of mental states, neuroimaging researchers often use methods from explainable artificial intelligence research to understand the model's learned mappings between mental states and brain activity. Here, we benchmark prominent explanation methods in a mental state decoding analysis of multiple functional Magnetic Resonance Imaging (fMRI) datasets. Our findings demonstrate a gradient between two key characteristics of an explanation in mental state decoding, namely its faithfulness and its alignment with other empirical evidence on the mapping between brain activity and the decoded mental state: explanation methods with high faithfulness, which capture the model's decision process well, generally provide explanations that align less well with other empirical evidence than the explanations of less faithful methods. Based on our findings, we provide guidance for neuroimaging researchers on how to choose an explanation method to gain insight into the mental state decoding decisions of DL models.
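As a rough illustration of the kind of analysis the abstract describes, the sketch below computes two widely used attribution maps (saliency and integrated gradients, via the captum library) for a stand-in PyTorch decoding model. The decoder architecture, input size, and target label are hypothetical placeholders, not the models or datasets used in the study.

```python
import torch
import torch.nn as nn
from captum.attr import Saliency, IntegratedGradients

# Hypothetical stand-in for a trained mental state decoder:
# maps a flattened fMRI volume (here 10,000 voxels) to 4 mental state classes.
decoder = nn.Sequential(
    nn.Linear(10_000, 256),
    nn.ReLU(),
    nn.Linear(256, 4),
)
decoder.eval()

# One dummy fMRI sample; in practice this would be a preprocessed volume.
volume = torch.randn(1, 10_000, requires_grad=True)
target_state = 2  # index of the decoded mental state

# Gradient-based saliency map for the chosen state.
saliency_map = Saliency(decoder).attribute(volume, target=target_state)

# Integrated gradients: attribution relative to a baseline (here, all zeros).
ig_map = IntegratedGradients(decoder).attribute(
    volume, baselines=torch.zeros_like(volume), target=target_state
)

# Both methods assign one relevance value per voxel; the resulting maps can be
# projected back into brain space and compared with other empirical evidence.
print(saliency_map.shape, ig_map.shape)
```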


Subjects
Brain, Deep Learning, Humans, Brain/diagnostic imaging, Brain Mapping/methods, Artificial Intelligence, Benchmarking, Magnetic Resonance Imaging/methods
2.
Trends Cogn Sci; 26(11): 972-986, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36223760

ABSTRACT

In mental state decoding, researchers aim to identify the set of mental states (e.g., experiencing happiness or fear) that can be reliably identified from the activity patterns of a brain region (or network). Deep learning (DL) models are highly promising for mental state decoding because of their unmatched ability to learn versatile representations of complex data. However, their widespread application in mental state decoding is hindered by their lack of interpretability, by difficulties in applying them to small datasets, and by challenges in ensuring their reproducibility and robustness. We recommend approaching these challenges by leveraging recent advances in explainable artificial intelligence (XAI) and transfer learning, and we also provide recommendations on how to improve the reproducibility and robustness of DL models in mental state decoding.
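As an illustration of the transfer-learning recommendation (not code from the paper), the sketch below fine-tunes only a small classification head on top of a frozen, hypothetically pretrained encoder, a common way to adapt a large DL model to a small neuroimaging dataset; all layer sizes and data are placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical encoder assumed to be pretrained on a large neuroimaging corpus.
pretrained_encoder = nn.Sequential(
    nn.Linear(10_000, 512), nn.ReLU(),
    nn.Linear(512, 128), nn.ReLU(),
)

# Freeze the pretrained weights so the small target dataset only
# has to fit the new, low-capacity classification head.
for param in pretrained_encoder.parameters():
    param.requires_grad = False

head = nn.Linear(128, 3)  # e.g., three mental states in the target study
model = nn.Sequential(pretrained_encoder, head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy small-sample training set: 30 volumes of 10,000 voxels each.
x = torch.randn(30, 10_000)
y = torch.randint(0, 3, (30,))

for _ in range(20):  # a few epochs often suffice in the frozen-encoder setting
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```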


Subjects
Artificial Intelligence, Brain Mapping, Deep Learning, Brain, Humans, Machine Learning, Neuroimaging, Reproducibility of Results
3.
PLoS Comput Biol; 18(7): e1010283, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35793388

ABSTRACT

Choices are influenced by gaze allocation during deliberation, such that fixating an alternative for longer increases the probability of choosing it. Gaze-dependent evidence accumulation provides a parsimonious account of choices, response times, and gaze behaviour in many simple decision scenarios. Here, we test whether this framework can also predict more complex, context-dependent patterns of choice in a three-alternative risky choice task, where choices and eye movements were subject to attraction and compromise effects. Choices were best described by a gaze-dependent evidence accumulation model in which the subjective values of alternatives are discounted while they are not fixated. Finally, we performed a systematic search over a large model space, allowing us to evaluate the relative contribution of different forms of gaze dependence and of additional mechanisms not previously considered by gaze-dependent accumulation models. Gaze dependence remained the most important mechanism, but participants with strong attraction effects employed an additional similarity-dependent inhibition mechanism found in other models of multi-alternative, multi-attribute choice.
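A minimal, illustrative simulation of the core idea (not the authors' fitted model): each alternative accumulates evidence at a rate given by its subjective value, multiplicatively discounted whenever it is not fixated, so alternatives that receive more gaze are chosen more often. All parameter names and values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(values, gaze_shares, theta=0.3, noise=0.05, threshold=1.0):
    """Gaze-discounted race: each alternative accumulates its subjective value,
    scaled down by `theta` while it is not fixated, until one hits the threshold."""
    n = len(values)
    evidence = np.zeros(n)
    steps = 0
    while evidence.max() < threshold:
        # Sample the fixated alternative in proportion to observed gaze shares.
        fixated = rng.choice(n, p=gaze_shares)
        drift = np.where(np.arange(n) == fixated, values, theta * values)
        evidence += 0.01 * drift + noise * rng.standard_normal(n)
        steps += 1
    return int(evidence.argmax()), steps  # choice index and response time (steps)

# Three alternatives with identical values but unequal gaze:
# the longer-fixated alternative wins the race most often.
choices = [simulate_trial(np.full(3, 0.5), np.array([0.5, 0.3, 0.2]))[0]
           for _ in range(2000)]
print(np.bincount(choices, minlength=3) / 2000)
```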


Subjects
Choice Behavior, Eye Movements, Choice Behavior/physiology, Fixation, Ocular, Humans, Probability, Reaction Time/physiology, Risk-Taking
4.
Elife; 10, 2021 Apr 06.
Article in English | MEDLINE | ID: mdl-33821787

ABSTRACT

How do we choose when confronted with many alternatives? There is surprisingly little decision modelling work with large choice sets, despite their prevalence in everyday life. Moreover, there is an apparent disconnect between research on small choice sets, supporting a process of gaze-driven evidence accumulation, and research on larger choice sets, arguing for models of optimal choice, satisficing, and hybrids of the two. Here, we bridge this divide by developing and comparing different versions of these models in a many-alternative value-based choice experiment with 9, 16, 25, or 36 alternatives. We find that human choices are best explained by models incorporating an active effect of gaze on subjective value. A gaze-driven, probabilistic version of satisficing generally provides slightly better fits to choices and response times, while the gaze-driven evidence accumulation and comparison model provides the best overall account of the data when the empirical relation between gaze allocation and choice is also considered.
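The following toy sketch illustrates a gaze-driven, probabilistic satisficing rule of the kind compared in this work: items are inspected one fixation at a time, gaze actively boosts the fixated item's subjective value, and the decision maker stops with a probability that grows with that boosted value. The stopping function, parameters, and fallback rule are arbitrary placeholders, not the fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaze_driven_satisficing(values, gaze_boost=0.2, temperature=5.0,
                            satisficing_level=0.8, max_fixations=50):
    """Fixate items one at a time; stop (and choose the fixated item) with a
    probability that increases with its gaze-boosted subjective value."""
    n = len(values)
    best_item, best_value = 0, -np.inf
    for _ in range(max_fixations):
        i = rng.integers(n)               # fixate a (here random) item
        boosted = values[i] + gaze_boost  # active effect of gaze on value
        if boosted > best_value:
            best_item, best_value = i, boosted
        # Soft, probabilistic threshold instead of a hard satisficing cutoff.
        p_stop = 1.0 / (1.0 + np.exp(-temperature * (boosted - satisficing_level)))
        if rng.random() < p_stop:
            return i
    return best_item                      # fall back to the best item seen so far

item_values = rng.uniform(0, 1, size=36)  # e.g., 36 snack foods on the screen
choices = [gaze_driven_satisficing(item_values) for _ in range(1000)]
print(np.bincount(choices, minlength=36).argmax(), item_values.argmax())
```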


In our everyday lives, we often have to choose between many different options. When deciding what to order off a menu, for example, or what type of soda to buy in the supermarket, we have a range of possibilities to consider. So how do we decide what to go for? Researchers believe we make such choices by assigning a subjective value to each of the available options. But we can do this in several different ways. We could look at every option in turn, and then choose the best one once we have considered them all. This is a so-called 'rational' decision-making approach. But we could also consider each of the options one at a time and stop as soon as we find one that is good enough. This strategy is known as 'satisficing'.

In both approaches, we use our eyes to gather information about the items available. Most scientists have assumed that merely looking at an item, such as a particular brand of soda, does not affect how we feel about that item. But studies in which animals or people choose between much smaller sets of objects, usually up to four, suggest otherwise. The results from these studies indicate that looking at an item makes that item more attractive to the observer, thereby increasing its subjective value.

Thomas et al. now show that gaze also plays an active role in the decision-making process when people are spoilt for choice. Healthy volunteers looked at pictures of up to 36 snack foods on a screen and were asked to select the one they would most like to eat. The researchers then recorded the volunteers' choices and response times, and used eye-tracking technology to follow the direction of their gaze. They then tested which of the various decision-making strategies could best account for all the behaviour.

The results showed that the volunteers' behaviour was best explained by computer models that assumed that looking at an item increases its subjective value. Moreover, the results confirmed that we do not examine all items and then choose the best one. But neither do we use a purely satisficing approach: the volunteers chose the last item they had looked at less than half the time. Instead, we make decisions by comparing individual items against one another, going back and forth between them. The longer we look at an item, the more attractive it becomes, and the more likely we are to choose it.


Subjects
Choice Behavior, Fixation, Ocular, Models, Psychological, Adult, Computational Biology, Female, Humans, Male, Middle Aged, Young Adult
5.
PLoS One; 14(12): e0226428, 2019.
Article in English | MEDLINE | ID: mdl-31841564

ABSTRACT

Recent empirical findings indicate that gaze allocation plays a crucial role in simple decision behaviour. Many of these findings point towards an influence of gaze allocation on the speed of evidence accumulation in an accumulation-to-bound decision process (resulting in generally higher choice probabilities for items that have been looked at longer). Researchers have further shown that the strength of the association between gaze and choice behaviour varies considerably between individuals, encouraging future work to study this association at the individual level. However, few decision models enable a straightforward characterization of the gaze-choice association at the individual level, because of the high cost of developing and implementing them, and the range of available models is particularly limited for choice sets with more than two alternatives. Here, we present GLAMbox, a Python-based toolbox built on PyMC3 that allows easy application of the gaze-weighted linear accumulator model (GLAM) to experimental choice data. The GLAM assumes gaze-dependent evidence accumulation in a linear stochastic race that extends to decision scenarios with many choice alternatives. GLAMbox enables Bayesian parameter estimation of the GLAM for individual, pooled, or hierarchical models, provides an easy-to-use interface for predicting choice behaviour and visualizing choice data, and benefits from all of PyMC3's Bayesian statistical modeling functionality. Further documentation, resources, and the toolbox itself are available at https://glambox.readthedocs.io.
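The following sketch outlines the GLAMbox workflow summarized in the abstract (load choice data, build an individual, pooled, or hierarchical GLAM, fit it with PyMC3-based Bayesian estimation, then predict choices and response times). The method names, argument values, and expected data columns are assumptions and should be checked against the documentation at https://glambox.readthedocs.io.

```python
import pandas as pd
import glambox as gb  # the toolbox described here, built on PyMC3

# Long-format choice data: one row per trial with choice, response time, and
# per-alternative item values and gaze shares (column names are assumed).
data = pd.read_csv("choice_data.csv")

glam = gb.GLAM(data=data)
glam.make_model(kind="individual")  # or "pooled" / "hierarchical"
glam.fit(method="MCMC")             # Bayesian parameter estimation via PyMC3
glam.predict()                      # simulate choices and response times
```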


Subjects
Algorithms, Choice Behavior/physiology, Decision Making/physiology, Fixation, Ocular/physiology, Linear Models, Attention/physiology, Humans, Probability, Reaction Time
6.
Nat Hum Behav; 3(6): 625-635, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30988476

ABSTRACT

How do we make simple choices such as deciding between an apple and an orange? Recent empirical evidence suggests that choice behaviour and gaze allocation are closely linked at the group level, whereby items looked at longer during the decision-making process are more likely to be chosen. However, it is unclear how variable this gaze bias effect is between individuals. Here we investigate this question across four different simple choice experiments and using a computational model that can be easily applied to individuals. We show that an association between gaze and choice is present for most individuals, but differs considerably in strength. Generally, individuals with a strong association between gaze and choice behaviour are worse at choosing the best item from a choice set compared with individuals with a weak association. Accounting for individuals' variability in gaze bias in the model can explain and accurately predict individual differences in choice behaviour.


Subjects
Choice Behavior/physiology, Fixation, Ocular/physiology, Individuality, Models, Theoretical, Adult, Datasets as Topic, Eye Movement Measurements, Humans, Time Factors
7.
Front Neurosci; 13: 1321, 2019.
Article in English | MEDLINE | ID: mdl-31920491

ABSTRACT

The application of deep learning (DL) models to neuroimaging data poses several challenges, due to the high dimensionality, low sample size, and complex temporo-spatial dependency structure of these data. Moreover, DL models often act as black boxes, impeding insight into the association between cognitive state and brain activity. To approach these challenges, we introduce the DeepLight framework, which utilizes long short-term memory (LSTM) based DL models to analyze whole-brain functional Magnetic Resonance Imaging (fMRI) data. To decode a cognitive state (e.g., seeing the image of a house), DeepLight separates an fMRI volume into a sequence of axial brain slices, which is then sequentially processed by an LSTM. To maintain interpretability, DeepLight adapts the layer-wise relevance propagation (LRP) technique, thereby decomposing its decoding decision into the contributions of the individual input voxels. Importantly, the decomposition is performed at the level of single fMRI volumes, enabling DeepLight to study the associations between cognitive state and brain activity at several levels of data granularity, from the group level down to single time points. To demonstrate the versatility of DeepLight, we apply it to a large fMRI dataset of the Human Connectome Project. We show that DeepLight outperforms conventional approaches of uni- and multivariate fMRI analysis in decoding the cognitive states and in identifying the physiologically appropriate brain regions associated with these states. We further demonstrate DeepLight's ability to study the fine-grained temporo-spatial variability of brain activity over sequences of single fMRI samples.
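A simplified sketch of the general architecture described above (not the original implementation, which also includes convolutional feature extraction and LRP-based relevance decomposition): the fMRI volume is split into a sequence of axial slices, each slice is flattened, and the slice sequence is processed by an LSTM whose final hidden state is used to decode the cognitive state. The dimensions below assume an MNI-like 91 x 109 x 91 voxel grid.

```python
import torch
import torch.nn as nn

class SliceSequenceDecoder(nn.Module):
    """Flatten each axial slice of an fMRI volume and process the resulting
    slice sequence with an LSTM to decode a cognitive state."""
    def __init__(self, slice_dim, hidden_dim=128, n_states=4):
        super().__init__()
        self.lstm = nn.LSTM(slice_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, n_states)

    def forward(self, volume):              # volume: (batch, X, Y, Z)
        batch, x, y, z = volume.shape
        # Reorder into a sequence of Z axial slices, each flattened to X*Y voxels.
        slices = volume.permute(0, 3, 1, 2).reshape(batch, z, x * y)
        _, (h_last, _) = self.lstm(slices)
        return self.classifier(h_last[-1])  # logits over cognitive states

# Dummy batch of two volumes on an MNI-like 91 x 109 x 91 voxel grid.
model = SliceSequenceDecoder(slice_dim=91 * 109)
logits = model(torch.randn(2, 91, 109, 91))
print(logits.shape)  # torch.Size([2, 4])
```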
