Results 1 - 20 of 77
1.
Article in English | MEDLINE | ID: mdl-38960910

ABSTRACT

Mentalizing, or theory of mind (ToM), impairments and self-referential hypermentalizing bias are well documented in schizophrenia. However, findings comparing individuals with at-risk mental states (ARMS) are inconsistent, and investigations into the relationship between social cognitive impairments and social anxiety in the two populations are scarce. This study aimed to examine and compare these deficits in first-episode schizophrenia-spectrum disorder (FES) and ARMS, and to explore potential specific associations with neurocognition and symptomatology. Forty patients with FES, 40 individuals with ARMS, and 40 healthy controls (HC) completed clinical assessments, a battery of neurocognitive tasks, and three social cognitive tasks. The comic strip and hinting tasks were used to measure non-verbal and verbal mentalizing abilities, and the gaze perception task was employed to assess self-referential hypermentalizing bias. FES and ARMS showed comparable mentalizing impairments and self-referential hypermentalizing bias relative to HC. However, only ambiguous self-referential gaze perception (SRGP) bias remained significantly different among the three groups after controlling for covariates. These findings suggest that self-referential hypermentalizing bias could be a specific deficit and may be considered a potential behavioral indicator in early-stage and prodromal psychosis. Moreover, working memory and social anxiety were related to the social cognitive impairments in ARMS, whereas higher-order executive functions and positive symptoms were associated with the impairments in FES. The current study indicates the presence of stage-specific mechanisms of mentalizing impairments and self-referential hypermentalizing bias, providing insights into the importance of personalized interventions to improve specific neurocognitive domains, social cognition, and clinical outcomes for FES and ARMS.

3.
Br J Psychol ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858823

ABSTRACT

Explainable AI (XAI) methods provide explanations of AI models, but our understanding of how they compare with human explanations remains limited. Here, we examined human participants' attention strategies when classifying images and when explaining how they classified the images through eye-tracking, and compared their attention strategies with saliency-based explanations from current XAI methods. We found that humans adopted more explorative attention strategies for the explanation task than for the classification task itself. Two representative explanation strategies were identified through clustering: one involved focused visual scanning of foreground objects with more conceptual explanations, which contained more specific information for inferring class labels, whereas the other involved explorative scanning with more visual explanations, which were rated higher in effectiveness for early category learning. Interestingly, XAI saliency-map explanations had the highest similarity to the explorative attention strategy in humans, and explanations highlighting discriminative features derived from observable causality through perturbation had higher similarity to human strategies than those highlighting internal features associated with higher class scores. Thus, humans use both visual and conceptual information during explanation, each serving different purposes, and XAI methods that highlight features informing observable causality match human explanations better, which potentially makes them more accessible to users.
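The saliency-to-attention comparison described above can be made concrete by correlating the two maps. The sketch below is illustrative only (the function names and map sizes are assumptions, not the authors' code): it z-scores an XAI saliency map and a human fixation-density map and computes their Pearson correlation.

```python
import numpy as np

def z_score(m):
    """Standardize a 2-D map to zero mean and unit variance."""
    m = m.astype(float)
    return (m - m.mean()) / (m.std() + 1e-8)

def saliency_similarity(xai_map, human_map):
    """Pearson correlation between an XAI saliency map and a human
    attention (fixation-density) map of the same spatial size."""
    a, b = z_score(xai_map).ravel(), z_score(human_map).ravel()
    return float((a * b).mean())

# Hypothetical usage with stand-in H x W arrays for the two maps.
xai = np.random.rand(240, 320)
human = np.random.rand(240, 320)
print(saliency_similarity(xai, human))
```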

4.
Top Cogn Sci ; 16(3): 349-376, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38781432

ABSTRACT

One important goal of cognitive science is to understand the mind in terms of its representational and computational capacities, where computational modeling plays an essential role in providing theoretical explanations and predictions of human behavior and mental phenomena. In my research, I have been using computational modeling, together with behavioral experiments and cognitive neuroscience methods, to investigate the information processing mechanisms underlying learning and visual cognition in terms of perceptual representation and attention strategy. In perceptual representation, I have used neural network models to understand how the split architecture of the human visual system influences visual cognition, and to examine the development of perceptual representations as a result of expertise. In attention strategy, I have developed the Eye Movement analysis with Hidden Markov Models (EMHMM) method for quantifying eye movement patterns and consistency using both spatial and temporal information, which has led to novel findings across disciplines that were not discoverable using traditional methods. By integrating it with deep neural networks (DNN), I have developed DNN+HMM to account for eye movement strategy learning in human visual cognition. Understanding the human mind through computational modeling also facilitates research on artificial intelligence's (AI) comparability with human cognition, which can in turn help explainable AI systems infer humans' beliefs about AI's operations and provide human-centered explanations that enhance human-AI interaction and mutual understanding. Together, these lines of work demonstrate the essential role of computational modeling methods in providing theoretical accounts of the human mind as well as its interaction with its environment and AI systems.
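To illustrate the core of the EMHMM idea described above, the hedged sketch below fits a Gaussian-emission HMM to simulated fixation sequences using the hmmlearn library, with hidden states playing the role of person-specific, data-driven regions of interest (ROIs). This is a minimal illustration under assumptions, not the published toolbox.

```python
import numpy as np
from hmmlearn import hmm  # assumed available; the published toolbox differs

# Simulated fixation sequences (x, y) for one viewer, one array per trial.
rng = np.random.default_rng(0)
trials = [rng.normal(loc=[160, 120], scale=20, size=(15, 2)) for _ in range(10)]
X = np.concatenate(trials)          # all fixations stacked
lengths = [len(t) for t in trials]  # per-trial sequence lengths

# Hidden states act as data-driven ROIs learned from the fixations.
model = hmm.GaussianHMM(n_components=3, covariance_type="full", random_state=0)
model.fit(X, lengths)

print(model.means_)     # ROI centers
print(model.transmat_)  # transition probabilities among ROIs
```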


Subject(s)
Cognition , Neural Networks, Computer , Humans , Cognition/physiology , Eye Movements/physiology , Computer Simulation , Visual Perception/physiology , Attention/physiology , Artificial Intelligence , Learning/physiology
5.
Neural Netw ; 177: 106392, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38788290

ABSTRACT

Explainable artificial intelligence (XAI) has been increasingly investigated to enhance the transparency of black-box artificial intelligence models, promoting better user understanding and trust. Developing an XAI method that is faithful to models and plausible to users is both a necessity and a challenge. This work examines whether embedding human attention knowledge into saliency-based XAI methods for computer vision models could enhance their plausibility and faithfulness. Two novel XAI methods for object detection models, namely FullGrad-CAM and FullGrad-CAM++, were first developed to generate object-specific explanations by extending current gradient-based XAI methods for image classification models. Using human attention as the objective plausibility measure, these methods achieve higher explanation plausibility. Interestingly, all current XAI methods, when applied to object detection models, generally produce saliency maps that are less faithful to the model than human attention maps from the same object detection task. Accordingly, human attention-guided XAI (HAG-XAI) was proposed to learn from human attention how best to combine explanatory information from the models to enhance explanation plausibility, using trainable activation functions and smoothing kernels to maximize the similarity between the XAI saliency map and the human attention map. The proposed XAI methods were evaluated on the widely used BDD-100K, MS-COCO, and ImageNet datasets and compared with typical gradient-based and perturbation-based XAI methods. Results suggest that HAG-XAI enhanced explanation plausibility and user trust at the expense of faithfulness for image classification models, whereas for object detection models it enhanced plausibility, faithfulness, and user trust simultaneously and outperformed existing state-of-the-art XAI methods.
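As a rough, hedged sketch of the idea behind HAG-XAI (not the authors' implementation; all names, shapes, and hyperparameters here are assumptions), one can mix several explanatory maps with trainable weights, smooth the result with a trainable kernel, and fit both by maximizing correlation with a human attention map:

```python
import torch
import torch.nn.functional as F

class HAGXAISketch(torch.nn.Module):
    """Trainable combination + smoothing of explanatory maps."""
    def __init__(self, n_maps, ksize=15):
        super().__init__()
        self.mix = torch.nn.Parameter(torch.ones(n_maps))  # map mixing weights
        self.kernel = torch.nn.Parameter(
            torch.full((1, 1, ksize, ksize), 1.0 / ksize**2))  # smoothing kernel
        self.pad = ksize // 2

    def forward(self, maps):  # maps: (n_maps, H, W)
        mixed = torch.einsum("m,mhw->hw", torch.relu(self.mix), maps)
        smoothed = F.conv2d(mixed[None, None], self.kernel, padding=self.pad)
        return torch.relu(smoothed)[0, 0]

def neg_correlation(pred, target):
    """Negative Pearson correlation; minimizing it maximizes similarity."""
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    t = (target - target.mean()) / (target.std() + 1e-8)
    return -(p * t).mean()

# Hypothetical training step on one image's maps and human attention map.
maps = torch.rand(4, 128, 128)   # stand-in gradient/activation maps
human = torch.rand(128, 128)     # stand-in human attention map
model = HAGXAISketch(n_maps=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss = neg_correlation(model(maps), human)
loss.backward()
opt.step()
```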


Subject(s)
Artificial Intelligence , Attention , Humans , Attention/physiology , Neural Networks, Computer
6.
Article in English | MEDLINE | ID: mdl-38517727

ABSTRACT

We propose gradient-weighted Object Detector Activation Maps (ODAM), a visual explanation technique for interpreting the predictions of object detectors. Utilizing the gradients of detector targets flowing into the intermediate feature maps, ODAM produces heat maps that show the influence of regions on the detector's decision for each predicted attribute. Compared to previous work on classification activation maps (CAM), ODAM generates instance-specific explanations rather than class-specific ones. We show that ODAM is applicable to one-stage, two-stage, and transformer-based detectors with different types of detector backbones and heads, and produces higher-quality visual explanations than the state of the art in terms of both effectiveness and efficiency. We discuss two explanation tasks for object detection: 1) object specification: what is the important region for the prediction? 2) object discrimination: which object is detected? Aiming at these two aspects, we present a detailed analysis of the visual explanations of detectors and carry out extensive experiments to validate the effectiveness of the proposed ODAM. Furthermore, we investigate user trust in the explanation maps, how well the visual explanations of object detectors agree with human explanations as measured through human eye gaze, and whether this agreement is related to user trust. Finally, we propose two applications based on these two abilities of ODAM: ODAM-KD and ODAM-NMS. ODAM-KD utilizes the object specification of ODAM to generate top-down attention for key predictions and to guide the knowledge distillation of object detection. ODAM-NMS considers the location of the model's explanation for each prediction to distinguish duplicate detected objects. A training scheme, ODAM-Train, is proposed to improve the quality of object discrimination and to help with ODAM-NMS. The code for ODAM is available at: https://github.com/Cyang-Zhao/ODAM.
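For readers unfamiliar with gradient-weighted activation maps, the sketch below conveys the general flavour of an instance-specific heat map in the ODAM style (a simplification under assumptions; see the linked repository for the actual method): the gradient of one detection's predicted score is taken with respect to an intermediate feature map, used to weight that map element-wise, and summed over channels.

```python
import torch

def instance_heatmap(feature_map, score):
    """feature_map: (C, H, W) tensor inside the detector's graph;
    score: scalar prediction for ONE detection (e.g., its class score).
    Returns an (H, W) heat map specific to that detection."""
    grads, = torch.autograd.grad(score, feature_map, retain_graph=True)
    heat = (grads * feature_map).sum(dim=0)  # gradient-weighted, element-wise
    return torch.relu(heat)                  # keep positively contributing regions
```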

7.
J Sleep Res ; : e14176, 2024 Feb 25.
Article in English | MEDLINE | ID: mdl-38404186

ABSTRACT

The present study aims to investigate the influence of 24-hr sleep deprivation on implicit emotion regulation using the emotional conflict task. Twenty-five healthy young adults completed a repeated-measures study protocol involving a night of at-home normal sleep control and a night of in-laboratory sleep deprivation. Prior to the experimental session, all participants wore an actigraph watch and completed a sleep diary. Following each condition, participants performed an emotional conflict task with electroencephalographic recordings. Emotional faces (fearful or happy) overlaid with words ("fear" or "happy") were used as stimuli, creating congruent or incongruent trials, and participants were instructed to indicate whether the facial expression was happy or fearful. We measured accuracy and reaction time on the emotional conflict task, as well as the mean amplitude of the P300 component of the event-related potential at CPz. At the behavioural level, sleep-deprived participants showed reduced alertness, with overall longer reaction times and higher error rates. In addition, participants in the sleep deprivation condition made more errors when the current trial followed congruent trials than when it followed incongruent trials. At the neural level, the P300 amplitude evoked under the sleep-deprived condition was significantly more positive than under the normal sleep condition, and this effect interacted with previous-trial and current-trial congruency, suggesting that participants used more attentional resources to resolve emotional conflicts when sleep deprived. Our study provides pioneering data demonstrating that sleep deprivation may impair the regulation of emotional processing in the absence of explicit instruction among emerging adults.

8.
Brain Behav ; 13(10): e3205, 2023 10.
Article in English | MEDLINE | ID: mdl-37721530

ABSTRACT

INTRODUCTION: Ocular artifact has long been viewed as an impediment to the interpretation of electroencephalogram (EEG) signals in basic and applied research. Today, the use of blind source separation (BSS) methods, including independent component analysis (ICA) and second-order blind identification (SOBI), is considered an essential step in improving the quality of neural signals. Recently, we introduced a method consisting of SOBI and a discriminant and similarity (DANS)-based identification method, capable of identifying and extracting eye movement-related components. These recovered components can be localized within ocular structures with a high goodness of fit (>95%). This raised the possibility that such EEG-derived SOBI components may be used to build predictive models for tracking gaze position. METHODS: As proof of this new concept, we designed an EEG-based virtual eye-tracker (EEG-VET) for tracking eye movement from EEG alone. The EEG-VET is composed of a SOBI algorithm for separating EEG signals into different components, a DANS algorithm for automatically identifying ocular components, and a linear model for transferring ocular components into gaze positions. RESULTS: The prototype of EEG-VET achieved an accuracy of 0.920° and a precision of 1.510° of visual angle in the best participant, and an average accuracy of 1.008° ± 0.357° and precision of 2.348° ± 0.580° of visual angle across all participants (N = 18). CONCLUSION: This work offers a novel approach that readily co-registers eye movement and neural signals from a single EEG recording, thus increasing the ease of studying the neural mechanisms underlying natural cognition in the context of free eye movement.
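The final stage of the pipeline, mapping ocular component time courses to gaze position, is a linear model. A hedged sketch with simulated components follows (the SOBI/DANS steps are assumed to have been run upstream, and all numbers are placeholders):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
ocular = rng.normal(size=(5000, 4))   # 4 simulated ocular component time courses
true_W = rng.normal(size=(4, 2))
gaze = ocular @ true_W + rng.normal(scale=0.1, size=(5000, 2))  # (x, y) in degrees

# Fit the linear component-to-gaze mapping on a training block,
# then evaluate mean angular error on held-out samples.
model = Ridge(alpha=1.0).fit(ocular[:4000], gaze[:4000])
pred = model.predict(ocular[4000:])
mean_error = np.linalg.norm(pred - gaze[4000:], axis=1).mean()
print(f"mean gaze error: {mean_error:.3f} deg")
```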


Subject(s)
Electroencephalography , Eye Movements , Humans , Electroencephalography/methods , Artifacts , Algorithms , Cognition , Signal Processing, Computer-Assisted
9.
Br J Psychol ; 114 Suppl 1: 17-20, 2023 May.
Article in English | MEDLINE | ID: mdl-36951761

ABSTRACT

Multiple factors have been proposed to contribute to the other-race effect in face recognition, including perceptual expertise and social-cognitive accounts. Here, we propose to understand the effect and its contributing factors from the perspective of learning mechanisms that involve the joint learning of visual attention strategies and internal representations for faces, which can be modulated by the quality of contact with other-race individuals, including emotional and motivational factors. Computational simulations of this process will enhance our understanding of interactions among factors and help resolve inconsistent results in the literature. In particular, since learning is driven by task demands, visual attention effects observed in different face-processing tasks, such as passive viewing or recognition, are likely to be task-specific (although they may be associated) and should be examined and compared separately. When examining visual attention strategies, the use of more data-driven and comprehensive eye movement measures, taking both the spatial-temporal pattern and the consistency of eye movements into account, can lead to novel discoveries in other-race face processing. The proposed framework and analysis methods may be applied to other tasks of real-life significance, such as facial emotion recognition, further enhancing our understanding of the relationship between learning and visual cognition.


Subject(s)
Pattern Recognition, Visual , Racial Groups , Humans , Racial Groups/psychology , Learning , Recognition, Psychology , Eye Movements
11.
Sci Rep ; 13(1): 1704, 2023 01 30.
Article in English | MEDLINE | ID: mdl-36717669

ABSTRACT

Using background music (BGM) during learning is a common behavior, yet whether BGM facilitates or hinders learning remains inconclusive, and the underlying mechanism is largely an open question. This study aims to elucidate the effect of self-selected BGM on a reading task for learners with different characteristics. In particular, learners' reading task performance, metacognition, and eye movements were examined in relation to personal traits including language proficiency, working memory capacity, music experience, and personality. Data were collected from a between-subject experiment with 100 non-native English speakers who were randomly assigned to two groups. Those in the experimental group read English passages with music of their own choice playing in the background, while those in the control group performed the same task in silence. Results showed no salient differences in passage comprehension accuracy or metacognition between the two groups. Comparisons of fine-grained eye movement measures revealed that BGM imposed a heavier cognitive load on post-lexical processes but not on lexical processes. It was also revealed that students with a higher English proficiency level or more frequent BGM usage in daily self-learning/reading experienced less cognitive load when reading with their BGM, whereas students with higher working memory capacity (WMC) invested more mental effort than those with lower WMC in the BGM condition. These findings further scientific understanding of how BGM interacts with cognitive tasks in the foreground, and provide practical guidance for learners and learning environment designers on making the most of BGM for instruction and learning.


Subject(s)
Eye Movements , Music , Humans , Comprehension , Language , Reading
12.
IEEE Trans Neural Netw Learn Syst ; 34(3): 1537-1551, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34464269

ABSTRACT

The hidden Markov model (HMM) is a broadly applied generative model for representing time-series data, and clustering HMMs has attracted increasing interest from machine learning researchers. However, the number of clusters (K) and the number of hidden states (S) for cluster centers are still difficult to determine. In this article, we propose a novel HMM-based clustering algorithm, the variational Bayesian hierarchical EM algorithm, which clusters HMMs through their densities and priors and simultaneously learns posteriors for the novel HMM cluster centers that compactly represent the structure of each cluster. The numbers K and S are determined automatically in two ways. First, we place a prior on the pair (K, S) and approximate their posterior probabilities, from which the values with the maximum posterior are selected. Second, some clusters and states are pruned out implicitly when no data samples are assigned to them, thereby leading to automatic selection of the model complexity. Experiments on synthetic and real data demonstrate that our algorithm performs better than using model selection techniques with maximum likelihood estimation.
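The full variational Bayesian hierarchical EM is beyond a short sketch, but the underlying model-selection principle, scoring candidate model sizes and keeping the best, can be illustrated simply. The sketch below scores the number of hidden states S for a single HMM with a rough BIC-style criterion (higher is better here) as a stand-in; the paper itself selects (K, S) jointly via approximate posterior probabilities inside variational Bayes.

```python
import numpy as np
from hmmlearn import hmm  # assumed available for illustration

rng = np.random.default_rng(0)
seqs = [rng.normal(size=(30, 2)) for _ in range(20)]   # toy sequences
X, lengths = np.concatenate(seqs), [len(s) for s in seqs]

def bic_score(S):
    """Rough BIC-style score for a Gaussian HMM with S states:
    2*logL minus a parameter-count penalty (higher is better)."""
    m = hmm.GaussianHMM(n_components=S, random_state=0).fit(X, lengths)
    n_params = S * (S - 1) + (S - 1) + 2 * S * X.shape[1]  # crude count
    return 2 * m.score(X, lengths) - n_params * np.log(len(X))

best_S = max(range(1, 6), key=bic_score)  # keep the best-scoring model size
print(best_S)
```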

13.
Dev Psychol ; 59(2): 353-363, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36342437

ABSTRACT

Early attention bias to threat-related negative emotions may lead children to overestimate dangers in social situations. This study examined its emergence and how it might develop in tandem with a known predictor of toddlers' fear of strangers, namely temperamental shyness, in 168 Chinese toddlers. Measurable individual differences in attention bias to fearful faces were found and remained stable from age 12 to 18 months. When shown photos of paired happy versus fearful or happy versus angry faces, toddlers consistently gazed more initially, and had longer initial and total fixations, at fearful faces compared with happy faces. However, they consistently gazed more initially at happy faces compared with angry faces, and had a longer total fixation at angry faces only at 18 months. Stranger anxiety at 12 months predicted attention bias to fearful faces at 18 months. Temperamentally shyer 12-month-olds went on to show stronger attention bias to fearful faces at 18 months, and their fear of strangers also increased more from 12 to 18 months. Together with prior research suggesting that attention bias to angry or fearful faces foretells social anxiety, the present findings point to likely positive feedback loops among attention bias to fearful faces, temperamental shyness, and stranger anxiety in early childhood.


Subject(s)
Facial Expression , Fear , Humans , Child, Preschool , Infant , Fear/psychology , Anxiety , Anger , Happiness , Emotions
14.
Emotion ; 23(4): 1028-1039, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35980687

ABSTRACT

Recent research has suggested that dynamic emotion recognition involves strong audiovisual association; that is, facial or vocal information alone automatically induces perceptual processes in the other modality. We hypothesized that emotions may differ in the automaticity of their audiovisual association, resulting in differential audiovisual information processing. Participants judged the emotion of a talking-head video under audiovisual, video-only (with no sound), and audio-only (with a static neutral face) conditions. Among the six basic emotions, disgust had the largest audiovisual advantage over the unimodal conditions in recognition accuracy. In addition, in the recognition of all the emotions except disgust, participants' eye-movement patterns did not change significantly across the three conditions, suggesting mandatory audiovisual information processing. In contrast, in disgust recognition, participants' eye movements in the audiovisual condition were less eyes-focused than in the video-only condition and more eyes-focused than in the audio-only condition, suggesting that audio information in the audiovisual condition interfered with eye-movement planning for important features (the eyes) for disgust. In addition, those whose eye-movement patterns were affected less by concurrent disgusted voice information benefited more in recognition accuracy. Disgust recognition is learned later in life and thus may involve a reduced amount of audiovisual associative learning. Consequently, audiovisual association in disgust recognition is less automatic and demands more attentional resources than in other emotions. Thus, audiovisual information processing in emotion recognition depends on the automaticity of the emotion's audiovisual association resulting from associative learning. This finding has important implications for real-life emotion recognition and multimodal learning.


Subject(s)
Disgust , Facial Recognition , Humans , Eye-Tracking Technology , Emotions , Cognition , Learning , Facial Expression
15.
NPJ Sci Learn ; 7(1): 28, 2022 Oct 25.
Article in English | MEDLINE | ID: mdl-36284113

ABSTRACT

A greater eyes-focused eye movement pattern during face recognition is associated with better performance in adults but not in children. We test the hypothesis that higher eye movement consistency across trials, rather than a more eyes-focused pattern, predicts better performance in children, since it reflects their capacity for developing visual routines. We first simulated visual routine development by combining a deep neural network and a hidden Markov model that jointly learn perceptual representations and eye movement strategies for face recognition. The model accounted for the advantage of the eyes-focused pattern in adults and predicted that in children (partially trained models) the consistency, but not the pattern, of eye movements predicts recognition performance. This prediction was then verified with data from typically developing children. In addition, lower eye movement consistency in children was associated with an autism diagnosis, particularly with autistic traits in social skills. Thus, children's face recognition involves visual routine development through social exposure, indexed by eye movement consistency.
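As a hedged illustration of one way such cross-trial consistency could be quantified (not necessarily the paper's exact measure), one can fit an HMM to a child's own fixation sequences and take the mean per-fixation log-likelihood of each trial: highly routine scanning yields high, stable values across trials.

```python
import numpy as np
from hmmlearn import hmm  # assumed available for illustration

def consistency_index(trials, n_states=3):
    """trials: list of (n_fixations, 2) arrays of fixation coordinates
    from one child. Returns the mean per-fixation log-likelihood of the
    trials under the child's own fitted HMM."""
    X = np.concatenate(trials)
    lengths = [len(t) for t in trials]
    m = hmm.GaussianHMM(n_components=n_states, random_state=0).fit(X, lengths)
    per_trial = [m.score(t) / len(t) for t in trials]  # per-fixation logL
    return float(np.mean(per_trial))
```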

16.
Cogn Res Princ Implic ; 7(1): 64, 2022 07 22.
Article in English | MEDLINE | ID: mdl-35867196

ABSTRACT

The use of face masks is one of the measures adopted by the general community to stop the transmission of disease during the ongoing COVID-19 pandemic. This wide use of face masks has indeed been shown to disrupt day-to-day face recognition. People with autism spectrum disorder (ASD) often have pre-existing impairments in face recognition and are expected to be more vulnerable to this disruption. Here, we recruited typically developing adult participants and adults with ASD, and measured their non-verbal intelligence, autism spectrum quotient, empathy quotient, and recognition performance for faces with and without a face mask covering the lower half of the face. When faces were initially learned unobstructed, participants showed a general reduction in recognition performance for masked faces. In contrast, when masked faces were learned first, typically developing adults showed an overall advantage in recognizing both masked and unmasked faces, whereas adults with ASD recognized unmasked faces at a significantly lower level of performance than masked faces; this face recognition discrepancy was predicted by a higher level of autistic traits. This paper also discusses how autistic traits influence the processing of faces with and without face masks.


Subject(s)
Autism Spectrum Disorder , COVID-19 , Adult , Humans , Masks , Pandemics , Recognition, Psychology
17.
Sci Rep ; 12(1): 9144, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35650229

ABSTRACT

Here we tested the hypothesis that, in Chinese-English bilinguals, music reading experience may modulate eye movement planning in reading English but not Chinese sentences, due to the similarity in perceptual demands between music notation reading and English sentence reading (processing sequential symbol strings separated by spaces). Chinese-English bilingual musicians and non-musicians read legal, semantically incorrect, and syntactically (and semantically) incorrect sentences in both English and Chinese. In English reading, musicians showed more dispersed eye movement patterns when reading syntactically incorrect sentences than legal sentences, whereas non-musicians did not. This effect was not observed in Chinese reading. Musicians also had shorter saccade lengths when viewing syntactically incorrect than correct musical notation and sentences in an unfamiliar alphabetic language (Tibetan), whereas non-musicians showed no such difference. Thus, musicians' eye movement planning was disturbed by syntactic violations in both music and English reading but not in Chinese reading, and this effect generalized to an unfamiliar alphabetic language. These results suggest that music reading experience may modulate perceptual processes in reading differentially in bilinguals' two languages, depending on their processing similarities.


Subject(s)
Multilingualism , Music , China , Eye Movements , Humans , Language , Reading
18.
Sci Rep ; 12(1): 7462, 2022 05 06.
Article in English | MEDLINE | ID: mdl-35523808

ABSTRACT

No previous studies have investigated eye-movement patterns to show children's information processing while viewing clinical images. Therefore, this study aimed to explore children's and their educators' perception of a midline diastema by applying eye movement analysis with hidden Markov models (EMHMM). A total of 155 children between 2.5 and 5.5 years of age and their educators (n = 34) viewed pictures with and without a midline diastema while a Tobii Pro Nano eye-tracker recorded their eye movements. Fixation data were analysed with EMHMM using both data-driven and fixed regions of interest (ROIs) approaches. Two different eye-movement patterns were identified: an explorative pattern (76%), in which the children's ROIs were predominantly around the nose and mouth, and a focused pattern (26%), in which children's ROIs were precise, located on the teeth with and without a diastema, and fixations transited among the ROIs with similar frequencies. Females had a significantly higher eye-movement preference than males for the image without a diastema. Comparisons between the different age groups showed a statistically significant difference in overall entropies: the 3.6-4.5-year age group exhibited higher entropies, indicating lower eye-movement consistency. In addition, children and their educators exhibited two distinct eye-movement patterns: children with the explorative pattern looked at the midline diastema more often, while their educators focused on the image without a diastema. Thus, EMHMM is valuable for analysing eye-movement patterns in children and adults.
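One plausible way to define the "overall entropy" reported above (the toolbox's exact definition may differ; this is a hedged sketch) is the entropy rate of the fitted HMM's transition matrix under its stationary distribution, where higher values indicate less predictable transitions among ROIs and thus lower eye-movement consistency.

```python
import numpy as np

def hmm_entropy_rate(A):
    """Entropy rate (bits per fixation) of transition matrix A under its
    stationary distribution pi: -sum_i pi_i sum_j A_ij log2 A_ij."""
    evals, evecs = np.linalg.eig(A.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()                      # stationary distribution
    with np.errstate(divide="ignore"):
        logA = np.where(A > 0, np.log2(A), 0.0)
    return float(-(pi[:, None] * A * logA).sum())

A = np.array([[0.8, 0.2], [0.3, 0.7]])      # toy 2-ROI transition matrix
print(hmm_entropy_rate(A))
```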


Subject(s)
Diastema , Eye Movements , Adult , Attention , Child , Face , Female , Humans , Male , Mouth
19.
Cogn Res Princ Implic ; 7(1): 39, 2022 05 07.
Article in English | MEDLINE | ID: mdl-35524920

ABSTRACT

Holistic processing has been identified as an expertise marker of face and object recognition. By contrast, reduced holistic processing is purportedly an expertise marker in recognising orthographic characters in Chinese. Does holistic processing increase or decrease with expertise development? Is orthographic recognition a domain-specific exception to all other kinds of recognition (e.g. faces and objects)? In two studies, we examined the developmental trend of holistic processing in Chinese character recognition in Chinese and non-Chinese children, and its relationship with literacy abilities. Chinese first graders, with emergent Chinese literacy acquired in kindergarten, showed increased holistic processing, perhaps as an inchoate expertise marker, when compared with kindergartners and non-Chinese first graders; however, the holistic processing effect was reduced in higher-grade Chinese children. These results suggest a non-monotonic, inverted U-shaped trend of holistic processing in visual expertise development: an increase in holistic processing due to initial reading experience, followed by a decrease due to literacy enhancement. This result marks the development of holistic and analytic processing skills, both of which can be essential for mastering visual recognition. This study is the first to investigate the developmental trend of holistic processing in Chinese character recognition using the composite paradigm.


Subject(s)
Pattern Recognition, Visual , Reading , Child , China , Humans , Recognition, Psychology , Visual Perception
20.
Dent Traumatol ; 38(5): 410-416, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35460595

ABSTRACT

BACKGROUND/AIM: Traumatic dental injuries (TDIs) in the primary dentition may result in tooth discolouration and fractures. The aim of this child-centred study was to explore differences in preschool children's eye movement patterns and visual attention to typical outcomes following TDIs to primary teeth. MATERIALS AND METHODS: An eye-tracker recorded 155 healthy preschool children's eye movements while they viewed clinical images of healthy teeth, tooth fractures, and discolourations. Visual search patterns were analysed using the eye movement analysis with hidden Markov models (EMHMM) approach and preference for the various regions of interest (ROIs). RESULTS: Two different eye movement patterns (distributed and selective) were identified (p < .05). Children with the distributed pattern shifted their fixations between the presented images, while those with the selective pattern remained focused on the image they saw first. CONCLUSIONS: Preschool children noticed the teeth. However, most did not show an attentional bias, implying that they did not interpret these TDI outcomes negatively. Only a few children avoided looking at images with TDIs, indicating a potential negative impact. The EMHMM approach is appropriate for assessing inter-individual differences in children's visual attention to TDI outcomes.


Subject(s)
Tooth Fractures , Tooth Injuries , Child, Preschool , Eye-Tracking Technology , Humans , Tooth, Deciduous