Results 1-20 of 29
1.
PeerJ ; 12: e17468, 2024.
Article in English | MEDLINE | ID: mdl-38827287

ABSTRACT

The aim of this study was to evaluate the effectiveness of ChatGPT-3.5 and ChatGPT-4 in incorporating critical risk factors, namely history of depression and access to weapons, into suicide risk assessments. Both models assessed suicide risk using scenarios that featured individuals with and without a history of depression and access to weapons. The models estimated the likelihood of suicidal thoughts, suicide attempts, serious suicide attempts, and suicide-related mortality on a Likert scale. A multivariate three-way ANOVA with Bonferroni post hoc tests was conducted to examine the impact of the aforementioned independent factors (history of depression and access to weapons) on these outcome variables. Both models identified history of depression as a significant suicide risk factor. ChatGPT-4 demonstrated a more nuanced understanding of the relationship between depression, access to weapons, and suicide risk. In contrast, ChatGPT-3.5 displayed limited insight into this complex relationship. ChatGPT-4 consistently assigned higher severity ratings to suicide-related variables than did ChatGPT-3.5. The study highlights the potential of these two models, particularly ChatGPT-4, to enhance suicide risk assessment by considering complex risk factors.
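A minimal sketch of the kind of analysis described above: a three-way factorial ANOVA on one Likert-rated outcome, followed by Bonferroni-corrected pairwise comparisons. The file name, the column names, and the use of model version as the third factor are illustrative assumptions, not the study's actual data or code.

```python
# Hypothetical sketch: three-way ANOVA on Likert-scale risk ratings with
# Bonferroni-corrected post hoc tests. Column names are assumptions.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multitest import multipletests
from scipy import stats

ratings = pd.read_csv("llm_risk_ratings.csv")  # one row per vignette evaluation

# Three-way factorial ANOVA for one outcome (likelihood of a suicide attempt).
model = ols(
    "attempt_risk ~ C(model) * C(depression_history) * C(weapon_access)",
    data=ratings,
).fit()
print(sm.stats.anova_lm(model, typ=2))

# Bonferroni-corrected pairwise comparisons across depression x weapons cells.
cells = ratings.groupby(["depression_history", "weapon_access"])["attempt_risk"]
groups = {name: values.to_numpy() for name, values in cells}
keys = list(groups)
pairs, pvals = [], []
for i, a in enumerate(keys):
    for b in keys[i + 1:]:
        _, p = stats.ttest_ind(groups[a], groups[b])
        pairs.append((a, b))
        pvals.append(p)
rejected, p_adj, _, _ = multipletests(pvals, method="bonferroni")
for pair, p, rej in zip(pairs, p_adj, rejected):
    print(pair, round(p, 4), "significant" if rej else "n.s.")
```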


Subject(s)
Depression , Suicide , Humans , Risk Assessment , Male , Female , Adult , Suicide/psychology , Depression/psychology , Depression/epidemiology , Risk Factors , Suicidal Ideation , Weapons , Middle Aged , Young Adult , Suicide, Attempted/psychology , Suicide, Attempted/statistics & numerical data , Suicide Prevention
2.
JMIR Ment Health ; 11: e54781, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38787297

ABSTRACT

Unlabelled: This paper explores a significant shift in the field of mental health in general and psychotherapy in particular following generative artificial intelligence's new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the "fourth narcissistic blow" that science inflicts on humanity. We argue that this narcissistic blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect dramatic changes in perceptions of the therapeutic act following the emergence of what we term the artificial third in the field of psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask the following important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this new artificial presence in therapy relationships? (2) How does it reshape our perception of ourselves and our interpersonal dynamics? and (3) What remains of the irreplaceable human elements at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that the artificial third can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.


Subject(s)
Artificial Intelligence , Psychotherapy , Artificial Intelligence/ethics , Humans , Psychotherapy/methods , Psychotherapy/ethics
3.
Omega (Westport) ; : 302228241254559, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38776395

ABSTRACT

This study examined the roles of resilience and willingness to seek psychological help in influencing Post-Traumatic Growth (PTG) among 173 emerging adults who experienced parental loss during their school years. A positive relationship was found between resilience, the willingness to seek psychological help, and PTG. Participants who had endured their loss more than five years earlier manifested greater PTG (New-Possibilities, Spiritual Change, and Appreciation of Life sub-scales) than those with more recent losses. The multiple regression model was significant, accounting for 33% of the variance in PTG. Both resilience and the willingness to seek psychological help significantly predicted PTG, surpassing the other predictors in the model. Notably, the type of loss, whether sudden or anticipated, did not alter PTG levels. In essence, this study underscores the enduring potential for psychological growth following parental loss in emerging adults, highlighting the critical need for comprehensive psychological resources and support for such individuals.
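For readers unfamiliar with the analysis, a brief sketch of a multiple regression of this kind, predicting PTG from resilience, help-seeking willingness, time since loss, and loss type. The data file and variable names are hypothetical placeholders, not the study's materials.

```python
# Illustrative multiple regression predicting post-traumatic growth (PTG).
# Variable names and the CSV file are assumptions for the sake of example.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("ptg_survey.csv")
fit = smf.ols(
    "ptg ~ resilience + help_seeking + years_since_loss + C(loss_type)",
    data=data,
).fit()
print(fit.summary())  # the study reports roughly 33% of variance explained
```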

4.
Front Neurol ; 15: 1365369, 2024.
Article in English | MEDLINE | ID: mdl-38711564

ABSTRACT

Introduction: The vestibulo-ocular reflex (VOR) stabilizes vision during head movements. VOR disorders lead to symptoms such as imbalance, dizziness, and oscillopsia. Despite similar VOR dysfunction, patients display diverse complaints. This study analyses saccades, balance, and spatial orientation in chronic peripheral and central VOR disorders, specifically examining the impact of oscillopsia. Methods: Participants included 15 patients with peripheral bilateral vestibular loss (pBVL), 21 patients with clinically and genetically confirmed Machado-Joseph disease (MJD) who also have a bilateral vestibular deficit, and 22 healthy controls. All pBVL and MJD participants were tested at least 9 months after the onset of symptoms and underwent a detailed clinical neuro-otological evaluation at the Dizziness and Eye Movements Clinic of the Meir Medical Center. Results: Among the 15 patients with pBVL and 21 patients with MJD, only 5 patients with pBVL complained of chronic oscillopsia, while none of the patients with MJD reported this complaint. Comparisons between groups revealed significant differences in vestibular function, eye movements, balance, and spatial orientation. When comparing participants with and without oscillopsia, significant differences were found in the dynamic visual acuity test, saccade latency, and the triangle completion test. Discussion: Even though VOR gain is significantly impaired in MJD, with some participants showing lower VOR gain than pBVL patients who reported oscillopsia, no individuals with MJD reported experiencing oscillopsia. This study further supports the view that individuals experiencing oscillopsia have a genuine impairment in stabilizing images on the retina, whereas those without oscillopsia may use saccade strategies to compensate and may also rely on visual information for spatial orientation. Identifying objective differences will help clarify the causes of the oscillopsia experience and guide the development of coping strategies to overcome it.

5.
JMIR Ment Health ; 11: e55988, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38593424

ABSTRACT

BACKGROUND: Large language models (LLMs) hold potential for mental health applications. However, their opaque alignment processes may embed biases that shape problematic perspectives. Evaluating the values embedded within LLMs that guide their decision-making has ethical importance. Schwartz's theory of basic values (STBV) provides a framework for quantifying cultural value orientations and has shown utility for examining values in mental health contexts, including cultural, diagnostic, and therapist-client dynamics. OBJECTIVE: This study aimed to (1) evaluate whether the STBV can measure value-like constructs within leading LLMs and (2) determine whether LLMs exhibit distinct value-like patterns from humans and each other. METHODS: In total, 4 LLMs (Bard, Claude 2, Generative Pretrained Transformer [GPT]-3.5, GPT-4) were anthropomorphized and instructed to complete the Portrait Values Questionnaire-Revised (PVQ-RR) to assess value-like constructs. Their responses over 10 trials were analyzed for reliability and validity. To benchmark the LLMs' value profiles, their results were compared to published data from a diverse sample of 53,472 individuals across 49 nations who had completed the PVQ-RR. This allowed us to assess whether the LLMs diverged from established human value patterns across cultural groups. Value profiles were also compared between models via statistical tests. RESULTS: The PVQ-RR showed good reliability and validity for quantifying value-like infrastructure within the LLMs. However, substantial divergence emerged between the LLMs' value profiles and population data. The models lacked consensus and exhibited distinct motivational biases, reflecting opaque alignment processes. For example, all models prioritized universalism and self-direction, while de-emphasizing achievement, power, and security relative to humans. Successful discriminant analysis differentiated the 4 LLMs' distinct value profiles. Further examination found that the biased value profiles strongly predicted the LLMs' responses when presented with mental health dilemmas requiring choosing between opposing values. This provided further validation for the models embedding distinct motivational value-like constructs that shape their decision-making. CONCLUSIONS: This study leveraged the STBV to map the motivational value-like infrastructure underpinning leading LLMs. Although the study demonstrated that the STBV can effectively characterize value-like infrastructure within LLMs, substantial divergence from human values raises ethical concerns about aligning these models with mental health applications. The biases toward certain cultural value sets pose risks if integrated without proper safeguards. For example, prioritizing universalism could promote unconditional acceptance even when clinically unwise. Furthermore, the differences between the LLMs underscore the need to standardize alignment processes to capture true cultural diversity. Thus, any responsible integration of LLMs into mental health care must account for their embedded biases and motivation mismatches to ensure equitable delivery across diverse populations. Achieving this will require transparency and refinement of alignment techniques to instill comprehensive human values.
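A minimal sketch of the kind of benchmarking described above: averaging each LLM's PVQ-RR value scores over repeated trials and expressing the deviation from published human means as standardized scores. The value names, reference numbers, and file layout are placeholders, not data from the study.

```python
# Compare LLM value profiles (means over repeated trials) with human norms.
# Both CSV files and their columns are illustrative assumptions.
import pandas as pd

llm_scores = pd.read_csv("llm_pvq_scores.csv")    # columns: model, trial, value, score
human_norms = pd.read_csv("human_pvq_norms.csv")  # columns: value, mean, sd

profile = llm_scores.groupby(["model", "value"])["score"].mean().reset_index()
merged = profile.merge(human_norms, on="value")

# Standardized deviation of each model's mean from the human reference mean.
merged["z_vs_humans"] = (merged["score"] - merged["mean"]) / merged["sd"]
print(merged.pivot(index="value", columns="model", values="z_vs_humans").round(2))
```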


Subject(s)
Allied Health Personnel , Mental Health , Humans , Cross-Sectional Studies , Reproducibility of Results , Language
6.
J Neurol Sci ; 460: 122990, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38579416

ABSTRACT

Cerebellar ataxia with neuropathy and vestibular areflexia syndrome (CANVAS) is a slowly progressing autosomal recessive ataxic disorder linked to an abnormal biallelic intronic repeat expansion (most commonly AAGGG) in the replication factor complex subunit 1 gene (RFC1). While the clinical diagnosis is relatively straightforward when all three components of the disorder are present, it becomes challenging when only one of the triad (cerebellar ataxia, neuropathy, or vestibular areflexia) manifests. Isolated cases of Bilateral Vestibulopathy (BVP) or vestibular areflexia that later developed the other components of CANVAS have not previously been documented. We report four patients with chronic imbalance and BVP who, after several years, developed cerebellar and neuropathic deficits with positive genetic testing for RFC1. Our report supports the concept that CANVAS should be considered in every patient with BVP of unknown etiology, even in the absence of the other triad components. This is especially important given that about 50% of cases in many BVP series are diagnosed as idiopathic, some of which may be undiagnosed CANVAS.


Subject(s)
Bilateral Vestibulopathy , Cerebellar Ataxia , Humans , Bilateral Vestibulopathy/diagnosis , Bilateral Vestibulopathy/genetics , Bilateral Vestibulopathy/complications , Male , Female , Adult , Cerebellar Ataxia/genetics , Cerebellar Ataxia/diagnosis , Middle Aged , Replication Protein C
7.
JMIR Ment Health ; 11: e53043, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38533615

ABSTRACT

Background: The current paradigm in mental health care focuses on clinical recovery and symptom remission. This model's efficacy is influenced by therapist trust in the patient's recovery potential and the depth of the therapeutic relationship. Schizophrenia is a chronic illness with severe symptoms in which the possibility of recovery is a matter of debate. As artificial intelligence (AI) becomes integrated into the health care field, it is important to examine its ability to assess recovery potential in major psychiatric disorders such as schizophrenia. Objective: This study aimed to evaluate the ability of large language models (LLMs), in comparison to mental health professionals, to assess the prognosis of schizophrenia with and without professional treatment and the long-term positive and negative outcomes. Methods: Vignettes were input into the LLM interfaces and assessed 10 times by 4 AI platforms: ChatGPT-3.5, ChatGPT-4, Google Bard, and Claude. A total of 80 evaluations were collected and benchmarked against existing norms to analyze what mental health professionals (general practitioners, psychiatrists, clinical psychologists, and mental health nurses) and the general public think about schizophrenia prognosis with and without professional treatment and the positive and negative long-term outcomes of schizophrenia interventions. Results: For the prognosis of schizophrenia with professional treatment, ChatGPT-3.5 was notably pessimistic, whereas ChatGPT-4, Claude, and Bard aligned with professional views but differed from the general public. All LLMs predicted that schizophrenia would remain static or worsen without professional treatment. For long-term outcomes, ChatGPT-4 and Claude predicted more negative outcomes than Bard and ChatGPT-3.5. For positive outcomes, ChatGPT-3.5 and Claude were more pessimistic than Bard and ChatGPT-4. Conclusions: The finding that 3 of the 4 LLMs aligned closely with the predictions of mental health professionals under the "with treatment" condition demonstrates the potential of this technology for providing professional clinical prognoses. The pessimistic assessment by ChatGPT-3.5 is a disturbing finding, since it may reduce the motivation of patients to start or persist with treatment for schizophrenia. Overall, although LLMs hold promise in augmenting health care, their application necessitates rigorous validation and a harmonious blend with human expertise.
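An illustrative sketch of the evaluation protocol described above: each vignette is submitted 10 times to each of the 4 chat platforms (80 evaluations in total) and the prognosis ratings are stored for comparison with published norms. The `query_model` function is a hypothetical stand-in for whatever interface was actually used; here it only returns a dummy value so the sketch runs.

```python
# Sketch of the repeated-prompting protocol; query_model is a placeholder.
import csv
import random

MODELS = ["ChatGPT-3.5", "ChatGPT-4", "Google Bard", "Claude"]
N_TRIALS = 10

def query_model(model_name: str, vignette: str) -> int:
    """Dummy stand-in: replace with a call to the relevant chat interface."""
    return random.randint(1, 5)  # placeholder Likert-style prognosis rating

def run_protocol(vignette: str, outfile: str = "llm_prognosis_ratings.csv") -> None:
    with open(outfile, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["model", "trial", "rating"])
        for model in MODELS:
            for trial in range(1, N_TRIALS + 1):
                writer.writerow([model, trial, query_model(model, vignette)])

run_protocol("Vignette describing a person meeting criteria for schizophrenia.")
```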


Subject(s)
General Practitioners , Schizophrenia , Humans , Mental Health , Artificial Intelligence , Health Occupations
8.
JMIR Ment Health ; 11: e54369, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38319707

ABSTRACT

BACKGROUND: Mentalization, which is integral to human cognitive processes, pertains to the interpretation of one's own and others' mental states, including emotions, beliefs, and intentions. With the advent of artificial intelligence (AI) and the prominence of large language models in mental health applications, questions persist about their aptitude in emotional comprehension. The prior iteration of the large language model from OpenAI, ChatGPT-3.5, demonstrated an advanced capacity to interpret emotions from textual data, surpassing human benchmarks. Given the introduction of ChatGPT-4, with its enhanced visual processing capabilities, and considering Google Bard's existing visual functionalities, a rigorous assessment of their proficiency in visual mentalizing is warranted. OBJECTIVE: The aim of the research was to critically evaluate the capabilities of ChatGPT-4 and Google Bard with regard to their competence in discerning visual mentalizing indicators as contrasted with their textual-based mentalizing abilities. METHODS: The Reading the Mind in the Eyes Test developed by Baron-Cohen and colleagues was used to assess the models' proficiency in interpreting visual emotional indicators. Simultaneously, the Levels of Emotional Awareness Scale was used to evaluate the large language models' aptitude in textual mentalizing. Collating data from both tests provided a holistic view of the mentalizing capabilities of ChatGPT-4 and Bard. RESULTS: ChatGPT-4, displaying a pronounced ability in emotion recognition, secured scores of 26 and 27 in 2 distinct evaluations, significantly deviating from a random response paradigm (P<.001). These scores align with established benchmarks from the broader human demographic. Notably, ChatGPT-4 exhibited consistent responses, with no discernible biases pertaining to the sex of the model or the nature of the emotion. In contrast, Google Bard's performance aligned with random response patterns, securing scores of 10 and 12 and rendering further detailed analysis redundant. In the domain of textual analysis, both ChatGPT and Bard surpassed established benchmarks from the general population, with their performances being remarkably congruent. CONCLUSIONS: ChatGPT-4 proved its efficacy in the domain of visual mentalizing, aligning closely with human performance standards. Although both models displayed commendable acumen in textual emotion interpretation, Bard's capabilities in visual emotion interpretation necessitate further scrutiny and potential refinement. This study stresses the criticality of ethical AI development for emotional recognition, highlighting the need for inclusive data, collaboration with patients and mental health experts, and stringent governmental oversight to ensure transparency and protect patient privacy.
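The deviation-from-chance comparison reported above can be illustrated with a simple binomial test, assuming the standard 36-item, four-option version of the Reading the Mind in the Eyes Test (chance probability 0.25 per item). This reproduces the kind of test described, not necessarily the authors' exact analysis.

```python
# Binomial test of RMET scores against random responding (assumed 36 items,
# 4 options each). Scores 26 and 27 (ChatGPT-4) and 10 and 12 (Bard) are
# taken from the abstract; the 36-item/0.25-chance setup is an assumption.
from scipy.stats import binomtest

N_ITEMS, P_CHANCE = 36, 0.25
for score in (26, 27, 10, 12):
    result = binomtest(score, N_ITEMS, P_CHANCE, alternative="greater")
    print(f"score {score}/{N_ITEMS}: p = {result.pvalue:.2g}")
```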


Subject(s)
Artificial Intelligence , Emotions , Humans , Pilot Projects , Benchmarking , Eye
10.
Omega (Westport) ; : 302228231223275, 2024 Jan 04.
Article in English | MEDLINE | ID: mdl-38174720

ABSTRACT

Non-suicidal self-injury (NSSI) among adolescents is a significant concern. This study aimed to explore teachers' perceptions and experiences in cases of NSSI among their students. This qualitative-phenomenological study used in-depth, semi-structured interviews conducted with 27 teachers from high schools in Israel. Thematic analysis was used to identify patterns and themes. Theme 1 highlighted the emotional impact of discovering self-injury incidents, including panic, confusion, and helplessness. Theme 2 focused on teachers' limited professional support and their need for training and guidance. Theme 3 explored teachers' desire to help students and their strategies for building connections and providing empathy, sometimes despite emotional detachment. Theme 4 emphasized the importance of involving parents and the need for effective communication. This study emphasizes the importance of providing teachers with comprehensive training to address NSSI effectively. These findings provide a better understanding of teachers' experiences and underscore the need for enhanced support systems.

11.
Fam Med Community Health ; 12(Suppl 1)2024 01 09.
Article in English | MEDLINE | ID: mdl-38199604

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has rapidly permeated various sectors, including healthcare, highlighting its potential to facilitate mental health assessments. This study explores the underexplored domain of AI's role in evaluating prognosis and long-term outcomes in depressive disorders, offering insights into how AI large language models (LLMs) compare with human perspectives. METHODS: Using case vignettes, we conducted a comparative analysis involving different LLMs (ChatGPT-3.5, ChatGPT-4, Claude and Bard), mental health professionals (general practitioners, psychiatrists, clinical psychologists and mental health nurses), and the general public, whose views had been reported previously. We evaluated the LLMs' ability to generate a prognosis, anticipated outcomes with and without professional intervention, and envisioned long-term positive and negative consequences for individuals with depression. RESULTS: In most of the examined cases, the four LLMs consistently identified depression as the primary diagnosis and recommended a combined treatment of psychotherapy and antidepressant medication. ChatGPT-3.5 exhibited a significantly more pessimistic prognosis than the other LLMs, the professionals and the public. ChatGPT-4, Claude and Bard aligned closely with the perspectives of mental health professionals and the general public, all of whom anticipated no improvement or worsening without professional help. Regarding long-term outcomes, ChatGPT-3.5, Claude and Bard consistently projected significantly fewer negative long-term consequences of treatment than ChatGPT-4. CONCLUSIONS: This study underscores the potential of AI to complement the expertise of mental health professionals and promote a collaborative paradigm in mental healthcare. The observation that three of the four LLMs closely mirrored the anticipations of mental health experts in scenarios involving treatment underscores the technology's prospective value in offering professional clinical forecasts. The pessimistic outlook presented by ChatGPT-3.5 is concerning, as it could potentially diminish patients' drive to initiate or continue depression therapy. In summary, although LLMs show potential in enhancing healthcare services, their utilisation requires thorough verification and a seamless integration with human judgement and skills.


Subject(s)
Artificial Intelligence , General Practitioners , Humans , Depression/diagnosis , Depression/therapy , Prospective Studies , Prognosis , Models, Psychological
13.
Front Psychiatry ; 14: 1280440, 2023.
Article in English | MEDLINE | ID: mdl-37928920

ABSTRACT

Objective: Stimulation of the peripheral visual field has previously been reported to benefit cognitive performance in ADHD. This study assesses the safety and efficacy of a novel intervention involving peripheral visual stimuli in managing attention deficit hyperactivity disorder (ADHD). Methods: One hundred and eight adults with ADHD, 18-40 years old, were enrolled in a two-month open-label study. The intervention (Neuro-glasses) consisted of standard eyeglasses with personalized peripheral visual stimuli embedded on the lenses. Participants were assessed at baseline and at the end of the study with self-report measures of ADHD symptoms (the Adult ADHD Self-Report Scale; ASRS) and executive functions (the Behavior Rating Inventory of Executive Function-Adult Version; BRIEF-A). A computerized continuous-performance test (the Conners' Continuous Performance Test 3; CPT-3) was administered at baseline with standard eyeglasses and at the end of the study using the Neuro-glasses. The Clinical Global Impression-Improvement scale (CGI-I) was assessed at the intervention endpoint. Safety was monitored by documentation of adverse events. Results: The efficacy analysis included 97 participants. Significant improvements were demonstrated in self-reported measures of inattentive symptoms (ASRS inattentive index; p = 0.037) and metacognitive functions concerning self-management and performance monitoring (BRIEF-A; p = 0.029). The continuous-performance test (CPT-3) indicated significant improvement in detectability (d'; p = 0.027) and reduced commission errors (p = 0.004), suggesting that the Neuro-glasses have positive effects on response inhibition. Sixty-two percent of the participants met the clinician-assessed response criteria (CGI-I). No major adverse events were reported. Conclusion: Neuro-glasses may offer a safe and effective approach to managing adult ADHD. The results encourage future controlled efficacy studies to confirm the current findings in adults and possibly children with ADHD. Clinical trial registration: https://www.clinicaltrials.gov/, Identifier NCT05777785.
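For context on the detectability index mentioned above, a short sketch of the standard signal-detection formula behind d' (z of the hit rate minus z of the false-alarm rate). The trial counts are invented, and the CPT-3's proprietary scoring may differ in detail.

```python
# Standard d' computation with a log-linear correction; example counts are
# made up and do not come from the study.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Correction avoids infinite z-scores at 0% or 100% rates.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(round(d_prime(hits=310, misses=14, false_alarms=20, correct_rejections=16), 2))
```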

14.
Article in English | MEDLINE | ID: mdl-37844967

ABSTRACT

OBJECTIVE: To compare evaluations of depressive episodes and suggested treatment protocols generated by Chat Generative Pretrained Transformer (ChatGPT)-3.5 and ChatGPT-4 with the recommendations of primary care physicians. METHODS: Vignettes were input into the ChatGPT interface. These vignettes focused primarily on hypothetical patients with symptoms of depression during initial consultations. The creators of these vignettes meticulously designed eight distinct versions in which they systematically varied patient attributes (sex, socioeconomic status (blue-collar or white-collar worker), and depression severity (mild or severe)). Each variant was subsequently introduced into ChatGPT-3.5 and ChatGPT-4, and each vignette was repeated 10 times to ensure the consistency and reliability of the ChatGPT responses. RESULTS: For mild depression, ChatGPT-3.5 and ChatGPT-4 recommended psychotherapy in 95.0% and 97.5% of cases, respectively. Primary care physicians, however, recommended psychotherapy in only 4.3% of cases. For severe cases, ChatGPT favoured a combined approach (psychotherapy plus medication), as did primary care physicians. The pharmacological recommendations of ChatGPT-3.5 and ChatGPT-4 showed a preference for exclusive use of antidepressants (74% and 68%, respectively), in contrast with primary care physicians, who typically recommended a mix of antidepressants and anxiolytics/hypnotics (67.4%). Unlike primary care physicians, ChatGPT showed no gender or socioeconomic biases in its recommendations. CONCLUSION: ChatGPT-3.5 and ChatGPT-4 aligned well with accepted guidelines for managing mild and severe depression, without showing the gender or socioeconomic biases observed among primary care physicians. Despite the suggested potential benefit of using artificial intelligence (AI) chatbots like ChatGPT to enhance clinical decision making, further research is needed to refine AI recommendations for severe cases and to consider potential risks and ethical issues.
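The eight vignette versions described above correspond to a 2 x 2 x 2 factorial design crossing sex, socioeconomic status, and depression severity. A minimal sketch of how such a set could be enumerated; the template wording is a placeholder, not the study's actual vignette text.

```python
# Enumerate the 2 x 2 x 2 = 8 vignette variants; template text is illustrative.
from itertools import product

SEX = ["female", "male"]
SES = ["blue-collar worker", "white-collar worker"]
SEVERITY = ["mild", "severe"]

TEMPLATE = ("A {sex} patient who works as a {ses} presents at an initial "
            "consultation with symptoms of {severity} depression.")

vignettes = [
    {"sex": s, "ses": ses, "severity": sev,
     "text": TEMPLATE.format(sex=s, ses=ses, severity=sev)}
    for s, ses, sev in product(SEX, SES, SEVERITY)
]
assert len(vignettes) == 8
```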


Subject(s)
Anti-Anxiety Agents , Physicians, Primary Care , Humans , Depression/drug therapy , Reproducibility of Results , Choline O-Acetyltransferase , Antidepressive Agents/therapeutic use
16.
JMIR Ment Health ; 10: e51232, 2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37728984

ABSTRACT

BACKGROUND: ChatGPT, a linguistic artificial intelligence (AI) model engineered by OpenAI, offers prospective contributions to mental health professionals. Although having significant theoretical implications, ChatGPT's practical capabilities, particularly regarding suicide prevention, have not yet been substantiated. OBJECTIVE: The study's aim was to evaluate ChatGPT's ability to assess suicide risk, taking into consideration 2 discernible factors (perceived burdensomeness and thwarted belongingness) over a 2-month period. In addition, we evaluated whether ChatGPT-4 more accurately evaluated suicide risk than did ChatGPT-3.5. METHODS: ChatGPT was tasked with assessing a vignette that depicted a hypothetical patient exhibiting differing degrees of perceived burdensomeness and thwarted belongingness. The assessments generated by ChatGPT were subsequently contrasted with standard evaluations rendered by mental health professionals. Using both ChatGPT-3.5 and ChatGPT-4 (May 24, 2023), we executed 3 evaluative procedures in June and July 2023. Our intent was to scrutinize ChatGPT-4's proficiency in assessing various facets of suicide risk in relation to the evaluative abilities of both mental health professionals and an earlier version of ChatGPT-3.5 (March 14 version). RESULTS: During the period of June and July 2023, we found that the likelihood of suicide attempts as evaluated by ChatGPT-4 was similar to the norms of mental health professionals (n=379) under all conditions (average Z score of 0.01). Nonetheless, a pronounced discrepancy was observed regarding the assessments performed by ChatGPT-3.5 (May version), which markedly underestimated the potential for suicide attempts in comparison to the assessments carried out by the mental health professionals (average Z score of -0.83). The empirical evidence suggests that ChatGPT-4's evaluation of the incidence of suicidal ideation and psychache was higher than that of the mental health professionals (average Z score of 0.47 and 1.00, respectively). Conversely, the level of resilience as assessed by both ChatGPT-4 and ChatGPT-3.5 (both versions) was observed to be lower in comparison to the assessments offered by mental health professionals (average Z score of -0.89 and -0.90, respectively). CONCLUSIONS: The findings suggest that ChatGPT-4 estimates the likelihood of suicide attempts in a manner akin to evaluations provided by professionals. In terms of recognizing suicidal ideation, ChatGPT-4 appears to be more precise. However, regarding psychache, there was an observed overestimation by ChatGPT-4, indicating a need for further research. These results have implications regarding ChatGPT-4's potential to support gatekeepers, patients, and even mental health professionals' decision-making. Despite the clinical potential, intensive follow-up studies are necessary to establish the use of ChatGPT-4's capabilities in clinical practice. The finding that ChatGPT-3.5 frequently underestimates suicide risk, especially in severe cases, is particularly troubling. It indicates that ChatGPT may downplay a person's actual suicide risk level.
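A compact sketch of the norm-based comparison described above: each model's mean rating is expressed as a Z score relative to the mental health professionals' mean and SD for the same condition, then averaged across conditions. The input file and column names are assumptions for illustration.

```python
# Z scores of LLM ratings against professional norms; file layout is assumed.
import pandas as pd

df = pd.read_csv("suicide_risk_ratings.csv")
# expected columns: model, condition, outcome, llm_mean, prof_mean, prof_sd

df["z"] = (df["llm_mean"] - df["prof_mean"]) / df["prof_sd"]
# e.g. outcomes: attempt likelihood, suicidal ideation, psychache, resilience
print(df.groupby(["model", "outcome"])["z"].mean().round(2))
```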

17.
J Vestib Res ; 2023 Aug 29.
Article in English | MEDLINE | ID: mdl-37661905

ABSTRACT

BACKGROUND: Machado-Joseph disease (MJD) is an autosomal dominant neurodegenerative disease. In previous studies, we described a significant bilateral horizontal Vestibulo-Ocular Reflex (VOR) deficit within this population without any reference to the presence of vestibular symptomatology. OBJECTIVE: To evaluate whether, beyond cerebellar ataxia complaints, MJD patients have typical vestibular symptomatology corresponding to the accepted diagnostic criteria of Bilateral Vestibulopathy (BVP) according to the definition of the International Barany Society of Neuro-Otology. METHODS: Twenty-one MJD patients, 12 patients with clinically stable chronic Unilateral Vestibulopathy (UVP), 15 with clinically stable chronic BVP, and 22 healthy controls underwent the video Head Impulse Test (vHIT) to evaluate VOR gain and completed the following questionnaires related to vestibular symptomatology: the Dizziness Handicap Inventory (DHI), the Activities-specific Balance Confidence Scale (ABC), the Vertigo Visual Scale (VVS), and the Beck Anxiety Inventory (BAI). RESULTS: The MJD group demonstrated significant bilateral vestibular impairment, with horizontal gain less than 0.6 in 71% of patients (0.54±0.17). Similar to the UVP and BVP groups, MJD patients reported a significantly higher level of symptoms than controls on the DHI, ABC, VVS, and BAI questionnaires. CONCLUSIONS: MJD patients demonstrated significant VOR impairment and clinical symptoms typical of BVP. We suggest that in a future version of the International Classification of Vestibular Disorders (ICVD), MJD should be categorized under a separate section of central vestibulopathy with the heading of bilateral vestibulopathy. The present findings are of importance for the clinical diagnosis process and for possible treatment based on vestibular rehabilitation.
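As background on the reported gain values, a toy illustration of computing a vHIT-style VOR gain as the ratio of the areas under the eye- and head-velocity curves during an impulse. Commercial vHIT systems apply their own desaccading and windowing, and the signals here are synthetic.

```python
# Toy VOR gain: area under |eye velocity| divided by area under |head velocity|.
import numpy as np

def vor_gain(head_velocity: np.ndarray, eye_velocity: np.ndarray, dt: float) -> float:
    head_auc = np.trapz(np.abs(head_velocity), dx=dt)
    eye_auc = np.trapz(np.abs(eye_velocity), dx=dt)
    return eye_auc / head_auc

t = np.linspace(0, 0.15, 150)            # ~150 ms impulse sampled at ~1 kHz
head = 200 * np.sin(np.pi * t / 0.15)    # synthetic head impulse, peak ~200 deg/s
eye = -0.54 * head                       # compensatory eye movement, gain ~0.54
print(round(vor_gain(head, eye, dt=0.001), 2))  # ~0.54, below the 0.6 cutoff
```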

18.
Front Psychiatry ; 14: 1234397, 2023.
Article in English | MEDLINE | ID: mdl-37720897

ABSTRACT

This study evaluated the potential of ChatGPT, a large language model, to generate mentalizing-like abilities that are tailored to a specific personality structure and/or psychopathology. Mentalization is the ability to understand and interpret one's own and others' mental states, including thoughts, feelings, and intentions. Borderline Personality Disorder (BPD) and Schizoid Personality Disorder (SPD) are characterized by distinct patterns of emotional regulation: individuals with BPD tend to experience intense and unstable emotions, while individuals with SPD tend to experience flattened or detached emotions. We used ChatGPT's free version 23.3 and assessed the extent to which its responses akin to emotional awareness (EA) were tailored to the distinct personality structures characteristic of BPD and SPD, employing the Levels of Emotional Awareness Scale (LEAS). ChatGPT was able to accurately describe the emotional reactions of individuals with BPD as more intense, complex, and rich than those of individuals with SPD. This finding suggests that ChatGPT can generate mentalizing-like responses consistent with a range of psychopathologies, in line with clinical and theoretical knowledge. However, the study also raises concerns that stigmas or biases related to mental health diagnoses may affect the validity and usefulness of chatbot-based clinical interventions. We emphasize the need for the responsible development and deployment of chatbot-based interventions in mental health that consider diverse theoretical frameworks.

19.
Front Psychiatry ; 14: 1213141, 2023.
Article in English | MEDLINE | ID: mdl-37593450

ABSTRACT

ChatGPT, an artificial intelligence language model developed by OpenAI, holds the potential for contributing to the field of mental health. Nevertheless, although ChatGPT theoretically shows promise, its clinical abilities in suicide prevention, a significant mental health concern, have yet to be demonstrated. To address this knowledge gap, this study aims to compare ChatGPT's assessments of mental health indicators to those of mental health professionals in a hypothetical case study that focuses on suicide risk assessment. Specifically, ChatGPT was asked to evaluate a text vignette describing a hypothetical patient with varying levels of perceived burdensomeness and thwarted belongingness. The ChatGPT assessments were compared to the norms of mental health professionals. The results indicated that ChatGPT rated the risk of suicide attempts lower than did the mental health professionals in all conditions. Furthermore, ChatGPT rated mental resilience lower than the norms in most conditions. These results imply that gatekeepers, patients or even mental health professionals who rely on ChatGPT for evaluating suicidal risk or as a complementary tool to improve decision-making may receive an inaccurate assessment that underestimates the actual suicide risk.

20.
Harefuah ; 162(7): 434-439, 2023 Aug.
Article in Hebrew | MEDLINE | ID: mdl-37561033

ABSTRACT

INTRODUCTION: Machado-Joseph disease (MJD) is an inherited neurodegenerative disease with progressive cerebellar ataxia manifested through lack of coordination and balance. MJD patients also present a significant Vestibulo-Ocular Reflex (VOR) deficit, but their broader vestibular features have not been previously evaluated. We aimed to evaluate whether MJD patients have vestibular features fitting the diagnostic criteria of Bilateral Vestibulopathy established by the International Society for Neuro-otology. METHODS: Sixteen MJD patients and 21 healthy controls underwent a detailed clinical neuro-otological examination, including a quantitative evaluation of VOR gain using the video Head Impulse Test (vHIT). Vestibular-related symptoms were evaluated with the Dizziness Handicap Inventory (DHI), the Activities-specific Balance Confidence Scale (ABC), and the Vertigo Visual Scale (VVS). In addition, anxiety, which is frequently present in vestibular disorders, was evaluated with the Beck Anxiety Inventory (BAI). RESULTS: MJD patients had significantly reduced horizontal VOR gain and significantly higher scores on all vestibular-related symptom questionnaires. These symptom scores were similar to those reported in studies of patients with bilateral peripheral vestibular loss. CONCLUSIONS: Beyond the cerebellar deficits, MJD patients have vestibular signs and symptoms fitting the diagnostic criteria of Bilateral Vestibulopathy established by the International Society for Neuro-otology. These findings are relevant not only for the diagnosis and evaluation of progressive cerebellar diseases but also for the possible beneficial effect of vestibular rehabilitation techniques on dizziness, balance, and the emotional, physiological, and functional aspects of MJD.


Subject(s)
Bilateral Vestibulopathy , Machado-Joseph Disease , Neurodegenerative Diseases , Humans , Machado-Joseph Disease/diagnosis , Dizziness/diagnosis , Dizziness/etiology , Bilateral Vestibulopathy/diagnosis , Reflex, Vestibulo-Ocular/physiology