1.
Front Artif Intell ; 6: 1229805, 2023.
Article in English | MEDLINE | ID: mdl-37899961

ABSTRACT

Virtual Mental Health Assistants (VMHAs) continuously evolve to support the overloaded global healthcare system, which receives approximately 60 million primary care visits and 6 million emergency room visits annually. These systems, developed by clinical psychologists, psychiatrists, and AI researchers, are designed to aid in Cognitive Behavioral Therapy (CBT). The main focus of VMHAs is to provide relevant information to mental health professionals (MHPs) and to engage in meaningful conversations that support individuals with mental health conditions. However, certain gaps prevent VMHAs from fully delivering on this promise during active communications. One such gap is their inability to explain their decisions to patients and MHPs, which makes conversations less trustworthy. Additionally, VMHAs may provide unsafe responses to patient queries, further undermining their reliability. In this review, we assess the current state of VMHAs with respect to user-level explainability and safety, a set of properties required for the broader adoption of VMHAs. This includes an examination of ChatGPT, a conversational agent built on the AI-driven models GPT-3.5 and GPT-4, which has been proposed for use in providing mental health services. By harnessing the collaborative and impactful contributions of the AI, natural language processing, and MHP communities, the review identifies opportunities for technological progress in VMHAs to ensure their capabilities include explainable and safe behaviors. It also emphasizes the importance of measures to guarantee that these advancements align with the promise of fostering trustworthy conversations.

2.
Patterns (N Y) ; 2(8): 100308, 2021 Aug 13.
Article in English | MEDLINE | ID: mdl-34430927

ABSTRACT

Artificial intelligence (AI) technologies have long been positioned as tools to provide crucial data-driven decision support to people. In this survey paper, I look at how collaboration assistants (chatbots for short), a type of AI that people can interact with naturally (using speech, gesture, and text), were used during a true global exigency: the COVID-19 pandemic. The key observation is that chatbots missed their "Apollo Moment": at the time of need, they could have provided people with useful and life-saving contextual, personalized, and reliable decision support at a scale that the state of the art makes possible. By "Apollo Moment," I refer to the opportunity for a technology to attain the pinnacle of its impact. I review the chatbot capabilities that are feasible with existing methods, identify the potential that chatbots could have met, and highlight the use cases on which they were deployed, the challenges they faced, and the gaps that persisted. Finally, I draw lessons that, if implemented, would make chatbots more relevant in future health emergencies.
