Results 1 - 4 of 4
1.
J Med Internet Res; 21(4): e11410, 2019 Apr 26.
Article in English | MEDLINE | ID: mdl-31025945

ABSTRACT

BACKGROUND: Online peer support forums require oversight to ensure they remain safe and therapeutic. As online communities grow, they place a greater burden on their human moderators, which increases the likelihood that people at risk may be overlooked. This study evaluated the potential for machine learning to assist online peer support by directing moderators' attention where it is most needed. OBJECTIVE: This study aimed to evaluate the accuracy of an automated triage system and the extent to which it influences moderator behavior. METHODS: A machine learning classifier was trained to prioritize forum messages as green, amber, red, or crisis, depending on how urgently they require attention from a moderator. This was then launched as a set of widgets injected into a popular online peer support forum hosted by ReachOut.com, an Australian Web-based youth mental health service that aims to intervene early in the onset of mental health problems in young people. The accuracy of the system was evaluated using a holdout test set of manually prioritized messages. The impact on moderator behavior was measured as response ratio and response latency, that is, the proportion of messages that received at least one reply from a moderator and how long it took for these replies to be made. These measures were compared across 3 periods: before launch, after an informal launch, and after a formal launch accompanied by training. RESULTS: The algorithm achieved an F-measure of 84% in identifying content that required a moderator response. Between the prelaunch and post-training periods, response ratios increased by 0.9, 4.4, and 10.5 percentage points for messages labelled as crisis, red, and green, respectively, but decreased by 5.0 percentage points for amber messages. Logistic regression indicated that the triage system was a significant contributor to response ratios for green, amber, and red messages, but not for crisis messages. Between the same periods, response latency was significantly reduced (P<.001) by 80%, 80%, 77%, and 12% for crisis, red, amber, and green messages, respectively. Regression analysis indicated that the triage system made a significant and unique contribution to reducing the time taken to respond to green, amber, and red messages, but not to crisis messages, after accounting for moderator and community activity. CONCLUSIONS: The triage system was generally accurate, and moderators were largely in agreement with how messages were prioritized. It had a modest effect on response ratios, primarily because moderators were already more likely to respond to high-priority content before the introduction of triage. However, it significantly and substantially reduced the time taken for moderators to respond to prioritized content. Further evaluations are needed to assess the impact of mistakes made by the triage algorithm and how changes in moderator responsiveness affect the well-being of forum members.
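To make the kind of system described above concrete, here is a minimal sketch of a four-level (green/amber/red/crisis) message triage classifier evaluated with an F-measure on a small holdout set. The feature representation (TF-IDF), the model (logistic regression), and all example messages and labels are illustrative assumptions; the abstract does not specify how the ReachOut.com classifier was implemented.

    # Minimal triage-classifier sketch; all data and modeling choices are
    # illustrative assumptions, not the system evaluated in the study.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.pipeline import make_pipeline

    # Hypothetical training and holdout data: (message text, priority label) pairs.
    train_texts = ["Had a good day at school", "Feeling a bit down lately",
                   "I can't stop crying and no one cares", "I don't want to be here anymore"]
    train_labels = ["green", "amber", "red", "crisis"]
    test_texts = ["Things are looking up", "I feel like giving up"]
    test_labels = ["green", "red"]

    # TF-IDF features feeding a multinomial logistic regression classifier.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(train_texts, train_labels)

    # Macro-averaged F-measure on the holdout set, analogous to the reported evaluation.
    pred = model.predict(test_texts)
    print("F-measure:", f1_score(test_labels, pred, average="macro"))

In practice such a classifier would be trained on a large set of manually prioritized forum messages and tuned so that the costlier errors (missed red or crisis content) are weighted more heavily.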


Subjects
Social Support; Triage/methods; Education, Distance; Female; Humans; Internet; Male
2.
JMIR Ment Health; 5(2): e26, 2018 Apr 05.
Article in English | MEDLINE | ID: mdl-29622528

ABSTRACT

BACKGROUND: Given the widespread availability of mental health screening apps, providing personalized feedback may encourage people at high risk to seek help to manage their symptoms. While apps typically provide personal score feedback only, feedback types that are user-friendly and increase personal relevance may encourage further help-seeking. OBJECTIVE: The aim of this study was to compare the effects of providing normative and humor-driven feedback on immediate online help-seeking, defined as clicking on a link to an external resource, and to explore demographic predictors of help-seeking. METHODS: An online sample of 549 adults was recruited using social media advertisements. Participants downloaded a smartphone app known as "Mindgauge," which allowed them to screen their mental wellbeing by completing standardized measures of Symptoms (Kessler 6-item Scale), Wellbeing (World Health Organization [Five] Wellbeing Index), and Resilience (Brief Resilience Scale). Participants were randomized to receive either normative feedback that compared their scores to a reference group or humor-driven feedback that presented their scores in a relaxed manner. Those who scored in the moderate or poor range on any measure were encouraged to seek help by clicking on a link to an external online resource. RESULTS: A total of 318 participants scored poorly on one or more measures and were provided with an external link after being randomized to receive normative or humor-driven feedback. There was no significant effect of feedback type on clicking the external link for any measure. A larger proportion of participants clicked on the link from the Wellbeing measure (170/274, 62.0%) than from the Resilience (47/179, 26.3%) or Symptoms (26/75, 34.7%) measures (χ2=60.35, P<.001). No demographic factors were significantly associated with help-seeking for the Resilience or Wellbeing measures. Participants with a previous episode of poor mental health were less likely than those without such a history to click on the external link for the Symptoms measure (P=.003, odds ratio [OR] 0.83, 95% CI 0.02-0.44), and younger adults were less likely than older adults to click on the link across all measures (P=.005, OR 0.44, 95% CI 0.25-0.78). CONCLUSIONS: This pilot study found no difference between normative and humor-driven feedback in promoting immediate clicks to an external resource, suggesting no impact on online help-seeking. Limitations included the lack of a personal score control group, limited measurement of predictors and potential confounders, and the fact that other forms of professional help-seeking were not assessed. Further investigation into other predictors and factors that influence help-seeking is needed. TRIAL REGISTRATION: Australian New Zealand Clinical Trials Registry ACTRN12616000707460; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=370187 (Archived by WebCite at http://www.webcitation.org/6y8m8sVxr).
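As a quick numerical check of the between-measure comparison, the sketch below recomputes the chi-square test from the click counts given in the abstract (clicked vs did not click, per measure). The use of scipy here is an assumption for illustration; the abstract does not state which analysis software the authors used.

    # Recompute the reported chi-square comparison of click-through rates
    # across the three measures, using only counts stated in the abstract.
    from scipy.stats import chi2_contingency

    # Rows: Wellbeing, Resilience, Symptoms; columns: clicked link, did not click.
    table = [
        [170, 274 - 170],  # Wellbeing: 170/274 (62.0%)
        [47, 179 - 47],    # Resilience: 47/179 (26.3%)
        [26, 75 - 26],     # Symptoms: 26/75 (34.7%)
    ]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2g}")
    # Expected to land close to the reported chi2 = 60.35, P < .001.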

3.
J Med Internet Res; 19(8): e267, 2017 Jul 21.
Article in English | MEDLINE | ID: mdl-28784594

ABSTRACT

BACKGROUND: Synchronous written conversations (or "chats") are becoming increasingly popular as Web-based mental health interventions. It is therefore important to evaluate and summarize the quality of these interventions. OBJECTIVE: The aim of this study was to review the current evidence for the feasibility and effectiveness of online one-on-one mental health interventions that use text-based synchronous chat. METHODS: A systematic search was conducted of the databases relevant to this area of research (Medical Literature Analysis and Retrieval System Online [MEDLINE], PsycINFO, Central, Scopus, EMBASE, Web of Science, IEEE, and ACM). There were no specific selection criteria relating to the participant group. Studies were included if they reported interventions with individual text-based synchronous conversations (ie, chat or text messaging) and a psychological outcome measure. RESULTS: A total of 24 articles were included in this review. The included interventions targeted a wide range of mental health problems (eg, anxiety, distress, depression, eating disorders, and addiction) and varied widely in design. Overall, compared with the waitlist (WL) condition, studies showed significant and sustained improvements in mental health outcomes following synchronous text-based intervention, and posttreatment improvements were equivalent, but not superior, to treatment as usual (TAU; eg, face-to-face and telephone counseling). CONCLUSIONS: Feasibility studies indicate substantial innovation in this area of mental health intervention, with studies using trained volunteers and chatbot technologies to deliver interventions. While studies of efficacy show positive postintervention gains, further research is needed to determine whether the time requirements of this mode of intervention are feasible in clinical practice.


Subjects
Mental Health/standards; Text Messaging/statistics & numerical data; Feasibility Studies; Humans
4.
Internet Interv; 8: 27-34, 2017 Jun.
Article in English | MEDLINE | ID: mdl-30135825

ABSTRACT

A growing number of researchers are using Facebook to recruit for a range of online health, medical, and psychosocial studies. There is limited research on the representativeness of participants recruited from Facebook, and advertisement content is rarely described in study methods, despite some suggestion that it affects recruitment success. This study explores the impact of different Facebook advertisement content for the same study on recruitment rate, engagement, and participant characteristics. Five Facebook advertisement sets ("resilience", "happiness", "strength", "mental fitness", and "mental health") were used to recruit male participants to an online mental health study that allowed them to find out about their mental health and wellbeing by completing six measures. The Facebook advertisements recruited 372 men to the study over a one-month period. The cost per participant across the advertisement sets ranged from $0.55 to $3.85 Australian dollars. The "strength" advertisements produced the highest recruitment rate, but participants from this group were the least engaged with the study website. The "strength" and "happiness" advertisements recruited more younger men. Participants recruited through the "mental health" advertisements had worse outcomes on the clinical measures of distress, wellbeing, strength, and stress. This study confirmed that different Facebook advertisement content leads to different recruitment rates and levels of engagement with a study. Different advertisement content also leads to selection bias in terms of demographic and mental health characteristics. Researchers should carefully match the content of social media advertisements to their target population and consider reporting this content to enable better assessment of generalisability.
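For illustration, the cost-per-participant figures reported above follow from dividing each advertisement set's spend by the number of participants it recruited. The sketch below shows that calculation; the per-set spend and recruit counts are hypothetical placeholders, since the abstract reports only the ad set names, the total of 372 recruits, and the $0.55-$3.85 AUD range.

    # Cost per participant = ad spend / participants recruited, per advertisement set.
    # All numbers are hypothetical placeholders chosen to be consistent with the
    # reported total (372 recruits) and cost range; they are not the study's data.
    ad_sets = {
        # name: (spend in AUD, participants recruited)
        "resilience":     (120.0, 60),
        "happiness":      (90.0, 75),
        "strength":       (70.0, 120),
        "mental fitness": (85.0, 40),
        "mental health":  (100.0, 77),
    }

    for name, (spend, recruited) in ad_sets.items():
        print(f"{name:>14}: ${spend / recruited:.2f} AUD per participant")
    print("total recruited:", sum(n for _, n in ad_sets.values()))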
