Results 1 - 4 of 4
1.
PLoS One ; 16(11): e0259499, 2021.
Article in English | MEDLINE | ID: covidwho-1506558

ABSTRACT

BACKGROUND: The popularization of social media has led to the coalescing of user groups around mental health conditions, in particular depression. Social media offers a rich environment for contextualizing and predicting users' self-reported burden of depression. Modern artificial intelligence (AI) methods are commonly employed to analyze user-generated sentiment on social media. In this systematic review, we will examine the content validity of these computer-based health surveillance models with respect to standard diagnostic frameworks. Drawing from a clinical perspective, we will attempt to establish a normative judgment about the strengths of these modern AI applications in the detection of depression.

METHODS: We will perform a systematic review of English- and German-language publications from 2010 to 2020 in PubMed, APA PsycInfo, Science Direct, EMBASE Psych, Google Scholar, and Web of Science. The inclusion criteria span cohort, case-control, and cross-sectional studies, randomized controlled studies, and conference proceedings reports. The review will exclude gray literature sources, specifically editorials, newspaper articles, and blog posts. Our primary outcome is self-reported depression, as expressed on social media. Secondary outcomes are the types of AI methods used for social media depression screening and the clinical validation procedures accompanying these methods. In a second step, we will use the evidence-strengthening Population, Intervention, Comparison, Outcomes, Study type (PICOS) tool to refine our inclusion and exclusion criteria. After two authors independently assess the evidence sources for risk of bias, the data extraction process will culminate in a thematic synthesis of the reviewed studies.

DISCUSSION: We present the protocol for a systematic review that will consider all existing literature from peer-reviewed publication sources relevant to the primary and secondary outcomes. The completed review will discuss depression as a self-reported health outcome in social media material. We will examine the computational methods, including the AI and machine learning techniques, commonly used for online depression surveillance. Furthermore, we will focus on standard clinical assessments, as indicators of content validity, in the design of the algorithms. The methodological quality of the clinical construct of the algorithms will be evaluated with the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) framework. We conclude the study with a normative judgment about the current application of AI to screen for depression on social media.

SYSTEMATIC REVIEW REGISTRATION: International Prospective Register of Systematic Reviews PROSPERO (registration number CRD42020187874).
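As a concrete illustration of the kind of AI method the review will appraise, the sketch below shows a minimal social media depression-screening pipeline in Python (TF-IDF features with logistic regression, a common baseline in this literature). The posts, labels, and model choice are hypothetical assumptions for illustration and do not reproduce any reviewed study's method.

# Minimal sketch, assuming Python with scikit-learn installed; the toy posts
# and labels below are hypothetical, not data from any reviewed study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical posts with self-reported depression labels (1 = signal, 0 = none)
posts = [
    "I can't get out of bed, everything feels pointless",
    "Had a great run this morning, feeling energized",
    "Another sleepless night, I feel so empty",
    "Excited to see friends this weekend",
]
labels = [1, 0, 1, 0]

# TF-IDF n-gram features plus logistic regression: a common screening baseline
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# The output is a probability of a self-reported depression signal, not a
# clinical diagnosis; content validity against standard diagnostic frameworks
# is precisely what the review sets out to assess.
print(model.predict_proba(["I feel hopeless lately"])[0][1])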


Subject(s)
Artificial Intelligence , Cross-Sectional Studies , Depression , Social Media
2.
J Am Med Inform Assoc ; 28(9): 2013-2016, 2021 08 13.
Article in English | MEDLINE | ID: covidwho-1377973

ABSTRACT

Open discussions of social justice and health inequities may be an uncommon focus within information technology science, business, and health care delivery partnerships. However, the COVID-19 pandemic, which disproportionately affected Black, Indigenous, and people of color, has reinforced the need to examine and define the roles that technology partners should play in leading anti-racism efforts through our work. In our perspective piece, we describe the imperative to prioritize TechQuity (equity and social justice as a technology business strategy) through collaborating in partnerships that focus on eliminating racial and social inequities.


Subject(s)
COVID-19 , Racism , Humans , Pandemics , SARS-CoV-2 , Technology
3.
Journal of Health Care for the Poor and Underserved ; 32(2 Supplement):xiii-xviii, 2021.
Article in English | ProQuest Central | ID: covidwho-1208148

ABSTRACT

Three seminal reports, the 2001 Institute of Medicine's Crossing the Quality Chasm, the 2003 report Unequal Treatment [1], and the 2020 National Academy of Medicine's (formerly Institute of Medicine) Artificial Intelligence in Healthcare [2], represented inflection points in highlighting the substantial disparities in access, clinical care, and outcomes, and recommended that equity in health care and health technology must be achieved to deliver quality care [3]. Though Crossing the Quality Chasm set up the STEEEP framework, which explicitly called out equity as one of six health care quality domains (alongside safety, timeliness, effectiveness, efficiency, and patient-centered care), the issue of inequities in health care delivery was truly laid bare in Unequal Treatment, which also called upon health care institutions and providers to develop strategies to confront disparities in care [4]. Artificial Intelligence in Healthcare introduced the "Quintuple Aim," in which "Equity and Inclusion" was added to the "Quadruple Aim." Equity Dashboards: The application of analytics to demonstrate health care quality in the domains of safety, timeliness, effectiveness, efficiency, and patient-centeredness has been common in diverse dashboards for hospital ratings and other key health care certifications (e.g., National Committee for Quality Assurance, Joint Commission); however, equity has often been overlooked [17]. Peter Drucker, a famous business thinker and writer for the modern company, stated that "if you can't measure it, you can't improve it." [...] We must move AI from being a "black box" to a "clear box," with AI factsheets like nutrition labels through which buyers and end users of AI algorithms can transparently see who trained the AI, what datasets were used, and what specific AI algorithms and models were used [28]. We must assure transparent, ethical, fair, and equitable AI.
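The "clear box" factsheet idea lends itself to a simple structured record. The sketch below is a hypothetical Python illustration of such a factsheet; the class, field names, and example values are assumptions for illustration, not any published factsheet standard's schema.

# Hypothetical sketch of an AI factsheet, analogous to a nutrition label;
# field names are illustrative only. Requires Python 3.9+ for list[str].
from dataclasses import dataclass, field

@dataclass
class AIFactsheet:
    model_name: str
    trained_by: str                # who trained the AI
    training_datasets: list[str]   # what datasets were used
    algorithms: list[str]          # which algorithms and models were used
    intended_use: str
    known_limitations: list[str] = field(default_factory=list)

# Example record a buyer or end user could inspect before deployment
factsheet = AIFactsheet(
    model_name="readmission-risk-v2",
    trained_by="Example Health System data science team",
    training_datasets=["de-identified EHR encounters, 2015-2019"],
    algorithms=["gradient-boosted decision trees"],
    intended_use="flagging patients for post-discharge follow-up",
    known_limitations=["training data underrepresent uninsured patients"],
)
print(factsheet)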

4.
Journal of Health Care for the Poor and Underserved ; 32(2 Supplement):300-317, 2021.
Article in English | ProQuest Central | ID: covidwho-1208109

ABSTRACT

The COVID-19 pandemic has created multiple opportunities to deploy artificial intelligence (AI)-driven tools and applied interventions to understand, mitigate, and manage the pandemic and its consequences. The disproportionate impact of COVID-19 on racial/ethnic minority and socially disadvantaged populations underscores the need to anticipate and address social inequalities and health disparities in AI development and application. Before the pandemic, there was growing optimism about AI's role in addressing inequities and enhancing personalized care. Unfortunately, the ethical and social issues encountered in developing, scaling, and applying advanced technologies in health care settings have intensified during the rapidly evolving public health crisis. Critical voices, concerned with these tools' disruptive potential and the risk of engineered inequities, have called for reexamining ethical guidelines in the development and application of AI. This paper proposes a framework to incorporate ethical AI principles into the development process in ways that intentionally promote racial health equity and social justice. Without centering on equity, justice, and ethical AI, these tools may exacerbate structural inequities that can lead to disparate health outcomes.
