Results 1 - 2 of 2
1.
JAMA Netw Open; 7(5): e2413855, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38809550

ABSTRACT

Importance: Free video-sharing platforms (VSPs) account for a high proportion of children's daily screen use. Many VSPs make algorithmic recommendations that appear as thumbnail images from the video, which content creators use to advertise their content.

Objective: To explore how VSP thumbnails use attention-capture designs to encourage engagement with content and to test whether VSP algorithmic recommendations offer more problematic thumbnail features over time.

Design, Setting, and Participants: In this cross-sectional study conducted in January 2022, researchers mimicked the search behavior of children on a popular VSP by randomly clicking on recommended videos to test whether thumbnail designs changed over 20 sequential video engagements. A digital, footprint-free data collection setting was created using a new computer and wireless internet router. Data were collected from YouTube via an internet browser not logged into a user account. Data analysis occurred from April to December 2022.

Exposures: Manual searches using 12 top-searched terms popular with school-aged children were conducted. Researchers captured the video thumbnails recommended at the end of each video and randomly clicked subsequent videos for 20 sequential engagements.

Main Outcomes and Measures: Thumbnail content codes were developed through iterative review of screenshots by a multidisciplinary research team and applied by trained coders (reliability, κ > .70). The prevalence of problematic thumbnail content and the change in prevalence over 20 engagements were calculated using the Cochran-Armitage trend test.

Results: A total of 2880 video thumbnails were analyzed and 6 features were coded: visual loudness; drama and intrigue; lavish excess and wish fulfillment; creepy, bizarre, and disturbing; violence, peril, and pranks; and gender stereotypes. A high proportion contained problematic features, including the creepy, bizarre, and disturbing feature (1283 thumbnails [44.6%]); the violence, peril, and pranks feature (1170 thumbnails [40.6%]); and the gender stereotypes feature (525 thumbnails [18.2%]). Other features included attention-capture designs such as the visual loudness feature (2278 thumbnails [79.1%]), the drama and intrigue feature (2636 thumbnails [91.5%]), and the lavish excess and wish fulfillment feature (1286 thumbnails [44.7%]). Contrary to the hypotheses, the prevalence of problematic features did not increase overall, but the gender stereotypes feature did increase with more engagement in the recommendations feed (P for trend < .001).

Conclusions and Relevance: In this study of video recommendations for search terms popular with children, thumbnails contained problematic and attention-capturing designs, including violent, stereotyped, and frightening themes. Research is needed to understand how children respond to thumbnail designs and whether such designs influence the quality of content children consume.
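The trend analysis above relies on the Cochran-Armitage test for a change in feature prevalence across the 20 sequential engagements. The sketch below is a minimal, self-contained illustration of that test applied to hypothetical per-step thumbnail counts; the counts, step totals, and function names are assumptions for illustration, not the study's data or code.

```python
# Minimal sketch of a Cochran-Armitage trend test, the test named in the
# abstract for detecting a change in feature prevalence across ordered
# engagement steps. All counts below are hypothetical.
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(successes, totals, scores=None):
    """Two-sided Cochran-Armitage test for trend across ordered groups.

    successes[i] = thumbnails showing the feature at engagement step i
    totals[i]    = thumbnails coded at engagement step i
    scores[i]    = ordinal score for step i (defaults to 0, 1, 2, ...)
    """
    r = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    t = np.arange(len(n), dtype=float) if scores is None else np.asarray(scores, dtype=float)

    p_bar = r.sum() / n.sum()                       # pooled prevalence
    T = np.sum(t * (r - n * p_bar))                 # trend statistic
    var_T = p_bar * (1 - p_bar) * (np.sum(n * t**2) - np.sum(n * t) ** 2 / n.sum())
    z = T / np.sqrt(var_T)
    p_value = 2 * norm.sf(abs(z))                   # two-sided p-value
    return z, p_value

# Hypothetical example: 144 thumbnails coded at each of 20 engagement steps,
# with the feature's prevalence drifting upward along the feed.
rng = np.random.default_rng(0)
totals = np.full(20, 144)
successes = rng.binomial(totals, np.linspace(0.15, 0.25, 20))
z, p = cochran_armitage_trend(successes, totals)
print(f"z = {z:.2f}, p for trend = {p:.4f}")
```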


Subjects
Algorithms, Video Recording, Humans, Cross-Sectional Studies, Child, Male, Female, Social Media/statistics & numerical data, Screen Time, Adolescent
2.
Anesth Analg; 128(6): 1292-1299, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31094802

ABSTRACT

BACKGROUND: Limited data exist regarding computational drug error rates among anesthesia residents and faculty. We investigated the frequency and magnitude of computational errors in a sample of anesthesia residents and faculty.

METHODS: With institutional review board approval from 7 academic institutions in the United States, a 15-question computational test was distributed during rounds. Error rates and the magnitude of the errors were analyzed according to resident versus faculty status, years of practice (or residency training), duration of sleep, type of question, and institution.

RESULTS: A total of 371 participants completed the test: 209 residents and 162 faculty. Both groups committed a median of 2 errors per test, for a mean error rate of 17.0%. Twenty percent of residents and 25% of faculty answered all questions correctly. The error rate for postgraduate year 2 residents was lower than for postgraduate year 1 residents (P = .012). The error rate for faculty increased with years of experience, with a weak correlation (R = 0.22; P = .007). Error rates were independent of the number of hours of sleep. The error rate for percentage-type questions was greater than for rate, dose, and ratio questions (P = .001). Error rates varied with the number of operations needed to calculate the answer (P < .001). The frequency of large errors (100-fold greater or less than the correct answer) among residents was twice that of faculty. Error rates varied among institutions, ranging from 12% to 22% (P = .021).

CONCLUSIONS: Anesthesiology residents and faculty erred frequently on a computational test, with junior residents and more-experienced faculty committing errors most frequently. Residents committed serious errors twice as frequently as faculty.
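For readers interested in how the reported metrics are defined, the sketch below reproduces the two basic calculations: a per-respondent error rate over the 15-question test and the flagging of "large" errors, i.e., answers 100-fold greater or less than the correct value. The sample answers, thresholds on non-matching values, and function names are hypothetical illustrations under those definitions, not the study's data or scoring code.

```python
# Minimal sketch of the two error metrics described above: per-respondent
# error rate and "large" (>= 100-fold) computational errors.
# All answer values below are hypothetical.

N_QUESTIONS = 15  # the test had 15 computational questions

def error_rate(given, correct):
    """Fraction of answers that do not match the correct value."""
    wrong = sum(1 for g, c in zip(given, correct) if g != c)
    return wrong / len(correct)

def is_large_error(given, correct):
    """True if the answer is 100-fold or more greater or less than correct.
    Assumes positive values, as is typical for dose/rate calculations."""
    ratio = given / correct
    return ratio >= 100 or ratio <= 0.01

# Hypothetical respondent: 13 correct answers, one 10-fold slip, one 100-fold slip.
correct = [2.5, 40, 0.5, 120, 10, 7.5, 0.25, 60, 1.2, 15, 300, 0.75, 8, 45, 90]
given   = [2.5, 40, 0.5, 120, 10, 7.5, 0.25, 60, 1.2, 15, 300, 0.75, 8, 4.5, 9000]

rate = error_rate(given, correct)
large = [i for i, (g, c) in enumerate(zip(given, correct))
         if g != c and is_large_error(g, c)]
print(f"error rate = {rate:.1%}")                   # 2/15 ~ 13.3%
print(f"questions with 100-fold errors: {large}")   # [14] (9000 vs 90)
```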


Subjects
Anesthesiology/education, Anesthesiology/methods, Anesthetics/administration & dosage, Drug Administration Schedule, Medication Errors/statistics & numerical data, Psychometrics, Anesthesia, Clinical Competence, Factor Analysis, Faculty, Medical, Humans, Internship and Residency, Reproducibility of Results, Risk, Surveys and Questionnaires, United States