Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-37655047

ABSTRACT

Technological advances in psychological research have enabled large-scale studies of human behavior and streamlined pipelines for automatic processing of data. However, studies of infants and children have not fully reaped these benefits because the behaviors of interest, such as gaze duration and direction, still have to be extracted from video through a laborious process of manual annotation, even when these data are collected online. Recent advances in computer vision raise the possibility of automated annotation of these video data. In this article, we built on a system for automatic gaze annotation in young children, iCatcher, by engineering improvements and then training and testing the system (referred to hereafter as iCatcher+) on three data sets with substantial video and participant variability (214 videos collected in U.S. lab and field sites, 143 videos collected in Senegal field sites, and 265 videos collected via webcams in homes; participant age range = 4 months-3.5 years). When trained on each of these data sets, iCatcher+ performed with near human-level accuracy on held-out videos at distinguishing "LEFT" versus "RIGHT" and "ON" versus "OFF" looking behavior across all data sets. This high performance was achieved at the level of individual frames, experimental trials, and study videos; held across participant demographics (e.g., age, race/ethnicity), participant behavior (e.g., movement, head position), and video characteristics (e.g., luminance); and generalized to a fourth, entirely held-out online data set. We close by discussing next steps required to fully automate the life cycle of online infant and child behavioral studies, representing a key step toward enabling robust and high-throughput developmental research.
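At inference time, the gaze-annotation task described above reduces to classifying each video frame into a small set of looking labels, which are then aggregated over trials and videos. The following minimal Python/PyTorch sketch illustrates that per-frame step only; it is not the authors' iCatcher+ code, and the network size, input resolution, and exact label set are illustrative assumptions.

import torch
import torch.nn as nn

# Assumed label set; the abstract reports LEFT/RIGHT and ON/OFF distinctions.
GAZE_CLASSES = ["LEFT", "RIGHT", "ON", "OFF"]

class GazeFrameClassifier(nn.Module):
    """Small CNN mapping a cropped face image to a gaze label (illustrative)."""
    def __init__(self, n_classes: int = len(GAZE_CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims to 1x1
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (N, 3, H, W) batch of face crops with values in [0, 1]
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    model = GazeFrameClassifier().eval()
    frames = torch.rand(8, 3, 96, 96)  # eight dummy 96x96 face crops
    with torch.no_grad():
        preds = model(frames).argmax(dim=1)  # one label index per frame
    print([GAZE_CLASSES[i] for i in preds.tolist()])

Frame-level predictions like these would then be pooled into trial- and video-level looking scores, the levels at which the abstract reports accuracy.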

2.
Transplant Direct; 8(9): e1361, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35935028

ABSTRACT

Access to lifesaving liver transplantation is limited by a severe organ shortage. One factor contributing to the shortage is the high rate of discard of livers with histologic steatosis. Livers with <30% macrosteatosis are generally considered safe for transplant. However, histologic assessment of steatosis by a pathologist remains subjective and is often limited by image quality. Here, we address this bottleneck by creating an automated digital algorithm that calculates histologic steatosis using only images of liver biopsy histology obtained with a smartphone.

Methods: Multiple images of frozen-section liver histology slides were captured with a smartphone camera through the optical lens of a simple light microscope. Biopsy samples from 80 patients undergoing liver transplantation were included. An automated digital algorithm was designed to capture and count steatotic droplets in liver tissue while discounting areas of vascular lumen, white space, and processing artifacts. Pathologists of varying experience provided steatosis scores, and their results were compared with the algorithm's assessment. Interobserver agreement between pathologists was also assessed.

Results: Interobserver agreement between all pathologists was very low but increased with specialist training in liver pathology. A significant linear relationship was found between the steatosis estimates of the algorithm and those of expert liver pathologists, though the latter were consistently higher.

Conclusions: This study demonstrates proof of concept that smartphone-captured images can be used in conjunction with a digital algorithm to measure steatosis. Integrating this technology into the transplant workflow may significantly improve organ utilization rates.
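As a rough Python sketch of what such a droplet-counting pipeline can look like (this is not the published algorithm; the Otsu thresholding, size limits, and circularity cutoff are all assumptions), one can segment tissue, treat bright holes enclosed by tissue as candidate fat droplets, and reject large or irregular regions as vascular lumen or white space:

import numpy as np
from scipy import ndimage
from skimage import io, color, filters, measure, morphology

def estimate_steatosis(image_path: str) -> float:
    """Approximate percent macrosteatosis by area from a biopsy photo."""
    gray = color.rgb2gray(io.imread(image_path)[..., :3])

    # Tissue = pixels darker than the bright background/white space.
    tissue = gray < filters.threshold_otsu(gray)
    tissue = morphology.remove_small_objects(tissue, min_size=64)

    # Candidate droplets = bright holes fully enclosed by tissue.
    holes = ndimage.binary_fill_holes(tissue) & ~tissue
    labeled = measure.label(holes)

    droplet_area = 0
    for region in measure.regionprops(labeled):
        circularity = 4 * np.pi * region.area / max(region.perimeter, 1) ** 2
        # Keep small, round holes (assumed droplet size range in pixels);
        # discard large or irregular regions such as vascular lumens.
        if 30 <= region.area <= 5000 and circularity > 0.6:
            droplet_area += region.area

    tissue_area = tissue.sum() + droplet_area  # droplets sit within tissue
    return 100.0 * droplet_area / max(tissue_area, 1)

# Example: print(f"{estimate_steatosis('biopsy.jpg'):.1f}% macrosteatosis")

The returned value approximates percent macrosteatosis by area under these assumptions; the published algorithm additionally discounts processing artifacts, which this sketch does not model.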
