Results 1 - 3 of 3
2.
Clin Imaging; 72: 22-30, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33197713

ABSTRACT

The global pandemic of COVID-19 pneumonia caused by the novel coronavirus (SARS-CoV-2) has strained healthcare resources across the world, with emerging challenges in mass testing, resource allocation, and management. While the reverse transcriptase-polymerase chain reaction (RT-PCR) test is the most commonly used test and is considered the current gold standard for diagnosis, the role of chest imaging has been highlighted by several studies demonstrating the high sensitivity of computed tomography (CT). Many have suggested using chest CT as a first-line screening tool for the diagnosis of COVID-19. However, with the advancement of laboratory testing and the challenges of obtaining a CT scan without significant risk to healthcare providers, the role of imaging in diagnosis has been questioned. Several imaging societies have released consensus statements and guidelines on utilizing imaging resources and on optimal reporting. In this review, we highlight the current evidence on the various thoracic imaging modalities for the diagnosis of COVID-19 and describe an algorithm for using these resources optimally, in accordance with the guidelines and statements released by the major imaging societies.


Subjects
COVID-19, Algorithms, COVID-19 Testing, Clinical Laboratory Techniques, Humans, SARS-CoV-2
3.
J Oncol Pract; 7(5): 319-23, 2011 Sep.
Article in English | MEDLINE | ID: mdl-22211130

ABSTRACT

PURPOSE: A wiki is a collaborative Web site, such as Wikipedia, that can be freely edited. Because of a wiki's lack of formal editorial control, we hypothesized that its content would be less complete and accurate than that of a professional peer-reviewed Web site. In this study, the coverage, accuracy, and readability of cancer information on Wikipedia were compared with those of the patient-oriented National Cancer Institute Physician Data Query (PDQ) comprehensive cancer database.

METHODS: For each of 10 cancer types, medically trained personnel scored PDQ and Wikipedia articles for accuracy and presentation of controversies using an appraisal form. Reliability was assessed using interobserver variability and test-retest reproducibility. Readability was calculated from word and sentence length.

RESULTS: Evaluators were able to assess articles rapidly (18 minutes per article), with a test-retest reliability of 0.71 and interobserver variability of 0.53. For both Web sites, inaccuracies were rare, affecting less than 2% of the information examined. PDQ was significantly more readable than Wikipedia: Flesch-Kincaid grade level 9.6 versus 14.1. There was no difference in depth of coverage between PDQ and Wikipedia (29.9 and 34.2, respectively; maximum possible score 72). Controversial aspects of cancer care were relatively poorly discussed in both resources (2.9 and 6.1 for PDQ and Wikipedia, respectively, not significant; maximum possible score 18). A planned subanalysis comparing common and uncommon cancers demonstrated no difference.

CONCLUSION: Although the wiki resource had accuracy and depth similar to those of the professionally edited database, it was significantly less readable. Further research is required to assess how this influences patients' understanding and retention.
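The readability metric reported above, the Flesch-Kincaid grade level, is derived from average sentence length and average syllables per word. The abstract does not specify the tooling the authors used, so the sketch below is only illustrative: it assumes the standard Flesch-Kincaid grade-level formula and a crude vowel-group syllable counter, and the function names and sample text are hypothetical rather than taken from the study.

```python
import re


def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels, minimum one per word.
    # This is an illustrative assumption, not the paper's method.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_kincaid_grade(text: str) -> float:
    # Split into sentences on ., !, ? and pull out alphabetic words.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade-level formula:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)


if __name__ == "__main__":
    sample = ("Cancer is a group of diseases involving abnormal cell growth. "
              "It can spread to other parts of the body.")
    print(f"Flesch-Kincaid grade level: {flesch_kincaid_grade(sample):.1f}")
```

On this scale, the reported 9.6 for PDQ corresponds roughly to high-school-level text, while 14.1 for Wikipedia corresponds to college-level text, which is the readability gap the study describes.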
