Results 1 - 20 of 59,195
2.
Br J Biomed Sci ; 81: 12054, 2024.
Article in English | MEDLINE | ID: mdl-38952614

ABSTRACT

The peer review process is a cornerstone of modern scientific publishing, underpinning essential quality control. First conceptualised in the 1700s, it is an iterative process that aims to hold scientific literature to the highest standards whilst preventing the publication of scientifically unsound, potentially misleading, or even plagiarised material. Peer review is widely regarded as an irreplaceable and fundamental part of the research process. However, the rapid growth of research and technology has driven a sharp rise in the number of publications, placing increasing pressure on the peer review system. Several established peer review methodologies exist, ranging from single- and double-blind to open and transparent review, but their implementation varies greatly across journals and research fields. Some journals are testing entirely novel approaches (such as collaborative review), whilst others are piloting changes to established methods. Given the unprecedented growth in publication numbers, and the ensuing burden on journals, editors, and reviewers, it is imperative to improve the quality and efficiency of the peer review process. Herein we evaluate the peer review process, from its historical origins to current practice and future directions.


Subject(s)
Peer Review, Research; Humans; Biomedical Research/trends; Biomedical Research/standards; History, 21st Century; Peer Review, Research/trends; Peer Review, Research/standards; Periodicals as Topic; Publishing/standards; Publishing/trends; Quality Control
4.
South Med J ; 117(7): 358-363, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38959961

ABSTRACT

OBJECTIVES: Periodically, medical publications are retracted. The reasons vary from minor issues, such as errors in author attribution, which do not undermine the validity of the data or the analysis in the article, to serious ones, such as fraud. Understanding the reasons for retraction can provide important information for clinicians, educators, researchers, journals, and editorial boards. METHODS: The PubMed database was searched using the term "COVID-19" (coronavirus disease 2019) and the term limitation "retracted publication." The characteristics of the journals with retracted articles, the types of article, and the reasons for retraction were analyzed. RESULTS: This search retrieved 196 retracted articles, published in 179 different journals; 14 journals had >1 retracted article. The mean impact factor of these journals was 8.4 (range 0.32-168.9). The most frequent reasons for retraction were duplicate publication, concerns about data validity and analysis, concerns about peer review, author request, and lack of permission or ethical violations. There were significant differences between the types of article and the reasons for retraction, but no consistent pattern. A more detailed analysis of two particular retractions demonstrates the complexity and effort required to make decisions about article retractions. CONCLUSIONS: The retraction of published articles presents a significant challenge to journals, editorial boards, peer reviewers, and authors. The process has the potential to provide important benefits; it also has the potential to undermine confidence in both research and the editorial process.


Subject(s)
COVID-19; Periodicals as Topic; PubMed; Retraction of Publication as Topic; Humans; COVID-19/epidemiology; Periodicals as Topic/statistics & numerical data; SARS-CoV-2; Journal Impact Factor; Scientific Misconduct
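The search strategy described above (PubMed queried for "COVID-19" limited to the "retracted publication" type) can be reproduced programmatically through NCBI's E-utilities. The sketch below only builds the esearch query URL; the exact query string and parameters the authors used are not given in the abstract, so this is an illustrative reconstruction, not their script.

```python
from urllib.parse import urlencode

# Base endpoint for NCBI's E-utilities PubMed search service.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_retraction_query(term: str = "COVID-19", retmax: int = 200) -> str:
    """Return an esearch URL for `term`, restricted to the
    "Retracted Publication" publication type via the [pt] field tag.
    The retmax default is an assumption (the study analyzed 196 hits)."""
    params = {
        "db": "pubmed",
        "term": f'{term} AND "retracted publication"[pt]',
        "retmax": retmax,
        "retmode": "json",
    }
    return f"{BASE}?{urlencode(params)}"

url = build_retraction_query()
```

Fetching that URL (e.g., with `urllib.request.urlopen`) returns a JSON list of PMIDs, whose records can then be coded for journal, article type, and stated retraction reason as in the study.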
8.
PLoS One ; 19(7): e0306334, 2024.
Article in English | MEDLINE | ID: mdl-38959247

ABSTRACT

OBJECTIVE: While statistical analysis plays a crucial role in medical science, some published studies may have used suboptimal analysis methods, potentially undermining the credibility of their findings. Critically appraising analytical approaches can help raise the standard of evidence and ensure that clinicians and other stakeholders have trustworthy results on which to base decisions. The aim of the present study was to examine the statistical characteristics of original articles published in Peruvian medical journals in 2021-2022. DESIGN AND SETTING: We performed a methodological study of articles published between 2021 and 2022 in nine medical journals indexed in SciELO-Peru, Scopus, and Medline. We included original articles that performed analytical comparisons (i.e., tested associations between variables). The statistical characteristics assessed were: the statistical software used for analysis, sample size calculation, the statistical methods employed (measures of effect), whether confounders were controlled for, and the method or epidemiological approach used for confounder control. RESULTS: We included 313 articles (ranging from 11 to 77 across journals), of which 67.7% were cross-sectional studies. While 90.7% of articles specified the statistical software used, 78.3% omitted details on sample size calculation. Descriptive and bivariate statistics were commonly employed, whereas measures of association were less common. Only 13.4% of articles (ranging from 0% to 39% across journals) presented measures of effect controlling for confounding and explained the criteria for selecting those confounders. CONCLUSION: This study revealed important statistical deficiencies in analytical studies published in Peruvian journals, including inadequate reporting of sample sizes, absence of measures of association and confounding control, and suboptimal explanations of the methodologies employed for adjusted analyses. These findings highlight the need for better statistical reporting and researcher-editor collaboration to improve the quality of research production and dissemination in Peruvian journals.


Subject(s)
Periodicals as Topic; Peru; Periodicals as Topic/statistics & numerical data; Humans; Sample Size; Publishing/statistics & numerical data; Research Design
9.
Circ Res ; 135(2): 262-264, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38963868
10.
PLoS One ; 19(7): e0306749, 2024.
Article in English | MEDLINE | ID: mdl-38968284

ABSTRACT

It is unknown to what extent medical researchers generalize study findings beyond their samples when their sample size, sample diversity, or knowledge of conditions that support external validity do not warrant it. It is also unknown to what extent medical researchers describe their results with precise quantifications or with unquantified generalizations, i.e., generics, which can obscure variation between individuals. We therefore systematically reviewed all prospective studies (n = 533) published in the four highest-ranking medical journals, Lancet, New England Journal of Medicine (NEJM), Journal of the American Medical Association (JAMA), and the British Medical Journal (BMJ), from January 2022 to May 2023. We additionally reviewed all NEJM Journal Watch clinical research summaries (n = 143) published during the same period. Of all research articles reporting prospective studies, 52.5% included generalizations beyond specific national study populations, with the number of articles containing generics varying significantly between journals (JAMA = 12%; Lancet = 77%) (p < 0.001, V = 0.48). There was no evidence that articles containing broader generalizations or generics were correlated with larger or more nationally diverse samples. Moreover, only 10.2% of articles with generalizations beyond specific national populations reported external validity-strengthening factors that could potentially support such extrapolations. There was no evidence that original research articles and NEJM Journal Watch summaries intended for practitioners differed in their use of broad generalizations, including generics. Finally, in the journal with the highest citation impact, articles containing broader conclusions were correlated with more citations. Since there was no evidence that studies with generalizations beyond specific national study populations, or with generics, were associated with larger or more nationally diverse samples, or with reports of population similarity that might permit extensions of conclusions, our findings suggest that the generalizations in many articles were insufficiently supported. Caution against overly broad generalizations in medical research is warranted.


Subject(s)
Biomedical Research; Humans; Prospective Studies; Periodicals as Topic/statistics & numerical data
12.
Vet Rec ; 195(1): 32, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38967161
13.
Int Marit Health ; 75(2): 137-146, 2024.
Article in English | MEDLINE | ID: mdl-38949214
18.
PLoS One ; 19(7): e0304807, 2024.
Article in English | MEDLINE | ID: mdl-38995880

ABSTRACT

Rapid advances in generative AI tools have produced both excitement and worry about how AI will impact academic writing. However, little is known about what norms are emerging around AI use in manuscript preparation, or how those norms might be enforced. We address both gaps in the literature by surveying 271 academics about whether it is necessary to report ChatGPT use in manuscript preparation, and by running GPT-modified abstracts from 2,716 published papers through a leading AI detection software to see whether these detectors can distinguish different AI uses in manuscript preparation. We find that most academics do not think that using ChatGPT to fix grammar needs to be reported, but detection software did not always draw this distinction: abstracts for which GPT was used to fix grammar were often flagged as having a high chance of being written by AI. We also find disagreement among academics on whether more substantial use of ChatGPT to rewrite text needs to be reported, and these differences were related to perceptions of ethics, academic role, and English-language background. Finally, we found little difference in academics' perceptions about reporting ChatGPT use versus research assistant help, but significant differences between these sources of assistance and paid proofreading or other AI assistance tools (Grammarly and Word). Our results suggest there may be challenges in getting authors to report AI use in manuscript preparation because (i) there is no uniform agreement about which uses of AI should be reported, and (ii) journals might have trouble enforcing nuanced reporting requirements using AI detection tools.


Subject(s)
Artificial Intelligence; Humans; Periodicals as Topic; Writing; Software; Perception
19.
Chest ; 166(1): 1-2, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38986631