Results 1 - 20 of 481
1.
J Clin Epidemiol ; : 111459, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39004321

ABSTRACT

OBJECTIVE: To evaluate the completeness of reporting in a sample of abstracts of diagnostic accuracy studies before and after the release of STARD for Abstracts in 2017. METHODS: We included 278 diagnostic accuracy abstracts published in 2012 (N=138) and 2019 (N=140) and indexed in EMBASE. We analyzed their adherence to 10 items of the 11-item STARD for Abstracts checklist and explored variability in reporting across abstract characteristics using multivariable Poisson modeling. RESULTS: Most of the 278 abstracts (75%) were published in discipline-specific journals, with a median impact factor of 2.9 (IQR: 1.9-3.7). The largest share (41%) of abstracts reported on imaging tests. Overall, a mean of 5.4/10 (SD: 1.4) STARD for Abstracts items was reported (range: 1.2-9.7). Items reported in less than one-third of abstracts included 'eligible patient demographics' (24%), 'setting of recruitment' (30%), 'method of enrolment' (18%), 'estimates of precision for accuracy measures' (26%), and 'protocol registration details' (4%). We observed substantial variability in reporting across several abstract characteristics, with higher adherence associated with the use of a structured abstract, no journal limit on abstract word count, an abstract word count above the median, a one-gate enrolment design, and prospective data collection. There was no evidence of an increase in the number of reported items between 2012 and 2019 (5.2 vs. 5.5 items; adjusted reporting ratio 1.04 [95% CI: 0.98-1.10]). CONCLUSION: This sample of diagnostic accuracy abstracts revealed suboptimal reporting practices, with no improvement between 2012 and 2019. The test evaluation field could benefit from targeted knowledge translation strategies to improve the completeness of reporting in abstracts.
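
As a rough illustration of the multivariable Poisson model described above, the sketch below regresses a per-abstract count of reported STARD items on abstract characteristics, so that exponentiated coefficients read as adjusted reporting ratios. The data and the predictor names (structured, year_2019) are invented for illustration; this is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: items_reported = STARD items reported per abstract (0-10);
# structured and year_2019 are illustrative abstract characteristics.
df = pd.DataFrame({
    "items_reported": [5, 7, 4, 6, 8, 3, 5, 6, 4, 7],
    "structured":     [1, 1, 0, 1, 1, 0, 0, 1, 0, 1],
    "year_2019":      [0, 1, 0, 1, 1, 0, 1, 0, 1, 1],
})

# Poisson regression of the item count on abstract characteristics.
model = smf.poisson("items_reported ~ structured + year_2019", data=df).fit(disp=0)

# Exponentiated coefficients are adjusted reporting ratios with 95% CIs,
# the quantity reported in the abstract (e.g., 1.04 [0.98-1.10] for 2019).
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```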

2.
World J Clin Cases ; 12(20): 4041-4047, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39015923

ABSTRACT

BACKGROUND: Obstructive sleep apnea-hypopnea syndrome (OSAHS) in children is a sleep-related breathing disorder characterized by a series of pathophysiologic changes. Statistics from recent years demonstrate a rising annual incidence. AIM: To investigate the risk factors for OSAHS in children and propose appropriate management measures. METHODS: This was a case-control study. Altogether, 85 children with OSAHS comprised the case group; healthy children matched 1:1 for age and sex formed the control group. Basic information, including age, sex, height, weight, family history, and medical history, was collected for all study participants. Polysomnography was used to record at least 8 h of nocturnal sleep. All participants were clinically examined for the presence of adenoids, enlarged tonsils, sinusitis, and rhinitis. RESULTS: Analysis of variance revealed that the case group had higher adenoid grades, tonsil indices, and rates of sinusitis and rhinitis than the control group. CONCLUSION: A regression model was established, and adenoid grade, tonsil index, sinusitis, and rhinitis were identified as independent risk factors for OSAHS development.

3.
Methods Mol Biol ; 2842: 405-418, 2024.
Article in English | MEDLINE | ID: mdl-39012608

ABSTRACT

DNA methylation is an important epigenetic modification that regulates chromatin structure and the cell-type-specific expression of genes. The association of aberrant DNA methylation with many diseases, as well as the increasing interest in directed modification of the methylation mark at genomic sites using epigenome editing for research and therapeutic purposes, increases the need for easy and efficient DNA methylation analysis methods. The standard approach to analyzing DNA methylation at single-cytosine resolution is bisulfite conversion of DNA followed by next-generation sequencing (NGS). In this chapter, we describe a robust, powerful, and cost-efficient protocol for the amplification of target regions from bisulfite-converted DNA, followed by a second PCR step to generate libraries for Illumina NGS. In the first of the two consecutive PCR steps, barcodes are added to individual amplicons; in the second, indices and Illumina adapters are added to the samples. Finally, we describe a detailed bioinformatics approach to extract DNA methylation levels of the target regions from the sequencing data. Combining barcodes with indices enables a high level of multiplexing, allowing multiple pooled samples to be sequenced in the same sequencing run. This method is therefore a robust, accurate, quantitative, and inexpensive approach for reading out DNA methylation patterns at defined genomic regions.
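
A minimal sketch of the final readout step such a bioinformatics approach performs, under the standard bisulfite logic: at each covered cytosine, reads showing 'C' were protected by methylation and reads showing 'T' were converted. The function and counts are illustrative, not the chapter's pipeline.

```python
def methylation_level(c_count: int, t_count: int) -> float:
    """Fraction methylated at one CpG: C = protected (methylated),
    T = bisulfite-converted (unmethylated)."""
    total = c_count + t_count
    return c_count / total if total else float("nan")

# e.g., 45 reads report 'C' and 15 report 'T' at a site -> 75% methylated
print(methylation_level(45, 15))  # 0.75
```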


Subject(s)
DNA Methylation , High-Throughput Nucleotide Sequencing , Polymerase Chain Reaction , Sulfites , Sulfites/chemistry , High-Throughput Nucleotide Sequencing/methods , Polymerase Chain Reaction/methods , Humans , DNA/genetics , Sequence Analysis, DNA/methods , Computational Biology/methods , Epigenesis, Genetic , Epigenomics/methods
4.
bioRxiv ; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38826299

ABSTRACT

Pangenomes are growing in number and size, thanks to the prevalence of high-quality long-read assemblies. However, current methods for studying sequence composition and conservation within pangenomes have limitations. Methods based on graph pangenomes require a computationally expensive multiple-alignment step, which can leave out some variation. Indexes based on k-mers and de Bruijn graphs are limited to answering questions at a specific substring length k. We present Maximal Exact Match Ordered (MEMO), a pangenome indexing method based on maximal exact matches (MEMs) between sequences. A single MEMO index can handle arbitrary-length queries over pangenomic windows. MEMO enables both membership queries, which test k-mer presence/absence, and conservation queries, which count the number of genomes containing each k-mer in a window. MEMO's index for a pangenome of 89 human autosomal haplotypes fits in 2.04 GB, 8.8× smaller than a comparable KMC3 index and 11.4× smaller than a PanKmer index. MEMO indexes can be made smaller by sacrificing some counting resolution, with our decile-resolution HPRC index reaching 0.67 GB. MEMO can conduct a conservation query for 31-mers over the human leukocyte antigen locus in 13.89 seconds, 2.5× faster than other approaches. MEMO's small index size, lack of k-mer length dependence, and efficient queries make it a flexible tool for studying and visualizing substring conservation in pangenomes.
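
To make the two query types concrete, here is a naive, set-based version of membership and conservation queries; MEMO itself answers these from a compressed MEM-based index rather than explicit k-mer sets, and the sequences here are invented.

```python
# Naive illustration of MEMO-style queries (not the MEMO data structure):
# conservation = for each k-mer in a window, how many genomes contain it.
def conservation(genomes, window, k):
    kmer_sets = [{g[i:i + k] for i in range(len(g) - k + 1)} for g in genomes]
    return [
        sum(window[i:i + k] in s for s in kmer_sets)  # membership per genome
        for i in range(len(window) - k + 1)
    ]

genomes = ["ACGTACGT", "ACGTTCGT", "TTTTACGT"]
print(conservation(genomes, "ACGTACG", 4))  # [3, 1, 1, 2]: counts per 4-mer
```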

5.
Acta Crystallogr A Found Adv ; 80(Pt 4): 339-350, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38916131

ABSTRACT

In ab initio indexing, for a given diffraction/scattering pattern, the unit-cell parameters and the Miller indices assigned to reflections in the pattern are determined simultaneously. 'Ab initio' means a process performed without any good prior information on the crystal lattice. Newly developed ab initio indexing software is frequently reported in crystallography. However, it is not widely recognized that use of a Bravais lattice determination method that is tolerant of experimental errors can simplify indexing algorithms and increase their success rates. One of the goals of this article is to collect information on lattice-basis reduction theory and its applications. The main result is a Bravais lattice determination algorithm for 2D lattices, along with a mathematical proof that it works even for parameters containing large observational errors. It uses two lattice-basis reduction methods that appear to be optimal for different symmetries, similarly to the algorithm for 3D lattices implemented in the CONOGRAPH software. In indexing, a method for error-stable unit-cell identification is also required to exclude duplicate solutions. Several methods known in crystallography and mathematics for measuring the difference between unit cells are introduced.
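
For the 2D case, the classical Lagrange-Gauss reduction below is the textbook lattice-basis reduction on which error-stable variants build; the article's algorithm, with its tolerance guarantees, is more involved. This sketch assumes exact (error-free) basis vectors.

```python
def lagrange_gauss(u, v):
    """Reduce a 2D lattice basis (u, v) to a shortest pair (Lagrange-Gauss)."""
    norm2 = lambda w: w[0] ** 2 + w[1] ** 2
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # integer multiple of u that best shortens v
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u

print(lagrange_gauss((1, 0), (7, 1)))  # ((1, 0), (0, 1))
```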

6.
Ann Hematol ; 2024 May 23.
Article in English | MEDLINE | ID: mdl-38777843

ABSTRACT

Flow cytometry (FC) is a powerful tool that can assist in lymphoma diagnosis in lymph node (LN) specimens. Although lymphoma diagnosis and classification are mainly based on tumor cell characteristics, surrounding cells are less often employed in this process. We retrospectively investigated alterations in ploidy status, proliferative cell fraction (PF), and the percentages of surrounding immune cells in 62 consecutive LN specimens with B-cell non-Hodgkin lymphoma (B-NHL) submitted for FC evaluation between 2019 and 2022. Compared with indolent B-NHLs, aggressive B-NHLs showed increased DNA aneuploidy and PF; increased monocytes, immature granulocytes, mature granulocytes, CD8+ T-cells, double-negative T-cells, and double-positive T-cells; and decreased total CD45+ cells, total lymphocytes, CD4+ T-cells, and CD4/CD8 ratio. Receiver operating characteristic analysis determined PF > 6.8% and immature granulocytes > 0.9% as the optimal cutoffs with the highest specificity and sensitivity for differentiating aggressive from indolent B-NHLs. These findings further strengthen the diagnostic value of DNA content analysis by FC and suggest the utilization of tumor-surrounding immune cells in NHL diagnosis and classification.
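
The ROC cutoff selection reported above is commonly done by maximizing Youden's J (sensitivity + specificity - 1); a hedged sketch with invented PF values follows, since the study's data are not reproduced here.

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])                   # 1 = aggressive B-NHL
pf     = np.array([3.1, 5.0, 6.2, 7.5, 9.1, 12.0, 4.4, 8.3])  # proliferative fraction (%)

fpr, tpr, thresholds = roc_curve(y_true, pf)
best = np.argmax(tpr - fpr)          # Youden's J = sensitivity + specificity - 1
print(f"optimal PF cutoff ~ {thresholds[best]:.1f}%")
```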

7.
Foods ; 13(10)2024 May 17.
Article in English | MEDLINE | ID: mdl-38790868

ABSTRACT

The aim of this research was to validate the effectiveness of the Healthy Fatty Index (HFI) for some foods of animal origin (meat, processed meat, fish, milk products, and eggs) typical of the Western diet, and to compare these results with two consolidated indices, the atherogenic index (AI) and the thrombogenic index (TI), in characterizing the nutritional features of their lipids. The fatty acid profile (% of total fatty acids and mg/100 g) of 60 foods, grouped into six subclasses, was used. The AI, TI, and HFI indices were calculated, and the intraclass correlation coefficients and degree of agreement were evaluated using different statistical approaches. The results demonstrated that HFI, compared with AI and TI, seems better able to account for the complexity of the fatty acid profile and the different fat contents. HFI and AI are the two most divergent indices and can produce different food classifications. AI and TI exhibit only fair agreement in food classification, confirming that these two indices should always be considered together, never separately, in contrast to HFI, which can stand alone.
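
The AI and TI referred to above are conventionally computed with the Ulbricht & Southgate formulas from the fatty acid profile; a sketch with invented values follows. The HFI formula is specific to the cited work and is not reproduced here.

```python
# Classical Ulbricht & Southgate indices from a fatty acid profile
# (% of total fatty acids); inputs below are invented illustrative values.
def atherogenic_index(c12, c14, c16, mufa, n6, n3):
    return (c12 + 4 * c14 + c16) / (mufa + n6 + n3)

def thrombogenic_index(c14, c16, c18, mufa, n6, n3):
    return (c14 + c16 + c18) / (0.5 * mufa + 0.5 * n6 + 3 * n3 + n3 / n6)

# hypothetical dairy-fat profile
print(atherogenic_index(c12=3.0, c14=10.5, c16=26.0, mufa=24.0, n6=2.5, n3=0.8))
print(thrombogenic_index(c14=10.5, c16=26.0, c18=12.0, mufa=24.0, n6=2.5, n3=0.8))
```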

8.
Med Ref Serv Q ; 43(2): 106-118, 2024.
Article in English | MEDLINE | ID: mdl-38722606

ABSTRACT

The objective of this study was to examine the accuracy of indexing for "Appalachian Region"[Mesh]. Researchers performed a search in PubMed for articles published in 2019 using "Appalachian Region"[Mesh] or "Appalachia" or "Appalachian" in the title or abstract. Only 17.88% of the articles retrieved by the search were about Appalachia according to the Appalachian Regional Commission (ARC) definition. Most retrieved articles appeared because they were indexed with state terms included as part of the MeSH term. Transparency in database indexing and searching is of growing importance as indexers rely increasingly on automated systems to catalog information and publications.
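
A search equivalent to the one described can be reproduced programmatically through NCBI's E-utilities; the query below mirrors the study's terms, but the exact field tags and date parameters are my assumption, not taken from the paper.

```python
import requests

params = {
    "db": "pubmed",
    "term": ('"Appalachian Region"[Mesh] OR Appalachia[tiab] '
             'OR Appalachian[tiab]'),
    "mindate": "2019", "maxdate": "2019", "datetype": "pdat",
    "retmode": "json",
}
r = requests.get("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
                 params=params, timeout=30)
print(r.json()["esearchresult"]["count"])  # number of records retrieved
```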


Subject(s)
Abstracting and Indexing , Appalachian Region , Abstracting and Indexing/methods , Humans , Medical Subject Headings , PubMed , Bibliometrics
9.
Sensors (Basel) ; 24(7)2024 Mar 24.
Article in English | MEDLINE | ID: mdl-38610290

ABSTRACT

Remote sensing images are a vital basis for land management decisions. Many scholars have applied blockchain's notarization function to the protection of remote sensing images, yet research on efficient retrieval of such images on the blockchain remains sparse. Addressing this issue, this paper introduces a blockchain-based spatial index verification method using Hyperledger Fabric. It linearizes the spatial information of remote sensing images via Geohash and integrates it with LSM trees for effective retrieval and verification. The system also incorporates IPFS as an underlying storage layer for Hyperledger Fabric, ensuring the safe storage and transmission of images. The experiments indicate that this method significantly reduces data retrieval and verification latency without impacting the write performance of Hyperledger Fabric, enhancing throughput and providing a solid foundation for efficient blockchain-based verification of remote sensing images in land registry systems.
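
The Geohash linearization at the heart of the method can be illustrated with a small pure-Python encoder: a (lat, lon) pair becomes a base-32 string whose lexicographic order preserves spatial locality, which is what makes it a natural key for an LSM-tree store. This sketch omits the paper's Fabric and IPFS integration entirely.

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # standard Geohash alphabet

def geohash(lat, lon, precision=8):
    """Interleave longitude/latitude bisection bits, 5 bits per character."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    bits, bit_count, code, even = 0, 0, [], True
    while len(code) < precision:
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        bits = (bits << 1) | (val >= mid)  # 1 = upper half, 0 = lower half
        rng[0 if val >= mid else 1] = mid  # shrink the interval
        even = not even
        bit_count += 1
        if bit_count == 5:
            code.append(BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(code)

print(geohash(39.9042, 116.4074))  # Beijing; nearby points share the 'wx4g' prefix
```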

10.
PeerJ Comput Sci ; 10: e1951, 2024.
Article in English | MEDLINE | ID: mdl-38660149

ABSTRACT

Software plays a fundamental role in research as a tool, an output, or even an object of study. This special issue on software citation, indexing, and discoverability brings together five papers examining different aspects of how the use of software is recorded and made available to others. The issue describes new datasets that enable large-scale analysis of the evolution of software usage and citation, presents evidence of increased citation rates when software artifacts are released, provides guidance for registries and repositories to support software citation and findability, and shows that there are still barriers to improving and formalising software citation and publication practice. As the use of software increases further, driven by modern research methods, addressing these barriers will encourage greater sharing and reuse of software, in turn enabling research progress.

11.
Methods Mol Biol ; 2787: 333-353, 2024.
Article in English | MEDLINE | ID: mdl-38656501

ABSTRACT

X-ray crystallography is a robust and widely used technique that facilitates the three-dimensional structure determination of proteins at an atomic scale. This methodology entails the growth of protein crystals under controlled conditions, followed by their exposure to X-ray beams and the subsequent analysis of the resulting diffraction patterns via computational tools to determine the three-dimensional architecture of the protein. However, achieving high-resolution structures through X-ray crystallography can be quite challenging due to complexities associated with protein purity, crystallization efficiency, and crystal quality. In this chapter, we provide a detailed overview of the gene-to-structure pipeline used in X-ray crystallography, a crucial tool for understanding protein structures. The chapter covers the steps in protein crystallization, along with the processes of data collection, processing, structure determination, and refinement. The most commonly faced challenges throughout this procedure are also addressed. Finally, the importance of standardized protocols for reproducibility and accuracy is emphasized, as they are crucial for advancing the understanding of protein structure and function.


Subject(s)
Crystallization , Protein Conformation , Proteins , Crystallography, X-Ray/methods , Proteins/chemistry , Crystallization/methods , Models, Molecular , Software
12.
Acta Ortop Mex ; 38(1): 22-28, 2024.
Article in Spanish | MEDLINE | ID: mdl-38657148

ABSTRACT

Predatory journals are distinguished from legitimate journals by their lack of adequate reviews and editorial processes, compromising the quality of published content. These journals do not conduct peer reviews or detect plagiarism, and accept manuscripts without requiring substantial modifications. Their near 100% acceptance rate is driven by profit motives, regardless of the content they publish. While they boast a prestigious editorial board composed of renowned researchers, in most cases, it is a facade aimed at impressing and attracting investigators. Furthermore, these journals lack appropriate ethical practices and are non-transparent in their editorial processes. Predatory journals have impacted multiple disciplines, including Orthopedics and Traumatology, and their presence remains unknown to many researchers, making them unwitting victims. Their strategy involves soliciting articles via email from authors who have published in legitimate journals, promising quick, easy, and inexpensive publication. The implications and negative consequences of predatory journals on the scientific community and researchers are numerous. The purpose of this work is to provide general information about these journals, specifically in the field of Orthopedics and Traumatology, offering guidelines to identify and avoid them, so that authors can make informed decisions when publishing their manuscripts and avoid falling into the hands of predatory journals or publishers.


Subject(s)
Orthopedics , Periodicals as Topic , Publishing , Traumatology , Orthopedics/standards , Periodicals as Topic/standards , Traumatology/standards , Publishing/standards , Editorial Policies , Humans
13.
Algorithms Mol Biol ; 19(1): 15, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600518

ABSTRACT

FM-indexes are crucial data structures in DNA alignment, but searching with them usually takes at least one random access per character in the query pattern. Ferragina and Fischer [1] observed in 2007 that word-based indexes often use fewer random accesses than character-based indexes, and thus support faster searches. Since DNA lacks natural word boundaries, however, it is necessary to parse it somehow before applying word-based FM-indexing. In 2022, Deng et al. [2] proposed parsing genomic data by induced suffix sorting, and showed that the resulting word-based FM-indexes support faster counting queries than standard FM-indexes when patterns are a few thousand characters or longer. In this paper we show that using prefix-free parsing, which takes parameters that let us tune the average length of the phrases, instead of induced suffix sorting gives a significant speedup for patterns of only a few hundred characters. We implement our method and demonstrate that it is between 3 and 18 times faster than competing methods on queries to GRCh38, and consistently faster on queries made to 25,000, 50,000 and 100,000 SARS-CoV-2 genomes. Our method thus accelerates counting queries relative to all state-of-the-art methods, with a moderate increase in memory. The source code for PFP-FM is available at https://github.com/AaronHong1024/afm .
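
For orientation, the counting query that both standard and word-based FM-indexes answer is the classic backward search over the BWT; a toy, uncompressed version is sketched below. Real FM-indexes replace the linear-scan rank with compressed rank structures, and PFP-FM works over parsed phrases rather than raw characters.

```python
def bwt(s):
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def count_occurrences(text, pattern):
    L = bwt(text)
    first = sorted(L)                        # F column
    C = {c: first.index(c) for c in set(L)}  # start of each character block in F
    rank = lambda c, i: L[:i].count(c)       # occurrences of c in L[:i]
    lo, hi = 0, len(L)
    for c in reversed(pattern):              # backward search
        if c not in C:
            return 0
        lo, hi = C[c] + rank(c, lo), C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

print(count_occurrences("ACGTACGTACG", "ACG"))  # 3
```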

14.
J Imaging Inform Med ; 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38653911

ABSTRACT

In this paper, we focus on indexing mechanisms for unstructured clinical big-data integrated repository systems. Clinical data is unstructured and heterogeneous, arriving in different files and formats, so accessing it efficiently and effectively is a critical challenge. Traditional indexing mechanisms are difficult to apply to unstructured data, especially when correlations between clinical data elements must be identified. In previous work, we designed a methodology that categorizes medical data based on the semantics of data elements and merges them into an integrated repository, and we developed a data integration system that combines heterogeneous medical data sources and gives different users access to knowledge-based database repositories. In this work, we designed an indexing system that uses semantic tags extracted from clinical data sources and medical ontologies to retrieve relevant data from database repositories and speed up data retrieval. Our objective is to provide an integrated biomedical database repository that can be used by radiologists as a reference, for patient care, or by researchers. We describe a technique that performs data processing for integration, learns the semantic properties of data elements, and builds a correlation-aware topic index that facilitates efficient retrieval. We generated semantic tags by identifying key elements from integrated clinical cases using topic modeling techniques, investigated a technique that identifies tags for merged categories, and built a topic coherence matrix that shows how well a topic is supported by the corpus of clinical cases and medical ontologies. Using the annotation index on the integrated repository, we found more relevant results, with a 61% increase in recall. We evaluated the results with the help of experts and compared them with a naive index (an index of all terms in the corpus). Our approach improved retrieval quality by returning the most relevant cases and reduced retrieval time by applying the correlation-aware index to the integrated data repository.
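
In spirit, the correlation-aware tag index behaves like an inverted index from semantic tags to cases, with retrieval ranked by tag overlap; the miniature version below is only a conceptual sketch with made-up tags, not the paper's topic-model-driven system.

```python
from collections import defaultdict

# Hypothetical cases already annotated with semantic tags.
cases = {
    "case1": {"lung", "nodule", "ct"},
    "case2": {"lung", "pneumonia", "xray"},
    "case3": {"brain", "mri", "tumor"},
}

# Inverted index: tag -> cases carrying that tag.
index = defaultdict(set)
for cid, tags in cases.items():
    for tag in tags:
        index[tag].add(cid)

def retrieve(query_tags):
    """Rank cases by the number of query tags they share."""
    scores = defaultdict(int)
    for tag in query_tags:
        for cid in index.get(tag, ()):
            scores[cid] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(retrieve({"lung", "ct"}))  # ['case1', 'case2'] -- case1 matches both tags
```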

15.
Microb Genom ; 10(4)2024 Apr.
Article in English | MEDLINE | ID: mdl-38578268

ABSTRACT

Background. PCR amplification is a necessary step in many next-generation sequencing (NGS) library preparation methods [1, 2]. Whilst many PCR enzymes are developed to amplify single targets efficiently, accurately and with specificity, few are developed to meet the challenges imposed by NGS PCR, namely the unbiased amplification of fragments spanning a wide range of sizes and GC contents. As a result, PCR amplification during NGS library prep often biases toward GC-neutral and smaller fragments. As NGS has matured, optimized NGS library prep kits and polymerase formulations have emerged, and in this study we tested a wide selection of available enzymes for both short-read Illumina library preparation and long-fragment amplification ahead of long-read sequencing. We tested over 20 different high-fidelity PCR enzymes/NGS amplification mixes on a range of Illumina library templates of varying GC content and composition, and found that both the yield and the genome coverage uniformity of the commercially available enzymes varied dramatically. Three enzymes, Quantabio RepliQa Hifi Toughmix, Watchmaker Library Amplification Hot Start Master Mix (2X) 'Equinox' and Takara Ex Premier, gave a consistent performance over all genomes that closely mirrored that observed for PCR-free datasets. We also tested a range of enzymes for long-read sequencing by amplifying size-fractionated S. cerevisiae DNA of average sizes 21.6 and 13.4 kb. The enzymes of choice for short-read (Illumina) library fragment amplification are Quantabio RepliQa Hifi Toughmix, Watchmaker Library Amplification Hot Start Master Mix (2X) 'Equinox' and Takara Ex Premier, with RepliQa also being the best-performing enzyme among those tested for long-fragment amplification prior to long-read sequencing.


Subject(s)
DNA , Saccharomyces cerevisiae , Polymerase Chain Reaction/methods , Gene Library , High-Throughput Nucleotide Sequencing/methods
16.
Algorithms Mol Biol ; 19(1): 16, 2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38679714

ABSTRACT

PURPOSE: String indexes such as the suffix array (SA) and the closely related longest common prefix (LCP) array are fundamental objects in bioinformatics and have a wide variety of applications. Despite their importance in practice, few scalable parallel algorithms for constructing these are known, and the existing algorithms can be highly non-trivial to implement and parallelize. METHODS: In this paper we present CAPS-SA, a simple and scalable parallel algorithm for constructing these string indexes inspired by samplesort and utilizing an LCP-informed mergesort. Due to its design, CAPS-SA has excellent memory-locality and thus incurs fewer cache misses and achieves strong performance on modern multicore systems with deep cache hierarchies. RESULTS: We show that despite its simple design, CAPS-SA outperforms existing state-of-the-art parallel SA and LCP-array construction algorithms on modern hardware. Finally, motivated by applications in modern aligners where the query strings have bounded lengths, we introduce the notion of a bounded-context SA and show that CAPS-SA can easily be extended to exploit this structure to obtain further speedups. We make our code publicly available at https://github.com/jamshed/CaPS-SA .
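
As a sequential baseline for what CAPS-SA computes, the sketch below builds a suffix array naively and derives the LCP array with Kasai's linear-time algorithm; CAPS-SA's contribution is doing this scalably in parallel via samplesort and an LCP-informed mergesort, which this sketch does not attempt.

```python
def suffix_array(s):
    return sorted(range(len(s)), key=lambda i: s[i:])  # O(n^2 log n) baseline

def lcp_array(s, sa):
    """Kasai's algorithm: lcp[r] = LCP of suffixes ranked r-1 and r."""
    n = len(s)
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    lcp, h = [0] * n, 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]
            while i + h < n and j + h < n and s[i + h] == s[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1
        else:
            h = 0
    return lcp

s = "banana"
sa = suffix_array(s)
print(sa, lcp_array(s, sa))  # [5, 3, 1, 0, 4, 2] [0, 1, 3, 0, 0, 2]
```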

17.
Open Mind (Camb) ; 8: 278-308, 2024.
Article in English | MEDLINE | ID: mdl-38571528

ABSTRACT

Multiple object tracking (MOT) involves simultaneously tracking a certain number of target objects amongst a larger set of objects as they all move unpredictably over time. The prevalent explanation for successful target tracking by humans in MOT with visually identical objects is based on Visual Indexing Theory, which assumes that each target is indexed by a pointer using a non-conceptual mechanism to maintain an object's identity even as its properties change over time. Thus, successful tracking requires successful indexing and the absence of identification errors. Identity maintenance and successful tracking are measured in terms of identification (ID) accuracy and tracking accuracy, respectively, with higher accuracy indicating better identity maintenance or better tracking. Existing evidence suggests that humans have high tracking accuracy despite poor identification accuracy, suggesting that it might be possible to perform MOT without indexing. Our work adds to the existing evidence for this position through two experiments, and presents a computational model of multiple object tracking that does not require indexes. Our empirical results show that identification accuracy is aligned with tracking accuracy in humans when tracking up to three objects, but is lower when tracking more. Our computational model of MOT without indexing accounts for several empirical tracking accuracy patterns shown in earlier studies, reproduces the dissociation between tracking and identification accuracy reported in the literature as well as in our experiments, and makes several novel predictions.

18.
Genome Biol ; 25(1): 90, 2024 04 08.
Article in English | MEDLINE | ID: mdl-38589969

ABSTRACT

Single-cell ATAC-seq has emerged as a powerful approach for revealing candidate cis-regulatory elements genome-wide at cell-type resolution. However, current single-cell methods suffer from limited throughput and high costs. Here, we present a novel technique called scifi-ATAC-seq, single-cell combinatorial fluidic indexing ATAC-sequencing, which combines a barcoded Tn5 pre-indexing step with droplet-based single-cell ATAC-seq using the 10X Genomics platform. With scifi-ATAC-seq, up to 200,000 nuclei across multiple samples can be indexed in a single emulsion reaction, representing an approximately 20-fold increase in throughput compared to the standard 10X Genomics workflow.


Subject(s)
Chromatin Immunoprecipitation Sequencing , Chromatin , High-Throughput Nucleotide Sequencing/methods , Sequence Analysis, DNA/methods , Cell Nucleus
19.
Heliyon ; 10(6): e27604, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38545144

ABSTRACT

Cassava (Manihot esculenta Crantz) is a crop of global economic and food security importance, used for human consumption and in various industrial applications. The genebank of the Genetic Resources Program of the Alliance of Bioversity International and CIAT currently holds the world's largest cassava collection, with 5965 in vitro accessions from 28 countries. Managing this extensive collection involves indexing quarantine pathogens as a phytosanitary certification requirement for safely distributing cassava germplasm. This study therefore aimed to optimize a quantitative diagnostic protocol for detecting cassava common mosaic virus (CsCMV) using quantitative PCR (qPCR) as a better alternative to other molecular techniques. This was done by designing primers and a probe targeting the RdRP region of CsCMV and optimizing the qPCR conditions of the diagnostic protocol, including primer concentrations and reaction parameters such as volume and reaction time. We also evaluated the qPCR protocol by comparing the results of 140 cassava accession evaluations across three diagnostic methodologies (DAS-ELISA, end-point PCR, and qPCR) for CsCMV. The qPCR assay proved ten times more sensitive in detecting CsCMV than end-point PCR, with a maximum detection level of 77.97 copies/µL of plasmid and a reaction time of 76 min. The comparison verified the level of CsCMV detection achieved by each technique, confirming that qPCR was the most sensitive and also allowed quantification of viral concentration. The optimized qPCR protocol will be used to accelerate diagnostic screening of cassava germplasm for the presence or absence of CsCMV, ensuring the safe movement and distribution of disease-free germplasm.
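
A detection level in copies/µL is conventionally read off a plasmid standard curve; the sketch below shows that calculation with invented Ct values, and is not the study's data or protocol.

```python
import numpy as np

# Invented standard curve: serial plasmid dilutions vs. measured Ct.
log10_copies = np.array([7.0, 6.0, 5.0, 4.0, 3.0, 2.0])
ct = np.array([14.1, 17.5, 20.9, 24.3, 27.8, 31.2])

m, b = np.polyfit(log10_copies, ct, 1)  # Ct = m * log10(copies) + b
efficiency = 10 ** (-1 / m) - 1         # ~1.0 means 100% PCR efficiency

unknown_ct = 29.0
copies = 10 ** ((unknown_ct - b) / m)   # back-calculate template copies
print(f"slope={m:.2f}, efficiency={efficiency:.0%}, ~{copies:.0f} copies/µL")
```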

20.
Comput Biol Chem ; 110: 108050, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38447272

ABSTRACT

Read mapping, a foundational task in computational biology, has become a bottleneck as sequencing throughput explodes. In this work, we present an efficient Burrows-Wheeler transform-based aligner for next-generation sequencing (NGS) short reads. First, we propose a difference-aware classification strategy that assigns specific reads to computationally more economical search modes, and we present several acceleration techniques, such as a seed-pruning method based on the maximum-coverage-interval property, which reduces redundant locating of candidate regions, and a redesigned LF calculation that supports fast queries. Then, we propose a heuristic verification step to determine the best mapping among large numbers of flanking sequences. By incorporating a low-distortion string embedding, most dissimilar sequences are filtered out cheaply, and the highly similar sequences that remain are well suited to the wavefront alignment algorithm. We provide a full-spectrum benchmark with different read lengths; the results show that our method is 1.3-1.4 times faster than state-of-the-art Burrows-Wheeler transform-based methods (including bowtie2, bwa-MEM, and hisat2) on 101 bp reads and 1.5-13 times faster on 750 bp to 1000 bp reads, with comparable memory usage and accuracy. However, hash-based methods (including Strobealign, Minimap2, and Accel-Align) are significantly faster, in part because Burrows-Wheeler transform-based methods operate on a compressed space. The source code is available at https://github.com/Lilu-guo/Effaln.
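
The LF calculation mentioned above is the core BWT step: LF(i) maps row i of the sorted-rotation matrix to the row holding its left rotation, which is what lets backward search walk the text. A toy version over an explicit BWT string follows; the paper's redesign targets fast queries over compressed structures.

```python
def lf_mapping(L):
    """LF(i) = C[L[i]] + rank of L[i] among its equals in L[:i]."""
    counts = {}
    for ch in L:
        counts[ch] = counts.get(ch, 0) + 1
    C, total = {}, 0                # C[c]: number of chars in L smaller than c
    for ch in sorted(counts):
        C[ch] = total
        total += counts[ch]
    occ = {ch: 0 for ch in counts}  # running rank per character
    LF = []
    for ch in L:
        LF.append(C[ch] + occ[ch])
        occ[ch] += 1
    return LF

print(lf_mapping("annb$aa"))  # BWT of 'banana$' -> [1, 5, 6, 4, 0, 2, 3]
```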
