Results 1 - 20 of 551
1.
J Exp Bot ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38954539

ABSTRACT

Linear mixed models (LMMs) are a commonly used method for genome-wide association studies (GWAS) that aim to detect associations between genetic markers and phenotypic measurements in a population of individuals while accounting for population structure and cryptic relatedness. In a standard GWAS, hundreds of thousands to millions of statistical tests are performed, requiring control for multiple hypothesis testing. Typically, static corrections that penalize the number of tests performed are used to control the family-wise error rate, which is the probability of making at least one false positive. However, it has been shown that in practice this threshold is too conservative for normally distributed phenotypes and not stringent enough for non-normally distributed phenotypes. Therefore, permutation-based LMM approaches have recently been proposed to provide a more realistic threshold that takes phenotypic distributions into account. In this work, we discuss the advantages of permutation-based GWAS approaches, including new simulations and results from a re-analysis of all publicly available Arabidopsis thaliana phenotypes from the AraPheno database.
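The maxT permutation scheme at the heart of such approaches is easy to sketch: shuffle the phenotype, redo the genome-wide scan, record the smallest p-value, and take a low quantile of those minima as the significance threshold. The sketch below is a simplified, assumption-laden stand-in: it uses plain per-marker linear regression instead of the LMM (so it ignores kinship), and the data sizes are toy values.

```python
# A minimal sketch of a maxT permutation threshold, assuming simple
# per-marker linear regression; the approaches discussed here use an LMM
# that also accounts for kinship, which plain permutation ignores.
import numpy as np
from scipy import stats

def min_p(genotypes, phenotype):
    """Smallest per-marker p-value of a genome-wide scan."""
    return min(stats.linregress(g, phenotype).pvalue for g in genotypes)

def permutation_threshold(genotypes, phenotype, n_perm=50, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    null_minima = [min_p(genotypes, rng.permutation(phenotype))
                   for _ in range(n_perm)]
    return np.quantile(null_minima, alpha)   # alpha-quantile of the null minima

# toy data: 100 markers x 200 individuals
rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(100, 200)).astype(float)
y = rng.normal(size=200)
print(f"permutation threshold: {permutation_threshold(G, y):.2e}")
print(f"Bonferroni threshold:  {0.05 / len(G):.2e}")
```

Because the entire scan is repeated for every permutation, this is exactly the embarrassingly parallel workload that GPU-accelerated permutation GWAS tools target.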

2.
Comput Biol Med ; 179: 108831, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38970834

ABSTRACT

This work presents an advanced agent-based model developed within the FLAMEGPU2 framework, aimed at simulating the intricate dynamics of cell microenvironments. Our primary objective is to showcase FLAMEGPU2's potential in modelling critical features such as cell-cell and cell-ECM interactions, species diffusion, vascularisation, cell migration, and cell cycling. By doing so, we provide a versatile template that serves as a foundational platform for researchers to model specific biological mechanisms or processes. We highlight the utility of our approach as a microscale component within multiscale frameworks. Through four example applications, we demonstrate the model's versatility in capturing phenomena such as strain-stiffening behaviour of hydrogels, cell migration patterns within hydrogels, spheroid formation and fibre reorientation, and the simulation of diffusion processes within a vascularised and deformable domain. This work aims to bridge the gap between computational efficiency and biological fidelity, offering a scalable and flexible platform to advance our understanding of tissue biology and engineering.
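To make the template idea concrete, here is a minimal, framework-agnostic sketch of one such microscale coupling: cell agents secreting a species that diffuses on a grid and biases their migration. This is illustrative NumPy only, not the FLAMEGPU2 API (which expresses the same ideas as GPU agent functions and messages), and all sizes and rates are invented.

```python
# A framework-agnostic sketch of an agent/field coupling: cells secrete a
# species, the species diffuses, and cells step up the local gradient.
import numpy as np

N, STEPS, D, DT, DX = 64, 100, 0.1, 0.1, 1.0
conc = np.zeros((N, N))                                        # species field
cells = np.random.default_rng(0).integers(0, N, size=(50, 2))  # cell positions

for _ in range(STEPS):
    # each cell agent secretes into its grid voxel
    for x, y in cells:
        conc[x, y] += 1.0 * DT
    # explicit finite-difference diffusion of the species
    lap = (np.roll(conc, 1, 0) + np.roll(conc, -1, 0) +
           np.roll(conc, 1, 1) + np.roll(conc, -1, 1) - 4 * conc) / DX**2
    conc += D * DT * lap
    # cells take a biased step up the local gradient (chemotaxis-like migration)
    gx, gy = np.gradient(conc)
    for i, (x, y) in enumerate(cells):
        step = np.sign([gx[x, y], gy[x, y]]).astype(int)
        cells[i] = np.clip([x + step[0], y + step[1]], 0, N - 1)

print(conc.sum())
```

In a GPU agent-based framework, the per-cell loops above become parallel agent functions and the diffusion step a stencil kernel, which is where the computational efficiency comes from.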

3.
Radiat Oncol ; 19(1): 86, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956685

ABSTRACT

PURPOSE: To apply an independent GPU-accelerated Monte Carlo (MC) dose verification for CyberKnife M6 with the Iris collimator, and to evaluate the dose calculation accuracy of the RayTracing (TPS-RT) algorithm and the Monte Carlo (TPS-MC) algorithm in the Precision treatment planning system (TPS). METHODS: A GPU-accelerated MC algorithm (ArcherQA-CK) was integrated into a commercial dose verification system, ArcherQA, to implement patient-specific quality assurance in the CyberKnife M6 system. 30 clinical cases (10 head, 10 chest, and 10 abdomen) were collected in this study. For each case, three different dose calculation methods (TPS-MC, TPS-RT and ArcherQA-CK) were applied to the same treatment plan and compared with each other. For evaluation, 3D global gamma analysis and dose parameters of the target volume and organs at risk (OARs) were analyzed comparatively. RESULTS: For gamma pass rates at the criterion of 2%/2 mm, the results were over 98.0% for TPS-MC vs. TPS-RT, TPS-MC vs. ArcherQA-CK and TPS-RT vs. ArcherQA-CK in head cases; 84.9% for TPS-MC vs. TPS-RT, 98.0% for TPS-MC vs. ArcherQA-CK and 83.3% for TPS-RT vs. ArcherQA-CK in chest cases; and 98.2% for TPS-MC vs. TPS-RT, 99.4% for TPS-MC vs. ArcherQA-CK and 94.5% for TPS-RT vs. ArcherQA-CK in abdomen cases. For dose parameters of the planning target volume (PTV) in chest cases, the deviations of TPS-RT vs. TPS-MC and ArcherQA-CK vs. TPS-MC differed significantly (P < 0.01), while the deviations of TPS-RT vs. TPS-MC and TPS-RT vs. ArcherQA-CK were similar (P > 0.05). ArcherQA-CK required much less calculation time than TPS-MC (1.66 min vs. 65.11 min). CONCLUSIONS: Our proposed MC dose engine (ArcherQA-CK) has a high degree of consistency with the Precision TPS-MC algorithm and can quickly identify the calculation errors of the TPS-RT algorithm in some chest cases. ArcherQA-CK can provide accurate patient-specific quality assurance in clinical practice.
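The 2%/2 mm global gamma criterion used throughout these comparisons can be computed directly from its definition: for each reference point, search the evaluated grid for the point minimizing the combined dose-difference and distance-to-agreement term. The brute-force sketch below assumes two dose grids on the same isotropic lattice and a 10% low-dose cutoff; these are illustrative assumptions, not details taken from the paper.

```python
# A brute-force sketch of a global gamma analysis at 2%/2 mm. Real QA tools
# use optimized searches; this directly implements the gamma definition.
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dose_pct=2.0, dta_mm=2.0, cutoff=0.1):
    dd = dose_pct / 100.0 * ref.max()            # global dose criterion
    idx = np.argwhere(ref > cutoff * ref.max())  # evaluate above low-dose cutoff
    coords = np.argwhere(np.ones_like(ref, dtype=bool)) * spacing_mm
    ev_flat = ev.ravel()
    passed = 0
    for p in idx:
        r_mm = p * spacing_mm
        dist2 = ((coords - r_mm) ** 2).sum(axis=1) / dta_mm**2
        dose2 = (ev_flat - ref[tuple(p)]) ** 2 / dd**2
        gamma = np.sqrt((dist2 + dose2).min())   # gamma index at this point
        passed += gamma <= 1.0
    return passed / len(idx)

ref = np.random.default_rng(0).random((20, 20, 20))
ev = ref + 0.01 * np.random.default_rng(1).random((20, 20, 20))
print(f"pass rate: {gamma_pass_rate(ref, ev, spacing_mm=1.0):.3f}")
```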


Subjects
Algorithms; Monte Carlo Method; Organs at Risk; Radiosurgery; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted; Humans; Radiosurgery/methods; Radiosurgery/instrumentation; Radiotherapy Planning, Computer-Assisted/methods; Organs at Risk/radiation effects; Neoplasms/surgery; Neoplasms/radiotherapy; Radiotherapy, Intensity-Modulated/methods; Computer Graphics
4.
J Biomed Opt ; 29(6): 066006, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38846677

ABSTRACT

Significance: Photoacoustic computed tomography (PACT) is a promising non-invasive imaging technique for both life science and clinical implementations. To achieve fast imaging speed, modern PACT systems are equipped with arrays of hundreds to thousands of ultrasound transducer (UST) elements, and the element number continues to increase. However, the large number of UST elements with parallel data acquisition can generate massive data volumes, making fast image reconstruction very challenging. Although several research groups have developed GPU-accelerated methods for PACT, an explicit and feasible step-by-step description of GPU-based algorithms for various hardware platforms has been lacking. Aim: In this study, we propose a comprehensive framework for developing GPU-accelerated PACT image reconstruction, to help the research community grasp this advanced image reconstruction method. Approach: We leverage widely accessible open-source parallel computing tools, including Python multiprocessing-based parallelism, Taichi Lang for Python, CUDA, and other possible backends. We demonstrate that our framework significantly improves the performance of PACT reconstruction, enabling faster analysis and real-time applications. We also describe how to realize parallel computing on various hardware configurations, including multicore CPU, single-GPU, and multi-GPU platforms. Results: Notably, our framework can achieve an effective speedup of ∼871 times when reconstructing extremely large-scale three-dimensional PACT images on a dual-GPU platform compared with a 24-core workstation CPU. We share example codes via GitHub. Conclusions: Our approach allows for easy adoption and adaptation by the research community, fostering implementations of PACT for both life science and medicine.
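The core kernel such frameworks parallelize is delay-and-sum backprojection: each pixel sums the detector samples at its time of flight. Below is a small sketch in Taichi Lang (one of the backends named above) for a 2D ring geometry; the geometry, sampling rate, and random data are invented for illustration and are not taken from the paper's code.

```python
# A sketch of 2D delay-and-sum PACT backprojection; the per-pixel loop
# parallelizes on the GPU. All constants are illustrative assumptions.
import numpy as np
import taichi as ti

ti.init(arch=ti.gpu)  # falls back to CPU if no GPU is available

N_DET, N_T, NX = 128, 1024, 256
FS, C, R = 40e6, 1540.0, 0.02       # sampling rate (Hz), sound speed (m/s), ring radius (m)
PIX = 0.03 / NX                     # pixel size (m)

sino = ti.field(ti.f32, shape=(N_DET, N_T))    # detector signals
img = ti.field(ti.f32, shape=(NX, NX))         # reconstructed image
det = ti.Vector.field(2, ti.f32, shape=N_DET)  # detector positions

@ti.kernel
def das():
    for i, j in img:                # parallel over pixels
        x = (i - NX / 2) * PIX
        y = (j - NX / 2) * PIX
        acc = 0.0
        for d in range(N_DET):
            dist = ti.sqrt((x - det[d][0])**2 + (y - det[d][1])**2)
            t = ti.cast(dist / C * FS, ti.i32)  # time of flight -> sample index
            if t >= 0 and t < N_T:
                acc += sino[d, t]
        img[i, j] = acc / N_DET

angles = np.linspace(0, 2 * np.pi, N_DET, endpoint=False)
det.from_numpy(np.stack([R * np.cos(angles), R * np.sin(angles)], 1).astype(np.float32))
sino.from_numpy(np.random.rand(N_DET, N_T).astype(np.float32))
das()
print(img.to_numpy().mean())
```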


Subjects
Algorithms; Image Processing, Computer-Assisted; Phantoms, Imaging; Photoacoustic Techniques; Photoacoustic Techniques/methods; Photoacoustic Techniques/instrumentation; Image Processing, Computer-Assisted/methods; Animals; Computer Graphics; Tomography, X-Ray Computed/methods; Tomography, X-Ray Computed/instrumentation; Humans
5.
Front Neurosci ; 18: 1406821, 2024.
Article in English | MEDLINE | ID: mdl-38863882

ABSTRACT

Over the past decade, reversed gradient polarity (RGP) methods have become a popular approach for correcting susceptibility artifacts in echo-planar imaging (EPI). Although several post-processing tools for RGP are available, their implementations do not fully leverage recent hardware, algorithmic, and computational advances, leading to correction times of several minutes per image volume. To enable 3D RGP correction in seconds, we introduce PyTorch Hyperelastic Susceptibility Correction (PyHySCO), a user-friendly EPI distortion correction tool implemented in PyTorch that enables multi-threading and efficient use of graphics processing units (GPUs). PyHySCO uses a time-tested physical distortion model and mathematical formulation and is, therefore, reliable without training. An algorithmic improvement in PyHySCO is its use of the one-dimensional distortion correction method by Chang and Fitzpatrick to initialize the non-linear optimization. PyHySCO is published under the GNU public license and can be used from the command line or its Python interface. Our extensive numerical validation using 3T and 7T data from the Human Connectome Project suggests that PyHySCO can achieve accuracy comparable to that of leading RGP tools at a fraction of the cost. We also validate the new initialization scheme, compare different optimization algorithms, and test the algorithm on different hardware and arithmetic precisions.

6.
J Environ Manage ; 360: 121024, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38759551

ABSTRACT

Urban waterlogging is a significant global issue. To achieve precise control of urban waterlogging and enhance our understanding of its causes, a novel study method was introduced. This method is based on a dynamic bidirectional coupling model that combines 1D-2D hydrodynamic and water quality simulations. The waterlogging phenomenon in densely populated metropolitan areas of Changzhi city, China, was studied. This study focused on the processes involved in waterlogging formation, particularly overflow at nodes induced by the design of the pipe network's topological structure, constraints on the capacity of the underground drainage system, and surface runoff accumulation. The complex interplay among these elements and their possible influences on waterlogging formation was clarified. The results indicated notable spatial and temporal variation in the waterlogging formation process in densely populated urban areas. Node overflow in the drainage system emerged as the key influencing factor, accounting for up to 71% of the total water accumulation at the peak time. The peak lag time of waterlogging during events with short return periods was primarily determined by the rainfall peak moment. In contrast, the peak time of waterlogging during events with long return periods was influenced by the rainfall peak moment, drainage capacity, and the topological structure of the pipe network. Notably, inflow from both upstream and downstream segments of the pipe network significantly impacted the peak time of waterlogging, with upstream water potentially delaying the peak time substantially. This study not only provides new insights into urban waterlogging mechanisms but also offers practical guidance for optimizing urban drainage systems, urban planning, and disaster risk management.


Subjects
Models, Theoretical; China; Water Movements; Rain; Cities; Water Quality
7.
Int J Neural Syst ; 34(7): 2450038, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38755115

ABSTRACT

The parallel simulation of Spiking Neural P systems is mainly based on a matrix representation, where the graph inherent to the neural model is encoded in an adjacency matrix. The simulation algorithm is based on a matrix-vector multiplication, an operation efficiently implemented on parallel devices. However, when the graph of a Spiking Neural P system is not fully connected, the adjacency matrix is sparse, and large amounts of computing resources are wasted in both time and memory. For this reason, two compression methods for the matrix representation were proposed in a previous work, but they were neither implemented nor parallelized in a simulator. In this paper, they are implemented and parallelized on GPUs as part of a new simulator for Spiking Neural P systems with delays. Extensive experiments are conducted on high-end GPUs (RTX 2080 and A100 80 GB), and it is concluded that they outperform other solutions based on state-of-the-art GPU libraries when simulating Spiking Neural P systems.
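The payoff of compressing a sparse synapse matrix is easy to demonstrate: a compressed format does work proportional to the stored entries, while the dense product does work proportional to the full matrix. The sketch below uses SciPy's CSR format on the CPU purely as a stand-in for the paper's custom GPU representations; cupyx.scipy.sparse exposes the same interface on GPUs.

```python
# A sketch of why compressed matrix representations matter for sparse
# connection graphs: CSR matvec is O(nnz), dense matvec is O(n^2).
import numpy as np
import scipy.sparse as sp

n = 2_000                                           # number of neurons
rng = np.random.default_rng(0)
adj = sp.random(n, n, density=0.001, random_state=0, format="csr")
spikes = rng.integers(0, 2, size=n).astype(float)   # spiking vector

out_sparse = adj @ spikes                 # touches only stored entries
out_dense = adj.toarray() @ spikes        # wastes work on zeros, same result
assert np.allclose(out_sparse, out_dense)
print(f"stored entries: {adj.nnz} of {n * n}")
```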


Subjects
Action Potentials; Algorithms; Computer Graphics; Models, Neurological; Action Potentials/physiology; Neurons/physiology; Neural Networks, Computer; Computer Simulation; Humans
8.
BMC Bioinformatics ; 25(1): 186, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38730374

ABSTRACT

BACKGROUND: Commonly used next-generation sequencing machines typically produce large amounts of short reads of a few hundred base pairs in length. However, many downstream applications would generally benefit from longer reads. RESULTS: We present CAREx, an algorithm for the generation of pseudo-long reads from paired-end short-read Illumina data based on the concept of repeatedly computing multiple sequence alignments to extend a read until its partner is found. Our performance evaluation on both simulated and real data shows that CAREx is able to connect significantly more read pairs (up to 99% for simulated data) and to produce more error-free pseudo-long reads than previous approaches. When used prior to assembly, it can achieve superior de novo assembly results. Furthermore, the GPU-accelerated version of CAREx exhibits the fastest execution times among all tested tools. CONCLUSION: CAREx is a new MSA-based algorithm and software for producing pseudo-long reads from paired-end short-read data. It outperforms other state-of-the-art programs in terms of (i) percentage of connected read pairs, (ii) reduction of error rates of filled gaps, (iii) runtime, and (iv) downstream analysis using de novo assembly. CAREx is open-source software written in C++ (CPU version) and CUDA/C++ (GPU version). It is licensed under GPLv3 and can be downloaded at https://github.com/fkallen/CAREx .
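The extend-until-the-partner-is-found loop can be illustrated with a toy consensus extender: gather the reads that overlap the tail of the growing sequence and append the majority next base. Everything below (anchor length, read length, the little genome) is invented, and the real tool's MSA construction and error handling are far more elaborate.

```python
# A toy sketch of consensus-driven read extension toward the mate read.
from collections import Counter

def extend_until_mate(seed, mate, pool, k=10, max_len=200):
    contig = seed
    while mate not in contig and len(contig) < max_len:
        anchor = contig[-k:]
        votes = Counter()
        for read in pool:
            pos = read.find(anchor)          # read overlaps the contig tail
            if pos != -1 and pos + k < len(read):
                votes[read[pos + k]] += 1    # base just past the overlap
        if not votes:
            return None                      # gap could not be bridged
        contig += votes.most_common(1)[0][0] # consensus extension
    return contig if mate in contig else None

genome = "ACGTTAGCCGATCGATTACGGCATGCAATTGGCCAATCGTACGTAGCATCG"
reads = [genome[i:i + 18] for i in range(len(genome) - 17)]
print(extend_until_mate(seed=genome[:14], mate=genome[-12:], pool=reads))
```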


Subjects
Algorithms; High-Throughput Nucleotide Sequencing; Software; High-Throughput Nucleotide Sequencing/methods; Sequence Analysis, DNA/methods; Humans; Sequence Alignment/methods
9.
J Comput Chem ; 2024 May 25.
Article in English | MEDLINE | ID: mdl-38795375

ABSTRACT

The fragment molecular orbital (FMO) scheme is one of the popular fragmentation-based methods and has the potential advantage of making the circuit shallow for quantum chemical calculations on quantum computers. In this study, we used a GPU-accelerated quantum simulator (cuQuantum) to perform the electron correlation part of the FMO calculation as unitary coupled-cluster singles and doubles (UCCSD) with the variational quantum eigensolver (VQE) for hydrogen-bonded (FH)₃ and (FH)₂-H₂O systems with the STO-3G basis set. VQE-UCCSD calculations were performed using both canonical and localized MO sets, and the results were examined from the point of view of size-consistency and orbital-invariance affected by the Trotter error. It was found that the use of localized MOs leads to better results, especially for (FH)₂-H₂O. The GPU acceleration was substantial for the simulations with larger numbers of qubits, amounting to a factor of about 6.7-7.7 for 18-qubit systems.
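In the two-body FMO scheme, the total energy is assembled from monomer energies plus pair corrections, E = sum_I E_I + sum_{I<J} (E_IJ - E_I - E_J). A tiny sketch of that bookkeeping follows, with placeholder energies standing in for the VQE-UCCSD fragment calculations described above.

```python
# A sketch of FMO2 energy assembly; the numbers are invented placeholders
# for per-fragment energies obtained from VQE-UCCSD simulations.
from itertools import combinations

def fmo2_energy(monomers, dimers):
    """monomers: {I: E_I}; dimers: {(I, J): E_IJ} with I < J."""
    e = sum(monomers.values())
    for (i, j), e_ij in dimers.items():
        e += e_ij - monomers[i] - monomers[j]   # pair interaction correction
    return e

# toy example: three fragments, e.g. the three FH units of (FH)3
monomers = {0: -99.98, 1: -99.97, 2: -99.98}
dimers = {(i, j): monomers[i] + monomers[j] - 0.01
          for i, j in combinations(monomers, 2)}  # each pair bound by 0.01 Ha
print(f"E(FMO2) = {fmo2_energy(monomers, dimers):.2f} Ha")
```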

10.
J Synchrotron Radiat ; 31(Pt 4): 851-866, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38771775

ABSTRACT

Despite the increased brilliance of the new generation of synchrotron sources, high-resolution scanning of very thick and absorbing samples, such as a whole mouse brain stained with heavy elements and, extending further, the brains of primates, remains a challenge. Samples are typically cut into smaller parts to ensure sufficient X-ray transmission and scanned separately. Compared with the standard tomography setup, where the sample would be cut into many pillars, the laminographic geometry operates with slab-shaped sections, significantly reducing the number of sample parts to be prepared, the cutting damage, and data stitching problems. In this work, a laminography pipeline for imaging large samples (>1 cm) at micrometre resolution is presented. The implementation includes a low-cost instrument setup installed at the 2-BM micro-CT beamline of the Advanced Photon Source. Additionally, sample mounting, scanning techniques, data stitching procedures, a fast reconstruction algorithm with low computational complexity, and accelerated reconstruction on multi-GPU systems for processing large-scale datasets are presented. The applicability of the whole laminography pipeline was demonstrated by imaging four sequential slabs throughout an entire mouse brain sample stained with osmium, in total generating approximately 12 TB of raw data for reconstruction.

11.
Phys Med ; 121: 103346, 2024 May.
Article in English | MEDLINE | ID: mdl-38608421

ABSTRACT

Partial breast irradiation for the treatment of early-stage breast cancer patients can be performed by means of Intra-Operative electron Radiation Therapy (IOeRT). One of the main limitations of this technique is the absence of a treatment planning system (TPS) that could greatly help in ensuring proper coverage of the target volume during irradiation. An IOeRT TPS has been developed using a fast Monte Carlo (MC) and an ultrasound imaging system to provide the best irradiation strategy (electron beam energy, applicator position and bevel angle) and to facilitate the optimisation of dose prescription and delivery to the target volume while maximising organ-at-risk sparing. The study was performed in silico, exploiting MC simulations of a breast cancer treatment. Ultrasound-based input was used to compute the absorbed dose maps for different irradiation strategies, and a quantitative comparison between the different options was carried out using Dose Volume Histograms. The system was capable of exploring different beam energies and applicator positions in a few minutes, identifying the best strategy with an overall computation time that was found to be fully compatible with clinical implementation. The systematic uncertainty related to tissue deformation during treatment delivery with respect to imaging acquisition was taken into account. The potential and feasibility of a GPU-based full MC TPS implementation for IOeRT breast cancer treatments has been demonstrated in silico. This long-awaited tool will greatly improve treatment safety and efficacy, overcoming the limits identified within the clinical trials carried out so far.
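The Dose Volume Histograms used to compare strategies are straightforward to compute from a dose grid and a structure mask: for each dose level, record the fraction of the structure's volume receiving at least that dose. In the sketch below, the dose grid and target mask are random placeholders for the Monte Carlo output and the ultrasound-derived contours.

```python
# A minimal sketch of a cumulative Dose Volume Histogram and a D95 readout.
import numpy as np

def cumulative_dvh(dose, mask, n_bins=100):
    d = dose[mask]                            # doses inside the structure
    edges = np.linspace(0, dose.max(), n_bins)
    volume_fraction = np.array([(d >= e).mean() for e in edges])
    return edges, volume_fraction

rng = np.random.default_rng(0)
dose = rng.gamma(2.0, 5.0, size=(50, 50, 50))   # placeholder dose map (Gy)
target = np.zeros_like(dose, dtype=bool)
target[20:30, 20:30, 20:30] = True              # placeholder target volume

edges, vf = cumulative_dvh(dose, target)
d95 = edges[np.searchsorted(-vf, -0.95)]        # dose covering 95% of the volume
print(f"D95 = {d95:.1f} Gy")
```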


Subjects
Breast Neoplasms; Monte Carlo Method; Radiotherapy Planning, Computer-Assisted; Breast Neoplasms/radiotherapy; Breast Neoplasms/diagnostic imaging; Humans; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy Dosage; Electrons/therapeutic use; Time Factors; Computer Graphics; Female; Organs at Risk/radiation effects
12.
Sci Rep ; 14(1): 6983, 2024 Mar 24.
Article in English | MEDLINE | ID: mdl-38523195

ABSTRACT

This study assesses the effect of stone content on the stability of soil-rock mixture slopes and the dynamics of ensuing large displacement landslides using a material point strength reduction method. This method evaluates structural stability by incrementally decreasing material strength parameters. The author created four distinct soil-rock mixture slope models with varying stone contents yet consistent stone size distributions through digital image processing. The initial conditions were established by linearly ramping up the gravity in fixed proportionate steps until the full value was attained. Stability was monitored until a sudden shift in displacement marked the onset of instability. Upon destabilization, the author employed the material point method to reconstruct the landslide dynamics. Due to the substantial computational requirements, the author developed a high-performance GPU-based framework for the material point method, prioritizing the parallelization of the MPM algorithm and the optimization of data structures and memory allocation to exploit GPU parallel processing capabilities. Our results demonstrate a clear positive correlation between stone content and slope stability; increasing stone content from 10 to 20% improved the safety factor from 1.9 to 2.4, and further increments to 30% and 40% ensured comprehensive stability. This study not only sheds light on slope stability and the mechanics of landslides but also underscores the effectiveness of GPU-accelerated methods in handling complex geotechnical simulations.
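The strength reduction recipe in this paragraph can be stated in a few lines: divide the cohesion c and tan(φ) by a trial factor F and increase F until the slope fails; the last stable F is the safety factor. The sketch below substitutes a closed-form infinite-slope stability check for the paper's material point simulation, and every parameter value is an invented placeholder.

```python
# A sketch of the strength reduction method with an infinite-slope check
# standing in for the full MPM failure simulation.
import numpy as np

def infinite_slope_stable(c, phi_deg, gamma=18.0, depth=5.0, beta_deg=35.0):
    """True if an infinite slope is stable for the given strength."""
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    resisting = c + gamma * depth * np.cos(beta)**2 * np.tan(phi)
    driving = gamma * depth * np.sin(beta) * np.cos(beta)
    return resisting >= driving

def factor_of_safety(c, phi_deg, f_step=0.01):
    f = 1.0
    while infinite_slope_stable(
        c / f, np.degrees(np.arctan(np.tan(np.radians(phi_deg)) / f))
    ):
        f += f_step        # keep weakening the material until failure
    return f - f_step      # last factor that was still stable

print(f"FS = {factor_of_safety(c=25.0, phi_deg=30.0):.2f}")
```

In the paper's setting, the stability check is an MPM run per trial factor, which is what makes GPU parallelization worthwhile.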

13.
J Synchrotron Radiat ; 31(Pt 3): 517-526, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38517755

ABSTRACT

Physical optics simulations for beamlines and experiments allow users to test experiment feasibility and optimize beamline settings ahead of beam time, making the most of valuable beam time at synchrotron light sources like NSLS-II. Further, such simulations help to develop and test experimental data processing methods and software in advance. The Synchrotron Radiation Workshop (SRW) software package supports such complex simulations. We demonstrate how recent developments in SRW significantly improve the efficiency of physical optics simulations, such as end-to-end simulations of time-dependent X-ray photon correlation spectroscopy experiments with partially coherent undulator radiation (UR). The molecular dynamics simulation code LAMMPS was chosen to model the sample: a solution of silica nanoparticles in water at room temperature. Real-space distributions of nanoparticles produced by LAMMPS were imported into SRW and used to simulate scattering patterns of partially coherent hard X-ray UR from such a sample at the detector. The partially coherent UR illuminating the sample can be represented by a set of orthogonal coherent modes obtained by simulation of emission and propagation of this radiation through the coherent hard X-ray (CHX) scattering beamline, followed by a coherent-mode decomposition. GPU acceleration is added for several key functions of SRW used in propagation from sample to detector, further improving the speed of the calculations. The accuracy of this simulation is benchmarked by comparison with experimental data.
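The coherent-mode decomposition mentioned above reduces a partially coherent calculation to a sum of fully coherent ones: propagate each mode through the sample and add the resulting detector intensities, weighted by the mode occupations. Below is a toy far-field version with synthetic modes and a random phase sample, all invented, with an FFT standing in for SRW's wavefront propagation.

```python
# A sketch of partially coherent scattering as an incoherent sum over
# coherent modes; modes, weights, and sample are synthetic placeholders.
import numpy as np

n, n_modes = 256, 5
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)

# synthetic modes: Gaussians modulated by increasing polynomial factors
modes = [np.exp(-(X**2 + Y**2) / 0.1) * (X + 1j * Y) ** m for m in range(n_modes)]
weights = 0.5 ** np.arange(n_modes)            # decaying mode occupations

sample = np.exp(1j * 0.2 * rng.random((n, n)))  # weak random phase object

intensity = np.zeros((n, n))
for w, mode in zip(weights, modes):
    field = mode * sample                          # transmission through sample
    far = np.fft.fftshift(np.fft.fft2(field))      # far-field propagation
    intensity += w * np.abs(far) ** 2              # incoherent mode sum

print(intensity.max())
```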

14.
Magn Reson Med ; 92(2): 447-458, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38469890

ABSTRACT

PURPOSE: To introduce a tool (TensorFit) for ultrafast and robust metabolite fitting of MRSI data based on Torch's auto-differentiation and optimization framework. METHODS: TensorFit was implemented in Python based on Torch's auto-differentiation to fit individual metabolites in MRS spectra. The underlying time-domain and/or frequency-domain fitting model is based on a linear combination of metabolite spectroscopic responses. The computational time efficiency and accuracy of TensorFit were tested on simulated and in vivo MRS data and compared against TDFDFit and QUEST. RESULTS: TensorFit demonstrates a significant improvement in computation speed, achieving a 165-fold acceleration compared with TDFDFit and 115-fold against QUEST. TensorFit showed smaller percentage errors on simulated data compared with TDFDFit and QUEST. When tested on in vivo data, it performed similarly to TDFDFit, with a 2% better fit in terms of mean squared error while obtaining a 169-fold speedup. CONCLUSION: TensorFit enables fast and robust metabolite fitting in large MRSI datasets compared with conventional metabolite fitting methods. This tool could boost the clinical applicability of large 3D MRSI by enabling the fitting of large MRSI datasets within computation times acceptable in a clinical environment.
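The fitting model named here, a linear combination of metabolite responses optimized through Torch's auto-differentiation, can be sketched in a few lines. The Lorentzian basis, amplitudes, and optimizer settings below are invented placeholders, not TensorFit's actual implementation.

```python
# A sketch of gradient-based spectral fitting with Torch autograd:
# fit amplitudes of metabolite basis spectra by least squares.
import torch

n_points, n_met = 512, 4
freq = torch.linspace(-1, 1, n_points)
# synthetic Lorentzian basis spectra at fixed metabolite frequencies
centers = torch.tensor([-0.5, -0.1, 0.2, 0.6])
basis = 1.0 / (1.0 + ((freq[None, :] - centers[:, None]) / 0.02) ** 2)

true_amp = torch.tensor([1.0, 2.5, 0.7, 1.8])
data = true_amp @ basis + 0.01 * torch.randn(n_points)

amp = torch.ones(n_met, requires_grad=True)     # parameters to fit
opt = torch.optim.Adam([amp], lr=0.05)
for _ in range(500):
    opt.zero_grad()
    loss = ((amp @ basis - data) ** 2).mean()   # least-squares objective
    loss.backward()                             # auto-differentiation
    opt.step()

print(amp.detach())  # should approach true_amp
```

Because every operation above is a tensor op, the same loop runs unchanged on a GPU by moving the tensors to a CUDA device, which is what makes fitting whole MRSI grids fast.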


Subjects
Algorithms; Magnetic Resonance Spectroscopy; Humans; Magnetic Resonance Spectroscopy/methods; Computer Simulation; Software; Brain/metabolism; Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Reproducibility of Results; Image Processing, Computer-Assisted/methods
15.
Sensors (Basel) ; 24(6)2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38544210

ABSTRACT

Graphics processing units (GPUs) facilitate massive parallelism and high-capacity storage and are thus suitable for the iterative reconstruction of ultrahigh-resolution micro computed tomography (CT) scans by on-the-fly system matrix (OTFSM) calculation using ordered-subsets expectation maximization (OSEM). We propose a finite state automaton (FSA) method that facilitates iterative reconstruction on a heterogeneous multi-GPU platform by parallelizing the matrix calculations derived from a ray-tracing system of ordered subsets. The FSAs perform flow control for the parallel threading of the heterogeneous GPUs, which minimizes the latency of launching ordered-subsets tasks, reduces data transfer between the main system memory and local GPU memory, and overcomes the memory bound of a single GPU. In the experiments, we compared the operating efficiency of OS-MLTR in three reconstruction environments. The heterogeneous multi-GPU setup with job queues for high-throughput calculation is up to five times faster than the single-GPU environment, and that speedup is nine times faster than the heterogeneous multi-GPU setup with FIFO queues for device scheduling control. Finally, we propose an event-triggered FSA method for iterative reconstruction using multiple heterogeneous GPUs that solves the memory-bound issue of a single GPU at ultrahigh resolutions, and the routines of the proposed method were successfully executed on each GPU simultaneously.
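The advantage of pull-based job queues over fixed FIFO assignment on heterogeneous devices is a scheduling idea that can be sketched independently of the reconstruction itself: each device thread takes the next subset as soon as it goes idle, so faster devices naturally process more subsets. In the sketch below, the sleep is a placeholder for one ordered-subset update and the device names are invented.

```python
# A sketch of pull-based task scheduling across heterogeneous devices.
import queue
import threading
import time

def worker(device, tasks, results):
    while True:
        try:
            subset = tasks.get_nowait()
        except queue.Empty:
            return                      # no work left for this device
        time.sleep(0.01)                # placeholder for one subset update
        results.append((device, subset))
        tasks.task_done()

tasks = queue.Queue()
for s in range(32):                     # 32 ordered subsets
    tasks.put(s)

results = []
threads = [threading.Thread(target=worker, args=(f"gpu:{d}", tasks, results))
           for d in range(3)]           # three heterogeneous devices
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{len(results)} subsets processed, e.g. {results[:3]}")
```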

16.
Sensors (Basel) ; 24(5)2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38474935

ABSTRACT

Hyperspectral imaging (HSI) has become a very compelling technique in different scientific areas; indeed, many researchers use it in the fields of remote sensing, agriculture, forensics, and medicine. In the latter, HSI plays a crucial role as a diagnostic support and for surgery guidance. However, the computational effort in elaborating hyperspectral data is not trivial, and the demand for detecting diseases in a short time is undeniable. In this paper, we take up this challenge by parallelizing three of the most intensively used machine-learning methods, the Support Vector Machine (SVM), Random Forest (RF), and eXtreme Gradient Boosting (XGB) algorithms, using the Compute Unified Device Architecture (CUDA) to accelerate the classification of hyperspectral skin cancer images. All three have shown good performance in HS image classification, particularly when the size of the dataset is limited, as demonstrated in the literature. We illustrate the parallelization techniques adopted for each approach, highlighting the suitability of Graphics Processing Units (GPUs) to this aim. Experimental results show that the parallel SVM and XGB algorithms significantly improve classification times in comparison with their serial counterparts.
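For XGB in particular, moving training to the GPU does not require hand-written CUDA: recent XGBoost releases select the backend through a single parameter, so CPU and GPU runs of the same script can be timed side by side. The data below are synthetic stand-ins for flattened hyperspectral pixels (bands as features); the paper's own pipelines and datasets differ.

```python
# A sketch of CPU-vs-GPU training with XGBoost's device parameter
# (available in recent XGBoost versions; "cuda" requires a GPU build).
import time
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.random((20_000, 100)).astype(np.float32)  # pixels x spectral bands
y = (X[:, :10].sum(axis=1) > 5).astype(int)       # synthetic labels

for device in ("cpu", "cuda"):
    clf = XGBClassifier(n_estimators=200, tree_method="hist", device=device)
    t0 = time.perf_counter()
    clf.fit(X, y)
    print(f"{device}: {time.perf_counter() - t0:.2f} s")
```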


Subjects
Algorithms; Skin Neoplasms; Humans; Machine Learning; Hyperspectral Imaging; Acceleration; Support Vector Machine
17.
Sensors (Basel) ; 24(5)2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38475138

ABSTRACT

GPU (graphics processing unit)-based parallel processing is the approach of using many processors to overcome the computational complexity of the different medical imaging methods that make up an overall job. It is extremely important for several medical imaging techniques, such as image classification, object detection, image segmentation, registration, and content-based image retrieval, since it allows software to complete multiple computations at once and thus compute time-efficiently. Magnetic resonance imaging (MRI), in turn, is a non-invasive imaging technology that can depict the shape of an anatomy and the biological processes of the human body. Implementing GPU-based parallel processing approaches in brain MRI analysis with the medical imaging techniques mentioned above might help achieve immediate and timely image capture. Therefore, this extended review (an extension of the IWBBIO2023 conference paper) offers a thorough overview of the literature, with an emphasis on the expanding use of GPU-based parallel processing methods for the medical analysis of brain MRIs with the imaging techniques mentioned above, given the need for quicker computation to acquire early and real-time feedback in medicine. We examined articles published between 2019 and 2023 in a literature matrix that includes the tasks, techniques, MRI sequences, and processing results. The methods discussed in this review demonstrate the advances achieved so far in minimizing computing runtime, as well as the obstacles and problems still to be solved in the future.


Subjects
Algorithms; Computer Graphics; Humans; Software; Brain; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods
18.
Front Neuroinform ; 18: 1331220, 2024.
Article in English | MEDLINE | ID: mdl-38444756

ABSTRACT

Spiking neural network simulations are a central tool in Computational Neuroscience, Artificial Intelligence, and Neuromorphic Engineering research. A broad range of simulators and software frameworks for such simulations exist with different target application areas. Among these, PymoNNto is a recent Python-based toolbox for spiking neural network simulations that emphasizes the embedding of custom code in a modular and flexible way. While PymoNNto already supports GPU implementations, its backend relies on NumPy operations. Here we introduce PymoNNtorch, which is natively implemented with PyTorch while retaining PymoNNto's modular design. Furthermore, we demonstrate how changes to the implementations of common network operations in combination with PymoNNtorch's native GPU support can offer speed-up over conventional simulators like NEST, ANNarchy, and Brian 2 in certain situations. Overall, we show how PymoNNto's modular and flexible design in combination with PymoNNtorch's GPU acceleration and optimized indexing operations facilitate research and development of spiking neural networks in the Python programming language.
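The kind of network-operation change that yields such speedups is tensorization: expressing the whole population update as a handful of array operations that PyTorch can dispatch to a GPU. Below is a generic leaky integrate-and-fire sketch in that style; it is not PymoNNtorch code, and all sizes and constants are invented.

```python
# A sketch of a tensorized leaky integrate-and-fire update; the same code
# runs on CPU or GPU by changing the device string.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n, steps = 2_000, 100
w = (torch.rand(n, n, device=device) < 0.01).float() * 0.3  # sparse-ish weights
v = torch.zeros(n, device=device)                           # membrane potentials
decay, v_th = 0.95, 1.0

total_spikes = 0
for _ in range(steps):
    spikes = (v >= v_th).float()       # threshold crossing
    v = v * (1 - spikes)               # reset fired neurons
    i_in = w @ spikes + 0.05 * torch.rand(n, device=device)  # recurrent + noise
    v = decay * v + i_in               # leaky integration
    total_spikes += spikes.sum().item()

print(f"mean rate: {total_spikes / (n * steps):.4f} spikes/step")
```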

19.
Sensors (Basel) ; 24(4)2024 Feb 18.
Article in English | MEDLINE | ID: mdl-38400470

ABSTRACT

Cardiac CINE, a form of dynamic cardiac MRI, is indispensable in the diagnosis and treatment of heart conditions, offering detailed visualization essential for the early detection of cardiac diseases. As the demand for higher-resolution images increases, so does the volume of data requiring processing, presenting significant computational challenges that can impede the efficiency of diagnostic imaging. Our research presents an approach that takes advantage of the computational power of multiple Graphics Processing Units (GPUs) to address these challenges. GPUs are devices capable of performing large volumes of computations in a short period, and have significantly improved the cardiac MRI reconstruction process, allowing images to be produced faster. The innovation of our work resides in utilizing a multi-device system capable of processing the substantial data volumes demanded by high-resolution, five-dimensional cardiac MRI. This system surpasses the memory capacity limitations of single GPUs by partitioning large datasets into smaller, manageable segments for parallel processing, thereby preserving image integrity and accelerating reconstruction times. Utilizing OpenCL technology, our system offers adaptability and cross-platform functionality, ensuring wider applicability. The proposed multi-device approach offers an advancement in medical imaging, accelerating the reconstruction process and facilitating faster and more effective cardiac health assessment.


Subjects
Algorithms; Magnetic Resonance Imaging; Heart/diagnostic imaging; Image Enhancement/methods; Imaging, Three-Dimensional/methods
20.
BMC Bioinformatics ; 25(1): 71, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38355413

ABSTRACT

BACKGROUND: Gene expression may be regulated by the DNA methylation of regulatory elements in cis, distal, and trans regions. One method to evaluate the relationship between DNA methylation and gene expression is the mapping of expression quantitative trait methylation (eQTM) loci (also called expression-associated CpG loci, eCpG). However, no open-source tools are available to provide eQTM mapping. In addition, eQTM mapping can involve a large number of comparisons, which may make the analysis infeasible due to limitations of computational resources. Here, we describe Torch-eCpG, an open-source tool to perform eQTM mapping that includes an optimized implementation that can use the graphical processing unit (GPU) to reduce runtime. RESULTS: We demonstrate that analyses using the tool are reproducible, up to 18× faster using the GPU, and scale linearly with increasing methylation loci. CONCLUSIONS: Torch-eCpG is a fast, reliable, and scalable tool to perform eQTM mapping. Source code for Torch-eCpG is available at https://github.com/kordk/torch-ecpg .
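The core computation in eQTM mapping maps naturally onto GPU hardware: Pearson correlations between every methylation locus and every gene's expression reduce to one standardized matrix product. The sketch below illustrates that reduction with random placeholder data and a crude threshold screen; it is not Torch-eCpG's implementation.

```python
# A sketch of all-pairs methylation-expression correlation as a single GEMM.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n_samples, n_cpg, n_genes = 500, 10_000, 2_000

meth = torch.rand(n_samples, n_cpg, device=device)     # methylation beta values
expr = torch.randn(n_samples, n_genes, device=device)  # expression levels

def standardize(x):
    return (x - x.mean(0)) / x.std(0)

# Pearson correlation matrix, shape (n_cpg, n_genes), in one matrix product
r = standardize(meth).T @ standardize(expr) / (n_samples - 1)

pairs = (r.abs() > 0.2).sum().item()  # crude significance screen
print(f"candidate eQTM pairs: {pairs}")
```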


Subjects
DNA Methylation; Quantitative Trait Loci; Phenotype; Regulatory Sequences, Nucleic Acid; Software