Results 1 - 20 of 45
1.
Water Res ; 266: 122318, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39236501

ABSTRACT

As the size of water distribution network (WDN) models continues to grow, developing and applying real-time models or digital twins to simulate hydraulic behaviors in large-scale WDNs is becoming increasingly challenging. The long response time incurred when performing multiple hydraulic simulations in large-scale WDNs can no longer meet the current requirements for the efficient, real-time application of WDN models. To address this issue, there is rising interest in accelerating hydraulic calculations in WDN models by integrating new model structures with abundant computational resources and mature parallel computing frameworks. This paper presents a novel and efficient framework for steady-state hydraulic calculations, comprising a joint topology-calculation decomposition method that decomposes the hydraulic calculation process and a high-performance decomposed gradient algorithm that integrates with parallel computation. Tests in four WDNs of different sizes, with 8 to 85,118 nodes, demonstrate that the framework maintains calculation accuracy consistent with EPANET and can reduce calculation time by up to 51.93% compared to EPANET in the largest WDN model. Further investigation found that the factors affecting the acceleration include the decomposition level and the consistency of sub-model sizes and structures. The framework aims to help develop rapid-responding models for large-scale WDNs and improve their efficiency in integrating multiple application algorithms, thereby supporting the water supply industry in achieving more adaptive and intelligent management of large-scale WDNs.
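
A minimal sketch of the decompose-then-solve-in-parallel pattern this abstract describes, assuming the decomposition yields independent sub-systems; the coupling between sub-models that the paper's joint topology-calculation method handles is deliberately omitted, and all matrices are synthetic:

```python
# Illustrative sketch (not the paper's algorithm): solve the per-iteration
# linear systems of a decomposed hydraulic model in parallel, assuming the
# decomposition produced independent (block-diagonal) sub-systems.
from multiprocessing import Pool

import numpy as np

def solve_submodel(block):
    """Solve one sub-model's linear system A x = b."""
    A, b = block
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    blocks = []
    for n in (50, 80, 120, 200):  # four hypothetical sub-models
        A = rng.random((n, n)) + n * np.eye(n)  # diagonally dominant, solvable
        b = rng.random(n)
        blocks.append((A, b))

    with Pool(processes=4) as pool:
        solutions = pool.map(solve_submodel, blocks)  # one worker per sub-model
    print([s.shape for s in solutions])
```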

2.
Radiol Phys Technol ; 17(2): 402-411, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38546970

ABSTRACT

The projection data generated via the forward projection of a computed tomography (CT) image (FP-data) have useful potential in cases where only image data are available. However, it is an open question whether the FP-data generated from an image severely corrupted by metal artifacts can be used for metal artifact reduction (MAR). The aim of this study was to investigate the feasibility of a MAR technique using FP-data by comparing its performance with that of a conventional robust MAR using projection data normalization (NMARconv). The NMARconv was modified to make use of FP-data (FPNMAR). A graphics processing unit was used to reduce the time required to generate FP-data and run the subsequent processes. The performances of FPNMAR and NMARconv were quantitatively compared using a normalized artifact index (AIn) for two cases each of hip prosthesis and dental fillings. Several clinical CT images with metal artifacts were processed by FPNMAR. The AIn values of FPNMAR and NMARconv were not significantly different from each other, showing almost the same performance between the two techniques. For all the clinical cases tested, FPNMAR significantly reduced the metal artifacts; thereby, the images of the soft tissues and bones obscured by the artifacts were notably recovered. The computation time per image was approximately 56 ms. FPNMAR, which can be applied to CT images without accessing the projection data, exhibited almost the same performance as NMARconv while requiring significantly less processing time. This capability testifies to the potential of FPNMAR for wider use in clinical settings.
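
To illustrate the FP-data concept, the sketch below forward-projects a synthetic CT slice into a sinogram and interpolates across the metal trace. It uses plain linear interpolation rather than the normalized interpolation of NMARconv/FPNMAR, so it is conceptual only; the phantom and thresholds are invented:

```python
# Minimal sketch of the FP-data idea: forward-project a reconstructed CT
# image to obtain synthetic projection data, then repair the metal trace.
# Conceptual illustration, not the FPNMAR implementation.
import numpy as np
from skimage.transform import radon, iradon

image = np.zeros((256, 256))          # stand-in for a reconstructed CT slice
image[100:150, 100:150] = 1.0         # soft tissue
image[120:130, 120:130] = 10.0        # metal insert

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
fp_data = radon(image, theta=theta)   # FP-data: sinogram from the image domain

# Metal trace in projection space, found by forward-projecting a metal mask.
metal_mask = image > 5.0
metal_trace = radon(metal_mask.astype(float), theta=theta) > 0

# Crude interpolation across the metal trace (NMAR instead normalizes the
# sinogram against a prior image before interpolating).
corrected = fp_data.copy()
for j in range(corrected.shape[1]):
    col = corrected[:, j]
    bad = metal_trace[:, j]
    if bad.any() and not bad.all():
        col[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(~bad), col[~bad])

recon = iradon(corrected, theta=theta)  # artifact-reduced image
```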


Subject(s)
Artifacts , Metals , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Humans , Image Processing, Computer-Assisted/methods , Hip Prosthesis , Phantoms, Imaging
3.
Magn Reson Imaging ; 109: 271-285, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38537891

ABSTRACT

Functional magnetic resonance imaging (fMRI) plays a crucial role in neuroimaging, enabling the exploration of brain activity through complex-valued signals. These signals, composed of magnitude and phase, offer a rich source of information for understanding brain functions. Traditional fMRI analyses have largely focused on magnitude information, often overlooking the potential insights offered by phase data. In this paper, we propose a novel fully Bayesian model designed for analyzing single-subject complex-valued fMRI (cv-fMRI) data. Our model, which we refer to as the CV-M&P model, is distinctive in its comprehensive utilization of both magnitude and phase information in fMRI signals, allowing for independent prediction of different types of activation maps. We incorporate Gaussian Markov random fields (GMRFs) to capture spatial correlations within the data, and employ image partitioning and parallel computation to enhance computational efficiency. Our model is rigorously tested through simulation studies, and then applied to a real dataset from a unilateral finger-tapping experiment. The results demonstrate the model's effectiveness in accurately identifying brain regions activated in response to specific tasks, distinguishing between magnitude and phase activation.
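
The data preparation side of a magnitude-and-phase analysis is simple to sketch: separate a complex-valued volume into magnitude and phase, then partition the image into blocks for parallel per-block inference. The GMRF-based Bayesian model itself is not reproduced, and the array sizes are illustrative:

```python
# Sketch of the data side of the CV-M&P setup: split complex-valued fMRI
# into magnitude and phase, and partition a slice into blocks that could be
# processed in parallel. The Bayesian GMRF inference is omitted.
import numpy as np

rng = np.random.default_rng(1)
cv_fmri = rng.normal(size=(64, 64, 100)) + 1j * rng.normal(size=(64, 64, 100))

magnitude = np.abs(cv_fmri)   # traditional analyses usually stop here
phase = np.angle(cv_fmri)     # phase carries complementary information

# Partition the 64x64 slice into 8x8 blocks for parallel per-block inference.
blocks = [
    (i, j, magnitude[i:i + 8, j:j + 8], phase[i:i + 8, j:j + 8])
    for i in range(0, 64, 8)
    for j in range(0, 64, 8)
]
print(len(blocks), "blocks ready for parallel processing")
```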


Subject(s)
Brain , Magnetic Resonance Imaging , Bayes Theorem , Magnetic Resonance Imaging/methods , Brain/diagnostic imaging , Brain/physiology , Brain Mapping/methods , Computer Simulation
4.
Distrib Comput ; 37(1): 35-64, 2024.
Article in English | MEDLINE | ID: mdl-38370529

ABSTRACT

In this paper, we study the power and limitations of component-stable algorithms in the low-space model of massively parallel computation (MPC). Recently, Ghaffari, Kuhn and Uitto (FOCS 2019) introduced the class of component-stable low-space MPC algorithms, which are, informally, those algorithms for which the outputs reported by the nodes in different connected components are required to be independent. This very natural notion was introduced to capture most (if not all) of the efficient MPC algorithms known to date, and it was the first general class of MPC algorithms for which one can show non-trivial conditional lower bounds. In this paper we enhance the framework of component-stable algorithms and investigate its effect on the complexity of randomized and deterministic low-space MPC. Our key contributions include: 1. We revise and formalize the lifting approach of Ghaffari, Kuhn and Uitto. This requires a very delicate amendment of the notion of component stability, which allows us to fill in gaps in the earlier arguments. 2. We extend the framework to obtain conditional lower bounds for deterministic algorithms and fine-grained lower bounds that depend on the maximum degree Δ. 3. We demonstrate a collection of natural graph problems for which deterministic component-unstable algorithms break the conditional lower bound obtained for component-stable algorithms. This implies that, in the context of deterministic algorithms, component-stable algorithms are conditionally weaker than component-unstable ones. 4. We also show that the restriction to component-stable algorithms has an impact in the randomized setting. We present a natural problem which can be solved in O(1) rounds by a component-unstable MPC algorithm, but requires Ω(log log* n) rounds for any component-stable algorithm, conditioned on the connectivity conjecture. Altogether, our results imply that component-stability might limit the computational power of the low-space MPC model, at least in certain contexts, paving the way for improved upper bounds that escape the conditional lower-bound setting of Ghaffari, Kuhn, and Uitto.

5.
Neurophotonics ; 10(4): 045007, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38076725

ABSTRACT

Significance: Frequent assessment of cerebral blood flow (CBF) is crucial for the diagnosis and management of cerebral vascular diseases. In contrast to large and expensive imaging modalities, such as nuclear medicine and magnetic resonance imaging, optical imaging techniques are portable and inexpensive tools for continuous measurements of cerebral hemodynamics. The recent development of an innovative noncontact speckle contrast diffuse correlation tomography (scDCT) enables three-dimensional (3D) imaging of CBF distributions. However, scDCT requires complex and time-consuming 3D reconstruction, which limits its ability to achieve high spatial resolution without sacrificing temporal resolution and computational efficiency. Aim: We investigate a new diffuse speckle contrast topography (DSCT) method with parallel computation for analyzing scDCT data to achieve fast and high-density two-dimensional (2D) mapping of CBF distributions at different depths without the need for 3D reconstruction. Approach: A new moving window method was adapted to improve the sampling rate of DSCT. A fast computation method utilizing MATLAB functions in the Image Processing Toolbox™ and Parallel Computing Toolbox™ was developed to rapidly generate high-density CBF maps. The new DSCT method was tested for spatial resolution and depth sensitivity in head-simulating layered phantoms and in-vivo rodent models. Results: DSCT enables 2D mapping of the particle flow in the phantom at different depths through the top layer with varied thicknesses. Both DSCT and scDCT enable the detection of global and regional CBF changes in deep brains of adult rats. However, DSCT achieves fast and high-density 2D mapping of CBF distributions at different depths without the need for complex and time-consuming 3D reconstruction. Conclusions: The depth-sensitive DSCT method has the potential to be used as a noninvasive, noncontact, fast, high resolution, portable, and inexpensive brain imager for basic neuroscience research in small animal models and for translational studies in human neonates.
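
The core DSCT quantity, speckle contrast over a moving window (K = sigma/mean), can be computed densely with separable mean filters, which also parallelizes well. A sketch under an assumed window size and synthetic raw speckle follows; it is not the authors' MATLAB implementation:

```python
# Sketch of a moving-window speckle contrast computation, the quantity
# behind DSCT (K = sigma / mean over a small window). Window size and the
# synthetic raw-speckle input are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    mean = uniform_filter(raw, size=window)
    mean_sq = uniform_filter(raw * raw, size=window)
    var = np.clip(mean_sq - mean * mean, 0.0, None)  # guard against round-off
    return np.sqrt(var) / np.maximum(mean, 1e-12)

rng = np.random.default_rng(2)
raw_speckle = rng.gamma(shape=4.0, scale=50.0, size=(512, 512))
K = speckle_contrast(raw_speckle)  # dense 2D contrast map, one value per pixel
```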

6.
Phys Med Biol ; 68(24)2023 Dec 11.
Article in English | MEDLINE | ID: mdl-37890461

ABSTRACT

Objective. Real-time reconstruction of magnetic particle imaging (MPI) shows promising clinical applications. However, prevalent reconstruction methods are mainly based on serial iteration, which causes large delays in real-time reconstruction. In order to achieve lower latency in real-time MPI reconstruction, we propose a parallel method for accelerating reconstruction. Approach. The proposed method, named the adaptive multi-frame parallel iterative method (AMPIM), enables the processing of multi-frame signals into multi-frame MPI images in parallel. To facilitate parallel computing, we further propose an acceleration strategy for parallel computation to improve the computational efficiency of AMPIM. Main results. OpenMPIData was used to evaluate our AMPIM, and the results show that AMPIM improves the reconstruction frame rate of real-time MPI reconstruction by two orders of magnitude compared to prevalent iterative algorithms, including the Kaczmarz algorithm, the conjugate gradient normal residual algorithm, and the alternating direction method of multipliers algorithm. The reconstructed image using AMPIM has a high contrast-to-noise ratio with reduced artifacts. Significance. AMPIM can optimize least squares problems with multiple right-hand sides in parallel by exploiting the dimension of the right-hand side. AMPIM has great potential for application in real-time MPI imaging with a high imaging frame rate.
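
The multi-right-hand-side idea can be sketched with a Kaczmarz sweep that updates all frames at once: treating the frames as columns of X in SX = U turns many vector updates into one matrix update. This illustrates the batching principle only, not AMPIM itself; all matrices are synthetic:

```python
# Sketch of batching over right-hand sides: one Kaczmarz row update is
# applied to every frame (column of X) simultaneously.
import numpy as np

def kaczmarz_multi_rhs(S, U, sweeps=50):
    """Approximately solve S X = U for all columns of U at once."""
    X = np.zeros((S.shape[1], U.shape[1]))
    row_norms = (S * S).sum(axis=1)
    for _ in range(sweeps):
        for i in range(S.shape[0]):
            if row_norms[i] == 0:
                continue
            # Residual of row i across every frame, corrected in one step.
            r = (U[i] - S[i] @ X) / row_norms[i]
            X += np.outer(S[i], r)
    return X

rng = np.random.default_rng(3)
S = rng.normal(size=(200, 100))        # system matrix
X_true = rng.normal(size=(100, 16))    # 16 frames
U = S @ X_true                         # multi-frame measurements
X = kaczmarz_multi_rhs(S, U)
print(np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```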


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Diagnostic Imaging , Phantoms, Imaging , Magnetic Phenomena
7.
Brief Bioinform ; 24(5)2023 09 20.
Article in English | MEDLINE | ID: mdl-37738400

ABSTRACT

Implementing a specific cloud resource to analyze extensive genomic data on severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) poses a challenge when resources are limited. To overcome this, we repurposed a cloud platform initially designed for research on cancer genomics (https://cgc.sbgenomics.com) and used it to build the Cloud Workflow for Viral and Variant Identification (COWID) for SARS-CoV-2 research. COWID is a workflow based on the Common Workflow Language that realizes the full potential of sequencing technology for reliable SARS-CoV-2 identification and leverages cloud computing to achieve efficient parallelization. COWID outperformed other contemporary identification methods by offering scalable identification and reliable variant findings with no false-positive results. COWID typically processed each sample of raw sequencing data within 5 min at a cost of only US$0.01. The COWID source code is publicly available (https://github.com/hendrick0403/COWID) and can be accessed on any computer with Internet access. COWID is designed to be user-friendly; it can be implemented without prior programming knowledge. Therefore, COWID is a time-efficient tool that can be used during a pandemic.


Subject(s)
COVID-19 , Humans , COVID-19/diagnosis , Cloud Computing , SARS-CoV-2/genetics , Workflow , Genomics
8.
Biosystems ; 226: 104887, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36990379

ABSTRACT

Although many studies have revealed that biomarker genes for early cancer detection can be found in biomolecular networks, no proper tool exists to discover cancer biomarker genes from various biomolecular networks. Accordingly, we developed a novel Cytoscape app called C-Biomarker.net, which can identify cancer biomarker genes from the cores of various biomolecular networks. Derived from recent research, we designed and implemented the software based on parallel algorithms proposed in this study for working on high-performance computing devices. We tested our software on various network sizes and found the suitable size for each running mode on CPU or GPU. Interestingly, using the software on 17 cancer signaling pathways, we found that on average 70.59% of the top three nodes residing at the innermost core of each pathway are biomarker genes of the cancer corresponding to the pathway. Similarly, using the software, we found that 100% of the top ten nodes at the cores of both the Human Gene Regulatory (HGR) network and the Human Protein-Protein Interaction (HPPI) network are multi-cancer biomarkers. These case studies provide reliable evidence for the performance of the cancer biomarker prediction function in the software. Through the case studies, we also suggest that the true cores of directed complex networks should be identified by the R-core algorithm rather than the usual K-core. Finally, we compared the prediction results of our software with those of other researchers and confirmed that our prediction method outperforms the other methods. Taken together, C-Biomarker.net is a reliable tool that efficiently detects biomarker nodes from the cores of various large biomolecular networks. The software is available at https://github.com/trantd/C-Biomarker.net.
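
For reference, the conventional k-core that the authors argue against is one line with networkx; the R-core extraction they advocate for directed networks is not implemented here. A sketch on a random directed graph:

```python
# Sketch: the usual k-core computation networkx provides. It works on the
# underlying undirected degree structure, which is exactly what the authors
# argue is inappropriate for directed biomolecular networks (R-core omitted).
import networkx as nx

G = nx.gnp_random_graph(200, 0.05, seed=4, directed=True)

core = nx.core_number(G.to_undirected())  # classic k-core decomposition
innermost_k = max(core.values())
k_core_nodes = [n for n, c in core.items() if c == innermost_k]
print("innermost k-core size:", len(k_core_nodes))
```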


Subject(s)
Mobile Applications , Neoplasms , Humans , Biomarkers, Tumor/genetics , Software , Algorithms , Protein Interaction Maps/genetics , Gene Regulatory Networks/genetics , Neoplasms/diagnosis , Neoplasms/genetics , Computational Biology/methods
9.
Methods Mol Biol ; 2586: 35-48, 2023.
Article in English | MEDLINE | ID: mdl-36705897

ABSTRACT

Information about RNA secondary structure has been widely applied to the inference of RNA function. However, classical prediction methods are not feasible for long RNAs such as mRNA due to computational time and numerical errors. To overcome these problems, sliding window methods have been applied, although their results are not directly comparable to global RNA structure prediction. In this chapter, we introduce ParasoR, a method designed for parallel computation of genome-wide RNA secondary structures. To enable genome-wide prediction, ParasoR distributes the dynamic programming (DP) matrices required for structure prediction to multiple computational nodes. By storing ratios of DP variables rather than the original variables in its database, ParasoR can locally compute structure scores such as stem probability or accessibility on demand. A comprehensive analysis of local secondary structures by ParasoR is expected to be a promising way to detect statistical constraints on long RNAs.
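
The distribution step alone is easy to sketch: carve a long transcript into overlapping windows that independent nodes could fold. ParasoR's actual contribution, computing globally consistent scores from distributed DP ratios, is far more involved and not reproduced here; window sizes are illustrative:

```python
# Sketch of the work-distribution idea only: overlapping windows of a long
# RNA for independent workers. ParasoR itself guarantees globally consistent
# scores, which naive windowing does not.
def windows(seq, size=500, overlap=100):
    step = size - overlap
    return [(i, seq[i:i + size]) for i in range(0, max(len(seq) - overlap, 1), step)]

rna = "AUGC" * 2500                      # 10 kb toy transcript
chunks = windows(rna)
print(len(chunks), "windows to distribute across computational nodes")
```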


Subject(s)
Algorithms , RNA , RNA/genetics , RNA/chemistry , Nucleic Acid Conformation , Computational Biology/methods , RNA, Messenger
10.
Magn Reson Imaging ; 97: 13-23, 2023 04.
Article in English | MEDLINE | ID: mdl-36581213

ABSTRACT

Magnetic Resonance Fingerprinting (MRF) is a new quantitative technique of Magnetic Resonance Imaging (MRI). Conventionally, MRF requires sequential correlation of the acquired MRF signals with all the signals of a (large) MRF dictionary. This is a computationally intensive matching process and a major challenge in MRF image reconstruction. This paper introduces the use of clustering techniques to reduce the effective size of the MRF dictionary by splitting it into multiple small MRF dictionary components called MRF signal groups. The proposed method has been further optimized for parallel processing to reduce the computation time of MRF pattern matching. A multi-core GPU-based parallel framework has been developed that enables the MRF algorithm to process multiple MRF signals simultaneously. Experiments have been performed on human head and phantom datasets. The results show that the proposed method accelerates the conventional (MATLAB-based) MRF reconstruction by up to 25× with a single-core CPU implementation, 300× with a multi-core CPU implementation, and 1035× with the proposed multi-core GPU-based framework, while keeping the SNR of the resulting images in a clinically acceptable range. Furthermore, experimental results show that the memory requirements of the MRF dictionary are significantly reduced in the proposed method due to efficient memory utilization.
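
The dictionary-splitting idea can be sketched as a two-stage match: cluster the dictionary offline, then match a signal first against cluster centroids and only afterwards within the winning signal group. Dictionary size, cluster count, and the random signals below are assumptions for illustration:

```python
# Sketch of two-stage MRF matching over clustered signal groups; the random
# dictionary stands in for simulated fingerprints.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
dictionary = rng.normal(size=(20000, 300))   # 20k fingerprints, 300 time points
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

km = KMeans(n_clusters=64, n_init=4, random_state=0).fit(dictionary)

def match(signal):
    signal = signal / np.linalg.norm(signal)
    # Stage 1: nearest cluster centroid (64 comparisons instead of 20000).
    c = np.argmax(km.cluster_centers_ @ signal)
    members = np.flatnonzero(km.labels_ == c)
    # Stage 2: full correlation only within the chosen signal group.
    return members[np.argmax(dictionary[members] @ signal)]

probe = dictionary[1234] + 0.01 * rng.normal(size=300)
print(match(probe))
```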


Subject(s)
Brain , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Magnetic Resonance Spectroscopy , Algorithms , Phantoms, Imaging
11.
Neural Netw ; 154: 323-332, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35930856

ABSTRACT

Group convolution has been widely used in order to reduce the computation time of convolution, which takes most of the training time of convolutional neural networks. However, it is well known that a large number of groups significantly reduce the performance of group convolution. In this paper, we propose a new convolution methodology called "two-level" group convolution that is robust with respect to the increase of the number of groups and suitable for multi-GPU parallel computation. We first observe that the group convolution can be interpreted as a one-level block Jacobi approximation of the standard convolution, which is a popular notion in the field of numerical analysis. In numerical analysis, there have been numerous studies on the two-level method that introduces an intergroup structure that resolves the performance degradation issue without disturbing parallel computation. Motivated by these, we introduce a coarse-level structure which promotes intergroup communication without being a bottleneck in the group convolution. We show that all the additional work induced by the coarse-level structure can be efficiently processed in a distributed memory system. Numerical results that verify the robustness of the proposed method with respect to the number of groups are presented. Moreover, we compare the proposed method to various approaches for group convolution in order to highlight the superiority of the proposed method in terms of execution time, memory efficiency, and performance.
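
One possible reading of the two-level construction is sketched below in PyTorch: a standard group convolution (the one-level block Jacobi part) plus a cheap coarse-level path that pools each group to a single channel, convolves across groups, and broadcasts the correction back. The coarse-level operator here is our illustrative choice, not necessarily the authors' exact construction:

```python
# Sketch of a "two-level" group convolution: fine level = group conv,
# coarse level = intergroup mixing on one pooled channel per group.
import torch
import torch.nn as nn

class TwoLevelGroupConv(nn.Module):
    def __init__(self, channels, groups, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.groups = groups
        self.fine = nn.Conv2d(channels, channels, kernel_size,
                              padding=pad, groups=groups)
        # Coarse level: one channel per group, fully coupled across groups.
        self.coarse = nn.Conv2d(groups, groups, kernel_size, padding=pad)

    def forward(self, x):
        n, c, h, w = x.shape
        fine = self.fine(x)
        # Average the channels inside each group -> coarse representation.
        coarse_in = x.view(n, self.groups, c // self.groups, h, w).mean(dim=2)
        coarse = self.coarse(coarse_in)
        # Broadcast the intergroup correction back to every channel.
        correction = coarse.repeat_interleave(c // self.groups, dim=1)
        return fine + correction

x = torch.randn(2, 64, 32, 32)
layer = TwoLevelGroupConv(channels=64, groups=8)
print(layer(x).shape)  # torch.Size([2, 64, 32, 32])
```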

12.
Distrib Comput ; 35(2): 165-183, 2022.
Article in English | MEDLINE | ID: mdl-35300185

ABSTRACT

The Massively Parallel Computation (MPC) model serves as a common abstraction of many modern large-scale data processing frameworks, and has been receiving increasingly more attention over the past few years, especially in the context of classical graph problems. So far, the only way to argue lower bounds for this model is to condition on conjectures about the hardness of some specific problems, such as graph connectivity on promise graphs that are either one cycle or two cycles, usually called the one cycle versus two cycles problem. This is unlike the traditional arguments based on conjectures about complexity classes (e.g., P ≠ NP), which are often more robust in the sense that refuting them would lead to groundbreaking algorithms for a whole bunch of problems. In this paper we present connections between problems and classes of problems that allow the latter type of arguments. These connections concern the class of problems solvable in a sublogarithmic number of rounds in the MPC model, denoted by MPC(o(log N)), and the standard space complexity classes L and NL, and suggest conjectures that are robust in the sense that refuting them would lead to many surprisingly fast new algorithms in the MPC model. We also obtain new conditional lower bounds, and prove new reductions and equivalences between problems in the MPC model. Specifically, our main results are as follows. First, lower bounds conditioned on the one cycle versus two cycles conjecture can instead be argued under the L ⊈ MPC(o(log N)) conjecture: these two assumptions are equivalent, and refuting either of them would lead to o(log N)-round MPC algorithms for a large number of challenging problems, including list ranking, minimum cut, and planarity testing. In fact, we show that these problems and many others require asymptotically the same number of rounds as the seemingly much easier problem of distinguishing between a graph being one cycle or two cycles. Second, many lower bounds previously argued under the one cycle versus two cycles conjecture can be argued under an even more robust (thus harder to refute) conjecture, namely NL ⊈ MPC(o(log N)). Refuting this conjecture would lead to o(log N)-round MPC algorithms for an even larger set of problems, including all-pairs shortest paths, betweenness centrality, and all the aforementioned ones. Lower bounds under this conjecture hold for problems such as perfect matching and network flow.

13.
J Cheminform ; 14(1): 7, 2022 Feb 16.
Article in English | MEDLINE | ID: mdl-35172881

ABSTRACT

In this work we explore the properties which make many real-life global optimization problems extremely difficult to handle, along with some of the common techniques used in the literature to address them. We then introduce a general optimization management tool called GloMPO (Globally Managed Parallel Optimization) to help address some of the challenges faced by practitioners. GloMPO manages and shares information between traditional optimization algorithms run in parallel. We hope that GloMPO will be a flexible framework which allows for customization and hybridization of various optimization ideas, while also providing a substitute for the human interventions and decisions which are a common feature of optimizing hard problems. GloMPO is shown to produce lower minima than traditional optimization approaches on global optimization test functions, the Lennard-Jones cluster problem, and ReaxFF reparameterizations. The novel feature of forced optimizer termination was shown to find better minima than normal optimization. GloMPO also provides qualitative benefits such as identifying degenerate minima and providing a standardized interface and workflow manager.
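
The baseline that GloMPO improves upon, several independent local optimizers run in parallel with the best minimum kept, can be sketched briefly; GloMPO's information sharing and forced termination are not reproduced. The Rosenbrock test function and start counts are arbitrary:

```python
# Sketch of unmanaged parallel optimization: independent local searches
# from random starts, best result kept. GloMPO adds management on top.
from multiprocessing import Pool

import numpy as np
from scipy.optimize import minimize, rosen

def run_optimizer(x0):
    res = minimize(rosen, x0, method="Nelder-Mead")
    return res.fun, res.x

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    starts = [rng.uniform(-2, 2, size=5) for _ in range(8)]
    with Pool(4) as pool:
        results = pool.map(run_optimizer, starts)
    best_f, best_x = min(results, key=lambda r: r[0])
    print("best minimum found:", best_f)
```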

14.
Neural Netw ; 144: 297-306, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34543855

ABSTRACT

The recurrent network architecture is a widely used model in sequence modeling, but its serial dependency hinders parallelization of the computation, which makes the operation inefficient. The same problem was encountered in the serial adder at the early stage of digital electronics. In this paper, we discuss the similarities between the recurrent neural network (RNN) and the serial adder. Inspired by the carry-lookahead adder, we introduce a carry-lookahead module to the RNN, which makes it possible for the RNN to run in parallel. We then design a method for parallel RNN computation and, finally, propose the Carry-lookahead RNN (CL-RNN). CL-RNN offers advantages in parallelism and a flexible receptive field. Through a comprehensive set of tests, we verify that CL-RNN can outperform existing typical RNNs on sequence modeling tasks specifically designed for RNNs. Code and models are available at: https://github.com/WinnieJiangHW/Carry-lookahead_RNN.
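
The reason a carry-lookahead view unlocks parallelism can be shown on a linear recurrence h_t = a_t * h_{t-1} + b_t: affine maps compose associatively, exactly like carry propagation, so a parallel prefix scan evaluates the whole sequence in O(log n) sweeps. This demonstrates the principle only; CL-RNN itself is a learned, nonlinear architecture:

```python
# Sketch: a linear recurrence evaluated two ways. The prefix scan performs
# O(log n) sweeps whose inner operations are fully parallelizable.
import numpy as np

def sequential_scan(a, b):
    h, out = 0.0, []
    for at, bt in zip(a, b):
        h = at * h + bt
        out.append(h)
    return np.array(out)

def prefix_scan(a, b):
    # Composing (a1,b1) then (a2,b2): h -> a2*(a1*h + b1) + b2.
    a, b = a.copy(), b.copy()
    n, step = len(a), 1
    while step < n:                       # O(log n) sweeps
        b[step:] = a[step:] * b[:-step] + b[step:]
        a[step:] = a[step:] * a[:-step]
        step *= 2
    return b                              # with h_0 = 0, the result is b

rng = np.random.default_rng(7)
a, b = rng.uniform(0.5, 1.0, 64), rng.normal(size=64)
assert np.allclose(sequential_scan(a, b), prefix_scan(a, b))
```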


Subject(s)
Neural Networks, Computer
15.
MethodsX ; 8: 101437, 2021.
Article in English | MEDLINE | ID: mdl-34430326

ABSTRACT

This article describes a method for creating applications for cluster computing systems using the parallel BSF-skeleton, based on the original BSF (Bulk Synchronous Farm) model of parallel computation developed earlier by the author. This model uses the master/slave paradigm. The main advantage of the BSF model is that it makes it possible to estimate the scalability of a parallel algorithm before its implementation. Another important feature of the BSF model is the representation of problem data in the form of lists, which greatly simplifies the logic of building applications. The BSF-skeleton is designed for creating parallel programs in C++ using the MPI library. The scope of the BSF-skeleton is iterative numerical algorithms of high computational complexity. The BSF-skeleton has the following distinctive features. • The BSF-skeleton completely encapsulates all aspects that are associated with parallelizing a program. • The BSF-skeleton allows error-free compilation at all stages of application development. • The BSF-skeleton supports the OpenMP programming model and workflows.
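
A minimal master/slave round in the BSF spirit, written with mpi4py rather than the skeleton's C++/MPI: the master scatters list chunks, every process maps over its chunk, and a reduce collects the result. Run with, e.g., mpiexec -n 4 python bsf_sketch.py; the sum-of-squares job is a placeholder:

```python
# Sketch of the master/slave (scatter -> map -> reduce) paradigm the BSF
# model is built on. Not the BSF-skeleton itself, which is a C++ framework.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Master represents the problem data as a list of chunks, one per process.
chunks = None
if rank == 0:
    chunks = np.array_split(np.arange(1_000_000, dtype=np.float64), size)

chunk = comm.scatter(chunks, root=0)               # distribute the list
partial = float(np.sum(chunk * chunk))             # "map" step on each worker
total = comm.reduce(partial, op=MPI.SUM, root=0)   # "reduce" on the master

if rank == 0:
    print("sum of squares:", total)
```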

16.
Math Biosci Eng ; 18(4): 4461-4476, 2021 05 24.
Article in English | MEDLINE | ID: mdl-34198448

ABSTRACT

Application-Specific Internet of Things (ASIoTs) have recently been proposed to address specific requirements for IoT. The objective of this paper is to serve as a framework for the design of ASIoTs, using biometrics as the application. The paper provides a comprehensive discussion of an ASIoT architecture considering the requirements for biometrics-based security, multimedia content, and Big Data applications. A comprehensive architecture for biometrics-based IoT (BiometricIoT) and Big Data applications needs to address three challenges: 1) IoT devices are hardware-constrained and cannot afford resource-demanding cryptographic protocols; 2) biometric devices introduce multimedia data content due to different biometric traits; and 3) the rapid growth of biometrics-based IoT devices and content creates large amounts of data for computational processing. The proposed BiometricIoT architecture consists of seven layers designed to handle the challenges of biometrics applications and decision making. The latter part of the paper discusses design factors for the BiometricIoT from four perspectives: 1) parallel divide-and-conquer (D&C) computation; 2) computational complexity; 3) device security; and 4) algorithm efficacies. Experimental results are given to validate the effectiveness of the D&C approach. The paper motivates further research and development of ASIoTs for biometrics applications.
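
The parallel divide-and-conquer matching pattern from perspective 1) can be sketched as follows: divide the enrolled gallery into partitions, search each in parallel, and conquer by taking the global best score. The feature vectors and similarity measure are placeholders:

```python
# Sketch of parallel divide-and-conquer biometric matching: partition the
# gallery (divide), search partitions in parallel, merge the best (conquer).
from concurrent.futures import ProcessPoolExecutor

import numpy as np

def best_in_partition(args):
    templates, probe = args
    scores = templates @ probe            # placeholder similarity measure
    i = int(np.argmax(scores))
    return float(scores[i]), i            # best score, partition-local index

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    gallery = rng.normal(size=(40_000, 128))
    probe = rng.normal(size=128)
    parts = np.array_split(gallery, 4)    # divide
    with ProcessPoolExecutor(max_workers=4) as ex:
        results = list(ex.map(best_in_partition, [(p, probe) for p in parts]))
    print(max(results))                   # conquer: global best match
```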


Subject(s)
Internet of Things , Algorithms , Big Data , Computer Security , Computers , Internet
17.
PeerJ Comput Sci ; 7: e416, 2021.
Article in English | MEDLINE | ID: mdl-33834101

ABSTRACT

A microarray is a revolutionary tool that generates vast volumes of data describing the expression profiles of the genes under investigation, data that qualify as Big Data. Hadoop and Spark are efficient frameworks developed to store and analyze Big Data. Analyzing microarray data helps researchers to identify correlated genes. Clustering has been successfully applied to analyze microarray data by grouping genes with similar expression profiles into clusters. The complex nature of microarray data has obliged clustering methods to employ multiple evaluation functions to ensure obtaining solutions of high quality, which transforms the clustering problem into a Multi-Objective Problem (MOP). A new and efficient hybrid Multi-Objective Whale Optimization Algorithm with Tabu Search (MOWOATS) was previously proposed to solve MOPs. In this article, MOWOATS is applied to analyze massive microarray datasets. Three evaluation functions have been developed to ensure an effective assessment of solutions. MOWOATS has been adapted to run in parallel using Spark over Hadoop computing clusters. The quality of the generated solutions was evaluated based on different indices, such as the Silhouette and Davies-Bouldin indices. The obtained clusters were very similar to the original classes. Regarding scalability, the running time was inversely proportional to the number of computing nodes.
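
The Spark-parallel evaluation step can be sketched in PySpark: broadcast the expression matrix and score a population of candidate clusterings across the cluster. The whale optimization and Tabu search logic, and the article's three objective functions, are not reproduced; the silhouette score stands in as a single objective:

```python
# Sketch of Spark-parallel fitness evaluation for a population of candidate
# clusterings. Synthetic data; one stand-in objective function.
import numpy as np
from pyspark.sql import SparkSession
from sklearn.metrics import silhouette_score

spark = SparkSession.builder.appName("mowoats-sketch").getOrCreate()
sc = spark.sparkContext

rng = np.random.default_rng(9)
data = rng.normal(size=(1000, 20))        # stand-in for microarray profiles
population = [rng.integers(0, 5, size=1000) for _ in range(32)]  # candidates

bdata = sc.broadcast(data)                # ship the data to workers once
scores = (sc.parallelize(population, numSlices=8)
            .map(lambda labels: float(silhouette_score(bdata.value, labels)))
            .collect())
print(max(scores))
```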

18.
Front Big Data ; 4: 756041, 2021.
Article in English | MEDLINE | ID: mdl-35198971

ABSTRACT

Data-intensive applications are becoming commonplace in all science disciplines. They comprise a rich set of sub-domains such as data engineering, deep learning, and machine learning. These applications are built around efficient data abstractions and operators that suit the applications of different domains. The lack of clear definitions of data structures and operators in the field has often led to implementations that do not work well together. The HPTMT architecture that we recently proposed identifies a set of data structures, operators, and an execution model for creating rich data applications that link all aspects of data engineering and data science together efficiently. This paper elaborates and illustrates this architecture using an end-to-end application with deep learning and data engineering parts working together. Our analysis shows that the proposed system architecture is better suited for high-performance computing environments than current big data processing systems. Furthermore, our proposed system emphasizes the importance of efficient, compact data structures such as the Apache Arrow tabular data representation, which is designed for high performance. Thus, the proposed system integration scales a sequential computation to a distributed computation while retaining optimum performance along with a highly usable application programming interface.
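
The compact-data-structure point is easy to demonstrate with pyarrow: an Arrow table stores columns in a language-agnostic layout that downstream compute stages can view without copying. Column names and sizes are illustrative:

```python
# Sketch of Arrow's columnar, zero-copy-friendly representation, the kind
# of compact data structure the architecture above emphasizes.
import numpy as np
import pyarrow as pa

table = pa.table({
    "sample_id": pa.array(np.arange(1_000_000, dtype=np.int64)),
    "feature": pa.array(np.random.default_rng(10).normal(size=1_000_000)),
})

# Hand the column to NumPy for a compute stage (zero-copy where possible).
features = table.column("feature").to_numpy()
print(table.num_rows, features.mean())
```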

19.
Article in English | MEDLINE | ID: mdl-35664445

ABSTRACT

We propose a novel stereo laparoscopy video-based non-rigid SLAM method called EMDQ-SLAM, which can incrementally reconstruct three-dimensional (3D) models of soft tissue surfaces in real time and preserve high-resolution color textures. EMDQ-SLAM uses the expectation maximization and dual quaternion (EMDQ) algorithm combined with SURF features to track the camera motion and estimate tissue deformation between video frames. To overcome the problem of accumulative errors over time, we have integrated a g2o-based graph optimization method that combines the EMDQ mismatch removal and as-rigid-as-possible (ARAP) smoothing methods. Finally, the multi-band blending (MBB) algorithm has been used to obtain high-resolution color textures with real-time performance. Experimental results demonstrate that our method outperforms two state-of-the-art non-rigid SLAM methods: MISSLAM and DefSLAM. Quantitative evaluation shows an average error in the range of 0.8-2.2 mm for different cases.

20.
Behav Res Methods ; 53(3): 1148-1165, 2021 06.
Article in English | MEDLINE | ID: mdl-33001382

ABSTRACT

Recent advances in Markov chain Monte Carlo (MCMC) extend the scope of Bayesian inference to models for which the likelihood function is intractable. Although these developments allow us to estimate model parameters, other basic problems such as estimating the marginal likelihood, a fundamental tool in Bayesian model selection, remain challenging. This is an important scientific limitation because testing psychological hypotheses with hierarchical models has proven difficult with current model selection methods. We propose an efficient method for estimating the marginal likelihood for models where the likelihood is intractable, but can be estimated unbiasedly. It is based on first running a sampling method such as MCMC to obtain samples for the model parameters, and then using these samples to construct the proposal density in an importance sampling (IS) framework with an unbiased estimate of the likelihood. Our method has several attractive properties: it generates an unbiased estimate of the marginal likelihood, it is robust to the quality and target of the sampling method used to form the IS proposals, and it is computationally cheap to estimate the variance of the marginal likelihood estimator. We also obtain the convergence properties of the method and provide guidelines on maximizing computational efficiency. The method is illustrated in two challenging cases involving hierarchical models: identifying the form of individual differences in an applied choice scenario, and evaluating the best parameterization of a cognitive model in a speeded decision making context. Freely available code to implement the methods is provided. Extensions to posterior moment estimation and parallelization are also discussed.
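
A sketch of the estimator on a conjugate toy model, where the exact marginal likelihood is available for comparison: posterior draws (standing in for MCMC output) shape a Gaussian proposal, and importance weights likelihood × prior / proposal are averaged in a numerically stable way. The likelihood is evaluated exactly here, whereas the paper targets models where it can only be estimated unbiasedly; the model and all settings are illustrative:

```python
# Sketch of IS-based marginal likelihood estimation with an MCMC-shaped
# proposal, on a normal model with known variance and conjugate prior.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
y = rng.normal(1.0, 1.0, size=50)             # data, sigma = 1 known
prior = stats.norm(0.0, 10.0)                 # mu ~ N(0, 10^2)

# Stand-in for MCMC output: draws from the exact conjugate posterior.
post_var = 1.0 / (1.0 / 10.0**2 + len(y))
post_mean = post_var * y.sum()
mcmc = rng.normal(post_mean, np.sqrt(post_var), size=5000)

# IS proposal fitted to the posterior draws (slightly inflated for safety).
prop = stats.norm(mcmc.mean(), 1.5 * mcmc.std())
draws = prop.rvs(size=20000, random_state=rng)

# log weights: log likelihood + log prior - log proposal, per draw.
log_w = (stats.norm(draws[:, None], 1.0).logpdf(y).sum(axis=1)
         + prior.logpdf(draws) - prop.logpdf(draws))
log_ml = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

# Exact log marginal likelihood for this conjugate model, for reference.
exact = stats.multivariate_normal(
    mean=np.zeros(len(y)), cov=np.eye(len(y)) + 10.0**2).logpdf(y)
print(log_ml, exact)
```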


Subject(s)
Cognition , Bayes Theorem , Humans , Likelihood Functions , Markov Chains , Monte Carlo Method