Results 1 - 20 of 120
1.
Sensors (Basel) ; 24(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39275576

ABSTRACT

Wi-Fi fingerprint-based indoor localization methods are effective in static environments but encounter challenges in dynamic, real-world scenarios due to evolving fingerprint patterns and feature spaces. This study investigates the temporal variations in signal strength over a 25-month period to enhance adaptive long-term Wi-Fi localization. Key aspects explored include the significance of signal features, the effects of sampling fluctuations, and overall accuracy measured by mean absolute error. Techniques such as mean-based feature selection, principal component analysis (PCA), and functional discriminant analysis (FDA) were employed to analyze signal features. The proposed algorithm, Ada-LT IP, which incorporates data reduction and transfer learning, shows improved accuracy compared to state-of-the-art methods evaluated in the study. Additionally, the study addresses multicollinearity through PCA and covariance analysis, revealing a reduction in computational complexity and enhanced accuracy for the proposed method, thereby providing valuable insights for improving adaptive long-term Wi-Fi indoor localization systems.
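
The abstract names PCA among the feature-analysis techniques. As a minimal sketch of how PCA-based data reduction is typically applied to Wi-Fi RSSI fingerprints (the matrix shape and variance threshold below are illustrative assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.decomposition import PCA

# rows = fingerprint scans, columns = RSSI per access point (dBm);
# random data stands in for a real long-term measurement campaign
rng = np.random.default_rng(0)
X = rng.normal(-70, 8, size=(500, 120))

pca = PCA(n_components=0.95)            # keep components explaining 95% of variance
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape)   # fewer, decorrelated features
```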

2.
Neural Netw ; 178: 106462, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38901094

ABSTRACT

In this paper, the problem of time-variant optimization subject to a nonlinear equation constraint is studied. To solve this challenging problem, methods based on neural networks, such as the zeroing neural network and the gradient neural network, are commonly adopted due to their performance in handling nonlinear problems. However, the traditional zeroing neural network algorithm requires computing the matrix inverse during the solving process, which is a complicated and time-consuming operation. Although the gradient neural network algorithm does not require computing the matrix inverse, its accuracy is not high enough. Therefore, a novel inverse-free zeroing neural network algorithm is proposed in this paper. The proposed algorithm not only avoids the matrix inverse but also avoids matrix multiplication, greatly reducing the computational complexity. In addition, a detailed theoretical analysis of the convergence of the proposed algorithm is provided to guarantee its capability in solving time-variant optimization problems. Numerical simulations and comparative experiments with traditional zeroing neural network and gradient neural network algorithms substantiate the accuracy and superiority of the novel inverse-free zeroing neural network algorithm. To further validate its performance in practical applications, path-tracking tasks on three manipulators (i.e., Universal Robot 5, Franka Emika Panda, and Kinova JACO2) are conducted, and the results verify the applicability of the proposed algorithm.
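
For context, the gradient neural network mentioned above avoids matrix inversion by descending the squared residual. A minimal discretized sketch for a time-variant linear system A(t)x(t) = b(t) follows; the specific A, b, gain, and step size are illustrative assumptions, and the paper's inverse-free zeroing neural network uses different dynamics:

```python
import numpy as np

def A(t):  # time-variant coefficient matrix (illustrative)
    return np.array([[3 + np.sin(t), 0.5], [0.5, 3 + np.cos(t)]])

def b(t):  # time-variant right-hand side (illustrative)
    return np.array([np.cos(t), np.sin(t)])

gamma, dt = 50.0, 1e-3                   # design gain and Euler step
x = np.zeros(2)
for k in range(10000):
    t = k * dt
    e = A(t) @ x - b(t)                  # residual error
    x = x - dt * gamma * (A(t).T @ e)    # gradient dynamics: no inverse needed
print(x, np.linalg.solve(A(t), b(t)))    # compare with the direct solution
```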


Subject(s)
Algorithms , Neural Networks, Computer , Nonlinear Dynamics , Computer Simulation , Robotics , Time Factors , Humans
3.
Entropy (Basel) ; 26(5)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38785634

ABSTRACT

In brain imaging segmentation, precise tumor delineation is crucial for diagnosis and treatment planning. Traditional approaches include convolutional neural networks (CNNs), which struggle with processing sequential data, and transformer models that face limitations in maintaining computational efficiency with large-scale data. This study introduces MambaBTS: a model that synergizes the strengths of CNNs and transformers, is inspired by the Mamba architecture, and integrates cascade residual multi-scale convolutional kernels. The model employs a mixed loss function that blends dice loss with cross-entropy to refine segmentation accuracy effectively. This novel approach reduces computational complexity, enhances the receptive field, and demonstrates superior performance for accurately segmenting brain tumors in MRI images. Experiments on the MICCAI BraTS 2019 dataset show that MambaBTS achieves dice coefficients of 0.8450 for the whole tumor (WT), 0.8606 for the tumor core (TC), and 0.7796 for the enhancing tumor (ET) and outperforms existing models in terms of accuracy, computational efficiency, and parameter efficiency. These results underscore the model's potential to offer a balanced, efficient, and effective segmentation method, overcoming the constraints of existing models and promising significant improvements in clinical diagnostics and planning.
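
The mixed loss mentioned above blends soft Dice with cross-entropy. A minimal PyTorch sketch of such a combination is shown below; the equal weighting and the particular soft-Dice form are assumptions, not necessarily the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def mixed_loss(logits, target, alpha=0.5, eps=1e-6):
    # logits: (N, C, H, W); target: (N, H, W) integer class labels
    ce = F.cross_entropy(logits, target)                     # cross-entropy term
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes=logits.shape[1])
    one_hot = one_hot.permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice = (2 * intersection + eps) / (cardinality + eps)    # per-class soft Dice
    return alpha * ce + (1 - alpha) * (1 - dice.mean())
```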

4.
ISA Trans ; 149: 314-324, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38614901

ABSTRACT

Recently, there has been strong interest in the minimum error entropy (MEE) criterion derived from information theoretic learning, which is effective in dealing with multimodal non-Gaussian noise. However, its kernel function is shift-invariant, making the MEE criterion insensitive to the error location. An existing solution is to combine the maximum correntropy (MC) and MEE criteria, leading to the MEE criterion with fiducial points (MEEF). Nevertheless, algorithms based on the MEEF criterion usually incur higher computational complexity. To remedy this problem, an improved MEEF (IMEEF) criterion is devised, aiming to avoid repetitive calculations of the a posteriori error, and an adaptive filtering algorithm based on the gradient descent (GD) method is proposed, namely the GD-based IMEEF (IMEEF-GD) algorithm. In addition, we provide the convergence condition in the mean sense, along with an analysis of the steady-state and transient behaviors of IMEEF-GD in the mean-square sense. Its computational complexity is also analyzed. Simulation results demonstrate that the computational requirement of our algorithm does not vary significantly with the number of error samples and that the derived theoretical model is highly consistent with the learning curve. Finally, we employ the IMEEF-GD algorithm in tasks such as system identification, wind signal magnitude prediction, temperature prediction, and acoustic echo cancellation (AEC) to validate its effectiveness.
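
To make the correntropy building block concrete: a maximum-correntropy gradient update for a linear adaptive filter weights each error by a Gaussian kernel, which suppresses impulsive outliers. The sketch below shows only this MC ingredient; the full MEEF/IMEEF cost also involves pairwise error-entropy terms, and the step size and kernel width here are illustrative:

```python
import numpy as np

def mc_lms(X, d, mu=0.05, sigma=1.0):
    # maximum-correntropy LMS: the Gaussian weight on the error makes
    # the update robust to impulsive (non-Gaussian) noise samples
    w = np.zeros(X.shape[1])
    for x, dn in zip(X, d):
        e = dn - w @ x
        w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * x
    return w

rng = np.random.default_rng(1)
w_true = np.array([0.6, -0.3, 0.1])
X = rng.normal(size=(2000, 3))
noise = rng.standard_t(df=1.5, size=2000)   # heavy-tailed noise
d = X @ w_true + 0.1 * noise
print(mc_lms(X, d))                          # close to w_true
```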

5.
Sci Rep ; 14(1): 4070, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38374350

ABSTRACT

To simultaneously maintain the accuracy of ship magnetic field modeling while reducing the condition number of the coefficient matrix and the model's computational complexity, an improved composite model is designed by introducing a magnetic dipole array model with a single-axis magnetic moment on the basis of the hybrid ellipsoid and magnetic dipole array model. First, the improved composite model of the ship's magnetic field is established from the magnetic dipole array model with 3-axis magnetic moments, the magnetic dipole array model with only x-axis magnetic moments, and the ellipsoid model. Secondly, the set of equations for calculating the magnetic moments of the composite model is established; to address its ill-conditioning, least-squares estimation, stepwise regression, Tikhonov regularization, and truncated singular value decomposition are introduced, and generalized cross-validation is used to select the optimal regularization parameters. Finally, a ship model test is designed to compare the composite and hybrid models in four respects: the condition number of the coefficient matrix of the model equation set, the relative error of magnetic field fitting, the relative error of magnetic field extrapolation, and the computational time complexity. The modeling results based on the ship model test data show that the composite model is suitable for modeling the magnetic field of ships: compared with the hybrid model, it reduces the condition number of the coefficient matrix and improves computational efficiency while retaining high modeling accuracy, and it can be effectively applied in related scientific research and engineering.
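
As an illustration of one regularization step described above, the sketch below solves an ill-conditioned least-squares system with Tikhonov regularization and picks the regularization parameter by generalized cross-validation via the SVD; the test matrix is synthetic, standing in for the composite model's coefficient matrix:

```python
import numpy as np

def tikhonov_gcv(A, b, lambdas):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    best = None
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)             # Tikhonov filter factors
        x = Vt.T @ ((f / s) * (U.T @ b))       # regularized solution
        resid = np.linalg.norm(A @ x - b)**2
        gcv = resid / (len(b) - f.sum())**2    # generalized cross-validation score
        if best is None or gcv < best[0]:
            best = (gcv, lam, x)
    return best[1], best[2]

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 10)) @ np.diag(10.0 ** -np.arange(10))  # ill-conditioned
x_true = rng.normal(size=10)
b = A @ x_true + 1e-4 * rng.normal(size=100)
lam, x = tikhonov_gcv(A, b, np.logspace(-8, 0, 50))
print("chosen lambda:", lam)
```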

6.
Sensors (Basel) ; 24(2)2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38257502

ABSTRACT

A Global Navigation Satellite System (GNSS) is widely used today for both positioning and timing purposes. Many distinct receiver chips are available off-the-shelf as Application-Specific Integrated Circuits (ASICs), each tailored to the requirements of various applications. These chips deliver good performance and low energy consumption but offer customers little-to-no transparency about their internal features. This prevents modification, research in GNSS processing chain enhancement (e.g., application of Approximate Computing (AxC) techniques), and design space exploration to find the optimal receiver for a use case. In this paper, we review the GNSS processing chain using SyDR, our open-source GNSS Software-Defined Radio (SDR) designed for algorithm benchmarking, and highlight the limitations of a software-only environment. We then propose an evolution of our system, called Hard SyDR, to move closer to the hardware layer and access new Key Performance Indicators (KPIs), such as power/energy consumption and resource utilization. We use High-Level Synthesis (HLS) and the PYNQ platform to ease our development process and provide an overview of their advantages/limitations in our project. Finally, we discuss the foreseen developments, including how this work can serve as the foundation for exploring AxC techniques in future low-power GNSS receivers.

7.
Sensors (Basel) ; 23(24)2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38139643

ABSTRACT

To address error propagation and the exorbitant computational complexity of signal detection in wireless multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) systems, a low-complexity and efficient signal detection scheme with iterative feedback is proposed, based on constellation point feedback optimization of minimum mean square error-ordered successive interference cancellation (MMSE-OSIC), to approach optimal detection. Candidate vectors are formed by selecting candidate constellation points, and the vector closest to the received signal is chosen among them by the maximum likelihood (ML) criterion, reducing the error propagation caused by previous erroneous decisions and thus improving detection performance. Because the iterative MMSE process involves a large number of matrix inversion operations, effective and fast signal detection is hard to achieve. A symmetric successive relaxation iterative algorithm is therefore proposed to avoid the complex matrix inversion. The relaxation factor and initial iteration value are configured with low computational complexity to achieve detection performance close to that of MMSE with fewer iterations. At the same time, the error diffusion and complexity accumulation caused by the successive detection of the subsequent OSIC stage are mitigated. In addition, a parallel coarse-and-fine detection method handles several layers at once to both reduce iterations and improve performance. The proposed scheme therefore significantly improves MIMO-OFDM performance and is promising for future sixth generation (6G) mobile communications, wireless sensor networks, and related systems.
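
To illustrate the inversion-free idea: the MMSE filter solves (H^H H + sigma^2 I) x = H^H y, and relaxation sweeps can approximate that solution without ever forming a matrix inverse. The sketch below uses symmetric successive over-relaxation; the paper's exact relaxed iteration and its factor/initialization choices may differ:

```python
import numpy as np

def ssor_mmse_detect(H, y, sigma2, omega=1.2, iters=5):
    # solve (H^H H + sigma2 I) x = H^H y with forward+backward SOR sweeps
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    b = H.conj().T @ y
    x = b / np.diag(A).real                 # cheap diagonal initialization
    n = len(b)
    for _ in range(iters):
        for order in (range(n), reversed(range(n))):
            for i in order:
                r = b[i] - A[i] @ x + A[i, i] * x[i]
                x[i] = (1 - omega) * x[i] + omega * r / A[i, i]
    return x

rng = np.random.default_rng(3)
H = (rng.normal(size=(64, 16)) + 1j * rng.normal(size=(64, 16))) / np.sqrt(2)
x_true = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=16)   # QPSK symbols
y = H @ x_true + 0.05 * (rng.normal(size=64) + 1j * rng.normal(size=64))
print(np.round(ssor_mmse_detect(H, y, 0.01)))              # recovers the symbols
```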

8.
BMC Bioinformatics ; 24(1): 435, 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37974081

ABSTRACT

Biclustering of biologically meaningful binary information is essential in many applications related to drug discovery, such as protein-protein interactions and gene expression. However, for robust performance on recently emerging large health datasets, it is important for new biclustering algorithms to be scalable and fast. We present a rapid unsupervised biclustering (RUBic) algorithm that achieves this objective with a novel encoding and search strategy. RUBic significantly reduces computational overhead on both synthetic and experimental datasets with respect to several state-of-the-art biclustering algorithms. On 100 synthetic binary datasets, our method took [Formula: see text] s to extract 494,872 biclusters. On the human PPI database of size [Formula: see text], our method generates 1840 biclusters in [Formula: see text] s. On a central nervous system embryonic tumor gene expression dataset of size 712,940, our algorithm takes 101 min to produce 747,069 biclusters, while recent competing algorithms take significantly more time to produce the same result. RUBic is also evaluated on five different gene expression datasets and shows significant speed-up in execution time with respect to existing approaches for extracting significant KEGG-enriched biclusters. RUBic can operate in two modes, base and flex: base mode generates maximal biclusters, while flex mode generates fewer clusters, faster, based on their biological significance with respect to KEGG pathways. The code is available at ( https://github.com/CMATERJU-BIOINFO/RUBic ) for academic use only.
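
RUBic's actual encoding and search strategy are detailed in the paper; as a toy illustration of the general idea of encoding binary rows as bitsets so that candidate biclusters can be grown by fast intersections (the data and grouping rule below are made up):

```python
# each row of a binary matrix becomes one integer bitmask over columns
matrix = [
    [1, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
]
masks = [sum(bit << j for j, bit in enumerate(row)) for row in matrix]

# intersecting two rows' masks yields the columns of a candidate bicluster
common = masks[0] & masks[1]
cols = [j for j in range(4) if common >> j & 1]
print(cols)  # columns shared by rows 0 and 1 -> [0, 1]
```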


Subject(s)
Algorithms , Data Management , Humans , Databases, Factual , Cluster Analysis , Gene Expression Profiling/methods
9.
Entropy (Basel) ; 25(10)2023 Oct 08.
Article in English | MEDLINE | ID: mdl-37895546

ABSTRACT

Symmetric extensions are essential in quantum mechanics, providing a lens through which to investigate the correlations of entangled quantum systems and to address challenges like the quantum marginal problem. Though semi-definite programming (SDP) is a recognized method for handling symmetric extensions, it struggles with computational constraints, especially due to the large number of real parameters in generalized qudit systems. In this study, we introduce an approach that adeptly leverages permutation symmetry. By fine-tuning the SDP problem for detecting k-symmetric extensions, our method markedly diminishes the dimensionality of the search space and trims the number of parameters essential for positive-definiteness tests. This leads to an algorithmic enhancement, reducing the complexity from O(d^{2k}) to O(k^{d^2}) in the qudit k-symmetric extension scenario. Additionally, our approach streamlines the process of verifying the positive definiteness of the results. These advancements pave the way for deeper insights into quantum correlations, highlighting potential avenues for refined research and innovations in quantum information theory.
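
For reference, the baseline SDP that such symmetry-exploiting methods accelerate checks whether a bipartite state admits a k-symmetric extension. Below is a minimal 2-extension feasibility test for two qubits in cvxpy; this is the naive formulation without the paper's symmetry reduction, and the test state and mixing weight are illustrative:

```python
import numpy as np
import cvxpy as cp

d = 2
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)            # maximally entangled pair
p = 0.3                                              # mixing weight (illustrative)
rho_ab = p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4

# permutation operator swapping subsystems B1 and B2 of A ⊗ B1 ⊗ B2
n = d**3
S = np.zeros((n, n))
for a in range(d):
    for b1 in range(d):
        for b2 in range(d):
            S[(a * d + b1) * d + b2, (a * d + b2) * d + b1] = 1

rho = cp.Variable((n, n), hermitian=True)
constraints = [
    rho >> 0,
    cp.trace(rho) == 1,
    cp.partial_trace(rho, [d, d, d], axis=2) == rho_ab,  # tracing out B2 gives rho_AB
    S @ rho @ S.T == rho,                                # invariant under B1 <-> B2
]
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)
print(problem.status)  # 'optimal' => a 2-symmetric extension exists
```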

10.
Sensors (Basel) ; 23(17)2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37687916

ABSTRACT

This research presents a comprehensive study of the dichotomous search iterative parabolic discrete time Fourier transform (Ds-IpDTFT) estimator, a novel approach for fine frequency estimation in noisy exponential signals. The proposed estimator leverages a dichotomous search process before iterative interpolation estimation, which significantly reduces computational complexity while maintaining high estimation accuracy. An in-depth exploration of the relationship between the optimal parameter p and the unknown parameter δ forms the backbone of the methodology. Through extensive simulations and real-world experiments, the Ds-IpDTFT estimator exhibits superior performance relative to other established estimators, demonstrating robustness in noisy conditions and stability across varying frequencies. This efficient and accurate estimation method is a significant contribution to the field of signal processing and offers promising potential for practical applications.
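
To situate the method: simple fine-frequency estimators refine a coarse FFT peak with parabolic interpolation on neighboring bin magnitudes; the Ds-IpDTFT adds a dichotomous search and iterative DTFT interpolation on top of this idea. The sketch below shows only the baseline, on an illustrative signal:

```python
import numpy as np

def coarse_plus_parabolic(x, fs):
    X = np.abs(np.fft.rfft(x))
    k = int(np.argmax(X[1:-1])) + 1             # coarse peak bin
    a, b, c = X[k - 1], X[k], X[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)     # parabolic vertex offset in bins
    return (k + delta) * fs / len(x)

fs, f0, n = 1000.0, 123.4, 1024
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.1 * np.random.default_rng(4).normal(size=n)
print(coarse_plus_parabolic(x, fs))             # close to 123.4 Hz
```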

11.
Cancers (Basel) ; 15(16)2023 Aug 17.
Article in English | MEDLINE | ID: mdl-37627172

ABSTRACT

Accurate classification of cancer images plays a crucial role in diagnosis and treatment planning. Deep learning (DL) models have shown promise in achieving high accuracy, but their performance can be influenced by variations in Hematoxylin and Eosin (H&E) staining techniques. In this study, we investigate the impact of H&E stain normalization on the performance of DL models in cancer image classification. We evaluate the performance of VGG19, VGG16, ResNet50, MobileNet, Xception, and InceptionV3 on a dataset of H&E-stained cancer images. Our findings reveal that while VGG16 exhibits strong performance, VGG19 and ResNet50 demonstrate limitations in this context. Notably, stain normalization techniques significantly improve the performance of less complex models such as MobileNet and Xception, which emerge as competitive alternatives with lower computational complexity and resource requirements. The results highlight the importance of optimizing less complex models through stain normalization to achieve accurate and reliable cancer image classification. This research holds tremendous potential for advancing the development of computationally efficient cancer classification systems, ultimately benefiting cancer diagnosis and treatment.
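
Reinhard color transfer is one widely used stain-normalization technique for H&E images; whether it matches this paper's choice is an assumption (Macenko and others are also common). A minimal sketch, requiring scikit-image:

```python
import numpy as np
from skimage import color

def reinhard_normalize(src_rgb, target_rgb):
    # match per-channel mean/std in LAB space (Reinhard-style color transfer)
    src, tgt = color.rgb2lab(src_rgb), color.rgb2lab(target_rgb)
    out = np.empty_like(src)
    for c in range(3):
        mu_s, sd_s = src[..., c].mean(), src[..., c].std()
        mu_t, sd_t = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - mu_s) / (sd_s + 1e-8) * sd_t + mu_t
    return np.clip(color.lab2rgb(out), 0, 1)

# usage: normalized = reinhard_normalize(slide_tile, reference_tile)
```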

12.
Entropy (Basel) ; 25(8)2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37628227

ABSTRACT

Designing reasonable MAC scheduling strategies is an important means of ensuring transmission quality in wireless sensor networks (WSNs). When there are multiple available routes from the source to the destination, it is necessary to combine a data traffic allocation mechanism with a multi-path MAC scheduling scheme in order to ensure QoS. This paper develops a multi-path resource allocation method for multi-channel wireless sensor networks, which uses random-access technology to perform MAC scheduling and selects the transmission path for each packet probabilistically. Theoretical analysis and simulation experiments show that the proposed strategy provides a reliable throughput capacity region. Meanwhile, owing to the use of random-access technology, the computational complexity of the proposed algorithm is independent of the number of links and channels.
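
A per-packet probabilistic path choice of the kind described can be as simple as weighted sampling over the available routes; the route names and weights below are illustrative, whereas the paper derives the allocation probabilities from its throughput analysis:

```python
import random

paths = ["route_a", "route_b", "route_c"]   # available multi-hop routes
weights = [0.5, 0.3, 0.2]                   # allocation probabilities (illustrative)

def pick_path():
    # each packet independently samples its transmission path
    return random.choices(paths, weights=weights, k=1)[0]

print([pick_path() for _ in range(5)])
```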

13.
Sensors (Basel) ; 23(16)2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37631792

ABSTRACT

Traditional encoder-decoder networks like U-Net have been extensively used for polyp segmentation. However, such networks have demonstrated limitations in explicitly modeling long-range dependencies: local patterns are emphasized over the global context, as each convolutional kernel attends to only a local subset of pixels in the image. Several recent transformer-based networks have been shown to overcome such limitations. These networks encode long-range dependencies using self-attention and thus learn highly expressive representations. However, self-attention over the whole image is expensive to compute, as its cost grows quadratically with the number of pixels. Thus, patch embedding has been utilized, which groups small regions of the image into single input features. Nevertheless, these transformers still lack inductive bias, even when the image is treated as a 1D sequence of visual tokens, resulting in an inability to generalize to local contexts due to limited low-level features. We introduce a hybrid transformer combined with a convolutional mixing network to overcome the computational and long-range dependency issues. A pretrained transformer network is used as a feature-extracting encoder, and a mixing module network (MMNet) is introduced to capture long-range dependencies at reduced computational cost. Specifically, in the mixing module network, we use depth-wise and 1 × 1 convolutions to model long-range dependencies, establishing spatial and cross-channel correlation, respectively. The proposed approach is evaluated qualitatively and quantitatively on five challenging polyp datasets across six metrics. Our MMNet outperforms the previous best polyp segmentation methods.
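
The depth-wise plus 1 × 1 convolution pattern described above can be sketched as a small PyTorch block; MMNet's exact architecture (kernel size, normalization, residual placement) is not specified here, so these choices are illustrative assumptions:

```python
import torch.nn as nn

class MixingBlock(nn.Module):
    # depth-wise conv mixes spatially (per channel); 1x1 conv mixes across channels
    def __init__(self, channels, kernel_size=7):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size,
                                 padding=kernel_size // 2, groups=channels)
        self.channel = nn.Conv2d(channels, channels, 1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.act(self.norm(self.channel(self.spatial(x))))
```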


Subject(s)
Algorithms , Benchmarking , Electric Power Supplies , Learning
14.
Cogn Sci ; 47(8): e13330, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37641424

ABSTRACT

We study human performance in two classical NP-hard optimization problems: Set Cover and Maximum Coverage. We suggest that Set Cover and Max Coverage are related to means selection problems that arise in human problem-solving and in pursuing multiple goals: The relationship between goals and means is expressed as a bipartite graph where edges between means and goals indicate which means can be used to achieve which goals. While these problems are believed to be computationally intractable in general, they become more tractable when the structure of the network resembles a tree. Thus, our main prediction is that people should perform better with goal systems that are more tree-like. We report three behavioral experiments which confirm this prediction. Our results suggest that combinatorial parameters that are instrumental to algorithm design can also be useful for understanding when and why people struggle to choose between multiple means to achieve multiple goals.
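
For readers unfamiliar with the task: Maximum Coverage asks which j means (sets) jointly cover the most goals. The standard greedy heuristic below illustrates the problem structure on a made-up instance; it is not the experimental procedure of the paper:

```python
def greedy_max_coverage(sets, j):
    covered, chosen = set(), []
    for _ in range(j):
        best = max(sets, key=lambda s: len(s - covered))  # largest marginal gain
        chosen.append(best)
        covered |= best
    return chosen, covered

means = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]  # goals each means achieves
print(greedy_max_coverage(means, 2))
```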


Subject(s)
Algorithms , Goals , Humans , Problem Solving
15.
Sensors (Basel) ; 23(15)2023 Jul 28.
Article in English | MEDLINE | ID: mdl-37571550

ABSTRACT

In recent years, environmental sound classification (ESC) has prevailed in many artificial intelligence Internet of Things (AIoT) applications, as environmental sound contains a wealth of information that can be used to detect particular events. However, existing ESC methods have high computational complexity and are not suitable for deployment on AIoT devices with constrained computing resources. It is therefore important to propose a model with both high classification accuracy and low computational complexity. In this work, a new ESC method named BSN-ESC is proposed, comprising a big-small network-based ESC model that assesses the classification difficulty level and adaptively activates a big or small network for classification, as well as a pre-classification processing technique with log-mel spectrogram refining, which prevents distortion of the frequency-domain characteristics at the joint of two adjacent sound clips. With the proposed methods, computational complexity is significantly reduced while classification accuracy remains high. The BSN-ESC model is implemented on both CPU and FPGA to evaluate its performance on PC and embedded systems with ESC-50, the most commonly used dataset. The proposed model achieves the lowest computational complexity, with only 0.123G floating-point operations (FLOPs), a reduction of up to 2309x compared with state-of-the-art methods, while delivering a high classification accuracy of 89.25%. This work demonstrates that ESC can be realized on AIoT devices with constrained computational resources.
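
The log-mel front end mentioned above is standard in ESC pipelines; a minimal extraction sketch with librosa follows (the parameters are illustrative, and the paper's refinement at clip boundaries is not reproduced here):

```python
import librosa

def logmel(path, sr=16000, n_mels=64):
    y, _ = librosa.load(path, sr=sr)                         # load and resample clip
    m = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(m)                            # log-scale mel energies

# usage: spec = logmel("dog_bark.wav")  # e.g., an ESC-50 clip
```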

16.
Stat Med ; 42(23): 4207-4235, 2023 Oct 15.
Article in English | MEDLINE | ID: mdl-37527835

ABSTRACT

Additive frailty models are used to model correlated survival data. However, the complexity of the models increases with cluster size to the extent that practical usage becomes increasingly challenging. We present a modification of the additive genetic gamma frailty (AGGF) model, the lean AGGF (L-AGGF) model, which alleviates some of these challenges by using a leaner additive decomposition of the frailty. The performances of the models were compared and evaluated in a simulation study. The L-AGGF model was used to analyze population-wide data on clustering of melanoma in 2 391 125 two-generational Norwegian families, 1960-2015. Using this model, we could analyze the complete data set, while the original model limited the analysis to a restricted data set (with cluster sizes ≤ 7). We found a substantial clustering of melanoma in Norwegian families and large heterogeneity in melanoma risk across the population, where 52% of the frailty was attributed to the 10% of the population at highest unobserved risk. Due to the improved scalability, the L-AGGF model enables a wider range of analyses of population-wide data compared to the AGGF model. Moreover, the methods outlined here make it possible to perform these analyses in a computationally efficient manner.


Subject(s)
Frailty , Melanoma , Humans , Models, Statistical , Frailty/epidemiology , Computer Simulation , Cluster Analysis , Melanoma/epidemiology , Melanoma/genetics , Survival Analysis
17.
Cogn Sci ; 47(6): e13304, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37325976

ABSTRACT

A central aim of cognitive science is to understand the fundamental mechanisms that enable humans to navigate and make sense of complex environments. In this letter, we argue that computational complexity theory, a foundational framework for evaluating computational resource requirements, holds significant potential in addressing this challenge. As humans possess limited cognitive resources for processing vast amounts of information, understanding how humans perform complex cognitive tasks requires comprehending the underlying factors that drive information processing demands. Computational complexity theory provides a comprehensive theoretical framework to achieve this goal. By adopting this framework, we can gain new insights into how cognitive systems work and develop a more nuanced understanding of the relation between task complexity and human behavior. We provide empirical evidence supporting our argument and identify several open research questions and challenges in applying computational complexity theory to human decision-making and cognitive science at large.


Subject(s)
Cognition , Problem Solving , Humans , Motivation , Decision Making
18.
Sensors (Basel) ; 23(9)2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37177604

ABSTRACT

This work investigates the effectiveness of deep neural networks in the realm of battery charging, introducing an innovative control methodology that not only ensures safety and optimizes the charging current, but also substantially reduces the computational complexity with respect to traditional model-based approaches. In addition to their high computational cost, model-based approaches are hindered by the need to accurately know the model parameters and the internal states of the battery, which are typically unmeasurable in realistic scenarios. In this regard, the deep learning-based methodology described in this work has been applied, for the first time to the best of the authors' knowledge, to scenarios where the battery's internal states cannot be measured and an estimate of the battery's parameters is unavailable. The reported results from the statistical validation of the methodology underline the efficacy of this approach in approximating the optimal charging policy.

19.
Math Biosci Eng ; 20(5): 7828-7844, 2023 Feb 21.
Article in English | MEDLINE | ID: mdl-37161174

ABSTRACT

To solve the equilibrium problem of supply chain networks, a new subgradient extragradient method is introduced. The proposal achieves adaptive parameter selection and uses a one-step subgradient projection operator, which can theoretically reduce the computational complexity of the algorithm. The introduction of subgradient projection operators makes the algorithm easier to compute, transforming the difficulty of projection into the problem of finding suitable subdifferentiable functions. The convergence proof given further shows the advantages of the proposed algorithm. Finally, the algorithm is applied to a concrete supply chain network model. Comparisons show that the proposed algorithm outperforms other methods in terms of CPU running time and iteration count.
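
A minimal sketch of the subgradient extragradient idea on a toy monotone variational inequality over the nonnegative orthant, with a common adaptive step-size rule; the paper's operator, feasible set, and parameter rule are its own, and the affine mapping below merely stands in for the supply chain equilibrium mapping:

```python
import numpy as np

def F(x):  # monotone affine operator (toy stand-in for the network mapping)
    M = np.array([[2.0, 1.0], [-1.0, 2.0]])
    return M @ x + np.array([-1.0, -2.0])

def project_C(x):  # projection onto the feasible set C = R^n_+
    return np.maximum(x, 0.0)

x, tau, mu = np.ones(2), 1.0, 0.5
for _ in range(200):
    fx = F(x)
    y = project_C(x - tau * fx)
    fy = F(y)
    # one-step subgradient projection onto a half-space replaces
    # a second (possibly expensive) projection onto C
    w = x - tau * fy
    g = x - tau * fx - y                   # normal of the half-space
    if g @ g > 0:
        w = w - max(0.0, g @ (w - y)) / (g @ g) * g
    # adaptive step size: no Lipschitz constant needed
    if np.linalg.norm(fx - fy) > 0:
        tau = min(tau, mu * np.linalg.norm(x - y) / np.linalg.norm(fx - fy))
    x = w
print(x)  # approaches the equilibrium point [0, 1]
```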

20.
Sensors (Basel) ; 23(6)2023 Mar 11.
Article in English | MEDLINE | ID: mdl-36991750

ABSTRACT

Spiking neural networks (SNNs) are a topic of growing interest. They resemble actual neural networks in the brain more closely than their second-generation counterparts, artificial neural networks (ANNs). SNNs have the potential to be more energy efficient than ANNs on event-driven neuromorphic hardware, which could yield drastic maintenance cost reductions for neural network models, as energy consumption would be much lower than for the regular deep learning models hosted in the cloud today. However, such hardware is still not widely available. On standard computer architectures consisting mainly of central processing units (CPUs) and graphics processing units (GPUs), ANNs have the upper hand in execution speed, owing to their simpler models of neurons and of the connections between them. They also generally win in terms of learning algorithms, as SNNs do not reach the same levels of performance as their second-generation counterparts on typical machine learning benchmark tasks, such as classification. In this paper, we review existing learning algorithms for spiking neural networks, divide them into categories by type, and assess their computational complexity.
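
To make the contrast with ANNs concrete: a spiking neuron communicates through discrete events rather than continuous activations. A minimal leaky integrate-and-fire (LIF) simulation is sketched below; the parameters are illustrative, and LIF is only one of several neuron models used across the reviewed algorithms:

```python
import numpy as np

def lif(input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    v, spikes = 0.0, []
    for i in input_current:
        v += dt / tau * (-v + i)   # leaky integration of input current
        if v >= v_th:              # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset            # membrane potential resets after firing
        else:
            spikes.append(0)
    return spikes

rng = np.random.default_rng(5)
print(sum(lif(rng.uniform(0, 2, size=200))))   # number of output spikes
```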


Subject(s)
Algorithms , Neural Networks, Computer , Humans , Action Potentials/physiology , Computers , Brain/physiology