Results 1 - 8 of 8
1.
PLoS One ; 19(1): e0292345, 2024.
Article in English | MEDLINE | ID: mdl-38180975

ABSTRACT

Canny edge detection performs many computationally expensive operations on an image, including Gaussian filtering, gradient calculation, non-maximum suppression, and double-threshold judgment. These consume substantial processing time and pose a serious challenge to the algorithm's real-time requirements. Traditional Canny edge detection implementations mainly rely on customized hardware such as DSPs and FPGAs, which suffer from long development cycles, difficult debugging, and high resource consumption; meanwhile, the widely adopted CUDA platform offers poor cross-platform portability. To address these problems, a fine-grained parallel Canny edge detection method is proposed, optimized in three respects: task partitioning, vectorized memory access, and NDRange tuning, realizing CPU-GPU collaborative parallelism. Parallel Canny edge detection methods based on a multi-core CPU and on the CUDA architecture are also designed for comparison. The experimental results show that the OpenCL-accelerated Canny edge detection algorithm (OCL_Canny) achieves a 20.68x speedup over the serial CPU algorithm at a 7452 x 8024 image resolution, a 3.96x speedup over the multi-threaded CPU parallel Canny algorithm at 3500 x 3500, and a 1.21x speedup over the CUDA-based parallel Canny algorithm at 1024 x 1024. These results verify the effectiveness and performance portability of the proposed parallel Canny edge detection algorithm and provide a reference for research on fast computation over large-scale image data.
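The stages named in the abstract can be sketched serially in a few lines. This toy Python version (illustrative 3x3 kernel, thresholds, and image, with non-maximum suppression omitted for brevity; it is not the paper's OCL_Canny code) shows the per-pixel work that a fine-grained parallel version would distribute across GPU work-items:

```python
# Serial toy sketch of Canny stages: Gaussian smoothing, Sobel gradient,
# double threshold. The nested pixel loops below are exactly the work a
# fine-grained parallel version maps to one work-item per pixel.

def gaussian_blur_3x3(img):
    """3x3 Gaussian smoothing with kernel (1/16)*[[1,2,1],[2,4,2],[1,2,1]]."""
    h, w = len(img), len(img[0])
    k = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
    out = [[img[y][x] for x in range(w)] for y in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(k[dy + 1][dx + 1] * img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s / 16.0
    return out

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| via Sobel operators."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            mag[y][x] = abs(gx) + abs(gy)
    return mag

def double_threshold(mag, low, high):
    """Classify each pixel: 2 = strong edge, 1 = weak, 0 = suppressed."""
    return [[2 if v >= high else (1 if v >= low else 0) for v in row]
            for row in mag]

# A toy 6x6 image with a vertical step edge between columns 2 and 3.
img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]
edges = double_threshold(sobel_magnitude(gaussian_blur_3x3(img)), 100, 400)
```

Each stage depends only on a small pixel neighborhood, which is why task partitioning into image tiles and per-pixel work-items parallelizes it so well.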


Subject(s)
Algorithms , Software , Acceleration , Big Data , Judgment
2.
Entropy (Basel) ; 25(10)2023 Sep 29.
Article in English | MEDLINE | ID: mdl-37895519

ABSTRACT

As one of the most critical tasks in legal artificial intelligence, legal judgment prediction (LJP) has garnered growing attention, especially in the civil law system. However, current methods often overlook the challenge of imbalanced label distributions, treating each label with equal importance, which can bias the model toward high-frequency labels. In this paper, we propose a label-enhanced prototypical network (LPN) suitable for LJP, which adopts a strategy of uniform encoding and separate decoding. Specifically, LPN uses a multi-scale convolutional neural network to uniformly encode the factual description of a case and capture long-distance features of the document. At the decoding end, a prototypical network incorporating label semantic features guides the learning of prototype representations for high-frequency and low-frequency labels, respectively. We also propose a prototype-prototype loss to optimize the prototypical representation. We conduct extensive experiments on two real datasets and show that our proposed method effectively improves the performance of LJP, with average F1 scores 1.23% and 1.13% higher than the state-of-the-art model on the two subtasks, respectively.
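The core prototype idea can be illustrated in isolation: each label gets a prototype vector and a query is assigned to the nearest one. This toy sketch uses made-up 2-D embeddings and hypothetical label names; the paper's encoder, label-semantic features, and prototype-prototype loss are not reproduced:

```python
# Toy prototype-based classification: a label's prototype is the mean of
# its support embeddings, and queries go to the nearest prototype. The
# mean treats a rare label's few examples the same as a frequent label's
# many examples, which is the property LPN builds on.

def mean_vec(vecs):
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_prototype(query, prototypes):
    """Return the label whose prototype is closest to the query embedding."""
    return min(prototypes, key=lambda lbl: sq_dist(query, prototypes[lbl]))

# Hypothetical charges: "theft" is high-frequency, "fraud" low-frequency.
support = {
    "theft": [[0.9, 0.1], [1.1, -0.1], [1.0, 0.2], [0.8, 0.0]],
    "fraud": [[-1.0, 0.9], [-0.8, 1.1]],
}
prototypes = {lbl: mean_vec(vs) for lbl, vs in support.items()}
pred = nearest_prototype([-0.7, 0.8], prototypes)
```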

3.
IEEE Trans Image Process ; 32: 4299-4313, 2023.
Article in English | MEDLINE | ID: mdl-37490375

ABSTRACT

In this paper, we address the problem of multi-view clustering (MVC), integrating the close relationships among views to learn a consistent clustering result via triplex information maximization (TIM). TIM rests on three essential principles, each realized as a maximization of mutual information. 1) Contained: the first and foremost requirement for MVC is to fully employ the self-contained information in each view. 2) Complementary: the feature-level complementary information across pairwise views should first be quantified and then integrated to improve clustering. 3) Compatible: the rich cluster-level compatible information shared among the individual clusterings of each view is significant for ensuring a better final consistent result. Following these principles, TIM enjoys the best of the view-specific, cross-view feature-level, and cross-view cluster-level information within and among views. For principle 2, we design an automatic view correlation learning (AVCL) mechanism that quantifies the complementary information across views by automatically learning cross-view weights between pairwise views, instead of the view-specific weights most existing MVC methods use. Specifically, we propose two strategies for AVCL, a feature-based and a cluster-based strategy, for effective cross-view weight learning, leading to two versions of our method, TIM-F and TIM-C, respectively. We further present a two-stage optimization method for the proposed approaches, followed by theoretical convergence and complexity analysis. Extensive experimental results suggest the effectiveness and superiority of our methods over many state-of-the-art methods.

4.
Sensors (Basel) ; 23(4)2023 Feb 16.
Article in English | MEDLINE | ID: mdl-36850810

ABSTRACT

A convolutional neural network-based multiobject detection and tracking algorithm can be applied to vehicle detection and traffic flow statistics, enabling smart transportation. To address the high computational complexity of multiobject detection and tracking algorithms, their large number of model parameters, and the difficulty of achieving high throughput at low power on edge devices, we design and implement a low-power, low-latency, high-precision, configurable vehicle detector based on a field-programmable gate array (FPGA) with the YOLOv3 (You-Only-Look-Once-version-3) and YOLOv3-tiny CNNs (convolutional neural networks) and the Deepsort algorithm. First, we use a dynamic-threshold structured pruning method based on a scaling factor to significantly compress the detection model size without reducing accuracy. Second, a dynamic 16-bit fixed-point quantization algorithm is used to quantize the network parameters, reducing the memory footprint of the network model. Furthermore, we generate a re-identification (RE-ID) dataset from the UA-DETRAC dataset and train the appearance feature extraction network of the Deepsort algorithm to improve vehicle tracking performance. Finally, we apply hardware optimization techniques such as memory interlayer multiplexing, parameter rearrangement, ping-pong buffering, multichannel transfer, pipelining, Im2col+GEMM, and the Winograd algorithm to improve resource utilization and computational efficiency. The experimental results demonstrate that the compressed YOLOv3 and YOLOv3-tiny network models shrink by 85.7% and 98.2%, respectively. Dual-module parallel acceleration meets the demands of 6-way parallel video stream vehicle detection with a peak throughput of 168.72 fps.
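The two compression steps can be illustrated in miniature. This sketch uses illustrative thresholds and weight values, not the paper's models: scaling-factor pruning drops whole channels below a threshold, and dynamic fixed-point quantization picks the fractional bit width from each tensor's value range:

```python
# Toy sketches of (1) structured channel pruning by scaling factor and
# (2) dynamic 16-bit fixed-point quantization (1 sign bit + 15 data bits,
# fractional bits chosen per tensor from its magnitude range).
import math

def prune_channels(scales, threshold):
    """Keep the indices of channels whose scaling factor >= threshold."""
    return [i for i, s in enumerate(scales) if abs(s) >= threshold]

def quantize_fixed16(weights):
    """Quantize to 16-bit fixed point; returns dequantized values + frac bits."""
    max_abs = max(abs(w) for w in weights)
    int_bits = max(0, math.ceil(math.log2(max_abs)))  # bits for the magnitude
    frac_bits = 15 - int_bits                         # the rest are fractional
    scale = 2.0 ** frac_bits
    q = [max(-32768, min(32767, round(w * scale))) for w in weights]
    return [v / scale for v in q], frac_bits

# Channels with tiny BN-style scaling factors contribute little: prune them.
kept = prune_channels([0.8, 0.01, 0.5, 0.002], threshold=0.05)
deq, fb = quantize_fixed16([0.5, -0.25, 0.9])  # all |w| < 1 -> 15 frac bits
```

The dequantization error is bounded by half a quantization step, here 2^-16, which is why 16-bit fixed point usually preserves detection accuracy.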

5.
Sensors (Basel) ; 22(16)2022 Aug 12.
Article in English | MEDLINE | ID: mdl-36015805

ABSTRACT

Control-flow attestation (CFA) is a mechanism that securely logs the execution paths of software running on remote devices. By launching a challenge-response process, it can detect whether a device's control flow has been hijacked. In the growing landscape of the Internet of Things, more and more peer devices need to communicate, share sensed data, and conduct inter-operations without the involvement of a trusted center. To make CFA mechanisms scalable and to mitigate single points of failure, it is important to design a decentralized CFA scheme. This paper proposes a decentralized scheme (CFRV) to verify the control flow on remote devices. Moreover, it introduces a token (asymmetric secret slices) on peer devices to make the attestation process mutual. In this way, CFRV can mitigate a particular kind of man-in-the-middle attack called response defraud. We built our prototype toolbox on a Raspberry Pi as a proof of concept. In our evaluation, CFRV protects the verification process from malicious verifiers and man-in-the-middle attacks. The proposed mechanism also limits PKI (Public Key Infrastructure) usage to a single stage, saving the peer devices' computational cost. Compared with related decentralized schemes, the duration of cryptographic operations is reduced by 40%.
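The basic challenge-response step of CFA can be sketched as follows. This is a generic illustration, not CFRV itself: the mutual attestation with asymmetric secret slices is omitted, and the path and block names are made up:

```python
# Generic CFA challenge-response sketch: the verifier sends a fresh nonce,
# the prover hash-chains the executed basic-block IDs with that nonce, and
# the verifier compares the report against the hash of the expected path.
# The nonce prevents replaying an old report.
import hashlib
import os

def attest_path(path, nonce):
    """Hash-chain the basic-block IDs of an execution path with a nonce."""
    digest = nonce
    for block_id in path:
        digest = hashlib.sha256(digest + block_id.encode()).digest()
    return digest

expected_path = ["entry", "check", "grant", "exit"]   # known-good control flow
hijacked_path = ["entry", "check", "shellcode", "exit"]

nonce = os.urandom(16)                       # verifier's challenge
report = attest_path(hijacked_path, nonce)   # prover's (compromised) response
ok = report == attest_path(expected_path, nonce)
```

A hijacked path yields a different hash chain, so the verifier rejects the report; `ok` is False here.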


Subject(s)
Computer Security , Software , Humans
6.
Entropy (Basel) ; 24(10)2022 Sep 27.
Article in English | MEDLINE | ID: mdl-37420392

ABSTRACT

Source code summarization (SCS) is a natural language description of source code functionality. It can help developers understand programs and maintain software efficiently. Retrieval-based methods generate SCS by reorganizing terms selected from the source code or by reusing the SCS of similar code snippets. Generative methods generate SCS via an attentional encoder-decoder architecture. A generative method can generate SCS for any code, but the accuracy sometimes falls far short of expectations (due to the lack of large, high-quality training sets). A retrieval-based method is considered to have higher accuracy, but it usually fails to generate SCS for source code when no similar candidate exists in the database. To effectively combine the advantages of retrieval-based and generative methods, we propose a new method: Re_Trans. For a given code snippet, we first use the retrieval-based method to obtain its semantically most similar code and the corresponding SCS (S_RM). Then, we feed the given code and the similar code into the trained discriminator. If the discriminator outputs one, we take S_RM as the result; otherwise, we use the generative model, a Transformer, to generate the given code's SCS. In particular, we use AST (Abstract Syntax Tree)-augmented and code sequence-augmented information to make the semantic extraction of the source code more complete. Furthermore, we build a new SCS retrieval library from the public dataset. We evaluate our method on a dataset of 2.1 million Java code-comment pairs, and the experimental results show improvement over the state-of-the-art (SOTA) benchmarks, demonstrating the effectiveness and efficiency of our method.
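The retrieve-or-generate decision can be sketched with stand-ins: here a plain token-overlap (Jaccard) threshold plays the role of the trained discriminator, and a stub plays the Transformer generator; the tiny library and snippets are made up:

```python
# Toy retrieve-or-generate pipeline: retrieve the most similar code's
# summary; reuse it only if similarity clears a threshold, otherwise
# fall back to a (stubbed) generative model.

def jaccard(a, b):
    """Token-set Jaccard similarity between two whitespace-tokenized codes."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

def summarize(code, library, threshold=0.6):
    """Reuse the retrieved summary if similarity clears the threshold."""
    best = max(library, key=lambda entry: jaccard(code, entry["code"]))
    if jaccard(code, best["code"]) >= threshold:
        return best["summary"]                     # retrieval-based path
    return "<generated summary for: " + code + ">"  # generative fallback (stub)

library = [
    {"code": "int add ( int a , int b ) { return a + b ; }",
     "summary": "returns the sum of two integers"},
    {"code": "void sort ( int * a , int n )",
     "summary": "sorts an integer array in place"},
]
out = summarize("int add ( int x , int y ) { return x + y ; }", library)
```

A near-duplicate query reuses the stored summary; an unfamiliar snippet falls through to the generator, which is exactly the complementarity the abstract describes.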

7.
Comput Intell Neurosci ; 2021: 8056225, 2021.
Article in English | MEDLINE | ID: mdl-34135953

ABSTRACT

Software testing is a widespread validation means of software quality assurance in industry. Intelligent optimization algorithms have proved to be an effective way to generate test data automatically. The firefly algorithm has received extensive attention and been widely used to solve optimization problems because of its few parameters and simple implementation. To overcome the slow convergence rate and low accuracy of the firefly algorithm, a novel firefly algorithm with deep learning is proposed to generate structural test data. Initially, the population is divided into a male subgroup and a female subgroup. Following the random attraction model, each male firefly is attracted by another randomly selected female firefly, focusing the global search on the whole space. Each female firefly performs a local search under the leadership of the general center firefly, which is constructed from historical experience with deep learning. In the final period of the search, a chaos search is conducted near the best firefly to improve search accuracy. Simulation results show that the proposed algorithm achieves better performance in terms of success coverage rate, coverage time, and diversity of solutions.
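For context, the classic firefly move the paper extends looks like this: a dimmer firefly moves toward a brighter one with distance-decayed attraction plus a small random step. This is a minimal 1-D sketch with made-up parameters; the paper's gendered subgroups, deep-learned center firefly, and chaos search are not reproduced:

```python
# Classic firefly algorithm on a 1-D cost function: brightness is inverse
# cost, attraction decays as exp(-gamma * r^2), and a small random step
# keeps the search exploring.
import math
import random

def firefly_minimize(f, n=10, iters=200, beta0=1.0, gamma=1.0, alpha=0.05):
    random.seed(0)  # deterministic for illustration
    xs = [random.uniform(-4.0, 4.0) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if f(xs[j]) < f(xs[i]):   # j is brighter (lower cost)
                    r2 = (xs[i] - xs[j]) ** 2
                    beta = beta0 * math.exp(-gamma * r2)
                    xs[i] += beta * (xs[j] - xs[i]) \
                             + alpha * (random.random() - 0.5)
    return min(xs, key=f)

best = firefly_minimize(lambda x: (x - 2.0) ** 2)   # minimum at x = 2
```

In test data generation, `f` would instead score how close an input comes to covering a target branch.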


Subject(s)
Deep Learning , Algorithms , Computer Simulation , Female , Humans , Male , Software
8.
J Healthc Eng ; 2018: 1205354, 2018.
Article in English | MEDLINE | ID: mdl-30123438

ABSTRACT

Question answering (QA) systems are becoming a focus of research in medical health, as they aim to provide users with fast, accurate answers. Many traditional QA systems handle only simple factual questions and cannot produce accurate answers for complex questions. To realize an intelligent QA system for disease diagnosis and treatment in medical informatization, in this paper we propose a depth evidence score fusion algorithm for a Chinese medical intelligent question answering system, which measures the text information in several algorithmic ways and ensures that the QA system accurately outputs the optimal candidate answer. At the semantic level, a new text semantic evidence score based on Word2vec is proposed, which calculates the semantic similarity between texts. Experimental results on a medical text corpus show that the depth evidence score fusion algorithm performs better in the evidence-scoring module of the intelligent QA system.
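A Word2vec-style semantic evidence score can be sketched as cosine similarity between averaged word vectors. The 3-D vectors below are made-up stand-ins; a real system would load trained Word2vec embeddings:

```python
# Toy semantic evidence score: embed a text as the mean of its word
# vectors, then score a candidate answer by cosine similarity to the
# question. Related terms score high even with no shared surface words.
import math

VECS = {  # illustrative embeddings, not trained values
    "fever":       [0.9, 0.1, 0.0],
    "high":        [0.7, 0.2, 0.1],
    "temperature": [0.8, 0.15, 0.05],
    "rain":        [0.0, 0.1, 0.9],
}

def text_vec(words):
    vs = [VECS[w] for w in words if w in VECS]
    return [sum(v[i] for v in vs) / len(vs) for i in range(3)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_score(question, candidate):
    return cosine(text_vec(question.split()), text_vec(candidate.split()))

s1 = semantic_score("fever", "high temperature")  # semantically related
s2 = semantic_score("fever", "rain")              # unrelated
```

In a fusion setting, this score would be combined with the other evidence scores before ranking candidate answers.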


Subject(s)
Artificial Intelligence , Information Storage and Retrieval/methods , Medical Informatics/methods , Algorithms , Databases, Factual , Humans