1.
J Biomed Inform; 156: 104668, 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38857737

ABSTRACT

OBJECTIVE: The objective of this study is to integrate PICO knowledge into the clinical research text summarization process, aiming to enhance the model's comprehension of biomedical texts while capturing the content most important to summary readers, ultimately improving summary quality. METHODS: We propose a clinical research text summarization method called DKGE-PEGASUS (Domain-Knowledge and Graph Convolutional Enhanced PEGASUS). The model consists of three components: a PICO label prediction module, a text information re-mining unit based on a Graph Convolutional Network (GCN), and a pre-trained summarization model. First, the PICO label prediction module identifies PICO elements in clinical research texts while producing word embeddings enriched with PICO knowledge. Then, a GCN reinforces the encoder of the pre-trained summarization model, mining text information more deeply while explicitly injecting PICO knowledge. Finally, the outputs of the PICO label prediction module, the GCN re-mining unit, and the encoder of the pre-trained model are fused to produce the final encoding, which the decoder turns into a summary. RESULTS: Experiments on two datasets, PubMed and CDSR, demonstrated the effectiveness of the method, with Rouge-1 scores of 42.64 and 38.57, respectively. In a comparison of summaries generated for a sample biomedical text, the quality of our results also clearly exceeded that of the baseline model. CONCLUSION: The proposed method is better equipped to identify the critical elements of clinical research texts and produces higher-quality summaries.
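A minimal PyTorch sketch of the fusion step described in the abstract: the outputs of a PICO label-prediction head, a GCN-based re-mining unit, and a pre-trained encoder are combined before decoding. The hidden sizes, the learned weighted-sum fusion rule, and the token adjacency matrix are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)

    def forward(self, h, a_hat):
        return torch.relu(a_hat @ self.w(h))

class FusionEncoder(nn.Module):
    def __init__(self, dim, n_pico_labels=5):
        super().__init__()
        self.pico_head = nn.Linear(dim, n_pico_labels)   # PICO label prediction
        self.pico_embed = nn.Linear(n_pico_labels, dim)  # PICO-enriched features
        self.gcn = SimpleGCNLayer(dim)                   # text re-mining unit
        self.gate = nn.Parameter(torch.ones(3) / 3)      # learned fusion weights

    def forward(self, enc_out, a_hat):
        pico_logits = self.pico_head(enc_out)
        pico_feat = self.pico_embed(pico_logits.softmax(-1))
        gcn_feat = self.gcn(enc_out, a_hat)
        w = self.gate.softmax(0)
        fused = w[0] * enc_out + w[1] * pico_feat + w[2] * gcn_feat
        return fused, pico_logits

# toy run: 1 sequence of 6 tokens, hidden size 32
enc = torch.randn(1, 6, 32)
adj = torch.eye(6)  # stand-in for a normalized token adjacency matrix
fused, logits = FusionEncoder(32)(enc, adj)
print(fused.shape, logits.shape)  # (1, 6, 32) and (1, 6, 5)
```

In the full model, the fused states would take the place of the encoder output consumed by the PEGASUS decoder.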

3.
Biomimetics (Basel); 9(4), 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38667215

ABSTRACT

In today's fast-paced and ever-changing environment, the emergence of a wide range of optimization problems has made algorithms with enhanced global optimization capability increasingly crucial. To tackle this issue, we present a new algorithm called Random Particle Swarm Optimization (RPSO) based on cosine similarity. RPSO is evaluated on both the IEEE Congress on Evolutionary Computation (CEC) 2022 test suite and Convolutional Neural Network (CNN) classification experiments. RPSO builds upon the traditional PSO algorithm with several key enhancements. First, the parameter selection is adapted and a mechanism called Random Contrastive Interaction (RCI) is introduced; this mechanism fosters information exchange among particles, improving the algorithm's ability to explore the search space effectively. Second, quadratic interpolation (QI) is incorporated to boost local search efficiency. RPSO uses cosine similarity to decide when to apply QI and RCI, dynamically updating population information to steer the algorithm toward optimal solutions. On the CEC 2022 test suite, RPSO is compared with recent PSO variants and top algorithms from the CEC community. The results highlight RPSO's strong competitiveness, validating its effectiveness on global optimization tasks. In experiments optimizing CNNs for medical image classification, RPSO demonstrated stability and accuracy comparable to other algorithms and variants, further confirming its value in improving CNN classification performance.
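The interplay of the update rules can be sketched in a few dozen lines. Below is a minimal NumPy implementation of a PSO loop augmented with an RCI step and a QI local search; the cosine-similarity gate deciding when each operator fires, and all constants, are illustrative assumptions rather than the paper's exact rules.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def quad_interp(x1, x2, x3, f1, f2, f3):
    """Vertex of the parabola through three points, per dimension."""
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = 2 * ((x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3)
    return num / (den + 1e-12)

def rpso(f, dim=10, n=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        for i in range(n):
            j = int(rng.integers(n))
            # RCI: if particle i and a random peer head in opposing
            # directions (negative cosine), pull i toward the better one.
            if j != i and cosine(x[i] - g, x[j] - g) < 0:
                better = x[j] if pval[j] < pval[i] else pbest[i]
                x[i] += rng.random() * (better - x[i])
            fi = f(x[i])
            if fi < pval[i]:
                pval[i], pbest[i] = fi, x[i].copy()
        top = pval.argsort()[:3]                 # QI around the 3 best particles
        cand = quad_interp(*pbest[top], *pval[top])
        if f(cand) < pval[top[0]]:
            pbest[top[0]], pval[top[0]] = cand, f(cand)
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, val = rpso(lambda z: float(np.sum(z**2)))  # sphere function as a toy objective
print(round(val, 6))
```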

4.
Sci Rep; 14(1): 7445, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38548845

ABSTRACT

The original Harris hawks optimization (HHO) algorithm suffers from unstable optimization performance and a tendency to stagnate, and most improved HHO variants still fail to effectively help the algorithm escape local optima. To address this, an integrated improved HHO (IIHHO) algorithm is proposed. First, the linear escape energy used by the original HHO is relatively simple and does not reflect how prey actually escape in nature, so an intermittent energy regulator is introduced to adjust the energy of the Harris hawks; this improves the local search ability of the algorithm while restoring the prey's rest mechanism. Second, to tame the uncertainty of the random vector, a more regular vector-change mechanism is used instead, with the attenuation vector obtained by modifying a composite function. Third, the search scope of the Levy flight is further clarified, which helps the algorithm jump out of local optima. Finally, to remove the computational limitations caused by a fixed step size, the Cardano formula is introduced to adjust the step-size setting and improve the accuracy of the algorithm. The performance of IIHHO is first analyzed on the IEEE Congress on Evolutionary Computation 2013 (CEC 2013) function test set against seven improved evolutionary algorithms; the convergence values of its iteration curves are better than those of most of the compared algorithms, verifying the effectiveness of the proposed IIHHO. Second, IIHHO is compared with three other state-of-the-art (SOTA) algorithms on the CEC 2022 function test set, where the experiments show that it retains a strong ability to find optimal values. Third, IIHHO is applied to two different engineering problems, where its minimum-cost results show clear advantages in searching the design space. Together, these results demonstrate that the proposed IIHHO is promising for numerical optimization and engineering applications.
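A brief NumPy sketch of two of the modifications described above: an intermittent regulator that periodically restores part of the escape energy (modelling prey rest) on top of HHO's linear decay, and a Levy-flight step with an explicitly bounded search scope. The waveform, rest period, and clipping bounds are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np
from math import gamma, sin, pi

def intermittent_energy(t, T, e0=2.0, rest_period=50):
    """HHO's linear escape-energy decay plus a periodic 'rest' restoration."""
    base = e0 * (1 - t / T)                       # original linear decay
    rest = 0.5 * e0 * abs(np.cos(np.pi * t / rest_period))
    return base + rest * (1 - t / T)              # restoration fades over time

def levy_step(dim, rng, beta=1.5, scope=0.05):
    """Mantegna's algorithm for a Levy step, clipped to a bounded scope."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    step = rng.normal(0, sigma, dim) / np.abs(rng.normal(0, 1, dim)) ** (1 / beta)
    return np.clip(step, -scope, scope)           # clarified search scope

rng = np.random.default_rng(0)
T = 500
print([round(intermittent_energy(t, T), 3) for t in (0, 100, 250, 499)])
print(levy_step(5, rng))
```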

5.
Entropy (Basel); 25(9), 2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37761567

ABSTRACT

Images, as a crucial information carrier in the era of big data, are constantly generated, stored, and transmitted, and guaranteeing their security is a hot topic in the information security community. Image encryption is a simple and direct approach to this end. We therefore propose a novel scheme for color image encryption based on eight-base DNA-level permutation and diffusion, termed EDPD. The proposed EDPD integrates the secure hash algorithm SHA-512, a four-dimensional hyperchaotic system, and eight-base DNA-level permutation and diffusion operations conducted on one-dimensional sequences and three-dimensional cubes. More specifically, EDPD has four main stages. First, four initial values for the chaotic system are generated from the plaintext color image using SHA-512, and the four-dimensional hyperchaotic system is constructed from these initial values and the control parameters. Second, a hyperchaotic sequence is generated from the system for the subsequent encryption operations. Third, multiple permutation and diffusion operations are conducted across different dimensions, with dynamic eight-base DNA-level encoding and algebraic operation rules determined by the hyperchaotic sequence. Finally, DNA decoding is performed to obtain the cipher images. Experimental results on common test images verify that EDPD achieves excellent performance in color image encryption and can resist various attacks.
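A condensed sketch of the first stages of such a pipeline: four initial values are folded out of the SHA-512 digest of the plain image, a chaotic map is iterated to produce a keystream, and a diffusion pass is applied. The toy 4-D coupled logistic map stands in for the paper's hyperchaotic system, and a plain XOR stands in for the DNA-level algebra; the byte-folding rule is likewise an illustrative choice.

```python
import hashlib
import numpy as np

def initial_values(img_bytes):
    d = hashlib.sha512(img_bytes).digest()            # 64 bytes
    chunks = [d[i:i + 16] for i in range(0, 64, 16)]
    # fold each 16-byte chunk into a value in (0, 1)
    return [sum(c) % 251 / 251.0 + 1e-3 for c in chunks]

def chaotic_stream(x, n, r=3.99):
    """Iterate a toy 4-D coupled logistic map; emit one byte per step."""
    x = list(x)
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = [r * v * (1 - v) for v in x]              # logistic update
        x = [(v + 0.1 * x[(k + 1) % 4]) % 1.0 for k, v in enumerate(x)]  # coupling
        out[i] = int(sum(x) * 1e6) % 256
    return out

img = np.random.default_rng(0).integers(0, 256, (8, 8, 3), dtype=np.uint8)
ks = chaotic_stream(initial_values(img.tobytes()), img.size)
cipher = img.reshape(-1) ^ ks                          # diffusion pass (XOR)
plain = (cipher ^ ks).reshape(img.shape)               # decryption reverses it
assert (plain == img).all()
```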

6.
Entropy (Basel); 24(7), 2022 Jun 26.
Article in English | MEDLINE | ID: mdl-35885101

ABSTRACT

Image steganography, which hides a small image (the hidden or secret image) in a larger image (the carrier) so that attackers cannot detect the hidden image's presence in the carrier, has become a hot topic in the image security community. Recent deep-learning techniques have pushed image steganography to a new stage. To improve steganographic performance, this paper proposes a novel scheme that uses a Transformer for feature extraction in steganography. In addition, an image encryption algorithm based on recursive permutation is proposed to further enhance the security of the secret image. Extensive experiments demonstrate the effectiveness of the proposed scheme and show that the Transformer outperforms the compared state-of-the-art deep-learning models at feature extraction for steganography. The proposed image encryption algorithm also has good security properties, which further strengthens the scheme.
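A small sketch of the recursive-permutation idea mentioned above for encrypting the secret image before hiding: pixel positions are shuffled by a keyed permutation that is re-applied (recursed) for several rounds. The keying via a seeded PRNG and the round count are illustrative assumptions; the paper's exact algorithm may differ.

```python
import numpy as np

def keyed_permutation(n, key):
    return np.random.default_rng(key).permutation(n)

def recursive_permute(img, key, rounds=3):
    flat = img.reshape(-1).copy()
    p = keyed_permutation(flat.size, key)
    for _ in range(rounds):
        flat = flat[p]                  # re-apply the same keyed permutation
    return flat.reshape(img.shape)

def recursive_unpermute(img, key, rounds=3):
    flat = img.reshape(-1).copy()
    inv = np.argsort(keyed_permutation(flat.size, key))   # inverse permutation
    for _ in range(rounds):
        flat = flat[inv]
    return flat.reshape(img.shape)

secret = np.arange(64, dtype=np.uint8).reshape(8, 8)
enc = recursive_permute(secret, key=1234)
assert (recursive_unpermute(enc, key=1234) == secret).all()
```

Because a permutation only relocates pixel values, the histogram is unchanged; the security of the full scheme comes from combining this step with the Transformer-based hiding network.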

7.
Entropy (Basel); 24(10), 2022 Sep 21.
Article in English | MEDLINE | ID: mdl-37420344

ABSTRACT

Accurate clustering of unlabeled data is a challenging task. Ensemble clustering combines sets of base clusterings to obtain a better and more stable clustering, and has shown its ability to improve clustering accuracy. Dense representation ensemble clustering (DREC) and entropy-based locally weighted ensemble clustering (ELWEC) are two typical ensemble clustering methods. However, DREC treats every microcluster equally and hence ignores the differences between microclusters, while ELWEC clusters over clusters rather than microclusters and ignores the sample-cluster relationship. To address these issues, a divergence-based locally weighted ensemble clustering with dictionary learning (DLWECDL) is proposed in this paper. Specifically, DLWECDL consists of four phases. First, the clusters from the base clusterings are used to generate microclusters. Second, a Kullback-Leibler divergence-based ensemble-driven cluster index measures the weight of each microcluster. Third, with these weights, an ensemble clustering algorithm with dictionary learning and an L2,1-norm regularizer is employed; its objective function is solved by optimizing four subproblems, and a similarity matrix is learned. Finally, a normalized cut (Ncut) partitions the similarity matrix to yield the ensemble clustering result. The proposed DLWECDL was validated on 20 widely used datasets against other state-of-the-art ensemble clustering methods, and the experimental results demonstrate that it is a very promising approach to ensemble clustering.
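A sketch of the first two phases described above. Microclusters are groups of samples that receive identical labels in every base clustering; each base cluster is then scored by its average entropy against the other base clusterings, and a microcluster is weighted by the mean score of the clusters containing it. The exponential entropy score used here is an illustrative stand-in in the spirit of the paper's KL-divergence-based index, not its exact formula.

```python
import numpy as np

def microclusters(base_labels):
    """base_labels: (M, n) matrix; returns a microcluster id per sample."""
    keys = [tuple(col) for col in base_labels.T]
    uniq = {k: i for i, k in enumerate(dict.fromkeys(keys))}
    return np.array([uniq[k] for k in keys])

def cluster_entropy(members, other_row):
    """Entropy of one cluster's members w.r.t. another base clustering."""
    _, counts = np.unique(other_row[members], return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def microcluster_weights(base_labels, theta=0.5):
    M, n = base_labels.shape
    mc = microclusters(base_labels)
    score = np.zeros(n)
    for m in range(M):
        for c in np.unique(base_labels[m]):
            members = np.where(base_labels[m] == c)[0]
            h = np.mean([cluster_entropy(members, base_labels[o])
                         for o in range(M) if o != m])
            score[members] += np.exp(-h / theta) / M   # confident clusters weigh more
    # average the per-sample score within each microcluster
    return mc, np.array([score[mc == k].mean() for k in np.unique(mc)])

base = np.array([[0, 0, 1, 1, 1],
                 [0, 0, 1, 1, 0]])   # two base clusterings of five samples
mc, w = microcluster_weights(base)
print(mc, np.round(w, 3))             # sample 4 forms its own low-weight microcluster
```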

8.
Entropy (Basel); 23(3), 2021 Mar 17.
Article in English | MEDLINE | ID: mdl-33802901

ABSTRACT

With the increasing use of digital multimedia and the Internet, protecting digital information from attacks has become a hot topic in the communication field. As a way of protecting digital visual information, image encryption plays a crucial role in modern society. In this paper, a novel six-dimensional (6D) hyper-chaotic encryption scheme with three-dimensional (3D) transformed Zigzag diffusion and RNA operation (HCZRNA) is proposed for color images. The HCZRNA scheme has four phases. First, three pseudo-random matrices are generated from the 6D hyper-chaotic system. Second, the plaintext color image is permuted using the first pseudo-random matrix to obtain an initial cipher image. Third, the initial cipher image is mapped onto a cube for 3D transformed Zigzag diffusion using the second pseudo-random matrix. Finally, the diffused image is converted into an array of RNA codons and updated through RNA codon tables, which are generated from the codons and the third pseudo-random matrix. After these four phases, a cipher image is obtained, and the experimental results show that HCZRNA offers high resistance against well-known attacks and is superior to other schemes.
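A sketch of the RNA stage described above: each byte maps to four RNA bases (2 bits per base), bases group into codons, and codons are substituted through a keyed lookup table standing in for the pseudo-random-matrix-derived codon tables. The base ordering and table construction are illustrative assumptions.

```python
import numpy as np

BASES = "ACGU"

def bytes_to_bases(data):
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    pairs = bits.reshape(-1, 2)                      # 2 bits per RNA base
    return [BASES[2 * a + b] for a, b in pairs]

def keyed_codon_table(key):
    codons = [a + b + c for a in BASES for b in BASES for c in BASES]  # 64 codons
    perm = np.random.default_rng(key).permutation(64)
    return {codons[i]: codons[perm[i]] for i in range(64)}

data = b"RNA"                                        # 3 bytes -> 12 bases -> 4 codons
bases = bytes_to_bases(data)
codons = ["".join(bases[i:i + 3]) for i in range(0, len(bases), 3)]
table = keyed_codon_table(key=42)
print(codons, "->", [table[c] for c in codons])
```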

9.
Entropy (Basel); 23(5), 2021 Apr 23.
Article in English | MEDLINE | ID: mdl-33922594

ABSTRACT

Image security is a hot topic in the era of the Internet and big data. Hyperchaotic image encryption, which can effectively prevent unauthorized users from accessing image content, has become more and more popular in the image security community. In general, such approaches encrypt pixel-level, bit-level, or DNA-level data, or combinations of these, which limits the diversity of processed data levels and hence security. This paper proposes a novel hyperchaotic image encryption scheme via multiple bit permutation and diffusion, named MBPD, to cope with this issue. Specifically, a four-dimensional hyperchaotic system with three positive Lyapunov exponents is first proposed. Second, a hyperchaotic sequence is generated from the proposed system for the subsequent encryption operations. Third, multiple bit permutation and diffusion, determined by the hyperchaotic sequence, is designed; permutation and/or diffusion can be conducted on groups of 1-8 or more bits. Finally, the proposed MBPD is applied to image encryption. We conduct extensive experiments on several public test images to validate MBPD. The results verify that MBPD can effectively resist different types of attacks and outperforms the compared popular encryption methods.
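A sketch of the "multiple bit permutation" idea: the image's bit stream is split into groups of k bits, with k drawn per round from the keystream, and the groups are permuted in a keyed order. A seeded PRNG stands in for the paper's 4-D hyperchaotic sequence, and the round count is an illustrative assumption.

```python
import numpy as np

def bit_permute(img, rng, rounds=3):
    bits = np.unpackbits(img.reshape(-1))
    for _ in range(rounds):
        k = int(rng.integers(1, 9))        # group size: 1-8 bits per round
        n = (len(bits) // k) * k           # permute whole groups, keep the tail
        groups = bits[:n].reshape(-1, k)
        bits[:n] = groups[rng.permutation(len(groups))].reshape(-1)
    return np.packbits(bits).reshape(img.shape)

img = np.random.default_rng(1).integers(0, 256, (4, 4), dtype=np.uint8)
rng = np.random.default_rng(2024)          # stands in for the hyperchaotic stream
print(bit_permute(img, rng))
```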

10.
Entropy (Basel); 22(2), 2020 Jan 24.
Article in English | MEDLINE | ID: mdl-33285915

ABSTRACT

Epilepsy is a common nervous system disease characterized by recurrent seizures. An electroencephalogram (EEG) records neural activity and is commonly used to diagnose epilepsy. To detect epileptic seizures accurately, an automatic detection approach integrating complementary ensemble empirical mode decomposition (CEEMD) and extreme gradient boosting (XGBoost), named CEEMD-XGBoost, is proposed. First, CEEMD, a decomposition method capable of effectively reducing the influence of mode mixing and end effects, divides the raw EEG signals into a set of intrinsic mode functions (IMFs) and residues. Second, multi-domain features are extracted from the raw signals and the decomposed components, and are then selected according to their importance scores. Finally, XGBoost is applied to build the seizure detection model. Experiments were conducted on two benchmark epilepsy EEG datasets, the Bonn dataset and the CHB-MIT (Children's Hospital Boston and Massachusetts Institute of Technology) dataset, to evaluate the performance of CEEMD-XGBoost. The extensive experimental results indicate that, compared with previous EEG classification models, CEEMD-XGBoost significantly enhances seizure detection in terms of sensitivity, specificity, and accuracy.
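A compact sketch of this decompose-extract-classify pipeline on synthetic data, assuming the PyEMD (published on PyPI as EMD-signal) and xgboost packages are installed. CEEMDAN stands in for CEEMD, and the feature set (variance, line length, spectral entropy per component) is an illustrative simplification of the paper's multi-domain features.

```python
import numpy as np
from PyEMD import CEEMDAN
from xgboost import XGBClassifier

def features(sig):
    comps = CEEMDAN()(sig)                     # IMFs + residue
    feats = []
    for c in list(comps) + [sig]:
        psd = np.abs(np.fft.rfft(c)) ** 2
        p = psd / psd.sum()
        feats += [np.var(c),                   # power
                  np.abs(np.diff(c)).sum(),    # line length
                  -np.sum(p * np.log(p + 1e-12))]  # spectral entropy
    return feats

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
X, y = [], []
for i in range(40):                            # toy epochs: "seizures" add an 8 Hz rhythm
    sig = rng.normal(0, 1, 256) + (i % 2) * 2 * np.sin(2 * np.pi * 8 * t)
    X.append(features(sig)); y.append(i % 2)
n_feat = min(map(len, X))                      # IMF count can vary per epoch
X = np.array([f[:n_feat] for f in X])
clf = XGBClassifier(n_estimators=50).fit(X[:30], y[:30])
print("accuracy:", (clf.predict(X[30:]) == y[30:]).mean())
```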

11.
PLoS One; 14(11): e0224382, 2019.
Article in English | MEDLINE | ID: mdl-31738772

ABSTRACT

Image compression and image encryption are two essential tasks in image processing: the former reduces the cost of storing or transmitting images, while the latter changes the positions or values of pixels to protect image content. An increasing number of researchers are now focusing on combining the two. In this paper, we propose a novel joint image compression and encryption approach, termed QSBLA, that integrates a quantum chaotic system, sparse Bayesian learning (SBL), and a bit-level 3D Arnold cat map. The QSBLA consists of six stages. First, a quantum chaotic system generates chaotic sequences for the subsequent compression and encryption. Second, SBL, as a form of compressive sensing, compresses the image. Third, a diffusion operation is performed on the compressed image. Fourth, the compressed and diffused image is transformed into several bit-level cubes. Fifth, 3D Arnold cat maps permute each bit-level cube. Finally, all the bit-level cubes are integrated and transformed back into a 2D pixel-level image, yielding the compressed and encrypted result. Extensive experiments on 8 publicly accessible images demonstrate that QSBLA is superior or comparable to several state-of-the-art approaches across multiple measurement indices, indicating that it is promising for joint image compression and encryption.


Subjects
Algorithms; Computer Security; Data Compression/methods; Bayes Theorem; Nonlinear Dynamics
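A sketch of the bit-level 3D Arnold cat map stage from the QSBLA abstract above: cube coordinates (x, y, z) are permuted by an integer matrix A modulo the cube side, a bijection because det(A) = 1. The specific unimodular matrix is one common choice and an illustrative assumption; the paper's parameters may differ.

```python
import numpy as np

A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]])  # det(A) = 1, so the map is a bijection mod n

def arnold3d(cube, rounds=1):
    """Permute a cubic bit array by the 3D cat map (all dims must be equal)."""
    n = cube.shape[0]
    idx = np.indices(cube.shape).reshape(3, -1)     # all (x, y, z) coordinates
    out = cube
    for _ in range(rounds):
        new = (A @ idx) % n
        nxt = np.empty_like(out)
        nxt[tuple(new)] = out[tuple(idx)]
        out = nxt
    return out

bits = np.random.default_rng(0).integers(0, 2, (8, 8, 8), dtype=np.uint8)
scrambled = arnold3d(bits, rounds=3)
assert scrambled.sum() == bits.sum()                # permutation only
```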
12.
Entropy (Basel); 21(3), 2019 Mar 24.
Article in English | MEDLINE | ID: mdl-33267033

ABSTRACT

Image encryption is one of the essential tasks in image security. In this paper, we propose a novel approach, namely DFDLC, that integrates a hyperchaotic system, pixel-level dynamic filtering, DNA computing, and operations on 3D Latin cubes for image encryption. The approach consists of five stages: (1) a newly proposed 5D hyperchaotic system with two positive Lyapunov exponents generates a pseudorandom sequence; (2) for each pixel, a filtering operation with varying templates, called dynamic filtering, diffuses the image; (3) DNA encoding is applied to the diffused image, and the DNA-level image is transformed into several 3D DNA-level cubes; (4) a Latin cube operation is applied to each DNA-level cube; and (5) all the DNA cubes are integrated and decoded into a 2D cipher image. Extensive experiments on public test images show that the proposed DFDLC achieves state-of-the-art results across several evaluation criteria.
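A sketch of the dynamic-filtering diffusion step in a simplified 1-D causal form, so that it stays invertible: each pixel is replaced by itself plus a per-pixel template applied to already-encrypted predecessors, modulo 256. The template bank and the keyed selection (a seeded PRNG standing in for the 5D hyperchaotic sequence) are illustrative assumptions.

```python
import numpy as np

TEMPLATES = [np.array([1, 2]), np.array([3, 1, 2]), np.array([1, 1, 1, 2])]

def dynamic_filter(pixels, picks):
    out = pixels.astype(np.int64).copy()
    for i in range(1, len(out)):
        t = TEMPLATES[picks[i] % len(TEMPLATES)]   # template chosen per pixel
        lo = max(0, i - len(t))
        out[i] = (out[i] + t[lo - i:] @ out[lo:i]) % 256
    return out.astype(np.uint8)

def inverse_filter(cipher, picks):
    out = cipher.astype(np.int64).copy()
    for i in range(len(out) - 1, 0, -1):           # undo in reverse order
        t = TEMPLATES[picks[i] % len(TEMPLATES)]
        lo = max(0, i - len(t))
        out[i] = (out[i] - t[lo - i:] @ out[lo:i]) % 256
    return out.astype(np.uint8)

rng = np.random.default_rng(7)
img = rng.integers(0, 256, 32, dtype=np.uint8)
picks = rng.integers(0, 1000, 32)
enc = dynamic_filter(img, picks)
assert (inverse_filter(enc, picks) == img).all()
```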

13.
Entropy (Basel); 22(1), 2019 Dec 19.
Article in English | MEDLINE | ID: mdl-33285780

ABSTRACT

With the rapid growth of image transmission and storage, image security has become a hot topic in the information security community, and image encryption is a direct way to ensure it. This paper presents a novel image encryption approach, termed PFDD, that uses a hyperchaotic system, pixel-level filtering with kernels of variable shapes and parameters, and DNA-level diffusion. The PFDD consists of four stages. First, a hyperchaotic system generates the hyperchaotic sequences used by the subsequent operations. Second, dynamic filtering is performed on pixels to change their values; to increase the diversity of filtering, kernels with variable shapes and parameters determined by the hyperchaotic sequences are used. Third, a global bit-level scrambling changes the values and positions of pixels simultaneously, and the bit stream is then encoded into DNA-level data. Finally, a novel DNA-level diffusion scheme further changes the image values. We tested PFDD on 15 publicly accessible images of different sizes, and the results demonstrate that it achieves state-of-the-art performance on the evaluation criteria, indicating that PFDD is very effective for image encryption.
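A sketch of a DNA-level diffusion step of this kind: bytes are encoded two bits per base, and each base is diffused by adding the keystream base and the previous cipher base modulo 4. The single fixed encoding rule is a simplification; schemes like the one above select encoding rules dynamically from the hyperchaotic sequence.

```python
import numpy as np

def to_bases(data):               # rule: 00->0 (A), 01->1 (C), 10->2 (G), 11->3 (T)
    bits = np.unpackbits(np.frombuffer(bytes(data), dtype=np.uint8))
    return bits.reshape(-1, 2) @ np.array([2, 1])

def dna_diffuse(bases, key_bases):
    out = np.empty_like(bases)
    prev = 0
    for i, (b, k) in enumerate(zip(bases, key_bases)):
        prev = (b + k + prev) % 4  # chain through the previous cipher base
        out[i] = prev
    return out

rng = np.random.default_rng(3)     # stands in for the hyperchaotic keystream
plain = to_bases(b"PFDD demo")
key = rng.integers(0, 4, plain.size)
cipher = dna_diffuse(plain, key)
print(cipher[:12])
```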

14.
IEEE J Biomed Health Inform; 21(6): 1593-1598, 2017 Nov.
Article in English | MEDLINE | ID: mdl-29136608

ABSTRACT

Korotkoff sounds are known to change their characteristics during blood pressure (BP) measurement, resulting in some uncertainties for systolic and diastolic pressure (SBP and DBP) determinations. The aim of this study was to assess the variation of Korotkoff sounds during BP measurement by examining all stethoscope sounds associated with each heartbeat from above systole to below diastole during linear cuff deflation. Three repeat BP measurements were taken from 140 healthy subjects (age 21 to 73 years; 62 female and 78 male) by a trained observer, giving 420 measurements. During the BP measurements, the cuff pressure and stethoscope signals were simultaneously recorded digitally to a computer for subsequent analysis. Heartbeats were identified from the oscillometric cuff pressure pulses. The presence of each beat was used to create a time window (1 s, 2000 samples) centered on the oscillometric pulse peak for extracting beat-by-beat stethoscope sounds. A time-frequency two-dimensional matrix was obtained for the stethoscope sounds associated with each beat, and all beats between the manually determined SBPs and DBPs were labeled as "Korotkoff." A convolutional neural network was then used to analyze consistency in sound patterns that were associated with Korotkoff sounds. A 10-fold cross-validation strategy was applied to the stethoscope sounds from all 140 subjects, with the data from ten groups of 14 subjects being analyzed separately, allowing consistency to be evaluated between groups. Next, within-subject variation of the Korotkoff sounds analyzed from the three repeats was quantified, separately for each stethoscope sound beat. There was consistency between folds, with no significant differences between groups of 14 subjects (P = 0.09 to P = 0.62). Our results showed that 80.7% of beats at SBP and 69.5% at DBP were analyzed as Korotkoff sounds, with significant differences between adjacent beats at systole (13.1%, P = 0.001) and diastole (17.4%, P < 0.001). Results reached stability for SBP (97.8%, at sixth beat below SBP) and DBP (98.1%, at sixth beat above DBP) with no significant differences between adjacent beats (SBP P = 0.74; DBP P = 0.88). There were no significant differences at high cuff pressures, but at low pressures close to diastole there was a small difference (3.3%, P = 0.02). In addition, greater within-subject variability was observed at SBP (21.4%) and DBP (28.9%), with a significant difference between the two (P < 0.02). In conclusion, this study has demonstrated that Korotkoff sounds can be consistently identified during the period below SBP and above DBP, but that at systole and diastole there can be substantial variations that are associated with high variation in the three repeat measurements in each subject.


Subjects
Auscultation/methods; Blood Pressure Determination/methods; Blood Pressure/physiology; Neural Networks, Computer; Signal Processing, Computer-Assisted; Adult; Aged; Female; Humans; Male; Middle Aged; Stethoscopes; Young Adult
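A sketch of the preprocessing described in the abstract above, using synthetic stand-ins for the cuff-pressure and stethoscope recordings: a 1 s window (2000 samples, implying a 2 kHz sampling rate) is centred on each detected oscillometric pulse peak, and a time-frequency matrix is computed per beat as CNN input. The peak-detection and spectrogram settings are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks, spectrogram

FS = 2000                                     # 2000 samples per 1 s window
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / FS)
cuff = 160 - 2 * t + 1.5 * np.sin(2 * np.pi * 1.2 * t)  # deflation + pulses
steth = rng.normal(0, 0.1, t.size)            # stand-in stethoscope signal

peaks, _ = find_peaks(cuff, distance=int(0.5 * FS))     # oscillometric pulse peaks
beats = []
for p in peaks:
    lo, hi = p - FS // 2, p + FS // 2         # 1 s window centred on the peak
    if lo < 0 or hi > steth.size:
        continue
    f, tt, S = spectrogram(steth[lo:hi], fs=FS, nperseg=128)
    beats.append(S)                           # one 2-D matrix per heartbeat
print(len(beats), beats[0].shape)
```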
15.
Neurobiol Aging; 36 Suppl 1: S185-93, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25444599

ABSTRACT

Regression models have been widely studied to investigate the power of neuroimaging measures as biomarkers for predicting cognitive outcomes in Alzheimer's disease studies. Most of these models ignore the interrelated structures within the neuroimaging measures or between the cognitive outcomes, and thus may have limited power to yield optimal solutions. To address this issue, we propose a new sparse multitask learning model called Group-Sparse Multi-task Regression and Feature Selection (G-SMuRFS) and demonstrate its effectiveness by examining the predictive power of detailed cortical thickness measures toward three types of cognitive scores in a large cohort. G-SMuRFS employs a group-level L2,1-norm strategy to group relevant features together in an anatomically meaningful manner and uses this prior knowledge to guide the learning process. The approach also takes the correlation among cognitive outcomes into account to build a more appropriate predictive model. Compared with traditional methods, G-SMuRFS not only demonstrates superior performance but also identifies a small set of surface markers that are biologically meaningful.


Subjects
Alzheimer Disease/diagnosis; Cognition; Diagnostic Techniques, Neurological; Neuroimaging; Alzheimer Disease/pathology; Alzheimer Disease/psychology; Biomarkers/metabolism; Cerebral Cortex/pathology; Cohort Studies; Forecasting; Humans; Learning; Regression Analysis
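A compact sketch of group-sparse multi-task regression in the spirit of G-SMuRFS: a squared loss over multiple cognitive outcomes plus a group-level L2,1 penalty over anatomically grouped features, solved by proximal gradient descent. The penalty weight, step size, and the use of a single group-level term are simplifying assumptions relative to the paper's formulation.

```python
import numpy as np

def group_prox(W, groups, lam):
    """Block soft-threshold each feature group's rows of W."""
    out = W.copy()
    for g in groups:
        norm = np.linalg.norm(W[g])
        out[g] = 0.0 if norm <= lam else (1 - lam / norm) * W[g]
    return out

def gsmurfs(X, Y, groups, lam=0.5, iters=500):
    n, d = X.shape
    step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1 / Lipschitz constant
    W = np.zeros((d, Y.shape[1]))
    for _ in range(iters):
        grad = X.T @ (X @ W - Y) / n
        W = group_prox(W - step * grad, groups, step * lam)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))
true_W = np.zeros((12, 3))
true_W[0:4] = rng.normal(size=(4, 3))          # only the first group is active
Y = X @ true_W + 0.1 * rng.normal(size=(100, 3))
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]  # e.g., features per brain region
W = gsmurfs(X, Y, groups)
print(np.round(np.linalg.norm(W, axis=1), 2))  # rows outside group 1 shrink to ~0
```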