Results 1 - 20 of 23
1.
J Electromyogr Kinesiol ; 76: 102869, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38479095

ABSTRACT

Decomposition of EMG signals provides the decoding of motor unit (MU) discharge timings. In this study, we propose a fast gradient convolution kernel compensation (fgCKC) decomposition algorithm for high-density surface EMG decomposition and apply it to offline and real-time estimation of MU spike trains. We modified the calculation of the cross-correlation vectors to improve the computational efficiency of the gradient convolution kernel compensation (gCKC) algorithm. Specifically, the new fgCKC algorithm considers the past gradient in addition to the current gradient. Furthermore, the EMG signals are divided into sliding windows to simulate real-time decomposition, and the proposed algorithm was validated on simulated and experimental signals. In offline decomposition, fgCKC has the same robustness as gCKC, with sensitivity differences of 2.6 ± 1.3% averaged across all trials and subjects. Nevertheless, depending on the number of MUs and the signal-to-noise ratio of the signals, fgCKC is approximately 3 times faster than gCKC. In real-time operation, processing required an average of only 240 ms per window of EMG signals on a regular personal computer (Intel(R) Core(TM) i5-12490F, 3 GHz, 16 GB memory). These results indicate that fgCKC achieves real-time decomposition by significantly reducing processing time, opening more possibilities for non-invasive neuronal behavior research.
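The speed-up comes from reusing the past gradient alongside the current one. Below is a minimal momentum-style sketch of that idea for updating a separation vector, assuming a user-supplied gradient of the gCKC contrast function; all names are illustrative, not the authors' code:

```python
import numpy as np

def momentum_ascent(grad_fn, w0, lr=0.1, beta=0.9, n_iter=100):
    """Gradient ascent that folds the past gradient into the current
    step -- the acceleration idea fgCKC describes (illustrative sketch)."""
    w = w0.astype(float).copy()
    v = np.zeros_like(w)                  # running (past) gradient
    for _ in range(n_iter):
        g = grad_fn(w)                    # current gradient of the contrast
        v = beta * v + (1.0 - beta) * g   # blend past and current gradients
        w = w + lr * v
        w /= np.linalg.norm(w)            # keep the separation vector unit-norm
    return w
```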


Subjects
Algorithms, Electromyography, Skeletal Muscle, Computer-Assisted Signal Processing, Electromyography/methods, Humans, Skeletal Muscle/physiology, Motor Neurons/physiology, Action Potentials/physiology, Male
2.
Network ; : 1-19, 2023 Nov 21.
Article in English | MEDLINE | ID: mdl-38031802

ABSTRACT

Leaf infection detection and diagnosis at an early stage can improve agricultural output and reduce monetary costs. Inaccurate segmentation may degrade the accuracy of disease classification, because some leaf diseases are varied and complex, and the adhesion and dimensions of diseased regions can overlap, causing partial under-segmentation. Therefore, a novel robust Deep Encoder-Decoder Cascaded Network (DEDCNet) model is proposed in this manuscript for leaf image segmentation that precisely segments diseased leaf spots and differentiates similar diseases. The model comprises an Infected Spot Recognition Network (ISRN) and an Infected Spot Segmentation Network (ISSN). Initially, the ISRN is designed by integrating a cascaded CNN with a Feature Pyramid Pooling layer to identify the infected leaf spot and avoid the impact of background details. After that, the ISSN, developed using an encoder-decoder network, applies a multi-scale dilated convolution kernel to precisely segment the infected leaf spot. The resulting leaf segments are then fed to pre-trained CNN models to learn texture features, followed by an SVM classifier to categorize leaf disease classes. The DEDCNet delivers exceptional performance on both the Betel Leaf Image and PlantVillage datasets. On the Betel Leaf Image dataset, it achieves an accuracy of 94.89%, with high precision (94.35%), recall (94.77%), and F-score (94.56%), while maintaining low under-segmentation (6.2%) and over-segmentation (2.8%) rates; it also achieves a remarkable Dice coefficient of 0.9822, all in just 0.10 seconds. On the PlantVillage dataset, it outperforms existing models with an accuracy of 96.5%, demonstrating high precision (96.61%), recall (96.5%), and F-score (96.56%); it reduces under-segmentation to just 3.12% and over-segmentation to 2.56%, and achieves a Dice coefficient of 0.9834 in a mere 0.09 seconds. These results evidence its greater efficiency in both segmentation and categorization of leaf diseases compared with existing models.
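A minimal sketch of the multi-scale dilated convolution the ISSN is described as using: parallel 3×3 convolutions whose growing dilation rates widen the receptive field at no extra parameter cost per branch (PyTorch; channel sizes and dilation rates are assumptions):

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Parallel 3x3 convs with different dilation rates, fused by a 1x1
    conv -- a generic stand-in for the ISSN's multi-scale dilated kernel."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d)
            for d in dilations)                  # padding=d keeps spatial size
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```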

3.
Math Biosci Eng ; 20(8): 15018-15043, 2023 Jul 14.
Article in English | MEDLINE | ID: mdl-37679170

ABSTRACT

At present, ship detectors suffer from problems such as too many hyperparameters, poor recognition accuracy, and imprecise regression boundaries. In this article, we designed a large kernel convolutional YOLO (Lk-YOLO) detection model based on an anchor-free scheme for one-stage ship detection. First, we discuss introducing large convolution kernels into the residual module of the backbone network, giving the backbone stronger feature extraction capability. Second, to resolve the conflict between regression and classification fusion under a coupled detection head, we split the detection head into two branches, so that each branch better represents its own task and the model's accuracy in regression tasks improves. Finally, to avoid the complex and computationally intensive anchor hyperparameter design required for ship datasets, we use an anchor-free algorithm to predict ships. Moreover, the model adopts an improved sampling matching strategy for positive and negative samples, expanding the number of positive samples matched to the ground truth (GT) while obtaining high-quality samples and reducing the positive-negative imbalance caused by anchors. Using an NVIDIA 1080Ti GPU as the experimental environment, the results showed that mAP@50 reached 97.7% and mAP@.5:.95 reached 78.4%, the best accuracy among all compared models. Therefore, the proposed method needs no anchor parameter design and achieves better detection efficiency and robustness without anchor hyperparameter input.
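Decoupling the detection head means classification and regression no longer share their final features. A hedged PyTorch sketch of such a two-branch, anchor-free head (channel counts and activations are assumptions, not the paper's exact design):

```python
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Split head: one branch predicts per-pixel class scores, the other
    predicts anchor-free box offsets -- the split Lk-YOLO describes."""
    def __init__(self, in_ch, num_classes):
        super().__init__()
        self.cls_branch = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(in_ch, num_classes, 1))    # class scores per location
        self.reg_branch = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1), nn.SiLU(),
            nn.Conv2d(in_ch, 4, 1))              # (l, t, r, b) box distances

    def forward(self, feat):
        return self.cls_branch(feat), self.reg_branch(feat)
```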

4.
Diagnostics (Basel) ; 13(10)2023 May 09.
Article in English | MEDLINE | ID: mdl-37238160

ABSTRACT

In this study, the impact of reconstruction sharpness on the visualization of the appendicular skeleton in ultrahigh-resolution (UHR) photon-counting detector (PCD) CT was investigated. Sixteen cadaveric extremities (eight fractured) were examined with a standardized 120 kVp scan protocol (CTDIvol 10 mGy). Images were reconstructed with the sharpest non-UHR kernel (Br76) and all available UHR kernels (Br80 to Br96). Seven radiologists evaluated image quality and fracture assessability. Interrater agreement was assessed with the intraclass correlation coefficient. For quantitative comparisons, signal-to-noise ratios (SNRs) were calculated. Subjective image quality was best for Br84 (median 1, interquartile range 1-3; p ≤ 0.003). Regarding fracture assessability, no significant difference was ascertained between Br76, Br80, and Br84 (p > 0.999), with inferior ratings for all sharper kernels (p < 0.001). Interrater agreement was good for both image quality (0.795, 0.732-0.848; p < 0.001) and fracture assessability (0.880, 0.842-0.911; p < 0.001). SNR was highest for Br76 (3.4, 3.0-3.9), with no significant difference to Br80 and Br84 (p > 0.999). Br76 and Br80 produced higher SNRs than all kernels sharper than Br84 (p ≤ 0.026). In conclusion, PCD-CT reconstructions with a moderate UHR kernel offer superior image quality for visualizing the appendicular skeleton. Fracture assessability benefits from sharp non-UHR and moderate UHR kernels, while ultra-sharp reconstructions come at the cost of increased image noise.
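For the quantitative part, SNR is typically computed from a region of interest. The abstract does not state the exact measurement protocol, so the following is one common definition, not necessarily the study's:

```python
import numpy as np

def roi_snr(image, roi_mask):
    """SNR as ROI mean over ROI standard deviation -- a common definition;
    the study's exact ROI placement and formula are assumptions here."""
    vals = image[roi_mask]          # image: 2-D array, roi_mask: boolean mask
    return vals.mean() / vals.std()
```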

5.
Comput Biol Med ; 158: 106892, 2023 05.
Article in English | MEDLINE | ID: mdl-37028143

ABSTRACT

Vessel segmentation is important for characterizing vascular diseases and has received wide attention from researchers. Common vessel segmentation methods are mainly based on convolutional neural networks (CNNs), which have excellent feature learning capabilities. Because they cannot predict the learning direction, CNNs resort to wide channels or considerable depth to obtain sufficient features, which may introduce redundant parameters. Drawing on the performance of Gabor filters in vessel enhancement, we built a Gabor convolution kernel and designed its optimization. Unlike traditional filter usage and common modulation, its parameters are updated automatically by gradients during backpropagation. Since the structural shape of Gabor convolution kernels is the same as that of regular convolution kernels, they can be integrated into any CNN architecture. We built a Gabor ConvNet using Gabor convolution kernels and tested it on three vessel datasets. It scored 85.06%, 70.52%, and 67.11%, respectively, ranking first on all three datasets. The results show that our method outperforms advanced models in vessel segmentation. Ablations also proved that the Gabor kernel has better vessel extraction ability than the regular convolution kernel.
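Because a Gabor kernel is generated from a handful of differentiable parameters, those parameters can be `nn.Parameter`s updated by backpropagation. A sketch under that reading follows; the kernel size, parameterization, and single-channel input are assumptions, not the authors' implementation:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaborConv2d(nn.Module):
    """Conv layer whose filters are Gabor functions; orientation, envelope
    width, wavelength, and phase are learned by gradient descent."""
    def __init__(self, out_ch, ksize=11):
        super().__init__()
        self.ksize = ksize
        self.theta = nn.Parameter(torch.rand(out_ch) * math.pi)    # orientation
        self.sigma = nn.Parameter(torch.full((out_ch,), ksize / 4))  # envelope
        self.lambd = nn.Parameter(torch.full((out_ch,), ksize / 2))  # wavelength
        self.psi = nn.Parameter(torch.zeros(out_ch))                 # phase

    def forward(self, x):                       # x: (B, 1, H, W)
        half = self.ksize // 2
        coords = torch.arange(-half, half + 1, dtype=x.dtype, device=x.device)
        ys, xs = torch.meshgrid(coords, coords, indexing="ij")
        t = self.theta.view(-1, 1, 1)
        xr = xs * torch.cos(t) + ys * torch.sin(t)      # rotated coordinates
        yr = -xs * torch.sin(t) + ys * torch.cos(t)
        s = self.sigma.view(-1, 1, 1)
        g = torch.exp(-(xr ** 2 + yr ** 2) / (2 * s ** 2)) * torch.cos(
            2 * math.pi * xr / self.lambd.view(-1, 1, 1) + self.psi.view(-1, 1, 1))
        return F.conv2d(x, g.unsqueeze(1), padding=half)
```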


Subjects
Algorithms, Computer Neural Networks, Computer-Assisted Image Processing/methods
6.
Sensors (Basel) ; 24(1)2023 Dec 22.
Article in English | MEDLINE | ID: mdl-38202924

ABSTRACT

Micro-crack detection is an essential task in critical equipment health monitoring; accurate and timely detection of micro-cracks can ensure the healthy and stable service of equipment. To address the low accuracy of conventional object detection models on micro-cracks on the surface of metal structural parts, this paper builds a micro-crack dataset and explores a detection performance optimization method based on Mask R-CNN. Firstly, we improved the original FPN structure by adding a bottom-up feature fusion path to enhance the information utilization rate of the underlying feature layers. Secondly, we added deformable convolution kernels and an attention mechanism to ResNet, which improves the efficiency of feature extraction. Lastly, we modified the original loss function to optimize the network training effect and the model convergence rate. The ablation comparison experiments show that every improvement scheme proposed in this paper improves the performance of the original Mask R-CNN, and integrating all of them produces the most significant simultaneous gains in recognition, classification, and localization, proving the rationality and feasibility of the improved scheme.
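The bottom-up fusion path re-injects fine, low-level detail into coarser FPN levels, in the spirit of PANet. A hedged sketch (channel width and level count are assumptions):

```python
import torch.nn as nn

class BottomUpPath(nn.Module):
    """Extra bottom-up path over FPN outputs: each finer level is
    downsampled by a stride-2 conv and added to the next coarser level."""
    def __init__(self, ch=256, levels=4):
        super().__init__()
        self.downs = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, stride=2, padding=1)
            for _ in range(levels - 1))

    def forward(self, fpn_feats):           # fpn_feats: fine -> coarse [P2..P5]
        outs = [fpn_feats[0]]
        for down, skip in zip(self.downs, fpn_feats[1:]):
            outs.append(down(outs[-1]) + skip)   # propagate low-level detail up
        return outs
```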

7.
Article in English | MEDLINE | ID: mdl-38545337

ABSTRACT

Deep neural networks (DNNs) are currently deployed on physical computational units (e.g., CPUs and GPUs). Such a design can lead to a heavy computational burden, significant latency, and intensive power consumption, which are critical limitations in applications such as the Internet of Things (IoT), edge computing, and drones. Recent advances in optical computational units (e.g., metamaterials) have shed light on energy-free and light-speed neural networks. However, the digital design of the metamaterial neural network (MNN) is fundamentally limited by physical constraints such as precision, noise, and bandwidth during fabrication. Moreover, the unique advantages of MNNs (e.g., light-speed computation) are not fully explored via standard 3×3 convolution kernels. In this paper, we propose a novel large kernel metamaterial neural network (LMNN) that maximizes the digital capacity of the state-of-the-art (SOTA) MNN with model re-parametrization and network compression, while considering the optical limitations explicitly. The new digital learning scheme maximizes the learning capacity of the MNN while modeling the physical restrictions of meta-optics. With the proposed LMNN, the computation cost of the convolutional front-end can be offloaded into fabricated optical hardware. Experimental results on two publicly available datasets demonstrate that the optimized hybrid design improves classification accuracy while reducing computational latency. The development of the proposed LMNN is a promising step towards the ultimate goal of energy-free and light-speed AI.
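Model re-parametrization here plausibly follows the RepLKNet-style recipe: train parallel large- and small-kernel branches, then fold them into a single large kernel for deployment. A sketch of that merge, which is valid because convolution is linear in its weights (an illustration of the general technique, not the paper's exact scheme):

```python
import torch
import torch.nn.functional as F

def merge_into_large_kernel(w_large, w_small):
    """Fold a parallel small-kernel branch into one large kernel by
    zero-padding its weights, so inference needs a single convolution."""
    pad = (w_large.shape[-1] - w_small.shape[-1]) // 2
    return w_large + F.pad(w_small, [pad] * 4)   # center the small kernel

w7 = torch.randn(16, 3, 7, 7)    # large-kernel branch weights
w3 = torch.randn(16, 3, 3, 3)    # small-kernel branch weights
w_merged = merge_into_large_kernel(w7, w3)

x = torch.randn(1, 3, 32, 32)
# One conv with the merged kernel equals the sum of the two branches
# (biases omitted for brevity).
y = F.conv2d(x, w_merged, padding=3)
```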

8.
Brain Sci ; 12(12)2022 Nov 29.
Article in English | MEDLINE | ID: mdl-36552093

ABSTRACT

Visual encoding models for functional magnetic resonance imaging derived from deep neural networks, especially CNNs (e.g., VGG16), have been developed. However, CNNs typically use small kernel sizes (e.g., 3 × 3) for feature extraction in visual encoding models. Although the receptive field of a CNN can be enlarged by increasing network depth or subsampling, it remains limited by the small convolution kernel, leading to an insufficient receptive field size. In biological research, the population receptive field of high-level visual encoding regions is usually three to four times larger than that of low-level visual encoding regions; thus, CNNs with a larger receptive field align better with these biological findings. The RepLKNet model directly enlarges the convolution kernel to obtain a larger receptive field. Therefore, this paper proposes a mixed model to replace the plain CNN for feature extraction in visual encoding models. The proposed model mixes RepLKNet and VGG so that it has receptive fields of different sizes and extracts more feature information from the image. The experimental results indicate that the mixed model achieves better encoding performance in multiple regions of the visual cortex than the traditional convolutional model, and that a larger receptive field should be considered when building visual encoding models so that the convolution network can play a more significant role in visual representations.

9.
Front Neuroinform ; 16: 953930, 2022.
Article in English | MEDLINE | ID: mdl-36387589

ABSTRACT

Creating high-quality polygonal meshes that represent the membrane surface of neurons for both visualization and numerical simulation purposes is an important yet nontrivial task, due to their irregular and complicated structures. In this paper, we develop a novel approach to constructing a watertight 3D mesh from the abstract point-and-diameter representation of a given neuronal morphology. The membrane shape of the neuron is reconstructed by progressively deforming an initial sphere with the guidance of the neuronal skeleton, which can be regarded as a digital sculpting process. To deform the surface efficiently, a local mapping is adopted to simulate animation skinning, so that only the vertices within the region of influence (ROI) of the current skeletal position need to be updated. The ROI is determined by a finite-support convolution kernel, which is convolved along the line skeleton of the neuron to generate a potential field that further smooths the overall surface at both unidirectional and bifurcating regions. Meanwhile, mesh quality during the entire evolution is guaranteed by a set of quasi-uniform rules, which split excessively long edges, collapse undersized ones, and adjust vertices within the tangent plane to produce regular triangles. Additionally, the local vertex density of the resulting mesh is determined by the radius and curvature of the neurites to achieve adaptiveness.
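A compact-support kernel summed along the skeleton yields a smooth scalar field whose level set approximates the membrane. A NumPy sketch of that idea; the kernel shape and radius scaling are assumptions, since the paper's exact kernel is not given in the abstract:

```python
import numpy as np

def skeleton_potential(points, radii, grid, support=2.0):
    """Convolve a finite-support bump kernel along a line skeleton.
    points: (N, 3) skeleton samples; radii: (N,) local radii;
    grid: (M, 3) query positions. Returns the potential at each query."""
    field = np.zeros(len(grid))
    for p, r in zip(points, radii):
        d = np.linalg.norm(grid - p, axis=1) / (support * r)
        w = np.clip(1.0 - d ** 2, 0.0, None) ** 2   # zero outside the support
        field += w
    return field    # an isosurface of this field approximates the membrane
```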

10.
Front Bioeng Biotechnol ; 10: 923364, 2022.
Article in English | MEDLINE | ID: mdl-35979172

ABSTRACT

Image fusion algorithms have great application value in computer vision: the fused image describes the scene more comprehensively and clearly, benefiting both human visual inspection and automatic machine detection. In recent years, image fusion algorithms have achieved great success in different domains, yet generalization across multi-modal image fusion remains a major challenge. In response to this problem, this paper proposes a general image fusion framework based on an improved convolutional neural network. Firstly, the feature information of the input images is captured by multiple feature extraction layers, and the resulting feature maps are stacked along the channel dimension to obtain the fused feature map. Finally, feature maps derived from the multiple feature extraction layers are stacked in high dimensions by skip connections and convolutional filtering for reconstruction to produce the final result. In this paper, multi-modal images are gathered from multiple datasets to produce a large sample space for adequately training the network. Compared with existing convolutional neural networks and traditional fusion algorithms, the proposed model not only has generality and stability but also shows strengths in subjective visualization and objective evaluation, while its average running time is at least 94% faster than the neural-network-based reference algorithm.
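A minimal sketch of the described pipeline: a shared feature extractor applied to each modality, channel-wise stacking, and convolutional reconstruction with a skip connection back to the inputs (layer sizes are illustrative assumptions):

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Generic fusion sketch: extract, stack along channels, reconstruct."""
    def __init__(self, ch=16):
        super().__init__()
        self.extract = nn.Sequential(              # shared across modalities
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.reconstruct = nn.Sequential(
            nn.Conv2d(2 * ch + 2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def forward(self, a, b):                       # two 1-channel modalities
        fused = torch.cat([self.extract(a), self.extract(b)], dim=1)
        skip = torch.cat([a, b], dim=1)            # skip connection to inputs
        return self.reconstruct(torch.cat([fused, skip], dim=1))
```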

11.
Math Biosci Eng ; 19(8): 8057-8080, 2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35801457

ABSTRACT

A bearing is an important and easily damaged component of mechanical equipment. For early fault diagnosis of ball bearings, acoustic emission signals are more sensitive and less affected by mechanical background noise. To cope with the large data volumes brought by the high sampling frequency and large number of sampling points of acoustic emission signals, a compressed sensing framework is introduced for data compression and feature extraction, and a wavelet sparse convolutional network is proposed for diagnosis and evaluation. The main objective of this paper is to maximize the compression rate of the signal under a constraint on the reconstruction error of the acoustic emission signal, which reduces the data volume and eases the burden of data analysis for subsequent fault diagnosis. At the same time, a wide convolution kernel based on a continuous wavelet is introduced in the network design; the wavelet convolution kernel extracts the energy information of different frequency bands of the signal to characterize the fault characteristics of the equipment. An energy pooling layer is designed to enhance the deep mining ability of compressed features, and a regularized loss function is introduced to improve diagnostic accuracy and robustness through feature sparseness. The experimental results show that the method can effectively extract the fault characteristics of the bearing acoustic emission signal, improve analysis efficiency, and accurately classify bearing faults.
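The wide first-layer kernels are built from a continuous wavelet so that each filter responds to one frequency band. A sketch using a Morlet-style wavelet; the paper's exact wavelet and scaling are assumptions:

```python
import numpy as np

def morlet_kernels(n_filters=16, width=64, fs=1.0):
    """Build a bank of wide 1-D conv kernels from a Morlet-style wavelet,
    each tuned to a different center frequency (fractions of fs)."""
    t = (np.arange(width) - width / 2) / fs
    freqs = np.linspace(0.02, 0.4, n_filters) * fs        # band centers
    bank = [np.exp(-(t * f) ** 2 / 2) * np.cos(2 * np.pi * f * t)
            for f in freqs]                               # envelope x carrier
    return np.stack(bank)       # (n_filters, width), usable as conv1d weights
```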

12.
Sensors (Basel) ; 22(11)2022 Jun 01.
Article in English | MEDLINE | ID: mdl-35684852

ABSTRACT

Image super-resolution aims to reconstruct a high-resolution image from its low-resolution counterpart. Conventional image super-resolution approaches share the same spatial convolution kernel across the whole image in the upscaling modules, neglecting the specificity of content at different positions of the image. In view of this, this paper proposes a regularized pattern method to represent spatially variant structural features in an image and further exploits a dynamic convolution kernel generation method to match the regularized pattern and improve image reconstruction performance. More specifically, the proposed approach first extracts features from low-resolution images using a self-organizing feature mapping network to construct regularized patterns (RP), which describe different contents at different locations. Second, a meta-learning mechanism based on the regularized pattern predicts the weights of the convolution kernels that match the regularized pattern at each location, thereby generating different upscaling functions for images with different content. Extensive experiments on the benchmark datasets Set5, Set14, B100, Urban100, and Manga109 demonstrate that the proposed approach outperforms state-of-the-art super-resolution approaches in terms of both PSNR and SSIM.
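Per-location kernels can be applied efficiently with `unfold`: a small predictor emits a k×k kernel at every pixel, which then filters the corresponding patch. A simplified sketch of that mechanism (the paper's SOFM-based pattern construction and meta-learner are not reproduced):

```python
import torch.nn as nn
import torch.nn.functional as F

class DynamicLocalFilter(nn.Module):
    """Predict a k x k kernel per pixel and apply it to the local patch --
    a generic stand-in for pattern-matched dynamic upscaling kernels."""
    def __init__(self, in_ch, k=3):
        super().__init__()
        self.k = k
        self.predict = nn.Conv2d(in_ch, k * k, 3, padding=1)  # kernel per pixel

    def forward(self, x):                                  # x: (B, C, H, W)
        B, C, H, W = x.shape
        kk = self.k * self.k
        kernels = self.predict(x).view(B, 1, kk, H, W)     # shared over C
        patches = F.unfold(x, self.k, padding=self.k // 2) # (B, C*kk, H*W)
        patches = patches.view(B, C, kk, H, W)
        return (patches * kernels).sum(dim=2)              # per-location filter
```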

13.
Front Neurorobot ; 16: 845858, 2022.
Article in English | MEDLINE | ID: mdl-35548778

ABSTRACT

The color image of the fire hole is key for working condition identification of the aluminum electrolysis cell (AEC). However, the fire hole image is difficult to segment due to its nonuniformly illuminated background and oblique beam radiation. Thus, a joint dual channel convolution kernel (DCCK) and multi-frame feature fusion (MFF) method is developed to achieve dynamic fire hole video image segmentation. Considering the invalid or extra texture disturbances in the edge feature images, the DCCK is used to select effective edge features. Since the obtained edge features of the fire hole are not completely closed, the MFF algorithm is further applied to complete the missing portion of the edge. This method helps obtain the complete fire hole image of the AEC. The experimental results demonstrate that the proposed method attains higher precision and recall and a lower boundary redundancy rate, with well-segmented image edges that aid working condition identification of the AEC.

14.
Zhongguo Yi Liao Qi Xie Za Zhi ; 46(2): 219-224, 2022 Mar 30.
Article in Chinese | MEDLINE | ID: mdl-35411755

ABSTRACT

Objective: To investigate the effects of different adaptive statistical iterative reconstruction-V (ASiR-V) and convolution kernel parameters on the stability of deep-learning-based CT auto-segmentation. Method: Twenty patients who had received pelvic radiotherapy were selected, and different reconstruction parameters were used to establish a CT image dataset. Structures including three soft tissue organs (bladder, bowel bag, small intestine) and five bone structures (left and right femoral heads, left and right femurs, pelvis) were then segmented automatically by a deep learning neural network. Performance was evaluated by the Dice similarity coefficient (DSC) and Hausdorff distance, using filtered back projection (FBP) as the reference. Results: Deep learning auto-segmentation is greatly affected by ASiR-V but less affected by the convolution kernel, especially in soft tissues. Conclusion: The stability of auto-segmentation is affected by the parameter selection of the reconstruction algorithm. In practical application, it is necessary to find a balance between image quality and segmentation quality, or to improve the segmentation network to enhance the stability of auto-segmentation.
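The DSC used for evaluation is the standard overlap score between a candidate mask and the FBP-based reference:

```python
import numpy as np

def dice_similarity(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A and B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```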


Subjects
Computer-Assisted Image Processing, X-Ray Computed Tomography, Algorithms, Humans, Computer Neural Networks, Radiation Doses
15.
Biomed Phys Eng Express ; 8(4)2022 06 28.
Article in English | MEDLINE | ID: mdl-35276688

ABSTRACT

In a cone beam CT system, a bowtie filter introduces additional scatter signals beyond the object-induced scatter, which can degrade image quality and sometimes cause artifacts. This work aims to improve the image quality of CT scans by analyzing the contribution of bowtie filter induced scatter and removing it from the projection data. Air calibration is a very useful preprocessing step to eliminate response variations of detector pixels. Bowtie filter induced scattered x-ray signals of air scans are recorded in the air calibration tables and are therefore treated as part of the primary signals. However, scattered x-rays behave differently in scanned objects than primary x-rays, and this difference must be corrected to eliminate the impact of bowtie filter induced scatter. A kernel-based correction algorithm built on air scan data, named the bowtie filter scatter correction algorithm, is applied to estimate and eliminate the bowtie filter induced scatter signals in object scans. The scatter signals of air scans can be measured directly or retrieved from the air calibration tables of a CT system, and serve as input to the correction algorithm to estimate the change of scatter caused by the scanned objects in the scan field. Based on the assumption that the scatter in projection data acquired with narrow collimation is negligible, the difference between narrow- and broad-collimation air scan signals can be used to estimate bowtie filter induced scatter for air scans, with correction for extra-focal radiation (EFR). The calculated bowtie filter induced scatter signals were compared with Monte Carlo simulations, and the parameters of the correction algorithm were determined by fitting measured scatter curves of phantom scans to the calculated curves. Projection data were reconstructed using filtered back projection (FBP) with and without bowtie filter correction to check whether image quality improved. Scatter signals can be well approximated by the bowtie filter scatter correction algorithm together with an existing object scatter correction algorithm. After removing the bowtie filter induced scatter, the dark bands near the edges of scanned objects in reconstructed images are mostly eliminated. The proposed bowtie filter scatter correction algorithm using air scan data can thus be applied to estimate and remove most of the bowtie filter induced scatter in object scans.
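One way to realize such a kernel-based estimate is to modulate the air-scan scatter by the object's transmission, spread it with a broad smoothing kernel, and subtract it. The following is a rough sketch of that idea only; the kernel shape, weighting, and domain handling of the paper's fitted model are not given in the abstract:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bowtie_scatter_correct(intensity, air_intensity, air_scatter,
                           weight=1.0, sigma=20.0):
    """Sketch: scale air-scan scatter by the object's local transmission,
    smooth with a broad Gaussian kernel, subtract from measured intensity.
    All parameters here are illustrative assumptions."""
    transmission = intensity / air_intensity   # ~1 in air, <1 behind object
    scatter_est = weight * gaussian_filter(air_scatter * transmission, sigma)
    return np.clip(intensity - scatter_est, 1e-6, None)  # keep positive
```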

16.
Magn Reson Imaging ; 88: 101-107, 2022 05.
Article in English | MEDLINE | ID: mdl-35176446

ABSTRACT

To address the long sampling time of diffusion magnetic resonance imaging (dMRI), in this study we propose a dMRI super-resolution reconstruction network. The method not only uses a three-dimensional (3D) convolution kernel to reconstruct the dMRI data in the spatial and angular domains, but also introduces adversarial learning and an attention mechanism to address the traditional loss function's failure to fully quantify the gap between high-dimensional data and to emphasize important feature maps. Experimental comparisons of peak signal-to-noise ratio, structural similarity, and orientation distribution function visualization show that the method brings better results. They also demonstrate the feasibility of using an attention mechanism in dMRI reconstruction and of applying adversarial learning with a 3D convolution kernel.


Subjects
Computer-Assisted Image Processing, Magnetic Resonance Imaging, Algorithms, Diffusion Magnetic Resonance Imaging, Computer-Assisted Image Processing/methods, Signal-to-Noise Ratio
17.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-928892

ABSTRACT

Objective: To investigate the effects of different adaptive statistical iterative reconstruction-V (ASiR-V) and convolution kernel parameters on the stability of deep-learning-based CT auto-segmentation. Method: Twenty patients who had received pelvic radiotherapy were selected, and different reconstruction parameters were used to establish a CT image dataset. Structures including three soft tissue organs (bladder, bowel bag, small intestine) and five bone structures (left and right femoral heads, left and right femurs, pelvis) were then segmented automatically by a deep learning neural network. Performance was evaluated by the Dice similarity coefficient (DSC) and Hausdorff distance, using filtered back projection (FBP) as the reference. Results: Deep learning auto-segmentation is greatly affected by ASiR-V but less affected by the convolution kernel, especially in soft tissues. Conclusion: The stability of auto-segmentation is affected by the parameter selection of the reconstruction algorithm. In practical application, it is necessary to find a balance between image quality and segmentation quality, or to improve the segmentation network to enhance the stability of auto-segmentation.


Subjects
Humans, Algorithms, Computer-Assisted Image Processing, Computer Neural Networks, Radiation Doses, X-Ray Computed Tomography
18.
Front Genet ; 12: 639930, 2021.
Article in English | MEDLINE | ID: mdl-33679900

ABSTRACT

To address the limitations in U-Net of convolution kernels with a fixed receptive field and an unknown prior on the optimal network width, we propose a multi-scale U-Net (MSU-Net) for medical image segmentation. First, multiple convolution sequences are used to extract more semantic features from the images. Second, convolution kernels with different receptive fields are used to make the features more diverse; the problem of unknown network width is alleviated by efficiently integrating convolution kernels with different receptive fields. In addition, the multi-scale block is extended to other variants of the original U-Net to verify its universality. Five medical image segmentation datasets, covering a variety of imaging modalities such as electron microscopy, dermoscopy, and ultrasound, are used to evaluate MSU-Net. The Intersection over Union (IoU) of MSU-Net on the five datasets is 0.771, 0.867, 0.708, 0.900, and 0.702, respectively. Experimental results show that MSU-Net achieves the best performance on the different datasets. Our implementation is available at https://github.com/CN-zdy/MSU_Net.
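A hedged sketch of a multi-scale block in the spirit of MSU-Net: parallel kernels with different receptive fields whose outputs are concatenated, so no single width must be chosen in advance (the exact layout in the authors' repository may differ):

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Two parallel convolution branches with different receptive fields,
    concatenated along channels -- a generic multi-scale block."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        assert out_ch % 2 == 0
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch // 2, 3, padding=1), nn.ReLU())
        self.b7 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch // 2, 7, padding=3), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.b3(x), self.b7(x)], dim=1)
```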

19.
Entropy (Basel) ; 22(2)2020 Feb 21.
Article in English | MEDLINE | ID: mdl-33286017

ABSTRACT

This paper presents a novel five-dimensional three-leaf chaotic attractor and its application in image encryption. First, a new five-dimensional three-leaf chaotic system is proposed. Some basic dynamics of the chaotic system are analyzed theoretically and numerically, including equilibrium points, dissipativity, bifurcation diagrams, planar phase diagrams, and three-dimensional phase diagrams. Simultaneously, an analog circuit was designed to implement the chaotic attractor; the circuit simulation results were consistent with the numerical simulations. Second, a convolution kernel was used to process each of the five chaotic sequences, and the plaintext image matrix was divided according to row and column proportions. Lastly, each of the divided plaintext images was scrambled with the five convolved chaotic sequences to obtain the final encrypted image. Theoretical analysis and simulation results demonstrate that the key space of the algorithm is larger than 10^150 and that it has strong key sensitivity. It effectively resists statistical analysis and gray value analysis attacks, and has a good encryption effect on digital images.
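Scrambling with a chaotic sequence amounts to sorting the sequence and using the resulting order as a pixel permutation. For brevity this sketch drives the permutation with a 1-D logistic map rather than the paper's five-dimensional system:

```python
import numpy as np

def chaotic_permutation(n, x0=0.7, mu=3.99, burn_in=100):
    """Generate a permutation of range(n) from a chaotic logistic map
    (a stand-in for the paper's 5-D three-leaf system)."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = mu * x * (1 - x)
    seq = np.empty(n)
    for i in range(n):
        x = mu * x * (1 - x)
        seq[i] = x
    return np.argsort(seq)            # sort order = chaos-driven permutation

def scramble(image):
    flat = image.ravel()
    return flat[chaotic_permutation(flat.size)].reshape(image.shape)
```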

20.
Thorac Cancer ; 10(10): 1893-1903, 2019 10.
Article in English | MEDLINE | ID: mdl-31426132

ABSTRACT

BACKGROUND: The aim of this study was to investigate the influence of convolution kernel and iterative reconstruction on the diagnostic performance of radiomics and deep learning (DL) in lung adenocarcinomas. METHODS: A total of 183 patients with 215 lung adenocarcinomas were included in this study. All CT imaging data were reconstructed with three reconstruction algorithms (ASiR at 0%, 30%, 60% strength), each with two convolution kernels (bone and standard). A total of 171 nodules were selected as the training-validation set, and 44 nodules as the testing set. Logistic regression and a DL framework (DenseNet) were selected to tackle the task. Three logical experiments were implemented to fully explore the influence of the studied parameters on diagnostic performance. The receiver operating characteristic (ROC) curve was used to evaluate the performance of the constructed models. RESULTS: In Experiments A and B, no statistically significant differences were found for the radiomic method, whereas two and six pairs, respectively, were statistically significant (P < 0.05) for the DL method. In Experiment C, significant differences were found in one radiomics model and four DL models. Moreover, in the DL method, models constructed with standard convolution kernel data outperformed those constructed with bone convolution kernel data at all studied ASiR levels, and B0 and S60 performed best among the bone and standard convolution kernels, respectively. CONCLUSION: The results demonstrated that DL was more susceptible to CT parameter variability than radiomics. Standard convolution kernel images seem more appropriate for imaging analysis. Further investigation with a larger sample size is needed.
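The ROC analysis reduces to computing an AUC per reconstruction setting and comparing them. A minimal, self-contained illustration with hypothetical scores; the study's pairwise significance tests are not reproduced:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=44)      # hypothetical test-set labels
scores_bone = rng.random(44)              # model scores, bone-kernel data
scores_standard = rng.random(44)          # model scores, standard-kernel data

print("bone kernel AUC:", roc_auc_score(y_true, scores_bone))
print("standard kernel AUC:", roc_auc_score(y_true, scores_standard))
```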


Subjects
Lung Adenocarcinoma/diagnosis, Deep Learning, Medical Informatics Computing, Lung Adenocarcinoma/diagnostic imaging, Lung Adenocarcinoma/pathology, Area Under the Curve, Humans, Computer-Assisted Image Processing, Neoplasm Grading, Neoplasm Staging, ROC Curve, X-Ray Computed Tomography