Results 1 - 20 of 29
1.
J Imaging Inform Med ; 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38839673

ABSTRACT

Thyroid ultrasound video provides significant value for the diagnosis of thyroid diseases, but the ultrasound imaging process is often affected by speckle noise, resulting in poor video quality. Numerous video denoising methods have been proposed to remove noise while preserving texture details. However, existing methods still suffer from the following problems: (1) relevant temporal features in low-contrast ultrasound video cannot be accurately aligned and effectively aggregated by simple optical flow or motion estimation, resulting in artifacts and motion blur in the video; (2) the fixed receptive field used in spatial feature integration lacks the flexibility to aggregate features across the global region of interest and is susceptible to interference from irrelevant noisy regions. In this work, we propose a deformable spatial-temporal attention denoising network to remove speckle noise in thyroid ultrasound video. The entire network follows a bidirectional feature propagation mechanism to efficiently exploit the spatial-temporal information of the whole video sequence. In this process, two modules are proposed to address the above problems: (1) a deformable temporal attention module (DTAM), applied after optical flow pre-alignment, further captures and aggregates relevant temporal features according to learned offsets between frames, so that inter-frame information can be exploited even with imprecise flow estimation under the low contrast of ultrasound video; (2) a deformable spatial attention module (DSAM) flexibly integrates spatial features across the global region of interest through learned intra-frame offsets, so that irrelevant noisy information is ignored and essential information is precisely exploited. Finally, all refined features are rectified and merged through residual convolution blocks to recover the clean video frames.
Experimental results on our thyroid ultrasound video (US-V) dataset and the DDTI dataset demonstrate that our proposed method exceeds other state-of-the-art methods by 1.2 to 1.3 dB in PSNR and yields clearer texture detail. Moreover, the proposed model can assist thyroid nodule segmentation methods in achieving more accurate segmentation, which provides an important basis for thyroid diagnosis. In the future, the proposed model can be improved and extended to other medical image sequence datasets, including CT and MRI slice denoising. The code and datasets are provided at https://github.com/Meta-MJ/DSTAN.
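The core of the DTAM idea, sampling a neighboring frame at learned per-pixel offsets and blending it with the current frame under attention weights, can be sketched in a few lines. This is a simplified numpy illustration, not the paper's implementation; the offsets and weights here stand in for quantities the network would learn:

```python
import numpy as np

def bilinear_sample(feat, ys, xs):
    """Bilinearly sample a 2-D feature map at fractional (y, x) positions."""
    h, w = feat.shape
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy = np.clip(ys - y0, 0.0, 1.0)   # clamp at the image border
    dx = np.clip(xs - x0, 0.0, 1.0)
    return (feat[y0, x0] * (1 - dy) * (1 - dx)
            + feat[y0 + 1, x0] * dy * (1 - dx)
            + feat[y0, x0 + 1] * (1 - dy) * dx
            + feat[y0 + 1, x0 + 1] * dy * dx)

def deformable_aggregate(ref, neighbor, offsets, weights):
    """Blend a neighboring frame, sampled at offset-shifted positions,
    into the reference frame.

    offsets: (H, W, 2) per-pixel (dy, dx) corrections on top of flow
    pre-alignment; weights: (H, W) attention weights in [0, 1].
    """
    h, w = ref.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    aligned = bilinear_sample(neighbor, yy + offsets[..., 0], xx + offsets[..., 1])
    return weights * aligned + (1 - weights) * ref
```

With zero offsets and full weights the neighbor passes through unchanged; an offset of one pixel shifts the sampling grid accordingly.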

2.
Med Biol Eng Comput ; 62(7): 1991-2004, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38429443

ABSTRACT

Detection of suspicious pulmonary nodules from lung CT scans is a crucial task in computer-aided diagnosis (CAD) systems. In recent years, various deep learning-based approaches have been proposed and have demonstrated significant potential for this task. However, existing deep convolutional neural networks exhibit limited long-range dependency capabilities and neglect crucial contextual information, resulting in reduced performance on detecting small nodules in CT scans. In this work, we propose a novel end-to-end framework called LGDNet for the detection of suspicious pulmonary nodules in lung CT scans by fusing local features and global representations. To overcome the limited long-range dependency capabilities inherent in convolutional operations, a dual-branch module is designed to integrate a convolutional neural network (CNN) branch that extracts local features with a transformer branch that captures global representations. To further address the misalignment between local features and global representations, an attention gate module is proposed in the up-sampling stage to selectively combine misaligned semantic data from both branches, resulting in more accurate detection of small nodules. Our experiments on the large-scale LIDC dataset demonstrate that the proposed LGDNet, with the dual-branch module and attention gate module, significantly improves nodule detection sensitivity, achieving a final competition performance metric (CPM) score of 89.49% and outperforming state-of-the-art nodule detection methods. This indicates its potential for clinical application in the early diagnosis of lung diseases.
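The attention-gate fusion of the two branches can be illustrated with a toy example: a sigmoid gate computed from both branches decides, per element, how much of each branch to keep. This is a hypothetical elementwise gate, not the paper's module; `w_l` and `w_g` stand in for learned parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(local_feat, global_feat, w_l, w_g):
    """Blend CNN local features and transformer global features with a
    gate derived from both inputs (illustrative scalar weights)."""
    gate = sigmoid(local_feat * w_l + global_feat * w_g)
    return gate * local_feat + (1 - gate) * global_feat
```

When the gate saturates toward 1 the local branch dominates; at 0.5 the two branches are averaged.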


Subject(s)
Lung Neoplasms , Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/diagnosis , Deep Learning , Diagnosis, Computer-Assisted/methods , Solitary Pulmonary Nodule/diagnostic imaging , Algorithms , Multiple Pulmonary Nodules/diagnostic imaging , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
3.
Dalton Trans ; 52(47): 17981-17992, 2023 Dec 05.
Article in English | MEDLINE | ID: mdl-37982647

ABSTRACT

We studied the Ni-Cu-acid multifunctional synergism in NiCu-phyllosilicate catalysts toward the hydrogenation of 1,4-butynediol to 1,4-butanediol by varying the reduction temperature, which activates different bimetal and support interactions. Compared with a monometallic Ni phyllosilicate (phy), which showed only one type of metal species when reduced at ∼750 °C, the bimetallic Ni-Cu-phyllosilicate derived catalysts contain three types of metal species, namely Cuphy, differentiated Ni, and Niphy. Thorough structure-activity/selectivity correlation investigations showed that, although the Ni9Cu1-P catalyst matrix can produce tiny amounts of differentiated Ni0 species under the induction of reduced Cu0 under the R250 condition, it could not form Ni-Cu bimetallic interactions for the collaborative hydrogenation of 1,4-butynediol, and the product remains in the semi-hydrogenated state. When the reduction temperature is raised to 500 °C, stable Ni-Cu alloy active sites form, accompanied by a strong metal-support interaction and a metal-acid effect derived from the intimate contact between the extracted metal sites and the surviving functional phyllosilicate support; these functionalities give the R500 sample superior hydrogenation performance, with a 1,4-butanediol yield greater than 91.2%.

4.
Med Biol Eng Comput ; 61(12): 3319-3333, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37668892

ABSTRACT

Eye diseases seriously affect human health, and accurate detection of the optic disc contour is one of the important steps in diagnosing and treating them. However, the structure of fundus images is complex, and the optic disc region is often disturbed by blood vessels. Since the optic disc is usually a salient region in fundus images, we propose a weakly-supervised optic disc detection method based on a fully convolutional network (FCN) combined with a weighted low-rank matrix recovery model (WLRR). First, we extract low-level features of the fundus image and cluster the pixels using the Simple Linear Iterative Clustering (SLIC) algorithm to generate the feature matrix. Second, the top-down semantic prior information provided by the FCN and the bottom-up background prior information of the optic disc region are used to jointly construct a prior-information weighting matrix, which more accurately guides the decomposition of the feature matrix into a sparse matrix representing the optic disc and a low-rank matrix representing the background. Experimental results on the DRISHTI-GS and IDRiD datasets show that our method segments the optic disc region accurately and outperforms existing weakly-supervised optic disc segmentation methods. Graphical abstract: optic disc segmentation.
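The weighted low-rank plus sparse decomposition at the heart of WLRR can be sketched with a standard alternating proximal scheme: singular value thresholding for the low-rank background and weighted soft thresholding for the sparse salient part. This is a simplified stand-in for the paper's model, with scalar thresholds in place of its tuned parameters:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Elementwise soft threshold: proximal operator of a (weighted) l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def weighted_lowrank_sparse(F, W, lam=0.1, n_iter=50):
    """Split feature matrix F into a low-rank background L and a sparse
    salient part S; per-entry prior weights W raise or lower the sparsity
    penalty (larger weight = entry pushed toward background)."""
    L = np.zeros_like(F)
    S = np.zeros_like(F)
    for _ in range(n_iter):
        L = svt(F - S, 1.0)
        S = soft(F - L, lam * W)
    return L, S
```

An isolated spike ends up in S under uniform weights, while a large prior weight on that entry suppresses it from S entirely.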


Subject(s)
Glaucoma , Optic Disk , Humans , Optic Disk/diagnostic imaging , Fundus Oculi , Algorithms , Neural Networks, Computer
5.
J Digit Imaging ; 36(4): 1894-1909, 2023 08.
Article in English | MEDLINE | ID: mdl-37118101

ABSTRACT

Computed tomography (CT) has played an essential role in the field of medical diagnosis. Considering the potential risk of exposing patients to X-ray radiation, low-dose CT (LDCT) images have been widely applied in the medical imaging field. Since reducing the radiation dose may introduce noise and artifacts, methods that eliminate the noise and artifacts in LDCT images have drawn increasing attention and produced impressive results over the past decades. However, recently proposed methods mostly suffer from residual noise, over-smoothed structures, or false lesions derived from noise. To tackle these issues, we propose a novel degradation-adaption local-to-global transformer (DALG-Transformer) for restoring LDCT images. Specifically, the DALG-Transformer is built on self-attention modules, which excel at modeling long-range information between image patch sequences. Meanwhile, an unsupervised degradation representation learning scheme, developed here for the first time in medical image processing, learns abstract degradation representations of LDCT images, which can distinguish various degradations in the representation space rather than the pixel space. Then, we introduce a degradation-aware modulated convolution and gated mechanism into the building modules (i.e., multi-head attention and feed-forward network) of each Transformer block, bringing in the complementary strength of convolution operations to emphasize spatially local context. Experimental results show that the DALG-Transformer provides superior performance in noise removal, structure preservation, and elimination of false lesions compared with five representative deep networks. The proposed network may be readily applied to other image processing tasks, including image reconstruction, image deblurring, and image super-resolution.
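The degradation-aware modulated convolution can be illustrated in one dimension: a degradation embedding rescales a shared kernel before an ordinary convolution. This is a hypothetical sketch, not the paper's module; `w_mod` stands in for a learned projection and `deg_embedding` for the learned degradation representation:

```python
import numpy as np

def modulated_conv1d(x, kernel, deg_embedding, w_mod):
    """Valid cross-correlation of x with a kernel whose taps are scaled
    by a degradation embedding (scale = 1 when the embedding is zero)."""
    scale = 1.0 + np.tanh(w_mod @ deg_embedding)   # one scale per tap
    k = kernel * scale
    # np.convolve flips the kernel, so pre-flip to get correlation
    return np.convolve(x, k[::-1], mode="valid")
```

A zero embedding reduces this to a plain convolution; a nonzero embedding modulates the response according to the estimated degradation.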


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Radiation Dosage , Image Processing, Computer-Assisted/methods , Computers , Artifacts , Signal-To-Noise Ratio , Algorithms
6.
Comput Methods Programs Biomed ; 232: 107449, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36871547

ABSTRACT

BACKGROUND AND OBJECTIVE: Computed tomography (CT) imaging technology has played a significant role in the diagnosis and treatment of various lung diseases, but degradations in CT images usually cause the loss of detailed structural information and hinder clinicians' judgement. Therefore, reconstructing noise-free, high-resolution CT images with sharp details from degraded ones is of great importance for computer-assisted diagnosis (CAD) systems. However, current image reconstruction methods suffer from unknown parameters of multiple degradations in actual clinical images. METHODS: To solve these problems, we propose a unified framework, called the Posterior Information Learning Network (PILN), for blind reconstruction of lung CT images. The framework consists of two stages. First, a noise level learning (NLL) network quantifies the Gaussian and artifact noise degradations into different levels; inception-residual modules extract multi-scale deep features from the noisy image, and residual self-attention structures refine the deep features into essential representations of noise. Second, taking the estimated noise levels as prior information, a cyclic collaborative super-resolution (CyCoSR) network iteratively reconstructs the high-resolution CT image and estimates the blur kernel. Two convolutional modules, named the Reconstructor and the Parser, are designed based on a cross-attention transformer structure. The high-resolution image is restored from the degraded image by the Reconstructor under the guidance of the predicted blur kernel, while the blur kernel is estimated by the Parser from the reconstructed and degraded images. The NLL and CyCoSR networks are formulated as an end-to-end framework to handle multiple degradations simultaneously.
RESULTS: The proposed PILN is applied to The Cancer Imaging Archive (TCIA) dataset and the Lung Nodule Analysis 2016 Challenge (LUNA16) dataset to evaluate its ability to reconstruct lung CT images. Compared with state-of-the-art image reconstruction algorithms, it provides high-resolution images with less noise and sharper details with respect to quantitative benchmarks. CONCLUSIONS: Extensive experimental results demonstrate that the proposed PILN achieves better performance on blind reconstruction of lung CT images, providing noise-free, detail-sharp, high-resolution images without knowing the parameters of multiple degradation sources.


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Lung/diagnostic imaging , Algorithms , Computers , Signal-To-Noise Ratio
7.
Comput Biol Med ; 153: 106453, 2023 02.
Article in English | MEDLINE | ID: mdl-36603434

ABSTRACT

Deep learning based medical image segmentation methods have been widely used for thyroid gland segmentation from ultrasound images, which is of great importance for the diagnosis of thyroid disease since it provides various valuable sonography features. However, existing thyroid gland segmentation models suffer from two problems: (1) low-level features that are significant in depicting thyroid boundaries are gradually lost during the feature encoding process; (2) contextual features reflecting the differences between the thyroid and other anatomies during the ultrasound diagnosis process are either omitted by 2D convolutions or weakly represented by 3D convolutions due to high redundancy. In this work, we propose a novel hybrid transformer UNet (H-TUNet) to segment thyroid glands in ultrasound sequences, which consists of two parts: (1) a 2D Transformer UNet applies a designed multi-scale cross-attention transformer (MSCAT) module on every skip connection of the UNet, so that low-level features from different encoding layers are integrated and refined according to the high-level features in the decoding scheme, leading to better representation of differences between anatomies in a single ultrasound frame; (2) a 3D Transformer UNet applies a 3D self-attention transformer (SAT) module to the bottom layer of the 3D UNet, so that contextual features representing visual differences between regions and consistencies within regions are strengthened across successive frames of the video. The learning process of the H-TUNet is formulated as a unified end-to-end network, so intra-frame feature extraction and inter-frame feature aggregation are learned and optimized jointly. The proposed method was evaluated on the Thyroid Segmentation in Ultrasonography Dataset (TSUD) and the TG3k dataset.
Experimental results demonstrated that our method outperformed other state-of-the-art methods on the chosen benchmarks for thyroid gland segmentation.


Subject(s)
Benchmarking , Thyroid Gland , Thyroid Gland/diagnostic imaging , Ultrasonography , Image Processing, Computer-Assisted
8.
BMC Med Inform Decis Mak ; 22(1): 315, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36457119

ABSTRACT

BACKGROUND: Named entity recognition (NER) of electronic medical records is an important task in clinical medical research. Although deep learning combined with pretraining models performs well in recognizing entities in clinical texts, Chinese electronic medical records have a special text structure and vocabulary distribution, so general pretraining models cannot effectively incorporate entities and medical domain knowledge into representation learning; moreover, individual deep network models lack the ability to fully extract the rich features of complex texts, which negatively affects named entity recognition for electronic medical records. METHODS: To better represent electronic medical record text, we extract the text's local features and multilevel sequence interaction information to improve the effectiveness of electronic medical record named entity recognition. This paper proposes a hybrid neural network model based on medical MC-BERT, namely, the MC-BERT + BiLSTM + CNN + MHA + CRF model. First, MC-BERT is used as the word embedding model of the text to obtain word vectors; then BiLSTM and CNN capture, respectively, the forward and backward sequence information and the local context of the word vectors to obtain the corresponding feature vectors. After the two feature vectors are merged, they are sent to multihead self-attention (MHA) to obtain multilevel semantic features, and finally, a CRF is used to decode the features and predict the label sequence. RESULTS: The experiments show that the F1 values of our proposed hybrid neural network model based on MC-BERT reach 94.22%, 86.47%, and 92.28% on the CCKS-2017, CCKS-2019, and cEHRNER datasets, respectively. Compared with the general-domain BERT-based BiLSTM + CRF, our F1 values increased by 0.89%, 1.65%, and 2.63%. Finally, we analyzed the effect of an unbalanced number of entities in the electronic medical records on the results of the NER experiment.
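The final CRF decoding step is standard Viterbi decoding over emission and transition scores. A minimal numpy sketch follows; the label scores here are toy values, not MC-BERT outputs:

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Find the label sequence maximizing emission + transition scores
    for a linear-chain CRF.

    emissions: (T, K) per-token label scores from the encoder
    transitions: (K, K) score of moving from label i to label j
    """
    T, K = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = np.argmax(total, axis=0)   # best previous label
        score = np.max(total, axis=0)
    best = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):               # backtrack
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]
```

With strong emissions the path simply follows the per-token maxima; strong transitions can override weak emissions.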


Subject(s)
Electronic Health Records , Names , Humans , Neural Networks, Computer , Asian People , China
9.
Comput Biol Med ; 150: 106112, 2022 11.
Article in English | MEDLINE | ID: mdl-36209555

ABSTRACT

Computed tomography (CT) has played an essential role in the field of medical diagnosis, but the blurry edges and unclear textures in traditional CT images usually interfere with the subsequent judgement of radiologists and clinicians. Deep learning based image super-resolution methods have recently been applied to CT image restoration. However, different levels of CT image detail are mixed together and are difficult to map from deep features by traditional convolution operations. Moreover, features representing regions of interest (ROIs) in CT images are treated the same as those representing background, resulting in a low concentration of meaningful features and high computational redundancy. To tackle these issues, a CT image super-resolution network is proposed based on a hybrid attention mechanism and global feature fusion, which consists of the following three parts: 1) stacked Swin Transformer blocks are used as the backbone to extract initial features from the degraded CT image; 2) a multi-branch hierarchical self-attention module (MHSM) adaptively maps multi-level features representing different levels of image information from the initial features and establishes the relationships among these features through a self-attention mechanism, where three branches apply different strategies for integrating convolution, down-sampling, and up-sampling operations according to three scale factors; 3) a multidimensional local topological feature enhancement module (MLTEM) is plugged into the end of the backbone to refine features in the channel and spatial dimensions simultaneously, so that features representing ROIs are enhanced while meaningless ones are eliminated. Experimental results demonstrate that our method outperforms state-of-the-art super-resolution methods on restoring CT images with respect to peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) indices.


Subject(s)
Computers , Image Processing, Computer-Assisted , Humans , Radiologists , Signal-To-Noise Ratio , Tomography, X-Ray Computed
10.
Front Neurorobot ; 16: 1007939, 2022.
Article in English | MEDLINE | ID: mdl-36247359

ABSTRACT

Image classification assigns an image to a category according to the information it contains, so extracting image feature information is an important research topic in image classification. Traditional image classification mainly uses machine learning methods to extract features. With the continuous development of deep learning, various deep learning algorithms have gradually been applied to image classification. However, traditional deep learning-based image classification methods have low classification efficiency and long convergence times, and the trained networks are prone to over-fitting. In this paper, we present a novel CapsNet neural network based on the MobileNetV2 structure for robot image classification. To address the classification accuracy typically sacrificed by lightweight networks, MobileNetV2 is taken as the base network architecture. CapsNet is improved by optimizing the dynamic routing algorithm to generate the feature graph, and an attention module is introduced to increase the weight of the salient feature graph learned by the convolutional layer, improving classification accuracy. The parallel input of spatial and channel information reduces the computation and complexity of the network. Finally, experiments are carried out on the CIFAR-100 dataset. The results show that the proposed model is superior to other robot image classification models in terms of classification accuracy and robustness.
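The dynamic routing that the paper optimizes is CapsNet's routing-by-agreement. A minimal numpy sketch of the vanilla algorithm follows; the paper's optimized variant is not specified at code level, so this shows only the baseline idea:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """CapsNet squash nonlinearity: keeps direction, maps norm into [0, 1)."""
    n2 = np.sum(v ** 2, axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + eps)

def dynamic_routing(u_hat, n_iter=3):
    """Routing by agreement.

    u_hat: (n_in, n_out, dim) predictions each input capsule makes for
    each output capsule; returns (n_out, dim) output capsule vectors.
    """
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                 # routing logits
    for _ in range(n_iter):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax
        s = (c[..., None] * u_hat).sum(axis=0)  # weighted vote
        v = squash(s)
        b = b + (u_hat * v[None]).sum(axis=-1)  # reward agreement
    return v
```

Inputs whose predictions agree with an output capsule's direction get their routing coefficients reinforced over the iterations.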

11.
Signal Process Image Commun ; 108: 116835, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35935468

ABSTRACT

Coronavirus Disease 2019 (COVID-19) has spread globally since the first case was reported in December 2019, becoming a worldwide existential health crisis with over 90 million confirmed cases. Segmentation of lung infection from computed tomography (CT) scans via deep learning methods has great potential for assisting in the diagnosis and healthcare of COVID-19. However, current deep learning methods for segmenting infection regions from lung CT images suffer from three problems: (1) low differentiation of semantic features between COVID-19 infection regions, other pneumonia regions, and normal lung tissues; (2) high variation of visual characteristics between different COVID-19 cases or stages; (3) high difficulty in constraining the irregular boundaries of COVID-19 infection regions. To solve these problems, a multi-input directional UNet (MID-UNet) is proposed to segment COVID-19 infections in lung CT images. For the input part of the network, we first propose an image blurry descriptor to reflect the texture characteristics of the infections. The original CT image, the image enhanced by adaptive histogram equalization, the image filtered by a non-local means filter, and the blurry feature map are then adopted together as the input of the proposed network. For the structure of the network, we propose a directional convolution block (DCB), which consists of four directional convolution kernels. DCBs are applied on the short-cut connections to refine the extracted features before they are transferred to the de-convolution parts. Furthermore, we propose a contour loss based on the local curvature histogram and combine it with the binary cross entropy (BCE) loss and the intersection over union (IoU) loss for better segmentation boundary constraint. Experimental results on the COVID-19-CT-Seg dataset demonstrate that our proposed MID-UNet provides superior performance over state-of-the-art methods in segmenting COVID-19 infections from CT images.
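The BCE and IoU terms of the combined loss are standard and can be written down directly; the contour loss based on the local curvature histogram is paper-specific and omitted here, so this sketch covers only the two generic terms:

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross entropy over predicted probabilities in (0, 1)."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def soft_iou_loss(pred, target, eps=1e-7):
    """1 - soft Jaccard index; differentiable surrogate for IoU."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return 1.0 - (inter + eps) / (union + eps)

def combined_loss(pred, target, w_bce=1.0, w_iou=1.0):
    """Weighted sum of the BCE and soft-IoU terms (contour term omitted)."""
    return w_bce * bce_loss(pred, target) + w_iou * soft_iou_loss(pred, target)
```

A perfect prediction drives both terms to (near) zero, while a fully wrong mask keeps the IoU term at its maximum of 1.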

12.
Front Neurorobot ; 16: 797231, 2022.
Article in English | MEDLINE | ID: mdl-35185509

ABSTRACT

Blind face restoration (BFR) from severely degraded face images is important in face image processing and has attracted increasing attention due to its wide applications. However, because of the complex unknown degradations in real-world scenarios, existing prior-based methods tend to restore faces with unstable quality. In this article, we propose a multi-prior collaboration network (MPCNet) to seamlessly integrate the advantages of generative priors and face-specific geometry priors. Specifically, we pretrain a high-quality (HQ) face synthesis generative adversarial network (GAN) and a parsing mask prediction network, and then embed them into a U-shaped deep neural network (DNN) as decoder priors to guide face restoration, during which the generative priors provide adequate details and the parsing map priors provide geometry and semantic information. Furthermore, we design adaptive prior feature fusion (APFF) blocks to incorporate the prior features from the pretrained face synthesis GAN and face parsing network in an adaptive and progressive manner, so that our MPCNet exhibits good generalization in real-world applications. Experiments demonstrate the superiority of our MPCNet in comparison with state-of-the-art methods and also show its potential for handling real-world low-quality (LQ) images from several practical applications.

13.
J Digit Imaging ; 35(3): 638-653, 2022 06.
Article in English | MEDLINE | ID: mdl-35212860

ABSTRACT

Automatic and accurate segmentation of the optic disc (OD) and optic cup (OC) in fundus images is a fundamental task in computer-aided diagnosis of ocular pathologies. Complex structures, such as blood vessels and the macular region, and the existence of lesions in fundus images bring great challenges to the segmentation task. Recently, convolutional neural network-based methods have exhibited their potential in fundus image analysis. In this paper, we propose a cascaded two-stage network architecture for robust and accurate OD and OC segmentation in fundus images. In the first stage, a U-Net like framework with an improved attention mechanism and focal loss detects accurate and reliable OD locations from the full-resolution fundus images. Based on the outputs of the first stage, a refined segmentation network in the second stage that integrates a multi-task framework and adversarial learning is further designed for OD and OC segmentation separately. The multi-task framework predicts the OD and OC masks while simultaneously estimating contours and distance maps as auxiliary tasks, which guarantees the smoothness and shape of the objects in the segmentation predictions. The adversarial learning technique encourages the segmentation network to produce outputs consistent with the true labels in spatial and shape distribution. We evaluate the performance of our method using two public retinal fundus image datasets (RIM-ONE-r3 and REFUGE). Extensive ablation studies and comparison experiments with existing methods demonstrate that our approach achieves competitive performance compared with state-of-the-art methods.


Subject(s)
Glaucoma , Optic Disk , Diagnostic Techniques, Ophthalmological , Fundus Oculi , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Optic Disk/diagnostic imaging
14.
Ann Transl Med ; 9(20): 1585, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34790791

ABSTRACT

BACKGROUND: Long-term exposure to a high-altitude environment with low pressure and low oxygen can cause abnormalities in the structure and function of the heart, in particular the right ventricle. Monitoring the structure and function of the right ventricle is therefore essential for early diagnosis and prognosis of high-altitude heart-related diseases. In this study, 7.0 T MRI was used to measure cardiac structure and function indicators in rats raised in natural plateau and plain environments. METHODS: Rats in two groups were raised in the two environments from 6 weeks of age for a period of 12 weeks. At 18 weeks of age, both groups underwent 7.0 T cardiac magnetic resonance (CMR) scanning. Professional cardiac post-processing software was used to analyze right ventricular end-diastolic volume (RVEDV), right ventricular end-systolic volume (RVESV), right ventricular stroke volume (RVSV), right ventricular ejection fraction (RVEF), right ventricular end-diastolic myocardial mass (RV Myo mass, diast), right ventricular end-systolic myocardial mass (RV Myo mass, syst), tricuspid valve end-diastolic caliber (TVD), tricuspid valve end-systolic caliber (TVS), right ventricular end-systolic long-axis (RVESL), and right ventricular end-diastolic long-axis (RVEDL). Prior to the CMR scan, blood was collected from the two groups of rats for evaluation of blood indicators. After the scan, the rats were sacrificed and the myocardial tissue morphology was observed under a light microscope. RESULTS: In the group of rats subject to chronic hypoxia at high altitude for 12 weeks (the plateau group), red blood cell (RBC) count, hemoglobin (HGB), and hematocrit (HCT) increased (P<0.05); RVEDV, RVESV, RVSV, RV Myo mass (diast), RV Myo mass (syst), TVS, RVESL, and RVEDL also increased (P<0.05).
Observation of the right ventricle of rats in the plateau group using a light microscope mainly showed a slightly widened myocardial space, myocardial cell turbidity, vacuolar degeneration, myocardial interstitial edema, vascular congestion and a small amount of inflammatory cell infiltration. CONCLUSIONS: The importance of ultra-high-field MRI for monitoring the early stages of rat heart injury has been demonstrated by studying the changes in the structure and function of the right ventricle of rats subject to chronic hypoxia at high altitude over a period of 12 weeks.

15.
BMC Bioinformatics ; 22(1): 532, 2021 Oct 30.
Article in English | MEDLINE | ID: mdl-34717542

ABSTRACT

BACKGROUND: Drug repositioning has attracted widespread attention from researchers because it effectively reduces the cost and time of developing new drugs. However, existing computational drug repositioning methods are limited by sparse data and classic fusion methods; we therefore use autoencoders and adaptive fusion for drug repositioning. RESULTS: In this study, a drug repositioning algorithm based on a deep autoencoder and adaptive fusion was proposed to mitigate the reduced precision and low-efficiency multisource data fusion caused by data sparseness. Specifically, a drug is repositioned by fusing drug-disease associations, drug target proteins, drug chemical structures, and drug side effects. First, drug feature data integrating drug target proteins and chemical structures were reduced in dimension via a deep autoencoder to characterize feature representations more densely and abstractly. Then, disease similarity was computed using drug-disease association data, while drug similarity was calculated from drug feature and drug-side effect data. Predictions of drug-disease associations were calculated using a top-k neighbor method commonly used in predictive drug repositioning studies. Finally, a predicted matrix of drug-disease associations was obtained by fusing a wide variety of data via adaptive fusion. Experimental results show that the proposed algorithm achieves higher precision and recall than the DRCFFS, SLAMS, and BADR algorithms on the same dataset. CONCLUSION: The proposed algorithm contributes to investigating novel uses of drugs, as shown in a case study of Alzheimer's disease, and can therefore provide auxiliary support for clinical trials of drug repositioning.
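The top-k neighbor prediction step can be sketched directly: each drug's unknown associations are predicted as a similarity-weighted average over its k most similar drugs. Variable names are illustrative; the paper's similarity matrices come from the autoencoder features and side-effect data:

```python
import numpy as np

def topk_neighbor_predict(assoc, drug_sim, k=2):
    """Predict drug-disease association scores from the k nearest drugs.

    assoc: (n_drugs, n_diseases) known 0/1 associations
    drug_sim: (n_drugs, n_drugs) drug-drug similarity matrix
    """
    n = assoc.shape[0]
    pred = np.zeros_like(assoc, dtype=float)
    for i in range(n):
        sims = drug_sim[i].copy()
        sims[i] = -np.inf                  # exclude the drug itself
        nbrs = np.argsort(sims)[-k:]       # k most similar drugs
        w = drug_sim[i, nbrs]
        pred[i] = w @ assoc[nbrs] / (w.sum() + 1e-12)
    return pred
```

A drug inherits the associations of highly similar neighbors; a drug with no similar neighbors gets scores near zero.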


Subject(s)
Alzheimer Disease , Drug-Related Side Effects and Adverse Reactions , Algorithms , Computational Biology , Drug Repositioning , Humans
16.
J Healthc Eng ; 2021: 3561134, 2021.
Article in English | MEDLINE | ID: mdl-34512935

ABSTRACT

We present in this paper a novel optic disc detection method based on a fully convolutional network and visual saliency in retinal fundus images. First, we employ a morphological reconstruction-based object detection method to locate the optic disc region roughly. According to the location result, a 400 × 400 image patch covering the whole optic disc is obtained by cropping the original retinal fundus image. Second, the Simple Linear Iterative Clustering (SLIC) approach is utilized to segment this image patch into many smaller superpixels. Third, each superpixel is assigned a uniform initial saliency value according to background prior information, based on the assumption that superpixels located on the boundary of the image belong to the background. Meanwhile, we use a pretrained fully convolutional network to extract deep features from different layers of the network and design a strategy to represent each superpixel by these deep features. Finally, both the background prior information and the deep features are integrated into a single-layer cellular automata framework to obtain an accurate optic disc detection result. We utilize the DRISHTI-GS and RIM-ONE r3 datasets to evaluate the performance of our method. The experimental results demonstrate that the proposed method effectively overcomes the influence of intensity inhomogeneity, weak contrast, and the complex surroundings of the optic disc, and has superior performance in terms of accuracy and robustness.


Subject(s)
Optic Disk , Fundus Oculi , Humans , Optic Disk/diagnostic imaging
17.
Front Neurorobot ; 15: 700011, 2021.
Article in English | MEDLINE | ID: mdl-34276333

ABSTRACT

With the development of computer vision, high-quality images with rich information have great research potential in both daily life and scientific research. However, differences in lighting conditions, ambient noise, and other factors cause image quality to vary, which seriously hampers interpretation of the information in an image. Images captured by a camera in the dark are especially difficult to identify, and smart systems rely heavily on high-quality input images. Images collected in low-light environments are characterized by high noise and color distortion, which makes them difficult to use and prevents their rich information from being fully exploited. To improve the quality of low-light images, this paper proposes a heterogeneous low-light image enhancement method based on a DenseNet generative adversarial network. Firstly, the generator of the generative adversarial network is built on the DenseNet framework. Secondly, the mapping from low-light images to normal-light images is learned by the generative adversarial network. Thirdly, the enhancement of low-light images is thereby realized. The experimental results show that, in terms of the PSNR, SSIM, NIQE, UQI, NQE, and PIQE indexes, the proposed method compares favorably with state-of-the-art enhancement algorithms, improves image brightness more effectively, and reduces noise in the enhanced image.
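Of the quality metrics listed above, PSNR is the most commonly reported and has a simple closed form. A minimal reference implementation, assuming 8-bit images with a peak value of 255:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between a reference image and a test
    image, in dB: 10 * log10(max_val^2 / MSE). Identical images give
    infinity; larger values mean the images are closer."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, an all-black patch compared with an all-white patch has MSE equal to 255², giving a PSNR of 0 dB.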

18.
J Orthop Surg (Hong Kong) ; 29(2): 23094990211012846, 2021.
Article in English | MEDLINE | ID: mdl-33926334

ABSTRACT

OBJECTIVE: This study was designed to investigate the relationship between the laminar slope angle (LSA) and the lumbar disc degeneration grade, the cross-sectional area (CSA) of the multifidus muscle, the muscle-fat index, and the thickness of the ligamentum flavum. METHODS: We retrospectively analyzed 122 patients who were scheduled to undergo a lumbar operation for diagnoses associated with degenerative lumbar disease between January and December 2017. The L4-L5 disc grade was evaluated from preoperative sagittal T2-weighted magnetic resonance imaging of the lumbar region; the CSA of the multifidus and the muscle-fat index were measured at the L4 level, while the thickness of the ligamentum flavum was measured at the L4-L5 facet level from axial T2-weighted magnetic resonance imaging. The laminar slope was evaluated from preoperative three-dimensional computed tomography at the level of the facet joint tips, selected on the axial plane. Independent-sample t-tests were used to assess the association between age and the measurement indices. RESULTS: Our results showed that age was positively correlated with the LSA of L4 and L5, although there was no significant correlation between age and the difference between the two segments' LSAs. Partial correlation analysis, controlling for age, revealed a strong negative relationship between the LSA of L4 and the thickness of the ligamentum flavum on both the left and right sides. However, there was no correlation with the lumbar disc degeneration grade, the CSA of the multifidus, or the muscle-fat index. CONCLUSION: The thickness of the ligamentum flavum varied with anatomical differences in the LSA, but not with the lumbar disc degeneration grade, the CSA of the multifidus, or the muscle-fat index. A small change in LSA may cause large mechanical stress; this may be one of the causative factors of lumbar spinal stenosis.
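The partial correlation controlling for age used above can be computed by correlating regression residuals, a standard formulation. The sketch below uses illustrative variable names and synthetic inputs, not the study's clinical data:

```python
import numpy as np

def partial_corr(x, y, z):
    """Pearson correlation between x and y controlling for a confounder z
    (here, age): regress x and y on z, then correlate the residuals."""
    design = np.column_stack([np.ones_like(z), z])   # intercept + z
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]
```

Because the linear effect of z is removed first, two variables that only co-vary through age show a partial correlation near zero, while residual co-variation (such as the LSA-ligamentum flavum relationship reported here) remains.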


Subject(s)
Intervertebral Disc Degeneration/surgery , Ligamentum Flavum/diagnostic imaging , Lumbar Vertebrae , Spinal Stenosis/diagnostic imaging , Adult , Aged , Aged, 80 and over , Female , Humans , Hypertrophy/complications , Hypertrophy/diagnostic imaging , Hypertrophy/pathology , Imaging, Three-Dimensional , Intervertebral Disc Degeneration/diagnostic imaging , Ligamentum Flavum/pathology , Lumbar Vertebrae/diagnostic imaging , Lumbar Vertebrae/surgery , Magnetic Resonance Imaging , Male , Middle Aged , Retrospective Studies , Spinal Stenosis/etiology , Spinal Stenosis/surgery , Tomography, X-Ray Computed , Young Adult
19.
Spine (Phila Pa 1976) ; 46(17): E916-E925, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-33534519

ABSTRACT

STUDY DESIGN: Sequencing and experimental analysis of the expression profile of circular RNAs (circRNAs) in hypertrophic ligamentum flavum (LFH). OBJECTIVES: The aim of this study was to identify differentially expressed circRNAs between LFH and nonhypertrophic ligamentum flavum tissues from lumbar spinal stenosis (LSS) patients. SUMMARY OF BACKGROUND DATA: Hypertrophy of the ligamentum flavum (LF) can cause LSS, and circRNAs are important in various diseases. However, no circRNA expression patterns related to LF hypertrophy have been reported. METHODS: A total of 33 patients with LSS participated in this study. LF tissue samples were obtained when patients underwent decompressive laminectomy. The circRNA expression profile was analyzed by high-throughput transcriptome sequencing and validated with quantitative real-time polymerase chain reaction (PCR). Gene Ontology and Kyoto Encyclopedia of Genes and Genomes analyses were performed for the differentially expressed circRNA-associated genes and related pathways. The connections between circRNAs and microRNAs were explored using Cytoscape. The role of hsa_circ_0052318 in LF cell fibrosis was assessed by analyzing the expression of collagen I and collagen III. RESULTS: Of 4025 circRNAs, 2439 were differentially expressed between LFH and nonhypertrophic ligamentum flavum tissues, including 1276 upregulated and 1163 downregulated circRNAs. The Gene Ontology and Kyoto Encyclopedia of Genes and Genomes analyses revealed that these differentially expressed circRNAs function in biological processes, cellular components, and molecular functions; autophagy and mammalian target of rapamycin were the top two signaling pathways affected by these circRNAs. Five circRNAs (hsa_circ_0021604, hsa_circ_0025489, hsa_circ_0002599, hsa_circ_0052318, and hsa_circ_0003609) were confirmed by quantitative real-time PCR. The network indicated a strong relationship between circRNAs and miRNAs.
Furthermore, hsa_circ_0052318 overexpression decreased the mRNA and protein expression of collagen I and III in LF cells from LFH tissues. CONCLUSION: This study identified circRNA expression profiles characteristic of hypertrophied LF in LSS patients and demonstrated that hsa_circ_0052318 may play an important role in the pathogenesis of LF hypertrophy. Level of Evidence: N/A.


Subject(s)
Ligamentum Flavum , MicroRNAs , Spinal Stenosis , Humans , Hypertrophy/genetics , RNA, Circular , Spinal Stenosis/genetics
20.
Sensors (Basel) ; 20(15)2020 Aug 01.
Article in English | MEDLINE | ID: mdl-32752225

ABSTRACT

Pulmonary nodule detection in chest computed tomography (CT) is of great significance for the early diagnosis of lung cancer, and many computer-assisted pulmonary nodule detection methods have therefore been proposed. However, these methods still struggle to provide convincing results because nodules are easily confused with calcifications, vessels, or other benign lumps. In this paper, we propose a novel deep convolutional neural network (DCNN) framework for detecting pulmonary nodules in chest CT images. The framework consists of three cascaded networks. First, a U-Net integrating an inception structure and dense skip connections is proposed to segment the lung parenchyma from the chest CT image: the inception structure replaces the first convolution layer for better feature extraction over multiple receptive fields, while the dense skip connections reuse these features and propagate them through the network. Second, a modified U-Net in which all convolution layers are replaced by dilated convolutions is proposed to detect "suspicious nodules" in the image; dilated convolution enlarges the receptive field and improves the network's ability to learn global image information. Third, a modified U-Net adopting multi-scale pooling and multi-resolution convolution connections is proposed to identify the true pulmonary nodules among multiple candidate regions. During detection, the result of each step is used as the input of the next, following a "coarse-to-fine" detection process. Moreover, focal loss, perceptual loss, and Dice loss are used together in place of the cross-entropy loss to address the imbalanced distribution of positive and negative samples. We apply our method to two public datasets to evaluate its pulmonary nodule detection ability.
Experimental results illustrate that the proposed method outperforms state-of-the-art methods with respect to accuracy, sensitivity, and specificity.
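The focal loss mentioned above down-weights easy examples so that the rare positive (nodule) voxels dominate training. A minimal binary sketch following the standard formulation of Lin et al., with illustrative default parameters (the paper's exact settings are not given in the abstract):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t),
    averaged over samples. p are predicted probabilities of the positive
    class, y the binary ground-truth labels. gamma > 0 shrinks the loss
    of well-classified examples; alpha reweights the positive class."""
    p = np.clip(p, 1e-7, 1 - 1e-7)           # numerical stability
    pt = np.where(y == 1, p, 1 - p)          # prob. of the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # class weighting
    return float(np.mean(-at * (1 - pt) ** gamma * np.log(pt)))
```

With gamma = 0 and alpha = 0.5 this reduces to half the usual cross-entropy; increasing gamma progressively suppresses the contribution of confident, correct predictions.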
