Results 1 - 20 of 301
1.
Arq. bras. oftalmol ; 87(2): e2021, 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1527838

ABSTRACT

Purpose: To evaluate intraretinal layer thickness in the macular region and its correlation with the duration of uveitis and visual acuity in patients with Behçet uveitis. Methods: This cross-sectional study included 93 eyes of 57 patients with Behçet uveitis and 100 eyes of 50 healthy individuals admitted to a tertiary center from January to September 2017. Macular measurements were performed in all subjects via spectral-domain optical coherence tomography (SD-OCT), and the retina was divided into layers using the automated segmentation software on the SD-OCT device. Layer thicknesses were compared between the patient and control groups, and the correlation between OCT parameters and the duration of uveitis and visual acuity was evaluated in the patient group. Results: The mean age was 37.9 ± 10.8 (18-64) years in the patient group and 37.7 ± 12.2 (21-61) years in the control group (p=0.821). The mean duration of uveitis was 6.9 ± 4.7 (1-20) years. Total outer layer thickness was reduced in the patient group (p<0.001), whereas no statistically significant difference was found in the inner retinal layers except the inner nuclear layer. The duration of uveitis correlated negatively with outer retinal layer thickness (correlation coefficient = -0.250), while visual acuity correlated positively with central macular, total inner layer, and outer retinal layer thicknesses (correlation coefficients 0.194, 0.154, and 0.364, respectively). Inner nuclear layer thickness correlated negatively with visual acuity. Conclusions: Behçet uveitis can cause significant changes in the intraretinal layers of the macular region; retinal segmentation via SD-OCT can be useful in follow-up and for estimating visual loss in these patients.
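The thickness-acuity relationships above are reported as plain correlation coefficients; given per-eye arrays, they reduce to a one-line computation. A minimal sketch with synthetic data (the arrays, the 0.004 slope, and the noise level are illustrative assumptions, not study values):

```python
import numpy as np

# Synthetic per-eye data: outer retinal layer thickness (µm) and a visual
# acuity constructed to rise weakly with thickness, plus noise.
rng = np.random.default_rng(42)
outer_thickness = rng.normal(250.0, 20.0, size=93)  # 93 eyes, as in the study
visual_acuity = 0.004 * outer_thickness + rng.normal(0.0, 0.1, size=93)

# Pearson correlation coefficient, the quantity the abstract reports.
r = np.corrcoef(outer_thickness, visual_acuity)[0, 1]
```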



2.
Acta Anatomica Sinica ; (6): 73-81, 2024.
Article in Chinese | WPRIM | ID: wpr-1015147

ABSTRACT

Objective: Hippocampal atrophy is a clinically important marker for the diagnosis of many neuropsychiatric disorders such as Alzheimer's disease, so accurate segmentation of the hippocampus is an important scientific issue. With the development of deep learning, a large number of advanced automatic segmentation methods have been proposed. However, 3D hippocampal segmentation remains challenging due to various noise in MRI and unclear boundaries between the sub-regions of the hippocampus. This paper therefore proposes a new method to segment the hippocampal head, body, and tail more accurately. Methods: Two strategies were proposed to overcome these challenges. One was a spatial- and frequency-domain feature adaptive fusion strategy, which reduced the influence of noise on feature extraction by automatically selecting appropriate frequency combinations through the fast Fourier transform and convolution. The other was an inter-class boundary region enhancement strategy, which made the network focus on learning boundary regions by weighting the loss function over the boundary regions between classes, pinpointing the boundaries and regulating the sizes of the hippocampal head, body, and tail. Results: Experiments on a 50-case adolescent brain MRI dataset show that the method achieves state-of-the-art hippocampal segmentation, with improvements over existing methods for the hippocampal head, body, and tail. Ablation experiments demonstrated the effectiveness of the two proposed strategies, and the network also showed strong generalization ability on the 260-case Task04_Hippocampus dataset, indicating that the method can be used in broader hippocampal segmentation scenarios. Conclusion: The proposed method can help clinicians observe hippocampal atrophy more clearly and support more accurate diagnosis and follow-up.
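The frequency-domain side of the fusion strategy amounts to transforming features with an FFT and keeping only selected frequency bands. A toy stand-in for that idea (the fixed `keep_ratio` low-pass mask is an illustrative assumption; the paper selects frequencies adaptively):

```python
import numpy as np

def frequency_filtered(feature_map, keep_ratio=0.25):
    """Keep only the lowest-frequency components of a 2D feature map:
    FFT, zero out high frequencies, inverse FFT."""
    f = np.fft.fftshift(np.fft.fft2(feature_map))
    h, w = f.shape
    mask = np.zeros_like(f, dtype=bool)
    ch, cw = h // 2, w // 2
    rh, rw = max(1, int(h * keep_ratio)), max(1, int(w * keep_ratio))
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = True  # central (low-freq) block
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

# A noisy constant image: low-pass filtering should reduce the noise variance
# while preserving the mean (the DC component is kept).
rng = np.random.default_rng(0)
img = 5.0 + 0.5 * rng.standard_normal((32, 32))
smoothed = frequency_filtered(img)
```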

3.
Journal of Prevention and Treatment for Stomatological Diseases ; (12): 673-678, 2023.
Article in Chinese | WPRIM | ID: wpr-974754

ABSTRACT

Three-dimensional tooth segmentation is the segmentation of single-tooth models from a digital dental model. It is an important foundation for diagnosis, planning, treatment, and customized appliance manufacturing in digital orthodontics. With the deep integration of artificial intelligence and big data in stomatology, using deep learning algorithms to assist 3D tooth segmentation has gradually become mainstream. This review summarizes the current state of deep-learning-assisted 3D tooth segmentation in terms of dataset construction, algorithm architecture, algorithm performance, innovations and advantages, limitations of current research, and prospects. The literature shows that deep learning tooth segmentation methods can reach an accuracy of more than 95% with good robustness. However, the segmentation of complex dental models, running time, and the richness of training databases still need improvement. Developing lighter yet stronger ("consumption reduction and strong core") algorithms, establishing an authoritative multi-center data sample base, and expanding the depth and breadth of data applications will drive further development in this field.

4.
Journal of Southern Medical University ; (12): 985-993, 2023.
Article in Chinese | WPRIM | ID: wpr-987012

ABSTRACT

OBJECTIVE: To propose a tissue-aware contrast enhancement network (T-ACEnet) for CT image enhancement and validate its accuracy in CT organ segmentation tasks. METHODS: The original CT images were mapped to low-dynamic-range grayscale images with lung and soft-tissue window contrasts. The supervised sub-network learned to recognize the optimal window width and level settings of the lung and abdominal soft tissues via a lung mask, and the self-supervised sub-network used an extreme-value-suppression loss function to preserve more organ edge structure. The images generated by T-ACEnet were then fed into a segmentation network to segment multiple abdominal organs. RESULTS: The images produced by T-ACEnet provided more window-setting information in a single image, allowing physicians to conduct preliminary lesion screening. Compared with the second-best methods, T-ACE images improved SSIM, QABF, VIFF, and PSNR by 0.51, 0.26, 0.10, and 14.14, respectively, and reduced MSE by an order of magnitude. When T-ACE images were used as the input to segmentation networks, organ segmentation accuracy improved over the original CT images without changing the model: all five quantitative segmentation indices increased, by up to 4.16%. CONCLUSION: T-ACEnet perceptually improves the contrast of organ tissues and provides more comprehensive and continuous diagnostic information, and the T-ACE images it generates can significantly improve the performance of organ segmentation tasks.
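The window width/level mapping the network learns is, at its core, the standard radiology windowing formula. A minimal sketch of that formula (the HU values and window settings are typical illustrative choices, not the network's learned ones):

```python
import numpy as np

def apply_window(hu, level, width):
    """Map raw Hounsfield units to [0, 1] under a given window level/width,
    as a viewer would: linear ramp between level - width/2 and level + width/2."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

hu = np.array([-1000.0, -600.0, 40.0, 400.0])        # air, lung, soft tissue, bone
lung = apply_window(hu, level=-600.0, width=1500.0)  # typical lung window
soft = apply_window(hu, level=40.0, width=400.0)     # typical soft-tissue window
```

Under the soft-tissue window, air clips to 0 and bone saturates to 1, which is exactly the information loss that motivates combining multiple window contrasts in one image.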


Subject(s)
Learning , Image Enhancement , Tomography, X-Ray Computed
5.
Journal of Southern Medical University ; (12): 815-824, 2023.
Article in Chinese | WPRIM | ID: wpr-986993

ABSTRACT

OBJECTIVE: To propose a novel region-level self-supervised contrastive learning method, USRegCon (ultrastructural region contrast), based on the semantic similarity of ultrastructures, to improve model performance for glomerular ultrastructure segmentation on electron microscope images. METHODS: USRegCon used a large amount of unlabeled data to pre-train the model in three steps: (1) the model encoded and decoded the ultrastructural information in the image and adaptively divided the image into multiple regions based on the semantic similarity of the ultrastructures; (2) based on the divided regions, first-order grayscale region representations and deep semantic region representations were extracted for each region by a region pooling operation; (3) for the first-order grayscale region representations, a grayscale loss function was proposed to minimize the grayscale difference within regions and maximize the difference between regions, and for the deep semantic region representations, a semantic loss function was introduced to maximize the similarity of positive region pairs and the difference of negative region pairs in the representation space. The two loss functions were used jointly for pre-training. RESULTS: In the segmentation task for the three ultrastructures of the glomerular filtration barrier on the private GlomEM dataset, USRegCon achieved promising results for the basement membrane, endothelial cells, and podocytes, with Dice coefficients of (85.69 ± 0.13)%, (74.59 ± 0.13)%, and (78.57 ± 0.16)%, respectively. This performance is superior to many existing image-level, pixel-level, and region-level self-supervised contrastive learning methods and close to fully supervised pre-training on the large-scale labeled ImageNet dataset. CONCLUSION: USRegCon helps the model learn beneficial region representations from large amounts of unlabeled data, mitigating the scarcity of labeled data and improving deep-model performance for glomerular ultrastructure recognition and boundary segmentation.


Subject(s)
Humans , Electrons , Endothelial Cells , Learning , Podocytes , Kidney Diseases
6.
Chinese Journal of Radiation Oncology ; (6): 319-324, 2023.
Article in Chinese | WPRIM | ID: wpr-993194

ABSTRACT

Objective: To develop an automatic segmentation method for organs at risk (OARs) in head and neck cancer radiotherapy images based on multi-scale fusion and an attention mechanism. Methods: We propose a new OAR segmentation method for head and neck medical images based on the U-Net convolutional neural network. A spatial and channel squeeze-excitation (csSE) attention block was combined with the U-Net to enhance feature expression, and a multi-scale block was added in the U-Net encoding stage to supplement feature information. The Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD) were used as evaluation criteria. Results: Segmentation of 22 head-and-neck OARs was performed on the Medical Image Computing and Computer Assisted Intervention (MICCAI) StructSeg2019 dataset. The proposed method improved average segmentation accuracy by 3%-6% over existing methods, with an average DSC of 78.90% and an average 95% HD of 6.23 mm. Conclusion: Automatic OAR segmentation from head and neck CT using multi-scale fusion and an attention mechanism achieves high accuracy and is promising for improving the accuracy and efficiency of radiotherapy in clinical practice.
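The DSC metric used here (and throughout these studies) is a short computation on binary masks. A minimal sketch (the 4×4 toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1  # 4 foreground pixels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1  # 6 pixels, 4 overlapping
print(round(dice_coefficient(a, b), 3))  # → 0.8
```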

7.
Chinese Journal of Radiological Medicine and Protection ; (12): 73-77, 2023.
Article in Chinese | WPRIM | ID: wpr-993054

ABSTRACT

Image-guided radiation therapy (IGRT) is a radiotherapy technique guided by visual images, with advantages such as allowing an increased dose to the tumor target while reducing the dose to normal organs. Cone-beam CT (CBCT) is one of the most commonly used imaging modalities in IGRT, and rapid, accurate segmentation of the target and organs at risk on CBCT is of great significance for radiotherapy. Current research methods mainly include registration-based segmentation and deep-learning-based segmentation. This study reviews CBCT image segmentation methods, existing problems, and directions for development.

8.
Chinese Journal of Radiology ; (12): 522-527, 2023.
Article in Chinese | WPRIM | ID: wpr-992982

ABSTRACT

Objective: To explore the effect of a deep-learning-based joint myocardium-fibrosis segmentation model in the quantitative analysis of myocardial fibrosis in patients with dilated cardiomyopathy (DCM). Methods: Data from 200 patients with confirmed DCM and left-ventricular myocardial fibrosis detected by cardiac MR late gadolinium enhancement (CMR-LGE) in Xuzhou Central Hospital from January 2015 to April 2022 were retrospectively analyzed. Using a completely randomized design, the patients were divided into a training set (n=120), a validation set (n=30), and a test set (n=50). Radiologists outlined the left ventricular myocardium and selected normal myocardial regions, and fibrotic myocardium was extracted by thresholding at a number of standard deviations (SD) above normal, serving as the reference standard for left ventricle segmentation and fibrosis quantification. In the model, the left ventricular myocardium was segmented by a convex-prior U-Net, normal myocardial image blocks were recognized by a VGG classification network, and fibrotic myocardium was extracted by the SD threshold. Segmentation was evaluated using precision, recall, intersection over union (IOU), and the Dice coefficient. The consistency of the left-ventricular myocardial fibrosis ratio between the joint segmentation model and manual extraction was evaluated with the intra-class correlation coefficient (ICC). Samples were divided into mild and severe fibrosis by the median fibrosis rate, and fibrosis quantification was compared with the Mann-Whitney U test. Results: In the test set, myocardial segmentation achieved a precision of 0.827 (0.799, 0.854), recall of 0.849 (0.822, 0.876), IOU of 0.788 (0.760, 0.816), and Dice coefficient of 0.832 (0.807, 0.857). The fibrosis ratios from the joint segmentation model and manual extraction were highly consistent (ICC=0.991, P<0.001), and no statistically significant difference in ratio error was found between mild and severe fibrosis (P>0.05). Conclusions: The joint segmentation model automates the calculation of the left-ventricular myocardial fibrosis ratio, with results highly consistent with manual extraction, enabling accurate automatic quantitative analysis of myocardial fibrosis in patients with dilated cardiomyopathy.
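The SD-threshold step can be sketched in a few lines: pixels brighter than the mean of a reference normal-myocardium region by some multiple of its standard deviation are flagged as fibrosis. The `n_sd=5` value and the synthetic intensities below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fibrosis_mask(myocardium, normal_region, n_sd=5.0):
    """Flag myocardial pixels brighter than mean + n_sd * SD of a reference
    normal-myocardium region (the common n-SD LGE thresholding rule)."""
    thr = normal_region.mean() + n_sd * normal_region.std()
    return myocardium > thr

rng = np.random.default_rng(1)
normal = 100.0 + 5.0 * rng.standard_normal(1000)          # reference intensities
myo = np.concatenate([normal[:500], np.full(50, 300.0)])  # append bright "fibrosis"
mask = fibrosis_mask(myo, normal)
ratio = mask.mean()  # fibrosis fraction of the myocardium
```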

9.
Acta Anatomica Sinica ; (6): 553-559, 2023.
Article in Chinese | WPRIM | ID: wpr-1015188

ABSTRACT

Objective: The navigation system for robot-assisted knee arthroplasty uses a laser scanner to acquire intraoperative cartilage point clouds and align them with the preoperative model for automatic non-contact spatial registration. The intraoperative point cloud of the patient's knee contains many irrelevant background points from muscles, tendons, ligaments, and surgical instruments, and manually removing them consumes surgery time through human-computer interaction. This study therefore proposes a method for automatically extracting the knee cartilage surface point cloud for fast and accurate intraoperative registration. Methods: Because PointNet lacks an adequate description of the cartilage surface and local geometric information, it cannot extract cartilage point clouds with high precision. This paper proposes FPFH-PointNet, which incorporates fast point feature histograms (FPFH) to effectively describe the appearance of the cartilage point cloud and achieve automatic, efficient segmentation. Results: Point clouds of the distal femoral cartilage of 10 cadaveric knee specimens and one human leg model, scanned from different directions, served as the dataset. The segmentation accuracies of PointNet and FPFH-PointNet were 0.94 ± 0.003 and 0.98 ± 0, and the mean intersection over union (mIOU) values were 0.83 ± 0.015 and 0.93 ± 0.005, respectively. Compared with PointNet, FPFH-PointNet improved accuracy and mIOU by 4% and 10%, respectively, with an elapsed time of only about 1.37 s. Conclusion: FPFH-PointNet can accurately and automatically extract the knee cartilage point cloud and meets the performance requirements for intraoperative navigation.

10.
Journal of Clinical Otorhinolaryngology Head and Neck Surgery ; (12): 632-641, 2023.
Article in Chinese | WPRIM | ID: wpr-1011020

ABSTRACT

Objective: To explore the effect of a U-Net-based deep learning model for fully automatic image segmentation of the adenoid and nasopharyngeal airway. Methods: From March 2021 to March 2022, 240 children underwent cone-beam computed tomography (CBCT) in the Department of Otolaryngology, Head and Neck Surgery, General Hospital of Shenzhen University. Of these, 52 were selected for manual labeling of the nasopharyngeal airway and adenoid and used to train and validate the deep learning model. The model was then applied to the remaining data, and conventional two-dimensional indices were compared with the deep-learning-derived three-dimensional indices across all 240 datasets. Results: For the 52 modeling and training cases, there was no significant difference between the model's predictions and the doctors' manual labels (P>0.05). Model evaluation indices for nasopharyngeal airway volume were: mean intersection over union (MIOU) (86.32±0.54)%, Dice similarity coefficient (DSC) (92.91±0.23)%, accuracy (95.92±0.25)%, and precision (91.93±0.14)%; for adenoid volume: MIOU (86.28±0.61)%, DSC (92.88±0.17)%, accuracy (95.90±0.29)%, and precision (92.30±0.23)%. The two-dimensional index A/N correlated positively with the three-dimensional index AV/(AV+NAV) across the 240 children in different age groups (P<0.05), with a correlation coefficient of 0.74 for ages 9-13 years. Conclusion: The U-Net-based deep learning model performs well in automatic image segmentation of the adenoid and nasopharyngeal airway, has high application value, and shows a degree of generalization ability.
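The two indices being compared are simple ratios: the conventional 2D adenoid-nasopharyngeal (A/N) ratio and its 3D volumetric counterpart AV/(AV+NAV). A minimal sketch (the numbers are illustrative, not measurements from the paper):

```python
def an_ratio(adenoid_thickness_mm, nasopharynx_depth_mm):
    """Conventional 2-D adenoid-nasopharyngeal (A/N) ratio from a lateral view."""
    return adenoid_thickness_mm / nasopharynx_depth_mm

def volume_ratio(adenoid_vol_mm3, airway_vol_mm3):
    """3-D counterpart used in the study: AV / (AV + NAV)."""
    return adenoid_vol_mm3 / (adenoid_vol_mm3 + airway_vol_mm3)

# Illustrative numbers only:
print(round(an_ratio(12.0, 20.0), 2))          # → 0.6
print(round(volume_ratio(4500.0, 3000.0), 2))  # → 0.6
```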


Subject(s)
Child , Humans , Adolescent , Adenoids/diagnostic imaging , Image Processing, Computer-Assisted/methods , Pharynx , Cone-Beam Computed Tomography , Nose
11.
Journal of Forensic Medicine ; (6): 151-160, 2023.
Article in English | WPRIM | ID: wpr-981849

ABSTRACT

OBJECTIVES: To establish an LC-MS/MS method based on a single-hair micro-segmental technique and validate the detection of 42 psychoactive substances in 0.4 mm hair segments. METHODS: Each single hair was cut into 0.4 mm segments, which were immersed in a dithiothreitol-containing extraction medium and extracted by sonication. Mobile phase A was an aqueous solution containing 20 mmol/L ammonium acetate, 0.1% formic acid, and 5% acetonitrile; mobile phase B was acetonitrile. An electrospray ionization source in positive ion mode was used for data acquisition in multiple reaction monitoring (MRM) mode. RESULTS: The 42 psychoactive substances in hair showed good linearity within their respective ranges (r>0.99); the limits of detection were 0.2-10 pg/mm, the limits of quantification 0.5-20 pg/mm, the intra-day and inter-day precisions 1.5%-12.7%, the intra-day and inter-day accuracies 86.5%-109.2%, the recovery rates 68.1%-98.2%, and the matrix effects 71.3%-111.7%. The method was applied to hair samples collected from one volunteer 28 d after a single dose of zolpidem: zolpidem was detected in 5 hairs within segments 1.08-1.60 cm from the root, at concentrations of 0.62-20.5 pg/mm. CONCLUSIONS: The micro-segmental technique for single-hair analysis can be applied to the investigation of drug-facilitated sexual assault cases.


Subject(s)
Humans , Chromatography, Liquid/methods , Zolpidem , Tandem Mass Spectrometry/methods , Hair , Acetonitriles , Chromatography, High Pressure Liquid
12.
Journal of Biomedical Engineering ; (6): 392-400, 2023.
Article in Chinese | WPRIM | ID: wpr-981555

ABSTRACT

Medical image segmentation based on deep learning has become a powerful tool in medical image processing. Owing to the special nature of medical images, deep-learning-based segmentation algorithms face problems such as sample imbalance, blurred edges, false positives, and false negatives. Researchers have mostly addressed these problems by improving the network structure, and rarely from the non-structural side. The loss function is an important component of deep-learning-based segmentation: improving it can improve segmentation at the root, and because the loss function is independent of the network structure, it can be used in a plug-and-play fashion across network models and segmentation tasks. Starting from the difficulties in medical image segmentation, this paper first introduces loss functions and improvement strategies addressing sample imbalance, blurred edges, false positives, and false negatives, then analyzes the difficulties in current loss-function improvement, and finally discusses future research directions. It provides a reference for the reasonable selection, improvement, or innovation of loss functions and guides follow-up research.
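A concrete example of such a plug-and-play loss is the soft Dice loss, a common remedy for foreground/background imbalance because it normalizes by region size rather than pixel count. A minimal sketch (the 5%-foreground toy data and probability values are illustrative):

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-7):
    """Soft Dice loss on predicted foreground probabilities:
    1 - 2 * sum(p * t) / (sum(p) + sum(t))."""
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

target = np.zeros(100); target[:5] = 1.0  # tiny foreground: only 5% of pixels
good = np.where(target == 1, 0.9, 0.1)    # confident, mostly correct prediction
bad = np.full(100, 0.05)                  # predicts "background" everywhere

# The all-background prediction is heavily penalized even though it is
# 95% pixel-accurate -- exactly the imbalance problem Dice loss targets.
```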


Subject(s)
Algorithms , Image Processing, Computer-Assisted
13.
Journal of Biomedical Engineering ; (6): 234-243, 2023.
Article in Chinese | WPRIM | ID: wpr-981534

ABSTRACT

To address spatial inductive bias and the lack of effective global context representation in colon polyp image segmentation, which lead to loss of edge detail and mis-segmentation of lesion areas, a colon polyp segmentation method combining a Transformer with cross-level phase awareness is proposed. Starting from global feature transformation, the method uses a hierarchical Transformer encoder to extract semantic information and spatial details of lesion areas layer by layer. Second, a phase-aware fusion module (PAFM) is designed to capture cross-level interaction information and effectively aggregate multi-scale contextual information. Third, a position-oriented functional module (POF) is designed to integrate global and local feature information, fill semantic gaps, and suppress background noise. Fourth, a residual axis reverse attention module (RA-IA) improves the network's ability to recognize edge pixels. On the public datasets CVC-ClinicDB, Kvasir, CVC-ColonDB, and ETIS, the proposed method achieved Dice similarity coefficients of 94.04%, 92.04%, 80.78%, and 76.80%, and mean intersection over union of 89.31%, 86.81%, 73.55%, and 69.10%, respectively. The experimental results show that the method can effectively segment colon polyp images, offering a new tool for the diagnosis of colon polyps.


Subject(s)
Humans , Colonic Polyps/diagnostic imaging , Computer Simulation , Electric Power Supplies , Semantics , Image Processing, Computer-Assisted
14.
Journal of Biomedical Engineering ; (6): 226-233, 2023.
Article in Chinese | WPRIM | ID: wpr-981533

ABSTRACT

Magnetic resonance (MR) imaging is an important tool for prostate cancer diagnosis, and accurate segmentation of the prostate region in MR images by computer-aided diagnostic techniques is important for diagnosis. This paper proposes an improved end-to-end three-dimensional segmentation network based on the traditional V-Net to provide more accurate segmentation results. We fused a soft attention mechanism into V-Net's skip connections and combined short skip connections with small convolutional kernels to further improve segmentation accuracy. The prostate region was segmented on the Prostate MR Image Segmentation 2012 (PROMISE12) challenge dataset, and the model was evaluated using the Dice similarity coefficient (DSC) and Hausdorff distance (HD), reaching 0.903 and 3.912 mm, respectively. The results show that the algorithm provides more accurate three-dimensional segmentation, efficiently segmenting prostate MR images and providing a reliable basis for clinical diagnosis and treatment.


Subject(s)
Male , Humans , Prostate/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Imaging, Three-Dimensional/methods , Prostatic Neoplasms/diagnostic imaging
15.
Journal of Biomedical Engineering ; (6): 193-201, 2023.
Article in Chinese | WPRIM | ID: wpr-981529

ABSTRACT

When deep learning algorithms are applied to magnetic resonance (MR) image segmentation, large numbers of annotated images are needed as data support, but the specific nature of MR images makes acquiring them difficult and costly. To reduce this dependence on large amounts of annotated data, this paper proposes a meta-learning U-shaped network (Meta-UNet) for few-shot MR image segmentation that achieves good results from a small amount of annotated image data. Meta-UNet improves U-Net by introducing dilated convolution, which enlarges the model's receptive field and improves sensitivity to targets of different scales; an attention mechanism, which improves adaptability to different scales; and a meta-learning mechanism with a composite loss function for well-supervised and effective bootstrapping of model training. The Meta-UNet model is trained on different segmentation tasks and then evaluated on a new task, where it achieves high-precision segmentation of target images, improving the mean Dice similarity coefficient (DSC) over the voxel morph network (VoxelMorph), data augmentation using learned transformations (DataAug), and label transfer network (LT-Net). Experiments show that the proposed method can effectively perform MR image segmentation with a small number of samples and provides a reliable aid for clinical diagnosis and treatment.
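The receptive-field gain from dilated convolution can be quantified with the standard formula: for stride-1 layers, the receptive field grows by (k - 1) * d per layer of kernel size k and dilation d. A minimal sketch (the dilation schedule 1, 2, 4 is a common illustrative choice, not necessarily Meta-UNet's):

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 conv layers, each given as
    (kernel_size, dilation): RF = 1 + sum((k - 1) * d)."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d
    return rf

plain = receptive_field([(3, 1)] * 3)                # three ordinary 3x3 convs
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])  # dilations 1, 2, 4
print(plain, dilated)  # → 7 15
```

Same parameter count, more than double the receptive field, which is the point of the technique.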


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Magnetic Resonance Imaging
16.
International Eye Science ; (12): 1007-1011, 2023.
Article in Chinese | WPRIM | ID: wpr-973795

ABSTRACT

In recent years, ophthalmology, a medical field highly dependent on auxiliary imaging, has been at the forefront of applying deep learning algorithms. Morphological changes of the choroid are closely related to the occurrence, development, treatment, and prognosis of fundus diseases, and the rapid development of optical coherence tomography has greatly advanced accurate analysis of choroidal morphology and structure. Choroidal segmentation and related analysis are crucial for determining the pathogenesis and treatment strategies of eye diseases; however, choroidal segmentation currently relies mainly on tedious, time-consuming, and poorly reproducible manual delineation. To overcome these difficulties, deep learning methods for choroidal segmentation have been developed in recent years, greatly improving its accuracy and efficiency. This paper reviews the features of choroidal thickness in different eye diseases, explores the latest applications and advantages of deep learning models in measuring choroidal thickness, and focuses on the challenges these models face.

17.
Chinese Journal of Medical Instrumentation ; (6): 61-65, 2023.
Article in Chinese | WPRIM | ID: wpr-971304

ABSTRACT

To relieve the conflict between medical supply and demand and to improve the efficiency of medical image transmission, this study proposes an intelligent method for transmitting large-volume medical images. The method extracts and generates keyword pairs by analyzing medical diagnostic reports, and uses a 3D U-Net to segment the original image data into sub-areas based on anatomical structure. The sub-areas are then scored using the keyword pairs and preset scoring criteria and transmitted to the user front end in order of priority score. Experiments show that this method can fulfill physicians' radiology reading and diagnosis requirements with only ten percent of the data transmitted, efficiently optimizing traditional transmission procedures.
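The score-and-prioritize step reduces to ranking sub-areas by how well their tags match the report's keywords. A toy sketch of that ordering (the sub-area names, tag sets, and count-based scoring rule are all made-up assumptions; the paper uses its own keyword pairs and criteria):

```python
def transmission_order(subareas, keywords):
    """Order anatomical sub-areas for transmission: score each by how many
    report keywords its tag set matches, highest score first
    (alphabetical tie-break)."""
    scored = [(sum(k in tags for k in keywords), name)
              for name, tags in subareas.items()]
    return [name for score, name in sorted(scored, key=lambda t: (-t[0], t[1]))]

subareas = {"liver": {"liver", "lesion"}, "lung": {"lung"}, "kidney": set()}
order = transmission_order(subareas, ["liver", "lesion"])
print(order)  # → ['liver', 'kidney', 'lung']
```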

18.
Chinese Journal of Medical Instrumentation ; (6): 38-42, 2023.
Article in Chinese | WPRIM | ID: wpr-971300

ABSTRACT

Accurate segmentation of retinal blood vessels is of great significance for diagnosing, preventing, and detecting eye diseases. In recent years, the U-Net network and its many variants have reached an advanced level in medical image segmentation. Most of these networks down-sample intermediate feature layers with simple max pooling, which easily loses part of the information. This study therefore proposes a simple and effective new down-sampling method, pixel fusion pooling (PF-pooling), which fuses information from adjacent pixels well. PF-pooling is a lightweight, general module that can be integrated into various convolution-based network architectures. Experimental results on the DRIVE and STARE datasets show that a U-Net using PF-pooling improved the F1-score on STARE by 1.98%, accuracy by 0.2%, and sensitivity by 3.88%. The generality of the module was verified by substituting it into different models: PF-pooling also improved performance in both Dense-UNet and Res-UNet.
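The "fuse, don't discard" idea behind PF-pooling can be sketched with a space-to-depth rearrangement: each 2×2 neighborhood is stacked along the channel axis, so the spatial size halves while every pixel survives. This is only a minimal stand-in for the published module, which presumably follows the rearrangement with a learned fusion layer:

```python
import numpy as np

def pixel_fusion_downsample(x):
    """Down-sample H×W by 2 without discarding pixels: stack each 2×2
    neighborhood along the channel axis (space-to-depth).
    Input shape (C, H, W) with even H, W; output (4C, H/2, W/2)."""
    c, h, w = x.shape
    x = x.reshape(c, h // 2, 2, w // 2, 2)
    return x.transpose(0, 2, 4, 1, 3).reshape(4 * c, h // 2, w // 2)

x = np.arange(16, dtype=float).reshape(1, 4, 4)
y = pixel_fusion_downsample(x)
# y[:, 0, 0] holds the whole top-left 2x2 block, and y.sum() == x.sum(),
# whereas 2x2 max pooling would keep only one of every four values.
```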


Subject(s)
Algorithms , Retinal Vessels , Image Processing, Computer-Assisted
19.
Journal of Biomedical Engineering ; (6): 70-78, 2023.
Article in Chinese | WPRIM | ID: wpr-970675

ABSTRACT

Accurate segmentation of whole slide images is of great significance for the diagnosis of pancreatic cancer, but developing an automatic model is challenging owing to the complex content, limited samples, and high heterogeneity of pathological images. This paper presents a multi-tissue segmentation model for whole slide images of pancreatic cancer. An attention mechanism was introduced into the building blocks, and a multi-task learning framework with suitable auxiliary tasks was designed to enhance model performance. The model was trained and tested with the pancreatic cancer pathological image dataset from Shanghai Changhai Hospital, and TCGA data served as an external independent validation cohort. The model's F1 scores exceeded 0.97 on the internal dataset and 0.92 on the external dataset, and its generalization was significantly better than the baseline method. These results demonstrate that the proposed model can accurately segment eight kinds of tissue regions in whole slide images of pancreatic cancer, providing a reliable basis for clinical diagnosis.


Subject(s)
Humans , China , Pancreatic Neoplasms/diagnostic imaging , Learning
20.
Journal of Biomedical Engineering ; (6): 60-69, 2023.
Article in Chinese | WPRIM | ID: wpr-970674

ABSTRACT

Hepatocellular carcinoma (HCC) is the most common liver malignancy; HCC segmentation and prediction of the degree of pathological differentiation are two important tasks in surgical treatment and prognosis evaluation. Existing methods usually solve the two problems independently, without considering their correlation. This paper proposes a multi-task learning model that accomplishes the segmentation and classification tasks simultaneously. The model consists of a segmentation subnet and a classification subnet: a multi-scale feature fusion method in the classification subnet improves classification accuracy, and a boundary-aware attention in the segmentation subnet addresses tumor over-segmentation. A dynamically weighted average multi-task loss lets the model reach optimal performance on both tasks simultaneously. On 295 HCC patients, the method outperformed other multi-task learning methods, with a Dice similarity coefficient (Dice) of (83.9 ± 0.88)% on the segmentation task, and an average recall of (86.08 ± 0.83)% and F1 score of (80.05 ± 1.7)% on the classification task. The results show that the proposed multi-task method performs both the classification and segmentation tasks well simultaneously and can provide a theoretical reference for the clinical diagnosis and treatment of HCC patients.
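One common way to realize a dynamically weighted multi-task loss is dynamic weight averaging: each task's weight follows the ratio of its recent losses, so a task whose loss is falling more slowly receives a larger weight. The sketch below assumes that formulation and an illustrative `temperature`; the paper's exact weighting scheme may differ:

```python
import math

def dwa_weights(prev_losses, prev_prev_losses, temperature=2.0):
    """Dynamic-weight-average task weights: softmax over per-task loss
    ratios L(t-1)/L(t-2), scaled so the weights sum to the task count."""
    ratios = [l1 / l2 for l1, l2 in zip(prev_losses, prev_prev_losses)]
    exps = [math.exp(r / temperature) for r in ratios]
    n = len(ratios)
    return [n * e / sum(exps) for e in exps]

# Task 0's loss stalled (1.0 -> 0.99) while task 1's halved (1.0 -> 0.5),
# so task 0 should be up-weighted on the next step.
w = dwa_weights([0.99, 0.5], [1.0, 1.0])
```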


Subject(s)
Humans , Carcinoma, Hepatocellular , Liver Neoplasms , Learning