Results 1 - 20 of 5,478
1.
Network ; : 1-39, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38975771

ABSTRACT

Early detection of lung cancer is essential to preventing lung cancer deaths, yet existing deep learning approaches to identifying cancer in Computed Tomography (CT) scans do not provide sufficiently accurate results. A novel adaptive deep learning framework with heuristic improvement is therefore developed. The proposed framework comprises three stages: (a) image acquisition, (b) lung nodule segmentation, and (c) lung cancer classification. Raw CT images are gathered from standard data sources, and nodules are then segmented with an Adaptive Multi-Scale Dilated Trans-Unet3+. To increase segmentation accuracy, the model's parameters are optimized with a proposed Modified Transfer Operator-based Archimedes Optimization (MTO-AO). Finally, the segmented images are classified by Advanced Dilated Ensemble Convolutional Neural Networks (ADECNN), constructed from Inception, ResNet, and MobileNet, whose hyperparameters are also tuned by MTO-AO. The final result is estimated from the three networks via high ranking-based classification. Performance is evaluated on multiple measures and compared against different approaches, and the findings demonstrate the system's efficiency in detecting cancer, helping patients receive appropriate treatment.
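The abstract does not specify the "high ranking-based classification" step; one plausible reading is rank-level fusion of the three networks' softmax outputs. A minimal sketch under that assumption (shapes, class count, and data all hypothetical):

```python
import numpy as np

def rank_fusion(prob_list):
    """Fuse per-network class probabilities by summed ranks.

    prob_list: list of (n_samples, n_classes) softmax outputs, one per
    base network (e.g., Inception, ResNet, MobileNet).
    """
    # Double argsort converts probabilities to within-sample ranks
    # (highest probability -> highest rank).
    ranks = [p.argsort(axis=1).argsort(axis=1) for p in prob_list]
    total = np.sum(ranks, axis=0)          # (n_samples, n_classes)
    return total.argmax(axis=1)            # fused class per sample

# Hypothetical outputs from three networks: 2 samples, 3 classes.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(3), size=2) for _ in range(3)]
print(rank_fusion(probs))
```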

2.
Chin Med ; 19(1): 90, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38951913

ABSTRACT

BACKGROUND: Given the high cost of endoscopy in gastric cancer (GC) screening, there is an urgent need to explore cost-effective methods for the large-scale prediction of precancerous lesions of gastric cancer (PLGC). We aim to construct a hierarchical artificial intelligence-based multimodal non-invasive method for pre-endoscopic risk screening that provides tailored recommendations for endoscopy. METHODS: From December 2022 to December 2023, a large-scale screening study was conducted in Fujian, China. Based on traditional Chinese medicine theory, we simultaneously collected tongue images and inquiry information from 1034 participants, considering the potential of these data for PLGC screening. We then introduced inquiry information for the first time, forming a multimodal artificial intelligence model that integrates tongue images and inquiry information for pre-endoscopic screening. We further validated this approach in an independent external validation cohort of 143 participants from the China-Japan Friendship Hospital. RESULTS: A multimodal artificial intelligence-assisted pre-endoscopic screening model based on tongue images and inquiry information (AITonguequiry) was constructed, adopting a hierarchical prediction strategy to achieve tailored endoscopic recommendations. Validation analysis revealed that the area under the curve (AUC) values of AITonguequiry were 0.74 for overall PLGC (95% confidence interval (CI) 0.71-0.76, p < 0.05) and 0.82 for high-risk PLGC (95% CI 0.82-0.83, p < 0.05), significantly and robustly better than those obtained using either tongue images or inquiry information alone. In addition, AITonguequiry outperformed existing PLGC screening methodologies, with the AUC value improving by 45% for PLGC screening (0.74 vs. 0.51, p < 0.05) and by 52% for high-risk PLGC screening (0.82 vs. 0.54, p < 0.05). In independent external verification, the AUC values were 0.69 for PLGC and 0.76 for high-risk PLGC. CONCLUSION: Our AITonguequiry artificial intelligence model, which for the first time incorporates inquiry information alongside tongue images, delivers more precise and finer-grained pre-endoscopic screening of PLGC, enhancing screening efficiency and alleviating patient burden.

3.
Front Plant Sci ; 15: 1381367, 2024.
Article in English | MEDLINE | ID: mdl-38966144

ABSTRACT

Introduction: Pine wilt disease spreads rapidly and kills large numbers of pine trees, so exploring prevention and control measures for its different stages is of great significance. Methods: To address the need for rapid detection of pine wilt across a large field of view, we used a drone to collect multiple sets of diseased-tree samples at different times of the year, making the trained deep learning model more generalizable. This research improves the YOLO v4 (You Only Look Once version 4) network for detecting pine wilt disease, using a channel attention mechanism module to improve the learning ability of the neural network. Results: Ablation experiments found that adding the SENet attention module combined with a self-designed, feature-pyramid-based feature enhancement module gave the best improvement, raising the mAP of the improved model to 79.91%. Discussion: Comparing the improved YOLO v4 model with SSD, Faster RCNN, YOLO v3, and YOLO v5 showed that its mAP was significantly higher than that of the other four models, providing an efficient solution for intelligent diagnosis of pine wood nematode disease. The improved YOLO v4 model enables precise localization and identification of pine wilt trees under changing light conditions, and deploying it on a UAV enables large-scale detection, helping to address the challenges of rapid detection and prevention of pine wilt disease.
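The SENet module cited above is the standard squeeze-and-excitation channel attention block; a compact PyTorch rendition is sketched below (reduction ratio and feature-map sizes are illustrative, not the authors' exact configuration):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation channel attention (Hu et al., 2018)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: global average pool -> (b, c)
        w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel weights
        return x * w                     # recalibrate the feature maps

# Hypothetical feature map from a YOLO v4 backbone stage.
feat = torch.randn(1, 256, 52, 52)
print(SEBlock(256)(feat).shape)  # torch.Size([1, 256, 52, 52])
```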

4.
Data Brief ; 55: 110569, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38966660

ABSTRACT

The dataset contains RGB, depth, and segmentation images of the scenes, along with camera pose information, which together can be used to create a full 3D model of each scene and to develop methods that reconstruct objects from a single RGB-D camera view. Data were collected in a custom simulator that loads random graspable objects and random tables from the ShapeNet dataset. Each graspable object is placed above a table in a random position, and the scene is then simulated with the PhysX engine to ensure it is physically plausible. The simulator captures an image of the scene from a random pose and a second image from a camera pose on the opposite side of the scene. A second subset was created using a Kinect Azure and a set of real objects placed on an ArUco board, which was used to estimate the camera pose.

5.
Proc Natl Acad Sci U S A ; 121(29): e2318465121, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38968094

ABSTRACT

Media exposure to graphic images of violence has proliferated in contemporary society, particularly with the advent of social media. Extensive exposure to media coverage immediately after the 9/11 attacks and the Boston Marathon bombings (BMB) was associated with more early traumatic stress symptoms; in fact, several hours of BMB-related daily media exposure was a stronger correlate of distress than being directly exposed to the bombings themselves. Researchers have replicated these findings across different traumatic events, extending this work to document that exposure to graphic images is independently and significantly associated with stress symptoms and poorer functioning. The media exposure-distress association also appears to be cyclical over time, with increased exposure predicting greater distress and greater distress predicting more media exposure following subsequent tragedies. The war in Israel and Gaza, which began on October 7, 2023, provides a current, real-time context to further explore these issues as journalists often share graphic images of death and destruction, making media-based graphic images once again ubiquitous and potentially challenging public well-being. For individuals sharing an identity with the victims or otherwise feeling emotionally connected to the Middle East, it may be difficult to avoid viewing these images. Through a review of research on the association between exposure to graphic images and public health, we discuss differing views on the societal implications of viewing such images and advocate for media literacy campaigns to educate the public to identify mis/disinformation and understand the risks of viewing and sharing graphic images with others.


Subject(s)
Mass Media , Terrorism , Humans , Terrorism/psychology , Israel , Warfare , Social Media , Stress Disorders, Post-Traumatic/psychology , Stress, Psychological/psychology
6.
Sci Bull (Beijing) ; 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38969538

ABSTRACT

Urban landscape is directly perceived by residents and is a significant symbol of urbanization development. A comprehensive assessment of urban landscapes is crucial for guiding the development of inclusive, resilient, and sustainable cities and human settlements. Previous studies have primarily analyzed two-dimensional landscape indicators derived from satellite remote sensing, potentially overlooking the valuable insights provided by the three-dimensional configuration of landscapes. This limitation arises from the high cost of acquiring large-area three-dimensional data and the lack of effective assessment indicators. Here, we propose four urban landscape indicators in three dimensions (UL3D): greenness, grayness, openness, and crowding. We construct the UL3D using 4.03 million street view images from 303 major cities in China, employing a deep learning approach. We combine urban background and two-dimensional urban landscape indicators with UL3D to predict the socioeconomic profiles of cities. The results show that the UL3D indicators differ from two-dimensional landscape indicators, with a low average correlation coefficient of 0.31 between them. Urban landscapes reached a turning point in 2018-2019 due to new urbanization initiatives, with the growth of grayness and crowding slowing while openness increased. Incorporating the UL3D indicators significantly enhances the explanatory power of the regression model for predicting socioeconomic profiles: GDP per capita, urban population rate, built-up area per capita, and hospital count correspond to improvements of 25.0%, 19.8%, 35.5%, and 19.2%, respectively. These findings indicate that UL3D indicators have the potential to reflect the socioeconomic profiles of cities.
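The paper's formulas for the four indicators are not given in the abstract; a common operationalization is the pixel fraction of relevant classes in a street-view semantic segmentation map. A rough sketch under that assumption (the class indices are purely hypothetical):

```python
import numpy as np

# Hypothetical class indices in a street-view segmentation map.
VEGETATION, BUILDING, SKY, PERSON = 1, 2, 3, 4

def ul3d_indicators(seg: np.ndarray) -> dict:
    """Compute greenness/grayness/openness/crowding as pixel fractions."""
    return {
        "greenness": float(np.mean(seg == VEGETATION)),  # vegetation share
        "grayness":  float(np.mean(seg == BUILDING)),    # built-surface share
        "openness":  float(np.mean(seg == SKY)),         # visible-sky share
        "crowding":  float(np.mean(seg == PERSON)),      # pedestrian share
    }

seg = np.random.default_rng(0).integers(0, 5, size=(512, 1024))
print(ul3d_indicators(seg))
```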

7.
Phys Med Biol ; 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38986480

ABSTRACT

OBJECTIVE: Automated detection and segmentation of breast masses in ultrasound images are critical for breast cancer diagnosis but remain challenging due to limited image quality and complex breast tissues. This study aims to develop a deep learning-based method that enables accurate breast mass detection and segmentation in ultrasound images. Approach. A novel convolutional neural network-based framework that combines the You Only Look Once (YOLO) v5 network and the Global-Local (GOLO) strategy was developed. First, YOLOv5 was applied to locate the mass regions of interest (ROIs). Second, a Global Local-Connected Multi-Scale Selection (GOLO-CMSS) network was developed to segment the masses. GOLO-CMSS operated both on entire images globally and on mass ROIs locally, then integrated the two branches for the final segmentation output. In particular, in the global branch, CMSS applied Multi-Scale Selection (MSS) modules to automatically adjust the receptive fields, and Multi-Input (MLI) modules to enable fusion of shallow and deep features at different resolutions. The USTC dataset containing 28,477 breast ultrasound images was collected for training and testing. The proposed method was also tested on three public datasets, UDIAT, BUSI and TUH. The segmentation performance of GOLO-CMSS was compared with that of other networks and three experienced radiologists. Main results. YOLOv5 outperformed other detection models with average precisions of 99.41%, 95.15%, 93.69% and 96.42% on the USTC, UDIAT, BUSI and TUH datasets, respectively. The proposed GOLO-CMSS showed superior segmentation performance over other state-of-the-art networks, with Dice similarity coefficients (DSCs) of 93.19%, 88.56%, 87.58% and 90.37% on the USTC, UDIAT, BUSI and TUH datasets, respectively. The mean DSC between GOLO-CMSS and each radiologist was significantly better than that between radiologists (p < 0.001). Significance. Our proposed method can accurately detect and segment breast masses with performance comparable to radiologists, highlighting its great potential for clinical implementation in breast ultrasound examination.
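The abstract describes the global-local integration only at a high level; one simple way to merge a full-image prediction with an ROI prediction is to blend them inside the ROI. A schematic sketch (shapes and the averaging rule are assumptions, not the GOLO-CMSS design):

```python
import numpy as np

def fuse_global_local(global_prob, local_prob, top_left):
    """Blend a full-image probability map with a local ROI prediction."""
    fused = global_prob.copy()
    y0, x0 = top_left
    h, w = local_prob.shape
    # Inside the ROI, average the two branches; elsewhere keep the global map.
    fused[y0:y0 + h, x0:x0 + w] = 0.5 * (fused[y0:y0 + h, x0:x0 + w] + local_prob)
    return fused

rng = np.random.default_rng(0)
global_map = rng.random((256, 256))   # hypothetical global-branch output
local_map = rng.random((64, 64))      # hypothetical ROI-branch output
print(fuse_global_local(global_map, local_map, (96, 96)).shape)
```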

8.
Biomed Eng Lett ; 14(4): 785-800, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38946824

ABSTRACT

The aim of this study is to propose a new diagnostic model based on "segmentation + classification" to improve routine ultrasound screening of thyroid nodules by utilizing key domain knowledge of medical diagnostic tasks. A multi-scale segmentation network based on a pyramid pooling structure of multiple parallel atrous (dilated) spaces is proposed. First, in the segmentation network, exact information from the underlying feature space is obtained with an Attention Gate. Second, the dilated convolutional part of Atrous Spatial Pyramid Pooling (ASPP) is cascaded for multiple downsampling. Finally, a three-branch classification network combined with expert knowledge is designed, drawing on doctors' clinical diagnosis experience, to extract features from the original nodule image, the nodule region image, and the nodule edge image, respectively, and to improve classification accuracy with a Coordinate Attention (CA) mechanism and cross-level feature fusion. The multi-scale segmentation network achieves a mean pixel accuracy (mPA) of 94.27%, a Dice value of 93.90% and a mean intersection over union (MIoU) of 88.85%, and the classification network reaches an accuracy of 86.07%, a specificity of 81.34% and a sensitivity of 90.19%. Comparison tests show that this method outperforms the classical U-Net, AGU-Net and DeepLab V3+ models as well as the more recent nnU-Net, Swin UNETR and MedFormer models. As an auxiliary diagnostic tool, this algorithm can help physicians more accurately assess whether thyroid nodules are benign or malignant, provide objective quantitative indicators, reduce the bias of subjective judgment, and improve the consistency and accuracy of diagnosis. Codes and models are available at https://github.com/enheliang/Thyroid-Segmentation-Network.git.
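ASPP itself is a published design from the DeepLab family: parallel dilated convolutions at several rates, concatenated and projected. A minimal PyTorch version (dilation rates and channel widths are illustrative, not the authors' exact settings):

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated 3x3 convolutions."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # padding == dilation keeps spatial size constant for 3x3 kernels.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

feat = torch.randn(1, 64, 32, 32)   # hypothetical encoder feature map
print(ASPP(64, 64)(feat).shape)     # torch.Size([1, 64, 32, 32])
```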

9.
Theranostics ; 14(9): 3708-3718, 2024.
Article in English | MEDLINE | ID: mdl-38948061

ABSTRACT

Purpose: This study aims to elucidate the role of quantitative SSTR-PET metrics and clinicopathological biomarkers in the progression-free survival (PFS) and overall survival (OS) of neuroendocrine tumors (NETs) treated with peptide receptor radionuclide therapy (PRRT). Methods: A retrospective analysis including 91 NET patients (M47/F44; age 66 years, range 34-90 years) who completed four cycles of standard 177Lu-DOTATATE was conducted. SSTR-avid tumors were segmented from pretherapy SSTR-PET images using a semiautomatic workflow, with the tumors labeled by anatomical region. Multiple image-based features, including total and organ-specific tumor volume and SSTR density, along with clinicopathological biomarkers including Ki-67, chromogranin A (CgA) and alkaline phosphatase (ALP), were analyzed with respect to the PRRT response. Results: The median OS was 39.4 months (95% CI: 33.1-NA months), while the median PFS was 23.9 months (95% CI: 19.3-32.4 months). Total SSTR-avid tumor volume (HR = 3.6; P = 0.07) and bone tumor volume (HR = 1.5; P = 0.003) were associated with shorter OS. Also, total tumor volume (HR = 4.3; P = 0.01), liver tumor volume (HR = 1.8; P = 0.05) and bone tumor volume (HR = 1.4; P = 0.01) were associated with shorter PFS. Furthermore, the presence of large lesion volume with low SSTR uptake was correlated with worse OS (HR = 1.4; P = 0.03) and PFS (HR = 1.5; P = 0.003). Among the biomarkers, elevated baseline CgA and ALP showed a negative association with both OS (CgA: HR = 4.9; P = 0.003, ALP: HR = 52.6; P = 0.004) and PFS (CgA: HR = 4.2; P = 0.002, ALP: HR = 9.4; P = 0.06). Similarly, the number of prior systemic treatments was associated with shorter OS (HR = 1.4; P = 0.003) and PFS (HR = 1.2; P = 0.05). Additionally, tumors originating from a midgut primary site demonstrated longer PFS compared to those originating from the pancreas (HR = 1.6; P = 0.16) and those categorized as unknown primary (HR = 3.0; P = 0.002). Conclusion: Image-based features such as SSTR-avid tumor volume, bone tumor involvement, and the presence of large tumors with low SSTR expression demonstrated significant predictive value for PFS, suggesting potential clinical utility in NETs management. Moreover, elevated CgA and ALP, along with an increased number of prior systemic treatments, emerged as significant factors associated with worse PRRT outcomes.
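Hazard ratios like those above typically come from a Cox proportional-hazards fit. A sketch of how such HRs are obtained with the lifelines package (the package choice, column names, and data are assumptions, not the authors' pipeline):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical patient-level data: follow-up months, event flag, covariates.
df = pd.DataFrame({
    "pfs_months": [12.0, 23.9, 8.5, 30.1, 19.3, 40.2, 15.7, 27.4],
    "progressed": [1, 1, 1, 0, 1, 0, 1, 0],
    "tumor_vol":  [150.0, 320.0, 80.0, 60.0, 410.0, 30.0, 220.0, 45.0],
    "log_cga":    [2.1, 3.4, 1.8, 1.2, 3.9, 0.9, 2.8, 1.1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
print(cph.hazard_ratios_)  # exp(coef), i.e., the HR for each covariate
```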


Subject(s)
Biomarkers, Tumor , Neuroendocrine Tumors , Octreotide , Organometallic Compounds , Humans , Neuroendocrine Tumors/radiotherapy , Neuroendocrine Tumors/diagnostic imaging , Neuroendocrine Tumors/pathology , Neuroendocrine Tumors/metabolism , Aged , Middle Aged , Organometallic Compounds/therapeutic use , Male , Female , Octreotide/analogs & derivatives , Octreotide/therapeutic use , Adult , Retrospective Studies , Aged, 80 and over , Biomarkers, Tumor/metabolism , Positron-Emission Tomography/methods , Receptors, Somatostatin/metabolism , Radiopharmaceuticals , Treatment Outcome , Chromogranin A/metabolism , Alkaline Phosphatase/metabolism , Ki-67 Antigen/metabolism , Progression-Free Survival , Tumor Burden
10.
Front Oncol ; 14: 1396887, 2024.
Article in English | MEDLINE | ID: mdl-38962265

ABSTRACT

Pathological images are considered the gold standard for clinical diagnosis and cancer grading. Automatic segmentation of pathological images is a fundamental and crucial step in constructing powerful computer-aided diagnostic systems. Medical microscopic hyperspectral pathological images can provide additional spectral information, further distinguishing different chemical components of biological tissues and offering new insights for accurate segmentation of pathological images. However, hyperspectral pathological images have higher resolution and cover larger areas, and annotating them requires more time and clinical experience. The lack of precise annotations limits progress in pathological image segmentation research. In this paper, we propose a novel semi-supervised segmentation method for microscopic hyperspectral pathological images based on multi-consistency learning (MCL-Net), which combines consistency regularization with pseudo-labeling techniques. The MCL-Net architecture employs a shared encoder and multiple independent decoders. We introduce a Soft-Hard pseudo-label generation strategy in MCL-Net to generate pseudo-labels that are closer to real labels. Furthermore, we propose a multi-consistency learning strategy that treats the pseudo-labels generated by the Soft-Hard process as real labels and promotes consistency between the predictions of different decoders, enabling the model to learn more sample features. Extensive experiments demonstrate the effectiveness of the proposed method, providing new insights for the segmentation of microscopic hyperspectral tissue pathology images.
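The Soft-Hard strategy is described only at a high level; one common reading is to harden pseudo-labels to one-hot where the model is confident and keep the soft distribution elsewhere. A sketch under that assumption (the threshold and shapes are hypothetical):

```python
import torch
import torch.nn.functional as F

def soft_hard_pseudo_labels(logits: torch.Tensor, tau: float = 0.9):
    """Blend soft and hard pseudo-labels from unlabeled-image logits.

    logits: (B, C, H, W) raw decoder outputs. Pixels whose max class
    probability exceeds `tau` get one-hot (hard) labels; the rest keep
    the soft distribution.
    """
    probs = F.softmax(logits, dim=1)
    conf, idx = probs.max(dim=1)                        # (B, H, W)
    hard = F.one_hot(idx, probs.shape[1]).permute(0, 3, 1, 2).float()
    mask = (conf > tau).unsqueeze(1).float()            # confident pixels
    return mask * hard + (1.0 - mask) * probs

pseudo = soft_hard_pseudo_labels(torch.randn(2, 4, 64, 64))
print(pseudo.shape)  # torch.Size([2, 4, 64, 64])
```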

11.
Vision Res ; 222: 108451, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964163

ABSTRACT

This study investigates human expectations towards naturalistic colour changes under varying illuminations. Understanding colour expectations is key to both scientific research on colour constancy and applications of colour and lighting in art and industry. We reanalysed data from asymmetric colour matches of a previous study and found that colour adjustments tended to align with illuminant-induced colour shifts predicted by naturalistic, rather than artificial, illuminants and reflectances. We conducted three experiments using hyperspectral images of naturalistic scenes to test if participants judged colour changes based on naturalistic illuminant and reflectance spectra as more plausible than artificial ones, which contradicted their expectations. When we consistently manipulated the illuminant (Experiment 1) and reflectance (Experiment 2) spectra across the whole scene, observers chose the naturalistic renderings significantly above the chance level (>25 %) but barely more often than any of the three artificial ones, collectively (>50 %). However, when we manipulated only one object/area's reflectance (Experiment 3), observers more reliably identified the version in which the object had a naturalistic reflectance like the rest of the scene. Results from Experiments 2-3 and additional analyses suggested that relational colour constancy strongly contributed to observer expectations, and stable cone-excitation ratios are not limited to naturalistic illuminants and reflectances but also occur for our artificial renderings. Our findings indicate that relational colour constancy and prior knowledge about surface colour shifts help to disambiguate surface colour identity under illumination changes, enabling human observers to recognise surface colours reliably in naturalistic conditions. Additionally, relational colour constancy may even be effective in many artificial conditions.
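The "stable cone-excitation ratios" cited above can be illustrated numerically: for two surfaces under two illuminants, the excitations themselves change with the illuminant, but their across-surface ratio is far more stable, since overall scaling cancels exactly and smooth spectral tilts shift it only mildly. A toy demonstration (all spectra invented, not the study's stimuli):

```python
import numpy as np

wl = np.linspace(400, 700, 31)                     # toy wavelength grid (nm)
l_cone = np.exp(-0.5 * ((wl - 560) / 50) ** 2)     # toy L-cone sensitivity
refl_a = 0.2 + 0.6 * np.exp(-0.5 * ((wl - 600) / 60) ** 2)   # toy surface A
refl_b = 0.3 + 0.4 * np.exp(-0.5 * ((wl - 500) / 60) ** 2)   # toy surface B
illum_1 = np.linspace(0.8, 1.2, wl.size)           # toy reddish illuminant
illum_2 = 0.5 * np.linspace(1.2, 0.8, wl.size)     # dimmer, bluish illuminant

def excitation(refl, illum):
    return np.sum(refl * illum * l_cone)           # discrete spectral sum

for illum in (illum_1, illum_2):
    ea, eb = excitation(refl_a, illum), excitation(refl_b, illum)
    print(f"excitations: {ea:.2f}, {eb:.2f}  ratio: {ea / eb:.3f}")
```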

12.
Comput Methods Programs Biomed ; 254: 108285, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38964248

ABSTRACT

BACKGROUND AND OBJECTIVE: In renal disease research, precise glomerular disease diagnosis is crucial for treatment and prognosis. Diagnosis currently relies on invasive biopsies, which carry risks and pathologist-dependent variability, yielding inconsistent results. There is a pressing need for innovative diagnostic tools that enhance traditional methods, streamline processes, and ensure accurate and consistent disease detection. METHODS: In this study, we present an innovative Convolutional Neural Networks-Vision Transformer (CVT) model leveraging Transformer technology to refine glomerular disease diagnosis by fusing spectral and spatial data, surpassing traditional diagnostic limitations. Using interval sampling, preprocessing, and wavelength optimization, we also introduced the Gramian Angular Field (GAF) method for a unified representation of spectral and spatial characteristics. RESULTS: We captured hyperspectral images ranging from 385.18 nm to 1009.47 nm and employed various methods to extract sample features. Initial models based solely on spectral features achieved an accuracy of 85.24%. However, the CVT model significantly outperformed these, achieving an average accuracy of 94%. This demonstrates the model's superior capability in utilizing sample data and learning joint feature representations. CONCLUSIONS: The CVT model not only breaks through the limitations of existing diagnostic techniques but also showcases the vast potential of non-invasive, high-precision diagnostic technology in supporting the classification and prognosis of complex glomerular diseases. This innovative approach could significantly impact future diagnostic strategies in renal disease research. CONCISE ABSTRACT: This study introduces a transformative hyperspectral image classification model leveraging a Transformer to significantly improve glomerular disease diagnosis accuracy by synergizing spectral and spatial data, surpassing conventional methods. A rigorous comparative analysis determined that while spectral features alone reached a peak accuracy of 85.24%, the CVT model's integration of spatial-spectral features via the Gramian Angular Field (GAF) method markedly enhanced diagnostic precision, achieving an average accuracy of 94%. This methodological innovation not only overcomes traditional diagnostic limitations but also underscores the potential of non-invasive, high-precision technologies in advancing the classification and prognosis of complex renal diseases, setting a new benchmark in the field.
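The GAF encoding has a standard closed form: rescale a 1-D series to [-1, 1], take the angular representation phi = arccos(x), and form the summation field cos(phi_i + phi_j), yielding a 2-D image a CNN branch can consume. A minimal sketch (using a spectral curve as input is an assumption):

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Gramian Angular Summation Field of a 1-D series (e.g., a spectrum)."""
    # Rescale to [-1, 1] so arccos is defined.
    x_tilde = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x_tilde, -1.0, 1.0))
    # GASF[i, j] = cos(phi_i + phi_j): a 2-D encoding of the series.
    return np.cos(phi[:, None] + phi[None, :])

spectrum = np.random.default_rng(0).random(128)  # hypothetical 128-band spectrum
print(gramian_angular_field(spectrum).shape)     # (128, 128)
```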

13.
Sci Rep ; 14(1): 14994, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951207

ABSTRACT

Manually extracted agricultural phenotype information exhibits high subjectivity and low accuracy, while information extracted from images is susceptible to interference from haze, and the effectiveness of existing agricultural image dehazing methods is limited by unclear texture details and color representation in the images. To address these limitations, we propose AgriGAN (unpaired image dehazing via a cycle-consistent generative adversarial network) for enhancing dehazing performance in agricultural plant phenotyping. The algorithm incorporates an atmospheric scattering model to improve the discriminator and employs a whole-detail consistent discrimination approach to enhance discriminator efficiency, thereby accelerating convergence toward the Nash equilibrium of the adversarial network. Finally, training with an adversarial loss plus a cycle-consistency loss yields clear images after dehazing. Experimental evaluations and comparative analyses demonstrate improved dehazing accuracy on agricultural images while preserving detailed texture information and mitigating color deviation.
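The atmospheric scattering model referenced above is the standard I(x) = J(x)t(x) + A(1 - t(x)), where J is the haze-free image, t the transmission, and A the atmospheric light; given t and A, it can be inverted for a dehazed estimate. A toy inversion (the transmission map and airlight value are hypothetical, and this is the classical model, not AgriGAN itself):

```python
import numpy as np

def dehaze(hazy, airlight, trans, t_min=0.1):
    """Invert I = J*t + A*(1 - t) to recover the haze-free image J."""
    t = np.clip(trans, t_min, 1.0)          # floor t to avoid noise blow-up
    return (hazy - airlight) / t[..., None] + airlight

rng = np.random.default_rng(0)
hazy = rng.random((64, 64, 3))              # hypothetical hazy RGB in [0, 1]
trans = rng.uniform(0.3, 0.9, (64, 64))     # hypothetical transmission map
print(dehaze(hazy, airlight=0.95, trans=trans).shape)   # (64, 64, 3)
```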

14.
Child Abuse Negl ; 154: 106936, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39002252

ABSTRACT

BACKGROUND: Most research examining the consumption of online child sexual abuse material (CSAM) has focused on offenders' demographic and psychological characteristics. While such research may assist in the development of therapeutic interventions with known offenders, it has little to offer the development of interventions for the vast majority of offenders who are never caught. OBJECTIVE: To learn more about the offending strategies of CSAM offenders, in order to inform prevention efforts to deter, disrupt, and divert individuals from their pursuit of CSAM. PARTICIPANTS & SETTING: Seventy-five male CSAM offenders, who were living in the community and were voluntarily participating in a treatment programme. METHODS: Participants completed a detailed self-report questionnaire focussing on their pathways to offending and their online behaviour. RESULTS: Most participants reported that they did not initially seek out CSAM but that they first encountered it inadvertently or became curious after viewing legal pornography. Their involvement in CSAM subsequently progressed over time and their offending generally became more serious. The most notable feature of participants' online behaviour was the relative lack of sophisticated technical expertise. Opportunity and other situational factors emerged as mediators of offending frequency. Offending patterns were affected by participants' psychological states (e.g., depression, anger, stress), offline relationships and commitments (e.g., arguments with spouse, loss of job), and online experiences (e.g., blocked sites, viruses, warning messages). CONCLUSIONS: Findings suggest that many offenders are receptive to change and may be potentially diverted from their offending pathway.

15.
Oral Oncol ; 156: 106946, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-39002299

ABSTRACT

OBJECTIVES: This study aims to address the critical lack of publicly accessible oral cavity image datasets for developing machine learning (ML) and artificial intelligence (AI) technologies for the diagnosis and prognosis of oral cancer (OCA) and oral potentially malignant disorders (OPMD), with a particular focus on the high prevalence and delayed diagnosis in Asia. MATERIALS AND METHODS: Following ethical approval and informed written consent, images of the oral cavity were obtained with mobile phone cameras and clinical data were extracted from hospital records of patients attending the Dental Teaching Hospital, Peradeniya, Sri Lanka. After data management and hosting, image categorization and annotation were done by clinicians using a custom-made software tool developed by the research team. RESULTS: A dataset comprising 3000 high-quality, anonymized images obtained from 714 patients was classified into four distinct categories: healthy, benign, OPMD, and OCA. Images were annotated with polygonal oral cavity and lesion boundaries. Each image is accompanied by patient metadata, including age, sex, diagnosis, and risk factor profiles such as smoking, alcohol, and betel chewing habits. CONCLUSION: Researchers can utilize the annotated images in the COCO format, along with the patients' metadata, to enhance ML and AI algorithm development.
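A sketch of consuming such COCO-format polygon annotations with the standard library (the field layout follows the COCO specification; the file name is a hypothetical placeholder, not this dataset's actual file):

```python
import json
from collections import defaultdict

with open("annotations.json") as f:     # hypothetical file name
    coco = json.load(f)

# Index polygon annotations by image id, as defined by the COCO format.
by_image = defaultdict(list)
for ann in coco["annotations"]:
    by_image[ann["image_id"]].append(ann["segmentation"])  # polygon vertices

for img in coco["images"][:3]:
    polys = by_image[img["id"]]
    print(img["file_name"], len(polys), "annotated polygons")
```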

16.
Article in English | MEDLINE | ID: mdl-38997866

ABSTRACT

OBJECTIVE: In May 2009, we created a Facebook page for radiology education. While we shared a host of learning materials such as case images, quiz questions, and medical illustrations, we also posted world news, music, and memes. In February 2023, we eliminated everything from the site not related to radiology education. Our aim was to determine how focusing on radiology education alone would affect audience growth for our Facebook page. MATERIALS AND METHODS: We exported our Facebook post data for March 1, 2023 through February 29, 2024, representing the full year after we revised our content presentation, and compared it to data from November 1, 2020 to October 31, 2021. The mean and standard deviation of each post type's reach for 2023/24 were analyzed and compared against the 2020/21 statistics, with Wilcoxon rank sum tests used to obtain p-values. Linear regressions for each year were performed to understand the relationship between reach and engagement. RESULTS: A total of 4,270 posts were included in our new analysis. Our average number of posts per day decreased from 24.8 to 11.71, reducing by more than half the amount of content shared to our social media page. Our posts had a mean overall reach of 4,660, compared to 1,743 in 2021 (p < 0.0001). There was a statistically significant increase in reach for posts on artificial intelligence, case images, medical illustrations, pearls, quiz images, quiz videos, slideshow images, and both types of instructional videos (p < 0.005). For both periods, the linear regression slopes were positive (y = 0.0687x - 65.0279 for 2020/21 and y = 0.006334x + 21.3425 for 2023/24). CONCLUSIONS: Facebook and other social media have been found to be helpful sources for radiology education. Our experience and statistics with radiology education via social media may help other radiology educators better curate their own pages. To optimize experiences for students, professionals, and other users, and to reach more people, we found that providing readily accessible radiology education is preferable to the social aspects of social media.
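The analysis above maps onto two standard SciPy routines: a two-sample Wilcoxon rank sum test for the year-over-year reach comparison and an ordinary least-squares line for reach vs. engagement. A sketch with synthetic data (all numbers hypothetical, not the study's data):

```python
import numpy as np
from scipy.stats import ranksums, linregress

rng = np.random.default_rng(0)
reach_2021 = rng.lognormal(mean=7.0, sigma=1.0, size=300)   # hypothetical
reach_2024 = rng.lognormal(mean=8.0, sigma=1.0, size=300)   # hypothetical

stat, p = ranksums(reach_2021, reach_2024)   # Wilcoxon rank sum test
print(f"rank-sum statistic = {stat:.2f}, p = {p:.4g}")

# Reach vs. engagement slope, as in the paper's linear regressions.
engagement = 0.006 * reach_2024 + rng.normal(0, 5, size=300)
fit = linregress(reach_2024, engagement)
print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.2f}")
```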

18.
Article in English | MEDLINE | ID: mdl-39003438

ABSTRACT

PURPOSE: Differentiating pulmonary lymphoma from lung infections using CT images is challenging. Existing deep neural network-based lung CT classification models rely on 2D slices, lacking comprehensive information and requiring manual selection, while 3D models that involve chunking compromise image information and struggle with parameter reduction, limiting performance. These limitations must be addressed to improve accuracy and practicality. METHODS: We propose a transformer sequential feature encoding structure to integrate multi-level information from complete CT images, inspired by the clinical practice of using a sequence of cross-sectional slices for diagnosis. We incorporate position encoding and cross-level long-range information fusion modules into the feature extraction CNN network for cross-sectional slices, ensuring high-precision feature extraction. RESULTS: We conducted comprehensive experiments on a dataset of 124 patients, split into 64, 20 and 40 for training, validation and testing, respectively. The results of ablation and comparative experiments demonstrated the effectiveness of our approach: our method outperforms existing state-of-the-art methods on the 3D CT image classification problem of distinguishing between lung infections and pulmonary lymphoma, achieving an accuracy of 0.875, an AUC of 0.953 and an F1 score of 0.889. CONCLUSION: The experiments verified that our proposed position-enhanced transformer-based sequential feature encoding model effectively performs high-precision feature extraction and contextual feature fusion in the lungs, enhancing the feature extraction ability of a standalone CNN network or transformer and thereby improving classification performance. The source code is accessible at https://github.com/imchuyu/PTSFE.
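A schematic of the slice-sequence idea (per-slice CNN features, position encoding, transformer fusion, then a classification head); all layer sizes, the learned position encoding, and the mean-pooling choice are assumptions, not the authors' exact PTSFE architecture:

```python
import torch
import torch.nn as nn

class SliceSequenceClassifier(nn.Module):
    """Encode a CT volume as a sequence of per-slice features."""
    def __init__(self, feat_dim=256, n_slices=64, n_classes=2):
        super().__init__()
        self.slice_cnn = nn.Sequential(       # toy per-slice feature extractor
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.pos = nn.Parameter(torch.zeros(1, n_slices, feat_dim))  # learned PE
        layer = nn.TransformerEncoderLayer(feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, vol):                   # vol: (B, n_slices, H, W)
        b, s, h, w = vol.shape
        f = self.slice_cnn(vol.reshape(b * s, 1, h, w)).reshape(b, s, -1)
        f = self.encoder(f + self.pos)        # cross-slice context fusion
        return self.head(f.mean(dim=1))       # pool slices, classify

logits = SliceSequenceClassifier()(torch.randn(2, 64, 128, 128))
print(logits.shape)  # torch.Size([2, 2])
```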

19.
Mar Pollut Bull ; 205: 116644, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38959569

ABSTRACT

The cleanup of marine debris is an urgent problem in marine environmental protection, and AUVs equipped with visual recognition technology have gradually become a central research focus. However, existing recognition algorithms have slow inference speeds and high computational overhead, and are affected by blurred images and interference information. To solve these problems, we propose a real-time semantic segmentation network called WaterBiSeg-Net. First, we propose the Multi-scale Information Enhancement Module to counter the impact of low-definition and blurred images. Then, to suppress interference from background information, we propose the Gated Aggregation Layer. In addition, we propose a method that extracts boundary information directly. Finally, extensive experiments on the SUIM and TrashCan datasets show that WaterBiSeg-Net better completes the task of marine debris segmentation and provides AUVs with accurate segmentation results in real time, offering a low-computational-cost, real-time solution for identifying marine debris.
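The Gated Aggregation Layer is not specified in the abstract; one common gating pattern is a sigmoid gate computed from deep semantic features that suppresses noisy shallow features before aggregation. A sketch under that assumption (not the paper's actual layer):

```python
import torch
import torch.nn as nn

class GatedAggregation(nn.Module):
    """Suppress background features with a learned sigmoid gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, shallow, deep):
        # A gate derived from deep semantics filters noisy shallow details
        # before the two levels are aggregated.
        return shallow * self.gate(deep) + deep

x_shallow = torch.randn(1, 64, 96, 96)   # hypothetical shallow features
x_deep = torch.randn(1, 64, 96, 96)      # hypothetical deep features
print(GatedAggregation(64)(x_shallow, x_deep).shape)
```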

20.
Comput Biol Med ; 179: 108793, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38955126

ABSTRACT

Skin tumors are the most common tumors in humans, and the clinical characteristics of three common non-melanoma tumors (IDN, SK, BCC) are similar, resulting in a high misdiagnosis rate. Accurate differential diagnosis of these tumors must be based on pathological images, but a shortage of experienced dermatological pathologists leads to bias in diagnostic accuracy for these skin tumors in China. In this paper, we establish a skin pathological image dataset, SPMLD, for these three non-melanoma tumors to enable automatic and accurate intelligent identification. We also propose a lesion-area-based enhanced classification network with a KLS module and an attention module. Specifically, we first collect thousands of H&E-stained tissue sections from patients with clinically and pathologically confirmed IDN, SK, and BCC at a single-center hospital, and scan them to construct a pathological image dataset of these three skin tumors. Furthermore, we annotate the complete lesion area of each whole pathology image to better capture the pathologist's diagnostic process. We then apply the proposed network for lesion classification on the SPMLD dataset. Finally, a series of experiments demonstrates that this annotation and our network effectively improve the classification results of various networks. The source dataset and code are available at https://github.com/efss24/SPMLD.git.
