Results 1 - 20 of 5,448
1.
Accid Anal Prev ; 205: 107693, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38955107

ABSTRACT

Examining the relationship between streetscape features and road traffic accidents is pivotal for enhancing roadway safety. While previous studies have primarily focused on the influence of street design characteristics, sociodemographic features, and land use features on crash occurrence, the impact of streetscape features on pedestrian crashes has not been thoroughly investigated. Furthermore, while machine learning models demonstrate high accuracy in prediction and are increasingly utilized in traffic safety research, understanding the prediction results poses challenges. To address these gaps, this study extracts streetscape environment characteristics from street view images (SVIs) using a combination of semantic segmentation and object detection deep learning networks. These characteristics are then incorporated into the eXtreme Gradient Boosting (XGBoost) algorithm, along with a set of control variables, to model the occurrence of pedestrian crashes at intersections. Subsequently, the SHapley Additive exPlanations (SHAP) method is integrated with XGBoost to establish an interpretable framework for exploring the association between pedestrian crash occurrence and the surrounding streetscape built environment. The results are interpreted from global, local, and regional perspectives. The findings indicate that, from a global perspective, traffic volume and commercial land use are significant contributors to pedestrian-vehicle collisions at intersections, while road, person, and vehicle elements extracted from SVIs are associated with higher risks of pedestrian crash onset. At a local level, the XGBoost-SHAP framework enables quantification of features' local contributions for individual intersections, revealing spatial heterogeneity in factors influencing pedestrian crashes. 
From a regional perspective, similar intersections can be grouped to define geographical regions, facilitating the formulation of spatially responsive strategies for distinct regions to reduce traffic accidents. This approach can potentially enhance the quality and accuracy of local policy making. These findings underscore the underlying relationship between streetscape-level environmental characteristics and vehicle-pedestrian crashes. The integration of SVIs and deep learning techniques offers a visually descriptive portrayal of the streetscape environment at locations where traffic crashes occur at eye level. The proposed framework not only achieves excellent prediction performance but also enhances understanding of traffic crash occurrences, offering guidance for optimizing traffic accident prevention and treatment programs.
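The XGBoost-SHAP framework described above rests on the additivity of Shapley values: each feature receives a contribution, and the contributions sum to the difference between the model's prediction and a baseline. As an illustration only (not the authors' code), exact Shapley values for a toy intersection-risk function with hypothetical "traffic" and "commercial" features can be computed by enumerating coalitions:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values by enumerating all feature coalitions."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for size in range(n):
            for coalition in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (value_fn(set(coalition) | {f}) - value_fn(set(coalition)))
        phi[f] = total
    return phi

# Hypothetical intersection-risk model: traffic volume and commercial
# land use each raise crash risk, with an interaction term; baseline 0.1.
def risk(coalition):
    base = 0.1
    if "traffic" in coalition:
        base += 0.3
    if "commercial" in coalition:
        base += 0.2
    if {"traffic", "commercial"} <= coalition:
        base += 0.1  # interaction term
    return base

phi = shapley_values(risk, ["traffic", "commercial"])
```

Real SHAP implementations (e.g., TreeSHAP for XGBoost) avoid this exponential enumeration, but the additivity property illustrated here is what makes the global, local, and regional interpretations in the study possible.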

2.
Comput Biol Med ; 179: 108793, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38955126

ABSTRACT

Skin tumors are the most common tumors in humans, and the clinical characteristics of three common non-melanoma tumors (IDN, SK, and BCC) are similar, resulting in a high misdiagnosis rate. Accurate differential diagnosis of these tumors must be based on pathological images. However, a shortage of experienced dermatological pathologists leads to bias in the diagnostic accuracy of these skin tumors in China. In this paper, we establish a skin pathological image dataset, SPMLD, for the three non-melanoma tumors to enable automatic and accurate intelligent identification. We also propose a lesion-area-based enhanced classification network with a KLS module and an attention module. Specifically, we first collect thousands of H&E-stained tissue sections from patients with clinically and pathologically confirmed IDN, SK, and BCC at a single-center hospital. Then, we scan them to construct a pathological image dataset of these three skin tumors. Furthermore, we annotate the complete lesion area of each whole pathology image to better capture the pathologist's diagnostic process. In addition, we apply the proposed network to lesion classification on the SPMLD dataset. Finally, we conduct a series of experiments demonstrating that this annotation and our network effectively improve the classification results of various networks. The source dataset and code are available at https://github.com/efss24/SPMLD.git.

3.
Chin Med ; 19(1): 90, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38951913

ABSTRACT

BACKGROUND: Given the high cost of endoscopy in gastric cancer (GC) screening, there is an urgent need to explore cost-effective methods for the large-scale prediction of precancerous lesions of gastric cancer (PLGC). We aim to construct a hierarchical artificial intelligence-based multimodal non-invasive method for pre-endoscopic risk screening, providing tailored recommendations for endoscopy. METHODS: From December 2022 to December 2023, a large-scale screening study was conducted in Fujian, China. Based on traditional Chinese medicine theory, we simultaneously collected tongue images and inquiry information from 1034 participants, considering the potential of these data for PLGC screening. We then introduced inquiry information for the first time, forming a multimodal artificial intelligence model that integrates tongue images and inquiry information for pre-endoscopic screening. Moreover, we validated this approach in an independent external validation cohort comprising 143 participants from the China-Japan Friendship Hospital. RESULTS: A multimodal artificial intelligence-assisted pre-endoscopic screening model based on tongue images and inquiry information (AITonguequiry) was constructed, adopting a hierarchical prediction strategy to achieve tailored endoscopic recommendations. Validation analysis revealed that the areas under the curve (AUCs) of AITonguequiry were 0.74 for overall PLGC (95% confidence interval (CI) 0.71-0.76, p < 0.05) and 0.82 for high-risk PLGC (95% CI 0.82-0.83, p < 0.05), significantly and robustly better than those of either tongue images or inquiry information used alone. In addition, AITonguequiry outperformed existing PLGC screening methodologies, with the AUC improving by 45% for PLGC screening (0.74 vs. 0.51, p < 0.05) and by 52% for high-risk PLGC screening (0.82 vs. 0.54, p < 0.05).
In the independent external verification, the AUC values were 0.69 for PLGC and 0.76 for high-risk PLGC. CONCLUSION: Our AITonguequiry artificial intelligence model, for the first time, incorporates inquiry information and tongue images, leading to a higher precision and finer-grained pre-endoscopic screening of PLGC. This enhances patient screening efficiency and alleviates patient burden.
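The AUC values reported above have a simple probabilistic reading: the AUC is the probability that a randomly chosen case receives a higher risk score than a randomly chosen control (the Mann-Whitney formulation). A minimal sketch with hypothetical scores, not the study's data:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outranks a randomly chosen negative
    (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores from a screening model (label 1 = PLGC case)
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0,   0]
auc = roc_auc(scores, labels)
```

An AUC of 0.5 corresponds to chance-level ranking, which is why the 0.51 and 0.54 baselines quoted above amount to near-random screening.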

4.
Mar Pollut Bull ; 205: 116644, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38959569

ABSTRACT

The cleanup of marine debris is an urgent problem in marine environmental protection, and autonomous underwater vehicles (AUVs) with visual recognition technology have gradually become a central research focus. However, existing recognition algorithms have slow inference speeds and high computational overhead, and they are degraded by blurred images and background interference. To address these problems, we propose a real-time semantic segmentation network called WaterBiSeg-Net. First, we propose a Multi-scale Information Enhancement Module to mitigate the impact of low-definition and blurred images. Then, to suppress the interference of background information, we propose a Gated Aggregation Layer. In addition, we propose a method that extracts boundary information directly. Finally, extensive experiments on the SUIM and TrashCan datasets show that WaterBiSeg-Net better accomplishes the task of marine debris segmentation and provides AUVs with accurate segmentation results in real time. This research offers a low-computational-cost, real-time solution for AUVs to identify marine debris.

5.
Sci Rep ; 14(1): 14994, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951207

ABSTRACT

Manually extracted agricultural phenotype information is highly subjective and of low accuracy, while information extracted from images is susceptible to interference from haze. Furthermore, existing agricultural image dehazing methods are of limited effectiveness because texture details and color representation in the dehazed images remain unclear. To address these limitations, we propose AgriGAN (unpaired image dehazing via a cycle-consistent generative adversarial network) to enhance dehazing performance for agricultural plant phenotyping. The algorithm incorporates an atmospheric scattering model to improve the discriminator and employs a whole-detail consistent discrimination approach to increase discriminator efficiency, thereby accelerating convergence towards the Nash equilibrium of the adversarial network. Finally, by training with a combined adversarial loss and cycle-consistency loss, clear images are obtained after the dehazing process. Experimental evaluations and comparative analysis demonstrate improved accuracy in dehazing agricultural images while preserving detailed texture information and mitigating color deviation.
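The atmospheric scattering model referred to above is conventionally written I(x) = J(x)·t(x) + A·(1 − t(x)), where J is the clear-scene radiance, t the transmission, and A the global atmospheric light; single-image dehazing amounts to inverting it. A minimal sketch with hypothetical values (the paper uses the model as a constraint on the discriminator, not as a direct inversion):

```python
def haze(J, t, A):
    """Atmospheric scattering model: observed intensity I = J*t + A*(1-t),
    where J is scene radiance, t transmission, A atmospheric light."""
    return [j * t + A * (1 - t) for j in J]

def dehaze(I, t, A, t_min=0.1):
    """Invert the model: J = (I - A*(1-t)) / max(t, t_min).
    t_min guards against division blow-up in dense haze."""
    t = max(t, t_min)
    return [(i - A * (1 - t)) / t for i in I]

row = [0.2, 0.5, 0.8]           # hypothetical clear-scene radiances
hazy = haze(row, t=0.6, A=1.0)  # simulate haze
recovered = dehaze(hazy, t=0.6, A=1.0)
```

In practice t and A are unknown per pixel, which is exactly the ill-posedness the adversarial training is meant to resolve.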

6.
Front Plant Sci ; 15: 1381367, 2024.
Article in English | MEDLINE | ID: mdl-38966144

ABSTRACT

Introduction: Pine wilt disease spreads rapidly, killing large numbers of pine trees, so exploring prevention and control measures for its different stages is of great significance. Methods: To address rapid detection of pine wilt over a large field of view, we used a drone to collect multiple sets of diseased-tree samples at different times of the year, making the deep learning model more generalizable. We improved the YOLO v4 (You Only Look Once, version 4) network for detecting pine wilt disease, using a channel attention mechanism module to improve the learning ability of the neural network. Results: An ablation experiment found that adding the SENet attention module combined with a self-designed feature enhancement module based on the feature pyramid gave the best improvement; the mAP of the improved model was 79.91%. Discussion: Comparing the improved YOLO v4 model with SSD, Faster RCNN, YOLO v3, and YOLO v5 showed that its mAP was significantly higher than that of the other four models, providing an efficient solution for intelligent diagnosis of pine wood nematode disease. The improved YOLO v4 model enables precise localization and identification of pine wilt trees under changing light conditions. Deploying the model on a UAV enables large-scale detection of pine wilt disease and helps address the challenges of its rapid detection and prevention.
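The channel attention mechanism used here (SENet) reweights feature-map channels via a squeeze step (global average pooling) followed by an excitation step (a small bottleneck MLP with sigmoid gating). A simplified pure-Python sketch with hypothetical weights and feature maps, not the paper's implementation:

```python
from math import exp

def sigmoid(x):
    return 1 / (1 + exp(-x))

def se_attention(fmap, w1, w2):
    """Simplified squeeze-and-excitation: global-average-pool each channel
    (squeeze), pass the pooled vector through a tiny two-layer MLP with
    sigmoid gating (excitation), then rescale each channel."""
    # squeeze: one summary statistic per channel
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in fmap]
    # excitation: bottleneck MLP (hypothetical weights w1: C->r, w2: r->C)
    hidden = [max(0.0, sum(p * w for p, w in zip(pooled, col))) for col in w1]  # ReLU
    gates = [sigmoid(sum(h * w for h, w in zip(hidden, col))) for col in w2]
    # reweight each channel by its learned gate
    return [[[v * g for v in row] for row in ch] for ch, g in zip(fmap, gates)]

fmap = [[[1, 1], [1, 1]], [[0, 2], [2, 0]]]  # 2 channels, 2x2, hypothetical
w1 = [[1.0, 1.0]]                            # 2 channels -> bottleneck of 1
w2 = [[0.5], [-0.5]]                         # bottleneck -> 2 channel gates
out = se_attention(fmap, w1, w2)
```

The effect is that informative channels are amplified while less useful ones are suppressed, which is the "learning ability" improvement the abstract attributes to the attention module.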

7.
Data Brief ; 55: 110569, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38966660

ABSTRACT

The dataset contains RGB, depth, and segmentation images of the scenes, together with camera pose information, which can be used to create full 3D models of the scenes and to develop methods that reconstruct objects from a single RGB-D camera view. Data were collected in a custom simulator that loads random graspable objects and random tables from the ShapeNet dataset. Each graspable object is placed above a table in a random position, and the scene is then simulated with the PhysX engine to ensure that it is physically plausible. The simulator captures an image of the scene from a random pose and then takes a second image from the camera pose on the opposite side of the scene. A second subset was created using a Kinect Azure camera and a set of real objects placed on an ArUco board, which was used to estimate the camera pose.

8.
Proc Natl Acad Sci U S A ; 121(29): e2318465121, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-38968094

ABSTRACT

Media exposure to graphic images of violence has proliferated in contemporary society, particularly with the advent of social media. Extensive exposure to media coverage immediately after the 9/11 attacks and the Boston Marathon bombings (BMB) was associated with more early traumatic stress symptoms; in fact, several hours of BMB-related daily media exposure was a stronger correlate of distress than being directly exposed to the bombings themselves. Researchers have replicated these findings across different traumatic events, extending this work to document that exposure to graphic images is independently and significantly associated with stress symptoms and poorer functioning. The media exposure-distress association also appears to be cyclical over time, with increased exposure predicting greater distress and greater distress predicting more media exposure following subsequent tragedies. The war in Israel and Gaza, which began on October 7, 2023, provides a current, real-time context to further explore these issues as journalists often share graphic images of death and destruction, making media-based graphic images once again ubiquitous and potentially challenging public well-being. For individuals sharing an identity with the victims or otherwise feeling emotionally connected to the Middle East, it may be difficult to avoid viewing these images. Through a review of research on the association between exposure to graphic images and public health, we discuss differing views on the societal implications of viewing such images and advocate for media literacy campaigns to educate the public to identify mis/disinformation and understand the risks of viewing and sharing graphic images with others.


Subjects
Mass Media; Terrorism; Humans; Terrorism/psychology; Israel; War; Social Media; Stress Disorders, Post-Traumatic/psychology; Stress, Psychological/psychology
9.
Sci Bull (Beijing) ; 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38969538

ABSTRACT

Urban landscape is directly perceived by residents and is a significant symbol of urbanization development. A comprehensive assessment of urban landscapes is crucial for guiding the development of inclusive, resilient, and sustainable cities and human settlements. Previous studies have primarily analyzed two-dimensional landscape indicators derived from satellite remote sensing, potentially overlooking the valuable insights provided by the three-dimensional configuration of landscapes. This limitation arises from the high cost of acquiring large-area three-dimensional data and the lack of effective assessment indicators. Here, we propose four urban landscape indicators in three dimensions (UL3D): greenness, grayness, openness, and crowding. We construct the UL3D using 4.03 million street view images from 303 major cities in China, employing a deep learning approach. We combine urban background and two-dimensional urban landscape indicators with UL3D to predict the socioeconomic profiles of cities. The results show that the UL3D indicators differ from two-dimensional landscape indicators, with a low average correlation coefficient of 0.31 between them. Urban landscapes showed a turning point in 2018-2019 due to new urbanization initiatives, with growth in grayness and crowding slowing while openness increased. Incorporating UL3D indicators significantly enhances the explanatory power of the regression model for predicting socioeconomic profiles: for GDP per capita, urban population rate, built-up area per capita, and hospital count, it improves by 25.0%, 19.8%, 35.5%, and 19.2%, respectively. These findings indicate that UL3D indicators have the potential to reflect the socioeconomic profiles of cities.

10.
Front Oncol ; 14: 1396887, 2024.
Article in English | MEDLINE | ID: mdl-38962265

ABSTRACT

Pathological images are considered the gold standard for clinical diagnosis and cancer grading. Automatic segmentation of pathological images is a fundamental and crucial step in constructing powerful computer-aided diagnostic systems. Medical microscopic hyperspectral pathological images can provide additional spectral information, further distinguishing different chemical components of biological tissues, offering new insights for accurate segmentation of pathological images. However, hyperspectral pathological images have higher resolution and larger area, and their annotation requires more time and clinical experience. The lack of precise annotations limits the progress of research in pathological image segmentation. In this paper, we propose a novel semi-supervised segmentation method for microscopic hyperspectral pathological images based on multi-consistency learning (MCL-Net), which combines consistency regularization methods with pseudo-labeling techniques. The MCL-Net architecture employs a shared encoder and multiple independent decoders. We introduce a Soft-Hard pseudo-label generation strategy in MCL-Net to generate pseudo-labels that are closer to real labels for pathological images. Furthermore, we propose a multi-consistency learning strategy, treating pseudo-labels generated by the Soft-Hard process as real labels, by promoting consistency between predictions of different decoders, enabling the model to learn more sample features. Extensive experiments in this paper demonstrate the effectiveness of the proposed method, providing new insights for the segmentation of microscopic hyperspectral tissue pathology images.
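The Soft-Hard pseudo-label idea can be illustrated as averaging the decoders' soft predictions and keeping only confident hard labels. This is a sketch under assumed details (simple averaging, a 0.8 confidence threshold, unlabeled pixels marked -1), not the paper's exact strategy:

```python
def soft_hard_pseudo_labels(probs_per_decoder, threshold=0.8):
    """Average per-pixel class probabilities across decoders (soft step),
    then keep only confident argmax labels (hard step); pixels whose
    confidence falls below the threshold stay unlabeled (-1)."""
    n = len(probs_per_decoder)
    labels = []
    for pixel_probs in zip(*probs_per_decoder):
        # mean probability per class across decoders
        mean = [sum(p[c] for p in pixel_probs) / n for c in range(len(pixel_probs[0]))]
        conf = max(mean)
        labels.append(mean.index(conf) if conf >= threshold else -1)
    return labels

# Two hypothetical decoders, three pixels, two classes
d1 = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]
d2 = [[0.8, 0.2], [0.4, 0.6], [0.1, 0.9]]
pseudo = soft_hard_pseudo_labels([d1, d2], threshold=0.8)
```

Pixels where the decoders disagree yield low confidence and are excluded, which is the mechanism that keeps noisy pseudo-labels from polluting the consistency targets.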

11.
Vision Res ; 222: 108451, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964163

ABSTRACT

This study investigates human expectations about naturalistic colour changes under varying illuminations. Understanding colour expectations is key both to scientific research on colour constancy and to applications of colour and lighting in art and industry. We reanalysed data from asymmetric colour matches in a previous study and found that colour adjustments tended to align with the illuminant-induced colour shifts predicted by naturalistic, rather than artificial, illuminants and reflectances. We then conducted three experiments using hyperspectral images of naturalistic scenes to test whether participants judged colour changes based on naturalistic illuminant and reflectance spectra as more plausible than artificial ones that contradicted those expectations. When we consistently manipulated the illuminant (Experiment 1) and reflectance (Experiment 2) spectra across the whole scene, observers chose the naturalistic renderings significantly above chance level (>25%) but barely more often than the three artificial ones collectively (>50%). However, when we manipulated the reflectance of only one object/area (Experiment 3), observers more reliably identified the version in which the object had a naturalistic reflectance like the rest of the scene. Results from Experiments 2-3 and additional analyses suggested that relational colour constancy strongly contributed to observer expectations, and that stable cone-excitation ratios are not limited to naturalistic illuminants and reflectances but also occur for our artificial renderings. Our findings indicate that relational colour constancy and prior knowledge about surface colour shifts help disambiguate surface colour identity under illumination changes, enabling human observers to recognise surface colours reliably in naturalistic conditions. Relational colour constancy may even be effective in many artificial conditions.
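The cone-excitation ratios underlying relational colour constancy are easy to demonstrate: if an illuminant change scales each cone class (L, M, S) by a constant factor, as in a von Kries-style diagonal transform, the ratios between any two surfaces are preserved exactly. A toy example with hypothetical cone excitations:

```python
def cone_ratios(surface_a, surface_b):
    """Ratio of cone excitations (L, M, S) between two surfaces."""
    return [a / b for a, b in zip(surface_a, surface_b)]

def change_illuminant(surface, scale):
    """A diagonal (von Kries-style) illuminant change scales each cone
    class independently."""
    return [v * k for v, k in zip(surface, scale)]

# Hypothetical cone excitations for two surfaces under illuminant 1
s1 = [0.40, 0.30, 0.10]
s2 = [0.20, 0.15, 0.05]
scale = [1.5, 0.8, 2.0]  # hypothetical shift to illuminant 2

ratios_before = cone_ratios(s1, s2)
ratios_after = cone_ratios(change_illuminant(s1, scale),
                           change_illuminant(s2, scale))
```

The study's point is that such ratio stability is not exclusive to naturalistic spectra, which is why relational constancy also supported the artificial renderings.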

12.
Comput Methods Programs Biomed ; 254: 108285, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38964248

ABSTRACT

BACKGROUND AND OBJECTIVE: In renal disease research, precise diagnosis of glomerular disease is crucial for treatment and prognosis. Diagnosis currently relies on invasive biopsies, which carry risks and pathologist-dependent variability, yielding inconsistent results. There is a pressing need for innovative diagnostic tools that enhance traditional methods, streamline processes, and ensure accurate and consistent disease detection. METHODS: In this study, we present an innovative Convolutional Neural Network-Vision Transformer (CVT) model leveraging Transformer technology to refine glomerular disease diagnosis by fusing spectral and spatial data, surpassing traditional diagnostic limitations. Using interval sampling, preprocessing, and wavelength optimization, we also introduced the Gramian Angular Field (GAF) method for a unified representation of spectral and spatial characteristics. RESULTS: We captured hyperspectral images ranging from 385.18 nm to 1009.47 nm and employed various methods to extract sample features. Initial models based solely on spectral features achieved an accuracy of 85.24%. The CVT model significantly outperformed these, achieving an average accuracy of 94%, which demonstrates its superior capability in utilizing sample data and learning joint feature representations. CONCLUSIONS: The CVT model not only breaks through the limitations of existing diagnostic techniques but also showcases the vast potential of non-invasive, high-precision diagnostic technology for the classification and prognosis of complex glomerular diseases. This innovative approach could significantly impact future diagnostic strategies in renal disease research. CONCISE ABSTRACT: This study introduces a transformative hyperspectral image classification model leveraging a Transformer to significantly improve glomerular disease diagnosis accuracy by synergizing spectral and spatial data, surpassing conventional methods. In a rigorous comparative analysis, spectral features alone reached a peak accuracy of 85.24%, whereas the CVT model's integration of spatial-spectral features via the GAF method markedly enhanced diagnostic precision, achieving an average accuracy of 94%. This methodological innovation not only overcomes traditional diagnostic limitations but also underscores the potential of non-invasive, high-precision technologies in advancing the classification and prognosis of complex renal diseases, setting a new benchmark in the field.
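The Gramian Angular Field mentioned in the methods maps a 1-D sequence to an image: values are rescaled to [-1, 1], converted to angles via arccos, and combined pairwise as G[i][j] = cos(phi_i + phi_j) (the summation variant, GASF). A minimal sketch on a hypothetical sequence, not the study's spectra:

```python
from math import acos, cos

def gramian_angular_field(series):
    """Gramian angular summation field: rescale to [-1, 1], map each value
    to an angle phi = arccos(x), and form G[i][j] = cos(phi_i + phi_j)."""
    lo, hi = min(series), max(series)
    scaled = [2 * (x - lo) / (hi - lo) - 1 for x in series]
    phi = [acos(x) for x in scaled]
    return [[cos(pi + pj) for pj in phi] for pi in phi]

# Hypothetical 1-D spectral intensity profile
gaf = gramian_angular_field([385.18, 700.0, 1009.47])
```

The resulting symmetric matrix preserves temporal/spectral order along its diagonal, which is what lets a 2-D convolutional backbone consume 1-D spectral data.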

13.
Network ; : 1-39, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38975771

ABSTRACT

Early detection of lung cancer is necessary to prevent lung cancer deaths, but identifying cancer in Computed Tomography (CT) scans of the lungs with existing deep learning algorithms does not provide sufficiently accurate results. A novel adaptive deep learning framework with heuristic improvement is therefore developed. The proposed framework comprises three stages: (a) image acquisition, (b) lung nodule segmentation, and (c) lung cancer classification. Raw CT images are gathered from standard data sources. Nodule segmentation is then performed by an Adaptive Multi-Scale Dilated Trans-Unet3+, whose parameters are optimized by the proposed Modified Transfer Operator-based Archimedes Optimization (MTO-AO) to increase segmentation accuracy. Finally, the segmented images are classified by Advanced Dilated Ensemble Convolutional Neural Networks (ADECNN), constructed from Inception, ResNet, and MobileNet, with hyperparameters tuned by MTO-AO. The final result is estimated from the three networks by high-ranking-based classification. Performance is evaluated with multiple measures and compared against different approaches, demonstrating the system's efficiency in detecting cancer and helping patients receive appropriate treatment.
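The "high ranking-based classification" that fuses the three networks is not specified in detail in the abstract; one plausible reading is a rank-sum (Borda) vote over the class probabilities of the Inception, ResNet, and MobileNet branches. A sketch under that assumption, with hypothetical probabilities:

```python
def borda_ensemble(prob_sets):
    """Rank-based ensemble: each model ranks the classes by predicted
    probability; ranks are summed (Borda count) and the class with the
    highest total rank wins."""
    n_classes = len(prob_sets[0])
    scores = [0] * n_classes
    for probs in prob_sets:
        order = sorted(range(n_classes), key=lambda c: probs[c])
        for rank, c in enumerate(order):  # higher probability -> higher rank
            scores[c] += rank
    return max(range(n_classes), key=lambda c: scores[c])

# Hypothetical class probabilities from the three backbone branches
inception = [0.2, 0.5, 0.3]
resnet    = [0.1, 0.6, 0.3]
mobilenet = [0.4, 0.3, 0.3]
label = borda_ensemble([inception, resnet, mobilenet])
```

Rank-based fusion is robust to one branch producing poorly calibrated probabilities, since only the ordering of its outputs matters.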

14.
Comput Struct Biotechnol J ; 24: 434-450, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38975287

ABSTRACT

A medical data integration center integrates a large volume of medical images from clinical departments, including X-rays, CT scans, and MRI scans. Ideally, all images should be indexed appropriately with standard clinical terms, but some have incorrect or missing annotations, which creates challenges for searching and integrating data centrally. To address this issue, accurate and meaningful descriptors are needed for the indexing fields, enabling users to efficiently search for desired images and integrate them with international standards. This paper aims to provide concise annotations for missing or incorrectly indexed fields, incorporating essential instance-level information such as radiology modality (e.g., X-ray), anatomical region (e.g., chest), and body orientation (e.g., lateral) using a deep learning classification model, ResNet50. To demonstrate the capability of our algorithm to generate annotations for indexing fields, we conducted three experiments using two open-source datasets, the ROCO dataset and the IRMA dataset, along with a custom dataset featuring SNOMED CT labels. While the outcomes of these experiments are satisfactory (precision >75%) for less critical tasks and serve as a valuable testing ground for image retrieval, they also underscore the need for further exploration of potential challenges. This paper elaborates on the identified issues and presents well-founded recommendations for refining and advancing our proposed approach.

15.
Biomed Eng Lett ; 14(4): 785-800, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38946824

ABSTRACT

The aim of this study is to propose a new diagnostic model based on "segmentation + classification" to improve routine ultrasonographic screening of thyroid nodules by utilizing key domain knowledge from medical diagnostic tasks. A multi-scale segmentation network based on a pyramidal pooling structure of multiple parallel atrous convolutions is proposed. First, in the segmentation network, precise information from the underlying feature space is obtained by an Attention Gate. Second, the dilated convolutional part of Atrous Spatial Pyramid Pooling (ASPP) is cascaded for multiple downsampling. Finally, a three-branch classification network combined with expert knowledge is designed, drawing on doctors' clinical diagnostic experience, to extract features from the original nodule image, the nodule region image, and the nodule edge image, respectively, and to improve classification accuracy using the Coordinate Attention (CA) mechanism and cross-level feature fusion. The multi-scale segmentation network achieves a mean pixel accuracy (mPA) of 94.27%, a Dice value of 93.90%, and a mean intersection over union (mIoU) of 88.85%, and the accuracy, specificity, and sensitivity of the classification network reach 86.07%, 81.34%, and 90.19%, respectively. Comparison tests show that this method outperforms the classical U-Net, AGU-Net, and DeepLab V3+ models as well as the more recent nnU-Net, Swin UNETR, and MedFormer models. As an auxiliary diagnostic tool, this algorithm can help physicians assess the benign or malignant nature of thyroid nodules more accurately, providing objective quantitative indicators, reducing subjective judgment bias, and improving the consistency and accuracy of diagnosis. Codes and models are available at https://github.com/enheliang/Thyroid-Segmentation-Network.git.
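The Dice and mIoU figures quoted above follow directly from mask overlap: Dice = 2|A∩B|/(|A|+|B|) and IoU = |A∩B|/|A∪B|. A minimal sketch on hypothetical binary masks:

```python
def dice_and_iou(pred, truth):
    """Dice and IoU for binary masks given as flat 0/1 lists:
    Dice = 2*|A∩B| / (|A|+|B|), IoU = |A∩B| / |A∪B|."""
    inter = sum(p & t for p, t in zip(pred, truth))
    a, b = sum(pred), sum(truth)
    union = a + b - inter
    dice = 2 * inter / (a + b) if a + b else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

# Hypothetical 8-pixel nodule masks (1 = nodule)
pred  = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]
dice, iou = dice_and_iou(pred, truth)
```

Dice is always at least as large as IoU for the same masks (Dice = 2·IoU/(1+IoU)), which is why the reported Dice (93.90%) exceeds the mIoU (88.85%).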

16.
Theranostics ; 14(9): 3708-3718, 2024.
Article in English | MEDLINE | ID: mdl-38948061

ABSTRACT

Purpose: This study aims to elucidate the role of quantitative SSTR-PET metrics and clinicopathological biomarkers in the progression-free survival (PFS) and overall survival (OS) of neuroendocrine tumors (NETs) treated with peptide receptor radionuclide therapy (PRRT). Methods: A retrospective analysis of 91 NET patients (47 male/44 female; age 66 years, range 34-90 years) who completed four cycles of standard 177Lu-DOTATATE was conducted. SSTR-avid tumors were segmented from pretherapy SSTR-PET images using a semiautomatic workflow, with tumors labeled by anatomical region. Multiple image-based features, including total and organ-specific tumor volume and SSTR density, along with clinicopathological biomarkers including Ki-67, chromogranin A (CgA), and alkaline phosphatase (ALP), were analyzed with respect to PRRT response. Results: The median OS was 39.4 months (95% CI: 33.1-NA months), while the median PFS was 23.9 months (95% CI: 19.3-32.4 months). Total SSTR-avid tumor volume (HR = 3.6; P = 0.07) and bone tumor volume (HR = 1.5; P = 0.003) were associated with shorter OS. Total tumor volume (HR = 4.3; P = 0.01), liver tumor volume (HR = 1.8; P = 0.05), and bone tumor volume (HR = 1.4; P = 0.01) were also associated with shorter PFS. Furthermore, the presence of a large lesion volume with low SSTR uptake was correlated with worse OS (HR = 1.4; P = 0.03) and PFS (HR = 1.5; P = 0.003). Among the biomarkers, elevated baseline CgA and ALP showed a negative association with both OS (CgA: HR = 4.9; P = 0.003; ALP: HR = 52.6; P = 0.004) and PFS (CgA: HR = 4.2; P = 0.002; ALP: HR = 9.4; P = 0.06). Similarly, the number of prior systemic treatments was associated with shorter OS (HR = 1.4; P = 0.003) and PFS (HR = 1.2; P = 0.05). Additionally, tumors originating from a midgut primary site demonstrated longer PFS than those of pancreatic (HR = 1.6; P = 0.16) or unknown primary (HR = 3.0; P = 0.002) origin.
Conclusion: Image-based features such as SSTR-avid tumor volume, bone tumor involvement, and the presence of large tumors with low SSTR expression demonstrated significant predictive value for PFS, suggesting potential clinical utility in NETs management. Moreover, elevated CgA and ALP, along with an increased number of prior systemic treatments, emerged as significant factors associated with worse PRRT outcomes.
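The median OS and PFS above are presumably Kaplan-Meier estimates, which account for censored patients by rescaling the survival curve at each observed event. A bare-bones sketch on hypothetical follow-up data (real analyses handle tied event times and report confidence intervals):

```python
def km_median(times, events):
    """Kaplan-Meier survival curve from (time, event) pairs
    (event=1: progression/death observed; 0: censored). The median is
    the first time the survival estimate drops to 0.5 or below."""
    s = 1.0
    at_risk = len(times)
    for t, e in sorted(zip(times, events)):
        if e:
            s *= (at_risk - 1) / at_risk  # step down at each event
            if s <= 0.5:
                return t
        at_risk -= 1  # censored patients leave the risk set silently
    return None  # median not reached within follow-up

# Hypothetical follow-up in months (events and censorings interleaved)
times  = [6, 10, 14, 20, 24, 30, 36, 40]
events = [1,  1,  0,  1,  1,  0,  1,  0]
median = km_median(times, events)
```

Censoring is why the upper OS confidence bound is reported as "NA": the curve never fell far enough within follow-up to pin it down.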


Subjects
Biomarkers, Tumor; Neuroendocrine Tumors; Octreotide; Organometallic Compounds; Humans; Neuroendocrine Tumors/radiotherapy; Neuroendocrine Tumors/diagnostic imaging; Neuroendocrine Tumors/pathology; Neuroendocrine Tumors/metabolism; Aged; Middle Aged; Organometallic Compounds/therapeutic use; Male; Female; Octreotide/analogs & derivatives; Octreotide/therapeutic use; Adult; Retrospective Studies; Aged, 80 and over; Biomarkers, Tumor/metabolism; Positron-Emission Tomography/methods; Receptors, Somatostatin/metabolism; Radiopharmaceuticals; Treatment Outcome; Chromogranin A/metabolism; Alkaline Phosphatase/metabolism; Ki-67 Antigen/metabolism; Progression-Free Survival; Tumor Burden
17.
Acad Radiol ; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38944632

ABSTRACT

PURPOSE: Isocitrate dehydrogenase (IDH) and cyclin-dependent kinase inhibitor (CDKN) 2A/B status holds important prognostic value in diffuse gliomas. We aimed to construct prediction models using clinically available and reproducible characteristics for predicting IDH mutation and CDKN2A/B homozygous deletion in adult-type diffuse glioma patients. MATERIALS AND METHODS: This retrospective, two-center study analysed 272 patients with adult-type diffuse glioma (230 in the primary cohort and 42 in the external validation cohort). Two radiologists independently assessed the patients' images according to the Visually AcceSAble Rembrandt Images (VASARI) feature set. Least absolute shrinkage and selection operator (LASSO) regression analysis was used to optimise variable selection, and multivariable logistic regression analysis was used to develop the prediction models. Calibration plots, receiver operating characteristic (ROC) curves, and decision curve analysis (DCA) were used to validate the models, and nomograms were developed visually from the prediction models. RESULTS: The interobserver agreement between the two radiologists for VASARI features was excellent (κ range, 0.813-1). For the IDH-mutant prediction model, the area under the curve (AUC) was 0.88-0.96 in the internal and external validation sets. For the CDKN2A/B homozygous deletion model, the AUCs were 0.80-0.86 in the internal and external validation sets. The decision curves show that both prediction models had good net benefit. CONCLUSION: The prediction models based on VASARI and clinical features provided a reliable and clinically meaningful preoperative prediction of IDH and CDKN2A/B status in diffuse glioma patients. These findings provide a foundation for precise preoperative non-invasive diagnosis and personalised treatment approaches for adult-type diffuse glioma patients.

18.
Rev Fac Cien Med Univ Nac Cordoba ; 81(2): 432-452, 2024 06 28.
Article in Spanish | MEDLINE | ID: mdl-38941220

ABSTRACT

The diagnosis of cirrhotic cardiomyopathy is based on advanced hepatic cirrhosis with impaired cardiac function in the absence of pre-existing heart disease, but the condition remains subclinical for much of its course. In this second part we review the non-invasive diagnostic methods, from the ECG to cardiac magnetic resonance imaging, and their prognostic value in patients with or without liver transplantation.


[Original abstract in Spanish] The diagnosis of cirrhotic cardiomyopathy is based on the presence of advanced hepatic cirrhosis with alterations in cardiac function without pre-existing heart disease, but for much of its natural course the condition is subclinical. Non-invasive complementary studies are therefore essential to confirm the diagnosis and establish its prognostic role in patients with or without liver transplantation. In this second part we review the diagnostic methods from the ECG to cardiac magnetic resonance imaging.


Subjects
Cardiomyopathies , Liver Cirrhosis , Magnetic Resonance Imaging , Humans , Cardiomyopathies/etiology , Cardiomyopathies/physiopathology , Liver Cirrhosis/complications , Electrocardiography , Prognosis
19.
Sci Rep ; 14(1): 14951, 2024 06 28.
Article in English | MEDLINE | ID: mdl-38942817

ABSTRACT

Prostate cancer is one of the most common and fatal diseases among men, and its early diagnosis can have a significant impact on the treatment process and prevent mortality. Since it does not present apparent clinical symptoms in the early stages, it is difficult to diagnose. In addition, disagreement among experts in the interpretation of magnetic resonance images is a significant challenge. In recent years, numerous studies have shown that deep learning, especially convolutional neural networks, can be applied successfully in machine vision, particularly in medical image analysis. In this research, a deep learning approach was applied to multi-parametric magnetic resonance images, and the synergistic effect of clinical and pathological data on model accuracy was investigated. The data were collected from Trita Hospital in Tehran and comprised 343 patients (data augmentation and transfer learning were used during training). In the designed model, four different types of images are analyzed by four separate ResNet50 deep convolutional networks, and their extracted features are passed to a fully connected neural network and combined with clinical and pathological features. Without clinical and pathological data, the model reached a maximum accuracy of 88%; adding these data increased the accuracy to 96%, demonstrating the significant impact of clinical and pathological data on diagnostic accuracy.
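The fusion step described above (per-sequence image features concatenated with clinical/pathological variables before a classification head) can be sketched framework-agnostically with NumPy. The 2048-dimensional branch output matches a standard ResNet50 pooled feature vector; the number of clinical variables, the random features, and the single dense sigmoid layer are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BRANCHES = 4   # four MRI sequence types, one ResNet50 backbone per sequence
FEAT_DIM = 2048  # pooled output size of a standard ResNet50
N_CLINICAL = 8   # hypothetical number of clinical/pathological variables

def fuse(image_feats, clinical_feats):
    """Concatenate the per-branch image features with the clinical features."""
    assert len(image_feats) == N_BRANCHES
    return np.concatenate([*image_feats, clinical_feats])

# Placeholder features for a single patient.
image_feats = [rng.normal(size=FEAT_DIM) for _ in range(N_BRANCHES)]
clinical_feats = rng.normal(size=N_CLINICAL)

fused = fuse(image_feats, clinical_feats)  # 4 * 2048 + 8 = 8200 dimensions

# A single dense layer with a sigmoid output stands in for the fully
# connected classification head.
W = rng.normal(scale=0.01, size=fused.shape[0])
prob = 1.0 / (1.0 + np.exp(-(fused @ W)))
print(fused.shape, 0.0 < prob < 1.0)
```

The design point the abstract makes is visible in the fused vector: the clinical features occupy only 8 of 8200 dimensions yet shift accuracy from 88% to 96%, which is why late fusion through a dense head, rather than discarding tabular data, pays off.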


Subjects
Deep Learning , Prostatic Neoplasms , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/diagnosis , Prostatic Neoplasms/pathology , Male , Middle Aged , Aged , Neural Networks, Computer , Magnetic Resonance Imaging/methods , Multiparametric Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Iran
20.
Bioengineering (Basel) ; 11(6)2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38927807

ABSTRACT

Ameloblastoma (AM), periapical cyst (PC), and chronic suppurative osteomyelitis (CSO) are prevalent maxillofacial diseases with similar imaging characteristics but different treatments, making preoperative differential diagnosis crucial. Existing deep learning methods for diagnosis often require manual delineation of regions of interest (ROIs), which poses challenges in practical application. We propose a new model, the Wavelet Extraction and Fusion Module with Vision Transformer (WaveletFusion-ViT), for automatic diagnosis using CBCT panoramic images. In this study, 539 samples comprising healthy (n = 154), AM (n = 181), PC (n = 102), and CSO (n = 102) cases were acquired by CBCT for classification, with an additional 2000 healthy samples for pre-training the domain-adaptive network (DAN). The WaveletFusion-ViT model was initialized with pre-trained weights obtained from the DAN and further trained using semi-supervised learning (SSL) methods. After five-fold cross-validation, the model achieved average sensitivity, specificity, accuracy, and AUC scores of 79.60%, 94.48%, 91.47%, and 0.942, respectively. Remarkably, our method achieved 91.47% accuracy using less than 20% labeled samples, surpassing the fully supervised approach's accuracy of 89.05%. Despite these promising results, this study's limitations include the low number of CSO cases and a relatively lower accuracy for that condition, which should be addressed in future research. This research represents an innovative approach in that it deviates from the fully supervised learning paradigm typically employed in previous studies. The WaveletFusion-ViT model combines SSL methods to effectively diagnose three types of lesions in CBCT panoramic images using only a small portion of labeled data.
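The sensitivity, specificity, and accuracy figures reported above derive from a confusion matrix; for a multi-class problem like this one, sensitivity and specificity are computed per class in one-vs-rest fashion. A minimal illustration with made-up labels over the study's four categories (not the study's data):

```python
def per_class_metrics(y_true, y_pred, cls):
    """One-vs-rest sensitivity and specificity for a single class."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    tn = sum(t != cls and p != cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

# Toy labels over the four categories in the study: healthy, AM, PC, CSO.
y_true = ["healthy", "AM", "PC", "CSO", "AM", "healthy", "PC", "CSO"]
y_pred = ["healthy", "AM", "PC", "AM",  "AM", "healthy", "CSO", "CSO"]

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)  # 6/8 = 0.75
sens_am, spec_am = per_class_metrics(y_true, y_pred, "AM")
print(accuracy, sens_am, spec_am)
```

Averaging the per-class values over all four categories yields the macro-averaged sensitivity and specificity of the kind reported (79.60% and 94.48%); specificity runs higher because each one-vs-rest split is dominated by true negatives.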
