ABSTRACT
Introduction. The comprehensive training of residents goes beyond theoretical knowledge and operative technique. Faced with the complexity of modern surgery, its uncertainty and dynamism, it is necessary to redefine the understanding of surgical education and to promote adaptive capabilities in future surgeons so that they can manage this environment effectively. These aspects refer to adaptive expertise. Methods. This narrative review proposes a definition of surgical education with an emphasis on adaptive expertise, and an approach for its adoption in practice. Results. Based on the available literature, surgical education represents a dynamic process situated at the intersection of the complexity of surgical culture, workplace learning, and quality in health care, aimed at developing cognitive, manual, and adaptive capacities in the future surgeon that allow them to provide high-value care within a collective work system while strengthening their professional identity. The resident's adaptive expertise is a fundamental capacity for maximizing performance in the face of these characteristics of surgical education. Six strategies to strengthen this capacity are found in the available literature. Conclusion. Adaptive expertise is an expected and necessary capacity in the surgical resident to deal with the complexity of surgical education. Practical strategies can help strengthen it, and these should be evaluated in future studies.
Subject(s)
Humans , Education, Medical, Graduate , Deep Learning , Professional Competence , General Surgery , Vocational Education , Metacognition
ABSTRACT
This article provides an in-depth review of various methods employed in the identification and sequencing of spiders, highlighting the advancements and challenges in the field. With the increasing importance of spiders in ecological studies, medical research, and biodiversity conservation, accurate identification and genetic analysis have become crucial. This review discusses traditional and modern techniques, shedding light on their applications, limitations, and future prospects. The exploration begins with an analysis of taxonomists' etymological choices, examining patterns in naming conventions across continents and centuries. Traditional morphological identification, anchored in backbone taxonomy, dichotomous keys, and statistical analyses, highlights the advantages and challenges of relying on observable features. The study transitions to molecular techniques, elucidating the applications and challenges of DNA barcoding, Next-Generation Sequencing (NGS), and metabarcoding in spider identification. The integration of deep learning models, exemplified by the YOLOv7-based Spider Identification APP, represents a landmark in computer vision for efficient and user-friendly spider species recognition. The study's multifaceted approach provides a nuanced understanding of spider taxonomy, bridging historical practices with state-of-the-art technologies, and lays the groundwork for future advancements in the field.
ABSTRACT
Artificial intelligence (AI) is revolutionizing radiology, oncology, and other medicine and veterinary care areas. Adopting deep learning algorithms has significantly advanced image analysis and disease detection. This study explores how AI is reshaping the roles of radiologists and radiographers. It highlights its vital function in infection detection and control, as evidenced by its impact during the coronavirus disease 2019 (COVID-19) pandemic. In veterinary radiation oncology, AI supports complex contouring and treatment planning. However, while AI offers numerous advantages, its implementation must be cautiously approached. Radiologists face challenges, particularly the overwhelming volume of imaging data, which AI helps manage through artificial neural networks and machine learning (ML) algorithms—two significant innovations in this field. In veterinary radiation oncology, AI facilitates collaboration, standardization of data, and the creation of standard operating procedures. Early disease detection, enabled by AI, is essential for initiating treatments that can improve patient outcomes and prognosis. AI is crucial in analyzing large medical datasets, including imaging and clinical data, through advanced algorithms and ML techniques. In veterinary medicine, AI is key to addressing complex challenges in host–pathogen interactions, precision medicine, and predictive epidemiology. AI-powered solutions for continuous monitoring ensure that at-risk patients receive ongoing observation, enabling the rapid detection of changes in health markers. This approach is especially advantageous in managing chronic conditions, enabling proactive healthcare, and facilitating early intervention.
ABSTRACT
SUMMARY: This study aims to demonstrate the success of deep learning methods in sex prediction using the hyoid bone. Neck computed tomography (CT) images of people aged 15-94 years were retrospectively reviewed. The neck CT images were cleaned using the RadiAnt DICOM Viewer (version 2023.1) program, leaving only the hyoid bone. A total of 7 images, in the anterior, posterior, superior, inferior, right, left, and right-anterior-superior directions, were obtained from each patient's segmented hyoid bone image. In total, 2170 images were obtained from 310 male hyoid bones and 1820 images from 260 female hyoid bones. The 3990 images were expanded to 5000 images by data augmentation. The dataset was divided into 80 % for training, 10 % for testing, and 10 % for validation. Three deep learning models were compared: DenseNet121, ResNet152, and VGG19. An accuracy of 87 % was achieved with the ResNet152 model and 80.2 % with the VGG19 model. The highest accuracy among the compared models, 89 %, was obtained with the DenseNet121 model. This model had a specificity of 0.87, a sensitivity of 0.90, and an F1 score of 0.89 in women, and a specificity of 0.90, a sensitivity of 0.87, and an F1 score of 0.88 in men. It was observed that sex could be predicted from the hyoid bone using the deep learning methods DenseNet121, ResNet152, and VGG19, a method that had not previously been tried on this bone. This study also brings us one step closer to strengthening and perfecting the use of technologies that will reduce the subjectivity of the methods and support the expert in the decision-making process of sex prediction.
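As an illustration of the kind of approach described above, the following is a minimal transfer-learning sketch for a DenseNet121-based binary sex classifier, assuming cropped hyoid-bone renderings are stored in an ImageFolder-style layout; the directory names, image size, and hyperparameters are illustrative assumptions, not the study's actual settings.

```python
# Minimal sketch (PyTorch/torchvision): fine-tune DenseNet121 for binary sex
# classification on hyoid-bone CT renderings. Paths, image size, and
# hyperparameters are illustrative assumptions, not study settings.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumed layout: hyoid_dataset/train/{female,male}, hyoid_dataset/val/{female,male}
train_ds = datasets.ImageFolder("hyoid_dataset/train", transform=tf)
val_ds = datasets.ImageFolder("hyoid_dataset/val", transform=tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
val_dl = DataLoader(val_ds, batch_size=32)

model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 2)  # female / male
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    # Simple validation accuracy after each epoch
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_dl:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    print(f"epoch {epoch + 1}: val accuracy = {correct / total:.3f}")
```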
Subject(s)
Humans , Male , Female , Adolescent , Adult , Middle Aged , Aged , Aged, 80 and over , Young Adult , Tomography, X-Ray Computed , Sex Determination by Skeleton , Deep Learning , Hyoid Bone/diagnostic imaging , Predictive Value of Tests , Sensitivity and Specificity , Hyoid Bone/anatomy & histology
ABSTRACT
Diagnosis at an early stage is the most crucial and decisive outcome in oral cancers. The main objective of this study was to give a brief summary of various emerging artificial intelligence (AI)-based optical imaging techniques, along with their applications and implications for improving oral cancer detection. Early diagnosis of oral cancer facilitates early treatment and helps predict the patient's overall prognosis. The review discusses the use of convolutional neural networks (CNNs) for the classification of oral cancer images. Morphological operations are then used to segment the cancerous regions, after which a deep learning algorithm is applied to grade the lesion regions as mild or severe.
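To make the morphology-based segmentation step concrete, the sketch below thresholds a grayscale oral image, cleans the mask with morphological opening and closing, and extracts candidate lesion regions for downstream CNN grading; the file name, Otsu thresholding, kernel size, and area cutoff are assumptions for illustration, not the pipeline from any specific study.

```python
# Illustrative morphology-based segmentation (OpenCV): threshold, clean the
# mask with opening/closing, then extract candidate lesion regions that a CNN
# could grade as mild or severe. All parameters here are placeholders.
import cv2

img = cv2.imread("oral_image.jpg", cv2.IMREAD_GRAYSCALE)

# Otsu threshold to separate candidate lesion pixels from background
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening removes small speckles; closing fills small holes
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Connected components become candidate lesion regions for classification
num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
regions = []
for i in range(1, num_labels):  # label 0 is the background
    x, y, w, h, area = stats[i]
    if area > 500:  # ignore tiny fragments (arbitrary cutoff)
        regions.append(img[y:y + h, x:x + w])

print(f"{len(regions)} candidate lesion regions extracted for CNN grading")
```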
ABSTRACT
The rapid digitalization of various aspects of life has significantly transformed dentistry, improving the quality of dental care through advanced technologies like artificial intelligence (AI). AI, which replicates human cognitive processes, has revolutionized dental practices by automating time-consuming tasks and offering precise diagnostics and treatment plans. Despite being in early development stages, AI in dentistry signifies a disruptive technology poised to redesign clinical care. Innovations such as CAD/CAM systems, intraoral imaging, and digital radiography illustrate AI's applications in caries diagnosis, implant design, and more. Historical milestones, from the conceptualization of AI to advancements in machine learning and neural networks, have paved the way for sophisticated AI models used in various dental specialties, including pediatric dentistry. AI's potential extends to patient education and practice management, promising a future where dentistry is increasingly efficient, accurate, and patient-centered. This review highlights the role of AI in pediatric dentistry, with particular emphasis on a review of the literature.
ABSTRACT
Super-resolution is the process of creating high-resolution images from low-resolution images; the goal is to recover one high-resolution image from one low-resolution image. This is challenging because high-frequency image content typically cannot be recovered from the low-resolution image, and without high-frequency information the quality of the reconstructed high-resolution image is limited. The recovery of a high-resolution image from a low-resolution image can be achieved through deep learning techniques. By using super-resolution, it is possible to improve the visual quality of images, making them clearer, sharper, and more detailed. Super-resolution remains a challenging task in image processing because it involves recovering missing information from low-resolution images. There are several approaches to super-resolution, but they can be broadly categorized into two groups: interpolation-based methods and reconstruction-based methods. Interpolation-based methods use mathematical algorithms to estimate the missing high-resolution pixels based on the values of the surrounding low-resolution pixels. While these methods are relatively simple and computationally efficient, they may not always produce high-quality results.
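The two approach families can be contrasted with a short sketch: a bicubic interpolation baseline versus a minimal SRCNN-style reconstruction network. The 9-5-5 convolution pattern follows the classic SRCNN design, but the snippet is an untrained, generic illustration rather than any particular paper's implementation.

```python
# Sketch of the two super-resolution families: interpolation-based upscaling
# and a reconstruction-based SRCNN-style CNN. Sizes and the random input are
# placeholders; the network is untrained.
import torch
import torch.nn as nn
import torch.nn.functional as F

# 1) Interpolation-based upscaling: estimate missing pixels from neighbours.
lr = torch.rand(1, 3, 64, 64)  # placeholder low-resolution image tensor
bicubic_up = F.interpolate(lr, scale_factor=4, mode="bicubic", align_corners=False)

# 2) Reconstruction-based upscaling: a CNN learns to restore detail from examples.
class SRCNN(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2),        # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SRCNN operates on an image already upscaled by interpolation
        return self.body(x)

sr = SRCNN()(bicubic_up)
print(bicubic_up.shape, sr.shape)  # both torch.Size([1, 3, 256, 256])
```

The interpolation step is cheap but adds no new high-frequency detail; the learned network attempts to restore that detail from training examples, which is why reconstruction-based methods generally produce sharper results at higher computational cost.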
ABSTRACT
People frequently struggle to juggle their work, family, and social life in today's fast-paced environment, which can leave them exhausted and worn out. The development of technologies for detecting fatigue while driving is an important field of research, since driving when fatigued poses risks to road safety. To shed light on the most recent advancements in this field, this paper provides an extensive review of fatigue driving detection approaches based on electroencephalography (EEG) data. The process of fatigue driving detection based on EEG signals encompasses signal acquisition, preprocessing, feature extraction, and classification. Each step plays a crucial role in accurately identifying driver fatigue. In this review, we delve into the signal acquisition techniques, including the use of portable EEG devices worn on the scalp that capture brain signals in real time. Preprocessing techniques, such as artifact removal, filtering, and segmentation, are explored to ensure that the extracted EEG signals are of high quality and suitable for subsequent analysis. A crucial stage in the fatigue driving detection process is feature extraction, which entails extracting pertinent information from the EEG signals and using it to distinguish between fatigued and non-fatigued states. We give a thorough rundown of several feature extraction techniques, including topological features, frequency-domain analysis, and time-domain analysis. Frequency-domain analysis techniques, such as the wavelet transform and power spectral density, allow the identification of particular frequency bands linked to fatigue. Temporal patterns in the EEG signals are captured by time-domain features such as autoregressive modeling and statistical moments. Furthermore, topological characteristics such as inter-regional brain connectivity and synchronization shed light on how the brain's functional network changes with fatigue. The review also includes an analysis of different classifiers used in fatigue driving detection, such as the support vector machine (SVM), artificial neural network (ANN), and Bayesian classifier. We discuss the advantages and limitations of each classifier, along with their applications in EEG-based fatigue driving detection. Evaluation metrics and performance assessment are crucial aspects of any detection system. We discuss the commonly used evaluation criteria, including accuracy, sensitivity, specificity, and receiver operating characteristic (ROC) curves. Comparative analyses of existing models are conducted, highlighting their strengths and weaknesses. Additionally, we emphasize the need for a standardized data marking protocol and an increased number of test subjects to enhance the robustness and generalizability of fatigue driving detection models. The review also discusses the challenges and potential solutions in EEG-based fatigue driving detection. These challenges include variability in EEG signals across individuals, environmental factors, and the influence of different driving scenarios. To address these challenges, we propose solutions such as personalized models, multi-modal data fusion, and real-time implementation strategies. In conclusion, this comprehensive review provides an extensive overview of the current state of fatigue driving detection based on EEG signals, covering signal acquisition, preprocessing, feature extraction, classification, performance evaluation, and challenges.
The review aims to serve as a valuable resource for researchers, engineers, and practitioners in the field of driving safety, facilitating further advancements in fatigue detection technologies and ultimately enhancing road safety.
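A minimal end-to-end sketch of the pipeline stages reviewed above (band-pass filtering, Welch power-spectral-density band-power features, SVM classification) is shown below, using SciPy and scikit-learn. The sampling rate, band limits, filter settings, and synthetic "epochs" are common choices standing in for real EEG recordings, not values drawn from the review.

```python
# Sketch of an EEG fatigue-detection pipeline: band-pass filter, Welch PSD
# band-power features, and an SVM classifier. Synthetic signals stand in for
# real recordings; all settings are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def bandpass(sig, low=1.0, high=40.0, fs=FS, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def band_powers(sig, fs=FS):
    freqs, psd = welch(sig, fs=fs, nperseg=fs * 2)
    feats = []
    for lo, hi in BANDS.values():
        idx = (freqs >= lo) & (freqs < hi)
        feats.append(np.trapz(psd[idx], freqs[idx]))  # integrate PSD over the band
    return feats

# Synthetic "epochs": 200 ten-second single-channel segments with binary labels
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, FS * 10))
labels = rng.integers(0, 2, size=200)  # 0 = alert, 1 = fatigued (placeholder)

X = np.array([band_powers(bandpass(e)) for e in epochs])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf").fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

On real data the same skeleton would be extended with artifact removal, multi-channel features, and subject-wise cross-validation to address the inter-individual variability discussed above.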
ABSTRACT
Objective To study the feasibility of automatic contouring of the pelvic intestinal tube based on deep learning for radiotherapy images. Methods A total of 100 patients diagnosed with rectal cancer who received radiotherapy in Zhongshan Hospital, Fudan University from 2019 to 2021 were randomly selected. Sixty cases were randomly enrolled to train the models, and the other 40 cases were used for testing. Based on the original small intestine model in the automatic segmentation software AccuContour, 60, 40 and 20 (2 groups) of the training cases were used to train the models Rec60, Rec40, Rec20A and Rec20B, with manual contouring as the ground truth. The 40 test cases were used to evaluate the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95) and average symmetric surface distance (ASSD) between the manual contouring and the original model as well as model Rec60. The DSC of the 5 groups of auto-segmentations were compared as well. Paired t tests were performed for each pair of the original model and the 4 trained models. Results The small bowel contours produced by the trained models were more similar to the manual contouring. They could better distinguish the boundary of the intestinal tube and distinguish the small bowel from the colon. The average DSC, HD95 and ASSD of Rec60 were 0.16 higher (P<0.001), 12.4 lower (P<0.001) and 5.14 lower (P<0.001) than those of the original model, respectively. According to the paired t tests, there were no statistical differences in DSC between the 4 trained models and the original model. No statistical difference was observed between Rec60 and Rec40, while they were both significantly different from the two Rec20 models. There was no statistical difference between Rec20A and Rec20B. Conclusion For radiotherapy images, model training can effectively improve the accuracy of intestinal tube delineation. Forty cases were enough for training an optimal model of automatic segmentation for the pelvic intestinal tube in AccuContour software.
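The evaluation metrics used above (DSC, HD95, ASSD) can be computed between a manual and an automatic binary mask as in the sketch below. This is a common NumPy/SciPy formulation (HD95 as the 95th percentile of the combined symmetric surface distances), not the AccuContour implementation, and the toy masks and voxel spacing are placeholders.

```python
# Common formulation of segmentation agreement metrics between two binary masks:
# Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average
# symmetric surface distance (ASSD). Masks and spacing below are toy examples.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface(mask):
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)  # voxels on the boundary of the mask

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric set of distances between the surfaces of masks a and b (in mm)."""
    sa, sb = surface(a), surface(b)
    dist_to_b = distance_transform_edt(~sb, sampling=spacing)
    dist_to_a = distance_transform_edt(~sa, sampling=spacing)
    return np.concatenate([dist_to_b[sa], dist_to_a[sb]])

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    return np.percentile(surface_distances(a, b, spacing), 95)

def assd(a, b, spacing=(1.0, 1.0, 1.0)):
    return surface_distances(a, b, spacing).mean()

# Toy example: two overlapping boxes stand in for manual vs. automatic contours
manual = np.zeros((40, 40, 40), dtype=bool)
auto = np.zeros_like(manual)
manual[10:30, 10:30, 10:30] = True
auto[12:32, 10:30, 10:30] = True

print(f"DSC={dice(manual, auto):.3f}, HD95={hd95(manual, auto):.1f} mm, ASSD={assd(manual, auto):.2f} mm")
```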
ABSTRACT
Children's bronchial lumen is relatively narrow, pulmonary interstitial development is superior to elastic tissue, and ciliary clearance is weak, which makes children more prone to pulmonary infection and pneumonia. The development of artificial intelligence (AI) and its application in medicine are changing traditional disease diagnosis, assessment and treatment. AI with deep learning as the core is increasingly used in the diagnosis and prognosis evaluation of pneumonia in children, which is conducive to the early diagnosis and accurate assessment of the disease. Apart from novel coronavirus pneumonia and acute respiratory distress syndrome, researchers rarely pay attention to other viral pneumonia, bacterial pneumonia, mycoplasmal pneumonia, and fungal pneumonia. Meanwhile, there are still problems, such as small datasets, small sample sizes, incomplete algorithms, and little attention paid to pneumonia types and subtypes. In the future, a large-sample dataset of children's pulmonary infections should be established, and learning about AI should be promoted among medical students and medical staff, so as to explore the value of AI in children's pulmonary infection and allow it to play an auxiliary role in clinical decision-making related to diagnosis and treatment.
ABSTRACT
BACKGROUND: MRI is important for the diagnosis of early knee osteoarthritis. MRI image recognition and intelligent segmentation of knee osteoarthritis using deep learning methods are a hot topic in artificial intelligence image diagnosis. OBJECTIVE: Through deep learning of MRI images of knee osteoarthritis, to automatically segment the femur, tibia, patella, cartilage, meniscus, ligaments, muscles and effusion of the knee, and then measure the volume of knee fluid and the muscle content. METHODS: 100 normal knee joints and 100 knee osteoarthritis patients were selected and randomly divided into a training dataset (n=160), a validation dataset (n=20), and a test dataset (n=20) at a ratio of 8:1:1. A coarse-to-fine sequential training method was used to train the 3D-UNET deep learning model. A coarse MRI segmentation model of the knee sagittal plane was trained first, the rough segmentation results were used as a mask, and then the fine segmentation model was trained. The sagittal T1WI and T2WI images of the knee joint and the labeling files of each structure were input, and DeepLab v3 was used to segment the bone, cartilage, ligament, meniscus, muscle, and effusion of the knee; 3D reconstruction and automatic measurement results (muscle content and volume of knee fluid) were finally displayed to complete the deep learning application program. The MRI data of 26 normal subjects and 38 patients with knee osteoarthritis were screened for validation. RESULTS AND CONCLUSION: (1) The 26 normal subjects included 13 females and 13 males, with a mean age of (34.88±11.75) years. The mean muscle content of the knee joint was (1 051 322.94±2 007 249.00) mL, with a median of 631 165.21 mL, and the mean volume of effusion was (291.85±559.59) mL, with a median of 0 mL. (2) There were 38 patients with knee osteoarthritis, including 30 females and 8 males, with a mean age of (68.53±9.87) years. The mean muscle content was (782 409.18±331 392.56) mL, with a median of 689 105.66 mL, and the mean volume of effusion was (1 625.23±5 014.03) mL, with a median of 178.72 mL. (3) There was no significant difference in muscle content between normal subjects and knee osteoarthritis patients. The volume of effusion in patients with knee osteoarthritis was higher than that in normal subjects, and the difference was significant (P<0.05). (4) These findings indicate that intelligent segmentation of MRI images by deep learning can overcome the defects of previous manual segmentation. More accurate evaluation of knee osteoarthritis is needed, and image segmentation should be processed more precisely in the future to improve the accuracy of the results.
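The automatic measurement step can be illustrated with a short sketch: given a multi-class segmentation volume and the MRI voxel spacing, per-structure volumes (for example, effusion) follow from counting labeled voxels. The label IDs, spacing, and toy segmentation below are assumptions for illustration, not the values used in the study.

```python
# Sketch: derive per-structure volumes from a multi-class segmentation volume
# by counting labeled voxels and multiplying by the voxel size. Label IDs and
# spacing are placeholders.
import numpy as np

LABELS = {"muscle": 7, "effusion": 8}   # assumed label IDs
SPACING_MM = (0.5, 0.5, 3.0)            # assumed in-plane and slice spacing (mm)

def structure_volume_ml(seg: np.ndarray, label: int, spacing_mm=SPACING_MM) -> float:
    """Volume of one labeled structure in millilitres (1 mL = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return (seg == label).sum() * voxel_mm3 / 1000.0

# Toy segmentation volume standing in for a 3D-UNET / DeepLab v3 output
seg = np.zeros((160, 256, 256), dtype=np.uint8)
seg[40:60, 100:140, 100:140] = LABELS["effusion"]
seg[70:140, 60:200, 60:200] = LABELS["muscle"]

for name, label in LABELS.items():
    print(f"{name}: {structure_volume_ml(seg, label):.1f} mL")
```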
ABSTRACT
In recent years, the incidence of dry eye disease has been increasing year by year due to environmental changes and the overuse of the eyes by some people. As the main type of dry eye disease, hyper-evaporative dry eye is mostly caused by meibomian gland dysfunction (MGD) resulting from an abnormal quality or quantity of the lipid layer. Due to differences in diagnosis and classification, there is currently no unified standard for the treatment of this disease. The clinician's judgment of the diagnosis, treatment effect, and follow-up management is limited. With the availability of big data and improvements in computer graphics processing and mathematical models, artificial intelligence (AI) is widely used in the medical field. AI systems can utilize technologies such as machine learning and deep learning to exert advanced problem-solving capabilities, making diagnosis more objective and improving diagnosis and treatment efficiency. The application of AI in ophthalmology is mainly based on the auxiliary diagnosis of eye images and the screening of eye diseases, which reduces the dependence of the medical system on manual labor, makes the screening and diagnosis of eye diseases faster, more convenient and more consistent, alleviates the medical burden, and thus significantly improves the efficiency and cost-effectiveness of medical services. At present, the application of AI in cataract, glaucoma, diabetic retinopathy and other fields is becoming more and more mature, and research in the field of MGD-related dry eye has also made certain progress. This article reviews the application status and progress of AI in MGD-related dry eye.
ABSTRACT
Purpose/Significance To improve the classification and evaluation mode of medical safety incidents, and to improve work efficiency and timeliness. Method/Process The data of previous medical safety incidents are pre-processed, the BERT model is used for training, testing and iterative optimization, and an intelligent classification and prediction model for medical safety incidents is built. Result/Conclusion The model is used to classify 466 medical safety incidents reported by clinical departments from January to November 2022, and the F1 value reaches 0.66. The application of the BERT model in the classification and evaluation of medical safety incidents can improve work efficiency and timeliness, and help timely intervention in medical safety risks.
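A minimal sketch of this kind of BERT-based incident classifier, using the Hugging Face Transformers Trainer, is shown below. The checkpoint name (bert-base-chinese), number of categories, and the two example incident texts are placeholders; the reported system's data, preprocessing, and tuning are not reproduced here.

```python
# Minimal BERT fine-tuning sketch for incident-category classification with
# Hugging Face Transformers. Checkpoint, label count, and texts are placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "bert-base-chinese"   # assumed checkpoint
NUM_CLASSES = 6               # assumed number of incident categories

texts = ["患者跌倒，未及时上报", "用药剂量录入错误"]   # placeholder incident reports
labels = [0, 1]

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=NUM_CLASSES)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

ds = Dataset.from_dict({"text": texts, "labels": labels}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="incident-bert", num_train_epochs=3,
                           per_device_train_batch_size=8, logging_steps=10),
    train_dataset=ds,
)
trainer.train()

# Inference: predict the category of each incident report
pred = trainer.predict(ds)
print(pred.predictions.argmax(axis=-1))
```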
ABSTRACT
Objective To develop an accurate deep learning prediction model based on YOLO-V5, capable of accurately identifying medication packaging boxes in outpatient and emergency pharmacies, aiming to assist pharmacists in achieving "zero dispensing error". Methods A total of 2 560 images of packaging boxes from 136 different drugs were collected and labeled to form the deep learning dataset. The dataset was split into training and validation sets at a ratio of 4:1. The YOLO-V5 deep learning algorithm was employed to train on images from our dataset (training epochs: 500, batch size: 4, learning rate: 0.01). The values of precision (Pr) and mean average precision (mAP) were used as measures for model performance evaluation. Results The Pr of the four sub-models of YOLO-V5 on the training set all reached 1.00. The mAP_0.5 of YOLO-V5x was 0.95, which was higher than those of YOLO-V5s (0.94), YOLO-V5l (0.94), and YOLO-V5m (0.94). The mAP_0.5:0.95 of YOLO-V5l and YOLO-V5x was 0.85, higher than those of YOLO-V5s (0.84) and YOLO-V5m (0.84). Training time and model size were 82.56 hours and 166.00 MB for YOLO-V5x, which were the highest among the four models. The detection speed for one image was 11 ms for YOLO-V5s, the fastest among the four models. Conclusion YOLO-V5 can accurately identify the packaging of drugs in outpatient and emergency pharmacies. Implementing an artificial-intelligence-assisted drug dispensation system is feasible for pharmacists to achieve "zero dispensing error".
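At inference time, a trained YOLO-V5 checkpoint can be loaded through the public ultralytics/yolov5 torch.hub interface and applied to a photo of the boxes being dispensed, as in the hedged sketch below. The weight file name, image path, confidence threshold, and drug labels are placeholders, not details from the study.

```python
# Sketch: run a custom-trained YOLO-V5 checkpoint on a dispensing-counter photo
# via the ultralytics/yolov5 torch.hub interface, then compare detections with
# the prescription. All file names and labels are placeholders.
import torch

# Load a custom checkpoint produced by YOLO-V5 training (e.g. runs/train/exp/weights/best.pt)
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
model.conf = 0.5  # confidence threshold (illustrative)

results = model("dispensing_counter.jpg")   # photo of the boxes being dispensed
detections = results.pandas().xyxy[0]        # one row per detected box

# Flag mismatches between detected drug names and the prescription
prescribed = {"amoxicillin_0.25g", "metformin_0.5g"}   # hypothetical class labels
detected = set(detections["name"])
print("missing:", prescribed - detected)
print("unexpected:", detected - prescribed)
```

A check like the final comparison is the kind of rule that would turn raw detections into a dispensing-error alert for the pharmacist.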
ABSTRACT
Objective To observe the value of a YOLOX target detection model for automatically identifying endovascular interventional instruments on digital subtraction angiography (DSA) images. Methods DSA data of 37 patients who underwent abdominal endovascular interventional therapy were retrospectively analyzed. In total, 4 435 DSA images were captured and taken as the data set, which was divided into a training set (n=3 991) and a verification set (n=444) at a ratio of 9:1. Six kinds of endovascular interventional instruments were labeled. The YOLOX algorithm was applied for deep learning on the training set in order to build a target detection model, and the efficacy of the model for automatically identifying endovascular interventional instruments on DSA images was evaluated on the verification set. Results A total of 6 668 labels were placed on the 4 435 DSA images, covering the Terumo 0.035 in loach guide wire (n=587), Cook Lunderquist super hard guide wire (n=990), Optimed 5F graduated pigtail catheter (n=1 680), Cordis MPA multi-functional catheter (n=667), Boston Scientific V-18 controllable guide wire (n=1 330) and Terumo 6F long sheath (n=1 414), respectively. The training set contained 527, 875, 1 466, 598, 1 185 and 1 282 of the above labels, while the verification set contained 60, 115, 214, 69, 145 and 132, respectively. The pixel accuracy of the YOLOX target detection model for automatically identifying the above instruments in the verification set was 95.23%, 97.32%, 99.18%, 98.97%, 97.60% and 98.19%, respectively, with a mean pixel accuracy of 97.75%. Conclusion The YOLOX target detection model could automatically identify endovascular interventional instruments on DSA images.
ABSTRACT
Video-based intelligent action recognition remains challenging in the field of computer vision. The review analyzes the state-of-the-art methods of video-based intelligent action recognition, including machine learning methods with handcrafted features, deep learning methods with automatically extracted features, and multi-information fusion methods. In addition, the important medical applications and limitations of these technologies in the past decade are introduced, and interdisciplinary views on future applications to improve human health are also shared.
ABSTRACT
A non-invasive deep learning method is proposed for reconstructing arterial blood pressure signals from photoplethysmography signals. The method employs U-Net as a feature extractor, and a module referred to as the bidirectional temporal processor is designed to extract time-dependent information on an individual-model basis. The bidirectional temporal processor module utilizes a BiLSTM network to effectively analyze time series data in both forward and backward directions. Furthermore, a deep supervision approach, which involves training the model to focus on various aspects of data features, is adopted to enhance the accuracy of the predicted waveforms. The differences between actual and predicted values are 2.89±2.43, 1.55±1.79 and 1.52±1.47 mmHg for systolic blood pressure, diastolic blood pressure and mean arterial pressure, respectively, suggesting the superiority of the proposed method over existing techniques and demonstrating its application potential.
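The idea of a bidirectional temporal processor can be sketched as a BiLSTM applied to features from a 1D encoder, with a linear head that regresses the arterial blood pressure waveform sample by sample. The simplified convolutional encoder below stands in for the U-Net feature extractor, and all layer sizes, segment lengths, and inputs are illustrative assumptions rather than the paper's architecture.

```python
# Sketch of a PPG-to-ABP regressor: a 1D convolutional encoder (stand-in for
# U-Net) followed by a BiLSTM that models forward and backward temporal
# context, and a per-sample linear regression head. Sizes are illustrative.
import torch
import torch.nn as nn

class PPGtoABP(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(              # simplified stand-in for U-Net
            nn.Conv1d(1, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
        )
        self.bilstm = nn.LSTM(input_size=32, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)       # per-sample ABP estimate

    def forward(self, ppg: torch.Tensor) -> torch.Tensor:
        # ppg: (batch, 1, time)
        feats = self.encoder(ppg)                  # (batch, 32, time)
        feats = feats.transpose(1, 2)              # (batch, time, 32) for the LSTM
        temporal, _ = self.bilstm(feats)           # (batch, time, 2*hidden)
        return self.head(temporal).squeeze(-1)     # (batch, time)

model = PPGtoABP()
ppg_segment = torch.rand(4, 1, 1024)               # 4 placeholder PPG segments
abp_estimate = model(ppg_segment)
print(abp_estimate.shape)                          # torch.Size([4, 1024])
```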
ABSTRACT
Objective To propose a novel algorithm model based on YOLOv7 for detecting small lesions in ultrasound images of hepatic cystic echinococcosis. Methods The original feature extraction backbone was replaced with the lightweight feature extraction backbone network GhostNet to reduce the number of model parameters. To address the low detection accuracy obtained when the evaluation index CIoU of YOLOv7 was used as the loss function, ECIoU was substituted for CIoU, which further improved the detection accuracy. Results The model was trained on a self-built dataset of ultrasound images of small hepatic cystic echinococcosis lesions. The results showed that the improved model had a size of 59.4 G and a detection accuracy of 88.1% for mAP@0.5, outperforming the original model and surpassing other mainstream detection methods. Conclusion The proposed model can detect and classify the location and category of lesions in ultrasound images of hepatic cystic echinococcosis more efficiently.
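For reference, the CIoU term that standard YOLOv7 training uses (and that the paper replaces with an ECIoU variant) combines the IoU with a center-distance penalty and an aspect-ratio consistency term. The sketch below is a plain reference computation of CIoU for corner-format boxes, not the paper's ECIoU implementation; the example boxes are arbitrary.

```python
# Reference computation of CIoU between boxes in (x1, y1, x2, y2) format:
# CIoU = IoU - rho^2/c^2 - alpha*v, with loss typically taken as 1 - CIoU.
import math
import torch

def ciou(box1: torch.Tensor, box2: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    x1 = torch.max(box1[..., 0], box2[..., 0])
    y1 = torch.max(box1[..., 1], box2[..., 1])
    x2 = torch.min(box1[..., 2], box2[..., 2])
    y2 = torch.min(box1[..., 3], box2[..., 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)

    w1, h1 = box1[..., 2] - box1[..., 0], box1[..., 3] - box1[..., 1]
    w2, h2 = box2[..., 2] - box2[..., 0], box2[..., 3] - box2[..., 1]
    union = w1 * h1 + w2 * h2 - inter + eps
    iou = inter / union

    # Squared diagonal of the smallest enclosing box
    cw = torch.max(box1[..., 2], box2[..., 2]) - torch.min(box1[..., 0], box2[..., 0])
    ch = torch.max(box1[..., 3], box2[..., 3]) - torch.min(box1[..., 1], box2[..., 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Squared distance between box centres
    rho2 = ((box1[..., 0] + box1[..., 2] - box2[..., 0] - box2[..., 2]) ** 2 +
            (box1[..., 1] + box1[..., 3] - box2[..., 1] - box2[..., 3]) ** 2) / 4

    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (torch.atan(w2 / (h2 + eps)) - torch.atan(w1 / (h1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return iou - rho2 / c2 - alpha * v

pred = torch.tensor([[50.0, 50.0, 150.0, 150.0]])
target = torch.tensor([[60.0, 55.0, 160.0, 145.0]])
print("CIoU:", ciou(pred, target).item(), "loss:", (1 - ciou(pred, target)).item())
```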
ABSTRACT
Objective To construct a traditional Chinese medicine (TCM) knowledge base using a knowledge graph based on deep learning methods, and to explore the application of joint models in intelligent question answering systems for TCM. Methods The textbooks Prescriptions of Chinese Materia Medica and Chinese Materia Medica were used to construct a comprehensive knowledge graph serving as the foundation for the intelligent question answering system. A BERT+Slot-Gated (BSG) deep learning model was applied to identify the TCM entities and question intentions presented by users in their questions. Answers retrieved from the knowledge graph based on the identified entities and intentions were then returned to the user. The Flask framework and the BSG model were utilized to develop the intelligent question answering system for TCM. Results A TCM knowledge graph encompassing 3 149 entities and 6 891 relational triples based on the prescriptions and Chinese materia medica was constructed. In a question answering test assisted by a question corpus, the F1 value for recognizing entities when answering 20 types of TCM questions was 0.996 9, and the accuracy rate for identifying intentions was 99.75%, indicating that the system is both feasible and practical. Users can interact with the system through the WeChat Official Account platform. Conclusion The BSG model proposed in this paper achieved good results in experiments by increasing the vector dimension, demonstrating the effectiveness of the joint model method and providing new research ideas for the implementation of intelligent question answering systems in TCM.
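The answer-retrieval step that follows entity and intent recognition can be sketched as a lookup over (subject, relation, object) triples: the recognized entity becomes the subject and the predicted intent is mapped onto a graph relation. The triples, intent names, and relation mapping below are invented toy examples, not the paper's 6 891-triple graph or its actual intent schema.

```python
# Sketch of knowledge-graph answer retrieval after a joint model (e.g. BSG)
# has recognized the entity and the question intent. Triples and the
# intent-to-relation mapping are invented examples.
from collections import defaultdict

# (subject, relation, object) triples -- toy stand-ins for the TCM graph
TRIPLES = [
    ("Ephedra Decoction", "composition", "Ephedra"),
    ("Ephedra Decoction", "composition", "Cinnamon Twig"),
    ("Ephedra Decoction", "function", "induces sweating and releases the exterior"),
    ("Ephedra", "property", "warm, acrid"),
]

# Index the graph by (subject, relation) for constant-time lookup
index = defaultdict(list)
for s, r, o in TRIPLES:
    index[(s, r)].append(o)

# Intents predicted by the question-understanding model map onto graph relations
INTENT_TO_RELATION = {
    "ask_composition": "composition",
    "ask_function": "function",
    "ask_property": "property",
}

def answer(entity: str, intent: str) -> str:
    objects = index.get((entity, INTENT_TO_RELATION.get(intent, "")), [])
    return "; ".join(objects) if objects else "No answer found in the knowledge graph."

# Example: the joint model recognized entity "Ephedra Decoction" and intent "ask_composition"
print(answer("Ephedra Decoction", "ask_composition"))
```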
ABSTRACT
Image-based intelligent diagnosis represents a trending research direction in the field of tongue diagnosis in traditional Chinese medicine (TCM). In recent years, machine learning techniques, including convolutional neural networks (CNNs) and Transformers, have been widely used in the analysis of medical images, such as computed tomography (CT) and magnetic resonance imaging (MRI). These techniques have significantly enhanced the efficiency and accuracy of decision-making in TCM practices. Advanced artificial intelligence (AI) technologies have also provided new opportunities for the research and development of medical equipment and TCM tongue diagnosis, resulting in improved standardization and intelligence of tongue diagnostic procedures. Although traditional image analysis methods can transform tongue images into scientific and analyzable data, recognizing and analyzing images that capture complicated tongue features, such as tooth-marked tongue, tongue spots and prickles, fissured tongue, variations in coating thickness, tongue texture (curdy and greasy), and tongue presence (peeled coating), continues to pose significant challenges in contemporary tongue diagnosis research. Therefore, the employment of machine learning techniques in the analysis of tongue shape and texture features, as well as their applications in TCM diagnosis, is the focus of this study. Both traditional and deep learning image analysis techniques were summarized and analyzed to assess their value in predicting disease risks by observing tongue shapes and textures, aiming to open a new chapter for the development and application of AI in TCM tongue diagnosis research. In short, the combination of TCM tongue diagnosis and AI technologies will not only enhance the scientific basis of tongue diagnosis but also improve its clinical applicability, thereby advancing the modernization of TCM diagnostic and therapeutic practices.