Results 1 - 20 of 45
1.
Healthcare (Basel) ; 12(11)2024 May 24.
Article in English | MEDLINE | ID: mdl-38891145

ABSTRACT

Dental wear arises from mechanical (attrition or abrasion) and chemical (erosion) factors. Despite its prevalence and clinical significance, accurately measuring and understanding its causes remain challenging in everyday practice. This one-year study with 39 participants involved comprehensive examinations and full-arch intraoral scans at the start and after 12 months. Volume loss exceeding 100 µm on each tooth's surfaces (buccal, lingual/palatine and incisal/occlusal) was measured by comparing the three-dimensional scans from both time points. The study also assessed factors such as abrasion and erosion through clinical exams and questionnaires. There were no significant differences in dental wear in participants with sleep bruxism. However, noticeable wear occurred in the front teeth of those with waking bruxism and joint-related symptoms. Increased wear was associated with frequent consumption of acidic drinks, regular swimming, dry mouth, nocturnal drooling and heartburn, while no significant wear was found in patients with reflux. The methodology used proved effective in accurately assessing the progression of dental wear, which is important because many patients may initially be asymptomatic. The variability observed in dental wear patterns underscores the need to develop specific software applications that allow immediate and efficient comparison of wear areas based on extensive analysis of patient databases.
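To illustrate the kind of scan-comparison described above, here is a minimal Python sketch that flags tooth surfaces whose measured volume loss between two registered scans exceeds a threshold. The surface names, the input array, and the threshold unit are hypothetical placeholders, not the study's pipeline or data.

```python
import numpy as np

# Hypothetical per-surface volume loss obtained by subtracting the baseline scan
# from the 12-month scan after registration (values and unit are illustrative).
THRESHOLD = 100.0  # assumed wear threshold used for flagging

surfaces = ["11-buccal", "11-palatal", "11-incisal", "36-occlusal"]  # example tooth surfaces
volume_loss = np.array([35.2, 12.7, 148.9, 210.4])                   # illustrative measurements

worn = volume_loss > THRESHOLD
for name, loss, flag in zip(surfaces, volume_loss, worn):
    status = "wear detected" if flag else "no significant wear"
    print(f"{name}: {loss:.1f} -> {status}")
```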

2.
Med Image Anal ; 97: 103248, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38941859

ABSTRACT

The conventional pretraining-and-finetuning paradigm, while effective for common diseases with ample data, faces challenges in diagnosing data-scarce occupational diseases such as pneumoconiosis. Recently, large language models (LLMs) have exhibited an unprecedented ability to handle multiple tasks in dialogue, opening new opportunities for diagnosis. A common strategy might involve using adapter layers for vision-language alignment and diagnosis in a dialogic manner. Yet this approach often requires optimizing extensive learnable parameters in the text branch and the dialogue head, potentially diminishing the LLMs' efficacy, especially with limited training data. In our work, we innovate by eliminating the text branch and substituting the dialogue head with a classification head. This approach offers a more effective way to harness LLMs for diagnosis with fewer learnable parameters. Furthermore, to balance the retention of detailed image information with progression towards an accurate diagnosis, we introduce the contextual multi-token engine, which adaptively generates diagnostic tokens. Additionally, we propose the information emitter module, which unidirectionally emits information from image tokens to diagnosis tokens. Comprehensive experiments validate the superiority of our methods.
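A minimal PyTorch sketch of the general idea of replacing a dialogue head with a lightweight classification head on top of a frozen encoder; the ResNet-18 backbone, feature dimension, and class count are assumptions, and the paper's contextual multi-token engine and information emitter are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class FrozenEncoderClassifier(nn.Module):
    """Frozen image encoder + small trainable classification head (illustrative only)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)          # stand-in for the paper's vision branch
        backbone.fc = nn.Identity()                # expose 512-d features
        for p in backbone.parameters():
            p.requires_grad = False                # keep the encoder frozen
        self.encoder = backbone
        self.head = nn.Linear(512, num_classes)    # the only trainable parameters

    def forward(self, x):
        with torch.no_grad():
            feats = self.encoder(x)
        return self.head(feats)

model = FrozenEncoderClassifier(num_classes=2)
logits = model(torch.randn(4, 3, 224, 224))        # dummy chest-image batch
print(logits.shape)                                # torch.Size([4, 2])
```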

3.
Int J Cosmet Sci ; 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802700

ABSTRACT

OBJECTIVE: Hair beauty treatments enrich human life. As a side effect, there is a risk of deteriorating the health of the hair. Optically polarized microscopy has been used for many decades to evaluate hair condition owing to its ease of use and low operating costs. However, the low biopermeability of light hinders the observation of detailed structures inside the hair. The aim of this study is to establish an evaluation technique for internal damage in hair by utilizing near-infrared (NIR) light with a wavelength of 1000-1600 nm, called the "second NIR window". METHODS: We built a laser scanning transmission microscope system with an indium gallium arsenide detector, a 1064 nm laser source, and optical circular polarization to visualize the anisotropy of keratin fibres in hair. Samples of Asian black hair before and after bleaching, after permanent waving, after lithium bromide (LiBr) treatment, and after heating were observed. Parameters reflecting intra-hair damage were quantified from the digitally recorded images with analytical developments. RESULTS: The light transmittance of black hair was dramatically improved by utilizing the second NIR window. Numerical analysis of circular polarization in hair quantified the internal damage in chemically or thermally treated hair and revealed two different types of damage. The present method enabled quantitative evaluation of condition changes in the cortex; for example, a decrease in circular polarizability after LiBr treatment and its restoration when the LiBr solution was replaced with water. In addition, black speckles were observed after the heat treatment. Longer heating and wetting times increased the appearance probability and size of the speckles. According to quantitative analyses, the emergence of black spots was independent of polarizability changes, indicating that they were not pores. CONCLUSION: Circular polarization microscopy based on near-infrared optics in the second NIR window provides an effective method for quantifying intra-hair damage caused by cosmetic treatments. The present method provides noninvasive, easy and inexpensive hair evaluation and has potential as a gold standard in hair care research and medical fields.
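As a rough illustration of quantifying circular polarization from paired detector images, the sketch below computes a standard degree-of-circular-polarization map, (I_R - I_L)/(I_R + I_L). This is a generic Stokes-style metric, not necessarily the exact analysis used in the study, and the input arrays are synthetic.

```python
import numpy as np

def degree_of_circular_polarization(i_rcp: np.ndarray, i_lcp: np.ndarray) -> np.ndarray:
    """Pixel-wise (I_R - I_L) / (I_R + I_L), a common circular-polarization measure."""
    total = i_rcp + i_lcp
    return np.divide(i_rcp - i_lcp, total,
                     out=np.zeros_like(total, dtype=float), where=total > 0)

rng = np.random.default_rng(0)
i_rcp = rng.uniform(0.4, 1.0, size=(128, 128))   # synthetic right-circular image
i_lcp = rng.uniform(0.2, 0.8, size=(128, 128))   # synthetic left-circular image
docp = degree_of_circular_polarization(i_rcp, i_lcp)
print(f"mean DOCP over the hair cross-section: {docp.mean():.3f}")
```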



4.
Comput Biol Med ; 165: 107403, 2023 10.
Article in English | MEDLINE | ID: mdl-37688992

ABSTRACT

Given the significant changes in human lifestyle, the incidence of colon cancer has rapidly increased. The diagnostic process can often be complicated by symptom similarities between colon cancer and other colon-related diseases. In an effort to minimize misdiagnosis, deep learning-based approaches for colon cancer diagnosis have progressed notably within clinical medicine, offering more precise detection and improved patient outcomes. Despite these advancements, the practical application of these techniques continues to encounter two major challenges: 1) owing to the need for expert annotation, only a limited number of labels are available for diagnosis; and 2) the existence of diverse disease types can lead to misdiagnosis when the model encounters unfamiliar disease categories. To overcome these hurdles, we present a method incorporating Universal Domain Adaptation (UniDA). By optimizing the divergence of samples in the source domain, our method detects noise. Furthermore, to identify categories that are not present in the source domain, we optimize the divergence of unlabeled samples in the target domain. Experimental validation on two gastrointestinal datasets demonstrates that our method surpasses current state-of-the-art domain adaptation techniques in identifying unknown disease classes. Notably, the proposed method is the first work in medical image diagnosis aimed at identifying unknown disease categories.
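The abstract does not give implementation details, but a common way to flag unfamiliar categories in universal domain adaptation is to threshold the prediction entropy of unlabeled target samples. The sketch below shows only that generic step with a hypothetical threshold, not the paper's full UniDA method.

```python
import torch
import torch.nn.functional as F

def flag_unknown(logits: torch.Tensor, entropy_threshold: float = 1.0) -> torch.Tensor:
    """Boolean mask marking target samples with high prediction entropy
    (a standard proxy for 'unknown' categories in open-set / universal DA)."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1)
    return entropy > entropy_threshold

# Dummy classifier outputs for 6 unlabeled target-domain endoscopy images, 4 known classes.
logits = torch.randn(6, 4)
print(flag_unknown(logits, entropy_threshold=1.2))
```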


Subjects
Colonic Neoplasms; Diagnostic Imaging; Humans; Radiography; Diagnostic Errors/prevention & control
5.
Ultrasound Med Biol ; 49(10): 2291-2301, 2023 10.
Article in English | MEDLINE | ID: mdl-37532633

ABSTRACT

OBJECTIVE: The utilization of computer-aided diagnosis (CAD) in breast ultrasound image classification has been limited by small sample sizes and domain shift. Current ultrasound classification methods perform inadequately when exposed to cross-domain scenarios, as they struggle with data sets from unobserved domains. In the medical field, there are situations in which all images must share the same networks as they capture the same symptom of the same participant, implying that they share identical structural content. Nevertheless, most domain adaptation methods are not suitable for medical images as they overlook the common features among the images. METHODS: To overcome these challenges, we propose a novel diverse-domain 2-D feature selection network (FSN), which uses the similarities among medical images and extracts features with a reconstruction network with shared weights. Additionally, it penalizes the feature domain distance through two adversarial learning modules that align the feature space and select common features. Our experiments illustrate that the proposed method is robust and can be applied to ultrasound images of various diseases. RESULTS: Compared with the latest domain adaptive methods, 2-D FSN markedly enhances the accuracy of classification of breast, thyroid and endoscopic ultrasound images, achieving accuracies of 82.4%, 96.4% and 89.7%, respectively. Furthermore, the model was evaluated on an unsupervised domain adaptation task using ultrasound images from multiple sources and achieved an average accuracy of 77.3% across widely varying domains. CONCLUSION: In general, 2-D FSN improves the classification ability of the model on multidomain ultrasound data sets through the learning of common features and the combination of multimodule intelligence. The algorithm has good clinical guidance value.
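As a generic illustration of aligning feature spaces across ultrasound domains with adversarial learning, the sketch below implements a gradient reversal layer feeding a small domain discriminator. This is a standard DANN-style component, not the paper's exact 2-D FSN modules; the feature dimension and network sizes are assumptions.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, gradient negation in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainDiscriminator(nn.Module):
    def __init__(self, feat_dim: int = 256, n_domains: int = 2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_domains))

    def forward(self, features, lambd: float = 1.0):
        reversed_feats = GradReverse.apply(features, lambd)
        return self.net(reversed_feats)

# Features from two ultrasound domains are pushed toward indistinguishability:
feats = torch.randn(8, 256, requires_grad=True)   # dummy shared-encoder features
domain_labels = torch.randint(0, 2, (8,))          # 0 = source, 1 = target
logits = DomainDiscriminator()(feats, lambd=0.5)
loss = nn.CrossEntropyLoss()(logits, domain_labels)
loss.backward()                                    # encoder gradients arrive reversed
```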


Subjects
Neoplasms; Ultrasonography, Mammary; Female; Humans; Ultrasonography/methods; Diagnosis, Computer-Assisted/methods; Algorithms; Image Processing, Computer-Assisted/methods
6.
Comput Biol Med ; 164: 107298, 2023 09.
Article in English | MEDLINE | ID: mdl-37573722

ABSTRACT

Amid the unfolding COVID-19 pandemic, there is a critical need for rapid and accurate diagnostic methods. In this context, the field of deep learning-based medical image diagnosis has witnessed a swift evolution. However, the prevailing methodologies often rely on large amounts of labeled data and require comprehensive medical knowledge. Both of these prerequisites pose significant challenges in real clinical settings, given the high cost of data labeling and the complexity of disease representations. Addressing this gap, we propose a novel problem setting, Open-Set Single-Domain Generalization for Medical Image Diagnosis (OSSDG-MID). In OSSDG-MID, the aim is to train a model exclusively on a single source domain so that it can classify samples from the target domain accurately, designating them as 'unknown' if they do not belong to the source-domain category space. Our solution, the Multiple Cross-Matching method (MCM), enhances the identification of these 'unknown' categories by generating auxiliary samples that fall outside the category space of the source domain. Experimental evaluations on two diverse cross-domain image classification tasks demonstrate that our approach outperforms existing methodologies in both single-domain generalization and open-set image classification.
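The paper's MCM is not specified in the abstract; one commonly used way to synthesize auxiliary "unknown" samples is to mix source images from different classes. The sketch below shows that generic idea only, as an illustrative stand-in rather than the paper's method.

```python
import torch

def make_auxiliary_unknowns(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Mix pairs of source images from *different* classes; the mixtures act as
    surrogate 'unknown' samples (illustrative stand-in, not the paper's MCM)."""
    perm = torch.randperm(x.size(0))
    different = y != y[perm]                    # keep only cross-class pairs
    return alpha * x[different] + (1 - alpha) * x[perm][different]

images = torch.randn(16, 3, 224, 224)           # dummy single-source-domain batch
labels = torch.randint(0, 3, (16,))
aux = make_auxiliary_unknowns(images, labels)
print(aux.shape)                                # auxiliary samples outside the known classes
```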


Subjects
COVID-19; Humans; Pandemics
7.
Plant Cell Physiol ; 64(11): 1323-1330, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-37225398

ABSTRACT

Deep neural network (DNN) techniques, as an advanced machine learning framework, have enabled various image diagnoses in plants, often achieving better prediction performance than human experts in each specific field. Notwithstanding, in plant biology the application of DNNs is still mostly limited to rapid and effective phenotyping. The recent development of explainable CNN frameworks has allowed visualization of the features driving a convolutional neural network (CNN) prediction, which potentially contributes to the understanding of physiological mechanisms behind objective phenotypes. In this study, we propose integrating an explainable CNN with a transcriptomic approach to give a physiological interpretation of a fruit internal disorder in persimmon, rapid over-softening. We constructed CNN models to accurately predict the fate of rapid softening in persimmon cv. Soshu using only photographic images. The explainable CNN methods, Gradient-weighted Class Activation Mapping (Grad-CAM) and guided Grad-CAM, visualized specific featured regions relevant to the prediction of rapid softening, which would correspond to premonitory symptoms in a fruit. Transcriptomic analyses comparing the featured regions of the predicted rapid-softening and control fruits suggested that rapid softening is triggered by precocious ethylene signal-dependent cell wall modification, despite exhibiting no direct phenotypic changes. Further transcriptomic comparison between the featured and non-featured regions in the predicted rapid-softening fruit suggested that the premonitory symptoms reflected hypoxia and related stress signals that finally induce ethylene signals. These results provide a good example of the collaboration of image analysis and omics approaches in plant physiology, uncovering a novel aspect of fruit premonitory reactions in the rapid-softening fate.
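Grad-CAM itself is a well-documented technique; below is a minimal PyTorch sketch of the mechanism (a forward hook on the last convolutional block plus gradient-weighted channel averaging), using a torchvision ResNet-18 as a stand-in for the paper's fruit-image CNN.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()        # stand-in CNN; the study trained its own model
activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output
    # Tensor hook: stores the gradient flowing back into this feature map.
    output.register_hook(lambda grad: gradients.update(value=grad))

model.layer4.register_forward_hook(save_activation)   # last convolutional block

image = torch.randn(1, 3, 224, 224)                    # dummy fruit photo
logits = model(image)
score = logits[0, logits.argmax()]                     # predicted-class score
score.backward()

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # channel importance (GAP of grads)
cam = F.relu((weights * activations["value"]).sum(dim=1))     # Grad-CAM map, shape (1, 7, 7)
cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:],
                    mode="bilinear", align_corners=False)
print(cam.shape)                                        # (1, 1, 224, 224) heat map
```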


Subjects
Diospyros; Fruit; Humans; Diospyros/genetics; Intuition; Ethylenes/pharmacology; Gene Expression Profiling
8.
Biomedicines ; 11(3)2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36979738

ABSTRACT

Pectus excavatum (PE), a chest-wall deformity that can compromise cardiopulmonary function, cannot be detected by a radiologist through frontal chest radiography without a lateral view or chest computed tomography. This study aims to train a convolutional neural network (CNN), a deep learning architecture with powerful image processing ability, for PE screening through frontal chest radiography, the most common imaging test in current hospital practice. Posteroanterior-view chest images of PE and normal patients were collected from our hospital to build the database. Among them, 80% were used as the training set for the established CNN algorithm, Xception, whereas the remaining 20% formed the test set for model performance evaluation. The area under the receiver operating characteristic curve of our diagnostic artificial intelligence model ranged between 0.976 and 1. The test accuracy of the model reached 0.989, and the sensitivity and specificity were 96.66% and 96.64%, respectively. Our study is the first to prove that a CNN can be trained as a diagnostic tool for PE using frontal chest X-rays, a task that is not possible for the human eye. It offers a convenient way to screen potential candidates for the surgical repair of PE, primarily using already available image examinations.
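A minimal Keras sketch of the general training setup described above: a pretrained Xception base with a small binary head and an 80/20 split. The image data, sizes, and hyperparameters are placeholders, not the study's configuration.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Placeholder data: replace with real posteroanterior chest radiographs and PE labels.
images = np.random.rand(100, 299, 299, 3).astype("float32")
labels = np.random.randint(0, 2, size=100)

x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, stratify=labels, random_state=42)  # 80/20 split

base = tf.keras.applications.Xception(weights=None, include_top=False,
                                      input_shape=(299, 299, 3))  # weights="imagenet" in practice
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)           # PE vs. normal
model = tf.keras.Model(base.input, out)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=1, batch_size=8)
```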

9.
Article in English | MEDLINE | ID: mdl-36673913

ABSTRACT

Since the start of 2020, the outbreak of the coronavirus disease (COVID-19) has been a global public health emergency, and it has caused unprecedented economic and social disaster. In order to improve the diagnosis efficiency for COVID-19 patients, a number of researchers have conducted extensive studies on applying artificial intelligence techniques to the analysis of COVID-19-related medical images. The automatic segmentation of lesions from computed tomography (CT) images using deep learning provides an important basis for the quantification and diagnosis of COVID-19 cases. For a deep learning-based CT diagnostic method, a set of accurate pixel-level labels is essential for the training process of a model. However, the translucent ground-glass area of the lesion usually leads to mislabeling during manual labeling, which weakens the accuracy of the model. In this work, we propose a method for correcting rough labels; that is, hierarchizing these rough labels into precise ones by analyzing the pixel distribution of the infected and normal areas in the lung. The proposed method corrects the incorrectly labeled pixels and enables the deep learning model to learn the degree of infection of each infected pixel. On this basis, an aiding system (named DLShelper) for COVID-19 CT image diagnosis using the hierarchical labels is also proposed. DLShelper targets lesion segmentation from CT images as well as severity grading, and assists medical staff in efficient diagnosis by providing rich auxiliary diagnostic information (including the severity grade, the proportion of the lesion and a visualization of the lesion area). A comprehensive experiment based on a public COVID-19 CT image dataset shows that DLShelper significantly improves the accuracy of segmentation of the lesion areas and also achieves promising accuracy for the severity grading task.
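The sketch below illustrates the general idea of turning a rough binary lesion mask into hierarchical labels by analyzing pixel intensities. The Hounsfield-unit cut-offs and level definitions are illustrative assumptions, not the paper's actual rule.

```python
import numpy as np

def hierarchize_labels(ct_slice: np.ndarray, rough_mask: np.ndarray) -> np.ndarray:
    """Map a rough binary lesion mask to levels 0-3 using illustrative HU cut-offs:
    0 = background, 1 = ground-glass-like, 2 = intermediate, 3 = consolidation-like."""
    levels = np.zeros_like(rough_mask, dtype=np.uint8)
    inside = rough_mask.astype(bool)
    levels[inside & (ct_slice < -500)] = 1
    levels[inside & (ct_slice >= -500) & (ct_slice < -100)] = 2
    levels[inside & (ct_slice >= -100)] = 3
    return levels

ct = np.random.uniform(-1000, 100, size=(64, 64))   # synthetic HU values
mask = np.zeros((64, 64), dtype=np.uint8)
mask[20:40, 20:40] = 1                               # rough (possibly noisy) lesion label
print(np.bincount(hierarchize_labels(ct, mask).ravel()))
```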


Subjects
Artificial Intelligence; COVID-19; Humans; COVID-19/diagnostic imaging; SARS-CoV-2; Public Health; Tomography, X-Ray Computed/methods; COVID-19 Testing
10.
Mob Netw Appl ; 28(3): 873-888, 2023.
Article in English | MEDLINE | ID: mdl-38737734

ABSTRACT

In the global epidemic, distance learning occupies an increasingly important place in teaching and learning because of its great potential. This paper proposes a web-based app that includes a proposed 8-layered lightweight, customized convolutional neural network (LCCNN) for COVID-19 recognition. Five-channel data augmentation is proposed and used to help the model avoid overfitting. The LCCNN achieves an accuracy of 91.78%, which is higher than the other eight state-of-the-art methods. The results show that this web-based app provides a valuable diagnostic perspective on the patients and is an excellent way to facilitate medical education. Our LCCNN model is explainable for both radiologists and distance education users. Heat maps are generated where the lesions are clearly spotted. The LCCNN can detect from CT images the presence of lesions caused by COVID-19. This web-based app has a clear and simple interface, which is easy to use. With the help of this app, teachers can provide distance education and guide students clearly to understand the damage caused by COVID-19, which can increase interaction with students and stimulate their interest in learning.

11.
Diagnostics (Basel) ; 12(12)2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36552954

ABSTRACT

This investigation aimed to explore deep learning (DL) models' potential for diagnosing Pseudomonas keratitis using external eye images. In the retrospective research, the images of bacterial keratitis (BK, n = 929), classified as Pseudomonas (n = 618) and non-Pseudomonas (n = 311) keratitis, were collected. Eight DL algorithms, including ResNet50, DenseNet121, ResNeXt50, SE-ResNet50, and EfficientNets B0 to B3, were adopted as backbone models to train and obtain the best ensemble 2-, 3-, 4-, and 5-DL models. Five-fold cross-validation was used to determine the ability of single and ensemble models to diagnose Pseudomonas keratitis. The EfficientNet B2 model had the highest accuracy (71.2%) of the eight single-DL models, while the best ensemble 4-DL model showed the highest accuracy (72.1%) among the ensemble models. However, no statistical difference was shown in the area under the receiver operating characteristic curve and diagnostic accuracy among these single-DL models and among the four best ensemble models. As a proof of concept, the DL approach, via external eye photos, could assist in identifying Pseudomonas keratitis from BK patients. All the best ensemble models can enhance the performance of constituent DL models in diagnosing Pseudomonas keratitis, but the enhancement effect appears to be limited.
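A minimal sketch of the two generic ingredients mentioned above, five-fold cross-validation splits and softmax-averaging ensembles. The models here are tiny placeholder networks rather than the ResNet/EfficientNet backbones used in the study.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold

X = torch.randn(50, 16)                     # placeholder image features
y = np.random.randint(0, 2, size=50)        # Pseudomonas vs. non-Pseudomonas labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(X.numpy(), y)):
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val samples")

# Ensemble by averaging softmax probabilities of independently trained models.
models = [nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2)) for _ in range(4)]
with torch.no_grad():
    probs = torch.stack([torch.softmax(m(X), dim=1) for m in models]).mean(dim=0)
ensemble_pred = probs.argmax(dim=1)
print(ensemble_pred[:10])
```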

12.
Sensors (Basel) ; 22(24)2022 Dec 07.
Article in English | MEDLINE | ID: mdl-36559970

ABSTRACT

Artificial intelligence plays an essential role in diagnosing lung cancer, which is notoriously difficult to diagnose until it has progressed to a late stage, making it a leading cause of cancer-related mortality. Lung cancer is fatal if not treated early, which makes this a significant issue. Initial diagnosis of malignant nodules is often made using chest radiography (X-ray) and computed tomography (CT) scans; nevertheless, the possibility of benign nodules leads to incorrect decisions, since in their early phases benign and malignant nodules appear very similar. Additionally, radiologists have a hard time viewing and categorizing lung abnormalities, so lung cancer screening by radiologists is often performed with the help of computer-aided diagnostic technologies. Computer scientists have presented many methods for identifying lung cancer in recent years, but low-quality images compromise the segmentation process, rendering traditional lung cancer prediction algorithms inaccurate. This article proposes a highly effective strategy for identifying and categorizing lung cancer. Noise in the images was reduced using a weighted filter, and an improved Gray Wolf Optimization method was applied before segmentation with watershed modification and dilation operations. We used InceptionNet-V3 to classify lung cancer into three groups, and it performed well compared with prior studies: 98.96% accuracy, 94.74% specificity and 100% sensitivity.
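To illustrate the classical denoise-then-watershed step referred to above (leaving out the Gray Wolf Optimization and the InceptionNet-V3 classifier), here is a standard scikit-image marker-based watershed sketch on a synthetic binary mask standing in for a thresholded CT slice.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic noisy binary image standing in for a thresholded CT slice.
rng = np.random.default_rng(1)
image = np.zeros((80, 80), dtype=float)
image[20:45, 20:45] = 1.0
image[40:70, 45:75] = 1.0
noisy = np.clip(image + rng.normal(0, 0.3, image.shape), 0, 1)

denoised = ndi.median_filter(noisy, size=3)          # simple denoising (weighted-filter stand-in)
binary = denoised > 0.5

distance = ndi.distance_transform_edt(binary)        # distance map used to place markers
coords = peak_local_max(distance, footprint=np.ones((7, 7)), labels=binary)
markers = np.zeros(distance.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

labels = watershed(-distance, markers, mask=binary)  # marker-based watershed segmentation
print(f"{labels.max()} candidate regions segmented")
```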


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Artificial Intelligence; Solitary Pulmonary Nodule/diagnostic imaging; Tomography, X-Ray Computed/methods; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Algorithms; Diagnosis, Computer-Assisted/methods; Lung/pathology; Radiographic Image Interpretation, Computer-Assisted/methods; Sensitivity and Specificity
13.
Front Surg ; 9: 1017603, 2022.
Article in English | MEDLINE | ID: mdl-36325041

ABSTRACT

Background: Adrenal tumours are common in urology and endocrinology, and their diagnosis depends mainly on imaging. However, misdiagnosis can still occur for adrenal space-occupying lesions without specific manifestations or abnormal biochemical indexes. Methods: We report the case of a 55-year-old patient with a soft-tissue mass in the left adrenal region and no specific manifestations or abnormal biochemical indexes. The patient had undergone open splenectomy 20 years earlier for splenic rupture caused by traffic-accident trauma and had a 10-year history of hypertension. Because of the uncertain nature of the mass, surgical treatment was recommended. Results: The surgeon removed the mass in the left adrenal region; during surgery, an adrenal origin was excluded. On histological examination, splenic corpuscles and splenic medullary structure were seen under the microscope, and an accessory spleen was diagnosed. Conclusions: An accessory spleen is rarely located in the adrenal region and can easily be misdiagnosed as an adrenal tumour. When imaging examinations show abnormal adrenal space-occupying lesions, non-adrenal diseases should be considered, and different imaging techniques should be combined for analysis to avoid misdiagnosis leading to unnecessary surgery.

14.
Diagnostics (Basel) ; 12(10)2022 Oct 13.
Article in English | MEDLINE | ID: mdl-36292167

ABSTRACT

Nasopharyngeal carcinoma (NPC) is one of the most common head and neck cancers, and early diagnosis plays a critical role in its treatment. To aid diagnosis, deep learning methods can provide interpretable clues for identifying NPC from magnetic resonance images (MRI). To identify the optimal models, we compared the discrimination performance of hierarchical and simple layered convolutional neural networks (CNN). Retrospectively, we collected the MRI images of patients and manually built a tailored NPC image dataset. We examined the performance of representative CNN models, including a shallow CNN, ResNet50, ResNet101 and EfficientNet-B7. With fine-tuning, the shallow CNN, ResNet50, ResNet101 and EfficientNet-B7 achieved precisions of 72.2%, 94.4%, 92.6% and 88.4%, respectively, demonstrating the superiority of deep hierarchical neural networks. Among the examined models, ResNet50 with pre-trained weights demonstrated the best classification performance, with an accuracy, precision and F1-score of 0.93, 0.94 and 0.93, respectively. The fine-tuned ResNet50 achieved the highest prediction performance and can be used as a potential tool for aiding the diagnosis of NPC tumors.
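A minimal PyTorch sketch of the fine-tuning setup described above: load an ImageNet-pretrained ResNet-50, replace its final fully connected layer with a binary NPC-vs.-normal output, and run one training step on placeholder data. The class count and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)   # pre-trained weights (downloads once)
model.fc = nn.Linear(model.fc.in_features, 2)              # NPC vs. normal head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One dummy fine-tuning step on placeholder MRI-like tensors.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy training loss: {loss.item():.4f}")
```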

15.
Comput Biol Med ; 147: 105763, 2022 08.
Article in English | MEDLINE | ID: mdl-35777086

ABSTRACT

Conventional-size object detection has been extensively studied, whereas research on ultrasmall object detection is rare due to the lack of datasets. Here, considering that the stapes in the ear is the smallest bone in the human body, we have collected the largest stapedial otosclerosis detection dataset, comprising 633 stapedial otosclerosis patients and 269 normal cases, to promote this direction. Nevertheless, noisy classification labels in our dataset are inevitable due to various subjective and objective factors, a situation that prevails in many fields. In this paper, we propose a novel and general noise-tolerant loss function named Adaptive Cross Entropy (ACE), which needs no fine-tuning of hyperparameters when training with noisy labels. We provide both theoretical and empirical analyses of the proposed ACE loss and demonstrate its effectiveness on multiple public datasets. Besides, we find high-resolution representations crucial for ultrasmall object detection and present an auxiliary backbone called W-Net to address this accordingly. Extensive experiments demonstrate that the proposed ACE loss boosts diagnosis performance under the noisy-label setting by a large margin. Furthermore, our W-Net can help extract sufficient high-resolution representations specialized for ultrasmall objects and achieve even better results. Hopefully, our work can provide more clues for future research on ultrasmall object detection and learning with noisy labels.
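The paper's Adaptive Cross Entropy is not specified in the abstract, so the sketch below instead shows a well-known noise-tolerant alternative, generalized cross entropy (GCE), L = (1 - p_y^q)/q, purely to illustrate what a noise-robust classification loss looks like in PyTorch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneralizedCrossEntropy(nn.Module):
    """GCE loss (Zhang & Sabuncu, 2018): interpolates between CE (q -> 0) and MAE (q = 1),
    which makes it more tolerant to noisy classification labels than plain cross entropy."""
    def __init__(self, q: float = 0.7):
        super().__init__()
        self.q = q

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        probs = F.softmax(logits, dim=1)
        p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-7)
        return ((1.0 - p_true.pow(self.q)) / self.q).mean()

criterion = GeneralizedCrossEntropy(q=0.7)
logits = torch.randn(8, 2, requires_grad=True)     # otosclerosis vs. normal logits
noisy_labels = torch.randint(0, 2, (8,))
loss = criterion(logits, noisy_labels)
loss.backward()
print(loss.item())
```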


Subjects
Otosclerosis; Entropy; Humans; Stapes; Tomography, X-Ray Computed/methods
16.
Multimed Tools Appl ; 81(12): 16411-16439, 2022.
Article in English | MEDLINE | ID: mdl-35261555

ABSTRACT

Within a brief period, the recent coronavirus (COVID-19) has already infected large populations worldwide. Diagnosing an infected individual requires a Real-Time Polymerase Chain Reaction (RT-PCR) test, which can be expensive and limited in most developing countries, making them rely on alternatives like Chest X-Rays (CXR) or Computerized Tomography (CT) scans. However, results from these imaging approaches caused confusion for medical experts due to their similarities with other diseases such as pneumonia. Other solutions based on Deep Convolutional Neural Networks (DCNN) have recently improved and automated the diagnosis of COVID-19 from CXRs and CT scans. However, upon examination, most proposed studies focused primarily on accuracy rather than deployment and reproduction, which may make them difficult to reproduce and implement in locations with inadequate computing resources. Therefore, instead of focusing only on accuracy, this work investigated the effects of parameter reduction through a proposed truncation method and analyzed its effects. Various DCNNs had their architectures truncated to retain only their initial core block, reducing their parameter counts to <1 M. Once trained and validated, findings showed that a DCNN with robust layer aggregations, such as the InceptionResNetV2, was less vulnerable to the adverse effects of the proposed truncation. The results also showed that from its full-length size of 55 M parameters with 98.67% accuracy, the proposed truncation reduced its parameters to only 441 K while still attaining an accuracy of 97.41%, outperforming other studies in terms of its size-to-performance ratio.
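A rough PyTorch sketch of the parameter-reduction idea described above: keep only the early layers ("initial core block") of a backbone and attach a tiny classification head. The cut point, the choice of ResNet-18 (torchvision has no InceptionResNetV2), and the head size are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

full = resnet18(weights=None)
# Keep only the stem and the first residual stage as the "truncated" feature extractor.
truncated = nn.Sequential(*list(full.children())[:5])     # conv1, bn1, relu, maxpool, layer1

head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                     nn.Linear(64, 2))                     # COVID-19 vs. normal (layer1 has 64 channels)
model = nn.Sequential(truncated, head)

n_params = sum(p.numel() for p in model.parameters())
print(f"parameters after truncation: {n_params / 1e3:.0f} K "
      f"(vs. {sum(p.numel() for p in full.parameters()) / 1e6:.1f} M for the full model)")
print(model(torch.randn(1, 3, 224, 224)).shape)            # torch.Size([1, 2])
```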

17.
Mol Clin Oncol ; 16(2): 27, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34987798

ABSTRACT

The present study created an artificial intelligence (AI)-automated diagnostic system for uterine cervical lesions and assessed its performance for AI diagnostic imaging of pathological cervical lesions. A total of 463 colposcopic images were analyzed, and the traditional colposcopy diagnoses were compared with those obtained by AI image diagnosis. Next, 100 images were presented to a panel of 32 gynecologists who independently examined each image in a blinded fashion and classified it into one of four tumor categories. The 32 gynecologists then revisited their diagnosis for each image after being informed of the AI diagnosis. The present study assessed changes in physician diagnoses and the accuracy of AI-image-assisted diagnosis (AISD). The accuracy of AI was 57.8% for normal, 35.4% for cervical intraepithelial neoplasia (CIN)1, 40.5% for CIN2-3 and 44.2% for invasive cancer. The accuracy of gynecologist diagnoses from cervical pathological images, before knowing the AI image diagnosis, was 54.4% for CIN2-3 and 38.9% for invasive cancer. After learning of the AISD, their accuracy improved to 58.0% for CIN2-3 and 48.5% for invasive cancer. AI-assisted image diagnosis thus improved gynecologist diagnostic accuracy significantly (P<0.01) for invasive cancer and tended to improve their accuracy for CIN2-3 (P=0.14).
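To illustrate how such a paired before/after comparison can be tested for significance, here is a small sketch of an exact McNemar test on discordant pairs using SciPy; the counts are made-up placeholders, not the study's data.

```python
from scipy.stats import binomtest

# Hypothetical discordant pairs across the 100 rated images:
# b = images a gynecologist got right only BEFORE seeing the AI suggestion,
# c = images they got right only AFTER seeing it.
b, c = 4, 17

# Exact McNemar test: under H0 the discordant pairs split 50/50.
result = binomtest(min(b, c), n=b + c, p=0.5)
print(f"discordant pairs: b={b}, c={c}, two-sided p = {result.pvalue:.4f}")
```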

18.
J Allergy Clin Immunol Pract ; 10(1): 277-283, 2022 01.
Article in English | MEDLINE | ID: mdl-34547536

ABSTRACT

BACKGROUND: Stevens-Johnson syndrome (SJS)/toxic epidermal necrolysis (TEN) is a life-threatening cutaneous adverse drug reaction (cADR). Distinguishing SJS/TEN from nonsevere cADRs is difficult, especially in the early stages of the disease. OBJECTIVE: To overcome this limitation, we developed a computer-aided diagnosis system for the early diagnosis of SJS/TEN, powered by a deep convolutional neural network (DCNN). METHODS: We trained a DCNN using a dataset of 26,661 individual lesion images obtained from 123 patients with a diagnosis of SJS/TEN or nonsevere cADRs. The DCNN's accuracy of classification was compared with that of 10 board-certified dermatologists and 24 trainee dermatologists. RESULTS: The DCNN achieved 84.6% sensitivity (95% confidence interval [CI], 80.6-88.6), whereas the sensitivities of the board-certified dermatologists and trainee dermatologists were 31.3% (95% CI, 20.9-41.8; P < .0001) and 27.8% (95% CI, 22.6-32.5; P < .0001), respectively. The negative predictive value was 94.6% (95% CI, 93.2-96.0) for the DCNN, 68.1% (95% CI, 66.1-70.0; P < .0001) for the board-certified dermatologists, and 67.4% (95% CI, 66.1-68.7; P < .0001) for the trainee dermatologists. The area under the receiver operating characteristic curve of the DCNN for a SJS/TEN diagnosis was 0.873, which was significantly higher than that for all board-certified dermatologists and trainee dermatologists. CONCLUSIONS: We developed a DCNN to classify SJS/TEN and nonsevere cADRs based on individual lesion images of erythema. The DCNN performed significantly better than did dermatologists in classifying SJS/TEN from skin images.
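As a small companion to the metrics reported above, the sketch below computes sensitivity, negative predictive value, and ROC AUC from predicted probabilities with scikit-learn; the arrays are dummy stand-ins for the lesion-image predictions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])          # 1 = SJS/TEN, 0 = nonsevere cADR
y_prob = np.array([0.9, 0.8, 0.4, 0.2, 0.1, 0.3, 0.7, 0.35, 0.15, 0.6])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
npv = tn / (tn + fn)                     # negative predictive value
auc = roc_auc_score(y_true, y_prob)
print(f"sensitivity={sensitivity:.3f}, NPV={npv:.3f}, AUC={auc:.3f}")
```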


Subjects
Stevens-Johnson Syndrome; Early Diagnosis; Humans; Neural Networks, Computer; Skin; Stevens-Johnson Syndrome/diagnosis
19.
J Clin Nurs ; 31(23-24): 3550-3559, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34935230

ABSTRACT

AIMS: The purpose of this study was to construct a model for oral assessment using deep learning image recognition technology and to verify its accuracy. BACKGROUND: The effects of oral care on older people are significant, and the Oral Assessment Guide has been used internationally as an effective oral assessment tool in clinical practice. However, additional training, education, development of user manuals and continuous support from a dental hygienist are needed to improve the inter-rater reliability of the Oral Assessment Guide. DESIGN: A retrospective observational study. METHODS: A total of 3,201 oral images of 114 older people aged >65 years were collected from five dental-related facilities. These images were divided into six categories (lips, tongue, saliva, mucosa, gingiva, and teeth or dentures), the items among the Oral Assessment Guide's eight components that can be evaluated from images, and each item was classified into a rating of 1, 2 or 3. A convolutional neural network, a deep learning method used for image recognition, was used to construct the image recognition model. The study methods comply with the STROBE checklist. RESULTS: We constructed models with a classification accuracy of 98.8% for lips, 94.3% for tongue, 92.8% for saliva, 78.6% for mucous membranes, 93.0% for gingiva and 93.6% for teeth or dentures. CONCLUSIONS: Highly accurate diagnostic imaging models using convolutional neural networks were constructed and validated for six items of the Oral Assessment Guide. In particular, for the five items of lips, tongue, saliva, gingiva, and teeth or dentures, models with an accuracy of over 90% were obtained. RELEVANCE TO CLINICAL PRACTICE: The model built in this study has the potential to improve the reproducibility and reliability of ratings, shorten the time for assessment, support collaboration with dental professionals and serve as an educational tool.


Subjects
Checklist; Neural Networks, Computer; Humans; Aged; Reproducibility of Results; Retrospective Studies
20.
J Phys Ther Sci ; 33(11): 845-849, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34776620

ABSTRACT

[Purpose] This study aimed to observe the thickness of the transverse abdominis muscle in different contraction states using ultrasound, and to investigate the diagnostic capability of transverse abdominal muscle thickness for nonspecific lower back pain. [Participants and Methods] This study included 108 healthy adults (30-50 years old), consisting of 33 participants with low back pain (13 males, 20 females; defined as those who had experienced low back pain for more than six months) and 75 participants without low back pain (22 males, 53 females). The body mass index, body trunk muscle mass, and transverse abdominal muscle thickness, measured at a static state, during the end of inspiration, end of expiration, transverse abdominis contraction, and simultaneous pelvic floor and transverse abdominis muscle contraction, were measured. [Results] Chronic low back pain was correlated with the transverse abdominis muscle thickness during simultaneous transverse abdominis and pelvic floor muscle contraction. [Conclusion] The thickness of the transverse abdominis muscle during simultaneous transverse abdominis and pelvic floor muscle contraction was a viable diagnostic index for evaluating the degree of chronic lower back pain.
