Results 1 - 16 of 16
1.
BMC Infect Dis ; 24(1): 355, 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38539142

ABSTRACT

BACKGROUND: There are abundant studies on COVID-19 but few on its impact on hepatitis E. We aimed to assess the effect of the COVID-19 countermeasures on the pattern of hepatitis E incidence and to explore the application of time series models in analyzing this pattern. METHODS: Our pivotal idea was to fit a pre-COVID-19 model to data from before the COVID-19 outbreak and use the deviation between forecast and actual values to reflect the effect of the COVID-19 countermeasures. We analyzed the pattern of hepatitis E incidence in China from 2013 to 2018, evaluated the fitting and forecasting capability of three methods on pre-outbreak data, and then employed these methods to construct pre-COVID-19 incidence models and compare post-COVID-19 forecasts with observed incidence. RESULTS: Before the COVID-19 outbreak, the Chinese hepatitis E incidence pattern was overall stationary and seasonal, with an annual peak in March, a trough in October, and higher levels in winter and spring than in summer and autumn. Post-COVID-19 forecasts from the pre-COVID-19 models differed markedly from observed values in some periods but agreed closely in others. CONCLUSIONS: Since the COVID-19 pandemic, the Chinese hepatitis E incidence pattern has changed substantially, and the incidence has decreased greatly. The effect of the COVID-19 countermeasures on the pattern of hepatitis E incidence was temporary, and the incidence is anticipated to gradually revert to its pre-COVID-19 pattern.
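A minimal sketch of the pre/post-comparison idea described in this abstract: fit a model to pre-outbreak monthly incidence and read the gap between its forecasts and post-outbreak observations as the countermeasure effect. The study does not name its three methods, so a seasonal ARIMA stands in here, and every incidence value is hypothetical.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly incidence (per 100,000), Jan 2013 - Dec 2019: a winter/spring
# peak and an autumn trough, plus a little noise so the fit is not degenerate.
rng = np.random.default_rng(0)
seasonal = [0.28, 0.30, 0.33, 0.29, 0.27, 0.24, 0.22, 0.21, 0.20, 0.19, 0.21, 0.25]
pre_covid = pd.Series(
    np.tile(seasonal, 7) + rng.normal(0, 0.01, 84),
    index=pd.date_range("2013-01", periods=84, freq="MS"),
)

# Fit a seasonal ARIMA with a 12-month period on the pre-outbreak data only.
fitted = SARIMAX(pre_covid, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)

# Forecast 2020 and compare with (hypothetical) observed post-outbreak incidence.
forecast_2020 = fitted.get_forecast(steps=12).predicted_mean
observed_2020 = pd.Series(
    [0.26, 0.15, 0.10, 0.12, 0.14, 0.15, 0.16, 0.16, 0.15, 0.15, 0.17, 0.20],
    index=pd.date_range("2020-01", periods=12, freq="MS"),
)
deviation = forecast_2020 - observed_2020  # positive values = fewer cases than expected
print(deviation.round(3))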


Subject(s)
COVID-19 , Hepatitis E , Humans , Hepatitis E/epidemiology , Hepatitis E/prevention & control , COVID-19/epidemiology , COVID-19/prevention & control , Pandemics/prevention & control , Incidence , Time Factors , China/epidemiology , Forecasting
2.
J Dent ; 141: 104829, 2024 02.
Article in English | MEDLINE | ID: mdl-38163456

ABSTRACT

OBJECTIVES: To assess the performance, time-efficiency, and consistency of a convolutional neural network (CNN) based automated approach for integrated segmentation of craniomaxillofacial structures compared with semi-automated method for creating a virtual patient using cone beam computed tomography (CBCT) scans. METHODS: Thirty CBCT scans were selected. Six craniomaxillofacial structures, encompassing the maxillofacial complex bones, maxillary sinus, dentition, mandible, mandibular canal, and pharyngeal airway space, were segmented on these scans using semi-automated and composite of previously validated CNN-based automated segmentation techniques for individual structures. A qualitative assessment of the automated segmentation revealed the need for minor refinements, which were manually corrected. These refined segmentations served as a reference for comparing semi-automated and automated integrated segmentations. RESULTS: The majority of minor adjustments with the automated approach involved under-segmentation of sinus mucosal thickening and regions with reduced bone thickness within the maxillofacial complex. The automated and the semi-automated approaches required an average time of 1.1 min and 48.4 min, respectively. The automated method demonstrated a greater degree of similarity (99.6 %) to the reference than the semi-automated approach (88.3 %). The standard deviation values for all metrics with the automated approach were low, indicating a high consistency. CONCLUSIONS: The CNN-driven integrated segmentation approach proved to be accurate, time-efficient, and consistent for creating a CBCT-derived virtual patient through simultaneous segmentation of craniomaxillofacial structures. CLINICAL RELEVANCE: The creation of a virtual orofacial patient using an automated approach could potentially transform personalized digital workflows. This advancement could be particularly beneficial for treatment planning in a variety of dental and maxillofacial specialties.
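The degree-of-similarity figures quoted above are consistent with a voxel-wise overlap metric such as the Dice coefficient computed against the refined reference. A minimal sketch, assuming boolean voxel masks; the arrays below are random placeholders rather than CBCT segmentations.

import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean voxel masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
reference = rng.random((64, 64, 64)) > 0.5   # stand-in for the refined reference mask
automated = reference.copy()
automated[:2] = ~automated[:2]               # simulate a small region of disagreement
print(f"Dice vs. reference: {dice_coefficient(automated, reference):.3f}")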


Subject(s)
Artificial Intelligence , Spiral Cone-Beam Computed Tomography , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Cone-Beam Computed Tomography/methods
3.
Dent Med Probl ; 61(1): 121-128, 2024.
Article in English | MEDLINE | ID: mdl-37098828

ABSTRACT

One potential application of neural networks (NNs) is the early-stage detection of oral cancer. This systematic review aimed to determine the level of evidence on the sensitivity and specificity of NNs for the detection of oral cancer, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and Cochrane guidelines. Literature sources included PubMed, ClinicalTrials, Scopus, Google Scholar, and Web of Science. In addition, the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool was used to assess the risk of bias and the quality of the studies. Only 9 studies fully met the eligibility criteria. In most studies, NNs showed accuracy greater than 85%, though 100% of the studies presented a high risk of bias and 33% showed high applicability concerns. Nonetheless, the included studies demonstrated that NNs were useful in the detection of oral cancer. However, studies of higher quality, with adequate methodology, a low risk of bias, and no applicability concerns are required so that more robust conclusions can be reached.
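For reference, the two quantities this review pools are simple functions of a 2 x 2 confusion matrix; a minimal sketch with invented counts, not figures from any of the nine included studies.

def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)  # proportion of oral cancer cases the network flags
    specificity = tn / (tn + fp)  # proportion of healthy cases the network clears
    return sensitivity, specificity

sens, spec = sensitivity_specificity(tp=90, fn=10, tn=85, fp=15)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")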


Subject(s)
Mouth Neoplasms , Neural Networks, Computer , Humans , Sensitivity and Specificity , Mouth Neoplasms/diagnosis
4.
BMC Neurol ; 23(1): 358, 2023 Oct 05.
Article in English | MEDLINE | ID: mdl-37798685

ABSTRACT

BACKGROUND: The diagnosis of Parkinson's disease (PD) and evaluation of its symptoms require in-person clinical examination. Remote evaluation of PD symptoms is desirable, especially during a pandemic such as the coronavirus disease 2019 pandemic. One potential method to remotely evaluate PD motor impairments is video-based analysis. In this study, we aimed to assess the feasibility of predicting the Unified Parkinson's Disease Rating Scale (UPDRS) score from gait videos using a convolutional neural network (CNN) model. METHODS: We retrospectively obtained 737 consecutive gait videos of 74 patients with PD and their corresponding neurologist-rated UPDRS scores. We utilized a CNN model for predicting the total UPDRS part III score and four subscores of axial symptoms (items 27, 28, 29, and 30), bradykinesia (items 23, 24, 25, 26, and 31), rigidity (item 22) and tremor (items 20 and 21). We trained the model on 80% of the gait videos and used 10% of the videos as a validation dataset. We evaluated the predictive performance of the trained model by comparing the model-predicted score with the neurologist-rated score for the remaining 10% of videos (test dataset). We calculated the coefficient of determination (R2) between those scores to evaluate the model's goodness of fit. RESULTS: In the test dataset, the R2 values between the model-predicted and neurologist-rated values for the total UPDRS part III score and subscores of axial symptoms, bradykinesia, rigidity, and tremor were 0.59, 0.77, 0.56, 0.46, and 0.0, respectively. The performance was relatively low for videos from patients with severe symptoms. CONCLUSIONS: Despite the low predictive performance of the model for the total UPDRS part III score, it demonstrated relatively high performance in predicting subscores of axial symptoms. The model approximately predicted the total UPDRS part III scores of patients with moderate symptoms, but the performance was low for patients with severe symptoms owing to limited data. A larger dataset is needed to improve the model's performance in clinical settings.
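A minimal sketch of the evaluation step: compare model-predicted UPDRS part III scores with the neurologist ratings on the held-out videos using the coefficient of determination. The scores below are invented, not taken from the study's test dataset.

from sklearn.metrics import r2_score

# Hypothetical held-out gait videos: neurologist-rated vs. model-predicted scores.
neurologist_scores = [12, 25, 8, 33, 19, 41, 15, 27]
model_predictions = [14, 22, 9, 30, 21, 35, 13, 29]
print(f"R2 = {r2_score(neurologist_scores, model_predictions):.2f}")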


Subject(s)
COVID-19 , Parkinson Disease , Humans , Tremor/diagnosis , Retrospective Studies , Hypokinesia , Parkinson Disease/diagnosis , Neurologic Examination/methods , Mental Status and Dementia Tests , Gait
5.
BMC Pregnancy Childbirth ; 23(1): 560, 2023 Aug 02.
Article in English | MEDLINE | ID: mdl-37533038

ABSTRACT

BACKGROUND: Improving the accuracy of estimated fetal weight (EFW) calculation can contribute to decision-making for obstetricians and decrease perinatal complications. This study aimed to develop a deep neural network (DNN) model for EFW based on obstetric electronic health records. METHODS: This study retrospectively analyzed the electronic health records of pregnant women with live birth deliveries at the obstetrics department of International Peace Maternity & Child Health Hospital between January 2016 and December 2018. The DNN model was compared against Hadlock's formula and multiple linear regression. RESULTS: A total of 34,824 live births (23,922 primiparas) from 49,896 pregnant women were analyzed. The root-mean-square error of the DNN model was 189.64 g (95% CI 187.95 g-191.16 g), and the mean absolute percentage error was 5.79% (95% CI: 5.70%-5.81%), significantly lower than with Hadlock's formula (240.36 g and 6.46%, respectively). By incorporating previously unreported factors, such as the birth weight of prior pregnancies, a concise and effective DNN model was built on only 10 parameters. The accuracy rate of this new model increased from 76.08% to 83.87%, with a root-mean-square error of 243.80 g. CONCLUSIONS: The proposed DNN model for EFW calculation is more accurate than previous approaches in this area and can be adopted for better decision-making related to fetal monitoring.
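A minimal sketch of the two error metrics used to compare the DNN with Hadlock's formula: root-mean-square error in grams and mean absolute percentage error. Birth weights and predictions are placeholders, and the study's own model and formula coefficients are not reproduced here.

import numpy as np

def rmse(actual, predicted) -> float:
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

def mape(actual, predicted) -> float:
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

actual_bw = [3120, 2840, 3560, 3310, 2980]  # grams, hypothetical deliveries
dnn_efw = [3050, 2900, 3480, 3250, 3100]    # hypothetical model estimates
print(f"RMSE = {rmse(actual_bw, dnn_efw):.1f} g, MAPE = {mape(actual_bw, dnn_efw):.2f}%")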


Subject(s)
Fetal Weight , Ultrasonography, Prenatal , Child , Female , Humans , Pregnancy , Birth Weight , Fetus , Gestational Age , Retrospective Studies , Neural Networks, Computer
6.
J Dent ; 137: 104639, 2023 10.
Article in English | MEDLINE | ID: mdl-37517787

ABSTRACT

OBJECTIVES: To train and validate a cloud-based convolutional neural network (CNN) model for automated segmentation (AS) of dental implants and attached prosthetic crowns on cone-beam computed tomography (CBCT) images. METHODS: A total dataset of 280 maxillomandibular jawbone CBCT scans was acquired from patients who underwent implant placement with or without coronal restoration. The dataset was randomly divided into three subsets: training set (n = 225), validation set (n = 25) and testing set (n = 30). A CNN model was developed and trained using expert-based semi-automated segmentation (SS) of the implant and attached prosthetic crown as the ground truth. The performance of AS was assessed by comparing it with SS and with manually corrected automated segmentation, referred to as refined-automated segmentation (R-AS). Evaluation metrics included timing, voxel-wise comparison based on the confusion matrix, and 3D surface differences. RESULTS: The AS approach was on average 60 times faster (<30 s) than the SS approach. The CNN model was highly effective in segmenting dental implants both with and without coronal restoration, achieving high Dice similarity coefficient scores of 0.92±0.02 and 0.91±0.03, respectively. Moreover, the root mean square deviation values were also low (implant only: 0.08±0.09 mm, implant+restoration: 0.11±0.07 mm) when compared with R-AS, implying high AI segmentation accuracy. CONCLUSIONS: The proposed cloud-based deep learning tool demonstrated high performance and time-efficient segmentation of implants on CBCT images. CLINICAL SIGNIFICANCE: AI-based segmentation of implants and prosthetic crowns can minimize the negative impact of artifacts and enhance the generalizability of creating dental virtual models. Furthermore, incorporating the suggested tool into existing CNN models specialized for segmenting anatomical structures can improve pre-surgical planning for implants and post-operative assessment of peri-implant bone levels.


Subject(s)
Deep Learning , Dental Implants , Tooth , Humans , Cone-Beam Computed Tomography , Neural Networks, Computer , Image Processing, Computer-Assisted/methods
7.
J Yeungnam Med Sci ; 40(Suppl): S29-S36, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37491843

ABSTRACT

BACKGROUND: This study aimed to evaluate the accuracy and clinical usability of implant system classification using automated machine learning on a Google Cloud platform. METHODS: Four dental implant systems were selected: Osstem TSIII, Osstem USII, Biomet 3i Osseotite External, and Dentsply Sirona Xive. A total of 4,800 periapical radiographs (1,200 for each implant system) were collected and labeled based on electronic medical records. Regions of interest were manually cropped to 400×800 pixels, and all images were uploaded to Google Cloud storage. Approximately 80% of the images were used for training, 10% for validation, and 10% for testing. Google automated machine learning (AutoML) Vision automatically executed a neural architecture search technology to apply an appropriate algorithm to the uploaded data. A single-label image classification model was trained using AutoML. The performance of the model was evaluated in terms of accuracy, precision, recall, specificity, and F1 score. RESULTS: The accuracy, precision, recall, specificity, and F1 score of the AutoML Vision model were 0.981, 0.963, 0.961, 0.985, and 0.962, respectively. Osstem TSIII had an accuracy of 100%. Osstem USII and 3i Osseotite External were most often confused in the confusion matrix. CONCLUSION: Deep learning-based AutoML on a cloud platform showed high accuracy in the classification of dental implant systems as a fine-tuned convolutional neural network. Higher-quality images from various implant systems will be required to improve the performance and clinical usability of the model.
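The training itself runs inside Google AutoML Vision as a managed cloud step, so only the downstream evaluation is sketched here: deriving accuracy, per-class precision/recall, and F1 from held-out test labels. The labels below are invented and do not correspond to the study's test radiographs.

from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

systems = ["TSIII", "USII", "Osseotite", "Xive"]
y_true = ["TSIII", "USII", "Osseotite", "Xive", "USII", "Osseotite", "TSIII", "Xive"]
y_pred = ["TSIII", "Osseotite", "Osseotite", "Xive", "USII", "USII", "TSIII", "Xive"]

print("accuracy:", accuracy_score(y_true, y_pred))
print(confusion_matrix(y_true, y_pred, labels=systems))  # USII/Osseotite mix-ups show up here
print(classification_report(y_true, y_pred, labels=systems, zero_division=0))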

9.
Rev. colomb. cir ; 38(3): 439-446, May 8, 2023. fig, tab
Article in Spanish | LILACS | ID: biblio-1438420

ABSTRACT



Introduction. Due to the absence of statistically significant predictive models focused on postoperative complications in the surgical management of pneumothorax, we developed a model using neural networks that identifies the independent variables and their importance in reducing the incidence of postoperative complications. Methods. A retrospective single-center study was carried out, including 106 patients who required surgical management of pneumothorax. All patients were operated on by the same surgeon. An artificial neural network was developed to handle data with limited samples; the data were optimized, and each algorithm was evaluated independently and through cross-validation to obtain the lowest possible error and the highest precision with the shortest response time. Results. The most important variables according to their weight in the decision system of the neural network (AUC 0.991) were the video-assisted thoracoscopic approach (OR 1.131), the use of talc pleurodesis (OR 0.994), and the use of autosutures (OR 0.792; p<0.05). Discussion. In our study, the main independent predictors associated with a higher risk of complications were pneumothorax of secondary etiology and recurrent pneumothorax. Additionally, we confirmed that the variables associated with a reduced risk of postoperative complications were statistically significant. Conclusion. We identified video-assisted thoracoscopy, the use of autosutures, and talc pleurodesis as possible variables associated with a lower risk of complications, and we propose developing a tool that facilitates and supports decision-making, for which external validation in prospective studies is necessary.


Subject(s)
Humans , Pneumothorax , Artificial Intelligence , Neural Networks, Computer , Postoperative Complications , Talc , Thoracoscopy
10.
J Cancer Res Clin Oncol ; 149(10): 7877-7885, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37046121

ABSTRACT

PURPOSE: Surgical resection with complete tumor excision (R0) provides the best chance of long-term survival for patients with intrahepatic cholangiocarcinoma (iCCA). Optical coherence tomography (OCT) is a non-invasive imaging technology that could provide quick intraoperative assessment of resection margins as an adjunct to histological examination. In this study, we investigated the ability of OCT combined with convolutional neural networks (CNNs) to differentiate iCCA from normal liver parenchyma ex vivo. METHODS: Consecutive adult patients undergoing elective liver resections for iCCA between June 2020 and April 2021 (n = 11) were included in this study. Areas of interest from resection specimens were scanned ex vivo, before formalin fixation, using a table-top OCT device at 1310 nm wavelength. Scanned areas were marked and histologically examined, providing a diagnosis for each scan. An Xception CNN was trained, validated, and tested in matching OCT scans to their corresponding histological diagnoses through a 5 × 5 stratified cross-validation process. RESULTS: Twenty-four three-dimensional scans (corresponding to approx. 85,603 individual images) from ten patients were included in the analysis. In the 5 × 5 cross-validation, the model achieved a mean F1-score, sensitivity, and specificity of 0.94, 0.94, and 0.93, respectively. CONCLUSION: Optical coherence tomography combined with a CNN can differentiate iCCA from liver parenchyma ex vivo. Further studies are necessary to expand on these results and lead to innovative in vivo OCT applications, such as intraoperative or endoscopic scanning.
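A minimal sketch of the 5 x 5 stratified cross-validation scheme (five repeats of five-fold splits) used to score the classifier; the Xception network is replaced by a generic scikit-learn classifier, and the per-scan features and labels are random placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import RepeatedStratifiedKFold

rng = np.random.default_rng(0)
X = rng.random((48, 16))         # stand-in feature vectors, one row per OCT scan
y = rng.integers(0, 2, size=48)  # 1 = iCCA, 0 = normal liver parenchyma

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
scores = []
for train_idx, test_idx in cv.split(X, y):
    clf = RandomForestClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), zero_division=0))
print(f"mean F1 over 5 x 5 cross-validation: {np.mean(scores):.2f}")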


Subject(s)
Bile Duct Neoplasms , Cholangiocarcinoma , Adult , Humans , Tomography, Optical Coherence/methods , Neural Networks, Computer , Liver/diagnostic imaging , Liver/surgery , Cholangiocarcinoma/diagnostic imaging , Cholangiocarcinoma/surgery , Bile Duct Neoplasms/diagnostic imaging , Bile Duct Neoplasms/surgery , Bile Ducts, Intrahepatic/diagnostic imaging , Bile Ducts, Intrahepatic/surgery
11.
Cancer Inform ; 21: 11769351221135141, 2022.
Article in English | MEDLINE | ID: mdl-36408331

ABSTRACT

Purpose: There is a lack of tools for identifying the site of origin in mucinous cancer. This study aimed to evaluate the performance of a transcriptome-based classifier for identifying the site of origin in mucinous cancer. Materials and Methods: Transcriptomic data of 1878 non-mucinous and 82 mucinous cancer specimens, with 7 sites of origin, namely, the uterine cervix (CESC), colon (COAD), pancreas (PAAD), stomach (STAD), uterine endometrium (UCEC), uterine carcinosarcoma (UCS), and ovary (OV), obtained from The Cancer Genome Atlas, were used as the training and validation sets, respectively. Transcriptomic data of 14 mucinous cancer specimens from a tissue archive were used as the test set. For identifying the site of origin, a set of 100 differentially expressed genes for each site of origin was selected. After removing multiple iterations of the same gene, 427 genes were chosen, and their RNA expression profiles at each site of origin were used to train the deep neural network classifier. The performance of the classifier was estimated using the training, validation, and test sets. Results: The accuracy of the model in the training set was 0.998, while that in the validation set was 0.939 (77/82). In the test set, which was newly sequenced from a tissue archive, the model showed an accuracy of 0.857 (12/14). t-SNE analysis revealed that samples in the test set were part of the clusters obtained for the training set. Conclusion: Although limited by the small sample size, we showed that a transcriptome-based classifier could correctly identify the site of origin of mucinous cancer.
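A minimal sketch of the feature-selection step described above: take the top 100 differentially expressed genes per site of origin, then keep only one copy of genes that recur across sites before training the classifier. Gene identifiers and rankings are random placeholders (the study retained 427 genes after this step).

import random

random.seed(0)
sites = ["CESC", "COAD", "PAAD", "STAD", "UCEC", "UCS", "OV"]

# Hypothetical per-site gene lists, assumed already ranked by differential expression.
gene_pool = [f"GENE{i:04d}" for i in range(500)]
top_genes_per_site = {site: random.sample(gene_pool, 100) for site in sites}

selected, seen = [], set()
for site in sites:
    for gene in top_genes_per_site[site]:
        if gene not in seen:  # drop repeated occurrences of the same gene
            seen.add(gene)
            selected.append(gene)
print(len(selected), "unique genes retained as classifier inputs")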

12.
J Dent ; 111: 103705, 2021 08.
Article in English | MEDLINE | ID: mdl-34077802

ABSTRACT

OBJECTIVES: This study proposed and investigated the performance of a deep learning-based three-dimensional (3D) convolutional neural network (CNN) model for automatic segmentation of the pharyngeal airway space (PAS). METHODS: A dataset of 103 computed tomography (CT) and cone-beam CT (CBCT) scans was acquired from an orthognathic surgery patient database. The acquisition devices consisted of 1 CT device (128-slice multi-slice spiral CT, Siemens Somatom Definition Flash, Siemens AG, Erlangen, Germany) and 2 CBCT devices (Promax 3D Max, Planmeca, Helsinki, Finland and Newtom VGi evo, Cefla, Imola, Italy) with different scanning parameters. A 3D CNN-based model (3D U-Net) was built for automatic segmentation of the PAS. The complete CT/CBCT dataset was split into three sets: a training set (n = 48) for training the model on the ground-truth observer-based manual segmentation, a test set (n = 25) for assessing the final performance of the model, and a validation set (n = 30) for evaluating the model's performance versus observer-based segmentation. RESULTS: The CNN model was able to identify the segmented region with optimal precision (0.97±0.01) and recall (0.96±0.03). The maximal difference between the automatic segmentation and the ground truth, based on the 95% Hausdorff distance, was 0.98±0.74 mm. The Dice score of 0.97±0.02 confirmed the high similarity of the segmented region to the ground truth. The intersection over union (IoU) metric was also found to be high (0.93±0.03). Among the acquisition devices, the Newtom VGi evo CBCT showed better performance than the Promax 3D Max and the CT device. CONCLUSION: The proposed 3D U-Net model offered an accurate and time-efficient method for the segmentation of the PAS from CT/CBCT images. CLINICAL SIGNIFICANCE: The proposed method can allow clinicians to accurately and efficiently diagnose, plan treatment for, and follow up patients with dento-skeletal deformities and obstructive sleep apnea, which might influence the upper airway space, thereby further improving patient care.
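A minimal sketch of the 95% Hausdorff distance reported above: for two surface point sets, take the 95th percentile of the symmetric nearest-neighbour distances instead of the maximum, which damps the influence of isolated outliers. The point clouds are random stand-ins for segmentation surfaces, with coordinates in millimetres.

import numpy as np
from scipy.spatial.distance import cdist

def hausdorff_95(a: np.ndarray, b: np.ndarray) -> float:
    d = cdist(a, b)         # pairwise point-to-point distances
    a_to_b = d.min(axis=1)  # each point of a to its nearest neighbour in b
    b_to_a = d.min(axis=0)  # each point of b to its nearest neighbour in a
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))

rng = np.random.default_rng(0)
auto_surface = rng.random((500, 3)) * 50                                # hypothetical automatic segmentation surface
manual_surface = auto_surface + rng.normal(0, 0.5, auto_surface.shape)  # slightly perturbed ground truth
print(f"HD95 = {hausdorff_95(auto_surface, manual_surface):.2f} mm")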


Subject(s)
Cone-Beam Computed Tomography , Neural Networks, Computer , Databases, Factual , Finland , Humans , Image Processing, Computer-Assisted , Tomography, X-Ray Computed
13.
Toxicol Pathol ; 49(4): 888-896, 2021 06.
Article in English | MEDLINE | ID: mdl-33287662

ABSTRACT

Rodent progressive cardiomyopathy (PCM) encompasses a constellation of microscopic findings commonly seen as a spontaneous background change in rat and mouse hearts. Primary histologic features of PCM include varying degrees of cardiomyocyte degeneration/necrosis, mononuclear cell infiltration, and fibrosis. Mineralization can also occur. Cardiotoxicity may increase the incidence and severity of PCM, and toxicity-related morphologic changes can overlap with those of PCM. Consequently, sensitive and consistent detection and quantification of PCM features are needed to help differentiate spontaneous from test article-related findings. To address this, we developed a computer-assisted image analysis algorithm, facilitated by a fully convolutional network deep learning technique, to detect and quantify the microscopic features of PCM (degeneration/necrosis, fibrosis, mononuclear cell infiltration, mineralization) in rat heart histologic sections. The trained algorithm achieved high values for accuracy, intersection over union, and dice coefficient for each feature. Further, there was a strong positive correlation between the percentage area of the heart predicted to have PCM lesions by the algorithm and the median severity grade assigned by a panel of veterinary toxicologic pathologists following light microscopic evaluation. By providing objective and sensitive quantification of the microscopic features of PCM, deep learning algorithms could assist pathologists in discerning cardiotoxicity-associated changes.


Subject(s)
Artificial Intelligence , Cardiomyopathies , Algorithms , Animals , Cardiomyopathies/chemically induced , Mice , Neural Networks, Computer , Rats , Rodentia
14.
Korean J Women Health Nurs ; 26(1): 5-9, 2020 Mar 31.
Article in English | MEDLINE | ID: mdl-36311852

ABSTRACT

Artificial intelligence (AI), which includes machine learning and deep learning, has been introduced to nursing care in recent years. The present study reviews the following topics: the concepts of AI, machine learning, and deep learning; examples of AI-based nursing research; the necessity of education on AI in nursing schools; and the areas of nursing care where AI is useful. AI refers to an intelligent system consisting not of a human but of a machine. Machine learning refers to computers' ability to learn without being explicitly programmed. Deep learning is a subset of machine learning that uses artificial neural networks consisting of multiple hidden layers. It is suggested that the educational curriculum should include big data, the concept of AI, algorithms and models of machine learning, the model of deep learning, and coding practice. The standard curriculum should be organized by the nursing society. Examples of areas of nursing care where AI is useful include prenatal nursing interventions based on pregnant women's nursing records and AI-based prediction of the risk of delivery according to pregnant women's age. Nurses should be able to cope with the rapidly developing environment of nursing care influenced by AI and should understand how to apply AI in their field. It is time for Korean nurses to take steps to become familiar with AI in their research, education, and practice.

15.
Front Endocrinol (Lausanne) ; 11: 552719, 2020.
Article in English | MEDLINE | ID: mdl-33505353

ABSTRACT

Objective: Decreased bone mineral density (BMD) impairs screw purchase in trabecular bone and can cause screw loosening following spinal instrumentation. Existing computed tomography (CT) scans could be used for opportunistic osteoporosis screening for decreased BMD. The purpose of this case-control study was to investigate the association of opportunistically assessed BMD with the outcome after spinal surgery with semi-rigid instrumentation for lumbar degenerative instability. Methods: We reviewed consecutive patients who had primary surgery with semi-rigid instrumentation in our hospital. Patients who showed screw loosening on follow-up imaging qualified as cases. Patients who did not show screw loosening or, if no follow-up imaging was available (n = 8), reported benefit from surgery ≥ 6 months after primary surgery qualified as controls. Matching criteria were sex, age, and surgical construct. Opportunistic BMD screening was performed at L1 to L4 in perioperative CT scans by automatic spine segmentation with asynchronous calibration. Processing steps of this deep learning-driven approach can be reproduced using the freely available online tool Anduin (https://anduin.bonescreen.de). The area under the curve (AUC) was calculated for BMD as a predictor of screw loosening. Results: Forty-six elderly patients (69.9 ± 9.1 years), 23 cases and 23 controls, were included. The majority of surgeries involved three spinal motion segments (n = 34). Twenty patients had low bone mass and 13 had osteoporotic BMD. Cases had significantly lower mean BMD (86.5 ± 29.5 mg/cm³) than controls (118.2 ± 32.9 mg/cm³, p = 0.001), i.e., patients with screw loosening showed reduced BMD. Screw loosening was best predicted by a BMD < 81.8 mg/cm³ (sensitivity = 91.3%, specificity = 56.5%, AUC = 0.769, p = 0.002). Conclusion: The prevalence of osteoporosis or low bone mass (BMD ≤ 120 mg/cm³) was relatively high in this group of elderly patients undergoing spinal surgery. Screw loosening was associated with BMD close to the threshold for osteoporosis (< 80 mg/cm³). Opportunistic BMD screening is feasible using the presented approach and can guide the surgeon to take measures to prevent screw loosening and increase the likelihood of a favorable outcome.
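A minimal sketch of the threshold analysis described above: treat BMD as a (negated) risk score for screw loosening, compute the AUC, and read off sensitivity and specificity at a candidate cut-off. The BMD values and outcomes below are invented; only the 81.8 mg/cm³ cut-off is taken from the abstract.

import numpy as np
from sklearn.metrics import roc_auc_score

loosening = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])                       # 1 = screw loosening (case)
bmd = np.array([72, 80, 65, 90, 78, 130, 115, 95, 140, 120], dtype=float)  # mg/cm3, hypothetical

auc = roc_auc_score(loosening, -bmd)  # negate BMD: lower density should mean higher predicted risk
cutoff = 81.8                         # mg/cm3, threshold reported in the study
sens = ((bmd < cutoff) & (loosening == 1)).sum() / (loosening == 1).sum()
spec = ((bmd >= cutoff) & (loosening == 0)).sum() / (loosening == 0).sum()
print(f"AUC = {auc:.2f}; at BMD < {cutoff}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")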


Subject(s)
Bone Diseases, Metabolic/diagnostic imaging , Orthopedic Procedures/instrumentation , Orthopedic Procedures/methods , Osteoporosis/diagnostic imaging , Pedicle Screws , Spinal Diseases/surgery , Aged , Aged, 80 and over , Bone Diseases, Metabolic/epidemiology , Case-Control Studies , Humans , Lumbar Vertebrae/surgery , Middle Aged , Osteoporosis/epidemiology , Sensitivity and Specificity , Spinal Diseases/epidemiology , Treatment Outcome
16.
Cad. Saúde Pública (Online) ; 36(8): e00038319, 2020. tab, graf
Article in Portuguese | LILACS | ID: biblio-1124320

ABSTRACT



Abstract: The objective was to apply artificial neural networks to classify municipalities (counties) in Rio Grande do Norte State, Brazil, according to their social vulnerability. This was an ecological study using 17 variables that reflected epidemiological, demographic, socioeconomic, and educational indicators for the year 2010. The sources were the Human Development Atlas for Brazil and the Brazilian Institute of Geography and Statistics. For classification of the municipalities, the study applied the artificial neural networks of the PNN and Multilayer feedforward types, resulting in a classification in five categories of vulnerability: very high, high, medium, low, and very low. The networks' training phase used the minimum and maximum values, 25th and 75th percentiles, and medians for the 17 selected variables. The Multilayer feedforward network with six nodes showed the best results. The municipalities from the Metropolitan Area (Natal, Parnamirim) and the eastern and western Seridó micro-regions (Caicó, Currais Novos, São José do Seridó, Jardim do Seridó, Parelhas, Carnaúba dos Dantas) showed the lowest levels of vulnerability. The municipalities with high and very high vulnerability were located in the East of the state, in the micro-regions of the Northeast Coast (João Câmara, Touros, Caiçara do Rio dos Ventos) and Southern Coast (Nísia Floresta, São José do Mipibu, Arês, Canguaretama). The neural network classified the municipalities with high precision, distinguishing those with extreme vulnerability from those with better social indicators.
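A minimal sketch of the best-performing configuration described in the abstract: a multilayer feedforward network with a single hidden layer of six nodes classifying municipalities into five vulnerability categories from 17 indicators. The feature matrix and labels are random placeholders, not the 2010 municipal data, and scikit-learn's MLPClassifier stands in for whatever software the authors used.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((160, 17))         # one synthetic row per municipality, 17 indicators
y = rng.integers(0, 5, size=160)  # 0 = very low ... 4 = very high vulnerability

X_scaled = StandardScaler().fit_transform(X)
net = MLPClassifier(hidden_layer_sizes=(6,), max_iter=2000, random_state=0)
net.fit(X_scaled, y)
print("training accuracy:", round(net.score(X_scaled, y), 2))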




Subject(s)
Humans , Neural Networks, Computer , Environment , Brazil , Cities , Geography