Results 1 - 20 of 599
1.
J. bras. econ. saúde (Impr.) ; 16(2): 108-120, Agosto/2024.
Article in Portuguese | LILACS-Express | LILACS | ID: biblio-1571621

ABSTRACT

Objective: This study explores the perceptions of managers in the technology and innovation areas of Brazilian private hospitals regarding the use of artificial intelligence (AI) in healthcare, focusing specifically on the personalization of the patient experience in these hospitals. Methods: This is a quantitative, cross-sectional, descriptive study. A 14-question questionnaire was developed and distributed, with the support of the Associação Nacional de Hospitais Privados (ANAHP), to a sample of technology and innovation managers in hospitals. The questionnaire was made available online to the 122 hospitals associated with ANAHP. Results: Thirty complete responses were obtained (approximately 25% of the total base), capturing perceptions of the advantages, disadvantages, and ethical and technical challenges of using AI in clinical settings, particularly hospital environments. The responses confirmed both the optimism and the caution of technology and innovation professionals in private hospitals regarding the power and impact of AI on patient experience personalization, and indicated the need for adequate staff training in these hospitals to maximize the benefits of AI as a decision-support tool. Conclusions: This study serves as a reference for healthcare institutions considering AI for patient experience personalization and aiming to establish personnel training based on these principles. The results offer valuable guidance for the full adoption of AI in the healthcare sector.

2.
An. Fac. Cienc. Méd. (Asunción) ; 57(2): 90-104, 01/08/2024.
Article in Spanish | LILACS | ID: biblio-1573797

ABSTRACT

Artificial intelligence is being widely used in various fields of medicine. The aim of this review is to describe the main applications, opportunities, and challenges of artificial intelligence in medicine, providing a perspective on the current context. A narrative review of the literature was conducted, identifying the most up-to-date and relevant information on the topic. The electronic databases PubMed, Scopus, and SciELO were consulted, from January 2019 to March 2024, in both English and Spanish. Systematic and non-systematic literature reviews, scoping reviews, original articles, and book chapters were included. Duplicate articles, unclear scientific papers, those of low scientific rigour, and grey literature were excluded. The implementation of artificial intelligence in medicine has brought remarkable benefits, ranging from the recording of medical information to the discovery of new drugs, and has revolutionized the traditional way of practising medicine. On the other hand, it has brought challenges in terms of accuracy, reliability, ethics, and privacy, among others. It is crucial to maintain a patient-centred approach and ensure that these technologies are used to improve health outcomes and promote equity in access to care. Collaboration between healthcare professionals, researchers, regulators, and technology developers will be critical to address these challenges and realise the full potential of artificial intelligence.


Subject(s)
Artificial Intelligence , Medicine
3.
Int. j. morphol ; 42(4): 970-976, ago. 2024. ilus, tab
Article in English | LILACS | ID: biblio-1569272

ABSTRACT

Since machine learning (ML) algorithms give more reliable results, they have been increasingly used in healthcare in recent years. Orbital variables give very successful results in correctly classifying sex. This research focused on sex determination using variables obtained from orbital computed tomography (CT) images by means of ML algorithms. In this study, 12 variables measured on 600 orbital images from 300 individuals (150 men and 150 women) were tested with different ML algorithms. Decision Tree (DT), K-Nearest Neighbors (KNN), Logistic Regression (LR), Random Forest (RF), Linear Discriminant Analysis (LDA), and Naive Bayes (NB) algorithms were used for supervised learning. Statistical analyses of the variables were conducted with the Minitab® 21.2 (64-bit) program. The accuracy (ACC) of the NB, DT, KNN, and LR algorithms was 83%, while that of the LDA and RF algorithms was 85%. According to SHAP analysis, the variable with the highest degree of effect was BOW. The study determined sex with high accuracy (0.83 and 0.85) using variables from orbital CT images, and the related morphometric data of the population in question were acquired, emphasizing racial variation.
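
A minimal sketch of the model comparison described above, using scikit-learn. The synthetic data stands in for the 300-subject, 12-variable orbital CT dataset; the data values, cross-validation split, and hyperparameters are assumptions, not the authors' setup.

```python
# Compare the six classifiers named in the abstract by cross-validated accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB

# Stand-in for the 300-individual, 12-variable orbital measurement table.
X, y = make_classification(n_samples=300, n_features=12, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "NB": GaussianNB(),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```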


Subject(s)
Humans , Male , Female , Orbit/diagnostic imaging , Tomography, X-Ray Computed , Sex Determination by Skeleton , Machine Learning , Orbit/anatomy & histology , Algorithms , Logistic Models , Forensic Anthropology , Imaging, Three-Dimensional
4.
Article | IMSEAR | ID: sea-231636

ABSTRACT

This multidisciplinary research presents a comprehensive method to tackle the widespread problem of spice adulteration, which poses substantial risks to both public health and spice authenticity. A comprehensive approach is developed to authenticate spices with high accuracy and efficiency by combining traditional methods with contemporary approaches such as machine learning and artificial intelligence. This paper presents a specific case study in which machine learning models, specifically transfer learning with proven frameworks such as MobileNetV2, were effectively employed. The models achieved an impressive accuracy of 98.67% in identifying Capsicum annuum, a spice that is commonly adulterated in the market. In addition, a wide range of traditional and advanced techniques, including qualitative testing, microscopy, colorimetry, density measurement, and spectroscopy, is reviewed closely. The article also provides a detailed explanation of high-performance liquid chromatography-based quantitation of capsaicin, the main active constituent for ascertaining the quality of C. annuum. The present work defines a new interdisciplinary approach and provides valuable information on evaluating the quality of spices and identifying adulterants using artificial intelligence. The outcomes presented here have the potential to transform the methods used to verify the authenticity of spices and herbal drugs, thereby ensuring the safety and health of consumers by confirming quality.
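
A minimal transfer-learning sketch along the lines described above: MobileNetV2 as a frozen feature extractor with a new classification head. The input size, the two-class head (pure vs. adulterated), and the `spice_images/` directory are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf

# MobileNetV2 pretrained on ImageNet, used as a frozen feature extractor.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep ImageNet features fixed during initial training

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. pure vs. adulterated
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A labelled image dataset could be built with, for example:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "spice_images/", image_size=(224, 224))
# model.fit(train_ds, epochs=10)
```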

5.
BAG, J. basic appl. genet. (Online) ; 35(1): 39-51, jun. 2024. graf
Article in English | LILACS-Express | LILACS | ID: biblio-1574062

ABSTRACT

Random Forest approaches have been used in phenotyping at both morphological and metabolic levels and in genomics studies, but direct applications in practical plant genetics and breeding situations are scarce. Random Forest was compared with Discriminant Analysis for its ability to classify tomato individuals belonging to different breeding populations, based exclusively on phenotypic fruit quality traits. To take into account different steps in breeding programs, two populations were assayed. One was composed of a set of RILs derived from an interspecific tomato cross, and the other of two of these RILs and the corresponding F1, F2, and backcross generations. As tomato is an autogamous species, the first population was considered a final step in breeding programs, because promising genotypes are being evaluated for possible commercial release as new cultivars; the second, in which new variation is being generated, was considered an initial step. Both Random Forest and Discriminant Analysis were able to classify populations with the aim of evaluating general variability and identifying the traits that most contribute to this variability. However, overall classification errors were lower for Random Forest. When comparing classification adequacy between populations, the errors of both statistical analyses were greater in the second population than in the first, though Random Forest was more precise than Discriminant Analysis even in this initial step of plant breeding programs. Random Forest gave breeders a reliable classification of tomato individuals belonging to different breeding populations.
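
A minimal sketch of the comparison described above: Random Forest versus linear Discriminant Analysis classification error under cross-validation. The synthetic data stands in for the fruit-quality traits and population labels; feature counts and class counts are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stand-in for fruit-quality measurements labelled by breeding population.
X, y = make_classification(n_samples=200, n_features=8, n_classes=3,
                           n_informative=5, random_state=1)

for name, model in [("Random Forest", RandomForestClassifier(random_state=1)),
                    ("Discriminant Analysis", LinearDiscriminantAnalysis())]:
    err = 1 - cross_val_score(model, X, y, cv=5).mean()  # overall error rate
    print(f"{name}: classification error = {err:.2f}")
```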


6.
Article | IMSEAR | ID: sea-228108

ABSTRACT

Background: Financial strain resulting from cancer treatment correlates with reduced quality of life, treatment nonadherence, bankruptcy, and maladaptive behaviours. This study explores the potential of a supervised machine learning algorithm to predict financial toxicity in cancer patients based on their Tweets. Methods: A dataset of Tweets related to cancer and financial toxicity was constructed using Twitter's API. The dataset was curated, and synthetic Tweets were generated to augment the final dataset. A supervised machine learning algorithm, specifically Multinomial Naïve Bayes, was trained and tested to predict financial toxicity in cancer patients. Results: The model demonstrated high accuracy (0.97), precision (0.95), recall (0.99), specificity (0.96), F1 score (0.97), and area under the receiver operating characteristic curve (0.98) in predicting financial toxicity from Tweets. Word-cloud visualizations illustrated distinct linguistic patterns between Tweets related and unrelated to financial toxicity. The study also outlined proactive strategies for leveraging social media platforms like Twitter to identify and support cancer patients experiencing financial toxicity. Conclusions: This study marks the first attempt to construct a dataset of Tweets related to financial toxicity in cancer patients and to evaluate a predictive model trained on it. The findings highlight the model's predictive capability and its potential utility in guiding health systems and cancer-center financial navigators to alleviate the economic burdens associated with cancer treatment.
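
A minimal sketch of the classification setup described above: TF-IDF features feeding a Multinomial Naive Bayes classifier, evaluated with scikit-learn's standard metrics. The two example Tweets are invented placeholders, not items from the study dataset.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Placeholder corpus: 1 = financial toxicity, 0 = unrelated.
texts = ["chemo bills are drowning us, may lose the house",
         "grateful my scan came back clear today"]
labels = [1, 0]

# Repeat the toy examples so there is something to split and score.
X_train, X_test, y_train, y_test = train_test_split(
    texts * 50, labels * 50, random_state=0)

clf = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```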

7.
Medicina (B.Aires) ; 84(supl.1): 57-64, mayo 2024. graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1558485

ABSTRACT

Introduction: Autism Spectrum Disorder (ASD) is a neurodevelopmental condition whose traditional assessment procedures encounter certain limitations. The current ASD research field is exploring and endorsing innovative methods to assess the disorder early on, based on the automatic detection of biomarkers. However, many of these procedures lack ecological validity in their measurements. In this context, virtual reality (VR) shows promise for objectively recording biosignals while users experience ecological situations. Methods: This study outlines a novel and playful VR procedure for the early assessment of ASD, relying on multimodal biosignal recording. During a VR experience featuring 12 virtual scenes, eye gaze, motor skills, electrodermal activity, and behavioural performance were measured in 39 children with ASD and 42 control peers. Machine learning models were developed to identify digital biomarkers and classify autism. Results: Individual biosignals showed varied performance in detecting ASD, while the model resulting from the combination of the specific-biosignal models identified ASD with an accuracy of 83% (SD = 3%) and an AUC of 0.91 (SD = 0.04). Discussion: This screening tool may support ASD diagnosis by reinforcing the outcomes of traditional assessment procedures.
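
One plausible way to combine per-biosignal models into a single classifier, as the combined model above suggests, is soft voting over predicted probabilities. This sketch is an assumption about the combination scheme, and for brevity all base models share one placeholder feature matrix rather than modality-specific features.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder for 81 participants (39 ASD + 42 controls) with extracted features.
X, y = make_classification(n_samples=81, n_features=30, random_state=0)

# In the study each model would see its own biosignal's features; here the
# three named estimators simply stand in for the per-modality models.
ensemble = VotingClassifier(
    estimators=[("gaze", LogisticRegression(max_iter=1000)),
                ("motor", RandomForestClassifier(random_state=0)),
                ("eda", SVC(probability=True, random_state=0))],
    voting="soft")  # average predicted probabilities across models

print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```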

8.
Rev. mex. ing. bioméd ; 45(1): 31-42, Jan.-Apr. 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1570001

ABSTRACT

The objective of this research is to present a comparative analysis of various time-window (TW) lengths for emotion recognition, employing machine learning techniques and the portable wireless sensing device EPOC+. Entropy is used as a feature to evaluate the performance of different classifier models across TW lengths, based on a dataset of EEG signals recorded from individuals during emotional stimulation. Two types of analyses were conducted: between-subjects and within-subjects. Performance measures such as accuracy, area under the curve, and Cohen's kappa coefficient were compared among five supervised classifier models: K-Nearest Neighbors (KNN), Support Vector Machine (SVM), Logistic Regression (LR), Random Forest (RF), and Decision Trees (DT). The results indicate that, in both analyses, all five models perform better with TWs ranging from 2 to 15 seconds, with the 10-second TW standing out for the between-subjects analysis and the 5-second TW for the within-subjects analysis; furthermore, TWs exceeding 20 seconds are not recommended. These findings provide valuable guidance for selecting TWs in EEG signal analysis when studying emotions.
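
A minimal sketch of the windowed entropy feature described above: Shannon entropy of the amplitude distribution computed per time window. The 128 Hz rate (the EPOC+ nominal sampling rate), bin count, and random signal are illustrative assumptions.

```python
import numpy as np
from scipy.stats import entropy

fs = 128                             # assumed sampling rate in Hz
window_s = 10                        # one of the TW lengths compared above
signal = np.random.randn(fs * 60)    # stand-in for one EEG channel (60 s)

features = []
step = fs * window_s
for start in range(0, len(signal) - step + 1, step):
    window = signal[start:start + step]
    # Histogram of amplitudes in the window; entropy() normalizes the bins.
    hist, _ = np.histogram(window, bins=32, density=True)
    features.append(entropy(hist + 1e-12))  # Shannon entropy of the window
print(features)
```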


9.
Rev. colomb. anestesiol ; 52(1)mar. 2024.
Article in English | LILACS-Express | LILACS | ID: biblio-1535712

ABSTRACT

The rapid advancement of Artificial Intelligence (AI) has taken the world by "surprise" due to the lack of regulation over this technological innovation which, while promising application opportunities in different fields of knowledge, including education, simultaneously generates concern, rejection and even fear. In the field of Health Sciences Education, clinical simulation has transformed educational practice; however, its formal insertion is still heterogeneous, and we are now facing a new technological revolution where AI has the potential to transform the way we conceive its application.


10.
Rev. argent. cardiol ; 92(1): 5-14, mar. 2024. tab, graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1559227

ABSTRACT

Background: The growing number of echocardiographic studies and the need for strict adherence to international quantification guidelines have forced cardiologists to perform highly extensive and repetitive tasks when interpreting and analyzing increasingly overwhelming amounts of data. Novel machine learning (ML) techniques, designed to recognize images and perform measurements in the appropriate views, are increasingly used to meet this evident need for process automation. Objectives: Our objective was to evaluate an alternative model for the interpretation and analysis of echocardiographic studies, based primarily on ML software capable of identifying and classifying views and performing standardized measurements automatically. Methods: Images from 2000 healthy, disease-free subjects were used, 1800 of them to develop the ML algorithms and 200 for subsequent validation. First, a convolutional neural network was developed to recognize 18 standard echocardiographic views and classify them into 8 thematic groups (stacks). The results of automatic identification were compared with classification by experts. Then, ML algorithms were developed to automatically measure 16 Doppler echocardiographic parameters of routine clinical evaluation, which were compared with measurements by an expert reader. Finally, we compared the time required to complete the analysis of an echocardiographic study using conventional manual methods with the time needed when using the ML model for image classification and initial echocardiographic measurements. Inter- and intra-observer variability was also analyzed. Results: Automatic view classification was possible in less than 1 second per study, with 90% accuracy for 2D images and 94% accuracy for Doppler images. Grouping images into stacks had 91% accuracy, and the stacks could be completed with the required images in 99% of cases. Agreement with experts was excellent, with differences similar to those observed between two human readers. Incorporating ML into echocardiographic image classification and measurement reduced analysis time by 41% and showed lower variability than conventional interpretation. Conclusion: Incorporating ML techniques may significantly improve the reproducibility and efficiency of echocardiographic interpretations and measurements. Implementing this type of technology in clinical practice could reduce costs and increase medical staff satisfaction.
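
A minimal sketch of a convolutional network for the 18-view classification task described above. The layer sizes and input resolution are assumptions; the abstract does not specify the architecture.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),       # one grayscale echo frame
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(18, activation="softmax"),   # 18 standard views
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```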

11.
Rev. argent. cardiol ; 92(1): 55-63, mar. 2024. graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1559233

ABSTRACT

Artificial intelligence (AI) is based on computer programs that can imitate human thinking and automate certain processes. It has been studied in the medical field for over 50 years, but in recent years its growth has been exponential. The field of cardiovascular imaging is particularly attractive for AI, since, guided by AI, non-experts can acquire complete images; processes and measurements can be automated; diagnoses can be guided; findings not visible to the human eye can be detected; opportunistic diagnoses can be made of conditions not sought in the index study but assessable from the available images; and patterns of association can be identified within large amounts of data as a source of hypothesis generation. In the field of cardiovascular prevention, AI has been applied in different scenarios for diagnostic, prognostic, and therapeutic purposes in the management of cardiovascular risk factors such as dyslipidemia and hypertension. While the use of AI has limitations, such as cost, accessibility and compatibility of programs, the external validity of results in certain populations, and ethical-legal issues such as data privacy, this technology is growing rapidly and may well revolutionize current medical practice.

12.
Article | IMSEAR | ID: sea-233770

ABSTRACT

Mutations that promote aberrant cell growth are the root of the condition known as cancer. There are over a hundred distinct forms of cancer that have been identified, with lung, colon, pancreatic, breast, kidney, and prostate cancer being the most prevalent. The likelihood that a patient will survive cancer is significantly improved by early identification. Most techniques used to detect cancer are invasive, which may be painful and uncomfortable for patients and prevent them from seeking treatment. As a result, cancer is frequently discovered only after substantial symptoms have developed and it may then be too late for treatment. In this review, we will discuss several methods for detecting cancer through blood tests, different elements that serve as biomarkers, and machine learning algorithms for predicting outcomes.

13.
Yao Xue Xue Bao ; (12): 76-83, 2024.
Article in Chinese | WPRIM | ID: wpr-1005439

ABSTRACT

Most chemical medicines have polymorphs, and differences in the physicochemical properties of drug polymorphs directly affect the stability, efficacy, and safety of solid drug products. Polymorphism is therefore critically important to pharmaceutical chemistry, manufacturing, and controls, and is a key factor in the quality of high-end drugs and formulations. Polymorph prediction technology can effectively guide screening experiments and reduce the risk of missing a stable crystal form in traditional experimentation. Polymorph prediction was first based on theoretical calculations such as quantum mechanics and computational chemistry, and was later advanced by machine learning, a key artificial intelligence technology. The current trend is to combine the advantages of theoretical calculation and machine learning to predict crystal structures jointly. Predicting drug polymorphs remains a challenging problem; building on and integrating existing technologies is expected to make such prediction more accurate and efficient.

14.
Article in Chinese | WPRIM | ID: wpr-1006505

ABSTRACT

Objective: To construct a radiomics model for identifying clinically high-risk carotid plaques. Methods: A retrospective analysis was conducted on patients with carotid artery stenosis treated at China-Japan Friendship Hospital from December 2016 to June 2022. The patients were classified into a clinically high-risk carotid plaque group and a clinically low-risk carotid plaque group according to the occurrence of stroke, transient ischemic attack, and other cerebrovascular clinical symptoms within six months. Six machine learning models were established: extreme gradient boosting (XGBoost), support vector machine, Gaussian naive Bayes, logistic regression, k-nearest neighbors, and artificial neural network. A joint prediction model was also constructed by combining radiomics features with a logistic regression analysis of clinical risk factors. Results: In total, 652 patients were included, 427 males and 225 females, with an average age of 68.2 years. Among the six machine learning models, XGBoost had the best predictive ability, with an area under the curve (AUC) of 0.751 in the validation dataset. The joint XGBoost prediction model built from clinical data and carotid imaging data reached an AUC of 0.823 in the validation dataset. Conclusion: Radiomics features combined with a clinical-feature model can effectively identify clinically high-risk carotid plaques.
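
A minimal sketch of one way to build such a joint model: an XGBoost radiomics score fed, together with clinical factors, into a logistic regression. The data, feature counts, and the exact combination scheme are assumptions, not the authors' pipeline.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_rad = rng.normal(size=(652, 50))    # placeholder radiomics features
X_clin = rng.normal(size=(652, 5))    # placeholder clinical risk factors
y = rng.integers(0, 2, size=652)      # 1 = high-risk plaque, 0 = low-risk

idx_train, idx_test = train_test_split(np.arange(652), random_state=0)

# Step 1: radiomics-only XGBoost model produces a per-patient score.
xgb = XGBClassifier(eval_metric="logloss").fit(X_rad[idx_train], y[idx_train])
# For brevity the score is computed in-sample; a real pipeline would use
# out-of-fold predictions to avoid leakage into the second-stage model.
rad_score = xgb.predict_proba(X_rad)[:, 1].reshape(-1, 1)

# Step 2: logistic regression combines the score with clinical factors.
X_joint = np.hstack([rad_score, X_clin])
lr = LogisticRegression(max_iter=1000).fit(X_joint[idx_train], y[idx_train])
auc = roc_auc_score(y[idx_test], lr.predict_proba(X_joint[idx_test])[:, 1])
print("joint model AUC:", auc)
```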

15.
Article in Chinese | WPRIM | ID: wpr-1006526

ABSTRACT

Lung adenocarcinoma is a prevalent histological subtype of non-small cell lung cancer, with diverse morphological and molecular features that are critical for prognosis and treatment planning. In recent years, with the development of artificial intelligence technology, its application to the study of pathological subtypes and gene expression in lung adenocarcinoma has gained widespread attention. This paper reviews the progress of machine learning and deep learning in pathological subtype classification and gene expression analysis of lung adenocarcinoma, summarizes current problems and challenges, and anticipates future directions for artificial intelligence in lung adenocarcinoma research.

16.
Journal of Army Medical University ; (semimonthly): 738-745, 2024.
Article in Chinese | WPRIM | ID: wpr-1017586

ABSTRACT

Objective: To construct risk prediction models for death or readmission in patients with acute heart failure (AHF) during the vulnerable phase based on machine learning algorithms, and to identify the optimal model. Methods: A total of 651 AHF patients admitted to the Department of Cardiology of the Second Affiliated Hospital of Army Medical University from October 2019 to July 2021 were included. Clinical data consisting of admission vital signs, comorbidities, and laboratory results were collected from electronic medical records. The composite endpoint was defined as all-cause death or readmission for worsening heart failure within 3 months after discharge. The patients were divided into a training set (521 patients) and a test set (130 patients) in a ratio of 8:2 by simple random sampling. Six machine learning models were developed: logistic regression (LR), random forest (RF), decision tree (DT), light gradient boosting machine (LGBM), extreme gradient boosting (XGBoost), and neural networks (NN). Receiver operating characteristic (ROC) curves and decision curve analysis (DCA) were used to evaluate the predictive performance and clinical benefit of the models, and Shapley additive explanations (SHAP) were used to interpret the effect of different clinical characteristics on the models. Results: Of the 651 AHF patients included, 203 (31.2%) died or were readmitted during the vulnerable phase. ROC curve analysis showed that the AUC values of the LR, RF, DT, LGBM, XGBoost, and NN models were 0.707, 0.756, 0.616, 0.677, 0.768, and 0.681, respectively; the XGBoost model had the highest AUC. DCA showed that the XGBoost model yielded greater clinical net benefit than the other models, with the best predictive performance. SHAP analysis showed that the clinical features with the greatest impact on model output were serum uric acid, D-dimer, mean arterial pressure, B-type natriuretic peptide, left atrial diameter, body mass index, and New York Heart Association (NYHA) classification. Conclusion: The XGBoost model performs best in predicting the risk of death or readmission of AHF patients during the vulnerable phase.
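
A minimal sketch of the decision curve analysis (DCA) used above: the net benefit of acting on model predictions at a threshold probability pt, computed as TP/N - FP/N × pt/(1-pt). The outcome labels and risk scores below are random placeholders.

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit = TP/N - FP/N * pt/(1-pt) at threshold probability pt."""
    pred = y_prob >= threshold
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * threshold / (1 - threshold)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 130)                          # placeholder outcomes
p = np.clip(y * 0.3 + rng.random(130) * 0.7, 0, 1)   # placeholder risks

for pt in (0.1, 0.2, 0.3, 0.4):
    print(f"pt={pt:.1f}: net benefit = {net_benefit(y, p, pt):+.3f}")
```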

17.
Journal of Army Medical University ; (semimonthly): 753-759, 2024.
Article in Chinese | WPRIM | ID: wpr-1017588

ABSTRACT

Objective: To establish an early prediction model for the diagnosis of severe acute pancreatitis based on improved machine learning models, and to analyze its clinical value. Methods: A case-control study was conducted on 352 patients with acute pancreatitis admitted to the Gastroenterology and Hepatobiliary Surgery Departments of the Army Medical Center of PLA and the Emergency and Critical Care Medicine Department of No. 945 Hospital of Joint Logistics Support Force of PLA from January 2014 to August 2023. According to disease severity, the patients were divided into a severe group (n=88) and a non-severe group (n=264). A RUSBoost model with an improved Archimedes optimization algorithm was used to analyze 39 routine laboratory biochemical indicators within 48 h after admission and construct an early diagnostic prediction model for severe acute pancreatitis; feature screening and hyperparameter optimization were completed simultaneously. ReliefF feature-importance ranking and multivariate logistic analysis were used to assess the value of the selected features. Results: In the training set, the area under the curve (AUC) of the improved machine learning model was 0.922; in the test set, it reached 0.888. The 4 key features for predicting severe acute pancreatitis selected by the improved Archimedes optimization algorithm were C-reactive protein, blood chloride, blood magnesium, and fibrinogen levels, consistent with the results of ReliefF feature-importance ranking and multivariate logistic analysis. Conclusion: Applying the improved machine learning model to laboratory examination results can help predict the occurrence of severe acute pancreatitis early.
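
A minimal sketch of the boosting approach named above, using imbalanced-learn's RUSBoostClassifier (random undersampling combined with AdaBoost). The synthetic data mimics the roughly 1:3 severe/non-severe imbalance; the improved Archimedes optimization step for feature and hyperparameter search is not reproduced here.

```python
from collections import Counter
from imblearn.ensemble import RUSBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Stand-in for 352 patients with 39 laboratory indicators, ~25% severe cases.
X, y = make_classification(n_samples=352, n_features=39, weights=[0.75],
                           random_state=0)
print("class balance:", Counter(y))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RUSBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```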

18.
Journal of Army Medical University ; (semimonthly): 760-767, 2024.
Article in Chinese | WPRIM | ID: wpr-1017589

ABSTRACT

Objective: To construct a machine learning model for predicting postoperative liver injury in patients undergoing non-liver surgery, based on preoperative indicators and intraoperative medication. Methods: A case-control study was conducted on 315 patients with liver injury after non-liver surgery, selected from databases of 3 large general hospitals covering January 2014 to September 2022. At a positive-to-negative ratio of 1:3, 928 patients from the same period who underwent non-liver surgery without liver injury were randomly matched as negative controls. These 1,243 patients were randomly divided into a modeling group (n=869) and a validation group (n=374) in a 7:3 ratio using R with a set random seed. Preoperative clinical indicators (basic information, medical history, relevant scale scores, surgical information, and laboratory test results) and intraoperative medication were used to construct prediction models for liver injury after non-liver surgery based on 4 machine learning algorithms: k-nearest neighbors (KNN), linear support vector machine (SVM), logistic regression (LR), and extreme gradient boosting (XGBoost). In the validation group, the receiver operating characteristic (ROC) curve, precision-recall (P-R) curve, decision curve analysis (DCA), Kappa value, sensitivity, specificity, Brier score, and F1 score were used to evaluate model performance. Results: Among the 4 algorithms, the XGBoost model was optimal for predicting liver injury after non-liver surgery. The area under the ROC curve (AUROC) was 0.916 (95% CI: 0.883-0.949), the area under the precision-recall curve (AUPRC) was 0.841, the Brier score was 0.097, and sensitivity and specificity were 78.95% and 87.10%, respectively. Conclusion: The XGBoost-based model effectively predicts the occurrence of postoperative liver injury after non-liver surgery.
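
A minimal sketch of the validation metrics reported above, computed with scikit-learn from a model's predicted probabilities. The labels and scores are random placeholders standing in for the validation group.

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             brier_score_loss, f1_score, confusion_matrix)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 374)                          # validation labels
y_prob = np.clip(y_true * 0.4 + rng.random(374) * 0.6, 0, 1)
y_pred = (y_prob >= 0.5).astype(int)                      # assumed cutoff

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUROC:", roc_auc_score(y_true, y_prob))
print("AUPRC:", average_precision_score(y_true, y_prob))
print("Brier score:", brier_score_loss(y_true, y_prob))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("F1:", f1_score(y_true, y_pred))
```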

19.
Article in Chinese | WPRIM | ID: wpr-1017644

ABSTRACT

To address the throughput limitations of digital nucleic acid analysis, a tricolor combination-based droplet coding technique was developed to achieve multiplex digital nucleic acid analysis with flexible throughput expansibility. To improve analysis efficiency, a machine learning-based method was further developed for automatic decoding of the color-coded droplet array. The machine learning algorithm enabled the computer program to automatically extract the color-position-quantity information of the droplets. By correlating the color-position-quantity of droplets before and after nucleic acid amplification, the proportion of positive droplets for each target was rapidly determined. This droplet decoding strategy was applied to multiplex digital nucleic acid analysis. The experimental results demonstrated that the droplet decoding method was fast and accurate, with the decoding process completed within 2 min and droplet identification accuracy exceeding 99%. Additionally, the nucleic acid quantification results showed good correlation (R² > 0.99) with those reported by a commercial digital PCR instrument.
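
A speculative sketch of the decoding idea described above: match droplet centroids by position before and after amplification, then count, per color code, the fraction that turned positive. All coordinates, codes, and positivity flags are synthetic placeholders standing in for segmented image data; the actual algorithm is not specified in the abstract.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pre_xy = rng.random((500, 2)) * 1000               # droplet centroids before PCR
pre_code = rng.integers(0, 3, 500)                 # tricolor code: 0, 1, or 2
post_xy = pre_xy + rng.normal(0, 0.5, (500, 2))    # slight positional drift
post_positive = rng.random(500) < 0.2              # amplification-positive flags

# Nearest-neighbour matching recovers each droplet's identity after PCR.
dist, idx = cKDTree(pre_xy).query(post_xy)
matched = dist < 2.0                               # reject implausible matches

for code in range(3):
    sel = matched & (pre_code[idx] == code)
    frac = post_positive[sel].mean() if sel.any() else float("nan")
    print(f"target {code}: positive fraction = {frac:.2%}")
```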

20.
Article in Chinese | WPRIM | ID: wpr-1017795

ABSTRACT

Objective: To explore machine learning models and risk factors for hospital-acquired infection caused by carbapenem-resistant Enterobacteriaceae (CRE). Methods: Clinical data from 451 patients infected with extended-spectrum β-lactamase (ESBL)-producing Enterobacteriaceae treated in the hospital from 2018 to 2022 were retrospectively collected. The patients were divided into a CRE group (115 cases) and a carbapenem-sensitive group (336 cases) according to carbapenem susceptibility. Four machine learning methods, including logistic regression, random forest, support vector machine, and neural network, were used to build prediction models, which were evaluated with receiver operating characteristic curves. Risk factors for CRE infection were analyzed based on the best-performing model. Results: The random forest model performed best, with an area under the curve of 0.9523. The risk factors for predicting CRE infection identified by the random forest model included 15 clinical data items, namely fever for more than 3 days, cerebral injury, drainage fluid sample, trunk surgery, first-level or special-level nursing, ICU treatment, procalcitonin, anti-anaerobic therapy, use of third-generation cephalosporins, age, pre-albumin, creatinine, white blood cell count, and albumin. Conclusion: The CRE prediction model developed in this study has good predictive value, and the identified risk factors can guide the early prevention and treatment of CRE infection in clinical practice.
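
A minimal sketch of the best-performing approach above: a random forest whose impurity-based feature importances suggest candidate risk factors. The feature names echo a few items from the abstract, but the data are random placeholders, not the study's clinical records.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
cols = ["fever_gt_3d", "icu_treatment", "procalcitonin", "age",
        "creatinine", "wbc_count", "albumin"]
X = pd.DataFrame(rng.random((451, len(cols))), columns=cols)
y = rng.integers(0, 2, 451)   # 1 = CRE infection, 0 = carbapenem-sensitive

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
# Rank features by impurity importance as a first pass at risk factors.
ranking = pd.Series(rf.feature_importances_, index=cols)
print(ranking.sort_values(ascending=False))
```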
