Results 1 - 20 of 46
1.
Acta odontol. Colomb. (En linea) ; 12(2): 61-77, Jul-Dec. 2022. tab, graf
Article in Spanish | LILACS | ID: biblio-1397171

Abstract



Objective: To establish the parameters for the visual and instrumental evaluation of tooth color in in-vitro studies, based on the scientific literature published between 2015 and 2021. Methods: The search was carried out in the PubMed, Web of Science, Science Direct, Scopus, Scielo and Lilacs databases, in the Google Scholar search engine, and in the Wiley and Springer publishers' libraries, using the keywords "tooth", "color", "in vitro", "color perception", "shade matching", "thresholds", "appearance", "surrounding", "CIELAB", and "CIEDE2000". The literature was selected according to title, abstract and full text, taking into account the eligibility criteria. Results: The search yielded a total of 37 publications, which were grouped into three topics: 1. visual color acquisition: environmental conditions, observers and levelling; 2. instrumental color acquisition: instruments; 3. data processing: calculation of color differences and of perceptibility (PT) and acceptability (AT) thresholds. Conclusions: The most important aspects in visual assessment are lighting, the registration environment (site, surroundings and background around the sample), the geometric viewing conditions, the observers and the use of shade guides. For instrumental assessment, the appropriate device must be chosen according to its precision and reproducibility, spectroradiometers and spectrophotometers for clinical use being among the most precise. Data processing to establish the variations of each coordinate, the color differences (ΔE) under CIELAB and CIEDE2000, the thresholds and the guidelines is presented.
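As a side note, the color-difference calculation this review centers on can be sketched in a few lines. The CIELAB ΔE*ab metric is simply the Euclidean distance between two colors in L*a*b* space; CIEDE2000 adds hue and chroma weighting terms and is considerably more involved, so only the simpler formula is shown. The shade values below are invented for illustration, not taken from the review.

```python
import math

def delta_e_cielab(lab1, lab2):
    """Delta E*ab: Euclidean distance between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Illustrative L*a*b* values only. In the literature, a difference around
# 1 is often taken as a perceptibility threshold (PT); acceptability
# thresholds (AT) are larger.
shade_a = (78.0, 2.0, 18.0)
shade_b = (76.5, 2.4, 19.1)
print(round(delta_e_cielab(shade_a, shade_b), 2))
```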


Subjects
Tooth, Color Perception, In Vitro Techniques, Differential Threshold
2.
Movimento (Porto Alegre) ; 28: e28037, 2022. tab
Article in Portuguese | LILACS | ID: biblio-1406047

Abstract



This paper deals with the potentials and limits of using data processing to assist in the production and systematization of scientific knowledge. Through an experimental exercise involving the use of an algorithm, it aims to discuss the feasibility of using automated collection techniques to survey and produce data usable in scientific research. As a demonstration, it seeks to reproduce, in an automated way, processes related to the data collection of research previously published in this journal, describing methodologically how the extraction and treatment of these data were organized and developed. As a result, it finds that automated processing can be a productive and efficient alternative to assist in the systematization and analysis of the growing accumulation of publications in the scientific field, and may open new methodological paths for research in Physical Education, especially considering the volume of data that can be collected and analyzed on social networks, forums and other web platforms. (AU)
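As a hedged illustration of the kind of automated collection the paper describes, the sketch below parses article titles out of an HTML fragment with Python's standard-library parser. The tag name, CSS class and sample markup are all invented; a real collector would fetch live pages (e.g. with urllib) and adapt the parser to the journal's actual markup.

```python
from html.parser import HTMLParser

class TitleCollector(HTMLParser):
    """Collect text inside <h2 class="article-title"> elements."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "article-title") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data.strip())

# Invented sample markup standing in for a fetched journal page.
sample_html = """
<h2 class="article-title">Physical Education and data mining</h2>
<h2 class="article-title">Automated collection in sports research</h2>
"""
collector = TitleCollector()
collector.feed(sample_html)
print(collector.titles)
```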




Subjects
Humans, Male, Female, Automatic Data Processing, Bibliometrics, Information Storage and Retrieval
3.
China Journal of Chinese Materia Medica ; (24): 279-284, 2022.
Article in Chinese | WPRIM | ID: wpr-927935

Abstract

Quality is the guarantee of the clinical safety and effectiveness of Chinese medicine, and accurate quality evaluation is key to its standardization and modernization. Efforts have been made in China to improve Chinese medicine quality and strengthen quality and safety supervision, but rapid and accurate quality evaluation of complex Chinese medicine samples remains a challenge. Building on recent developments in ambient mass spectrometry and its application to the quality evaluation of complex Chinese medicine systems, the authors developed multi-scenario quality evaluation strategies for Chinese medicine. A systematic methodology was proposed for specific areas such as real-time quality monitoring of complex Chinese medicine decoction systems, rapid toxicity grading of compound Chinese patent medicines, and evaluation of the bulk medicinals of Chinese patent medicines. By enabling multi-scenario analysis of Chinese medicine, this work is expected to provide universal research ideas and technical methods for rapid and accurate quality evaluation and to boost the high-quality development of the Chinese medicine industry.


Subjects
China, Drugs, Chinese Herbal, Mass Spectrometry, Medicine, Chinese Traditional, Nonprescription Drugs, Reference Standards
4.
Santa Tecla, La Libertad; ITCA Editores; 2021. 60 p. ilus., tab., 28 cm.
Monograph in Portuguese | BISSAL, LILACS | ID: biblio-1352868

Abstract



This report presents a study on the development of a multiplatform system for managing the emergency operations, inventory and human resources of the Salvadoran Red Cross in the municipality of Chinameca, San Miguel. The objectives were to design a relational data model with scalability characteristics and to define software processes that meet and resolve the current needs of the Red Cross. As part of the research, interfaces were designed to improve the user experience on the platform. Finally, the staff were trained in the use of the platform. The relational data model implemented in this system allows it to adapt to change thanks to the logical operation of the system, making it easier to update to new versions and technologies without affecting performance.
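The scalable relational model the report describes might look roughly like the following sketch, in which operations reference staff through foreign keys so that new modules can be added without altering existing tables. All table and column names are invented for illustration; the actual system's schema is not described in the abstract.

```python
import sqlite3

# In-memory database standing in for the system's relational store.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE staff (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE inventory_item (
    id INTEGER PRIMARY KEY,
    description TEXT NOT NULL,
    quantity INTEGER NOT NULL DEFAULT 0
);
CREATE TABLE emergency_operation (
    id INTEGER PRIMARY KEY,
    started_at TEXT NOT NULL,
    lead_staff_id INTEGER REFERENCES staff(id)
);
""")
conn.execute("INSERT INTO staff (name) VALUES ('A. Volunteer')")
conn.execute(
    "INSERT INTO emergency_operation (started_at, lead_staff_id) VALUES (?, ?)",
    ("2021-05-01", 1),
)
# A join resolves the operation back to its lead staff member.
row = conn.execute(
    """SELECT s.name FROM emergency_operation o
       JOIN staff s ON s.id = o.lead_staff_id"""
).fetchone()
print(row[0])
```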


Subjects
Red Cross/organization & administration, Health Services Administration/trends, Software Design, Emergencies, Software, Equipment and Supplies, Research Report, Health Services Needs and Demand
5.
Epidemiol. serv. saúde ; 30(1): e2020576, 2021. tab, graf
Article in English, Portuguese | LILACS | ID: biblio-1286334

Abstract





Health status indicators are an important tool for monitoring the performance of public health actions, identifying trends and priority regions for resource allocation. An R package was developed in order to increase the feasibility of handling and analyzing health status indicator data. The rtabnetsp package requests data from TabNet servers on the São Paulo State Department of Health website, retrieving and preprocessing the data for user manipulation. This article presents the rtabnetsp package and its functions, installation and use; as well as providing examples of its functionalities, which involve listing and searching among available indicators, selecting desired content and obtaining data aggregated according to regionalization level held on the data matrix, enabling greater agility in tasks regarding public health management in the state of São Paulo.
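Since rtabnetsp is an R package, the Python fragment below only illustrates the idea of the final step it automates: aggregating retrieved indicator data by the regionalization level available in the data matrix. The CSV layout, region names and numbers are invented.

```python
import csv
import io
from collections import defaultdict

# Invented stand-in for an indicator table retrieved from a TabNet server.
raw = """region;municipality;cases
Grande SP;Osasco;10
Grande SP;Guarulhos;15
Interior;Campinas;7
"""

# Aggregate the indicator by the chosen regionalization level.
totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(raw), delimiter=";"):
    totals[row["region"]] += int(row["cases"])

print(dict(totals))
```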


Subjects
Humans, Health Status Indicators, Health Management, Health Information Systems, Brazil, Automatic Data Processing
6.
Rev. Fac. Med. (Bogotá) ; 68(1): 117-120, Jan.-Mar. 2020.
Article in English | LILACS-Express | LILACS | ID: biblio-1125615

Abstract

Big data is a term that comprises a group of technological tools capable of processing extremely large heterogeneous data sets, which are continuously collected and are available to be used at any time, and, therefore, constitutes a source of scientific evidence production. In the pharmacoepidemiology field, analyses made using these data sets may result in the development of pharmacological therapies that are more efficient, less expensive, and have a lower occurrence rate of adverse reactions. Likewise, the use of tools such as Text Mining or Machine Learning has led to major advances in pharmacoepidemiology and pharmacovigilance areas, so it is likely that these tools will be increasingly used over time.


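A toy illustration of the text-mining idea mentioned in the abstract: counting mentions of suspected adverse-reaction terms in free-text reports. The term list and reports are invented; real pharmacovigilance pipelines rely on curated dictionaries (such as MedDRA) and far richer NLP.

```python
import re
from collections import Counter

# Invented mini-dictionary of adverse-reaction terms.
ADR_TERMS = {"nausea", "headache", "rash"}

# Invented free-text reports.
reports = [
    "Patient reported nausea and headache after first dose.",
    "Mild rash observed; nausea resolved within 24 h.",
]

# Tokenize each report and count dictionary hits.
counts = Counter(
    word
    for report in reports
    for word in re.findall(r"[a-z]+", report.lower())
    if word in ADR_TERMS
)
print(counts.most_common())
```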

7.
Journal of Pharmaceutical Analysis ; (6): 240-246, 2020.
Article in Chinese | WPRIM | ID: wpr-824001

Abstract

Compared to their linear counterparts, cyclic peptides show better biological activities, such as anti-bacterial, immunosuppressive and anti-tumor activities, and better pharmaceutical properties owing to their conformational rigidity. However, cyclic peptides can form numerous putative metabolites from potential hydrolytic cleavages, and their fragments are very difficult to interpret. These characteristics pose a great challenge when analyzing metabolites of cyclic peptides by mass spectrometry. This study assessed and applied a software-aided analytical workflow for the detection and structural characterization of cyclic peptide metabolites. Insulin and atrial natriuretic peptide (ANP), as model cyclic peptides, were incubated with trypsin/chymotrypsin and/or rat liver S9, followed by data acquisition on a TripleTOF 5600. The resulting full-scan MS and MS/MS datasets were automatically processed through a combination of targeted and untargeted peak-finding strategies. MS/MS spectra of predicted metabolites were interrogated against putative metabolite sequences, in light of a, b, y and internal fragment series. The resulting fragment assignments led to the confirmation and ranking of the metabolite sequences and the identification of metabolic modifications. As a result, 29 metabolites with linear or cyclic structures were detected in the insulin incubation with the hydrolytic enzymes. The sequences of twenty insulin metabolites were further determined and were consistent with the hydrolytic sites of these enzymes. In the same manner, multiple metabolites of insulin and ANP formed in rat liver S9 incubation were detected and structurally characterized, some of which had not been previously reported. The results demonstrate the utility of a software-aided data processing tool in the detection and identification of cyclic peptide metabolites.
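The targeted peak-finding step can be illustrated as follows: predict singly charged b- and y-ion masses for a candidate linear peptide and match them against observed MS/MS peaks within a tolerance. The residue masses are standard monoisotopic values, but the peptide, peak list and tolerance are invented; the actual workflow also handles cyclic structures and internal fragment series.

```python
# Standard monoisotopic residue masses (a small subset, for illustration).
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "V": 99.06841,
    "L": 113.08406, "K": 128.09496, "F": 147.06841,
}
PROTON, WATER = 1.00728, 18.01056

def fragment_mz(seq):
    """Singly charged b- and y-ion m/z values for a peptide sequence."""
    ions = {}
    for i in range(1, len(seq)):
        ions[f"b{i}"] = sum(RESIDUE_MASS[r] for r in seq[:i]) + PROTON
        ions[f"y{len(seq) - i}"] = (
            sum(RESIDUE_MASS[r] for r in seq[i:]) + WATER + PROTON
        )
    return ions

def match_peaks(seq, peaks, tol=0.01):
    """Return {ion_label: peak} for observed peaks within tol of a prediction."""
    return {
        label: peak
        for label, mz in fragment_mz(seq).items()
        for peak in peaks
        if abs(peak - mz) <= tol
    }

# Invented peak list for the invented peptide "GLK".
observed = [171.113, 260.197]
print(match_peaks("GLK", observed))
```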

8.
Cad. Saúde Pública (Online) ; 35(9): e00032419, 2019. graf
Article in Portuguese | LILACS | ID: biblio-1039423

Abstract



This study aimed to develop an algorithm for downloading and preprocessing microdata furnished by the Brazilian Health Informatics Department (DATASUS) for various health information systems, using the R statistical programming language. The package allows downloading and preprocessing data from various health information systems, with the inclusion of labeling categorical fields in the files. The download function was capable of directly accessing and reducing the workload for the selection of microdata files and variables in DATASUS, while the preprocessing function enabled automatic coding of various categorical fields. The package thus enables a continuous workflow in the same program, in which the algorithm allows downloading and preprocessing and other packages in R allow analyzing data from the health information systems in the Brazilian Unified National Health System (SUS).
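The preprocessing step described above, replacing coded categorical fields with human-readable labels, can be sketched as below. The field name and code table are invented; the real package operates on DATASUS files in R.

```python
# Invented code table standing in for a DATASUS categorical field.
SEX_LABELS = {"1": "Male", "2": "Female", "9": "Unknown"}

def label_records(records, field, labels):
    """Return new records with `field` codes replaced by their labels.

    Unknown codes are left unchanged so no information is silently lost.
    """
    return [
        {**rec, field: labels.get(rec[field], rec[field])}
        for rec in records
    ]

raw = [{"id": 1, "sex": "1"}, {"id": 2, "sex": "2"}, {"id": 3, "sex": "7"}]
print(label_records(raw, "sex", SEX_LABELS))
```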




Subjects
Humans, Medical Informatics/instrumentation, Information Systems/instrumentation, Databases, Factual, Algorithms, Brazil, Workflow
9.
Chinese Journal of Epidemiology ; (12): 17-19, 2019.
Article in Chinese | WPRIM | ID: wpr-738208

Abstract

Precision medicine has become a key strategy in China's science and technology development priorities. Large population-based cohorts are valuable resources for preventing and treating major diseases in the population and can contribute scientific evidence for personalized treatment and precise prevention. The fundamental question underlying these achievements, therefore, is how to construct a large population-based cohort in a standardized way. The Chinese Preventive Medicine Association coordinated experienced researchers from Peking University and other well-known institutes to write two group standards on data management: Technical specification of data processing for large population-based cohort study (T/CPMA 001-2018) and Technical specification of data security for large population-based cohort study (T/CPMA 002-2018). The standards were drafted to be scientific, normative, feasible and generalizable. They propose key principles and recommend technical specifications for data standardization, cleansing, quality control, data integration, data privacy protection, and database security and stability management in large cohort studies. The standards aim to guide the large population-based cohorts that have been or are intended to be established in China, including national cohorts, regional population cohorts and special population cohorts, thereby improving domestic scientific research and international influence and supporting decision-making and practice in disease prevention and control.
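The data-cleansing and quality-control rules such standards cover might be implemented along these lines. The variables, ranges and records are invented; the standards specify principles rather than code.

```python
# Invented range/missingness rules for two cohort variables.
RULES = {
    "age": lambda v: v is not None and 0 <= v <= 120,
    "sbp": lambda v: v is not None and 50 <= v <= 300,  # systolic BP, mmHg
}

def quality_flags(record):
    """Return the names of fields that fail their range/missingness check."""
    return [field for field, ok in RULES.items() if not ok(record.get(field))]

# Invented cohort records: one clean, one with an impossible age and a
# missing blood-pressure value.
cohort = [
    {"id": "P1", "age": 52, "sbp": 128},
    {"id": "P2", "age": -3, "sbp": None},
]
print([(r["id"], quality_flags(r)) for r in cohort])
```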

10.
Chinese Journal of Epidemiology ; (12): 17-19, 2019.
Article in Chinese | WPRIM | ID: wpr-736740

Abstract

Precision medicine has become a key strategy in China's science and technology development priorities. Large population-based cohorts are valuable resources for preventing and treating major diseases in the population and can contribute scientific evidence for personalized treatment and precise prevention. The fundamental question underlying these achievements, therefore, is how to construct a large population-based cohort in a standardized way. The Chinese Preventive Medicine Association coordinated experienced researchers from Peking University and other well-known institutes to write two group standards on data management: Technical specification of data processing for large population-based cohort study (T/CPMA 001-2018) and Technical specification of data security for large population-based cohort study (T/CPMA 002-2018). The standards were drafted to be scientific, normative, feasible and generalizable. They propose key principles and recommend technical specifications for data standardization, cleansing, quality control, data integration, data privacy protection, and database security and stability management in large cohort studies. The standards aim to guide the large population-based cohorts that have been or are intended to be established in China, including national cohorts, regional population cohorts and special population cohorts, thereby improving domestic scientific research and international influence and supporting decision-making and practice in disease prevention and control.

11.
Intestinal Research ; : 365-374, 2019.
Article in English | WPRIM | ID: wpr-764154

Abstract

BACKGROUND/AIMS: TrueColours ulcerative colitis (TCUC) is a comprehensive web-based program that functions through email, providing direct links to questionnaires. Several similar programs are available, however patient perspectives are unexplored. METHODS: A pilot study was conducted to determine feasibility, usability and patient perceptions of real-time data collection (daily symptoms, fortnightly quality of life, 3 monthly outcomes). TCUC was adapted from a web-based program for patients with relapsing-remitting bipolar disorder, using validated UC indices. A semi-structured interview was developed and audio-recorded face-to-face interviews were conducted after 6 months of interaction with TCUC. Transcripts were coded in NVivo11, a qualitative data analysis software package. An inductive approach and thematic analysis was conducted. RESULTS: TCUC was piloted in 66 patients for 6 months. Qualitative analysis currently defies statistical appraisal beyond “data saturation,” even if it has more influence on clinical practice than quantitative data. A total of 28 face-to-face interviews were conducted. Six core themes emerged: awareness, control, decision-making, reassurance, communication and burden of treatment. There was a transcending overarching theme of patient empowerment, which cut across all aspects of the TCUC experience. CONCLUSIONS: Patient perception of the impact of real-time data collection was extremely positive. Patients felt empowered as a product of the self-monitoring format of TCUC, which may be a way of improving self-management of UC whilst also decreasing the burden on the individual and healthcare services.


Subjects
Humans, Automatic Data Processing, Bipolar Disorder, Colitis, Ulcerative, Data Collection, Delivery of Health Care, Electronic Mail, Patient Participation, Pilot Projects, Quality of Life, Self Care, Statistics as Topic, Ulcer
12.
Rev. méd. Urug ; 34(3): 133-138, jul. 2018.
Article in Spanish | LILACS | ID: biblio-914713

Abstract



"Triage" -the process of quickly examining patients according to their priority of treatment - is a tool that has been recognized for institutional and administrative management in the Emergency Departments. Eight years after its introduction, the Clinicas Hospital has an automatized and normalized process which has become the organizational bases to address consultations in a qualified manner. The study aimed to compare triage done by health professionals who had been trained and the one done by health professionals with no prior training of IT support, to the computerized system, comparing it with results in real time. A higher level of agreement between trained health professionals with the results in the computerized system, when compared to professionals who lacked training was observed. The trained observer with the most matching results achieved 55.9% of agreements with the computerized triage system (19 out of 34), and the observer with the least matching results obtained 32.4% of similarities (11 out of 34). Global agreement level was 41.5% in the group of professionals who were not experts. Experienced observers accounted for 79.4% (27/34) of equal results and kappa index of 0.695, whereas trained observers had 0.19 and 0.23 Kappa indexes when compared to the computerized system and the experiences observer, respectively. Therefore, we find that a short training in triage does not increase agreement when compared to the computerized system and it does increase when we compare it to triage by an experienced observer. These results should be validated in larger series of patients. (AU)


A "triagem" -processo de classificação de pacientes por prioridades assistenciais- é uma ferramenta reconhecida para a gestão assistencial e administrativa dos Departamentos de Emergência. No Hospital de Clínicas, depois de oito anos de funcionamento, está disponível um processo de triagem automatizado e normalizado que funciona como base da organização para a abordagem qualificada das consultas. O objetivo deste trabalho foi comparar a concordância da triagem realizada por pessoal de saúde treinado e não treinado previamente sem apoio informático, versus sistema informatizado, comparando os resultados em tempo real. Observou-se um maior nível de concordância do pessoal treinado com os resultados do sistema informatizado, se comparamos com o pessoal não treinado. O observador capacitado com mais resultados concordantes teve 55,9% de concordâncias com o sistema informatizado de triagem (19 concordantes de 34), e o que obteve menos resultados concordantes 32,4% de similitude (11 concordantes de 34). No grupo de no expertos a média global de concordância foi 41,5%. O observador experto teve 79,4% (27/34) de resultados iguais e um índice kappa respeito al sistema informatizado de triagem. O observador experto teve um índice de Kappa de 0,695, enquanto os observadores capacitados tiveram um índice kappa de 0.19 y 0.23 quando foram comparados com o sistema informático e o observador experimentado, respectivamente. Conclui-se que um período breve de treinamento em triagem não aumenta a concordância quando se compara com si e com um observador experimentado. Estes resultados deveriam ser validados em series maiores de pacientes. (AU)


Subjects
Automatic Data Processing, Triage
13.
Rev. cub. inf. cienc. salud ; 29(1): 55-73, ene.-mar. 2018. ilus, tab
Article in Spanish | LILACS, CUMED | ID: biblio-900943

Abstract



Due to the diversity of methods used to enter author-affiliation information, the resulting lack of standardization of bibliographic data has become one of the problems limiting analysis of metric information in terms of execution time, reliability of indicators and size of the data corpus. The purpose of the study was to propose requirements to improve data normalization in metric analysis software. To achieve this objective, a diagnosis was made of the main methods and techniques used worldwide in this type of study. The main result is the presentation of requirements to be met by an application for automated pre-processing of data for metric purposes. A proposal is made of the database, tasks, steps and algorithms that this application will contain. A combination of algorithms should be used to disambiguate author and affiliation fields(AU)
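As a rough illustration of the kind of algorithm combination the abstract recommends for disambiguating affiliation fields, the sketch below chains a rule-based normalization pass with a string-similarity pass. Every function name, variant string, and threshold here is an assumption for illustration, not the authors' actual design:

```python
# Hedged sketch: combine a normalization pass with a similarity pass to
# decide whether two affiliation strings refer to the same institution.
from difflib import SequenceMatcher

def normalize(affiliation: str) -> str:
    """Rule-based pass: lowercase, strip punctuation, collapse whitespace."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in affiliation)
    return " ".join(cleaned.lower().split())

def same_affiliation(a: str, b: str, threshold: float = 0.85) -> bool:
    """Similarity pass: treat two variants as the same institution when
    their normalized forms are identical or close enough."""
    na, nb = normalize(a), normalize(b)
    if na == nb:  # exact match after normalization
        return True
    return SequenceMatcher(None, na, nb).ratio() >= threshold

variants = ("Univ. of Havana, Cuba", "University of Havana - Cuba")
print(same_affiliation(*variants))
```

In a real pipeline this pairwise test would feed a clustering step that assigns one canonical institution name per cluster; the 0.85 cut-off is a tunable assumption.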


Subjects
Humans , Electronic Data Processing , Data Interpretation, Statistical , Data Mining
14.
Cad. Saúde Pública (Online) ; 34(6): e00088117, 2018. tab, graf
Article in Portuguese | LILACS | ID: biblio-952404

Abstract


The aim of this study was to demonstrate the application of a deterministic post-processing stage, based on measures of similarity, to increase the performance of probabilistic record linkage with and without manual revision. The databases used in the study were the Brazilian Information System for Notificable Diseases and the Brazilian Mortality Information System, from 2007 to 2015, in Palmas, Tocantins State, Brazil. The probabilistic software was OpenRecLink, and a deterministic post-processing stage was applied to the data obtained from three different probabilistic linkage strategies. The three strategies were compared to each other, and the deterministic post-processing stage was added. The sensibility of the probabilistic strategies without manual revision varied from 69.1% and 77.8%, while the same strategies plus the deterministic post-processing stage varied from 92.9% to 96.3%. Sensitivity of the two probabilistic strategies with manual revision was similar to that obtained by the deterministic post-processing stage, but the number of matches that were referred to manual revision by the two probabilistic strategies varied between 1,177 and 1,132 records, compared to 149 and 145 after the deterministic post-processing stage. Our findings suggest that the deterministic post-processing stage is a promising option, both to increase the sensitivity and to reduce the number of matches that need to be reviewed manually, or even to eliminate the need for manual revision altogether.
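A minimal sketch of the general idea of a similarity-based deterministic post-processing step, as described above: after probabilistic linkage leaves a set of uncertain candidate pairs, a deterministic rule auto-accepts the clearest matches and routes only the remainder to manual review. The record fields, sample pairs, and the 0.85 cut-off are illustrative assumptions, not the study's actual rules or data:

```python
# Hedged sketch: deterministic post-processing of uncertain record pairs.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Similarity measure between two names (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def post_process(uncertain_pairs, cutoff=0.85):
    """Split uncertain pairs into automatic links and a smaller manual-review queue."""
    linked, review = [], []
    for rec_a, rec_b in uncertain_pairs:
        # Deterministic rule: identical birth date AND highly similar name.
        if rec_a["dob"] == rec_b["dob"] and \
           name_similarity(rec_a["name"], rec_b["name"]) >= cutoff:
            linked.append((rec_a, rec_b))
        else:
            review.append((rec_a, rec_b))
    return linked, review

pairs = [
    ({"name": "Maria da Silva", "dob": "1980-02-01"},
     {"name": "Maria Silva",    "dob": "1980-02-01"}),
    ({"name": "Joao Souza",     "dob": "1975-07-10"},
     {"name": "Joana Sousa",    "dob": "1976-07-10"}),
]
linked, review = post_process(pairs)
print(len(linked), len(review))
```

The study's gain in sensitivity comes from exactly this effect: pairs the probabilistic step scored as uncertain are resolved automatically, shrinking the manual-review queue by an order of magnitude.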




Subjects
Humans , Software , Electronic Data Processing/methods , Medical Record Linkage/methods , Databases as Topic/statistics & numerical data , Brazil , Probability , Reproducibility of Results , Medical Records Systems, Computerized/statistics & numerical data , Data Accuracy
15.
China Journal of Chinese Materia Medica ; (24): 4182-4191, 2018.
Article in Chinese | WPRIM | ID: wpr-775361

Abstract

The internal environment of traditional Chinese medicine (TCM) metabolism is a dynamic process, in line with the "holistic-dynamic-comprehensive-analytic" character of metabonomics; metabonomics therefore has a unique advantage in revealing the metabolic patterns of TCM. Its application in TCM has great practical significance for understanding the material basis of pharmacodynamic/toxic effects and their mechanisms, and for guiding the determination of dosage and treatment course; at the same time, the scientific compatibility of TCM prescriptions, TCM germplasm resources, and preclinical safety/toxicity can be widely investigated. At present, metabolomics has become a leading technology in many industries and fields, including TCM research and development. The core of metabolomics is analytical technology: only with appropriate analytical techniques can comprehensive metabolite profiles, or accurate identification of known metabolites, be obtained from complex biological samples. A series of bioinformatics, cheminformatics, and chemometrics methods is then needed to process the data and extract the underlying patterns and information from these large datasets. This paper explains and analyzes the concept of metabolomics, the relevant analytical techniques, data-processing methods, and applications. In addition, the core problems of metabolomics and countermeasures to them are summarized, and its future development is discussed.


Subjects
Humans , Computational Biology , Medicine, Chinese Traditional , Metabolomics , Research
16.
Journal of Shanghai Jiaotong University(Medical Science) ; (12): 805-810, 2018.
Article in Chinese | WPRIM | ID: wpr-843665

Abstract

Data processing and analysis are major bottlenecks in high-throughput metabolomics research, and bioinformatics tools have emerged to handle these high-throughput datasets. Such tools can preprocess complex high-dimensional datasets, detect and annotate metabolites, perform various statistical analyses, and support interpretation of the results. Following the workflow and methods of metabolomics data processing and analysis, this paper surveys several integrated metabolomics software packages and compares the merits and drawbacks of four typical tools, providing users with a reference guide for software selection.

17.
Acta Medica Philippina ; : 374-379, 2018.
Article in English | WPRIM | ID: wpr-959685

Abstract

@#<p style="text-align: justify;"><b>BACKGROUND:</b> The Philippine Health Insurance Corporation (PhilHealth) has adopted several computer-based systems to enhance claims processing for hospitals.</p><p style="text-align: justify;"><strong>OBJECTIVES:</strong> This study sought to determine the efficiency gains in the processing of PhilHealth claims following the introduction of computer-based processing systems, taking into account differences in hospital characteristics.</p><p style="text-align: justify;"><strong>METHODS:</strong> Data were obtained from a survey conducted among 200 hospitals, and their corresponding 2014 claims figures as provided by PhilHealth. Summary descriptive statistics of hospital capacities (ownership, service level, and utilization of PhilHealth computer systems) and claims outcomes (claims rejection rates, as well as length of claims processing times for hospitals and with PhilHealth) were generated. Multivariate regression analysis was done using claims outcomes as dependent variables, and hospital capacities as independent variables.</p><p style="text-align: justify;"><strong>RESULTS:</strong> Nearly a quarter of the surveyed hospitals did not utilize any of PhilHealth's computer-based claims systems. Utilization was lowest for primary as well as public facilities. Among those that used the systems, most employed the on-line membership verification program. The mean claims rejection rate was 3.81%. Claims processing by hospitals took an average of 35 days, while PhilHealth required 40 days from receipt of claims to the release of reimbursement. Regression analysis indicated that facilities that utilized computers, as well as private hospitals, had significantly lower claims rejection rates (p<0.05). 
The claims processing duration was significantly shorter among private facilities.</p><p style="text-align: justify;"><strong>CONCLUSIONS:</strong> Private hospitals are able to process claims and obtain reimbursements faster than public facilities, regardless of the use of PhilHealth's computer-based systems. PhilHealth and public hospitals need to optimize claims processing arrangements.</p>


Subjects
Humans , Insurance Claim Review , Philippines
18.
Chinese Journal of Laboratory Medicine ; (12): 680-684, 2018.
Article in Chinese | WPRIM | ID: wpr-712193

Abstract

Objective: To investigate the advantages and continuous optimization of a laboratory automation system through analysis and assessment of core data and performance after deployment of an open assembly line. Methods: Biochemical and immunoassay data were collected at Shuguang Hospital, affiliated with Shanghai University of Traditional Chinese Medicine, from April to October 2017, covering: (1) cost analysis of the assembly-line schemes; (2) workflow before and after deployment of the assembly line; (3) sample collection volumes before and after deployment; (4) turnaround time (TAT) before and after deployment; (5) staff allocation before and after deployment; (6) sample rechecking before and after deployment. Results: (1) The open assembly line had the lowest hardware (8 million) and site costs among the candidate projects; (2) the inspection process was greatly simplified; (3) the volume of biochemical and immunoassay samples was reduced by 31.85%; (4) test cycles shortened, with average TAT reduced by 32 minutes; (5) staff for sample pretreatment could be reduced by 50%, with no change in the number of operators; (6) the number of rechecked samples increased, except for gray-zone and critical values, ensuring the reliability of results. Conclusion: By analyzing core data and evaluating performance, the laboratory improved its detection cycle, staffing, and test efficiency.

19.
Chinese Journal of Nursing ; (12): 422-425, 2017.
Article in Chinese | WPRIM | ID: wpr-505674

Abstract

Objective: To establish a standardized, traceable management procedure for implanted high-value consumables in the operating room. Methods: A management model combining an information-based system operation process with a quality control process was designed, and management results before and after implementation were compared. Results: After implementation of the process management model, error rates in information recording, bar-code sticking, and charging of implantable high-value consumables decreased significantly (P<0.05). There were also statistically significant improvements in the traceability of high-value consumables, adverse event reporting, and patient satisfaction (P<0.05). Conclusion: Establishing this operating-room management model for implanted high-value consumables can ensure medical safety, increase medical quality, and improve the level of hospital management.

20.
Japanese Journal of Drug Informatics ; : 8-16, 2017.
Article in English | WPRIM | ID: wpr-378876

Abstract

Objective: Numerous new drugs have been developed in recent years, making the available types of prescription drugs quite diverse, with increasingly complex drug interactions. From an operations support system perspective, hospitals that cannot incorporate a large-scale custom-order system because of financial or use-efficiency limitations have no choice but to rely on commercial products. However, this leaves many problems unsolved, such as functional restrictions and limited specifications. In this study, we used Microsoft® Visual Basic® for Applications (VBA) to develop an economical drug discrimination system suited to our situation and equipped with original functions from the perspective of clinical pharmacists. Design: System design and development. Methods: We prototyped the system in VBA and used Microsoft® Excel® to create Query Tables. The utility of the new system was evaluated based on drug discrimination output and the time required in each process. Results: The new system is capable of inter-database communication and automated data analysis, and uses drop-down lists of pre-defined options for data input in many places. Compared with the conventional method, the new system reduced the average time needed to input and confirm data by as much as 61.9%. This indicates that the new system can considerably reduce the time required for time-intensive processes and is also useful in preparing highly precise drug discrimination reports. Conclusion: Based on the results obtained so far, the new, original system, developed with zero design or development costs, is more efficient and offers more reliable information in the clinical setting than the conventional system. As a result, we are able to maintain operational quality and reduce the time required for drug discrimination.
