Results 1 - 3 of 3
1.
BMC Med Inform Decis Mak; 19(Suppl 9): 243, 2019 Dec 12.
Article in English | MEDLINE | ID: mdl-31830986

ABSTRACT

BACKGROUND: Assessment and rating of Parkinson's Disease (PD) are commonly based on medical observation of several clinical manifestations, including analysis of motor activities. In particular, medical specialists refer to the MDS-UPDRS (Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale), the most widely used clinical scale for PD rating. However, clinical scales rely on the observation of subtle motor phenomena that are either difficult to capture with the human eye or prone to misclassification. This limitation has motivated several researchers to develop intelligent systems based on machine learning algorithms able to recognize PD automatically. Nevertheless, most previous studies investigated the classification between healthy subjects and PD patients without considering the automatic rating of different levels of severity.

METHODS: In this context, we implemented a simple, low-cost clinical tool that extracts postural and kinematic features with the Microsoft Kinect v2 sensor in order to classify and rate PD. Thirty participants were enrolled for the purpose of the present study: sixteen PD patients rated according to the MDS-UPDRS and fourteen matched healthy subjects. To investigate the motor abilities of the upper and lower body, we acquired and analyzed three main motor tasks: (1) gait, (2) finger tapping, and (3) foot tapping. After preliminary feature selection, different classifiers based on Support Vector Machines (SVM) and Artificial Neural Networks (ANN) were trained and evaluated to find the best solution.

RESULTS: For gait analysis, the ANN classifier performed best, reaching 89.4% accuracy with only nine features in diagnosing PD and 95.0% accuracy with only six features in rating PD severity. For the finger and foot tapping analysis, an SVM using the extracted features classified healthy subjects versus PD patients with 87.1% accuracy. In the classification between mild and moderate PD patients, the foot tapping features were the most discriminative (81.0% accuracy).

CONCLUSIONS: The results of this study show how a low-cost vision-based system can automatically detect subtle phenomena characteristic of PD. Our findings suggest that the proposed tool can support medical specialists in the assessment and rating of PD patients in a real clinical scenario.
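The pipeline described in this abstract reduces each subject to a feature vector (gait, finger-tapping, and foot-tapping measures) fed to a trained classifier. The study itself used SVM and ANN classifiers, which are not reproduced here; as a simpler, hypothetical stand-in, the nearest-centroid sketch below illustrates the same fit/predict structure. The feature values and labels are invented for illustration only.

```python
import math

def nearest_centroid_fit(X, y):
    """Compute one mean feature vector (centroid) per class label."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        # Column-wise mean over all samples of this class
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the class whose centroid is closest (Euclidean)."""
    return min(centroids, key=lambda lab: math.dist(x, centroids[lab]))

# Hypothetical 2-feature vectors (e.g., stride length, cadence), invented values
X = [[1.0, 1.0], [1.2, 0.9], [3.0, 3.1], [2.9, 3.0]]
y = ["healthy", "healthy", "pd", "pd"]
model = nearest_centroid_fit(X, y)
```

A real replication would substitute an SVM or ANN trained with cross-validation on the selected Kinect features, but the surrounding fit/predict interface would look the same.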


Subjects
Cost-Benefit Analysis , Motor Activity/physiology , Parkinson Disease/physiopathology , Severity of Illness Index , Aged , Aged, 80 and over , Algorithms , Female , Gait Analysis , Humans , Machine Learning , Male , Middle Aged , Support Vector Machine
2.
J Digit Imaging; 32(6): 1008-1018, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31485953

ABSTRACT

As is common routine in tumor resections, surgeons rely on local examination of the removed tissues and on the pathologist's rapidly prepared microscopy findings, which are based on tissue probes taken intraoperatively. This approach can imply an extended duration of the operation, increased effort for the medical staff, and longer occupancy of the operating room (OR). Mixed reality technologies, and particularly augmented reality, have already been applied in surgical scenarios with positive initial outcomes. Nonetheless, these methods have relied on manual or marker-based registration. In this work, we design an application for marker-less registration of a patient's PET-CT information. The algorithm combines facial landmarks extracted from an RGB video stream with the so-called Spatial Mapping API provided by the Microsoft HoloLens head-mounted display (HMD). The accuracy of the system is compared with a marker-based approach, and the opinions of field specialists were collected during a demonstration; a survey based on the ISO 9241-110 standard was designed for this purpose. The measurements show an average positioning error along the three axes of (x, y, z) = (3.3 ± 2.3, -4.5 ± 2.9, -9.3 ± 6.1) mm. Compared with the marker-based approach, this represents an increase in positioning error of approx. 3 mm along two dimensions (x, y), which might be due to the absence of explicit markers. The application was positively evaluated by the specialists; they showed interest in continued work and contributed to the development process with constructive criticism.
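At the core of any landmark-based marker-less registration step is estimating the rigid transform that aligns detected facial landmarks with the corresponding landmarks in the preoperative model. The abstract does not detail the authors' algorithm; the sketch below uses the standard Kabsch algorithm to estimate that rotation and translation by least squares. The landmark coordinates used in testing are invented.

```python
import numpy as np

def rigid_register(source, target):
    """Estimate rotation R and translation t mapping source landmarks
    onto target landmarks in the least-squares sense (Kabsch algorithm).
    Both inputs are (N, 3) arrays of corresponding 3D points, N >= 3."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    # Center both point sets on their centroids
    src_c = src - src.mean(axis=0)
    tgt_c = tgt - tgt.mean(axis=0)
    # Cross-covariance and its SVD
    H = src_c.T @ tgt_c
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

In a full system the source points would come from a facial-landmark detector on the RGB stream and the target points from the PET-CT surface model; the ~3 mm error increase reported above would then reflect landmark detection noise rather than the alignment step itself.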


Subjects
Augmented Reality , Imaging, Three-Dimensional/methods , Positron Emission Tomography Computed Tomography/methods , Surgery, Computer-Assisted/methods , Surgery, Oral/methods , Algorithms , Humans , Pilot Projects , Reproducibility of Results
3.
Appl Ergon; 65: 481-491, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28283174

ABSTRACT

The evaluation of exposure to risk factors in workplaces and the subsequent redesign of those workplaces are among the practices used to lessen the frequency of work-related musculoskeletal disorders. In this paper we present K2RULA, a semi-automatic RULA evaluation software tool based on the Microsoft Kinect v2 depth camera, aimed at detecting awkward postures both in real time and in off-line analysis. We validated our tool with two experiments. In the first, we compared the K2RULA grand-scores with those obtained with a reference optical motion capture system and found almost perfect agreement according to the Landis and Koch scale (proportion agreement index = 0.97, k = 0.87). In the second experiment, we evaluated the agreement of the grand-scores returned by the proposed application with those obtained by an expert RULA rater, again finding almost perfect agreement (proportion agreement index = 0.96, k = 0.84), whereas a commercial software package based on the Kinect v1 sensor showed lower agreement (proportion agreement index = 0.82, k = 0.34).
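The two statistics reported above, the proportion agreement index and Cohen's kappa, can be computed directly from paired rater labels. A minimal sketch follows; the rating sequences in the test are hypothetical, not the study's data.

```python
def agreement_stats(rater_a, rater_b):
    """Proportion agreement index and Cohen's kappa for two raters.

    rater_a, rater_b: equal-length lists of category labels, one pair
    per rated item. Assumes chance agreement p_e < 1."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled the same
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies
    labels = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa
```

On the Landis and Koch scale, the resulting kappa is interpreted in bands (e.g., 0.21-0.40 fair, 0.81-1.00 almost perfect), which is how the k = 0.87, 0.84, and 0.34 values above are read.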


Subjects
Biosensing Techniques/methods , Ergonomics/methods , Software/standards , Adult , Biomechanical Phenomena , Humans , Male , Motion , Patient Simulation , Posture , Reproducibility of Results , Upper Extremity/physiology