Results 1 - 5 of 5
1.
Heliyon ; 10(14): e34167, 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39092249

ABSTRACT

Purpose: To understand real-world eye drop adherence among glaucoma patients and evaluate the performance of our proposed cloud-based support for eye drop adherence (CASEA). Design: Prospective, observational case series. Methods: Setting: The Department of Ophthalmology at Tsukazaki Hospital. Patient or study population: Glaucoma patients treated at the hospital from May 2021 to September 2022, with 61 patients initially enrolled. Intervention or observation procedures: Pharmacists guided eye drop administration before the study. Changes in bottle orientation were detected using an accelerometer attached to the container, and acceleration waveforms and date/time data were recorded. Patients visited the clinic during the 4th and 8th weeks to report their eye drop administration, and the data were uploaded to the cloud. Main outcome measures: Two AI models (B-LSTM) were created to analyze the eye drop bottle movement time-series data for patients treating one or both eyes. The models were evaluated by comparing the true administration status with the AI model judgment. Results: Four of the 61 study subjects dropped out. The remaining 57 patients achieved recall, precision, and accuracy values of 98.6 %, 98.6 %, and 95.9 %, respectively, for the two-eyes model and 95.8 %, 98.8 %, and 95.6 % for the one-eye model. Three low-accuracy participants (77.1 %, 71.0 %, and 81.0 %) improved to 100 %, 99.1 %, and 100 %, respectively, after undergoing an additional 8-week performance validation using an aid-type container designed to ensure that the bottle was fully inverted during instillation. Conclusions: CASEA precisely monitored daily eye drop adherence and enhanced treatment efficacy by identifying patients with difficulty self-medicating. This system has the potential to improve glaucoma patient outcomes by supporting adherence.
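
To make the B-LSTM step concrete, the following is a minimal Python/Keras sketch of a bidirectional LSTM classifier over accelerometer time-series windows. It is not the authors' implementation: the window length, axis count, layer sizes, and variable names are illustrative assumptions, and the data here are random placeholders.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical shapes: windows of accelerometer waveforms (time steps x 3 axes),
# labeled 1 if an instillation event occurred in the window, 0 otherwise.
TIME_STEPS, N_AXES = 200, 3

def build_blstm_classifier():
    model = models.Sequential([
        layers.Input(shape=(TIME_STEPS, N_AXES)),
        layers.Bidirectional(layers.LSTM(64)),          # bidirectional LSTM over the waveform
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),          # probability that a dose was instilled
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy",
                           tf.keras.metrics.Recall(),
                           tf.keras.metrics.Precision()])
    return model

# X: (n_windows, TIME_STEPS, N_AXES) acceleration windows; y: 0/1 instillation labels.
X = np.random.randn(32, TIME_STEPS, N_AXES).astype("float32")   # placeholder data
y = np.random.randint(0, 2, size=(32,))
model = build_blstm_classifier()
model.fit(X, y, epochs=2, batch_size=8, verbose=0)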

2.
Taiwan J Ophthalmol ; 12(2): 147-154, 2022.
Article in English | MEDLINE | ID: mdl-35813791

ABSTRACT

PURPOSE: We demonstrated a real-time evaluation technology for cataract surgery that uses artificial intelligence (AI), and compared residents with supervising doctors (doctors) in terms of risk indicators and duration for two important processes of the surgery, continuous curvilinear capsulorhexis (CCC) and phacoemulsification (Phaco). MATERIALS AND METHODS: Three residents with operative experience of fewer than 100 cases and three supervising doctors with operative experience of 1000 or more cases each performed cataract surgery on three cases, for a total of 18 cases. The mean values of the risk indicators for the CCC and Phaco processes, measured in real time during surgery, were statistically compared between the residents' group and the doctors' group. RESULTS: The mean values (standard deviation) of the risk indicator (0 = safest to 1 = most risky) for CCC were 0.556 (0.384) in the residents and 0.433 (0.421) in the doctors; those for Phaco were 0.511 (0.423) in the residents and 0.377 (0.406) in the doctors. The doctors' risk indicators were significantly better in both processes (P = 0.0003 and P < 0.0001, respectively, by the Wilcoxon test). CONCLUSION: We successfully implemented a real-time surgical technique evaluation system for cataract surgery and collected data with it. The risk indicators were significantly better in the doctors' group than in the residents' group, suggesting that AI can serve as a new objective indicator for identifying surgical risks intraoperatively.
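
As a hedged illustration of the statistical comparison reported above, the sketch below runs a two-sided rank-based test on hypothetical per-case risk indicators. The abstract only states "Wilcoxon test"; the Wilcoxon rank-sum (Mann-Whitney U) variant is assumed here because the two groups are independent, and all values are invented placeholders.

from scipy import stats

# Hypothetical per-case mean risk indicators (0 = safest, 1 = most risky) for the
# CCC process; the actual per-case values are not available from the abstract.
residents_ccc = [0.61, 0.52, 0.58, 0.49, 0.63, 0.55, 0.50, 0.57, 0.56]
doctors_ccc   = [0.41, 0.45, 0.39, 0.47, 0.42, 0.44, 0.40, 0.46, 0.43]

# Rank-sum test for two independent groups (assumed variant of the "Wilcoxon test").
stat, p_value = stats.mannwhitneyu(residents_ccc, doctors_ccc, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")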

3.
J Clin Med ; 9(12)2020 Nov 30.
Article in English | MEDLINE | ID: mdl-33266345

ABSTRACT

The surgical skill of young ophthalmologists tends to be judged instinctively by practicing ophthalmologists, so a single surgeon does not always receive a consistent evaluation. Although standardizing skill levels is considered difficult because surgical methods vary greatly, machine learning approaches appear promising for this purpose. In this study, we propose a method for displaying, in real time, the information needed to quantify surgical technique in cataract surgery. The proposed method consists of two steps. First, InceptionV3, an image classification network, is used to extract important surgical phases and to detect surgical problems. Next, a segmentation network, scSE-FC-DenseNet, is used to detect the cornea, the tip of the surgical instrument, and the incision site during continuous curvilinear capsulorrhexis, a particularly important phase of cataract surgery. The first step is evaluated by the area under the receiver operating characteristic curve (AUC), and the second step by the intersection over union (IoU) between the ground-truth and predicted regions of interest. In the first step, the network detected surgical problems with an AUC of 0.97. In the second step, the detection rate of the cornea was 99.7% at an IoU of 0.8 or more, and the detection rates of the forceps tip and the incision site were 86.9% and 94.9%, respectively, at an IoU of 0.1 or more. The proposed method is therefore expected to serve as one of the basic techniques for standardizing surgical skill levels.
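
For readers who want the evaluation metrics spelled out, the sketch below computes an AUC for the problem-detection step and an IoU for the segmentation step. The labels, scores, and masks are illustrative placeholders, not data from the paper; only the metric definitions and the IoU thresholds (0.8 and 0.1) follow the abstract.

import numpy as np
from sklearn.metrics import roc_auc_score

# Step 1 (classification): AUC of the problem-detection output.
# y_true: 1 = frame with a surgical problem, 0 = normal frame (illustrative labels).
y_true  = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.3, 0.8, 0.7, 0.2, 0.9])     # network output probabilities
print("AUC:", roc_auc_score(y_true, y_score))

# Step 2 (segmentation): IoU between a ground-truth mask and a predicted mask.
def iou(gt_mask, pred_mask):
    # Intersection over union of two boolean masks.
    inter = np.logical_and(gt_mask, pred_mask).sum()
    union = np.logical_or(gt_mask, pred_mask).sum()
    return inter / union if union else 0.0

gt   = np.zeros((168, 299), dtype=bool); gt[40:120, 60:200]   = True   # e.g. cornea region
pred = np.zeros((168, 299), dtype=bool); pred[50:125, 70:210] = True   # network prediction
print("IoU:", iou(gt, pred))
# A detection counts as correct when the IoU exceeds the chosen threshold
# (0.8 for the cornea, 0.1 for the instrument tip and incision site in the abstract).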

4.
Sci Rep ; 9(1): 16590, 2019 11 12.
Article in English | MEDLINE | ID: mdl-31719589

ABSTRACT

The present study aimed to conduct a real-time automatic analysis of two important surgical phases of cataract surgery, continuous curvilinear capsulorrhexis (CCC) and nuclear extraction, together with the three other surgical phases, using artificial intelligence technology. A total of 303 cataract surgery cases registered in the clinical database of the Ophthalmology Department of Tsukazaki Hospital were used as the dataset. Surgical videos were downsampled to a resolution of 299 × 168 at 1 FPS, and each frame was extracted as an image. The obtained images were then labeled according to the start and end times of each surgical phase recorded by an ophthalmologist. Using these data, an InceptionV3 neural network model was developed to identify the surgical phase of each image. The images were then processed in chronological order with this model, the moving average of the outputs over five consecutive images was computed, and the class with the maximum averaged output was taken as the surgical phase. For each phase, the time at which it was first identified was defined as the start time and the time at which it was last identified as the end time. Performance was evaluated by the mean absolute error between the start and end times of each important phase recorded by the ophthalmologist and those determined by the model. The correct response rate of the surgical phase classification was 90.7% for CCC, 94.5% for nuclear extraction, and 97.9% for the other phases, with a mean correct response rate of 96.5%. The errors between the times recorded by the ophthalmologist and those determined by the model were 3.34 and 4.43 seconds for the start and end of CCC, respectively, and 7.21 and 6.04 seconds for the start and end of nuclear extraction, respectively, with a mean of 5.25 seconds. Because the model classifies the surgical phase by referring only to the most recent 5 seconds of video, the method effectively performs real-time classification.
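
The post-processing described above (a moving average over five consecutive per-frame outputs, argmax for the phase label, and first/last occurrence for the start and end times) can be sketched as follows. This is a minimal illustration under assumed array shapes and a simplified three-class label set, with random outputs standing in for the InceptionV3 predictions.

import numpy as np

PHASES = ["CCC", "nuclear_extraction", "other"]   # simplified label set for illustration
WINDOW = 5                                        # moving average over five consecutive frames
FPS = 1                                           # videos were sampled at 1 frame per second

def smooth_and_decode(frame_probs):
    # frame_probs: (n_frames, n_classes) per-frame softmax outputs in time order.
    n_frames, _ = frame_probs.shape
    smoothed = np.vstack([
        frame_probs[max(0, t - WINDOW + 1): t + 1].mean(axis=0)   # average of the last 5 frames
        for t in range(n_frames)
    ])
    return smoothed.argmax(axis=1)                # class with the maximum averaged output

def phase_start_end(labels, phase_idx):
    # Start = first frame classified as the phase, end = last such frame (in seconds).
    frames = np.flatnonzero(labels == phase_idx)
    if frames.size == 0:
        return None, None
    return frames[0] / FPS, frames[-1] / FPS

# Illustrative use with random outputs standing in for the network's predictions.
probs = np.random.dirichlet(np.ones(len(PHASES)), size=600)       # 10 minutes at 1 FPS
labels = smooth_and_decode(probs)
ccc_start, ccc_end = phase_start_end(labels, PHASES.index("CCC"))
# The mean absolute error against the ophthalmologist's annotation would then be
# np.mean(np.abs(predicted_times - annotated_times)) over all phases and cases.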


Subjects
Algorithms , Cataract Extraction/methods , Cataract/therapy , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Surgery, Computer-Assisted/methods , Video-Assisted Surgery/methods , Artificial Intelligence , Databases, Factual , Humans
5.
Int J Neural Syst ; 18(2): 135-45, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18452247

ABSTRACT

Associative memory networks based on the quaternionic Hopfield neural network are investigated in this paper. These networks are composed of quaternionic neurons, and the inputs, outputs, thresholds, and connection weights are all represented by quaternions, a class of hypercomplex numbers. The energy function of the network and the Hebbian rule for embedding patterns are introduced. The stable states and their basins of attraction are explored for networks with three and four neurons. It is shown that there exist at most 16 stable states, called multiplet components, as the degenerate stored patterns, and that each of these states has its own basin of attraction in the quaternionic networks.
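
As a rough illustration of this model class, the sketch below implements a small quaternionic Hopfield network with a Hebbian storage rule and a component-wise sign activation, where each neuron state has components in {+1, -1} (16 possible states per neuron). The normalization, update schedule, and activation details are assumptions for illustration and may differ from the paper.

import numpy as np

# A quaternion is stored as a length-4 array (w, x, y, z).

def qmul(a, b):
    # Hamilton product of two quaternions.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def qconj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

def hebbian_weights(patterns):
    # Hebbian rule (assumed form): W[i][j] = (1/N) * sum_p xi_i^p * conj(xi_j^p), W[i][i] = 0.
    n_patterns, n_neurons, _ = patterns.shape
    W = np.zeros((n_neurons, n_neurons, 4))
    for p in range(n_patterns):
        for i in range(n_neurons):
            for j in range(n_neurons):
                if i != j:
                    W[i, j] += qmul(patterns[p, i], qconj(patterns[p, j]))
    return W / n_neurons

def update(state, W):
    # One synchronous update: quantize each component of the local field to +/-1.
    new_state = np.empty_like(state)
    for i in range(state.shape[0]):
        field = np.zeros(4)
        for j in range(state.shape[0]):
            field += qmul(W[i, j], state[j])
        new_state[i] = np.where(field >= 0, 1.0, -1.0)   # component-wise sign activation
    return new_state

# Store one pattern in a 3-neuron network and check that it is a stable (fixed-point) state.
pattern = np.array([[[1, 1, -1, 1], [-1, 1, 1, -1], [1, -1, 1, 1]]], dtype=float)
W = hebbian_weights(pattern)
print(np.array_equal(update(pattern[0], W), pattern[0]))   # True if the stored pattern is stable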


Subjects
Association Learning/physiology , Memory/physiology , Neural Networks, Computer , Computer Simulation , Humans , Models, Neurological , Nerve Net/physiology , Space Perception