Results 1 - 9 of 9
1.
Br J Ophthalmol; 108(3): 432-439, 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-36596660

ABSTRACT

BACKGROUND: Optical coherence tomography angiography (OCTA) enables fast and non-invasive high-resolution imaging of retinal microvasculature and is suggested as a potential tool for the early detection of retinal microvascular changes in Alzheimer's Disease (AD). We developed a standardised OCTA analysis framework and compared the extracted parameters among controls and AD/mild cognitive impairment (MCI) in a cross-sectional study. METHODS: We defined and extracted geometrical parameters of the retinal microvasculature at different retinal layers and in the foveal avascular zone (FAZ) from segmented OCTA images obtained using well-validated state-of-the-art deep learning models. We studied these parameters in 158 subjects (62 healthy controls, 55 AD and 41 MCI) using logistic regression to determine their potential in predicting the status of our subjects. RESULTS: In the AD group, there was a significant decrease in vessel area and length densities in the inner vascular complexes (IVC) compared with controls. The number of vascular bifurcations in AD was also significantly lower than in controls. The MCI group demonstrated a decrease in vessel area and length densities, vascular fractal dimension and the number of bifurcations in both the superficial vascular complexes (SVC) and the IVC compared with controls. A larger vascular tortuosity in the IVC, and a larger roundness of the FAZ in the SVC, were also observed in MCI compared with controls. CONCLUSION: Our study demonstrates the applicability of OCTA for the diagnosis of AD and MCI, and provides a standard tool for future clinical service and research. Biomarkers from retinal OCTA images can provide useful information for clinical decision-making and the diagnosis of AD and MCI.
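The statistical step described above lends itself to a compact illustration. The minimal sketch below fits a logistic regression to tabular OCTA-derived vascular parameters; the feature set, data, and pipeline details are placeholders assumed for illustration and are not taken from the paper.

```python
# Hypothetical sketch: logistic regression on OCTA-derived vascular parameters
# (e.g. vessel area density, vessel length density, bifurcation count,
# tortuosity, fractal dimension, FAZ roundness). Features are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, n_features = 158, 6                  # 6 illustrative geometrical parameters
X = rng.normal(size=(n_subjects, n_features))    # placeholder feature matrix
y = rng.integers(0, 2, size=n_subjects)          # 1 = AD/MCI, 0 = control (placeholder labels)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```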


Subjects
Alzheimer Disease; Cognitive Dysfunction; Humans; Fluorescein Angiography/methods; Retinal Vessels/diagnostic imaging; Optical Coherence Tomography/methods; Alzheimer Disease/diagnostic imaging; Microvessels/diagnostic imaging; Cognitive Dysfunction/diagnostic imaging
2.
Sensors (Basel); 23(20), 2023 Oct 23.
Article in English | MEDLINE | ID: mdl-37896742

ABSTRACT

With the advent of autonomous vehicles, sensor and algorithm testing has become a crucial part of the autonomous vehicle development cycle. Access to real-world sensors and vehicles is out of reach for many researchers and small-scale original equipment manufacturers (OEMs) because of long software and hardware development life cycles and high costs. Therefore, simulator-based virtual testing has gained traction over the years as the preferred testing method due to its low cost, efficiency, and effectiveness in executing a wide range of testing scenarios. Companies such as ANSYS and NVIDIA have developed robust simulators, and open-source simulators such as CARLA have also entered the market. However, there is a lack of lightweight, simple simulators catering to specific test cases. In this paper, we introduce SLAV-Sim, a lightweight simulator designed specifically for training the behaviour of a self-learning autonomous vehicle. This simulator has been created using the Unity engine and provides an end-to-end virtual testing framework for different reinforcement learning (RL) algorithms in a variety of scenarios using camera sensors and raycasts.
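As a rough illustration of the kind of training loop such a framework supports, the sketch below runs a reinforcement-learning episode loop against a standard Gymnasium environment. SLAV-Sim's own Unity interface is not described here, so the environment and the random policy are stand-ins, not the paper's setup.

```python
# Minimal RL episode loop sketch (requires `pip install gymnasium`).
# A standard Gymnasium environment stands in for the simulated vehicle.
import gymnasium as gym

env = gym.make("CartPole-v1")                    # stand-in for a vehicle-control task
n_episodes = 10

for episode in range(n_episodes):
    obs, info = env.reset(seed=episode)
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()       # replace with a learned RL policy
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += reward
        done = terminated or truncated
    print(f"episode {episode}: return = {total_reward:.1f}")
env.close()
```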

3.
Article in English | MEDLINE | ID: mdl-36103441

ABSTRACT

Over the past few years, significant progress has been made in image recognition based on deep convolutional neural networks (CNNs). This is mainly due to the strong ability of such networks to mine discriminative object pose and part information from texture and shape. This is often insufficient for fine-grained visual classification (FGVC), which exhibits high intra-class and low inter-class variance due to occlusions, deformation, illumination, etc. Thus, an expressive feature representation describing global structural information is key to characterizing an object or scene. To this end, we propose a method that effectively captures subtle changes by aggregating context-aware features from the most relevant image regions and their importance in discriminating fine-grained categories, while avoiding bounding-box and/or distinguishable-part annotations. Our approach is inspired by recent advances in self-attention and graph neural network (GNN) approaches: it includes a simple yet effective relation-aware feature transformation and its refinement using a context-aware attention mechanism to boost the discriminability of the transformed features in an end-to-end learning process. Our model is evaluated on eight benchmark datasets consisting of fine-grained objects and human-object interactions. It outperforms state-of-the-art approaches by a significant margin in recognition accuracy.
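To make the attention-based refinement concrete, here is a minimal, hypothetical sketch of self-attention over region-level features. It is only illustrative: it does not reproduce the paper's relation-aware transformation, and all shapes and module names are assumed.

```python
# Illustrative self-attention over region features, pooled into an image descriptor.
import torch
import torch.nn as nn

class RegionAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, regions: torch.Tensor) -> torch.Tensor:
        # regions: (batch, n_regions, dim) feature vectors for image regions
        refined, _ = self.attn(regions, regions, regions)
        refined = self.norm(regions + refined)   # residual, context-aware refinement
        return refined.mean(dim=1)               # pooled image descriptor

feats = torch.randn(2, 16, 256)                  # 2 images, 16 regions each (placeholder)
print(RegionAttention(256)(feats).shape)         # torch.Size([2, 256])
```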

4.
IEEE Trans Med Imaging; 41(12): 3969-3980, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36044489

ABSTRACT

Automated detection of retinal structures, such as retinal vessels (RV), the foveal avascular zone (FAZ), and retinal vascular junctions (RVJ), is of great importance for understanding diseases of the eye and for clinical decision-making. In this paper, we propose a novel Voting-based Adaptive Feature Fusion multi-task network (VAFF-Net) for joint segmentation, detection, and classification of RV, FAZ, and RVJ in optical coherence tomography angiography (OCTA). A task-specific voting gate module is proposed to adaptively extract and fuse different features for specific tasks at two levels: features at different spatial positions from a single encoder, and features from multiple encoders. In particular, since the complexity of the microvasculature in OCTA images makes simultaneous precise localization and classification of retinal vascular junctions into bifurcations/crossings a challenging task, we specifically design a task head that combines heatmap regression and grid classification. We take advantage of three different en face angiograms from various retinal layers, rather than following existing methods that use only a single en face image. We carry out extensive experiments on three OCTA datasets acquired using different imaging devices, and the results demonstrate that the proposed method performs better overall than either state-of-the-art single-purpose methods or existing multi-task learning solutions. We also demonstrate that our multi-task learning method generalizes to other imaging modalities, such as color fundus photography, and may potentially be used as a general multi-task learning tool. We also construct three datasets for multiple structure detection; part of these datasets, together with the source code and evaluation benchmark, has been released for public access.
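A gated, voting-style fusion of features from multiple encoders can be sketched as below. This is a hypothetical illustration in the spirit of the voting gate idea, not the paper's implementation; the channel counts, shapes, and class name are assumptions.

```python
# Hypothetical gated fusion of feature maps from several encoders:
# a 1x1 conv predicts per-encoder spatial weights, which are softmax-normalised
# and used to compute a weighted sum of the encoder feature maps.
import torch
import torch.nn as nn

class VotingGateFusion(nn.Module):
    def __init__(self, channels: int, n_encoders: int):
        super().__init__()
        self.gate = nn.Conv2d(channels * n_encoders, n_encoders, kernel_size=1)

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        # feats: list of (batch, channels, H, W) maps, one per encoder
        stacked = torch.stack(feats, dim=1)               # (B, E, C, H, W)
        weights = self.gate(torch.cat(feats, dim=1))      # (B, E, H, W)
        weights = torch.softmax(weights, dim=1).unsqueeze(2)
        return (stacked * weights).sum(dim=1)             # fused (B, C, H, W) map

maps = [torch.randn(2, 32, 64, 64) for _ in range(3)]     # e.g. 3 en face encoders
print(VotingGateFusion(32, 3)(maps).shape)                # torch.Size([2, 32, 64, 64])
```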


Subjects
Retinal Vessels; Optical Coherence Tomography; Optical Coherence Tomography/methods; Fluorescein Angiography/methods; Retinal Vessels/diagnostic imaging; Fundus Oculi; Retina/diagnostic imaging
5.
IEEE Trans Image Process; 30: 3691-3704, 2021.
Article in English | MEDLINE | ID: mdl-33705316

ABSTRACT

This article presents a novel keypoints-based attention mechanism for visual recognition in still images. Deep convolutional neural networks (CNNs) have shown great success in recognizing images with distinctive classes, but their performance in discriminating fine-grained changes is not at the same level. We address this by proposing an end-to-end CNN model that learns meaningful features linking fine-grained changes using our novel attention mechanism. It captures the spatial structure of images by identifying semantic regions (SRs) and their spatial distributions, which proves to be key to modeling subtle changes in images. We automatically identify these SRs by grouping the detected keypoints in a given image. The "usefulness" of these SRs for image recognition is measured using our attention mechanism, which focuses on the parts of the image that are most relevant to a given task. This framework applies to both traditional and fine-grained image recognition tasks and does not require manually annotated regions (e.g., bounding boxes of body parts, objects, etc.) for learning and prediction. Moreover, the proposed keypoints-driven attention mechanism can be easily integrated into existing CNN models. The framework is evaluated on six diverse benchmark datasets. The model outperforms state-of-the-art approaches by a considerable margin on the Distracted Driver V1 (Acc: 3.39%), Distracted Driver V2 (Acc: 6.58%), Stanford-40 Actions (mAP: 2.15%), People Playing Musical Instruments (mAP: 16.05%), Food-101 (Acc: 6.30%) and Caltech-256 (Acc: 2.59%) datasets.
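The SR-identification step, grouping detected keypoints into candidate regions, can be illustrated with a simple clustering sketch. The keypoints below are random placeholders rather than detector output, and the clustering choice (k-means) is an assumption for illustration, not the paper's method.

```python
# Sketch only: group 2D keypoints into candidate semantic regions via k-means
# and report a bounding box per region.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
keypoints = rng.uniform(0, 224, size=(120, 2))   # (x, y) locations in a 224x224 image

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(keypoints)
for r in range(6):
    members = keypoints[kmeans.labels_ == r]
    x0, y0 = members.min(axis=0)
    x1, y1 = members.max(axis=0)
    print(f"region {r}: {len(members)} keypoints, bbox=({x0:.0f},{y0:.0f})-({x1:.0f},{y1:.0f})")
```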


Subjects
Deep Learning; Human Activities/classification; Computer-Assisted Image Processing/methods; Female; Humans; Male; Semantics
6.
Palliat Med; 33(8): 1106-1113, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31250734

ABSTRACT

BACKGROUND: Medical robots are increasingly used for a variety of applications in healthcare. Robots have mainly been used to support surgical procedures and for a variety of assistive uses in dementia and elderly care. To date, there has been limited debate about the potential opportunities and risks of robotics in other areas of palliative, supportive and end-of-life care. AIM: The objective of this article is to examine the possible future impact of medical robotics on palliative, supportive and end-of-life care. Specifically, we discuss the strengths, weaknesses, opportunities and threats (SWOT) of this technology. METHODS: We conducted a SWOT analysis to understand the strengths, weaknesses, opportunities and threats of robotic technology in palliative and supportive care. RESULTS: The opportunities of robotics in palliative, supportive and end-of-life care include a number of assistive, therapeutic, social and educational uses. However, a number of technical, societal, economic and ethical factors need to be considered to ensure meaningful use of this technology in palliative care. CONCLUSION: Robotics could have a number of potential applications in palliative, supportive and end-of-life care. Future work should evaluate the health-related, economic, societal and ethical implications of using this technology. There is a need for collaborative research to establish use cases and inform policy, to ensure the appropriate use (or non-use) of robots for people with serious illness.


Subjects
Palliative Care; Robotics; Terminal Care; Hospice Care; Humans
7.
PLoS One; 10(6): e0127769, 2015.
Article in English | MEDLINE | ID: mdl-26126116

ABSTRACT

The workflows involved in industrial assembly and production activities are becoming increasingly complex. Performing these workflows efficiently and safely is demanding for workers, in particular when it comes to infrequent or repetitive tasks. This burden on the workers can be eased by introducing smart assistance systems. This article presents a scalable concept and an integrated system demonstrator designed for this purpose. The basic idea is to learn workflows from observing multiple expert operators and then transfer the learnt workflow models to novice users. Being entirely learning-based, the proposed system can be applied to various tasks and domains. The above idea has been realized in a prototype, which combines components pushing the state of the art of hardware and software designed with interoperability in mind. The emphasis of this article is on the algorithms developed for the prototype: 1) fusion of inertial and visual sensor information from an on-body sensor network (BSN) to robustly track the user's pose in magnetically polluted environments; 2) learning-based computer vision algorithms to map the workspace, localize the sensor with respect to the workspace and capture objects, even as they are carried; 3) domain-independent and robust workflow recovery and monitoring algorithms based on spatiotemporal pairwise relations deduced from object and user movement with respect to the scene (illustrated in the sketch after this abstract); and 4) context-sensitive augmented reality (AR) user feedback using a head-mounted display (HMD). A distinguishing key feature of the developed algorithms is that they all operate solely on data from the on-body sensor network and that no external instrumentation is needed. The feasibility of the chosen approach for the complete action-perception-feedback loop is demonstrated on three increasingly complex datasets representing manual industrial tasks. These limited-size datasets highlight the potential of the chosen technology as a combined entity and also point out limitations of the system.
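As a rough, hypothetical illustration of the spatiotemporal pairwise-relation idea in item 3, the sketch below classifies how the distance between two synthetic 3D position tracks evolves over time. Nothing here is taken from the prototype's code; the relation labels and threshold are assumptions.

```python
# Derive a simple pairwise relation (approaching / receding / static) between
# two tracked entities from their 3D position tracks over time.
import numpy as np

def pairwise_relation(track_a: np.ndarray, track_b: np.ndarray, eps: float = 0.01) -> str:
    """Classify how the distance between two (T, 3) position tracks evolves."""
    dist = np.linalg.norm(track_a - track_b, axis=1)
    trend = dist[-1] - dist[0]
    if trend < -eps:
        return "approaching"
    if trend > eps:
        return "receding"
    return "static"

t = np.linspace(0, 1, 50)[:, None]
hand = np.hstack([t, np.zeros_like(t), np.zeros_like(t)])   # moves along x
tool = np.tile(np.array([[1.0, 0.0, 0.0]]), (50, 1))        # stationary object
print(pairwise_relation(hand, tool))                        # "approaching"
```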


Subjects
Algorithms; Occupational Health; Workflow; Cognition; Humans; Three-Dimensional Imaging; Learning; Occupational Medicine; Systems Integration; User-Computer Interface
8.
Front Hum Neurosci; 7: 441, 2013.
Article in English | MEDLINE | ID: mdl-23986671

ABSTRACT

Perception of scenes has typically been investigated using static or simplified visual displays. How attention is used to perceive and evaluate dynamic, realistic scenes is more poorly understood, in part due to the problem of comparing eye fixations to moving stimuli across observers. When the task and stimulus are common across observers, consistent fixation location can indicate that a region has high goal-based relevance. Here we investigated these issues when an observer has a specific, and naturalistic, task: closed-circuit television (CCTV) monitoring. We concurrently recorded eye movements and ratings of perceived suspiciousness as different observers watched the same set of clips from real CCTV footage. Trained CCTV operators showed greater consistency in fixation location and greater consistency in suspiciousness judgements than untrained observers. Training appears to increase between-operator consistency as operators learn what to look for in these scenes. We used a novel "Dynamic Area of Focus" (DAF) analysis to show that in CCTV monitoring there is a temporal relationship between eye movements and subsequent manual responses, as we have previously found for a sports video watching task. For both trained CCTV operators and untrained observers, manual responses were most strongly related to between-observer eye position spread when a temporal lag was introduced between the fixation and response data. Several hundred milliseconds after between-observer eye positions became most similar, observers tended to push the joystick to indicate perceived suspiciousness. Conversely, several hundred milliseconds after between-observer eye positions became dissimilar, observers tended to rate suspiciousness as low. These data provide further support for the DAF method as an important tool for examining goal-directed fixation behavior when the stimulus is a real moving image.
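The lagged relationship described above can be illustrated with a simple correlation-at-lag analysis. The sketch below uses synthetic signals for between-observer gaze spread and group response; it is not the DAF analysis itself, and the 8-frame lag built into the toy data is arbitrary.

```python
# Correlate gaze spread with responses at a range of temporal lags (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
n = 600                                       # e.g. 600 frames of footage
spread = rng.normal(size=n)                   # between-observer gaze-position spread
# Responses loosely track (inverted) spread about 8 frames later.
response = np.roll(-spread, 8) + 0.5 * rng.normal(size=n)

def lagged_corr(x: np.ndarray, y: np.ndarray, k: int) -> float:
    """Correlation of x(t) with y(t + k)."""
    if k >= 0:
        return np.corrcoef(x[: len(x) - k], y[k:])[0, 1]
    return np.corrcoef(x[-k:], y[: len(y) + k])[0, 1]

lags = list(range(-20, 21))
corrs = [lagged_corr(spread, response, k) for k in lags]
idx = int(np.argmax(np.abs(corrs)))
print(f"strongest relation at lag {lags[idx]} frames, r = {corrs[idx]:.2f}")
```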

9.
Exp Brain Res; 214(1): 131-137, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21822674

ABSTRACT

Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search, where stimuli are in constant motion and where the 'target' of the visual search is abstract and semantic in nature. Here, we investigate this issue when participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice as much variance in gaze likelihood as low-level visual change over time in the video stimuli.
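The variance-explained comparison in the final sentence can be sketched as two single-predictor regressions whose R² values are compared. The signals below are synthetic placeholders, not the study's data, and the coefficients are arbitrary.

```python
# Compare the variance in gaze likelihood explained by task relevance vs.
# low-level visual change, using single-predictor linear regressions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 1000
suspiciousness = rng.normal(size=(n, 1))      # joystick-rated task relevance (placeholder)
visual_change = rng.normal(size=(n, 1))       # frame-to-frame pixel change (placeholder)
gaze_likelihood = 0.8 * suspiciousness + 0.4 * visual_change + rng.normal(size=(n, 1))

for name, predictor in [("task relevance", suspiciousness),
                        ("low-level change", visual_change)]:
    model = LinearRegression().fit(predictor, gaze_likelihood)
    print(f"R^2 using {name}: {model.score(predictor, gaze_likelihood):.2f}")
```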


Subjects
Attention/physiology; Ocular Fixation/physiology; Visual Perception/physiology; Adolescent; Adult; Female; Humans; Male; Photic Stimulation/methods; Predictive Value of Tests; Videotape Recording; Young Adult