Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-39024089

ABSTRACT

This work tackles the problem of automatically predicting the grasping intention of humans observing their environment, with eye-tracker glasses and video cameras recording the scene view. Our target application is assisting people with motor disabilities and potential cognitive impairments by means of assistive robotics. Our proposal leverages the analysis of human attention, captured as gaze fixations recorded by an eye-tracker on the first-person video, since the anticipation of prehension actions is a well-studied and well-known phenomenon. We propose a multi-task system that simultaneously addresses the prediction of human attention in the near future and the anticipation of grasping actions. In our model, visual attention is modeled as a competitive process between a discrete set of states, each associated with a well-known gaze movement pattern from visual psychology. We additionally consider an asymmetric multi-task problem, in which attention modeling is an auxiliary task that helps to regularize the learning of the main action prediction task, and propose a constrained multi-task loss that naturally deals with this asymmetry. Our model outperforms other losses for dynamic multi-task learning, the current dominant deep architectures for general action forecasting, and models specifically tailored to predicting grasping intention. In particular, it provides state-of-the-art performance on three datasets for egocentric action anticipation, with an average precision of 0.569 and 0.524 on the GITW and Sharon datasets, respectively, and an accuracy of 89.2% and a success rate of 51.7% on the Invisible dataset.
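The abstract describes a constrained loss in which the auxiliary attention task regularizes, but must not dominate, the main action-anticipation task. The sketch below is only a generic illustration of that idea in PyTorch; the weighting scheme, loss choices, and parameter names are assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of an asymmetric multi-task loss: the auxiliary
# attention loss is bounded so it can only regularize the main
# action-anticipation loss, never dominate it.
import torch
import torch.nn as nn

class AsymmetricMultiTaskLoss(nn.Module):
    def __init__(self, max_aux_weight: float = 0.5):
        super().__init__()
        self.action_loss = nn.CrossEntropyLoss()                     # main task: grasping anticipation
        self.attention_loss = nn.KLDivLoss(reduction="batchmean")    # auxiliary: attention-state prediction
        self.raw_aux_weight = nn.Parameter(torch.zeros(1))           # learnable, but constrained below
        self.max_aux_weight = max_aux_weight

    def forward(self, action_logits, action_targets, attn_log_probs, attn_targets):
        main = self.action_loss(action_logits, action_targets)
        aux = self.attention_loss(attn_log_probs, attn_targets)
        # Squash the auxiliary weight into (0, max_aux_weight) to encode the asymmetry.
        w = self.max_aux_weight * torch.sigmoid(self.raw_aux_weight)
        return main + w * aux
```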

2.
J Imaging Inform Med ; 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38413459

ABSTRACT

Ultrasound is a widespread imaging modality, with special application in medical fields such as nephrology. However, automated approaches to ultrasound renal interpretation still pose some challenges: (1) the need for manual supervision by experts at various stages of the system, which prevents its adoption in primary healthcare, and (2) the limited taxonomy they consider (e.g., a reduced number of pathologies), which makes them unsuitable for training practitioners and providing support to experts. This paper proposes a fully automated computer-aided diagnosis system for ultrasound renal imaging that addresses both of these challenges. Our system is based on a multi-task architecture, implemented as a three-branched convolutional neural network, that is capable of segmenting the kidney and detecting global and local pathologies with no need for human interaction during diagnosis. The integration of different image perspectives at distinct granularities enhances the proposed diagnosis. We employ a large (1985 images) and demanding ultrasound renal imaging database, publicly released with the system and annotated on the basis of an exhaustive taxonomy of two global and nine local pathologies (including cysts, lithiasis, hydronephrosis, and angiomyolipoma), establishing a benchmark for ultrasound renal interpretation. Experiments show that our proposed method outperforms several state-of-the-art methods in both segmentation and diagnosis tasks and leverages the combination of global and local image information to improve the diagnosis. Our results, with an AUC of 87.41% in healthy-pathological diagnosis and 81.90% in multi-pathological diagnosis, support the use of our system as a helpful tool in the healthcare system.
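The abstract names a three-branched multi-task CNN sharing one encoder across kidney segmentation, global pathology classification, and local pathology detection. The following is a minimal sketch of that branching structure under stated assumptions (toy encoder, two global and nine local classes as in the paper's taxonomy); the actual backbone, resolutions, and heads of the released system are not reproduced here.

```python
# Minimal sketch of a three-branch multi-task network for renal ultrasound.
# Encoder and head sizes are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class ThreeBranchRenalNet(nn.Module):
    def __init__(self, n_global: int = 2, n_local: int = 9):
        super().__init__()
        # Shared convolutional encoder over the B-mode ultrasound image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Branch 1: coarse kidney segmentation mask (upsampled to full size in practice).
        self.seg_head = nn.Conv2d(64, 1, 1)
        # Branch 2: global pathology classification over the whole image.
        self.global_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                         nn.Linear(64, n_global))
        # Branch 3: multi-label detection of local pathologies.
        self.local_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(64, n_local))

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.global_head(feats), self.local_head(feats)
```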

3.
Nature ; 601(7893): 415-421, 2022 01.
Article in English | MEDLINE | ID: mdl-34987220

ABSTRACT

Transcriptional and proteomic profiling of individual cells have revolutionized interpretation of biological phenomena by providing cellular landscapes of healthy and diseased tissues [1,2]. These approaches, however, do not describe dynamic scenarios in which cells continuously change their biochemical properties and downstream 'behavioural' outputs [3-5]. Here we used 4D live imaging to record tens to hundreds of morpho-kinetic parameters describing the dynamics of individual leukocytes at sites of active inflammation. By analysing more than 100,000 reconstructions of cell shapes and tracks over time, we obtained behavioural descriptors of individual cells and used these high-dimensional datasets to build behavioural landscapes. These landscapes recognized leukocyte identities in the inflamed skin and trachea, and uncovered a continuum of neutrophil states inside blood vessels, including a large, sessile state that was embraced by the underlying endothelium and associated with pathogenic inflammation. Behavioural screening in 24 mouse mutants identified the kinase Fgr as a driver of this pathogenic state, and interference with Fgr protected mice from inflammatory injury. Thus, behavioural landscapes report distinct properties of dynamic environments at high cellular resolution.


Subject(s)
Inflammation , Leukocytes , Proteomics , Animals , Cell Shape , Endothelium/immunology , Inflammation/immunology , Leukocytes/immunology , Mice , Neutrophils/immunology , Proto-Oncogene Proteins/immunology , src-Family Kinases/immunology
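The abstract describes building "behavioural landscapes" from high-dimensional morpho-kinetic descriptors of individual cells. The sketch below shows one generic way such a landscape could be constructed (feature normalization, low-dimensional embedding, and clustering into candidate states); it is an illustrative assumption only and does not reproduce the paper's actual pipeline or parameters.

```python
# Illustrative sketch: embed per-cell morpho-kinetic descriptors into a 2D
# "landscape" and cluster them into candidate behavioural states.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def behavioural_landscape(descriptors: np.ndarray, n_states: int = 6):
    """descriptors: (n_cells, n_features) morpho-kinetic parameters per cell."""
    z = StandardScaler().fit_transform(descriptors)              # normalize each feature
    embedding = PCA(n_components=2).fit_transform(z)             # 2D landscape coordinates
    states = KMeans(n_clusters=n_states, n_init=10).fit_predict(z)  # candidate behavioural states
    return embedding, states

# Synthetic data standing in for the >100,000 cell reconstructions in the paper.
rng = np.random.default_rng(0)
emb, labels = behavioural_landscape(rng.normal(size=(1000, 50)))
```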
4.
Med Image Anal ; 77: 102358, 2022 04.
Article in English | MEDLINE | ID: mdl-35066392

ABSTRACT

Cell detection and tracking applied to in vivo fluorescence microscopy have become essential tools in biomedicine for characterizing 4D (3D space plus time) biological processes at the cellular level. Traditional approaches to cell motion analysis by microscopy imaging, although based on automatic frameworks, still require manual supervision at certain stages of the pipeline. Hence, when dealing with a large amount of data, the analysis becomes extremely time-consuming and typically yields poor biological information. In this paper, we propose a fully automated system for segmentation, tracking, and feature extraction of migrating cells within blood vessels in 4D microscopy imaging. Our system consists of a robust 3D convolutional neural network (CNN) for joint blood vessel and cell segmentation, a 3D tracking module with collision handling, and a novel method for feature extraction that takes into account the particular geometry of the cell-vessel arrangement. Experiments on a large 4D intravital microscopy dataset show that the proposed system achieves significantly better performance than state-of-the-art tools for cell segmentation and tracking. Furthermore, we have designed an analytical method for cell behavior based on the automatically extracted features, which supports the hypotheses on leukocyte migration posed by expert biologists. This is the first time that such a comprehensive automatic analysis of immune cell migration has been performed, with a total population under study reaching hundreds of neutrophils and thousands of time points.


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Cell Movement , Diagnostic Imaging , Humans , Intravital Microscopy
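The abstract outlines a pipeline of segmentation, 3D tracking with collision handling, and geometry-aware feature extraction. The sketch below illustrates only the frame-to-frame association step of such a tracker, using optimal assignment with a distance gate as a crude stand-in for collision handling; the function name, gating threshold, and overall design are illustrative assumptions, not the paper's method.

```python
# Hedged sketch of frame-to-frame cell association for 4D tracking:
# match 3D centroids between consecutive frames by optimal assignment,
# rejecting matches beyond a gating distance.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_cells(prev_centroids: np.ndarray, curr_centroids: np.ndarray,
                    max_dist: float = 15.0):
    """prev_centroids, curr_centroids: (n, 3) arrays of cell centroids in microns."""
    # Pairwise Euclidean distances between all previous and current centroids.
    cost = np.linalg.norm(prev_centroids[:, None, :] - curr_centroids[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Keep only matches within the gate; unmatched cells start or terminate tracks.
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```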