Results 1 - 3 of 3
1.
Int J Comput Vis; 131(1): 259-283, 2023.
Article in English | MEDLINE | ID: mdl-36624862

ABSTRACT

The understanding of human-object interactions is fundamental in First Person Vision (FPV). Visual tracking algorithms that follow the objects manipulated by the camera wearer can provide useful information to effectively model such interactions. In recent years, the computer vision community has significantly improved the performance of tracking algorithms for a large variety of target objects and scenarios. Despite a few previous attempts to exploit trackers in the FPV domain, a methodical analysis of the performance of state-of-the-art trackers is still missing. This research gap raises the question of whether current solutions can be used "off-the-shelf" or whether more domain-specific investigations should be carried out. This paper aims to answer that question. We present the first systematic investigation of single object tracking in FPV. Our study extensively analyses the performance of 42 algorithms, including generic object trackers and baseline FPV-specific trackers. The analysis focuses on different aspects of the FPV setting, introduces new performance measures, and considers FPV-specific tasks. The study is made possible by TREK-150, a novel benchmark dataset composed of 150 densely annotated video sequences. Our results show that object tracking in FPV poses new challenges to current visual trackers. We highlight the factors causing such behavior and point out possible research directions. Despite these difficulties, we show that trackers bring benefits to FPV downstream tasks requiring short-term object tracking. We expect that generic object tracking will gain popularity in FPV as new, FPV-specific methodologies are investigated. Supplementary Information: The online version contains supplementary material available at 10.1007/s11263-022-01694-6.
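
As a concrete illustration of how a tracker-benchmark evaluation of this kind typically works, the sketch below computes a standard overlap-based success score for one tracked sequence. This is a generic one-pass evaluation sketch, not the actual TREK-150 protocol or its new performance measures; the box convention (x, y, w, h) and the input lists of per-frame boxes are assumptions made for the example.

    import numpy as np

    def iou(box_a, box_b):
        # Intersection-over-union of two boxes given as (x, y, w, h).
        xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
        yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
        inter = max(0.0, xb - xa) * max(0.0, yb - ya)
        union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
        return inter / union if union > 0 else 0.0

    def success_auc(pred_boxes, gt_boxes, thresholds=np.linspace(0.0, 1.0, 21)):
        # Fraction of frames whose overlap exceeds each threshold; the mean
        # over thresholds approximates the area under the success plot (AUC).
        overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
        success = np.array([(overlaps > t).mean() for t in thresholds])
        return success.mean()

A per-sequence AUC of this kind is the usual summary score; benchmark-level results are then averages over all annotated sequences.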

2.
Comput Med Imaging Graph; 102: 102142, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36446308

ABSTRACT

Convolutional neural networks (CNNs) applied to magnetic resonance imaging (MRI) have demonstrated their ability in the automatic diagnosis of knee injuries. Despite the promising results, currently available solutions do not take into account the particular anatomy of knee disorders. Existing works have shown that injuries are localized in small knee regions near the center of MRI scans. Based on such insights, we propose MRPyrNet, a CNN architecture capable of extracting more relevant features from these regions. Our solution is composed of a Feature Pyramid Network with Pyramidal Detail Pooling and can be plugged into any existing CNN-based diagnostic pipeline. The first module enhances the CNN's intermediate features to better detect the small-sized appearance of disorders, while the second captures such evidence by preserving its detailed information. An extensive evaluation campaign is conducted to understand in depth the potential of the proposed solution. The experimental results demonstrate that applying MRPyrNet to baseline methodologies improves their diagnostic capability, especially for anterior cruciate ligament tears and meniscal tears, thanks to MRPyrNet's ability to exploit the relevant appearance features of such disorders. Code is available at https://github.com/matteo-dunnhofer/MRPyrNet.
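
The "plug-in" idea of enhancing intermediate backbone features can be illustrated with a minimal pyramid-style merging module. This is a hedged sketch, not the published MRPyrNet code (which is available at the repository above); the class name, channel sizes, and layer choices are illustrative assumptions only.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PyramidEnhancer(nn.Module):
        # Hedged sketch: merges a coarse (low-resolution) feature map into a
        # finer one via a lateral 1x1 convolution and a top-down upsampling
        # step, the basic building block of a feature pyramid.
        def __init__(self, channels=256):
            super().__init__()
            self.lateral = nn.Conv2d(channels, channels, kernel_size=1)
            self.smooth = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

        def forward(self, coarse, fine):
            up = F.interpolate(coarse, size=fine.shape[-2:], mode="nearest")
            return self.smooth(self.lateral(fine) + up)

    # Example: enhance two backbone feature maps with matching channel counts.
    fine = torch.randn(1, 256, 32, 32)
    coarse = torch.randn(1, 256, 16, 16)
    enhanced = PyramidEnhancer(256)(coarse, fine)  # shape (1, 256, 32, 32)

Because the module only consumes and returns feature maps, a block of this kind can sit between the backbone and the classification head of an existing diagnostic pipeline without changing its interface.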


Subject(s)
Magnetic Resonance Imaging; Neural Networks, Computer
3.
Med Image Anal; 60: 101631, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31927473

ABSTRACT

Tracking the knee femoral condyle cartilage during ultrasound-guided minimally invasive procedures is important to avoid damaging this structure during such interventions. In this study, we propose a new deep learning method to track the femoral condyle cartilage, accurately and efficiently, in ultrasound sequences acquired under several clinical conditions that mimic realistic surgical setups. Our solution, which we name Siam-U-Net, requires minimal user initialization and combines a deep learning segmentation method with a siamese framework for tracking the cartilage in temporal and spatio-temporal sequences of 2D ultrasound images. Through extensive performance validation based on the Dice Similarity Coefficient, we demonstrate that our algorithm is able to track the femoral condyle cartilage with an accuracy comparable to that of experienced surgeons. We additionally show that the proposed method outperforms state-of-the-art segmentation models and trackers in localizing the cartilage. We argue that the proposed solution has the potential to support ultrasound guidance in minimally invasive knee procedures.
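
For reference, the Dice Similarity Coefficient used in the validation above can be computed on a pair of binary segmentation masks as sketched below. This is a minimal NumPy version for illustration, not the authors' evaluation code; the epsilon term is an assumption added to avoid division by zero on empty masks.

    import numpy as np

    def dice(pred_mask, gt_mask, eps=1e-7):
        # DSC = 2 * |A intersection B| / (|A| + |B|) for binary masks of equal shape.
        pred = pred_mask.astype(bool)
        gt = gt_mask.astype(bool)
        intersection = np.logical_and(pred, gt).sum()
        return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

The score ranges from 0 (no overlap) to 1 (perfect agreement) and is the standard figure of merit for comparing predicted and reference segmentations.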


Subject(s)
Cartilage, Articular/diagnostic imaging; Image Processing, Computer-Assisted/methods; Knee Joint/diagnostic imaging; Neural Networks, Computer; Ultrasonography, Interventional/methods; Arthroscopy; Deep Learning; Female; Healthy Volunteers; Humans; Imaging, Three-Dimensional; Male