1.
Front Physiol ; 12: 647603, 2021.
Article in English | MEDLINE | ID: mdl-34322033

ABSTRACT

"Brainless" cells, the living constituents inhabiting all biological materials, exhibit remarkably smart, i.e., stimuli-responsive and adaptive, behavior. The emergent spatial and temporal patterns of adaptation, observed as changes in cellular connectivity and tissue remodeling by cells, underpin neuroplasticity, muscle memory, immunological imprinting, and sentience itself, in diverse physiological systems from brain to bone. Connectomics addresses the direct connectivity of cells and cells' adaptation to dynamic environments through the manufacture of extracellular matrix, forming tissues and architectures comprising interacting organs and systems of organisms. There is an imperative to understand the physical renderings of cellular experience throughout life, from the time of emergence, through growth and adaptation, to the aging-associated degeneration of tissues. Here we address this need through the development of technological approaches that combine cross-length-scale (nm to m) structural data, acquired via multibeam scanning electron microscopy, with machine learning and information transfer using network modeling approaches. This pilot case study uses cutting-edge imaging methods for nano- to meso-scale study of cellular inhabitants within human hip tissue resected during the normal course of hip replacement surgery. We discuss the technical approach and workflow, identify the resulting opportunities as well as pitfalls to avoid, and delineate a path for cellular connectomics studies in diverse tissue/organ environments and their interactions within organisms and across species. Finally, we discuss the implications of the outlined approach for neuromechanics and the control of physical behavior and neuromuscular training.

2.
Transl Vis Sci Technol ; 5(2): 3, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26966639

ABSTRACT

PURPOSE: To develop and evaluate a software tool for automated detection of focal hyperpigmentary changes (FHC) in eyes with intermediate age-related macular degeneration (AMD). METHODS: Color fundus (CFP) and autofluorescence (AF) photographs of 33 eyes with FHC from 28 AMD patients (mean age 71 years) in the prospective longitudinal natural-history MODIAMD study were included. Fully automated and semiautomated registration of baseline images to corresponding follow-up images was evaluated. Following manual circumscription of individual FHC (four readings by two readers), a machine-learning algorithm was evaluated for automatic FHC detection. RESULTS: The overall pixel distance error for semiautomated registration (CFP follow-up to CFP baseline: median 5.7; CFP to AF images from the same visit: median 6.5) was larger than for automated image registration (4.5 and 5.7; P < 0.001 and P < 0.001). The total number of manually circumscribed objects varied between 637 and 1163, and the corresponding total size between 520,848 and 924,860 pixels. The learning algorithm achieved a sensitivity of 96% at a specificity of 98% using information from both CFP and AF images and treating small areas of FHC ("speckle appearance") as "neutral." CONCLUSIONS: FHC, a high-risk feature for progression of AMD to late stages, can be automatically assessed at different time points with sensitivity and specificity similar to manual outlining. Upon further development of the research prototype, this approach may be useful in both natural-history and interventional large-scale studies for a more refined classification and risk assessment of eyes with intermediate AMD. TRANSLATIONAL RELEVANCE: Automated FHC detection opens the door to a more refined and detailed classification and risk assessment of eyes with intermediate AMD in both natural-history and future interventional studies.
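The reported operating point (96% sensitivity at 98% specificity) follows the standard confusion-matrix definitions. A minimal sketch with hypothetical binary pixel labels (the arrays below are illustrative, not data from the study):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # FHC pixels correctly flagged
    tn = np.sum(~y_true & ~y_pred)  # background pixels correctly passed over
    fn = np.sum(y_true & ~y_pred)   # FHC pixels missed
    fp = np.sum(~y_true & y_pred)   # background pixels falsely flagged
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical labels: 1 = FHC, 0 = background.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 1])
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # → 0.75 and 5/6
```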

3.
Med Image Comput Comput Assist Interv ; 17(Pt 2): 154-61, 2014.
Article in English | MEDLINE | ID: mdl-25485374

ABSTRACT

In this work, we propose a novel framework for generic event monitoring in live cell culture videos, built on the assumption that unpredictable observations should correspond to biological events. We use a small set of event-free data to train a multi-output, multi-kernel Gaussian process model that operates as an event predictor by performing autoregression on a bank of heterogeneous features extracted from consecutive frames of a video sequence. We show that the prediction error of this model can be used as a probability measure of the presence of relevant events, enabling users to perform further analysis or monitoring of large-scale non-annotated data. We validate our approach on two phase-contrast sequence data sets containing mitosis and apoptosis events: a new private dataset of human bone cancer (osteosarcoma) cells and a benchmark dataset of stem cells.


Subject(s)
Cell Cycle; Cell Tracking/methods; Microscopy, Phase-Contrast/methods; Osteosarcoma/pathology; Pattern Recognition, Automated/methods; Stem Cells/cytology; Subtraction Technique; Algorithms; Cells, Cultured; Computer Simulation; Data Interpretation, Statistical; Humans; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity
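The core idea of the entry above — training a Gaussian process only on event-free frames and treating its prediction error as an event score — can be sketched as follows. This is a minimal illustration on a synthetic one-dimensional feature series with scikit-learn's generic GP regressor, not the paper's multi-output multi-kernel model; the feature series, lag order, and kernel choice are all assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic per-frame feature: smooth "event-free" dynamics with an
# abrupt jump standing in for a biological event (e.g., mitosis).
t = np.arange(200, dtype=float)
signal = np.sin(t / 15.0)
signal[150:] += 3.0  # injected "event"
signal += 0.05 * rng.standard_normal(t.size)

# Autoregressive setup: predict frame k's feature from the previous p frames.
p = 5
X = np.stack([signal[i : i + p] for i in range(len(signal) - p)])
y = signal[p:]

# Train only on event-free frames, as in the paper's setup.
train_end = 100
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:train_end], y[:train_end])

# Normalized prediction error as an event score: large surprises,
# relative to the model's own uncertainty, indicate candidate events.
mean, std = gp.predict(X, return_std=True)
score = np.abs(y - mean) / (std + 1e-9)
print(int(np.argmax(score)), float(score.max()))
```

The peak score lands where the model, trained on event-free frames, is confidently wrong — the onset of the injected jump.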
4.
IEEE Trans Pattern Anal Mach Intell ; 36(5): 1012-25, 2014 May.
Article in English | MEDLINE | ID: mdl-26353233

ABSTRACT

In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms; it reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.
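Contrastive divergence, which the authors use to learn their model parameters, is easiest to illustrate on a toy Bernoulli restricted Boltzmann machine rather than the paper's traffic-scene model. A minimal CD-1 sketch (the network sizes, learning rate, and training patterns are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy Bernoulli RBM: 6 visible units, 3 hidden units.
n_v, n_h = 6, 3
W = 0.01 * rng.standard_normal((n_v, n_h))
b_v = np.zeros(n_v)
b_h = np.zeros(n_h)

# Training data: two repeated binary patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

lr = 0.1
for epoch in range(200):
    for v0 in data:
        # Positive phase: hidden activations given the data.
        ph0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_h) < ph0).astype(float)
        # One Gibbs step (the "1" in CD-1): reconstruct visibles, re-infer hiddens.
        pv1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_v) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b_h)
        # CD-1 update: data statistics minus one-step reconstruction statistics.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b_v += lr * (v0 - v1)
        b_h += lr * (ph0 - ph1)

# After training, reconstructing a training pattern should roughly recover it.
v = data[0]
recon = sigmoid(sigmoid(v @ W + b_h) @ W.T + b_v)
print(np.round(recon, 2))
```

The same gradient structure — positive-phase statistics minus statistics from a short Markov chain — is what lets contrastive divergence train generative models whose partition function is intractable.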

5.
IEEE Trans Pattern Anal Mach Intell ; 35(4): 882-97, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22889818

ABSTRACT

Following recent advances in detection, context modeling, and tracking, scene understanding has been the focus of renewed interest in computer vision research. This paper presents a novel probabilistic 3D scene model that integrates state-of-the-art multiclass object detection, object tracking and scene labeling together with geometric 3D reasoning. Our model is able to represent complex object interactions such as inter-object occlusion, physical exclusion between objects, and geometric context. Inference in this model allows us to jointly recover the 3D scene context and perform 3D multi-object tracking from a mobile observer, for objects of multiple categories, using only monocular video as input. Contrary to many other approaches, our system performs explicit occlusion reasoning and is therefore capable of tracking objects that are partially occluded for extended periods of time, or objects that have never been observed to their full extent. In addition, we show that a joint scene tracklet model for the evidence collected over multiple frames substantially improves performance. The approach is evaluated for different types of challenging onboard sequences. We first show a substantial improvement to the state of the art in 3D multipeople tracking. Moreover, a similar performance gain is achieved for multiclass 3D tracking of cars and trucks on a challenging dataset.


Subject(s)
Algorithms; Human Activities; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Automobiles; Cluster Analysis; Databases, Factual; Humans; Models, Theoretical; Video Recording; Walking
6.
IEEE Trans Pattern Anal Mach Intell ; 34(4): 743-61, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21808091

ABSTRACT

Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.


Subject(s)
Image Enhancement/methods; Electronic Data Processing; Humans; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Sensitivity and Specificity
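The per-frame evaluation methodology described in the entry above is commonly summarized as miss rate versus false positives per image (FPPI). A minimal sketch, assuming detections have already been matched to ground truth (the scores and match flags below are hypothetical):

```python
import numpy as np

def miss_rate_vs_fppi(scores, is_tp, n_gt, n_images):
    """Sweep a confidence threshold over matched detections.

    scores   : confidence of each detection
    is_tp    : whether each detection matched a ground-truth pedestrian
    n_gt     : total ground-truth pedestrians across all frames
    n_images : number of evaluated frames
    """
    order = np.argsort(-np.asarray(scores))      # most confident first
    is_tp = np.asarray(is_tp, dtype=float)[order]
    tps = np.cumsum(is_tp)                       # true positives kept so far
    fps = np.cumsum(1.0 - is_tp)                 # false positives kept so far
    miss_rate = 1.0 - tps / n_gt                 # fraction of pedestrians missed
    fppi = fps / n_images                        # false positives per image
    return miss_rate, fppi

# Hypothetical detections from 4 frames containing 5 pedestrians in total.
scores = [0.9, 0.85, 0.7, 0.6, 0.4, 0.3]
is_tp = [1, 1, 0, 1, 0, 1]
mr, fppi = miss_rate_vs_fppi(scores, is_tp, n_gt=5, n_images=4)
print(mr, fppi)  # at the loosest threshold: miss rate 0.2, FPPI 0.5
```

Lowering the threshold trades misses for false positives; benchmark comparisons typically report the (log-averaged) miss rate over a fixed FPPI range.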