Results 1 - 6 of 6
1.
Morphologie ; 103(343): 139-147, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31570309

ABSTRACT

OBJECTIVE OF THE STUDY: Transcatheter mitral valve interventions are emerging as a viable alternative for patients at high risk. Two key aspects are crucial during preoperative planning: left ventricular outflow tract assessment and anatomical analysis. Given that manual anatomical analysis is time-consuming, an automated approach may improve the efficiency of preoperative planning. In this study, we present an automatic method to detect the mitral valve annulus and discuss a possible implementation of this method in clinical practice. PATIENTS: This retrospective study used data from 71 patients collected from multiple centers. The mean age of this cohort was 74.2±13.1 years; 56.1% of the patients were female and 43.9% male. MATERIALS AND METHODS: We trained three deep learning models to segment the area around the mitral valve annulus. In a post-processing step, we extracted the mitral valve annulus from this segmentation. As a final step, clinically relevant measurements such as the 2D perimeter, trigone-to-trigone (TT) distance, septal-to-lateral (SL) distance and commissure-to-commissure (IC) distance were derived from the predicted mitral valve annulus. The method was cross-validated using k-fold cross-validation. RESULTS: The predicted measurements showed excellent correlation with the manually obtained clinical measurements: 2D perimeter: R2=0.93, TT-distance: R2=0.86, SL-distance: R2=0.86 and IC-distance: R2=0.90. The automatic method's total analysis time per patient was less than 1 second, an enormous speed-up compared to the manual process (25 minutes). CONCLUSION: The efficiency and accuracy of the proposed method give us the confidence to move towards implementation of this technology in clinical practice. We propose a possible implementation of this method in clinical practice, which, in our opinion, will facilitate safe and efficient preoperative planning of transcatheter mitral valve interventions.
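As an illustration of the kind of post-processing described above, the short Python sketch below derives the reported measurements from an ordered ring of annulus points. The point array and landmark indices are hypothetical placeholders, not the paper's actual pipeline, and a true 2D perimeter would additionally require projecting the ring onto its best-fit plane.

```python
# Hypothetical sketch: deriving annulus measurements from an ordered ring of
# 3D annulus points (e.g. extracted from a predicted segmentation).
import numpy as np

def annulus_measurements(points, trigones, commissures, septal, lateral):
    """points: (N, 3) ordered annulus coordinates in mm; the index arguments
    mark assumed landmark positions found during post-processing."""
    ring = np.vstack([points, points[:1]])                    # close the ring
    perimeter = np.linalg.norm(np.diff(ring, axis=0), axis=1).sum()
    tt = np.linalg.norm(points[trigones[0]] - points[trigones[1]])
    sl = np.linalg.norm(points[septal] - points[lateral])
    ic = np.linalg.norm(points[commissures[0]] - points[commissures[1]])
    return {"perimeter_mm": perimeter, "TT_mm": tt, "SL_mm": sl, "IC_mm": ic}
```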


Subject(s)
Heart Valve Prosthesis Implantation/methods , Mitral Valve Insufficiency/surgery , Mitral Valve/diagnostic imaging , Patient Care Planning , Postoperative Complications/prevention & control , Aged , Aged, 80 and over , Deep Learning , Female , Heart Valve Prosthesis/adverse effects , Heart Valve Prosthesis Implantation/adverse effects , Heart Valve Prosthesis Implantation/instrumentation , Humans , Male , Middle Aged , Mitral Valve/pathology , Mitral Valve/surgery , Mitral Valve Insufficiency/diagnostic imaging , Multidetector Computed Tomography , Postoperative Complications/etiology , Prosthesis Design , Retrospective Studies
2.
J Neural Eng ; 14(3): 036021, 2017 06.
Article in English | MEDLINE | ID: mdl-28287076

ABSTRACT

OBJECTIVE: Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and thereby eliminate this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve the overall accuracy of ERP-based BCIs without calibration. APPROACH: We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the previously proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, although this decoder has high variance due to the random initialization of its parameters, it reaches a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method's strengths compensate for the other's weaknesses and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. MAIN RESULTS: Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method is less dependent on the random initialization of model parameters and is consequently more reliable. SIGNIFICANCE: Improving the accuracy and thus the reliability of calibrationless BCIs makes these systems more appealing for frequent use.
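A minimal sketch of the core idea follows, assuming each unsupervised decoder outputs a per-stimulus probability of being a target ERP; the convex mixture below only illustrates combining the two estimators and is not the authors' exact weighting scheme.

```python
# Illustrative sketch (not the paper's exact algorithm): mixing the target
# probabilities of an LLP-based and an EM-based unsupervised ERP decoder.
import numpy as np

def combine_decoders(p_llp, p_em, weight=0.5):
    """weight in [0, 1] trades off the low-variance LLP estimate against the
    potentially faster-converging EM estimate."""
    return weight * np.asarray(p_llp) + (1.0 - weight) * np.asarray(p_em)

# Example: select the stimulus whose flashes have the highest combined
# target probability.
p = combine_decoders([0.2, 0.7, 0.1], [0.3, 0.6, 0.2])
selected = int(np.argmax(p))
```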


Subject(s)
Brain-Computer Interfaces , Brain/physiology , Communication Aids for Disabled , Evoked Potentials/physiology , Machine Learning , Models, Statistical , Pattern Recognition, Automated/methods , Adult , Algorithms , Computer Simulation , Data Interpretation, Statistical , Female , Humans , Male , Reproducibility of Results , Sensitivity and Specificity , Task Performance and Analysis
3.
Appl Opt ; 55(1): 133-9, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26835632

ABSTRACT

We propose using a neural network approach in conjunction with digital holographic microscopy in order to rapidly determine relevant parameters such as the core and shell diameter of coated, non-absorbing spheres. We do so without requiring a time-consuming reconstruction of the cell image. In contrast to previous approaches, we are able to obtain a continuous value for parameters such as size, as opposed to binning into a discrete number of categories. Also, we are able to separately determine both core and shell diameter. For simulated particle sizes ranging between 7 and 20 µm, we obtain accuracies of (4.4±0.2)% and (0.74±0.01)% for the core and shell diameter, respectively.
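To make the regression setup concrete, the sketch below fits a small feed-forward network that predicts continuous core and shell diameters from feature vectors. The scikit-learn model, synthetic features, and diameter range are illustrative assumptions, not the network or data used in the paper.

```python
# Illustrative sketch only: continuous multi-output regression of core and
# shell diameters from (placeholder) hologram feature vectors.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))              # stand-in hologram features
y = rng.uniform(7.0, 20.0, size=(500, 2))    # [core, shell] diameters in µm

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X, y)
core_pred, shell_pred = model.predict(X[:1])[0]
```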


Subject(s)
Holography/methods , Neural Networks, Computer , Computer Simulation , Leukocytes/cytology
4.
J Neural Eng ; 12(6): 066027, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26580120

ABSTRACT

OBJECTIVE: State-of-the-art brain-computer interface (BCI) research focuses on improving individual components such as the application or the decoder that converts the user's brain activity to control signals. In this study, we investigate the interaction between these components in the P300 speller, a BCI for communication. We introduce a synergistic approach in which the stimulus presentation sequence is modified to enhance the machine learning decoding. In this way, we aim for improved overall BCI performance. APPROACH: First, a new stimulus presentation paradigm is introduced which provides flexibility in tuning the sequence of visual stimuli presented to the user. Next, an experimental setup in which this paradigm is compared to other paradigms uncovers the underlying mechanism of the interdependence between the application and the performance of the decoder. MAIN RESULTS: Extensive analysis of the experimental results reveals the changing requirements of the decoder with respect to the data recorded during the spelling session. When little data has been recorded, the balance between the number of target and non-target stimuli shown to the user is more important than the signal-to-noise ratio (SNR) of the recorded response signals. Only when more data has been collected does the SNR become the dominant factor. SIGNIFICANCE: For BCIs in general, knowing the dominant factor that affects decoder performance and being able to respond to it is of utmost importance for improving system performance. For the P300 speller, the proposed tunable paradigm offers the possibility to tune the application to the decoder's needs at any time and thus fully exploit this application-decoder interaction.
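The idea of tuning the stimulus sequence can be sketched as below, where a hypothetical target_fraction parameter controls the balance between target and non-target flashes; this is a simplified illustration, not the exact paradigm evaluated in the study.

```python
# Hypothetical sketch: a flash sequence whose target/non-target balance is
# tunable, loosely illustrating how the presentation paradigm could be
# adapted to the decoder's needs.
import random

def tunable_sequence(symbols, target, n_flashes, target_fraction):
    """Return a list of flashed symbols in which roughly `target_fraction`
    of the flashes contain the attended target symbol."""
    non_targets = [s for s in symbols if s != target]
    return [target if random.random() < target_fraction
            else random.choice(non_targets)
            for _ in range(n_flashes)]

# Example: 60 flashes with a 1-in-6 chance of flashing the target.
seq = tunable_sequence(list("ABCDEF"), target="C", n_flashes=60,
                       target_fraction=1 / 6)
```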


Subject(s)
Brain-Computer Interfaces , Electroencephalography/methods , Event-Related Potentials, P300/physiology , Machine Learning , Photic Stimulation/methods , Adult , Brain-Computer Interfaces/trends , Electroencephalography/trends , Female , Humans , Machine Learning/trends , Male
5.
Sci Rep ; 2: 287, 2012.
Article in English | MEDLINE | ID: mdl-22371825

ABSTRACT

Reservoir computing is a recently introduced, highly efficient bio-inspired approach for processing time-dependent data. The basic scheme of reservoir computing consists of a nonlinear recurrent dynamical system coupled to a single input layer and a single output layer. Within these constraints many implementations are possible. Here we report an optoelectronic implementation of reservoir computing based on a recently proposed architecture consisting of a single nonlinear node and a delay line. Our implementation is sufficiently fast for real-time information processing. We illustrate its performance on tasks of practical importance such as nonlinear channel equalization and speech recognition, and obtain results comparable to state-of-the-art digital implementations.
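A simplified numerical sketch of a delay-based reservoir (a single nonlinear node driving a set of virtual nodes along a delay line) is given below; the tanh nonlinearity, coupling scheme, and parameter values are illustrative assumptions rather than a model of the optoelectronic setup.

```python
# Simplified numerical sketch of a single-node, delay-line reservoir.
import numpy as np

def delay_reservoir(u, n_virtual=50, feedback=0.8, scale=0.5, seed=0):
    """u: 1-D input sequence. Returns a (len(u), n_virtual) state matrix of
    virtual-node responses sampled along the delay line."""
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1.0, 1.0, n_virtual)   # input mask over virtual nodes
    states = np.zeros((len(u), n_virtual))
    prev = np.zeros(n_virtual)
    for t, x in enumerate(u):
        # each virtual node mixes its state one delay ago with the masked input
        states[t] = np.tanh(feedback * prev + scale * mask * x)
        prev = states[t]
    return states

# A linear readout (e.g. ridge regression) is then trained on `states`.
```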

6.
Nat Commun ; 2: 468, 2011 Sep 13.
Article in English | MEDLINE | ID: mdl-21915110

ABSTRACT

Novel methods for information processing are highly desired in our information-driven society. Inspired by the brain's ability to process information, the recently introduced paradigm known as 'reservoir computing' shows that complex networks can efficiently perform computation. Here we introduce a novel architecture that reduces the usually required large number of elements to a single nonlinear node with delayed feedback. Through an electronic implementation, we experimentally and numerically demonstrate excellent performance in a speech recognition benchmark. Complementary numerical studies also show excellent performance for a time series prediction benchmark. These results prove that delay-dynamical systems, even in their simplest manifestation, can perform efficient information processing. This finding paves the way to feasible and resource-efficient technological implementations of reservoir computing.
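As a usage example of the delay_reservoir function sketched after entry 5, the snippet below trains a ridge-regression readout for one-step-ahead prediction on a synthetic signal; the data and task are placeholders, not the speech-recognition or time-series benchmarks reported in the paper.

```python
# Usage sketch: linear readout on delay-reservoir states for one-step-ahead
# prediction of a synthetic noisy sine wave (placeholder benchmark).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
u = np.sin(np.linspace(0, 40 * np.pi, 2000)) + 0.1 * rng.normal(size=2000)

states = delay_reservoir(u[:-1])   # reuses the function sketched after entry 5
targets = u[1:]                    # next-step prediction targets

readout = Ridge(alpha=1e-3).fit(states, targets)
nmse = np.mean((readout.predict(states) - targets) ** 2) / np.var(targets)
```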
