Results 1 - 20 of 39
1.
medRxiv ; 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38559196

ABSTRACT

Purpose: Visual prosthetics have emerged as a promising assistive technology for individuals with vision loss, yet research often overlooks the human aspects of this technology. While previous studies have concentrated on the perceptual experiences of implant recipients (implantees) or the attitudes of potential implantees towards near-future implants, a systematic account of how current implants are being used in everyday life is still lacking. Methods: We interviewed six recipients of the most widely used visual implants (Argus II and Orion) and six leading researchers in the field. Through thematic and statistical analyses, we explored the daily usage of these implants by implantees and compared their responses to the expectations of researchers. We also sought implantees' input on desired features for future versions, aiming to inform the development of the next generation of implants. Results: Although implants are designed to facilitate various daily activities, we found that implantees use them less frequently than researchers expected. This discrepancy primarily stems from issues with usability and reliability, with implantees finding alternative methods to accomplish tasks, reducing the need to rely on the implant. For future implants, implantees emphasized the desire for improved vision, smart integration, and increased independence. Conclusions: Our study reveals a significant gap between researcher expectations and implantee experiences with visual prostheses, underscoring the importance of focusing future research on usability and real-world application. Translational relevance: This work advocates for a better alignment between technology development and implantee needs to enhance clinical relevance and practical utility of visual prosthetics.

2.
J Neural Eng ; 21(2)2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38452381

ABSTRACT

Objective. Retinal prostheses evoke visual percepts by electrically stimulating functioning cells in the retina. Because perceptual thresholds vary drastically across subjects, among electrodes within a subject, and over time, retinal prosthesis users must undergo 'system fitting', a process performed to calibrate stimulation parameters according to the subject's perceptual thresholds. Although previous work has identified electrode-retina distance and impedance as key factors affecting thresholds, an accurate predictive model is still lacking. Approach. To address these challenges, we (1) fitted machine learning models to a large longitudinal dataset with the goal of predicting individual electrode thresholds and deactivation as a function of stimulus, electrode, and clinical parameters ('predictors') and (2) leveraged explainable artificial intelligence (XAI) to reveal which of these predictors were most important. Main results. Our models accounted for up to 76% of the perceptual threshold response variance and enabled predictions of whether an electrode was deactivated in a given trial with F1 and area under the ROC curve scores of up to 0.732 and 0.911, respectively. Our models identified novel predictors of perceptual sensitivity, including subject age, time since blindness onset, and electrode-fovea distance. Significance. Our results demonstrate that routinely collected clinical measures and a single session of system fitting might be sufficient to inform an XAI-based threshold prediction strategy, which has the potential to transform clinical practice in predicting visual outcomes.
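The deactivation-prediction metrics quoted above, F1 and area under the ROC curve, can be computed from first principles. The sketch below shows both; the rank-sum identity used for AUC is standard, but the labels and scores are invented for illustration.

```python
import numpy as np

def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def roc_auc(y_true, scores):
    """AUC via the rank-sum identity: the fraction of (positive, negative)
    pairs in which the positive example receives the higher score."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical per-trial deactivation labels and model scores
y = np.array([1, 0, 1, 1, 0, 0, 1, 0])
s = np.array([0.9, 0.2, 0.8, 0.6, 0.4, 0.1, 0.3, 0.55])
print(f1_score(y, (s > 0.5).astype(int)))  # 0.75
print(roc_auc(y, s))                       # 0.875
```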


Subject(s)
Visual Prosthesis , Humans , Artificial Intelligence , Electrodes, Implanted , Retina/physiology , Machine Learning , Electric Stimulation/methods
3.
Sci Rep ; 14(1): 5949, 2024 03 11.
Article in English | MEDLINE | ID: mdl-38467699

ABSTRACT

There are known individual differences in both the ability to learn the layout of novel environments and the flexibility of strategies for navigating known environments. However, it is unclear how navigational abilities are impacted by high-stress scenarios. Here we used immersive virtual reality (VR) to develop a novel behavioral paradigm to examine navigation under dynamically changing situations. We recruited 48 participants (24 female; ages 17-32) to navigate a virtual maze (7.5 m × 7.5 m). Participants learned the maze by moving along a fixed path past the maze's landmarks (paintings). Subsequently, participants were tasked with navigating the maze under either a non-stress condition or a high-stress condition. In the high-stress condition, their initial path was blocked, the environment was darkened, threatening music was played, fog obstructed more distal views of the environment, and participants were given a time limit of 20 s with a countdown timer displayed at the top of their screen. On trials where the path was blocked, we found self-reported stress levels and distance traveled increased while trial completion rate decreased (as compared to non-stressed control trials). On unblocked stress trials, participants were less likely to take a shortcut and consequently navigated less efficiently compared to control trials. Participants with more trait spatial anxiety reported more stress and navigated less efficiently. Overall, our results suggest that navigational abilities change considerably under high-stress conditions.
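The abstract does not state how navigation efficiency was quantified; one common measure is the ratio of the straight-line (optimal) distance to the distance actually traveled. A minimal sketch under that assumption:

```python
import math

def path_length(points):
    """Total Euclidean length of a traveled path given (x, y) waypoints."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def navigation_efficiency(traveled, start, goal):
    """Ratio of straight-line (optimal) distance to distance actually
    traveled: 1.0 is perfectly efficient, lower values mean more wandering."""
    actual = path_length(traveled)
    return math.dist(start, goal) / actual if actual > 0 else 0.0

# Hypothetical trajectory through a 7.5 m x 7.5 m maze
route = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0), (7.5, 4.0)]
print(navigation_efficiency(route, route[0], route[-1]))  # 8.5 m / 11.5 m ~ 0.739
```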


Subject(s)
Spatial Navigation , Stress, Physiological , Virtual Reality , Female , Humans , Individuality , Maze Learning , Male , Adolescent , Young Adult , Adult
4.
J Neural Eng ; 21(2)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38457841

ABSTRACT

Objective. Retinal implants use electrical stimulation to elicit perceived flashes of light ('phosphenes'). Single-electrode phosphene shape has been shown to vary systematically with stimulus parameters and the retinal location of the stimulating electrode, due to incidental activation of passing nerve fiber bundles. However, this knowledge has yet to be extended to paired-electrode stimulation. Approach. We retrospectively analyzed 3548 phosphene drawings made by three blind participants implanted with an Argus II Retinal Prosthesis. Phosphene shape (characterized by area, perimeter, major and minor axis length) and number of perceived phosphenes were averaged across trials and correlated with the corresponding single-electrode parameters. In addition, the number of phosphenes was correlated with stimulus amplitude and neuroanatomical parameters: electrode-retina and electrode-fovea distance as well as the electrode-electrode distance to ('between-axon') and along axon bundles ('along-axon'). Statistical analyses were conducted using linear regression and partial correlation analysis. Main results. Simple regression revealed that each paired-electrode shape descriptor could be predicted by the sum of the two corresponding single-electrode shape descriptors (p < .001). Multiple regression revealed that paired-electrode phosphene shape was primarily predicted by stimulus amplitude and electrode-fovea distance (p < .05). Interestingly, the number of elicited phosphenes tended to increase with between-axon distance (p < .05), but not with along-axon distance, in two out of three participants. Significance. The shape of phosphenes elicited by paired-electrode stimulation was well predicted by the shape of their corresponding single-electrode phosphenes, suggesting that two-point perception can be expressed as the linear summation of single-point perception.
The impact of the between-axon distance on the perceived number of phosphenes provides further evidence in support of the axon map model for epiretinal stimulation. These findings contribute to the growing literature on phosphene perception and have important implications for the design of future retinal prostheses.
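The linear-summation finding above can be illustrated with a simple regression: if paired-electrode shape descriptors are well predicted by the sum of the corresponding single-electrode descriptors, regressing one on the other should yield a slope near 1 and a high correlation. The data below are simulated for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-electrode phosphene areas (deg^2) for 50 electrode pairs
area_e1 = rng.uniform(0.5, 2.0, size=50)
area_e2 = rng.uniform(0.5, 2.0, size=50)

# Simulated paired-electrode areas that follow linear summation plus noise
area_pair = area_e1 + area_e2 + rng.normal(0.0, 0.1, size=50)

# Regress the paired descriptor on the summed single-electrode descriptors
x = area_e1 + area_e2
slope, intercept = np.polyfit(x, area_pair, 1)
r = np.corrcoef(x, area_pair)[0, 1]
print(f"slope={slope:.2f}, intercept={intercept:.2f}, r={r:.2f}")  # slope and r near 1
```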


Subject(s)
Retina , Visual Prosthesis , Humans , Retrospective Studies , Retina/physiology , Phosphenes , Axons , Electric Stimulation , Perception
5.
medRxiv ; 2023 Dec 26.
Article in English | MEDLINE | ID: mdl-37546858

ABSTRACT

Purpose: Retinal implants use electrical stimulation to elicit perceived flashes of light ("phosphenes"). Single-electrode phosphene shape has been shown to vary systematically with stimulus parameters and the retinal location of the stimulating electrode, due to incidental activation of passing nerve fiber bundles. However, this knowledge has yet to be extended to paired-electrode stimulation. Methods: We retrospectively analyzed 3548 phosphene drawings made by three blind participants implanted with an Argus II Retinal Prosthesis. Phosphene shape (characterized by area, perimeter, major and minor axis length) and number of perceived phosphenes were averaged across trials and correlated with the corresponding single-electrode parameters. In addition, the number of phosphenes was correlated with stimulus amplitude and neuroanatomical parameters: electrode-retina and electrode-fovea distance as well as the electrode-electrode distance to ("between-axon") and along axon bundles ("along-axon"). Statistical analyses were conducted using linear regression and partial correlation analysis. Results: Simple regression revealed that each paired-electrode shape descriptor could be predicted by the sum of the two corresponding single-electrode shape descriptors (p < .001). Multiple regression revealed that paired-electrode phosphene shape was primarily predicted by stimulus amplitude and electrode-fovea distance (p < .05). Interestingly, the number of elicited phosphenes tended to increase with between-axon distance (p < .05), but not with along-axon distance, in two out of three participants. Conclusions: The shape of phosphenes elicited by paired-electrode stimulation was well predicted by the shape of their corresponding single-electrode phosphenes, suggesting that two-point perception can be expressed as the linear summation of single-point perception. 
The notable impact of the between-axon distance on the perceived number of phosphenes provides further evidence in support of the axon map model for epiretinal stimulation. These findings contribute to the growing literature on phosphene perception and have important implications for the design of future retinal prostheses.

6.
bioRxiv ; 2023 May 30.
Article in English | MEDLINE | ID: mdl-37398256

ABSTRACT

Despite their immense success as a model of macaque visual cortex, deep convolutional neural networks (CNNs) have struggled to predict activity in visual cortex of the mouse, which is thought to be strongly dependent on the animal's behavioral state. Furthermore, most computational models focus on predicting neural responses to static images presented under head fixation, which are dramatically different from the dynamic, continuous visual stimuli that arise during movement in the real world. Consequently, it is still unknown how natural visual input and different behavioral variables may integrate over time to generate responses in primary visual cortex (V1). To address this, we introduce a multimodal recurrent neural network that integrates gaze-contingent visual input with behavioral and temporal dynamics to explain V1 activity in freely moving mice. We show that the model achieves state-of-the-art predictions of V1 activity during free exploration and demonstrate the importance of each component in an extensive ablation study. Analyzing our model using maximally activating stimuli and saliency maps, we reveal new insights into cortical function, including the prevalence of mixed selectivity for behavioral variables in mouse V1. In summary, our model offers a comprehensive deep-learning framework for exploring the computational principles underlying V1 neurons in freely-moving animals engaged in natural behavior.

7.
Front Neurosci ; 17: 1147729, 2023.
Article in English | MEDLINE | ID: mdl-37274203

ABSTRACT

Introduction: Understanding the retina in health and disease is a key issue for neuroscience and neuroengineering applications such as retinal prostheses. During degeneration, the retinal network undergoes complex and multi-stage neuroanatomical alterations, which drastically impact the retinal ganglion cell (RGC) response and are of clinical importance. Here we present a biophysically detailed in silico model of the cone pathway in the retina that simulates the network-level response to both light and electrical stimulation. Methods: The model included 11,138 cells belonging to nine different cell types (cone photoreceptors, horizontal cells, ON/OFF bipolar cells, ON/OFF amacrine cells, and ON/OFF ganglion cells) confined to a 300 × 300 × 210 µm patch of the parafoveal retina. After verifying that the model reproduced seminal findings about the light response of retinal ganglion cells (RGCs), we systematically introduced anatomical and neurophysiological changes (e.g., reduced light sensitivity of photoreceptors, cell death, cell migration) to the network and studied their effect on network activity. Results: The model was not only able to reproduce common findings about RGC activity in the degenerated retina, such as hyperactivity and increased electrical thresholds, but also offers testable predictions about the underlying neuroanatomical mechanisms. Discussion: Overall, our findings demonstrate how biophysical changes typified by cone-mediated retinal degeneration may impact retinal responses to light and electrical stimulation. These insights may further our understanding of retinal processing and inform the design of retinal prostheses.

8.
J Vis ; 23(5): 5, 2023 05 02.
Article in English | MEDLINE | ID: mdl-37140911

ABSTRACT

Over the past decade, extended reality (XR) has emerged as an assistive technology not only to augment residual vision of people losing their sight but also to study the rudimentary vision restored to blind people by a visual neuroprosthesis. A defining quality of these XR technologies is their ability to update the stimulus based on the user's eye, head, or body movements. To make the best use of these emerging technologies, it is valuable and timely to understand the state of this research and identify any shortcomings that are present. Here we present a systematic literature review of 227 publications from 106 different venues assessing the potential of XR technology to further visual accessibility. In contrast to other reviews, we sample studies from multiple scientific disciplines, focus on technology that augments a person's residual vision, and require studies to feature a quantitative evaluation with appropriate end users. We summarize prominent findings from different XR research areas, show how the landscape has changed over the past decade, and identify scientific gaps in the literature. Specifically, we highlight the need for real-world validation, the broadening of end-user participation, and a more nuanced understanding of the usability of different XR-based accessibility aids.


Subject(s)
Blindness , Visually Impaired Persons , Humans , Vision Disorders
9.
Ophthalmol Sci ; 3(3): 100288, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37131961

ABSTRACT

Purpose: To identify novel susceptibility loci for retinal vascular tortuosity, to better understand the molecular mechanisms modulating this trait, and reveal causal relationships with diseases and their risk factors. Design: Genome-wide Association Studies (GWAS) of vascular tortuosity of retinal arteries and veins followed by replication meta-analysis and Mendelian randomization (MR). Participants: We analyzed 116 639 fundus images of suitable quality from 63 662 participants from 3 cohorts, namely the UK Biobank (n = 62 751), the Swiss Kidney Project on Genes in Hypertension (n = 397), and OphtalmoLaus (n = 512). Methods: Using a fully automated retina image processing pipeline to annotate vessels and a deep learning algorithm to determine the vessel type, we computed the median arterial, venous and combined vessel tortuosity measured by the distance factor (the length of a vessel segment over its chord length), as well as by 6 alternative measures that integrate over vessel curvature. We then performed the largest GWAS of these traits to date and assessed gene set enrichment using the novel high-precision statistical method PascalX. Main Outcome Measure: We evaluated the genetic association of retinal tortuosity, measured by the distance factor. Results: Higher retinal tortuosity was significantly associated with higher incidence of angina, myocardial infarction, stroke, deep vein thrombosis, and hypertension. We identified 175 significantly associated genetic loci in the UK Biobank; 173 of these were novel and 4 replicated in our second, much smaller, metacohort. We estimated heritability at ∼25% using linkage disequilibrium score regression. Vessel type specific GWAS revealed 116 loci for arteries and 63 for veins. Genes with significant association signals included COL4A2, ACTN4, LGALS4, LGALS7, LGALS7B, TNS1, MAP4K1, EIF3K, CAPN12, ECH1, and SYNPO2. 
These tortuosity genes were overexpressed in arteries and heart muscle and linked to pathways related to the structural properties of the vasculature. We demonstrated that retinal tortuosity loci served pleiotropic functions as cardiometabolic disease variants and risk factors. Concordantly, MR revealed causal effects between tortuosity, body mass index, and low-density lipoprotein. Conclusions: Several alleles associated with retinal vessel tortuosity suggest a common genetic architecture of this trait with ocular diseases (glaucoma, myopia), cardiovascular diseases, and metabolic syndrome. Our results shed new light on the genetics of vascular diseases and their pathomechanisms and highlight how GWASs and heritability can be used to improve phenotype extraction from high-dimensional data, such as images. Financial Disclosures: The author(s) have no proprietary or commercial interest in any materials discussed in this article.
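The distance factor defined above (the length of a vessel segment over its chord length) is straightforward to compute from a sampled vessel centerline; a minimal sketch:

```python
import math

def distance_factor(points):
    """Tortuosity of a vessel segment: arc length over chord length.
    A straight segment gives 1.0; more tortuous vessels give larger values."""
    arc = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    return arc / math.dist(points[0], points[-1])

# Hypothetical vessel centerline sampled from a shallow sine wave
pts = [(x / 10, 0.2 * math.sin(2 * math.pi * x / 10)) for x in range(11)]
print(distance_factor(pts))  # > 1, since the path bends away from its chord
```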

10.
Biol Cybern ; 117(1-2): 95-111, 2023 04.
Article in English | MEDLINE | ID: mdl-37004546

ABSTRACT

Deep neural networks have surpassed human performance in key visual challenges such as object recognition, but require a large amount of energy, computation, and memory. In contrast, spiking neural networks (SNNs) have the potential to improve both the efficiency and biological plausibility of object recognition systems. Here we present an SNN model that uses spike-latency coding and winner-take-all inhibition (WTA-I) to efficiently represent visual stimuli using multi-scale parallel processing. Mimicking neuronal response properties in early visual cortex, images were preprocessed with three different spatial frequency (SF) channels, before they were fed to a layer of spiking neurons whose synaptic weights were updated using spike-timing-dependent plasticity. We investigate how the quality of the represented objects changes under different SF bands and WTA-I schemes. We demonstrate that a network of 200 spiking neurons tuned to three SFs can efficiently represent objects with as little as 15 spikes per neuron. Studying how core object recognition may be implemented using biologically plausible learning rules in SNNs may not only further our understanding of the brain, but also lead to novel and efficient artificial vision systems.
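A minimal sketch of the two mechanisms named above, spike-latency coding and winner-take-all inhibition, under the common assumption that stronger inputs spike earlier; the latency function and the choice of k are illustrative, not the paper's exact scheme.

```python
import numpy as np

def spike_latencies(intensities, t_max=50.0):
    """Spike-latency code: latency falls linearly with normalized intensity,
    so stronger inputs spike earlier; zero inputs never spike (inf)."""
    x = np.asarray(intensities, dtype=float)
    norm = x / x.max()
    return np.where(norm > 0, t_max * (1.0 - norm), np.inf)

def winner_take_all(latencies, k=2):
    """WTA inhibition: only the k earliest-spiking neurons survive."""
    survivors = np.full_like(latencies, np.inf)
    first_k = np.argsort(latencies)[:k]
    survivors[first_k] = latencies[first_k]
    return survivors

pix = [0.1, 0.9, 0.5, 0.0, 0.7]  # hypothetical pixel intensities
print(winner_take_all(spike_latencies(pix), k=2))
```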


Subject(s)
Models, Neurological , Neuronal Plasticity , Humans , Neuronal Plasticity/physiology , Neural Networks, Computer , Learning/physiology , Visual Perception/physiology
11.
medRxiv ; 2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36798201

ABSTRACT

To provide appropriate levels of stimulation, retinal prostheses must be calibrated to an individual's perceptual thresholds ('system fitting'), despite thresholds varying drastically across subjects, across electrodes within a subject, and over time. Although previous work has identified electrode-retina distance and impedance as key factors affecting thresholds, an accurate predictive model is still lacking. To address these challenges, we 1) fitted machine learning (ML) models to a large longitudinal dataset with the goal of predicting individual electrode thresholds and deactivation as a function of stimulus, electrode, and clinical parameters ('predictors') and 2) leveraged explainable artificial intelligence (XAI) to reveal which of these predictors were most important. Our models accounted for up to 77% of the perceptual threshold response variance and enabled predictions of whether an electrode was deactivated in a given trial with F1 and AUC scores of up to 0.740 and 0.913, respectively. Deactivation and threshold models identified novel predictors of perceptual sensitivity, including subject age, time since blindness onset, and electrode-fovea distance. Our results demonstrate that routinely collected clinical measures and a single session of system fitting might be sufficient to inform an XAI-based threshold prediction strategy, which may transform clinical practice in predicting visual outcomes.

12.
bioRxiv ; 2023 Jan 16.
Article in English | MEDLINE | ID: mdl-36711897

ABSTRACT

Understanding the retina in health and disease is a key issue for neuroscience and neuroengineering applications such as retinal prostheses. During degeneration, the retinal network undergoes complex and multi-stage neuroanatomical alterations, which drastically impact the retinal ganglion cell (RGC) response and are of clinical importance. Here we present a biophysically detailed in silico model of retinal degeneration that simulates the network-level response to both light and electrical stimulation as a function of disease progression. The model is not only able to reproduce common findings about RGC activity in the degenerated retina, such as hyperactivity and increased electrical thresholds, but also offers testable predictions about the underlying neuroanatomical mechanisms. Overall, our findings demonstrate how biophysical changes associated with retinal degeneration affect retinal responses to both light and electrical stimulation, which may further our understanding of visual processing in the retina as well as inform the design and application of retinal prostheses.

13.
Adv Neural Inf Process Syst ; 36: 15341-15357, 2023 Dec.
Article in English | MEDLINE | ID: mdl-39005944

ABSTRACT

Despite their immense success as a model of macaque visual cortex, deep convolutional neural networks (CNNs) have struggled to predict activity in visual cortex of the mouse, which is thought to be strongly dependent on the animal's behavioral state. Furthermore, most computational models focus on predicting neural responses to static images presented under head fixation, which are dramatically different from the dynamic, continuous visual stimuli that arise during movement in the real world. Consequently, it is still unknown how natural visual input and different behavioral variables may integrate over time to generate responses in primary visual cortex (V1). To address this, we introduce a multimodal recurrent neural network that integrates gaze-contingent visual input with behavioral and temporal dynamics to explain V1 activity in freely moving mice. We show that the model achieves state-of-the-art predictions of V1 activity during free exploration and demonstrate the importance of each component in an extensive ablation study. Analyzing our model using maximally activating stimuli and saliency maps, we reveal new insights into cortical function, including the prevalence of mixed selectivity for behavioral variables in mouse V1. In summary, our model offers a comprehensive deep-learning framework for exploring the computational principles underlying V1 neurons in freely-moving animals engaged in natural behavior.

14.
Adv Neural Inf Process Syst ; 36: 79376-79398, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38984104

ABSTRACT

Neuroprostheses show potential in restoring lost sensory function and enhancing human capabilities, but the sensations produced by current devices often seem unnatural or distorted. Exact placement of implants and differences in individual perception lead to significant variations in stimulus response, making personalized stimulus optimization a key challenge. Bayesian optimization could be used to optimize patient-specific stimulation parameters with limited noisy observations, but is not feasible for high-dimensional stimuli. Alternatively, deep learning models can optimize stimulus encoding strategies, but typically assume perfect knowledge of patient-specific variations. Here we propose a novel, practically feasible approach that overcomes both of these fundamental limitations. First, a deep encoder network is trained to produce optimal stimuli for any individual patient by inverting a forward model mapping electrical stimuli to visual percepts. Second, a preferential Bayesian optimization strategy utilizes this encoder to optimize patient-specific parameters for a new patient, using a minimal number of pairwise comparisons between candidate stimuli. We demonstrate the viability of this approach on a novel, state-of-the-art visual prosthesis model. We show that our approach quickly learns a personalized stimulus encoder, leads to dramatic improvements in the quality of restored vision, and is robust to noisy patient feedback and misspecifications in the underlying forward model. Overall, our results suggest that combining the strengths of deep learning and Bayesian optimization could significantly improve the perceptual experience of patients fitted with visual prostheses and may prove a viable solution for a range of neuroprosthetic technologies.

15.
J Neural Eng ; 19(6)2022 12 07.
Article in English | MEDLINE | ID: mdl-36541463

ABSTRACT

Objective. How can we return a functional form of sight to people who are living with incurable blindness? Despite recent advances in the development of visual neuroprostheses, the quality of current prosthetic vision is still rudimentary and does not differ much across different device technologies. Approach. Rather than aiming to represent the visual scene as naturally as possible, a Smart Bionic Eye could provide visual augmentations through the means of artificial intelligence-based scene understanding, tailored to specific real-world tasks that are known to affect the quality of life of people who are blind, such as face recognition, outdoor navigation, and self-care. Main results. Complementary to existing research aiming to restore natural vision, we propose a patient-centered approach to incorporate deep learning-based visual augmentations into the next generation of devices. Significance. The ability of a visual prosthesis to support everyday tasks might make the difference between abandoned technology and a widely adopted next-generation neuroprosthetic device.


Subject(s)
Facial Recognition , Visual Prosthesis , Humans , Artificial Intelligence , Quality of Life , Blindness/therapy
16.
Front Neurosci ; 16: 901337, 2022.
Article in English | MEDLINE | ID: mdl-36090266

ABSTRACT

Two of the main obstacles to the development of epiretinal prosthesis technology are electrodes that require current amplitudes above safety limits to reliably elicit percepts, and a failure to consistently elicit pattern vision. Here, we explored the causes of high current amplitude thresholds and poor spatial resolution within the Argus II epiretinal implant. We measured current amplitude thresholds and two-point discrimination (the ability to determine whether one or two electrodes had been stimulated) in 3 blind participants implanted with Argus II devices. Our data and simulations show that axonal stimulation, lift and retinal damage all play a role in reducing performance in the Argus II by limiting sensitivity, reducing spatial resolution, or both. Understanding the relative role of these various factors will be critical for developing and surgically implanting devices that can successfully subserve pattern vision.

17.
Augment Hum (2022) ; 2022: 82-93, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35856703

ABSTRACT

Bionic vision uses neuroprostheses to restore useful vision to people living with incurable blindness. However, a major outstanding challenge is predicting what people "see" when they use their devices. The limited field of view of current devices necessitates head movements to scan the scene, which is difficult to simulate on a computer screen. In addition, many computational models of bionic vision lack biological realism. To address these challenges, we present VR-SPV, an open-source virtual reality toolbox for simulated prosthetic vision that uses a psychophysically validated computational model to allow sighted participants to "see through the eyes" of a bionic eye user. To demonstrate its utility, we systematically evaluated how clinically reported visual distortions affect performance in a letter recognition and an immersive obstacle avoidance task. Our results highlight the importance of using an appropriate phosphene model when predicting visual outcomes for bionic vision.

18.
J Neurosci ; 42(30): 5882-5898, 2022 07 27.
Article in English | MEDLINE | ID: mdl-35732492

ABSTRACT

The nervous system is under tight energy constraints and must represent information efficiently. This is particularly relevant in the dorsal part of the medial superior temporal area (MSTd) in primates where neurons encode complex motion patterns to support a variety of behaviors. A sparse decomposition model based on a dimensionality reduction principle known as non-negative matrix factorization (NMF) was previously shown to account for a wide range of monkey MSTd visual response properties. This model resulted in sparse, parts-based representations that could be regarded as basis flow fields, a linear superposition of which accurately reconstructed the input stimuli. This model provided evidence that the seemingly complex response properties of MSTd may be a by-product of MSTd neurons performing dimensionality reduction on their input. However, an open question is how a neural circuit could carry out this function. In the current study, we propose a spiking neural network (SNN) model of MSTd based on evolved spike-timing-dependent plasticity and homeostatic synaptic scaling (STDP-H) learning rules. We demonstrate that the SNN model learns compressed and efficient representations of the input patterns similar to the patterns that emerge from NMF, resulting in MSTd-like receptive fields observed in monkeys. This SNN model suggests that STDP-H observed in the nervous system may be performing a similar function as NMF with sparsity constraints, which provides a test bed for mechanistic theories of how MSTd may efficiently encode complex patterns of visual motion to support robust self-motion perception.SIGNIFICANCE STATEMENT The brain may use dimensionality reduction and sparse coding to efficiently represent stimuli under metabolic constraints. Neurons in monkey area MSTd respond to complex optic flow patterns resulting from self-motion. 
We developed a spiking neural network model showing that MSTd-like response properties can emerge from evolving the STDP-H parameters of the connections between the middle temporal area (MT) and MSTd. Simulated MSTd neurons formed a sparse, reduced population code capable of encoding perceptual variables important for self-motion perception. This model demonstrates that complex neuronal responses observed in MSTd may emerge from efficient coding and suggests that neurobiological plasticity, like STDP-H, may contribute to reducing the dimensions of input stimuli and allowing spiking neurons to learn sparse representations.
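The NMF decomposition referenced above can be sketched with the classic Lee-Seung multiplicative updates, which keep both factors non-negative; this is a generic NMF implementation for illustration, not the study's exact model.

```python
import numpy as np

def nmf(V, k, n_iter=500, seed=0):
    """Non-negative matrix factorization V ~ W @ H via Lee-Seung
    multiplicative updates, which keep both factors non-negative."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 1e-3
    H = rng.random((k, V.shape[1])) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Hypothetical non-negative data with exact rank-2 structure
rng = np.random.default_rng(1)
V = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.4f}")  # small: rank-2 data, rank-2 factors
```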


Subject(s)
Motion Perception , Animals , Haplorhini , Models, Neurological , Motion Perception/physiology , Neural Networks, Computer , Neuronal Plasticity/physiology , Neurons/physiology , Photic Stimulation/methods , Primates , Temporal Lobe/physiology
19.
Adv Neural Inf Process Syst ; 35: 22671-22685, 2022 Dec.
Article in English | MEDLINE | ID: mdl-37719469

ABSTRACT

Sensory neuroprostheses are emerging as a promising technology to restore lost sensory function or augment human capabilities. However, sensations elicited by current devices often appear artificial and distorted. Although current models can predict the neural or perceptual response to an electrical stimulus, an optimal stimulation strategy solves the inverse problem: what is the required stimulus to produce a desired response? Here, we frame this as an end-to-end optimization problem, where a deep neural network stimulus encoder is trained to invert a known and fixed forward model that approximates the underlying biological system. As a proof of concept, we demonstrate the effectiveness of this hybrid neural autoencoder (HNA) in visual neuroprostheses. We find that HNA produces high-fidelity patient-specific stimuli representing handwritten digits and segmented images of everyday objects, and significantly outperforms conventional encoding strategies across all simulated patients. Overall this is an important step towards the long-standing challenge of restoring high-quality vision to people living with incurable blindness and may prove a promising solution for a variety of neuroprosthetic technologies.
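The inverse problem framed above can be illustrated in miniature: given a fixed forward model mapping stimuli to percepts (here a hypothetical linear one, far simpler than the paper's), gradient descent on the stimulus finds the input that best reproduces a target percept. A trained encoder network amortizes this per-target optimization across all targets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model: percept = A @ stimulus
n_percept, n_electrodes = 16, 8
A = rng.normal(size=(n_percept, n_electrodes))
target = rng.normal(size=n_percept)      # desired percept

# Solve the inverse problem by gradient descent on the stimulus
s = np.zeros(n_electrodes)
AtA, Atb = A.T @ A, A.T @ target
for _ in range(5000):
    s -= 0.02 * (AtA @ s - Atb)          # gradient of 0.5 * ||A s - target||^2

err = np.linalg.norm(A @ s - target) / np.linalg.norm(target)
print(f"relative percept error: {err:.3f}")
```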

20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 4477-4481, 2021 11.
Article in English | MEDLINE | ID: mdl-34892213

ABSTRACT

Retinal neuroprostheses are the only FDA-approved treatment option for blinding degenerative diseases. A major outstanding challenge is to develop a computational model that can accurately predict the elicited visual percepts (phosphenes) across a wide range of electrical stimuli. Here we present a phenomenological model that predicts phosphene appearance as a function of stimulus amplitude, frequency, and pulse duration. The model uses a simulated map of nerve fiber bundles in the retina to produce phosphenes with accurate brightness, size, orientation, and elongation. We validate the model on psychophysical data from two independent studies, showing that it generalizes well to new data, even with different stimuli and on different electrodes. Whereas previous models focused on either spatial or temporal aspects of the elicited phosphenes in isolation, we describe a more comprehensive approach that is able to account for many reported visual effects. The model is designed to be flexible and extensible, and can be fit to data from a specific user. Overall this work is an important first step towards predicting visual outcomes in retinal prosthesis users across a wide range of stimuli.
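The abstract does not give the model's equations; purely as an illustration of what a phenomenological appearance model does, the toy function below scales brightness and size with suprathreshold stimulus "drive". All coefficients and functional forms here are hypothetical, not the published model.

```python
def phosphene_appearance(amp, freq, pdur, amp_th=50.0):
    """Toy phenomenological scaling (all coefficients hypothetical, not the
    published model): longer pulses lower the effective threshold, brightness
    grows with suprathreshold drive and frequency, size grows with drive."""
    eff_th = amp_th * (0.2 / pdur) ** 0.5  # strength-duration-like threshold shift
    drive = max(amp / eff_th - 1.0, 0.0)   # suprathreshold drive
    brightness = drive * (freq / 20.0) ** 0.5
    size = 1.0 + 0.5 * drive               # size relative to baseline
    return brightness, size

print(phosphene_appearance(amp=100, freq=20, pdur=0.45))  # ~ (2.0, 2.0)
print(phosphene_appearance(amp=10, freq=20, pdur=0.45))   # (0.0, 1.0): below threshold
```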


Subject(s)
Phosphenes , Visual Prosthesis , Computer Simulation , Retina