ABSTRACT
Mechanical ventilators are safety-critical devices that help patients breathe, commonly found in hospital intensive care units (ICUs). Yet the high costs and proprietary nature of commercial ventilators inhibit their use as an educational and research platform. We present a fully open ventilator device, The People's Ventilator: PVP1, with complete hardware and software documentation, including detailed build instructions and a DIY cost of $1,700 USD. We validate PVP1 against key performance criteria specified in the U.S. Food and Drug Administration's Emergency Use Authorization for Ventilators, and, in a pediatric context, against a state-of-the-art commercial ventilator. Notably, PVP1 performs well over a wide range of test conditions, and its performance stability is demonstrated for a minimum of 75,000 breath cycles over three days with an adult mechanical test lung. As an open project, PVP1 can enable future educational, academic, and clinical developments in the ventilator space.
Subject(s)
Intensive Care Units, Ventilators, Mechanical, Adult, Child, Humans, Respiration, Artificial

ABSTRACT
The ability to control a behavioral task or stimulate neural activity based on animal behavior in real time is an important tool for experimental neuroscientists. Ideally, such tools are noninvasive, low-latency, and provide interfaces to trigger external hardware based on posture. Recent advances in pose estimation with deep learning allow researchers to train deep neural networks to accurately quantify a wide variety of animal behaviors. Here, we provide a new
Subject(s)
Feedback, Physiological/physiology, Posture/physiology, Animals, Behavior, Animal/physiology, Mice, Neural Networks, Computer, Software

ABSTRACT
Speech is perceived as a series of relatively invariant phonemes despite extreme variability in the acoustic signal. To be perceived as nearly identical phonemes, speech sounds that vary continuously over a range of acoustic parameters must be perceptually discretized by the auditory system. Such many-to-one mappings of undifferentiated sensory information to a finite number of discrete categories are ubiquitous in perception. Although many mechanistic models of phonetic perception have been proposed, they remain largely unconstrained by neurobiological data. Current human neurophysiological methods lack the necessary spatiotemporal resolution to provide it: speech is too fast, and the neural circuitry involved is too small. This study demonstrates that mice are capable of learning generalizable phonetic categories and can thus serve as a model for phonetic perception. Mice learned to discriminate consonants and generalized consonant identity across novel vowel contexts and speakers, consistent with true category learning. A mouse model, given the powerful genetic and electrophysiological tools available in this species for probing neural circuits, has the potential to greatly advance a mechanistic understanding of phonetic perception.