1.
Brain Stimul; 16(3): 840-853, 2023.
Article in English | MEDLINE | ID: mdl-37201865

ABSTRACT

The objective of this Limited Output Transcranial Electrical Stimulation 2023 (LOTES-2023) guidance is to update the previous LOTES-2017 guidance, and the two documents should therefore be considered together. LOTES provides a clearly articulated, transparent framework for the design of devices that deliver limited-output (a specified low-intensity range) transcranial electrical stimulation for a variety of intended uses. These guidelines can inform trial design and regulatory decisions, but they most directly inform manufacturer activities, and hence were presented in LOTES-2017 as a "Voluntary industry standard for compliance controlled limited output tES devices". In LOTES-2023 we emphasize that these standards are largely aligned across international standards and national regulations (including those of the USA, the EU, and South Korea), and so might be better understood as "Industry standards for compliance controlled limited output tES devices". LOTES-2023 is accordingly updated to reflect a consensus among emerging international standards as well as the best available scientific evidence. "Warnings" and "Precautions" are updated to align with current biomedical evidence and applications. LOTES standards apply to a constrained device dose range; within this range, and for each intended use case, manufacturers remain responsible for conducting device-specific risk management.


Subject(s)
Transcranial Direct Current Stimulation; Risk Management
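
For manufacturers, the practical consequence of the LOTES framework is that a compliance-controlled device must enforce its limited-output dose range in software or firmware rather than rely on user discretion. The Python sketch below shows one hypothetical form such a gate could take; the numeric limits, names, and fields are placeholders for illustration and are not the actual LOTES-2023 dose values, which must be taken from the published guidance.

    from dataclasses import dataclass

    # Placeholder limits for illustration only; real values come from the
    # published LOTES-2023 guidance, not from this sketch.
    MAX_CURRENT_MA = 2.0       # hypothetical peak current limit, mA
    MAX_DURATION_MIN = 30.0    # hypothetical single-session duration limit, min

    @dataclass
    class StimulationRequest:
        current_ma: float      # requested stimulation current, mA
        duration_min: float    # requested session duration, minutes

    def within_limited_output_range(req: StimulationRequest) -> bool:
        """Return True only if every parameter lies inside the configured range."""
        return (0.0 < req.current_ma <= MAX_CURRENT_MA
                and 0.0 < req.duration_min <= MAX_DURATION_MIN)

    # A compliance-controlled device rejects, rather than silently clamps,
    # an out-of-range dose request.
    for req in (StimulationRequest(1.5, 20.0), StimulationRequest(2.5, 20.0)):
        status = "accepted" if within_limited_output_range(req) else "rejected"
        print(req, "->", status)

A real device would apply this check at the firmware level and, as in the sketch, refuse rather than clamp an out-of-range request, so the delivered dose never diverges from the one that passed the compliance check.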
2.
Neural Comput; 26(8): 1763-1809, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24877728

ABSTRACT

Hierarchical temporal memory (HTM) is a biologically inspired framework that can be used to learn invariant representations of patterns in a wide range of applications. Classical HTM learning is mainly unsupervised, and once training is completed the network structure is frozen, which makes further training (i.e., incremental learning) difficult. In this letter, we develop a novel technique for incremental, supervised HTM learning based on gradient-descent error minimization. We prove that error backpropagation can be naturally and elegantly implemented through native HTM message passing based on belief propagation. Our experimental results demonstrate that a two-stage training approach, unsupervised pretraining followed by supervised refinement, is very effective (both accurate and efficient). This is in line with recent findings on other deep architectures.


Subject(s)
Memory; Neural Networks, Computer; Algorithms; Information Theory; Pattern Recognition, Automated
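
The letter's central recipe is a two-stage scheme: unsupervised pretraining followed by supervised refinement driven by gradient descent. A faithful HTM with belief-propagation message passing is too large for a short example, so the NumPy sketch below substitutes a tied-weight autoencoder for the unsupervised stage and ordinary backpropagation for the supervised stage; the data, layer sizes, and learning rates are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic toy data (illustrative only): 200 samples, 20 features, 3 classes.
    means = rng.normal(scale=2.0, size=(3, 20))        # one mean per class
    y = rng.integers(0, 3, size=200)                   # class labels
    X = means[y] + rng.normal(size=(200, 20))          # noisy samples
    Y = np.eye(3)[y]                                   # one-hot targets

    # Stage 1: unsupervised pretraining. A tied-weight autoencoder stands in
    # for HTM's unsupervised layer learning; it is NOT the HTM algorithm.
    W1 = rng.normal(scale=0.1, size=(20, 8))           # encoder weights
    for _ in range(500):
        H = np.tanh(X @ W1)                            # hidden code
        err = H @ W1.T - X                             # reconstruction error
        dH = (err @ W1) * (1 - H**2)                   # backprop through tanh
        W1 -= 1e-3 * (X.T @ dH + err.T @ H) / len(X)   # tied-weight gradient

    # Stage 2: supervised refinement. Gradient descent on cross-entropy,
    # analogous in spirit to the letter's backprop-via-message-passing step.
    W2 = rng.normal(scale=0.1, size=(8, 3))            # classifier weights
    for _ in range(500):
        H = np.tanh(X @ W1)
        logits = H @ W2
        P = np.exp(logits - logits.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)              # softmax probabilities
        dlogits = (P - Y) / len(X)                     # cross-entropy gradient
        dH = (dlogits @ W2.T) * (1 - H**2)
        W2 -= 0.5 * (H.T @ dlogits)
        W1 -= 0.5 * (X.T @ dH)                         # refine pretrained layer

    print("training accuracy:", (P.argmax(axis=1) == y).mean())

The design point the sketch preserves is that stage 2 updates the pretrained weights W1 as well as the new classifier weights W2; that joint update is what "supervised refinement" means here, as opposed to freezing the unsupervised features.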