1.
IEEE Trans Neural Netw Learn Syst; 27(3): 661-73, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26087501

ABSTRACT

This paper presents a programmable analog current-mode circuit that calculates the distance between two vectors of currents according to one of two distance measures. The Euclidean (L2) distance is commonly used; however, in many situations it can be replaced with the Manhattan (L1) distance, which is computationally less intensive and whose realization comes with less power dissipation and lower hardware complexity. The presented circuit can easily be reprogrammed to operate with either of these distances. The circuit is one of the components of an analog winner-takes-all neural network (NN) implemented in a complementary metal-oxide-semiconductor (CMOS) 0.18 µm technology. The learning process of the realized NN has been successfully verified by laboratory tests of the fabricated chip. The proposed distance calculation circuit (DCC) features a simple structure, which makes it suitable for networks with a relatively large number of neurons realized in hardware and operating in parallel. For example, a network with three inputs occupies a relatively small area of 3900 µm². When operating in the L2 mode, the circuit dissipates 85 µW from a 1.5 V supply at a maximum data rate of 10 MHz. In the L1 mode, the average dissipated power is reduced to 55 µW from a 1.2 V supply, while the data rate is 12 MHz. The given data rates are for the worst-case scenario, in which the input currents differ by only 1%-2%; in this case, the settling time of the comparators used in the DCC is quite long. However, such situations are very rare in the overall learning process.
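The trade-off between the two measures described in this abstract can be sketched in software. The snippet below is only a behavioral illustration with hypothetical current values; it is not a model of the analog current-mode implementation.

```python
import numpy as np

def vector_distance(x, w, mode="L2"):
    """Distance between two current vectors, selectable between the two
    measures handled by the reprogrammable DCC described above."""
    d = np.abs(np.asarray(x, dtype=float) - np.asarray(w, dtype=float))
    if mode == "L1":
        # Manhattan distance: only absolute differences and a sum,
        # hence the lower power and hardware complexity of the L1 mode.
        return d.sum()
    # Euclidean distance: additionally requires squaring and a square root.
    return np.sqrt(np.sum(d ** 2))

# Hypothetical three-input example (the paper's example network has 3 inputs).
x = [10.0, 9.8, 10.2]   # input currents, arbitrary units
w = [10.1, 9.7, 10.0]   # stored weight currents, arbitrary units
print(vector_distance(x, w, "L1"), vector_distance(x, w, "L2"))
```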

2.
Bioresour Technol; 169: 143-148, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25043347

ABSTRACT

Co-cultivation of fungi may be an excellent system for on-site production of cellulolytic enzymes in a single bioreactor. Enzyme supernatants from mixed cultures of Trichoderma reesei RutC30 with either the novel Aspergillus saccharolyticus AP, Aspergillus carbonarius ITEM 5010, or Aspergillus niger CBS 554.65, cultivated in solid-state fermentation, were tested for avicelase, FPase, endoglucanase, and β-glucosidase activity, as well as in hydrolysis of pretreated wheat straw. Around 30% more avicelase activity was produced in co-cultivation of T. reesei and A. saccharolyticus than in T. reesei monoculture, suggesting a synergistic interaction between these fungi. Fermentation broths of mixed cultures of T. reesei with the different Aspergillus strains resulted in approximately 80% hydrolysis efficiency, which was comparable to the results obtained using blended supernatants from parallel monocultures. This indicates that co-cultivation of T. reesei with A. saccharolyticus or A. carbonarius could be a competitive alternative to monoculture enzyme production and a cheaper alternative to commercial enzymes.


Subject(s)
Aspergillus/enzymology, Biotechnology/methods, Enzymes/biosynthesis, Trichoderma/enzymology, Triticum/enzymology, Waste Products, Fermentation, Hydrolysis, Triticum/chemistry
3.
Neural Netw; 25(1): 146-60, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21964449

ABSTRACT

An efficient transistor-level implementation of a flexible, programmable triangular function (TF) that can be used as a triangular neighborhood function (TNF) in ultra-low-power self-organizing maps (SOMs) realized as application-specific integrated circuits (ASICs) is presented. The proposed TNF block is a component of a larger neighborhood mechanism, whose role is to determine the distance between the winning neuron and all neighboring neurons. Detailed simulations carried out for a software model of such a network show that the TNF is a good approximation of the Gaussian neighborhood function (GNF), while being much easier to implement in hardware. The overall mechanism is very fast: in the CMOS 0.18 µm technology, the distances to all neighboring neurons are determined in parallel within no more than 11 ns, for an example neighborhood range R of 15. The TNF blocks in the individual neurons require another 6 ns to calculate the output values used directly in the adaptation process; this is also performed in parallel in all neurons. As a result, after the winning neuron has been determined, the entire map is ready for adaptation within no more than 17 ns, even for large numbers of neurons. This feature allows for the realization of ultra-low-power SOMs that are a hundred times faster than similar SOMs realized on a PC. The signal resolution at the output of the TNF block has a dominant impact on the overall energy consumption as well as on the silicon area. Detailed system-level simulations of the SOM show that even for low resolutions of 3 to 6 bits, the learning abilities of the SOM are not affected. The circuit performance has been verified by transistor-level HSPICE simulations carried out for different transistor models and different values of the supply voltage and ambient temperature, a typical procedure for commercial chips that makes the obtained results reliable.


Subject(s)
Neural Networks, Computer, Normal Distribution, Electronic Data Processing/statistics & numerical data
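The relation between the triangular and Gaussian neighborhood functions discussed in entry 3 can be illustrated with a short numerical sketch. The Gaussian width (sigma = R/3) and the quantization scheme below are assumptions chosen for illustration, not parameters taken from the paper.

```python
import numpy as np

def gaussian_neighborhood(d, R, sigma_ratio=3.0):
    """Gaussian neighborhood function (GNF); sigma = R / sigma_ratio is an assumption."""
    sigma = R / sigma_ratio
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

def triangular_neighborhood(d, R):
    """Triangular neighborhood function (TNF): linear decay, zero beyond R.
    Far simpler to realize in hardware than the exponential GNF."""
    return np.maximum(0.0, 1.0 - d / R)

def quantized_tnf(d, R, bits=4):
    """TNF output limited to a given resolution, mirroring the 3-6 bit
    signal resolutions the abstract reports as sufficient (mapping assumed)."""
    levels = 2 ** bits - 1
    return np.round(triangular_neighborhood(d, R) * levels) / levels

R = 15                      # example neighborhood range used in the abstract
d = np.arange(R + 1)        # topological distances from the winning neuron
print(triangular_neighborhood(d, R))
print(quantized_tnf(d, R, bits=4))
print(gaussian_neighborhood(d, R))
```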
4.
IEEE Trans Neural Netw; 22(12): 2091-104, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22049367

ABSTRACT

We present a new programmable neighborhood mechanism for hardware-implemented Kohonen self-organizing maps (SOMs), with three different map topologies realized on a single chip. The proposed circuit is a fully parallel, asynchronous architecture. The mechanism is very fast: in a medium-sized map with several hundred neurons implemented in a complementary metal-oxide-semiconductor (CMOS) 0.18 µm technology, all neurons start adapting their weights after no more than 11 ns. The adaptation is then carried out in parallel, a clear advantage over commonly used software-realized SOMs. The circuit is robust against process, supply voltage, and ambient temperature variations. Owing to its simple structure, it features a low energy consumption of a few pJ per neuron per learning pattern. In this paper, we discuss different aspects of the hardware realization, such as a suitable selection of the map topology and the initial neighborhood range, as optimizing these parameters is essential from the circuit-complexity point of view. For the optimal values of these parameters, the chip area and the power dissipation can be reduced by as much as 60% and 80%, respectively, without affecting the quality of learning.


Subject(s)
Computing Methodologies, Neural Networks, Computer, Signal Processing, Computer-Assisted/instrumentation, Transistors, Electronic, Equipment Design, Equipment Failure Analysis
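The role of the neighborhood mechanism in the adaptation phase of entry 4 can be summarized with a minimal software sketch of one SOM learning step. The rectangular grid, the Manhattan topological distance, and the triangular neighborhood coefficient below are assumptions made for illustration, not the on-chip realization.

```python
import numpy as np

def som_adapt_step(weights, grid, x, eta, R):
    """One adaptation step of a Kohonen SOM after presenting one pattern.

    weights : (n_neurons, n_inputs) weight matrix
    grid    : (n_neurons, 2) neuron coordinates on the map
    x       : (n_inputs,) input pattern
    eta     : learning rate
    R       : neighborhood range
    """
    # Winner-takes-all: the neuron whose weights are closest to the input.
    winner = np.argmin(np.sum((weights - x) ** 2, axis=1))
    # Topological distance of every neuron to the winner (Manhattan, assumed).
    d = np.abs(grid - grid[winner]).sum(axis=1)
    # Neighborhood coefficient (triangular, assumed); on the chip these
    # distances and coefficients are computed for all neurons in parallel.
    h = np.maximum(0.0, 1.0 - d / R)
    # All neurons update their weights simultaneously.
    weights += eta * h[:, None] * (x - weights)
    return weights, winner

# Hypothetical 8x8 map with 3 inputs.
side, n_in = 8, 3
rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(side) for j in range(side)])
weights = rng.random((side * side, n_in))
weights, winner = som_adapt_step(weights, grid, rng.random(n_in), eta=0.1, R=4)
```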