Neural Comput. 2024 Jul 19;36(8):1601-1625.
Article in English | MEDLINE | ID: mdl-38776959

ABSTRACT

The energy efficiency of hardware implementations of convolutional neural networks (CNNs) is critical to their widespread deployment in low-power mobile devices. Recently, a number of methods have been proposed for providing energy-optimal mappings of CNNs onto diverse hardware accelerators. However, their energy estimates are tied to specific implementation details and hardware parameters, which precludes machine-independent exploration of CNN energy measures. In this letter, we introduce a simplified theoretical energy complexity model for CNNs, based only on a two-level memory hierarchy, that asymptotically captures all important sources of energy consumption across different CNN hardware implementations. In this model, we derive a simple energy lower bound and calculate the energy complexity of evaluating a CNN layer for two common data flows, providing corresponding upper bounds. According to statistical tests, the theoretical upper and lower bounds fit asymptotically very well with the real energy consumption of CNN implementations on the Simba and Eyeriss hardware platforms, as estimated by the Timeloop/Accelergy program, which validates the proposed energy complexity model for CNNs.
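To make the idea of a two-level energy complexity model concrete, the sketch below estimates a rough energy lower bound for a single convolutional layer. It is an illustration under stated assumptions, not the paper's exact formulas: it assumes every input, weight, and output element must cross the DRAM/on-chip-buffer boundary at least once and every multiply-accumulate (MAC) must be executed, and the cost constants E_DRAM and E_MAC are placeholder values.

```python
# Illustrative sketch (not the paper's exact model): a rough energy lower
# bound for one convolutional layer under a two-level memory hierarchy
# (DRAM <-> on-chip buffer). Assumes each input, weight, and output element
# crosses the memory boundary at least once and each MAC is executed once.

from dataclasses import dataclass


@dataclass
class ConvLayer:
    h: int          # input height
    w: int          # input width
    c: int          # input channels
    m: int          # output channels (number of filters)
    k: int          # kernel size (k x k)
    stride: int = 1

    @property
    def h_out(self) -> int:
        return (self.h - self.k) // self.stride + 1

    @property
    def w_out(self) -> int:
        return (self.w - self.k) // self.stride + 1


# Hypothetical per-operation energy costs in arbitrary units; real values
# depend on the technology node and accelerator design.
E_DRAM = 200.0   # energy per element transferred between DRAM and buffer
E_MAC = 1.0      # energy per multiply-accumulate operation


def energy_lower_bound(layer: ConvLayer) -> float:
    """Unavoidable data transfers plus unavoidable MACs for one layer."""
    inputs = layer.h * layer.w * layer.c
    weights = layer.k * layer.k * layer.c * layer.m
    outputs = layer.h_out * layer.w_out * layer.m
    macs = outputs * layer.k * layer.k * layer.c
    return E_DRAM * (inputs + weights + outputs) + E_MAC * macs


if __name__ == "__main__":
    layer = ConvLayer(h=56, w=56, c=64, m=128, k=3)
    print(f"energy lower bound: {energy_lower_bound(layer):.3e} units")
```

In such a model, a concrete data flow (e.g., a weight-stationary or output-stationary schedule) may transfer some operands more than once when the on-chip buffer is too small to hold them, which is what pushes the corresponding upper bounds above this floor.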
