ABSTRACT
Simplicial-map neural networks are a recent neural network architecture induced by simplicial maps defined between simplicial complexes. It has been proved that simplicial-map neural networks are universal approximators and that they can be refined to be robust against adversarial attacks. In this paper, the refinement toward robustness is optimized by reducing the number of simplices (i.e., nodes) needed. We show experimentally that such a refined neural network is equivalent to the original one as a classification tool but requires much less storage.
ABSTRACT
It is well known that artificial neural networks are universal approximators. The classical existence result proves that, given a continuous function on a compact set embedded in an n-dimensional space, there exists a one-hidden-layer feed-forward network that approximates the function. In this paper, a constructive approach to this problem is given for the case of continuous functions on triangulated spaces. Once a triangulation of the space is fixed, a two-hidden-layer feed-forward network with a concrete set of weights is computed. The accuracy of the approximation depends on the refinement of the triangulation.
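The core idea behind the construction can be illustrated in one dimension, where a triangulation is just a partition of an interval and the simplicial approximation is barycentric (linear) interpolation on the segment containing the input. The sketch below is an illustrative simplification, not the paper's exact two-hidden-layer construction; the helper names (`triangulate`, `simplicial_approx`) are ours:

```python
# Illustrative sketch (assumption: 1-D domain [0, 1], uniform triangulation).
# Each vertex of the triangulation carries the function's value there;
# evaluation locates the 1-simplex containing x and interpolates via its
# barycentric coordinates. Refining the triangulation tightens the fit,
# mirroring the abstract's claim that accuracy depends on the refinement.
import math

def triangulate(n):
    """Vertices of a uniform triangulation of [0, 1] with n segments."""
    return [i / n for i in range(n + 1)]

def simplicial_approx(f, vertices):
    """Piecewise-linear interpolant of f induced by the triangulation."""
    values = [f(v) for v in vertices]
    def g(x):
        # Locate the 1-simplex [v_i, v_{i+1}] containing x.
        for i in range(len(vertices) - 1):
            if vertices[i] <= x <= vertices[i + 1]:
                # Barycentric coordinate of x within that simplex.
                t = (x - vertices[i]) / (vertices[i + 1] - vertices[i])
                return (1 - t) * values[i] + t * values[i + 1]
        raise ValueError("x lies outside the triangulated domain")
    return g

f = lambda x: math.sin(2 * math.pi * x)
coarse = simplicial_approx(f, triangulate(4))
fine = simplicial_approx(f, triangulate(64))
x = 0.3
err_coarse = abs(f(x) - coarse(x))
err_fine = abs(f(x) - fine(x))
print(err_fine < err_coarse)  # finer triangulation -> smaller error
```

In the papers' setting, the same interpolation is realized by a feed-forward network: one hidden layer computes the barycentric coordinates of the input with respect to the containing simplex, and the next layer combines the vertex values.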