What Neural Networks Are (Not) Good For?

Reference

CDMTCS Research Reports CDMTCS-556 (2021)

Abstract

Perceptron Neural Networks (PNNs) are essential components of intelligent systems because they produce efficient solutions to problems whose complexity overwhelms conventional computing methods. Many papers show that PNNs can approximate a wide variety of functions; comparatively few discuss their limitations, which are the scope of this paper. To this aim we define two classes of Boolean functions, sensitive and robust, and prove that an exponentially large set of sensitive functions is exponentially difficult to compute by multi-layer PNNs (hence incomputable by single-layer PNNs), while a comparatively large set of robust functions, though not all of them, is computable by single-layer PNNs. Finally, we use polynomial threshold PNNs to compute all Boolean functions with quantum annealing and present in detail a QUBO computation on the D-Wave Advantage. These results confirm that the successes of PNNs, or lack thereof, are partly determined by properties of the learned data sets, and suggest that sensitive functions may not be (efficiently) computable by PNNs.
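The sensitive/robust distinction in the abstract can be illustrated by the standard notion of Boolean sensitivity: the maximum, over all inputs, of the number of single-bit flips that change the function's value. The sketch below is a hypothetical illustration, not code from the report, and the report's exact definitions of "sensitive" and "robust" may differ; it contrasts parity (a classically hard case for single-layer perceptrons) with majority.

```python
# Brute-force sensitivity of a Boolean function on {0,1}^n (small n only).
# Illustrative sketch; the report's formal definitions may differ.
from itertools import product

def sensitivity(f, n):
    """Max over inputs x of the number of bit positions i such that
    flipping x_i changes f(x)."""
    best = 0
    for x in product((0, 1), repeat=n):
        s = sum(
            1 for i in range(n)
            if f(tuple(b ^ (1 if j == i else 0) for j, b in enumerate(x))) != f(x)
        )
        best = max(best, s)
    return best

parity = lambda x: sum(x) % 2               # flips under every single-bit change
majority = lambda x: int(sum(x) > len(x) // 2)

print(sensitivity(parity, 5))    # 5: parity is maximally sensitive
print(sensitivity(majority, 5))  # 3: majority is comparatively robust
```

Parity attains the maximum possible sensitivity n, which is consistent with the abstract's claim that highly sensitive functions are hard for PNNs, while majority's lower sensitivity fits the robust class.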
