Publication Details

Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers

ANSARI Mohammad S., MRÁZEK Vojtěch, COCKBURN Bruce F., SEKANINA Lukáš, VAŠÍČEK Zdeněk and HAN Jie. Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 28, no. 2, 2020, pp. 317-328. ISSN 1063-8210.
Czech title
Vylepšení přesnosti a obvodové realizace neuronových sítí pomocí aproximativních násobiček
Type
journal article
Language
english
Authors
Ansari Mohammad S. (UALBERTA)
Mrázek Vojtěch, Ing., Ph.D. (DCSY FIT BUT)
Cockburn Bruce F., Dr. (UALBERTA)
Sekanina Lukáš, prof. Ing., Ph.D. (DCSY FIT BUT)
Vašíček Zdeněk, doc. Ing., Ph.D. (DCSY FIT BUT)
Han Jie, Dr. (UALBERTA)
Keywords

approximate multipliers, Cartesian genetic programming, convolutional neural network, multi-layer perceptron, neural networks

Abstract

Improving the accuracy of a neural network (NN) usually requires using larger hardware that consumes more energy. However, the error tolerance of NNs and their applications allows approximate computing techniques to be applied to reduce implementation costs. Given that multiplication is the most resource-intensive and power-hungry operation in NNs, more economical approximate multipliers (AMs) can significantly reduce hardware costs. In this article, we show that using AMs can also improve the NN accuracy by introducing noise. We consider two categories of AMs: 1) deliberately designed and 2) Cartesian genetic programming (CGP)-based AMs. The exact multipliers in two representative NNs, a multilayer perceptron (MLP) and a convolutional NN (CNN), are replaced with approximate designs to evaluate their effect on the classification accuracy of the Modified National Institute of Standards and Technology (MNIST) and Street View House Numbers (SVHN) data sets, respectively. Interestingly, up to 0.63% improvement in the classification accuracy is achieved with reductions of 71.45% and 61.55% in the energy consumption and area, respectively. Finally, the features in an AM are identified that tend to make one design outperform others with respect to NN accuracy. Those features are then used to train a predictor that indicates how well an AM is likely to work in an NN.
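The core idea of the abstract can be illustrated with a minimal sketch. The paper evaluates deliberately designed and CGP-evolved approximate multipliers; the simple truncation-based 8-bit multiplier below is only a stand-in for that general idea (it is not one of the paper's designs), showing how an approximate multiply introduces small errors into a neuron's weighted sum.

```python
def approx_mul(a, b, truncated_bits=4):
    """Illustrative approximate 8-bit multiply: zero the lowest result bits.

    Dropping low-order bits is a classic way to trade numerical accuracy
    for smaller area and lower energy in a hardware multiplier -- the
    trade-off studied in the paper (the actual AMs there differ).
    """
    exact = a * b
    mask = ~((1 << truncated_bits) - 1)  # clear the lowest truncated_bits bits
    return exact & mask


def dot(xs, ws, mul):
    """Neuron pre-activation: sum of (possibly approximate) products."""
    return sum(mul(x, w) for x, w in zip(xs, ws))


# Hypothetical 8-bit activations and weights, chosen only for illustration.
xs = [12, 200, 37, 90]
ws = [3, 5, 7, 2]

exact = dot(xs, ws, lambda a, b: a * b)
approx = dot(xs, ws, approx_mul)
print(exact, approx, exact - approx)  # small accumulated error vs. exact
```

In the paper, this kind of multiplication error acts like injected noise during inference, which (for suitable AM designs) can even improve classification accuracy while cutting energy and area.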

Published
2020
Pages
317-328
Journal
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 28, no. 2, ISSN 1063-8210
Publisher
IEEE Computer Society
DOI
10.1109/TVLSI.2019.2940943
UT WoS
000510674300002
EID Scopus
BibTeX
@ARTICLE{FITPUB12066,
   author = "Mohammad S. Ansari and Vojt\v{e}ch Mr\'{a}zek and Bruce F. Cockburn and Luk\'{a}\v{s} Sekanina and Zden\v{e}k Va\v{s}\'{i}\v{c}ek and Jie Han",
   title = "Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers",
   pages = "317--328",
   journal = "IEEE Transactions on Very Large Scale Integration (VLSI) Systems",
   volume = 28,
   number = 2,
   year = 2020,
   ISSN = "1063-8210",
   doi = "10.1109/TVLSI.2019.2940943",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12066"
}