
Publication Details

Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers

ANSARI Mohammad S., MRÁZEK Vojtěch, COCKBURN Bruce F., SEKANINA Lukáš, VAŠÍČEK Zdeněk and HAN Jie. Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 99, no. 99, pp. 1-12. ISSN 1063-8210.
Type
journal article
Language
English
Authors
Ansari Mohammad S. (UALBERTA)
Mrázek Vojtěch, Ing., Ph.D. (DCSY FIT BUT)
Cockburn Bruce F., Dr. (UALBERTA)
Sekanina Lukáš, prof. Ing., Ph.D. (DCSY FIT BUT)
Vašíček Zdeněk, doc. Ing., Ph.D. (DCSY FIT BUT)
Han Jie, Dr. (UALBERTA)
Keywords
neural networks, approximate multipliers, Cartesian Genetic Programming, MLP, CNN
Abstract
Improving the accuracy of a neural network (NN) usually requires larger hardware that consumes more energy. However, the error tolerance of NNs and their applications allows approximate computing techniques to be applied to reduce implementation costs. Given that multiplication is the most resource-intensive and power-hungry operation in NNs, more economical approximate multipliers can significantly reduce hardware costs. In this article, we show that using approximate multipliers can also improve NN accuracy by introducing noise. We consider two categories of approximate multipliers: (1) deliberately designed and (2) Cartesian genetic programming (CGP)-based. The exact multipliers in two representative NNs, a multilayer perceptron (MLP) and a convolutional NN (CNN), are replaced with approximate designs to evaluate their effect on classification accuracy on the MNIST and SVHN datasets, respectively. Interestingly, an improvement of up to 0.63% in classification accuracy is achieved together with reductions of 71.45% in energy consumption and 61.55% in area. Finally, we identify the features of an approximate multiplier that tend to make one design outperform others with respect to NN accuracy. These features are then used to train a predictor that indicates how well an approximate multiplier is likely to work in an NN.
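To illustrate the kind of replacement the abstract describes, the sketch below swaps the exact multiplications in an 8-bit quantized dot product for an approximate multiplier evaluated through a lookup table, while keeping the accumulation exact. This is a minimal sketch under stated assumptions: the truncation-based multiplier, the 8-bit unsigned quantization, and all function names here are illustrative stand-ins, not the deliberately designed or CGP-evolved circuits evaluated in the article.

```python
import numpy as np

def approx_mul8(a: int, b: int, drop_bits: int = 2) -> int:
    """Toy unsigned 8x8-bit approximate multiplier: truncate the
    drop_bits least-significant bits of each operand before an exact
    multiply. A hypothetical stand-in for the approximate designs
    studied in the paper."""
    a_t = (a >> drop_bits) << drop_bits
    b_t = (b >> drop_bits) << drop_bits
    return a_t * b_t

# Precompute a 256x256 lookup table so the approximate multiplier can be
# applied elementwise to whole weight/activation tensors during inference.
LUT = np.array([[approx_mul8(a, b) for b in range(256)] for a in range(256)],
               dtype=np.int64)

def approx_dot(weights_q: np.ndarray, acts_q: np.ndarray) -> int:
    """Dot product of 8-bit quantized weights and activations using the
    approximate multiplier; only the multiplications are approximated,
    the accumulation remains exact."""
    return int(LUT[weights_q, acts_q].sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.integers(0, 256, size=128, dtype=np.uint8)
    x = rng.integers(0, 256, size=128, dtype=np.uint8)
    exact = int(np.dot(w.astype(np.int64), x.astype(np.int64)))
    approx = approx_dot(w, x)
    print(f"exact = {exact}, approximate = {approx}, "
          f"relative error = {abs(exact - approx) / exact:.4%}")
```

In a full experiment, such a lookup table would replace every multiplication in the MLP or CNN inference path, and the resulting classification accuracy would be compared against the exact-multiplier baseline.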
Published
2019 (in print)
Pages
1-12
Journal
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 99, no. 99, ISSN 1063-8210
Publisher
IEEE Computer Society
DOI
10.1109/TVLSI.2019.2940943
BibTeX
@ARTICLE{FITPUB12066,
   author = "S. Mohammad Ansari and Vojt\v{e}ch Mr\'{a}zek and F. Bruce Cockburn and Luk\'{a}\v{s} Sekanina and Zden\v{e}k Va\v{s}\'{i}\v{c}ek and Jie Han",
   title = "Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers",
   pages = "1--12",
   journal = "IEEE Transactions on Very Large Scale Integration (VLSI) Systems",
   volume = 99,
   number = 99,
   year = 2019,
   ISSN = "1063-8210",
   doi = "10.1109/TVLSI.2019.2940943",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12066"
}