Result Details

Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers

ANSARI, M.; MRÁZEK, V.; COCKBURN, B.; SEKANINA, L.; VAŠÍČEK, Z.; HAN, J. Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2020, vol. 28, no. 2, p. 317-328. ISSN: 1063-8210.
Type
journal article
Language
English
Authors
ANSARI, M.
Mrázek Vojtěch, Ing., Ph.D., DCSY (FIT)
COCKBURN, B.
Sekanina Lukáš, prof. Ing., Ph.D., DCSY (FIT)
Vašíček Zdeněk, doc. Ing., Ph.D., DCSY (FIT)
HAN, J.
Abstract

Improving the accuracy of a neural network (NN) usually requires using larger hardware that consumes more energy. However, the error tolerance of NNs and their applications allow approximate computing techniques to be applied to reduce implementation costs. Given that multiplication is the most resource-intensive and power-hungry operation in NNs, more economical approximate multipliers (AMs) can significantly reduce hardware costs. In this article, we show that using AMs can also improve the NN accuracy by introducing noise. We consider two categories of AMs: 1) deliberately designed and 2) Cartesian genetic programming (CGP)-based AMs. The exact multipliers in two representative NNs, a multilayer perceptron (MLP) and a convolutional NN (CNN), are replaced with approximate designs to evaluate their effect on the classification accuracy of the Modified National Institute of Standards and Technology (MNIST) and Street View House Numbers (SVHN) data sets, respectively. Interestingly, up to 0.63% improvement in the classification accuracy is achieved with reductions of 71.45% and 61.55% in the energy consumption and area, respectively. Finally, the features in an AM are identified that tend to make one design outperform others with respect to NN accuracy. Those features are then used to train a predictor that indicates how well an AM is likely to work in an NN.
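To illustrate the core idea of the abstract, the sketch below replaces exact multiplications in a dot product (the building block of an NN layer) with a simple truncation-based approximate multiplier. This is a minimal, hypothetical example for intuition only; it is not one of the deliberately designed or CGP-based AM circuits evaluated in the paper.

```python
# Illustrative only: a toy truncation-based approximate multiplier for
# 8-bit unsigned operands, used inside a dot product as an NN layer would.
# The actual AM designs studied in the paper are hardware circuits with
# different error characteristics.

def approx_mul(a: int, b: int, k: int = 2) -> int:
    """Approximate multiply: zero out the k least-significant bits of
    each operand before multiplying, saving partial-product hardware."""
    a_t = (a >> k) << k
    b_t = (b >> k) << k
    return a_t * b_t

def approx_dot(xs, ws, k: int = 2) -> int:
    """Dot product built on the approximate multiplier; accumulation
    (addition) remains exact, as is typical in AM-based NN designs."""
    return sum(approx_mul(x, w, k) for x, w in zip(xs, ws))

if __name__ == "__main__":
    xs = [12, 200, 55, 7]   # example activations (8-bit range)
    ws = [3, 17, 90, 250]   # example weights (8-bit range)
    exact = sum(x * w for x, w in zip(xs, ws))
    approx = approx_dot(xs, ws, k=2)
    print(exact, approx, abs(exact - approx) / exact)
```

The small multiplicative error introduced this way acts as noise on the layer's pre-activations, which, as the abstract notes, can sometimes regularize the network and even improve classification accuracy while reducing energy and area.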

Keywords

approximate multipliers, Cartesian genetic programming, convolutional neural network, multi-layer perceptron, neural networks

Published
2020
Pages
317–328
Journal
IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, vol. 28, no. 2, ISSN 1063-8210
DOI
10.1109/TVLSI.2019.2940943
UT WoS
000510674300002
EID Scopus
BibTeX
@article{BUT161464,
  author="ANSARI, M. and MRÁZEK, V. and COCKBURN, B. and SEKANINA, L. and VAŠÍČEK, Z. and HAN, J.",
  title="Improving the Accuracy and Hardware Efficiency of Neural Networks Using Approximate Multipliers",
  journal="IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS",
  year="2020",
  volume="28",
  number="2",
  pages="317--328",
  doi="10.1109/TVLSI.2019.2940943",
  issn="1063-8210",
  url="https://www.fit.vut.cz/research/publication/12066/"
}
Projects
Advanced Methods of Nature-Inspired Optimisation and HPC Implementation for the Real-Life Applications, MŠMT, INTER-EXCELLENCE - Podprogram INTER-COST, LTC18053, start: 2018-06-01, end: 2020-02-29, completed