Publication Details

RoHNAS: A Neural Architecture Search Framework with Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks

MARCHISIO Alberto, MRÁZEK Vojtěch, MASSA Andrea, BUSSOLINO Beatrice, MARTINA Maurizio and SHAFIQUE Muhammad. RoHNAS: A Neural Architecture Search Framework with Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks. IEEE Access, vol. 10, pp. 109043-109055, 2022. ISSN 2169-3536. Available from: https://ieeexplore.ieee.org/document/9917535
Czech title
RoHNAS: Systém pro automatický návrh architektur neuronových sítí s optimalizací pro odolnost proti útokům a hardwarovou efektivitou pro konvoluční a kapsulové sítě
Type
journal article
Language
english
Authors
Marchisio Alberto (TU-Wien)
Mrázek Vojtěch, Ing., Ph.D. (DCSY FIT BUT)
Massa Andrea (POLITO)
Bussolino Beatrice (POLITO)
Martina Maurizio (POLITO)
Shafique Muhammad (TU-Wien)
URL
Keywords

Adversarial Robustness, Energy Efficiency, Latency, Memory, Hardware-Aware Neural Architecture Search, Evolutionary Algorithm, Deep Neural Networks, Capsule Networks

Abstract

Neural Architecture Search (NAS) algorithms aim at finding efficient Deep Neural Network (DNN) architectures for a given application under given system constraints. DNNs are computationally complex as well as vulnerable to adversarial attacks. To address multiple design objectives, we propose RoHNAS, a novel NAS framework that jointly optimizes for adversarial robustness and hardware efficiency of DNNs executed on specialized hardware accelerators. Besides traditional convolutional DNNs, RoHNAS additionally accounts for complex types of DNNs such as Capsule Networks. To reduce the exploration time, RoHNAS analyzes and selects appropriate values of adversarial perturbation for each dataset to employ in the NAS flow. Extensive evaluations on multi-Graphics Processing Unit (GPU) High Performance Computing (HPC) nodes provide a set of Pareto-optimal solutions, exploiting the trade-off between the above-discussed design objectives. For example, a Pareto-optimal DNN for the CIFAR-10 dataset exhibits 86.07% accuracy, while having an energy consumption of 38.63 mJ, a memory footprint of 11.85 MiB, and a latency of 4.47 ms.
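The Pareto-optimal selection mentioned in the abstract can be illustrated with a minimal sketch. This is a hypothetical example of multi-objective dominance filtering, not code from the RoHNAS implementation; the objective tuple (accuracy, energy, memory, latency) and the candidate values other than the paper's CIFAR-10 example are illustrative assumptions.

```python
def dominates(a, b):
    """True if candidate a dominates b: no worse in every objective and
    strictly better in at least one. Tuples are (accuracy, energy, memory,
    latency); accuracy is maximized, the remaining costs are minimized."""
    acc_a, *costs_a = a
    acc_b, *costs_b = b
    no_worse = acc_a >= acc_b and all(x <= y for x, y in zip(costs_a, costs_b))
    strictly_better = acc_a > acc_b or any(x < y for x, y in zip(costs_a, costs_b))
    return no_worse and strictly_better

def pareto_front(candidates):
    """Keep only the candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

# (accuracy %, energy mJ, memory MiB, latency ms)
candidates = [
    (86.07, 38.63, 11.85, 4.47),  # the paper's CIFAR-10 example point
    (84.00, 50.00, 15.00, 6.00),  # hypothetical: dominated in every objective
    (88.00, 60.00, 20.00, 8.00),  # hypothetical: more accurate but costlier
]
print(pareto_front(candidates))  # the dominated candidate is filtered out
```

A full evolutionary NAS loop would apply such a filter each generation to retain the non-dominated architectures for the next round of mutation and evaluation.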

Published
2022
Pages
109043-109055
Journal
IEEE Access, vol. 10, ISSN 2169-3536
Publisher
Institute of Electrical and Electronics Engineers
DOI
10.1109/ACCESS.2022.3214312
UT WoS
000870222300001
EID Scopus
BibTeX
@ARTICLE{FITPUB12427,
   author = "Alberto Marchisio and Vojt\v{e}ch Mr\'{a}zek and Andrea Massa and Beatrice Bussolino and Maurizio Martina and Muhammad Shafique",
   title = "RoHNAS: A Neural Architecture Search Framework with Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks",
   pages = "109043--109055",
   journal = "IEEE Access",
   volume = 10,
   year = 2022,
   ISSN = "2169-3536",
   doi = "10.1109/ACCESS.2022.3214312",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12427"
}