Result detail
A Reality Check on Inference at Mobile Networks Edge
CARTAS, A.
Kocour Martin, Ing.
RAMAN, A.
LEONTIADIS, I.
Luque Jordi, FIT
SASTRY, N.
NUNEZ-MARTINEZ, L.
PERINO, D.
PERALES, C.
Edge computing is considered a key enabler to deploy Artificial Intelligence platforms to provide real-time applications such as AR/VR or cognitive assistance. Previous works show computing capabilities deployed very close to the user can actually reduce the end-to-end latency of such interactive applications. Nonetheless, the main performance bottleneck remains in the machine learning inference operation. In this paper, we question some assumptions of these works, such as the network location where edge computing is deployed and the considered software architectures, within the framework of a couple of popular machine learning tasks. Our experimental evaluation shows that after performance tuning that leverages recent advances in deep learning algorithms and hardware, network latency is now the main bottleneck on end-to-end application performance. We also report that deploying computing capabilities at the first network node still provides latency reduction but, overall, it is not required by all applications. Based on our findings, we overview the requirements and sketch the design of an adaptive architecture for general machine learning inference across edge locations.
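To illustrate the latency decomposition the abstract refers to, the following minimal Python sketch (not from the paper) splits the end-to-end latency of an offloaded inference request into server-side inference time and the residual network/serialization time. The endpoint URL and the X-Inference-Ms response header are hypothetical assumptions, not an API described by the authors.

import statistics
import time
import urllib.request

EDGE_ENDPOINT = "http://edge.example.com/infer"  # hypothetical edge server

def measure_offload(url: str, payload: bytes, runs: int = 20) -> None:
    """POST `payload` to an inference endpoint `runs` times and split the
    measured end-to-end latency into server-side inference time (assumed
    to be reported back in an `X-Inference-Ms` header) and the remainder,
    which is dominated by network transfer."""
    e2e_ms, infer_ms = [], []
    for _ in range(runs):
        req = urllib.request.Request(
            url, data=payload,
            headers={"Content-Type": "application/octet-stream"})
        t0 = time.perf_counter()
        with urllib.request.urlopen(req, timeout=5) as resp:
            resp.read()
            server = float(resp.headers.get("X-Inference-Ms", "nan"))
        e2e_ms.append((time.perf_counter() - t0) * 1000.0)
        infer_ms.append(server)
    net_ms = [t - s for t, s in zip(e2e_ms, infer_ms)]
    print(f"median end-to-end : {statistics.median(e2e_ms):6.1f} ms")
    print(f"median inference  : {statistics.median(infer_ms):6.1f} ms")
    print(f"median network+io : {statistics.median(net_ms):6.1f} ms")

if __name__ == "__main__":
    # e.g. one JPEG frame of an object-detection request (dummy bytes here)
    measure_offload(EDGE_ENDPOINT, payload=b"\x00" * 100_000)

Under the paper's finding, after inference is tuned the first two medians diverge: the network+io share dominates, which is why the deployment location of the edge node matters more than further model optimization.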
Edge computing, Artificial Intelligence
@inproceedings{BUT156850,
author="CARTAS, A. and KOCOUR, M. and RAMAN, A. and LEONTIADIS, I. and LUQUE, J. and SASTRY, N. and NUNEZ-MARTINEZ, L. and PERINO, D. and PERALES, C.",
title="A Reality Check on Inference at Mobile Networks Edge",
booktitle="Proceedings of the 2nd ACM International Workshop on Edge Systems, Analytics and Networking (EDGESYS '19)",
year="2019",
pages="54--59",
publisher="Association for Computing Machinery",
address="Dressden",
doi="10.1145/3301418.3313946",
isbn="978-1-4503-6275-7",
url="https://dl.acm.org/citation.cfm?doid=3301418.3313946"
}