Stateful Versus Stateless Selection of Edge or Cloud Servers Under Latency Constraints

Castagno, P; Sereno, M
2022-01-01

Abstract

We consider a radio access network slice serving mobile users whose requests imply computing requirements. Service is virtualized over either a powerful but distant cloud infrastructure or an edge computing host. The latter provides less computing and storage capacity than the cloud, but can be reached with much lower delay. A tradeoff thus naturally arises between computing capacity and data transfer latency. We investigate the performance of this service model, discussing how service requests should be routed to edge or cloud servers. We examine the performance of several classes of online algorithms that rely on different levels of information about the system state. Our investigation is based on analytical models, simulations in OMNeT++, and a prototype implementation over operational cellular networks. First, we observe that distributing the load of service requests over edge and cloud is in general beneficial for performance, and is simple to implement with a stateless online server selection policy that can be easily configured for near-optimal performance. Second, we shed light on the limited improvements that stateful policies can offer, even though they base their decisions on knowledge of server congestion levels or round-trip latency conditions. Third, we show that stateful policies are dangerously prone to errors, which may make stateless policies preferable.
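As an illustration only (not taken from the paper), the following Python sketch contrasts the two classes of policies mentioned in the abstract: a stateless policy that splits load between edge and cloud with a fixed, configurable probability, and a stateful policy that picks the server with the lowest measured round-trip time. All names, parameter values, and measurements below are hypothetical.

import random

# Hypothetical sketch: stateless vs. stateful edge/cloud server selection.
# The parameter p_edge and the RTT values are illustrative assumptions,
# not values reported in the paper.

EDGE = "edge"
CLOUD = "cloud"

def stateless_select(p_edge: float = 0.3) -> str:
    """Send a request to the edge with fixed probability p_edge, else to the cloud.
    The single parameter p_edge would be tuned offline once, with no per-request state."""
    return EDGE if random.random() < p_edge else CLOUD

def stateful_select(rtt_estimates: dict) -> str:
    """Send a request to the server with the lowest estimated round-trip time.
    rtt_estimates maps server name -> last measured RTT in seconds; stale or noisy
    measurements are the kind of error that can mislead a stateful policy."""
    return min(rtt_estimates, key=rtt_estimates.get)

if __name__ == "__main__":
    rtts = {EDGE: 0.005, CLOUD: 0.040}  # pretend RTT measurements (seconds)
    for _ in range(5):
        print("stateless ->", stateless_select(), "| stateful ->", stateful_select(rtts))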
Year: 2022
Conference: 23rd IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)
Location: Belfast, United Kingdom
Conference dates: 14-17 June 2022
Published in: Proceedings of the 23rd IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)
Publisher: IEEE
Pages: 110-119
ISBN: 978-1-6654-0876-9
URL: https://ieeexplore.ieee.org/document/9842843
Keywords: Edge computing; Radio access network; Performance evaluation
Authors: Mancuso, V; Castagno, P; Sereno, M; Ajmone Marsan, M
Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1888653