
Distributed Edge Inference: an Experimental Study on Multiview Detection

Gianluca Mittone; Giulio Malenza; Marco Aldinucci; Robert Birke
2024-01-01

Abstract

Computing is evolving rapidly to cater to the increasing demand for sophisticated services, and Cloud computing lays a solid foundation for flexible on-demand provisioning. However, as applications grow in size, the centralised client-server approach used by Cloud computing increasingly limits their scalability. To achieve ultra-scalability, cloud, edge, and fog computing converge into the compute continuum, completely decentralising the infrastructure to encompass universal, pervasive resources. Devising applications that benefit from this complex environment is a challenging research problem. We put the opportunities offered by the compute continuum to the test through a real-world multi-view detection model (MvDet) implemented with FastFL, a C/C++ high-performance edge inference framework. Computational performance is discussed across several experimental scenarios encompassing different edge computational capabilities and network bandwidths. We obtain up to 1.92x speedup in inference time over a centralised solution using the same devices.
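As a rough illustration of the set-up the abstract describes, the sketch below emulates per-camera inference running concurrently (one worker per "edge node") versus sequentially on a single centralised node, and reports the resulting speedup. It is a minimal sketch only: the workload function, the view count, and the thread-based emulation are illustrative assumptions and do not reflect the FastFL API, which this record does not detail.

```cpp
// Illustrative sketch (not the authors' FastFL code): compare a centralised,
// sequential pass over all camera views against a concurrent pass where each
// view is handled by its own worker, mimicking per-device edge inference.
#include <chrono>
#include <future>
#include <iostream>
#include <vector>

// Stand-in for the per-view feature extraction done on each edge device.
double infer_view(int view_id) {
    double acc = 0.0;
    for (int i = 0; i < 20'000'000; ++i)   // dummy compute-bound workload
        acc += (view_id + i) % 7;
    return acc;
}

int main() {
    const int num_views = 7;               // e.g. 7 cameras, as in the Wildtrack setting
    using Clock = std::chrono::steady_clock;

    // Centralised baseline: all views processed one after another.
    auto t0 = Clock::now();
    double central = 0.0;
    for (int v = 0; v < num_views; ++v) central += infer_view(v);
    double t_central = std::chrono::duration<double>(Clock::now() - t0).count();

    // Distributed emulation: each view handled concurrently, then aggregated.
    auto t1 = Clock::now();
    std::vector<std::future<double>> jobs;
    for (int v = 0; v < num_views; ++v)
        jobs.push_back(std::async(std::launch::async, infer_view, v));
    double distributed = 0.0;
    for (auto& j : jobs) distributed += j.get();   // aggregation step
    double t_dist = std::chrono::duration<double>(Clock::now() - t1).count();

    // The paper reports up to 1.92x on real edge devices; this toy run only
    // shows how such a speedup figure is computed.
    std::cout << "speedup = " << t_central / t_dist << "\n";
    return 0;
}
```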
2024
3rd Workshop on Distributed Machine Learning for the Intelligent Computing Continuum (DML-ICC) held at the 16th ACM/IEEE International Conference on Utility and Cloud Computing
Taormina, Italy
4-7 December 2023
Proceedings of the IEEE/ACM 16th International Conference on Utility and Cloud Computing
ACM
1-6
9798400702341
https://dl.acm.org/doi/10.1145/3603166.3632561
Edge Inference, Edge Computing, Computing Continuum, Computational Performance, Network Performance
Gianluca Mittone, Giulio Malenza, Marco Aldinucci, Robert Birke
Files in this item:

DML_ICC_2023-3.pdf (open access): Preprint (first draft), Adobe PDF, 22.71 MB
DML_ICC_2023-2.pdf (restricted access): Publisher's PDF, Adobe PDF, 1.11 MB

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/1950083
Citations
  • Scopus: 0