DeFog: Fog Computing Benchmarks

Jonathan McChesney, Nan Wang, Ashish Tanwer, Eyal de Lara, Blesson Varghese

Proceedings of the 4th ACM/IEEE Symposium on Edge Computing (SEC), Washington, DC, November 2019

 

Abstract

There are currently no benchmarks that can directly compare the performance of an application across cloud-only, edge-only and cloud-edge (Fog) deployment platforms to obtain insight into potential performance improvements. This paper proposes DeFog, a first Fog benchmarking suite, to: (i) alleviate the burden of Fog benchmarking by using a standard methodology, and (ii) facilitate the understanding of the target platform by collecting a catalogue of relevant metrics for a set of benchmarks. The current portfolio of DeFog benchmarks comprises six relevant applications conducive to using the edge. Experimental studies are carried out on multiple target platforms to demonstrate the use of DeFog for collecting metrics related to application latencies (communication and computation), for understanding the impact of stress and concurrent users on application latencies, and for understanding the performance of deploying different combinations of services of an application across the cloud and edge. DeFog is available for public download (https://github.com/qub-blesson/DeFog).
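As an illustration of the kind of measurement the abstract describes, the sketch below separates communication latency from computation latency for a single request sent to a cloud or edge endpoint. This is a minimal, hypothetical example: the endpoint URLs, the run_benchmark helper and the compute_ms response field are assumptions for illustration, not DeFog's actual interface (the real tooling is in the repository linked above).

    # Hypothetical sketch: splitting the round-trip latency of one benchmark
    # request into communication and computation components. Not DeFog code.
    import json
    import time
    import urllib.request

    def run_benchmark(endpoint: str, payload: dict) -> dict:
        """POST a benchmark payload and split total latency into parts."""
        data = json.dumps(payload).encode("utf-8")
        request = urllib.request.Request(
            endpoint, data=data, headers={"Content-Type": "application/json"}
        )

        start = time.perf_counter()
        with urllib.request.urlopen(request) as response:
            body = json.loads(response.read())
        round_trip_ms = (time.perf_counter() - start) * 1000

        # The service is assumed to report its own processing time in the
        # response; communication latency is the remainder of the round trip.
        compute_ms = float(body.get("compute_ms", 0.0))
        return {
            "round_trip_ms": round_trip_ms,
            "computation_ms": compute_ms,
            "communication_ms": round_trip_ms - compute_ms,
        }

    if __name__ == "__main__":
        # Compare the same request against a cloud and an edge deployment
        # (placeholder addresses).
        for name, url in [("cloud", "http://cloud.example.com:8080/infer"),
                          ("edge", "http://192.168.1.20:8080/infer")]:
            print(name, run_benchmark(url, {"input": "sample"}))

Repeating such a measurement under increasing load or with multiple concurrent clients is, in spirit, how the paper studies the impact of stress and concurrent users on application latencies.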

 
