Finding the best performance for your workloads on Cloud and Edge through benchmarking


Gabriele Giammatteo, Engineering Ingegneria Informatica S.p.A.

September 18, 2020

Benchmarking is a widely adopted technique for assessing the performance of computing systems. It consists of executing a known workload on a System Under Test (SUT) to stress its resources (e.g. CPU, memory, network) and observing the system's response by measuring one or more quality metrics of interest (e.g. I/O throughput, computation latency) from which its performance can be evaluated. In contrast with other approaches such as prediction or simulation, benchmarking tests the real system, taking into account all the factors that can affect the execution. Data collected by benchmarking multiple systems can be used to rank them by performance and to make an informed decision on the most suitable system for a given workload.
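As a minimal illustration of this idea (a Python sketch with invented system names and a toy workload standing in for a real SUT), the measure-then-rank loop described above might look like:

```python
import statistics
import time

def benchmark(workload, runs=5):
    """Execute a workload several times on the current system (the SUT)
    and return the mean latency in seconds as the quality metric."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

def rank_systems(results):
    """Rank systems by measured latency, best (lowest) first.
    `results` maps a system name to its measured metric."""
    return sorted(results, key=results.get)

# Toy comparison of two hypothetical placements of the same workload class
results = {"cloud-vm-a": benchmark(lambda: sum(range(10**5))),
           "cloud-vm-b": benchmark(lambda: sorted(range(10**5)))}
print(rank_systems(results))
```

Real benchmarks of course run far more elaborate workloads and collect several metrics, but the measure/rank structure is the same.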

Benchmark programs are very specialized tools, and their evolution has closely followed the evolution of computing architectures: from the first benchmarks developed in the 1970s to assess the floating-point performance of a CPU, to Grid and HPC benchmarks, to distributed, application-driven benchmarks for Cloud services and, in recent years, for Edge infrastructures. A recent survey by Blesson Varghese et al. provides a detailed timeline of the development of benchmarking techniques (Varghese, 2020).

In the Cloud era, benchmarks have evolved to keep pace with the increasing complexity and distribution of applications. Benchmarks that test only a single resource (e.g. CPU, network), often referred to as micro (or synthetic, or system-level) benchmarks, have been replaced by application-driven (or macro) benchmarks that generate workloads closer to those of real applications (Luo, 2012) (Muller, 2014). In Cloud computing, given the high diversity of providers and the complexity of applications, it is not trivial to select the best resource for each application component so as to maximize its performance (Varghese, 2019) (Borhani, 2014) or obtain the best performance/cost trade-off (Kousiouris, 2017). For this reason, benchmarking is a valuable tool to support deployment decisions.

When shifting from Cloud to Edge computing, benchmarking becomes even more valuable because of the high diversity and variability of Edge nodes in terms of hardware, energy consumption, connectivity, software stacks and configuration. All these factors can strongly affect the performance of the workloads. To optimize the deployment of an application in such a harsh environment, very precise and up-to-date information on the resources' capabilities and performance needs to be available.

Pledger aims to offer a benchmarking service for Cloud and Edge resources, with the objective of supporting resource-selection decisions at deployment time and of monitoring resource performance. The main benefit lies in the optimization of deployments: results obtained from benchmarking will provide accurate predictions of the performance of application workloads on all available Cloud and Edge resources, making it possible to select the deployment plan that offers the best performance (or the best cost/performance trade-off) for the application. In addition, benchmarking will provide hints and triggers for re-deployments of applications.

To deliver these functionalities, we will advance the capabilities of the Benchmarking Suite [1], a benchmarking orchestration solution developed by Engineering Ingegneria Informatica SpA in previous EU research projects. The Benchmarking Suite already integrates a set of generic benchmarking tools and a scheduling functionality to automatically execute tests on multiple infrastructures and store the results. Through the Pledger project we will improve the solution to: (a) integrate application-driven benchmark tests developed specifically for Cloud and Edge infrastructures and their use cases (e.g. AI algorithms, stream data analysis); (b) provide a matching mechanism between the real application and the most appropriate benchmarks that can represent it; (c) provide access to and analysis of the results both through programmatic APIs and GUIs.
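The article does not specify how the matching mechanism in (b) will work; one plausible sketch (all benchmark names, profile dimensions and numbers below are illustrative assumptions, not part of the Benchmarking Suite) is to describe both benchmarks and application components as resource-usage vectors and match by similarity:

```python
import math

# Assumed workload profiles: (cpu, memory, io, network), each in [0, 1]
BENCHMARK_PROFILES = {
    "ai-inference":    (0.9, 0.6, 0.1, 0.2),
    "stream-analysis": (0.5, 0.4, 0.3, 0.9),
    "disk-micro":      (0.1, 0.2, 0.9, 0.1),
}

def cosine(a, b):
    """Cosine similarity between two resource-usage vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def match_benchmark(app_profile):
    """Return the benchmark whose workload profile is most similar
    to the application component's profile."""
    return max(BENCHMARK_PROFILES,
               key=lambda name: cosine(app_profile, BENCHMARK_PROFILES[name]))

# A CPU-heavy application component matches the CPU-heavy benchmark
print(match_benchmark((0.8, 0.5, 0.1, 0.3)))  # → "ai-inference"
```

A richer matcher would learn the profiles from traces rather than hard-coding them, but the principle of scoring benchmark representativeness stays the same.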

As a result, users of Pledger will have a valuable tool to analyse the performance of different Cloud/Edge resources, with data tailored to their specific application workloads (thanks to the matching mechanism) and always fresh (thanks to the scheduling feature). The analysis tools will guide users in selecting the best resource(s) for their application and for their expected cost/performance trade-off.
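To make the cost/performance trade-off concrete, here is a toy selection rule (resource names and normalised scores are invented for illustration) that ranks resources by a weighted combination of benchmark-derived performance and cost:

```python
def best_tradeoff(offers, weight=0.5):
    """Pick the resource with the best weighted cost/performance score.
    `offers` maps a resource name to (performance, cost), both normalised
    to [0, 1]; higher performance and lower cost are better.
    `weight` balances performance (1.0) against cost (0.0)."""
    def score(name):
        perf, cost = offers[name]
        return weight * perf - (1 - weight) * cost
    return max(offers, key=score)

offers = {
    "edge-node-1": (0.6, 0.2),   # modest performance, cheap
    "cloud-vm-xl": (0.95, 0.8),  # fast but expensive
}
print(best_tradeoff(offers, weight=0.5))  # → "edge-node-1"
```

With `weight` close to 1.0 the same call favours the faster resource, which is exactly the kind of knob a user's expected trade-off would turn.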




(Borhani, 2014) Borhani, A., Leitner, P., Lee, B.S., Li, X., and Hung, T. 2014. WPress: An Application-Driven Performance Benchmark for Cloud-Based Virtual Machines, pp. 101-109.

(Kousiouris, 2017) G. Kousiouris et al., "A Toolkit Based Architecture for Optimizing Cloud Management, Performance Evaluation and Provider Selection Processes," 2017 International Conference on High Performance Computing & Simulation (HPCS), Genoa, 2017, pp. 224-232, doi: 10.1109/HPCS.2017.42

(Luo, 2012) Luo, C., Zhan, J., Jia, Z., Wang, L., Lu, G., Zhang, L., Xu, C.Z., and Sun, N. 2012. CloudRank-D: Benchmarking and ranking cloud computing systems for data processing applications. Frontiers of Computer Science, 6.

(Muller, 2014) Muller, S., Bermbach, D., Tai, S., and Pallas, F. 2014. Benchmarking the Performance Impact of Transport Layer Security in Cloud Database Systems, pp. 27-36.

(Varghese, 2019) Varghese, B., Akgun, O., Miguel, I., Thai, L., and Barker, A. 2019. Cloud Benchmarking for Maximising Performance of Scientific Applications. IEEE Transactions on Cloud Computing, 7(1), p.170–182.

(Varghese, 2020) Blesson Varghese, Nan Wang, David Bermbach, Cheol-Ho Hong, Eyal de Lara, Weisong Shi, and Christopher Stewart. (2020). A Survey on Edge Benchmarking.
