Understanding application resource requirements through profiling


A growing trend for data-intensive software applications is their deployment in a public or private cloud combined with an edge solution, with the goal of leveraging the dispersed resources of an Infrastructure as a Service (IaaS) offering. However, a number of significant challenges must be addressed before most applications can be migrated to a Cloud/Edge-based version, such as selecting the provider and the size of the virtual resources (VMs) needed to achieve particular Quality of Service (QoS) levels and thus meet the application's needs. Migrating to a cloud infrastructure can provide economic, performance, and strategic benefits, but it must be done with caution and care.

It is also important to consider that a significant number of challenges arise at the IaaS provider level as well, primarily related to the dynamicity of the examined applications and workloads, the inability to obtain information about the client software running inside the virtual resources, and the performance of the virtual resources in multi-tenant scenarios.

In the Pledger EU Project, one of the main goals is to orchestrate and optimize the deployment and migration of mixed Cloud/Edge applications. One of the major software components that aids this procedure is the App Profiler, which profiles applications/services that are deployed and run as containers. The App Profiler collects resource-consumption statistics from a containerized instance and compares them to the resource usage of known benchmarks. Benchmarks are often used to assess the computing capabilities of specific hardware, since they are simple to install and run to completion. The same is not true for generic applications; mapping applications to specific benchmarks with the App Profiler therefore helps build essential knowledge of how a specific program can be expected to perform on multiple platforms, based on the known behavior of those benchmarks.
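The mapping idea can be illustrated with a minimal sketch: represent each benchmark and each application as a vector of averaged resource metrics, then map the application to the benchmark whose profile is most similar. The benchmark names, metric dimensions, and numbers below are purely illustrative assumptions, not the actual profiles or similarity measure used by the App Profiler.

```python
import math

# Hypothetical benchmark profiles: each vector holds averaged metrics
# sampled from a container, e.g. (CPU %, memory MB, disk MB/s, net MB/s).
# Names and values are illustrative only.
BENCHMARK_PROFILES = {
    "cpu-bound-bench": (95.0, 300.0, 1.0, 0.5),
    "memory-bound-bench": (40.0, 3500.0, 2.0, 0.5),
    "io-bound-bench": (25.0, 500.0, 120.0, 1.0),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two resource-usage vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def map_to_benchmark(app_profile):
    """Return the benchmark whose profile is most similar to the app's."""
    return max(BENCHMARK_PROFILES,
               key=lambda name: cosine_similarity(app_profile,
                                                  BENCHMARK_PROFILES[name]))

# A mostly CPU-hungry containerized app maps to the CPU benchmark.
print(map_to_benchmark((90.0, 400.0, 2.0, 0.8)))  # cpu-bound-bench
```

In practice each dimension would be normalized first so that large-magnitude metrics (such as memory in MB) do not dominate the similarity score.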

The application profiling system was built with the Node-RED flow-based programming framework. This framework contains a large number of nodes that ease the communication between the various components needed to achieve the required functionality, and it provides the communication nodes needed for data exchange and external resource management across the range of cloud/edge devices and APIs. The application profiling system, as seen in the graphic below, works with multiple container environments (Docker, Kubernetes).
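For a Docker environment, the resource statistics the profiler consumes come from the Docker Engine stats endpoint (`GET /containers/{id}/stats`). As a sketch of what processing one such sample involves, the function below derives a CPU-utilization percentage from a stats JSON document, following the same delta-based calculation that the `docker stats` CLI uses; the sample document is a hand-made, abbreviated example, not real profiler output.

```python
def cpu_percent(stats):
    """CPU utilization (%) from one Docker stats API sample.

    Compares the container's CPU time against total system CPU time
    between the current and previous reading, scaled by CPU count.
    """
    cpu = stats["cpu_stats"]
    precpu = stats["precpu_stats"]
    cpu_delta = (cpu["cpu_usage"]["total_usage"]
                 - precpu["cpu_usage"]["total_usage"])
    system_delta = cpu["system_cpu_usage"] - precpu["system_cpu_usage"]
    online_cpus = cpu.get("online_cpus") or 1
    if system_delta <= 0 or cpu_delta < 0:
        return 0.0
    return cpu_delta / system_delta * online_cpus * 100.0

# Abbreviated, hand-made stats sample (values in nanoseconds).
sample = {
    "precpu_stats": {"cpu_usage": {"total_usage": 100_000_000},
                     "system_cpu_usage": 1_000_000_000},
    "cpu_stats": {"cpu_usage": {"total_usage": 160_000_000},
                  "system_cpu_usage": 2_000_000_000,
                  "online_cpus": 4},
}
print(cpu_percent(sample))
```

A Node-RED flow would typically poll this endpoint periodically and feed the derived metrics into the profile vector described below.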

The application profiling system relies on two main components: the profiling component and the model trainer component. The profiling component generates the multidimensional vector that represents the resource-usage profile of a benchmark or application. The model trainer component is in charge of creating the application classification model. The classifier trainer receives all of the profile data; one subset is used for training, while the other is used to evaluate the model's validity.
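The train/evaluate workflow can be sketched as follows. This is a minimal illustration using synthetic, well-separated two-dimensional profiles and a simple nearest-centroid classifier; the actual classifier and features used by the model trainer are not specified in this description, so everything below is an assumption for demonstration purposes.

```python
import random

def train_test_split(samples, test_ratio=0.25, seed=42):
    """Shuffle labelled samples and hold out a subset for evaluation."""
    rng = random.Random(seed)
    data = samples[:]
    rng.shuffle(data)
    cut = int(len(data) * (1 - test_ratio))
    return data[:cut], data[cut:]

def train(train_set):
    """'Train' a nearest-centroid model: average each label's vectors."""
    by_label = {}
    for vec, label in train_set:
        by_label.setdefault(label, []).append(vec)
    return {label: tuple(sum(v[i] for v in vecs) / len(vecs)
                         for i in range(len(vecs[0])))
            for label, vecs in by_label.items()}

def predict(model, vec):
    """Assign the label whose centroid is closest (squared distance)."""
    return min(model, key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(vec, model[lbl])))

def accuracy(model, test_set):
    hits = sum(predict(model, v) == lbl for v, lbl in test_set)
    return hits / len(test_set)

# Synthetic profiles: two clearly separated classes of (CPU %, disk MB/s).
rng = random.Random(0)
samples = []
for _ in range(20):
    samples.append(((90 + rng.uniform(-5, 5), 10 + rng.uniform(-5, 5)),
                    "cpu-bench"))
    samples.append(((10 + rng.uniform(-5, 5), 90 + rng.uniform(-5, 5)),
                    "io-bench"))

train_set, test_set = train_test_split(samples)
model = train(train_set)
print(accuracy(model, test_set))  # 1.0 on these well-separated clusters
```

The held-out subset plays the same role as the evaluation split mentioned above: it estimates how well the trained model classifies profiles it has never seen.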
