Smart scheduling to optimise resources and latency on far edge, edge and cloud


Edge computing brings additional requirements to application orchestration, such as local data processing, lower network latency [1], and improved resiliency and autonomy to support unattended and disconnected operations [2]. Kubernetes [3] orchestration and its open-source implementations for the edge help increase resiliency and autonomy with automatic scaling, self-monitoring and balancing, and thousands of production-ready tools and applications.

When it comes to autoscaling, though, Kubernetes was initially designed for the cloud, where adequate capacity planning ensures resource availability and applications can scale transparently. Hardware capabilities are also mostly ignored, as the cloud is more homogeneous and the user can choose in advance which resources to use among a few options.

On the edge, the same autoscaling features might not handle resource constraints or hardware variety properly. For example, autoscaling on the edge might not be possible for lack of physical resources, and a given amount of CPU millicores can mean very different performance on different CPU models, which on the edge can span from IoT boards to local datacentre servers with very powerful GPUs.
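To illustrate why raw millicores are misleading on heterogeneous hardware, here is a minimal sketch (not the Pledger implementation; node names and benchmark scores are invented) of normalising CPU requests by a per-node benchmark factor:

```python
# Hypothetical sketch: weighting CPU millicore requests by a per-node
# benchmark score so heterogeneous edge hardware becomes comparable.
# Node names and benchmark values are illustrative, not Pledger data.

NODES = {
    "iot-board":   {"capacity_millicores": 4000,  "benchmark_score": 0.2},
    "edge-server": {"capacity_millicores": 16000, "benchmark_score": 1.0},
    "dc-gpu-node": {"capacity_millicores": 64000, "benchmark_score": 2.5},
}

def effective_capacity(node: dict) -> float:
    """Millicores weighted by how fast this hardware actually is."""
    return node["capacity_millicores"] * node["benchmark_score"]

def millicores_needed(reference_millicores: float, node: dict) -> float:
    """Translate a request sized on the reference node (score 1.0)
    into the millicores needed on a slower or faster node."""
    return reference_millicores / node["benchmark_score"]

# A request of 1000 millicores sized on the edge server needs 5x as
# many millicores on the IoT board to deliver similar throughput.
print(millicores_needed(1000, NODES["iot-board"]))   # 5000.0
print(millicores_needed(1000, NODES["dc-gpu-node"])) # 400.0
```

In this spirit, a benchmarking step lets a scheduler compare an IoT board and a datacentre server on a common scale instead of trusting nominal millicores.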

In Pledger, we built a comprehensive solution offering end-to-end smart orchestration and monitoring on hybrid edge and cloud infrastructures, where benchmarking and application profiling make it possible to deal with hardware diversity.

The main scenario explored in Pledger involves limited resources on the edge, where different applications might compete to keep their computation on the edge to reduce latency. In some domains, such as V2X, latency can be critical, and pushing resources to far-edge servers located in the proximity of vehicles can greatly reduce end-to-end latency.

In Pledger, we designed and developed ConfService, a tool to simplify the configuration of the infrastructures and Apps used by the DSS. Instructions and demos are available on the project Gitlab repository [7] and the project YouTube channel [8].


Specifically, we prepared initial demo videos describing the different configuration stages, as well as the initial DSS "Resource" optimisation:

·         Configuration of Infrastructure Providers and Service Providers users by an Admin [YouTube]

·         Configuration of Infrastructures available by the Infrastructure Providers [YouTube]

·         Configuration of Projects (with quotas), Apps, Services, SLA by the Service Providers [YouTube]

·         Configuration of Deployment Options by the Service Providers to specify priorities [YouTube]

·         DSS “Resource” optimisation privileging deployment of Apps on edge nodes using SLA violations as feedback [YouTube]

·         DSS “Resource” optimisation placing Apps on cloud nodes then back on edge using SLA violations as feedback [YouTube]



In Pledger, we also designed and implemented multiple optimisation algorithms in the DSS to support different scenarios.

In particular, the "ECODA" optimisation [4] achieves a good trade-off between performance and computational complexity, and can therefore help meet the strict latency requirements of V2X applications on two-tier infrastructures (edge and cloud). Similarly, the "TTODA" optimisation [5] works on three-tier infrastructures, which include far-edge nodes. Both algorithms outperform the "first come, first served" approach that traditional orchestrators would otherwise apply. For more information on how to replicate the tests, please refer to the official documentation on Gitlab [6].
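To see why "first come, first served" falls short, here is a deliberately simplified sketch contrasting it with a priority-aware pass that gives latency-critical apps first claim on scarce edge capacity. This is in the spirit of, but much simpler than, the ECODA/TTODA optimisations; all app names and numbers are invented:

```python
# Hypothetical sketch contrasting first-come-first-served placement with a
# priority-aware pass over scarce edge capacity. All numbers illustrative.

def fcfs(apps, edge_capacity):
    """Fill the edge in arrival order; everything else goes to the cloud."""
    placement, used = {}, 0
    for name, demand, _priority in apps:
        if used + demand <= edge_capacity:
            placement[name], used = "edge", used + demand
        else:
            placement[name] = "cloud"
    return placement

def priority_aware(apps, edge_capacity):
    """Give latency-critical apps first claim on the edge."""
    placement, used = {}, 0
    for name, demand, _priority in sorted(apps, key=lambda a: -a[2]):
        if used + demand <= edge_capacity:
            placement[name], used = "edge", used + demand
        else:
            placement[name] = "cloud"
    return placement

# (name, millicores, latency priority); the V2X app arrives last.
apps = [("batch-job", 800, 1), ("web-cache", 600, 2), ("v2x-app", 700, 9)]

print(fcfs(apps, edge_capacity=1500))
# {'batch-job': 'edge', 'web-cache': 'edge', 'v2x-app': 'cloud'}
print(priority_aware(apps, edge_capacity=1500))
# {'v2x-app': 'edge', 'web-cache': 'edge', 'batch-job': 'cloud'}
```

Under arrival order the latency-critical V2X app is pushed to the cloud; ranking apps before placing them keeps it on the edge, which is the kind of outcome the DSS optimisations aim for.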

So far, the DSS supports any combination of the following scenarios:

-       infrastructure type: two-tier (cloud and edge) or three-tier (cloud, edge, far edge),

-       edge resources exclusively managed by Pledger, or in combination with other orchestrators,

-       high variety of edge infrastructure,

-       latency criticality.


Similarly, we prepared another round of detailed demo videos describing the installation of a test environment and more advanced DSS optimisations:

·         ConfService and DSS installation instructions [YouTube]

·         ConfService and DSS installation on KinD for cloud/edge [YouTube]

·         DSS “Resource” optimisation with scale up/down scenario [YouTube]

·         DSS “Resource” optimisation with scale out scenario [YouTube]

·         DSS “Resource” optimisation offloading on cloud and edge nodes [YouTube]

·         DSS “ECODA” optimisation [YouTube]

·         DSS “ECODA” and “Resource” optimisations combined [YouTube]

·         ConfService and DSS installation on KinD for cloud/edge/faredge [YouTube]

·         DSS “TTODA” optimisation [YouTube]

·         DSS “TTODA” and “Resource” optimisations combined [YouTube]


A new optimisation that includes energy consumption on the edge is planned for the coming months, so stay tuned!

[1] Haja, David & Szalay, Mark & Sonkoly, Balázs & Pongracz, Gergely & Toka, László. (2019). Sharpening Kubernetes for the Edge. 136-137. 10.1145/3342280.3342335.

[2] Xiong, Ying & Sun, Yulin & Xing, Li & Huang, Ying. (2018). Extend Cloud to Edge with KubeEdge. 373-377. 10.1109/SEC.2018.00048.


[4] E. C. Cejudo and M. Shuaib Siddiqui, "An Optimization Framework for Edge-to-Cloud Offloading of Kubernetes Pods in V2X Scenarios," 2021 IEEE Globecom Workshops (GC Wkshps), 2021, pp. 1-6, doi: 10.1109/GCWkshps52748.2021.9682148.

[5] Carmona, E., Iadanza, F., & Siddiqui, M. S. (2022). Optimal Offloading of Kubernetes Pods in Three-Tier Networks. Institute of Electrical and Electronics Engineers (IEEE).




