JITServer: Disaggregated Caching JIT Compiler for the JVM in the Cloud (Extended Abstract)

Alexey Khrabrov, Marius Pirvu, Vijay Sundaresan, Eyal de Lara

3rd Workshop on Resource Disaggregation and Serverless (WORDS'22), San Diego, CA, November 2022

 

Abstract

Java virtual machines (JVMs) rely on just-in-time (JIT) compilers to improve application performance by converting bytecodes into optimized machine code at runtime. Unfortunately, JIT compilation can introduce significant runtime overheads in terms of processing power and memory. The extra CPU cycles needed for compilation can interfere with applications' progress, delaying their start-up, increasing their warm-up time, or degrading their response time and quality of service (QoS). Similarly, the data structures allocated by the JIT compiler create unpredictable spikes in memory usage, resulting in a higher memory footprint. In our experiments, JIT compilation accounted for up to 50% of the CPU time used during the start-up and warm-up phases of the applications, and for up to hundreds of MB of memory footprint. The competition for resources between the application and the JIT is more intense in CPU- and memory-constrained environments such as the containers and VMs found in cloud datacenters, which aim to maximize resource utilization and application density. Cloud applications are scaled automatically by launching and shutting down instances based on load. The resulting frequent restarts pose serious challenges for JVM-based workloads because the high start-up overhead of JIT compilation must be amortized over a long execution period. Short-running application instances, such as function-as-a-service (FaaS) workloads, are also becoming increasingly common in cloud computing. The memory overhead of JIT compilation is more significant for application instances with a smaller overall memory usage, such as the microservices common in the cloud.
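To give a concrete sense of the overheads discussed above, the sketch below uses the standard java.lang.management API to report a JVM's accumulated JIT compilation time and the memory used by code-cache-related pools. It is only an illustrative probe, assuming the JVM exposes a CompilationMXBean and names its JIT-related memory pools with "code" or "JIT" (pool names vary across JVMs); it is not the measurement methodology used in the paper.

    import java.lang.management.CompilationMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    // Illustrative probe: observe JIT compilation overhead from inside a JVM.
    public class JitOverheadProbe {
        public static void main(String[] args) {
            CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
            if (jit != null && jit.isCompilationTimeMonitoringSupported()) {
                // Approximate accumulated time spent in JIT compilation (ms).
                System.out.println("JIT compilation time: "
                        + jit.getTotalCompilationTime() + " ms");
            }
            // Memory pools whose names mention "code" or "JIT" typically hold
            // JIT-generated machine code and related data structures.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                String name = pool.getName().toLowerCase();
                if (name.contains("code") || name.contains("jit")) {
                    System.out.println(pool.getName() + ": "
                            + pool.getUsage().getUsed() / (1024 * 1024) + " MB used");
                }
            }
        }
    }

Running such a probe periodically during start-up and warm-up makes the CPU and memory costs attributed to the JIT in the experiments above directly visible.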

 

Manuscript (PDF)

Slides (PDF)

Video

BibTeX