Over the past few years, we have conducted various evaluations focusing on the spawn time and density of urunc-based containers. However, we have not performed an in-depth evaluation of other aspects of the execution model, such as resource usage (CPU, memory, storage), I/O performance, and so on. Such an evaluation would help identify bottlenecks and guide future optimizations of urunc. It should cover a range of scenarios, including microbenchmarks, macrobenchmarks, and representative real-world workloads.
Furthermore, the process should produce a reproducible evaluation suite, including all necessary tools, scripts, and documentation, so that benchmarks can be easily repeated and extended in the future.
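As a starting point for such a suite, a small, reusable timing harness could underpin the microbenchmarks. The sketch below (a minimal illustration, not part of urunc itself) times an arbitrary command over several runs and reports mean and standard deviation; the command shown is a placeholder that would be replaced by the actual container invocation (e.g. a nerdctl/ctr command using the urunc runtime), and the same skeleton could be extended to collect other metrics:

```python
import statistics
import subprocess
import time


def time_command(cmd, runs=5):
    """Run `cmd` repeatedly and return wall-clock durations in seconds."""
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        durations.append(time.perf_counter() - start)
    return durations


def summarize(durations):
    """Return (mean, stdev); stdev is 0.0 when there is a single sample."""
    mean = statistics.mean(durations)
    stdev = statistics.stdev(durations) if len(durations) > 1 else 0.0
    return mean, stdev


if __name__ == "__main__":
    # Placeholder workload: substitute the real container spawn command here.
    cmd = ["true"]
    mean, stdev = summarize(time_command(cmd))
    print(f"spawn time: {mean * 1000:.2f} ms +/- {stdev * 1000:.2f} ms")
```

Keeping the measurement loop separate from the workload command makes it straightforward to script many scenarios from a single harness and to check results into the repository alongside the documentation.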