Build Appliance 2023

Build

Using either poky or a no-distro oe-core, simply run the following to generate the Build Appliance image:

$ bitbake build-appliance-image

At a minimum, the configuration requires the following (a sample snippet appears after this list):

  • the MACHINE should be set to qemux86-64
  • DISTRO_FEATURES must include opengl, x11, xattr
  • VOLATILE_TMP_DIR must not be set to yes
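
A minimal conf/local.conf sketch satisfying these requirements might look as follows. This is an illustration, not taken from the page itself: leaving VOLATILE_TMP_DIR unset would equally satisfy the last requirement, and the :append syntax assumes a release new enough to support it (which the poky commit referenced below is).

MACHINE = "qemux86-64"
DISTRO_FEATURES:append = " opengl x11 xattr"
VOLATILE_TMP_DIR = "no"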

Usage

Currently, building the build-appliance-image target generates the following artifacts (among others):

  • build-appliance-image*rootfs.wic.vhd
  • build-appliance-image*rootfs.wic.vhdx
  • build-appliance-image*rootfs.wic.vmdk

The resulting *.vmdk should be runnable in any of:

  • qemu
  • virtualbox
  • vmware player
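
To sanity-check an image before booting it, qemu-img can report its format and virtual size. This is a sketch that assumes the same deploy path as the runqemu example below:

$ qemu-img info tmp-glibc/deploy/images/qemux86-64/build-appliance-image-qemux86-64.wic.vmdk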

qemu

After successfully building the build-appliance-image, from the same shell from which the build was performed, run:

$ runqemu slirp kvm nographic serial tmp-glibc/deploy/images/qemux86-64/build-appliance-image-qemux86-64.wic.vmdk

To quit qemu non-gracefully, use Ctrl-A followed by x.
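
For reference: slirp selects user-mode networking (so no root privileges or TAP setup are needed), kvm enables KVM acceleration, and nographic together with serial attach the appliance's console to the invoking terminal, which is why qemu's serial-console escape sequence (the Ctrl-A x above) is the way out.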

virtualbox

vmware player

Issues

Qemu supports virtio-blk, and the image currently uses it. However, neither virtualbox nor vmware supports virtio-blk. The ideal scenario would be to build one image that is usable on all virtualization platforms.

In-Appliance Build Performance

The goal is to build core-image-sato in each of the virtualization platforms and see which one is able to build the image the fastest.

Some virtualization platforms support features that others do not. On the one hand, it would be great to build one image that is usable on all virtualization platforms; on the other hand, it would be nice to see how well these platforms perform when used optimally. As such, a number of tests will be performed and explained below in order to see how the platforms compare to each other, and how they compare to a build done natively.

It's important to eliminate the effects of network connectivity on the build-time measurements. As such, each run will start with a $ bitbake core-image-sato --runall=fetch before performing the actual $ bitbake core-image-sato build. Also, each build will be run from poky at commit 311c76c8e8cf39fa41456561148cebe2b8b3c057.
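
Concretely, each measurement boils down to the following two steps (a sketch; the time invocation matches the one used in the cgroups run below):

$ bitbake core-image-sato --runall=fetch
$ time bitbake core-image-sato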

native

Virtualization platforms are not allowed to use all of the host's resources (CPU, memory). Therefore, comparing the performance of a $ bitbake core-image-sato build in each of the virtualization platforms against the same build performed directly on my host machine (i.e. without virtualization) would not provide honest results, since none of the virtualized builds would be able to use as many resources as my host machine.

Also, it is hard to set up a build in such a way that the system is dedicated to nothing but the build under test. I didn't go out of my way to try to create a completely quiet system on which to perform the builds, but I did take some steps, such as:

  • exiting all browsers
  • stopping the mlocate process (which happened to be running at the time)
  • stopping all nightly or automatically triggered background builds (jenkins)
  • otherwise not using the system while performing the benchmarks

In the interest of providing a baseline, a $ bitbake core-image-sato build performed directly on my host machine (i.e. without any resource constraints) takes:

real    78m28.352s
user    0m54.934s
sys     0m5.896s
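
The user and sys figures look implausibly small for a 78-minute build because time only accounts for the bitbake front end; the actual work happens in the detached bitbake server and its worker processes. The real (wall-clock) figure is therefore the meaningful one here and throughout.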

My host build machine has:

  • 2x E5-2630 v4 Xeon CPUs, each with 10 physical cores plus 10 hyper-threads, for a total of 40 cores+threads
  • 128 GB RAM

Another test we can perform is to use cgroups to create a bucket on the host machine with reduced system resources, and to perform the same build inside it. Of all the virtualization products, vmware is the most restrictive: it only allows a guest up to 16 CPUs and 64 GB of RAM. Therefore all tests are performed using these parameters.

Using cgroups:

# cgcreate -a trevor -t trevor -g memory,cpuset:buildappliance
$ echo 0 > /sys/fs/cgroup/cpuset/buildappliance/cpuset.mems
$ echo "0-15" > /sys/fs/cgroup/cpuset/buildappliance/cpuset.cpus
$ echo 1 > /sys/fs/cgroup/cpuset/buildappliance/cpuset.cpu_exclusive
$ echo 1 > /sys/fs/cgroup/cpuset/buildappliance/cpuset.mem_exclusive
$ echo 0-1 > /sys/fs/cgroup/cpuset/buildappliance/cpuset.mems
$ echo 68719476736 > /sys/fs/cgroup/memory/buildappliance/memory.limit_in_bytes
$ cgexec -g memory,cpuset:buildappliance /bin/bash
$ time bitbake core-image-sato
# cgdelete memory,cpuset:buildappliance
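
To unpack the above: cgcreate (run as root) creates the cgroup and, via -a/-t, hands ownership to user trevor so the subsequent writes can be done unprivileged; the cpuset writes pin the bucket to the first 16 logical CPUs; and the memory limit of 68719476736 bytes is exactly 64 GiB (64 × 2^30). cgexec then launches a shell inside the bucket, from which the timed build is run. Note that these paths are the cgroup v1 interface; on a host using the unified cgroup v2 hierarchy the filenames differ (e.g. memory.max rather than memory.limit_in_bytes).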

The result is:

real    107m4.550s
user    0m41.856s
sys     0m4.859s
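
Compared to the unconstrained native build (78m28s), restricting the build to 16 CPUs and 64 GiB of RAM adds roughly 29 minutes of wall-clock time, a slowdown of about 36%.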

qemu