Image tests

About the testimage class

The build system has the ability to run a series of automated tests for qemu images.

All the tests are actually commands run on the target system over ssh.

The tests themselves are written in Python, making use of the unittest module.

The class that enables this is testimage.bbclass (which handles loading the tests and starting the qemu image).


Enabling and running the tests

Requirements

You should be aware of the following:

  • The runqemu script needs sudo access to set up the tap interface, so you need to make sure it can do that non-interactively. That means you need to do one of the following:
    • add NOPASSWD for your user in /etc/sudoers, either for ALL commands or just for runqemu-ifup (you need to provide the full path, which can change if you have multiple poky clones); see the sudoers sketch after this list
      • on some distributions you also need to comment out "Defaults requiretty" in /etc/sudoers
    • manually configure a tap interface for your system
    • run scripts/runqemu-gen-tapdev as root, which should generate a list of tap devices (this is usually done in AutoBuilder-like setups)
  • the DISPLAY variable needs to be set, which means you need an X server available (e.g. start vncserver on a headless machine)
  • some of the tests (in particular the smart tests) start an HTTP server on a random high-numbered port, used to serve files to the target. The smart module serves ${DEPLOY_DIR}/rpm so it can run smart channel commands. That means your host's firewall must accept incoming connections from 192.168.7.0/24 (the default subnet used for tap devices by runqemu)
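
For reference, a minimal /etc/sudoers entry for the runqemu-ifup case could look like the sketch below (the user name and poky path are placeholders for your own setup; edit the file with visudo):

# hypothetical user "builder" with a poky clone in /home/builder/poky
builder ALL=(root) NOPASSWD: /home/builder/poky/scripts/runqemu-ifup
# on some distributions, also comment out:
# Defaults requiretty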

Known bugs/limitations

Usage

To use it, add "testimage" to the global INHERIT and call bitbake on your target image with -c testimage, like this:

  • for example, build a qemu image such as core-image-sato: bitbake core-image-sato
  • add INHERIT += "testimage" to local.conf
  • then call "bitbake core-image-sato -c testimage". That will run a standard suite of tests.

All test files are currently in meta/lib/oeqa/runtime. The file names themselves are the actual test names we use, also called test modules. A module can have multiple classes and test methods, usually grouped together by the area tested (e.g. tests for systemd go in meta/lib/oeqa/runtime/systemd.py).

A layer can add its own tests in <meta-layer>/lib/oeqa/runtime, provided it extends BBPATH as normal in its layer.conf (test module names shouldn't collide with those in core, though).
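
As a rough sketch (the layer name meta-mylayer and the module name mylayer_test are placeholders), a layer carrying its own runtime tests could look like this, with layer.conf extending BBPATH as usual (typically BBPATH .= ":${LAYERDIR}"):

meta-mylayer/
    conf/layer.conf                  <- adds the layer's paths to BBPATH
    lib/oeqa/runtime/
        mylayer_test.py              <- available as test module "mylayer_test" in TEST_SUITES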

You can change the tests run by appending to or overriding the TEST_SUITES variable in local.conf. Each name in TEST_SUITES represents a required test for the image. That means that no module skipping is allowed, even if the test isn't suitable for the image (e.g. running the rpm tests on an image without rpm). Appending "auto" to TEST_SUITES means that it will try to run all tests that are suitable for the image (each test decides that on its own).

Note that the order in TEST_SUITES is important (it's the order in which modules run) and it influences test dependencies. That means that tests that depend on other tests (e.g. ssh depends on the ping test) should be added after the ones they depend on (there is no re-ordering/dependency handling by the test class, it just respects the order). Each module can have multiple classes with multiple test methods (and Python unittest rules apply here).

In short:

  • to run the default tests for core-image-sato you don't need to change TEST_SUITES (just call bitbake core-image-sato -c testimage like above)
  • The default for core-image-sato is defined as: DEFAULT_TEST_SUITES_pn-core-image-sato = "ping ssh df connman syslog xorg scp vnc date rpm smart dmesg"
  • to add your own test to the defaults, add: TEST_SUITES_append = " mytest"
  • to run a specific list of tests: TEST_SUITES = "ping ssh rpm" (order is important); see the combined local.conf sketch after this list
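
Putting it together, a local.conf fragment for a custom test run might look like this sketch (mytest is a placeholder for a module you provide):

INHERIT += "testimage"
# either extend the default suite for the image...
TEST_SUITES_append = " mytest"
# ...or replace it entirely (order matters, dependencies come first):
# TEST_SUITES = "ping ssh rpm mytest"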

Good to know

Once you call the testimage task (bitbake <my-image> -c testimage) a couple of things happen:

  • a copy of the rootfs is made in ${WORKDIR}/testimage
  • the image is booted under qemu using the standard runqemu script
  • there is a timeout of 500 seconds by default for the boot process to reach the login prompt (you can change the timeout by setting TEST_QEMUBOOT_TIMEOUT in local.conf, as in the sketch after this list)
  • once the boot process reaches the login prompt the tests are run (you can find the full boot log in ${WORKDIR}/testimage/qemu_boot_log)
  • each test module is loaded in the order found in TEST_SUITES (the full output of the commands run over ssh is found in ${WORKDIR}/testimage/ssh_target_log)
  • if there are no failures, the task will end successfully. You can find the output from the unittest runs in the task log (in ${WORKDIR}/temp/log.do_testimage)
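
For example, if the default 500 seconds isn't enough for your image to reach the login prompt (slow host, large image), a local.conf tweak could be:

TEST_QEMUBOOT_TIMEOUT = "1000"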



Log for a custom, systemd-enabled image that has the package-management feature and TEST_SUITES = "ping ssh rpm auto" in local.conf:

$ cat tmp/work/qemux86_64-poky-linux/core-image-base/1.0-r0/temp/log.do_testimage
DEBUG: Executing python function do_testimage
NOTE: Created listening socket for qemu serial console on: 127.0.0.1:56358
NOTE: DISPLAY value: :0
NOTE: rootfs file: /home/stefans/yocto/builds/firefly/tmp/work/qemux86_64-poky-linux/core-image-base/1.0-r0/testimage/core-image-base-qemux86-64-testimage.ext3
NOTE: Qemu log file: /home/stefans/yocto/builds/firefly/tmp/work/qemux86_64-poky-linux/core-image-base/1.0-r0/testimage/qemu_boot_log.20130819115123
NOTE: SSH log file: /home/stefans/yocto/builds/firefly/tmp/work/qemux86_64-poky-linux/core-image-base/1.0-r0/testimage/ssh_target_log.20130819115123
NOTE: runqemu started, pid is 2979
NOTE: waiting at most 60 seconds for qemu pid
NOTE: qemu started - qemu procces pid is 3061
NOTE: IP found: 192.168.7.2
NOTE: Waiting at most 500 seconds for login banner
NOTE: Connection from 127.0.0.1:44406
NOTE: Reached login banner
NOTE: Test modules  ['oeqa.runtime.ping', 'oeqa.runtime.ssh', 'oeqa.runtime.rpm', 'oeqa.runtime.multilib', 'oeqa.runtime.smart', 'oeqa.runtime.dmesg', 'oeqa.runtime.df', 'oeqa.runtime.connman', 'oeqa.runtime.gcc', 'oeqa.runtime.xorg', 'oeqa.runtime.syslog', 'oeqa.runtime.systemd']
NOTE: Found 31 tests
test_ping (oeqa.runtime.ping.PingTest) ... ok
test_ssh (oeqa.runtime.ssh.SshTest) ... ok
test_rpm_help (oeqa.runtime.rpm.RpmHelpTest) ... ok
test_rpm_query (oeqa.runtime.rpm.RpmQueryTest) ... ok
skipped "multilib: this isn't a multilib:lib32 image"
test_smart_help (oeqa.runtime.smart.SmartHelpTest) ... ok
test_smart_info (oeqa.runtime.smart.SmartQueryTest) ... ok
test_smart_query (oeqa.runtime.smart.SmartQueryTest) ... ok
test_dmesg (oeqa.runtime.dmesg.DmesgTest) ... ok
test_df (oeqa.runtime.df.DfTest) ... ok
skipped 'connman: No connman package in image'
skipped "gcc: Image doesn't have tools-sdk in IMAGE_FEATURES"
skipped "xorg: target doesn't have x11 in IMAGE_FEATURES"
test_syslog_help (oeqa.runtime.syslog.SyslogTest) ... ok
test_syslog_running (oeqa.runtime.syslog.SyslogTest) ... ok
test_syslog_logger (oeqa.runtime.syslog.SyslogTestConfig) ... ok
test_syslog_restart (oeqa.runtime.syslog.SyslogTestConfig) ... ok
test_syslog_startup_config (oeqa.runtime.syslog.SyslogTestConfig) ... skipped 'Not appropiate for systemd image'
test_systemd_version (oeqa.runtime.systemd.SystemdBasicTest) ... ok
test_systemd_disable (oeqa.runtime.systemd.SystemdTests) ... ok
test_systemd_enable (oeqa.runtime.systemd.SystemdTests) ... ok
test_systemd_failed (oeqa.runtime.systemd.SystemdTests) ... ok
test_systemd_list (oeqa.runtime.systemd.SystemdTests) ... ok
test_systemd_service (oeqa.runtime.systemd.SystemdTests) ... ok
test_systemd_start (oeqa.runtime.systemd.SystemdTests) ... ok
test_systemd_stop (oeqa.runtime.systemd.SystemdTests) ... ok

----------------------------------------------------------------------
Ran 22 tests in 48.492s

OK (skipped=5)
NOTE: All required tests passed
DEBUG: Python function do_testimage finished

As you can see, some tests passed and some were skipped (because they weren't applicable to this image). And even though I didn't add the systemd tests to TEST_SUITES, they were run (because of "auto").



Let's see what happens if I use TEST_SUITES = "ping ssh gcc" for a core-image-sato image (which doesn't have the tools-sdk feature):

--snip--
NOTE: Reached login banner
NOTE: Test modules  ['oeqa.runtime.ping', 'oeqa.runtime.ssh', 'oeqa.runtime.gcc']
NOTE: Found 5 tests
test_ping (oeqa.runtime.ping.PingTest) ... ok
test_ssh (oeqa.runtime.ssh.SshTest) ... ok
ERROR

======================================================================
ERROR: setUpModule (oeqa.runtime.gcc)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/stefans/z/poky/meta/lib/oeqa/runtime/gcc.py", line 8, in setUpModule
    skipModule("Image doesn't have tools-sdk in IMAGE_FEATURES")
  File "/home/stefans/z/poky/meta/lib/oeqa/oetest.py", line 108, in skipModule
    "\nor the image really doesn't have the requred feature/package when it should." % (modname, reason))
Exception: 
Test gcc wants to be skipped.
Reason is: Image doesn't have tools-sdk in IMAGE_FEATURES
Test was required in TEST_SUITES, so either the condition for skipping is wrong
or the image really doesn't have the requred feature/package when it should.

----------------------------------------------------------------------
Ran 2 tests in 6.255s

FAILED (errors=1)
NOTE: Sending SIGTERM to runqemu
DEBUG: Python function do_testimage finished
ERROR: Function failed: Some tests failed. You should check the task log and the ssh log. (ssh log is /home/stefans/z/poky/build/tmp/work/qemux86_64-poky-linux/core-image-sato/1.0-r0/testimage/ssh_target_log.20130827122341
  • First, it tells us it loaded the modules we required (ping, ssh and gcc) and that there are 5 tests (because the gcc module has 3 test methods)
  • It starts running the tests
  • the gcc module errors out, giving us a traceback of why that happened. Because gcc was a required test, it wasn't skipped as before; instead it was marked as an error.



Some examples from the ssh log in ${WORKDIR}/testimage/ssh_target_log (only partial output shown here, from the df, syslog, xorg, date, rpm, smart and dmesg tests):

Good to know:

  • Q: why is there a ". /etc/profile" before each command? A: because of the default PATH (/bin:/usr/bin) when running commands over ssh (the full answer is a bit more complex; let's just say we need to source /etc/profile to extend PATH)
  • while it might look like the commands aren't properly escaped, those ssh commands are actually run through Python's subprocess module with shell=False, so copy-pasting the commands into your shell won't work unless you escape them properly (see the sketch after this list)
  • there is a default timeout of 300 seconds for each command (though a test can override that or run a command with no timeout). There is no timeout for scp commands though.
  • the tests can use the return code and/or the output to decide if they fail/pass.
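
To illustrate the shell=False point, here is a minimal sketch of how such a command could be launched from Python (this is not the actual SSHControl implementation, just an illustration of the argument handling):

import subprocess

# The ssh options and the remote command are separate list elements, so the
# local shell never gets a chance to interpret the quotes or the semicolon;
# only the remote shell sees ". /etc/profile; uname -a" as a command line.
ssh_cmd = ["ssh", "-l", "root",
           "-o", "UserKnownHostsFile=/dev/null",
           "-o", "StrictHostKeyChecking=no",
           "-o", "LogLevel=ERROR",
           "192.168.7.2",
           ". /etc/profile; uname -a"]
proc = subprocess.Popen(ssh_cmd, shell=False,
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = proc.communicate()[0]
status = proc.returncode
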
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; uname -a
Linux qemux86-64 3.10.10-yocto-standard #1 SMP PREEMPT Tue Sep 10 11:23:42 EEST 2013 x86_64 GNU/Linux
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; df / | sed -n '2p' | awk '{print $4}'
111614
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; /sbin/syslogd --help
BusyBox v1.21.1 (2013-09-10 11:49:24 EEST) multi-call binary.

Usage: syslogd [OPTIONS]
[SSH command returned]: 1
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; ps | grep -i [s]yslogd
  668 root      4716 S    /sbin/syslogd -n -O /var/log/messages
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; logger foobar && test -e /var/log/messages && grep foobar /var/log/messages || logread | grep foobar
Sep 10 09:20:57 qemux86-64 user.notice root: foobar
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; /etc/init.d/syslog restart
Stopping syslogd/klogd: stopped syslogd (pid 668)
stopped klogd (pid 670)
done
Starting syslogd/klogd: done
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; echo "LOGFILE=/var/log/test" >> /etc/syslog-startup.conf

[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; /etc/init.d/syslog restart
Stopping syslogd/klogd: stopped syslogd (pid 799)
stopped klogd (pid 801)
done
Starting syslogd/klogd: done
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; logger foobar && grep foobar /var/log/test
Sep 10 09:21:01 qemux86-64 user.notice root: foobar
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; sed -i 's#LOGFILE=/var/log/test##' /etc/syslog-startup.conf

[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; /etc/init.d/syslog restart
Stopping syslogd/klogd: stopped syslogd (pid 814)
stopped klogd (pid 816)
done
Starting syslogd/klogd: done
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; cat /var/log/Xorg.0.log | grep -v "(EE) error," | grep -v "PreInit" | grep -v "evdev:" | grep -v "glx" | grep "(EE)"

[SSH command returned]: 1
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; ps |  grep -v xinit | grep [X]org
  594 root     79776 S <  /usr/bin/Xorg :0 -br -pn
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; date +"%Y-%m-%d %T"
2013-09-10 09:21:14
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; date -s "2016-08-09 10:00:00"
Tue Aug  9 10:00:00 UTC 2016
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; date -R
Tue, 09 Aug 2016 10:00:01 +0000
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; date -s "2013-09-10 09:21:14"
Tue Sep 10 09:21:14 UTC 2013
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; rpm -q rpm
rpm-5.4.9-r63.x86_64
[SSH command returned]: 0
[Running SCP]$ scp -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR /home/stefans/z/poky/build/tmp/deploy/rpm/x86_64/rpm-doc-5.4.9-r63.x86_64.rpm root@192.168.7.2:/tmp/rpm-doc.rpm

[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; rpm -ivh /tmp/rpm-doc.rpm
Preparing...                ##################################################
rpm-doc                     ##################################################
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; rpm -e rpm-doc

[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; rm -f /tmp/rpm-doc.rpm

[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; smart channel -y --add x86_64 type=rpm-md baseurl=http://192.168.7.1:54711/rpm/x86_64

[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; smart channel -y --add qemux86_64 type=rpm-md baseurl=http://192.168.7.1:54711/rpm/qemux86_64

[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; smart update
Loading cache...
Updating cache...               ######################################## [100%]

                                                                               
Fetching information for 'x86_64'...
                                                                               
-> http://192.168.7.1:54711/rpm/x86_64/repodata/repomd.xml
repomd.xml                      ######################################## [ 16%]
                                                                               
-> http://192.168.7.1:54711/rpm/x86_64/repodata/primary.xml.gz
primary.xml.gz                  ######################################## [ 25%]
                                                                               
-> http://192.168.7.1:54711/rpm/x86_64/repodata/filelists.xml.gz
filelists.xml.gz                ######################################## [ 33%]
                                                                               
Fetching information for 'all'...
                                                                               
-> http://192.168.7.1:54711/rpm/all/repodata/repomd.xml
repomd.xml                      ######################################## [ 50%]
                                                                               
-> http://192.168.7.1:54711/rpm/all/repodata/filelists.xml.gz
filelists.xml.gz                ######################################## [ 58%]
                                                                               
-> http://192.168.7.1:54711/rpm/all/repodata/primary.xml.gz
primary.xml.gz                  ######################################## [ 66%]
                                                                               
Fetching information for 'qemux86_64'...
                                                                               
-> http://192.168.7.1:54711/rpm/qemux86_64/repodata/repomd.xml
repomd.xml                      ######################################## [ 83%]
                                                                               
-> http://192.168.7.1:54711/rpm/qemux86_64/repodata/primary.xml.gz
primary.xml.gz                  ######################################## [ 91%]
                                                                               
-> http://192.168.7.1:54711/rpm/qemux86_64/repodata/filelists.xml.gz
filelists.xml.gz                ######################################## [100%]

Updating cache...               ######################################## [100%]

Channels have 5009 new packages.
Saving cache...
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; smart remove -y psplash-default
Loading cache...
Updating cache...               ######################################## [100%]

Computing transaction...                                                                               
Committing transaction...
Preparing...                    ######################################## [  0%]
   1:Removing psplash-default   ######################################## [100%]
update-alternatives: removing //usr/bin/psplash as no more alternatives exist for it


Removing packages (1):
  psplash-default-0.1+git0+afd4e228c6-r15@x86_64                                

50.9kB will be freed.
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; smart install -y psplash-default
Loading cache...
Updating cache...               ######################################## [100%]

Computing transaction...

Installing packages (1):
  psplash-default-0.1+git0+afd4e228c6-r15@x86_64                                

22.3kB of package files are needed. 50.9kB will be used.

                                                                               
Fetching packages...
                                                                               
-> http://192.168.7.1:54711/.../psplash-default-0.1+git0+afd4e228c6-r15.x86_64.rpm
psplash-default-0.1+git0+afd4.. ######################################## [100%]
                                                                               
Committing transaction...
Preparing...                    ######################################## [  0%]
   1:Installing psplash-default ######################################## [100%]
Output from psplash-default-0.1+git0+afd4e228c6-r15@x86_64:
update-alternatives: Linking //usr/bin/psplash to /usr/bin/psplash-default


Saving cache...
[SSH command returned]: 0
[Running]$ ssh -l root -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o LogLevel=ERROR 192.168.7.2 . /etc/profile; dmesg | grep -v mmci-pl18x | grep -v "error changing net interface name" | grep -i error

[SSH command returned]: 1

Writing new tests

All new test files should go in meta/lib/oeqa/runtime. The file names themselves are the actual test names we use, also called test modules. A layer can add its own tests in <meta-layer>/lib/oeqa/runtime, provided it extends BBPATH as normal in its layer.conf (test module names shouldn't collide with those in core, though).

Test modules live in meta/lib/oeqa/runtime and they can use code from meta/lib/oeqa/utils, which contains helper classes for extra functionality (like starting an HTTP server).

You should start by copying an existing module (e.g. syslog.py or gcc.py are good examples) and go from there.

You'll see that all test classes inherit oeRuntimeTest (found in meta/lib/oeqa/oetest.py). This base class offers some helper methods and attributes. Here's a short list:

Class methods:

  • hasPackage(pkg): returns True if pkg is in the installed package list of the image (based on WORKDIR/installed_pkgs.txt, which is generated at do_rootfs)
  • hasFeature(feature): returns True if feature is in IMAGE_FEATURES or DISTRO_FEATURES
  • restartTarget(params): restarts the qemu image, optionally passing params to runqemu's qemuparams (e.g. "-m 1024" for more memory); a usage sketch follows this list
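
As a small sketch of how these class methods are typically used in a module's setUpModule (a hypothetical module checking for the rpm package and the x11 feature, not an existing test):

from oeqa.oetest import oeRuntimeTest, skipModule

def setUpModule():
    # skip the whole module if the image lacks what the tests need
    if not oeRuntimeTest.hasPackage("rpm"):
        skipModule("No rpm package in image")
    if not oeRuntimeTest.hasFeature("x11"):
        skipModule("Image doesn't have x11 in IMAGE_FEATURES")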

Class attributes:

  • pscmd: equals "ps -ef" if procps is installed in the image, otherwise it's "ps" (busybox)
  • tc: the test context; this gives access to other attributes:
    • d: the bitbake data store (so you can do things like oeRuntimeTest.tc.d.getVar("VIRTUAL-RUNTIME_init_manager"))
    • testslist and testsrequired: used internally, tests shouldn't need them
    • filesdir: absolute path to meta/lib/oeqa/runtime/files (which contains helper files for tests, meant to be copied to the target, like small .c files to be compiled)
    • qemu: access to the QemuRunner object, the class that boots the image. Useful attributes:
      • ip: the machine's IP
      • host_ip: host IP, only used by smart tests
      • other stuff not relevant for tests
    • target: SSHControl object, used for running commands on the image
      • host: same as qemu.ip, used internally, not really used in tests
      • timeout: global timeout for commands ran on the target for this instance (default: 300).
      • run(cmd, timeout=None): The single most used method. Basically a wrapper for: 'ssh root@host "cmd"'. It returns a tuple (status, output), which are what their names say: the return code of 'cmd' and whatever output it produces. The optional timeout argument is the number of seconds to wait for 'cmd' to return (if None, the instance's default timeout is used, which is currently 300; if 0, it runs forever or until the command returns); a usage sketch follows this list
      • copy_to(localpath, remotepath): basically: 'scp localpath root@ip:remotepath'
      • copy_from(remotepath, localpath): basically: 'scp root@host:remotepath localpath'
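
A hedged sketch of how a test typically drives the target through these helpers (ExampleTest is hypothetical, not an existing module; the file paths are made up for illustration):

from oeqa.oetest import oeRuntimeTest

class ExampleTest(oeRuntimeTest):

    def test_example(self):
        # run() returns (status, output); timeout is optional (default 300 seconds)
        (status, output) = self.target.run('uname -a', timeout=60)
        self.assertEqual(status, 0, msg="uname failed, output: %s" % output)
        # copy a file to the target and fetch a file back from it
        oeRuntimeTest.tc.target.copy_to("/etc/hostname", "/tmp/host_hostname")
        oeRuntimeTest.tc.target.copy_from("/var/log/messages", "/tmp/target_messages")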

Instance attributes:

  • target: copy of the above - target is both an instance and a class attribute (so tests can use self.target.run(cmd) in instance methods instead of oeRuntimeTest.tc.target.run(cmd))


Let's have a look at meta/lib/oeqa/runtime/gcc.py:

import unittest
import os
from oeqa.oetest import oeRuntimeTest, skipModule
from oeqa.utils.decorators import *

def setUpModule():
    if not oeRuntimeTest.hasFeature("tools-sdk"):
        skipModule("Image doesn't have tools-sdk in IMAGE_FEATURES")


class GccCompileTest(oeRuntimeTest):

    @classmethod
    def setUpClass(self):
        oeRuntimeTest.tc.target.copy_to(os.path.join(oeRuntimeTest.tc.filesdir, "test.c"), "/tmp/test.c")
        oeRuntimeTest.tc.target.copy_to(os.path.join(oeRuntimeTest.tc.filesdir, "testmakefile"), "/tmp/testmakefile")

    def test_gcc_compile(self):
        (status, output) = self.target.run('gcc /tmp/test.c -o /tmp/test -lm')
        self.assertEqual(status, 0, msg="gcc compile failed, output: %s" % output)
        (status, output) = self.target.run('/tmp/test')
        self.assertEqual(status, 0, msg="running compiled file failed, output %s" % output)

    def test_gpp_compile(self):
        (status, output) = self.target.run('g++ /tmp/test.c -o /tmp/test -lm')
        self.assertEqual(status, 0, msg="g++ compile failed, output: %s" % output)
        (status, output) = self.target.run('/tmp/test')
        self.assertEqual(status, 0, msg="running compiled file failed, output %s" % output)

    def test_make(self):
        (status, output) = self.target.run('cd /tmp; make -f testmakefile')
        self.assertEqual(status, 0, msg="running make failed, output %s" % output)

    @classmethod
    def tearDownClass(self):
        oeRuntimeTest.tc.target.run("rm /tmp/test.c /tmp/test.o /tmp/test /tmp/testmakefile")

Here's a breakdown of what happens when this module is loaded by the python unittest loader:

  • setUpModule: although this is optional, it's found in almost all modules and allows checking for certain features/packages in an image (it's also how TEST_SUITES = "auto" works: it loads all tests but skips them based on this)
  • The actual test class has two class methods, setUpClass and tearDownClass, which run before and after all the test methods, respectively. These are called test fixtures and are used for setting up tests (like copying files to the target in this case). Exceptions thrown in setUpModule/setUpClass and setUp methods lead to marking the test as an ERROR, not a FAIL.
  • the test methods themselves just run some commands on the target and assert on their return codes. Assertion exceptions lead to FAILs.

The syslog.py module is a bit more complex:

import unittest
from oeqa.oetest import oeRuntimeTest, skipModule
from oeqa.utils.decorators import *

def setUpModule():
    if not oeRuntimeTest.hasPackage("syslog"):
        skipModule("No syslog package in image")

class SyslogTest(oeRuntimeTest):

    @skipUnlessPassed("test_ssh")
    def test_syslog_help(self):
        (status,output) = self.target.run('/sbin/syslogd --help')
        self.assertEqual(status, 1, msg="status and output: %s and %s" % (status,output))

    @skipUnlessPassed("test_syslog_help")
    def test_syslog_running(self):
        (status,output) = self.target.run(oeRuntimeTest.pscmd + ' | grep -i [s]yslogd')
        self.assertEqual(status, 0, msg="no syslogd process, ps output: %s" % self.target.run(oeRuntimeTest.pscmd)[1])


class SyslogTestConfig(oeRuntimeTest):

    @skipUnlessPassed("test_syslog_running")
    def test_syslog_logger(self):
        (status,output) = self.target.run('logger foobar && test -e /var/log/messages && grep foobar /var/log/messages || logread | grep foobar')
        self.assertEqual(status, 0, msg="Test log string not found in /var/log/messages. Output: %s " % output)

    @skipUnlessPassed("test_syslog_running")
    def test_syslog_restart(self):
        if "systemd" != oeRuntimeTest.tc.d.getVar("VIRTUAL-RUNTIME_init_manager"):
            (status,output) = self.target.run('/etc/init.d/syslog restart')
        else:
            (status,output) = self.target.run('systemctl restart syslog.service')

    @skipUnlessPassed("test_syslog_restart")
    @skipUnlessPassed("test_syslog_logger")
    @unittest.skipIf("systemd" == oeRuntimeTest.tc.d.getVar("VIRTUAL-RUNTIME_init_manager"), "Not appropiate for systemd image")
    def test_syslog_startup_config(self):
        self.target.run('echo "LOGFILE=/var/log/test" >> /etc/syslog-startup.conf')
        (status,output) = self.target.run('/etc/init.d/syslog restart')
        self.assertEqual(status, 0, msg="Could not restart syslog service. Status and output: %s and %s" % (status,output))
        (status,output) = self.target.run('logger foobar && grep foobar /var/log/test')
        self.assertEqual(status, 0, msg="Test log string not found. Output: %s " % output)
        self.target.run("sed -i 's#LOGFILE=/var/log/test##' /etc/syslog-startup.conf")
        self.target.run('/etc/init.d/syslog restart')

There are two test classes here, each with methods that make use of more of the oeRuntimeTest attributes.

This also makes use of unittest's skip decorators and our own skipUnlessPassed decorator, which uses test method names for skipping - basically a form of dependency between tests. skipUnlessPassed can be misleading and there is a gotcha here: it only works for ordered tests (that's why the order in TEST_SUITES is important, as are the order and names of the test methods). Why? Because of the way unittest counts passed tests. A passed test is one which isn't skipped, failed or errored, and this becomes a problem when the respective test method hasn't run yet (so trying to depend on a test that runs after your module won't work as expected). That is, there is almost no distinction between a test which has passed and one which hasn't run yet (see Python's unittest sources in result.py).
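
As a minimal sketch of the pattern (a hypothetical module, not an existing test), the dependent method must come after the one it depends on, both in TEST_SUITES order and in the alphabetical method order unittest uses within a class:

from oeqa.oetest import oeRuntimeTest
from oeqa.utils.decorators import *

class DependencyExample(oeRuntimeTest):

    def test_step1_touch(self):
        # create a marker file on the target
        (status, output) = self.target.run('touch /tmp/dep_marker')
        self.assertEqual(status, 0, msg="touch failed: %s" % output)

    # runs only if test_step1_touch passed; if the dependency hasn't run yet
    # (wrong ordering), the check cannot distinguish "passed" from "not run"
    @skipUnlessPassed("test_step1_touch")
    def test_step2_check(self):
        (status, output) = self.target.run('test -e /tmp/dep_marker')
        self.assertEqual(status, 0, msg="marker file missing, output: %s" % output)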

One more thing: be inventive with the shell commands you run, and construct them so that you can rely on a single clear return code for success. Sometimes you do need to parse the output; see df.py and date.py for examples.
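
For instance, a free-space check in the spirit of df.py can let the shell pipeline produce the value and the assertions do the judging (a sketch, not the actual df.py code):

from oeqa.oetest import oeRuntimeTest

class RootfsFreeSpaceExample(oeRuntimeTest):

    def test_free_space(self):
        # print the free space on / in 1K blocks (same pipeline as in the ssh log above)
        (status, output) = self.target.run("df / | sed -n '2p' | awk '{print $4}'")
        self.assertEqual(status, 0, msg="df command failed, output: %s" % output)
        # then parse the single number the pipeline leaves us
        self.assertTrue(int(output) > 5120,
                        msg="less than 5 MB free on the rootfs: %s" % output)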