The first thing you need to do is get systemtap onto the host system (if you've built an 'sdk' Yocto image, the systemtap runtime is already installed and ready to go on the target, so there's nothing to do there). The easiest way to do that is to clone the systemtap repo and build it. You also need to download the latest elfutils in order to compile systemtap.
First, get elfutils from [https://fedorahosted.org/releases/e/l/elfutils/] and unpack it e.g.
$ bunzip2 -c elfutils-0.152.tar.bz2 | tar xvf -
Clone the systemtap repo, and check out a working branch using the commit id matching the systemtap SRCREV in the systemtap recipe in meta/recipes-kernel/systemtap/systemtap_git.bb:
SRCREV = "820f2d22fc47fad6e09ba886efb9b91e1247cb39"
$ git clone git://sources.redhat.com/git/systemtap.git
$ mv systemtap systemtap-1.6
$ cd systemtap-1.6
$ git checkout -b yocto 820f2d22fc47fad6e09ba886efb9b91e1247cb39
$ ./configure --with-elfutils=../elfutils-0.152 --prefix=/home/trz/systemtap-1.6
$ make install
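As a quick sanity check that the install landed where expected (using the example install prefix above), you can ask the newly installed stap binary for its version:
$ /home/trz/systemtap-1.6/bin/stap -V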
At this point you have an installed systemtap (in /home/trz/systemtap-1.6 in this example) that you can use to cross-compile probes for the target.
Basically, here's the form of the 'stap' invocation you need to run on the host (using the trace_open.stp script as an example) in order to generate a kernel module that will run on the target:
${SYSTEMTAP_HOST_INSTALLDIR}/bin/stap -gv -a ${SYSTEMTAP_TARGET_ARCH} \
    -B CROSS_COMPILE="${STAGING_BINDIR_TOOLCHAIN}/${STAGING_BINDIR_TOOLPREFIX}-" \
    -r ${TARGET_KERNEL_BUILDDIR} -I ${SYSTEMTAP_HOST_INSTALLDIR}/share/systemtap/tapset \
    -R ${SYSTEMTAP_HOST_INSTALLDIR}/share/systemtap/runtime -m trace_open trace_open.stp
Thankfully, the 'crosstap' script below hides all this, but you do still need to fill in the blanks in the script with the appropriate values for your system:
*SYSTEMTAP_HOST_INSTALLDIR is the directory we installed systemtap into (/home/trz/systemtap-1.6 above).
*SYSTEMTAP_TARGET_ARCH is the arch that will be passed on to the compiler. In the case of emenlow (x86) that's "i386".
*STAGING_BINDIR_TOOLCHAIN is the location of the toolchain binaries used to compile the target. In my case, with my Yocto base directory at "/home/trz/work/dev", it would be "/home/trz/work/dev/build/tmp/sysroots/x86_64-linux/usr/bin/emen-poky-linux".
*STAGING_BINDIR_TOOLPREFIX is the prefix the binaries in the STAGING_BINDIR_TOOLCHAIN directory have (look there to see; an example listing follows this list). In the case of emenlow (x86), that would be "i586-poky-linux".
*TARGET_KERNEL_BUILDDIR is the location of the kernel build directory. In my case (emenlow x86), that's "/home/trz/work/dev/build/tmp/work/emenlow-poky-linux/linux-yocto-3.0.3+git0+d1cd5c80ee97e81e130be8c3de3965b770f320d6_0+aae69fdf104b0a9d7b3710f808aac6ab303490f7-r1/linux-emenlow-standard-build".
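For example, to confirm the toolchain prefix, just list the toolchain directory (the path here is the emenlow example value above - yours will differ); the binaries all share the prefix, e.g. i586-poky-linux-gcc, i586-poky-linux-ld, and so on:
$ ls /home/trz/work/dev/build/tmp/sysroots/x86_64-linux/usr/bin/emen-poky-linux/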
Once you know all those values, you can substitute them into the above invocation (or into the 'crosstap' script below) and you should be able to generate a module (trace_open.ko in this example) that you can then copy over to the target machine and run there:
$ scp trace_open.ko root@192.168.7.2:trace_open.ko
$ ssh -t root@192.168.7.2 staprun trace_open.ko
You should then see the output of the script in your terminal session. If the script doesn't have an automatic termination condition (e.g. exiting after 4 seconds), Ctrl-C will terminate it and print whatever information the script generates upon termination (if any).
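While a probe is running, a quick way to double-check that the module actually made it into the target kernel is to look for it in lsmod from another session (the module name matches the name passed to -m, trace_open in this example):
$ ssh root@192.168.7.2 lsmod | grep trace_open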
You should also clean up by removing the module on the target:
$ ssh -t root@192.168.7.2 rm trace_open.ko
The following script, 'crosstap', automates all of the above and makes invoking systemtap scripts on the target a bit easier, but it is just an example and is unsupported. Usage is basically the following (again using the above example):
$ crosstap trace_open.stp root@192.168.7.2
Here's example output from running the above on an emenlow system:
trz@elmorro:~/stap-tests/cross$ ./crosstap ../scripts/tutorial/trace_open.stp root@192.168.7.2
WARNING: kernel release/architecture mismatch with host forces last-pass 4.
Pass 1: parsed user script and 75 library script(s) using 60404virt/20848res/2056shr kb, in 100usr/10sys/104real ms.
Pass 2: analyzed script: 2 probe(s), 8 function(s), 22 embed(s), 0 global(s) using 210076virt/44108res/2956shr kb, in 430usr/60sys/500real ms.
Pass 3: translated to C into "/tmp/stapQL56Ll/trace_open.c" using 207808virt/46548res/5480shr kb, in 380usr/0sys/383real ms.
trace_open.ko
Pass 4: compiled C into "trace_open.ko" in 1270usr/190sys/2040real ms.
copying stap module trace_open.ko
root@192.168.7.2's password:
trace_open.ko 100% 259KB 259.0KB/s 00:00
executing stap module trace_open.ko
root@192.168.7.2's password:
ls(1853) open ("/etc/ld.so.cache", O_RDONLY)
ls(1853) open ("/lib/librt.so.1", O_RDONLY)
ls(1853) open ("/usr/lib/libcap.so.2", O_RDONLY)
ls(1853) open ("/lib/libc.so.6", O_RDONLY)
ls(1853) open ("/lib/libpthread.so.0", O_RDONLY)
ls(1853) open (".", O_RDONLY|O_CLOEXEC|O_DIRECTORY|O_LARGEFILE|O_NONBLOCK|O_CLOEXEC)
sh(1823) open ("/etc/passwd", O_RDONLY|O_CLOEXEC|O_CLOEXEC)
hald-addon-stor(1656) open ("/dev/sdb", O_RDONLY|O_LARGEFILE)
hald-addon-stor(1656) open ("/dev/sdb", O_RDONLY|O_LARGEFILE)
hald-addon-stor(1656) open ("/dev/sdb", O_RDONLY|O_LARGEFILE)
Connection to 192.168.1.7 closed.
removing stap module trace_open.ko
root@192.168.7.2's password:
Connection to 192.168.7.2 closed.
The trace_open.stp script is about as simple a script as you can have (aside from a helloworld.stp script). Just to show that you can in fact run some pretty interesting and non-trivial scripts, here's the iostats.stp script from the systemtap examples dir, which basically aggregates read/write activity over a period of time:
#! /usr/bin/env stap
global opens, reads, writes, totals
probe begin { printf("starting probe\n") }
probe syscall.open {
e=execname();
opens[e] <<< 1 # statistics array
}
probe syscall.read.return {
count = $return
if ( count >= 0 ) {
e=execname();
reads[e] <<< count # statistics array
totals[e] += count
}
}
probe syscall.write.return {
count = $return
if (count >= 0 ) {
e=execname();
writes[e] <<< count # statistics array
totals[e] += count
}
}
probe end {
printf("\n%16s %8s %8s %8s %8s %8s %8s %8s\n",
"", "", "", "read", "read", "", "write", "write")
printf("%16s %8s %8s %8s %8s %8s %8s %8s\n",
"name", "open", "read", "KB tot", "B avg", "write", "KB tot", "B avg")
foreach (name in totals- limit 20) { # sort by total io
printf("%16s %8d %8d %8d %8d %8d %8d %8d\n",
name, @count(opens[name]),
@count(reads[name]),
(@count(reads[name]) ? @sum(reads[name])>>10 : 0 ),
(@count(reads[name]) ? @avg(reads[name]) : 0 ),
@count(writes[name]),
(@count(writes[name]) ? @sum(writes[name])>>10 : 0 ),
(@count(writes[name]) ? @avg(writes[name]) : 0 ))
}
}
And here's the output from using 'crosstap' to run the script on an emenlow target - this output was generated following a fresh boot and a browser invocation visiting an external website (notice that this script was terminated using Ctrl-C, at which point the results are printed in tabular form):
trz@elmorro:~/stap-tests/cross$ ./crosstap ../scripts/iostats.stp root@192.168.7.2
WARNING: kernel release/architecture mismatch with host forces last-pass 4.
Pass 1: parsed user script and 75 library script(s) using 60444virt/20892res/2056shr kb, in 140usr/10sys/164real ms.
Pass 2: analyzed script: 5 probe(s), 2 function(s), 22 embed(s), 4 global(s) using 210232virt/44264res/2952shr kb, in 540usr/50sys/589real ms.
Pass 3: translated to C into "/tmp/staprO74T8/iostats.c" using 207964virt/46724res/5496shr kb, in 330usr/10sys/436real ms.
iostats.ko
Pass 4: compiled C into "iostats.ko" in 1830usr/140sys/2471real ms.
copying stap module iostats.ko
root@192.168.7.2's password:
iostats.ko 100% 289KB 289.3KB/s 00:00
executing stap module iostats.ko
root@192.168.7.2's password:
starting probe
^C
                                       read     read              write    write
            name     open     read   KB tot    B avg    write   KB tot    B avg
            Xorg        0      392      745     1947        0        0        0
            web2      253      406      249      628      168       86      528
 matchbox-deskto        2      213        6       31        1        0        4
 matchbox-window        0       57        3       66        0        0        0
  matchbox-panel        1       12        2      218        0        0        0
        dropbear        0        4        0       16        3        0       37
  connman-applet        0        2        0       48        0        0        0
          stapio        0       63        0        0        1        0       15
Connection to 192.168.1.7 closed.
removing stap module iostats.ko
root@192.168.7.2's password:
Connection to 192.168.7.2 closed.
The output of the above two scripts should give you some idea of the power these types of open-ended system-wide queries can provide, and give you a good starting point for your own experimentation.
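If you want to start experimenting, one simple variation is to narrow trace_open.stp down to a single executable. The sketch below (untested here, but it uses only tapset functions already shown on this page) writes a small variant script and runs it with the 'crosstap' script from this page:
$ cat > trace_open_sh.stp << 'EOF'
probe syscall.open
{
        # only report open()s made by processes named 'sh'
        if (execname() == "sh")
                printf ("%s(%d) open (%s)\n", execname(), pid(), argstr)
}
probe timer.ms(10000) # exit after 10 seconds
{
        exit ()
}
EOF
$ ./crosstap trace_open_sh.stp root@192.168.7.2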
With the 'utrace' feature, systemtap can also probe userspace applications. Here's an example of a probe that prints the name, pid, and the probe point description for every system call made by any application:
probe process.syscall, process.end
{
printf ("name: %s, pid: %d, ppid: %s\n", execname(), pid(), pp())
}
and sample output:
name: dropbear, pid: 1128, ppid: process.syscall
name: stapio, pid: 1130, ppid: process.syscall
name: dropbear, pid: 1128, ppid: process.syscall
name: stapio, pid: 1130, ppid: process.syscall
name: stapio, pid: 1130, ppid: process.syscall
name: stapio, pid: 1130, ppid: process.syscall
name: dropbear, pid: 1128, ppid: process.syscall
name: stapio, pid: 1130, ppid: process.syscall
name: dropbear, pid: 1128, ppid: process.syscall
name: stapio, pid: 1130, ppid: process.syscall
name: dropbear, pid: 1128, ppid: process.syscall
name: stapio, pid: 1130, ppid: process.syscall
name: stapio, pid: 1130, ppid: process.syscall
name: dropbear, pid: 1128, ppid: process.syscall
If we specify a particular application, we can get further information, such as the syscall number for each system call it makes:
probe process("/usr/sbin/dropbearmulti").syscall
{
printf ("syscall: %d, name: %s, pid: %d, ppid: %s\n", $syscall, execname(), pid(), pp())
}
Here's example output on a sugarbay system:
syscall: 0, name: dropbear, pid: 1239, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 0, name: dropbear, pid: 1239, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 23, name: dropbear, pid: 1239, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 1, name: dropbear, pid: 1239, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 23, name: dropbear, pid: 1239, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 0, name: dropbear, pid: 1239, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 23, name: dropbear, pid: 1239, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 1, name: dropbear, pid: 1239, ppid: process("/usr/sbin/dropbearmulti").syscall
And the same program run on qemuppc:
syscall: 142, name: dropbear, pid: 1984, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 13, name: dropbear, pid: 1984, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 3, name: dropbear, pid: 1984, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 3, name: dropbear, pid: 1984, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 13, name: dropbear, pid: 1984, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 142, name: dropbear, pid: 1984, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 13, name: dropbear, pid: 1984, ppid: process("/usr/sbin/dropbearmulti").syscall
syscall: 4, name: dropbear, pid: 1984, ppid: process("/usr/sbin/dropbearmulti").syscall
You can see that the syscall numbers are different between the two architectures. Note also that we had to specify 'dropbearmulti' as the application name - 'dropbear' is actually a symbolic link to it, and as such wouldn't work as an executable name for systemtap.
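A quick way to catch this kind of thing is to look at the binary on the target before naming it in a probe - the ls -l listing will show where a symbolic link points (illustrative command, using the target address from the examples above):
$ ssh root@192.168.7.2 ls -l /usr/sbin/dropbear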
Note that userspace probes are only supported on x86 and qemuppc systems - utrace isn't supported on arm or mips.
'crosstap' script:
#!/bin/bash
# usage: crosstap <systemtap script> <user@target-addr>
#
# NOTE: SYSTEMTAP_HOST_INSTALLDIR, SYSTEMTAP_TARGET_ARCH,
# STAGING_BINDIR_TOOLCHAIN, STAGING_BINDIR_TOOLPREFIX, and
# TARGET_KERNEL_BUILDDIR must be set to the appropriate host/yocto
# build-tree paths for this script to work correctly. The values
# below are only examples.

# where systemtap was installed on the host
SYSTEMTAP_HOST_INSTALLDIR="/home/trz/systemtap-1.6"

# i386 for 32-bit x86, arm for arm
SYSTEMTAP_TARGET_ARCH="i386"

# where to find compiler executables, passed on to kernel kbuild
STAGING_BINDIR_TOOLCHAIN="/home/trz/work/dev/build/tmp/sysroots/x86_64-linux/usr/bin/emen-poky-linux"
STAGING_BINDIR_TOOLPREFIX="i586-poky-linux"

# pointer to configured/built kernel tree
TARGET_KERNEL_BUILDDIR="/home/trz/work/dev/build/tmp/work/emenlow-poky-linux/linux-yocto-3.0.3+git0+d1cd5c80ee97e81e130be8c3de3965b770f320d6_0+aae69fdf104b0a9d7b3710f808aac6ab303490f7-r1/linux-emenlow-standard-build"

script_name=$(basename "$1")
script_base=${script_name%.*}

${SYSTEMTAP_HOST_INSTALLDIR}/bin/stap -gv -a ${SYSTEMTAP_TARGET_ARCH} \
    -B CROSS_COMPILE="${STAGING_BINDIR_TOOLCHAIN}/${STAGING_BINDIR_TOOLPREFIX}-" \
    -r ${TARGET_KERNEL_BUILDDIR} \
    -I ${SYSTEMTAP_HOST_INSTALLDIR}/share/systemtap/tapset \
    -R ${SYSTEMTAP_HOST_INSTALLDIR}/share/systemtap/runtime \
    -m $script_base $1

echo "copying stap module $script_base.ko"
scp $script_base.ko $2:$script_base.ko

echo "executing stap module $script_base.ko"
ssh -t $2 staprun $script_base.ko

echo "removing stap module $script_base.ko"
ssh -t $2 rm $script_base.ko
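To use it, make the script executable and invoke it with the script name and target login as arguments, as in the earlier example:
$ chmod +x crosstap
$ ./crosstap trace_open.stp root@192.168.7.2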
For many more examples, and documentation, see [http://sourceware.org/systemtap/].
'''Current status:'''
For all of the below, the host was an x86_64 system, with systemtap installed on the host as:
SYSTEMTAP_HOST_INSTALLDIR="/home/trz/systemtap-1.6"
*x86 - has been tested on actual x86 hardware (emenlow and sugarbay) and everything so far seems to work nicely - all the test scripts in the 'Test Examples' section below worked fine.
'crosstap' variables used (machine = 'emenlow'):
SYSTEMTAP_TARGET_ARCH="i386"
STAGING_BINDIR_TOOLCHAIN="/home/trz/work/dev/build/tmp/sysroots/x86_64-linux/usr/bin/emen-poky-linux"
STAGING_BINDIR_TOOLPREFIX="i586-poky-linux"
TARGET_KERNEL_BUILDDIR="/home/trz/work/dev/build/tmp/work/emenlow-poky-linux/linux-yocto-3.0.3+git0+d1cd5c80ee97e81e130be8c3de3965b770f320d6_0+aae69fdf104b0a9d7b3710f808aac6ab303490f7-r1/linux-emenlow-standard-build"
*qemux86 - everything so far seems to work nicely - all the test scripts in the 'Test Examples' section below worked fine.
'crosstap' variables used (machine = 'qemux86'):
SYSTEMTAP_TARGET_ARCH="i386"
STAGING_BINDIR_TOOLCHAIN="/usr/local/src/yocto/b/build/tmp/sysroots/x86_64-linux/usr/bin/i586-poky-linux"
STAGING_BINDIR_TOOLPREFIX="i586-poky-linux"
TARGET_KERNEL_BUILDDIR="/usr/local/src/yocto/b/build/tmp/work/qemux86-poky-linux/linux-yocto-3.0.3+git0+7102097a25c7658e0f4d4dc71844e0ff6c446b25_0+a9d833fda90e2f1257888a97e092135610b5f259-r14/linux-common-pc-standard-build"
*qemux86_64 - everything so far seems to work nicely - all the test scripts in the 'Test Examples' section below worked fine.
'crosstap' variables used (machine = 'qemux86-64'):
SYSTEMTAP_TARGET_ARCH="x86_64"
STAGING_BINDIR_TOOLCHAIN="/usr/local/src/yocto/b/build/tmp/sysroots/x86_64-linux/usr/bin/x86_64-poky-linux"
STAGING_BINDIR_TOOLPREFIX="x86_64-poky-linux"
TARGET_KERNEL_BUILDDIR="/usr/local/src/yocto/b/build/tmp/work/qemux86-64-poky-linux/linux-yocto-3.0.3+git0+582a28e4bc966ea367cbc2dc1f0de89dd4e7c3d8_0+35521a5a70316785a67aca1de1d39a7b84c49ccf-r1/linux-common_pc_64-standard-build"
*qemuppc - all the test scripts in the 'Test Examples' section below worked fine, except for trace_open.stp, which failed with semantic errors because the flags and filename variables weren't accessible - this looks like a syscalls tapset issue on ppc.
'crosstap' variables used (machine = 'qemuppc'):
SYSTEMTAP_TARGET_ARCH="powerpc"
STAGING_BINDIR_TOOLCHAIN="/usr/local/src/yocto/b/build/tmp/sysroots/x86_64-linux/usr/bin/ppc603e-poky-linux"
STAGING_BINDIR_TOOLPREFIX="powerpc-poky-linux"
TARGET_KERNEL_BUILDDIR="/usr/local/src/yocto/b/build/tmp/work/qemuppc-poky-linux/linux-yocto-3.0.3+git0+582a28e4bc966ea367cbc2dc1f0de89dd4e7c3d8_0+ee9510116f63aabb852708747bd0e3c32eeaf5bf-r1/linux-qemu_ppc32-standard-build"
*qemuarm - everything so far seems to work nicely - all the test scripts in the 'Test Examples' section below worked fine, except the utrace examples: arm doesn't support ARCH_HAVE_TRACEHOOKS and so doesn't support utrace/uprobes, which means the utrace examples won't work on arm.
'crosstap' variables used (machine = 'qemuarm'):
SYSTEMTAP_TARGET_ARCH="arm"
STAGING_BINDIR_TOOLCHAIN="/usr/local/src/yocto/b/build/tmp/sysroots/x86_64-linux/usr/bin/armv5te-poky-linux-gnueabi/"
STAGING_BINDIR_TOOLPREFIX="arm-poky-linux-gnueabi"
TARGET_KERNEL_BUILDDIR="/usr/local/src/yocto/b/build/tmp/work/qemuarm-poky-linux-gnueabi/linux-yocto-3.0.3+git0+582a28e4bc966ea367cbc2dc1f0de89dd4e7c3d8_0+8cc1674d61d0e0e1bf006164074cffd1071a3a52-r1/linux-arm_versatile_926ejs-standard-build"
For many more examples, and documentation, see [http://sourceware.org/systemtap/].
'''Test Examples:'''
Below is the set of examples the above were (minimally) tested with:
helloworld.stp:
probe begin
{
print ("hello world\n")
exit ()
}
trace_open.stp:
probe syscall.open
{
printf ("%s(%d) open (%s)\n", execname(), pid(), argstr)
}
probe timer.ms(9000) # after 9 seconds
{
exit ()
}
embedded_c.stp:
// /home/trz/systemtap-1.3/bin/stap -g embedded-c.stp -x 22870
// emacs(22870)
%{
#include <linux/sched.h>
#include <linux/list.h>
%}
function task_execname_by_pid:string (pid:long) %{
struct task_struct *p;
struct list_head *_p, *_n;
list_for_each_safe(_p, _n, &current->tasks) {
p = list_entry(_p, struct task_struct, tasks);
if (p->pid == (int)THIS->pid)
snprintf(THIS->__retvalue, MAXSTRINGLEN, "%s", p->comm);
}
%}
probe begin
{
printf("%s(%d)\n", task_execname_by_pid(target()), target())
exit()
}
timer_jiffies.stp:
global count_jiffies, count_ms
probe timer.jiffies(100)
{
count_jiffies ++
}
probe timer.ms(100)
{
count_ms ++
}
probe timer.ms(12345)
{
hz=(1000*count_jiffies) / count_ms
printf ("jiffies:ms ratio %d:%d => CONFIG_HZ=%d\n",
count_jiffies, count_ms, hz)
exit ()
}
iostats.stp:
#! /usr/bin/env stap
global opens, reads, writes, totals
probe begin { printf("starting probe\n") }
probe syscall.open {
e=execname();
opens[e] <<< 1 # statistics array
}
probe syscall.read.return {
count = $return
if ( count >= 0 ) {
e=execname();
reads[e] <<< count # statistics array
totals[e] += count
}
}
probe syscall.write.return {
count = $return
if (count >= 0 ) {
e=execname();
writes[e] <<< count # statistics array
totals[e] += count
}
}
probe end {
printf("\n%16s %8s %8s %8s %8s %8s %8s %8s\n",
"", "", "", "read", "read", "", "write", "write")
printf("%16s %8s %8s %8s %8s %8s %8s %8s\n",
"name", "open", "read", "KB tot", "B avg", "write", "KB tot", "B avg")
foreach (name in totals- limit 20) { # sort by total io
printf("%16s %8d %8d %8d %8d %8d %8d %8d\n",
name, @count(opens[name]),
@count(reads[name]),
(@count(reads[name]) ? @sum(reads[name])>>10 : 0 ),
(@count(reads[name]) ? @avg(reads[name]) : 0 ),
@count(writes[name]),
(@count(writes[name]) ? @sum(writes[name])>>10 : 0 ),
(@count(writes[name]) ? @avg(writes[name]) : 0 ))
}
}
utrace.stp:
// adapted from systemtap langref
probe process.syscall, process.end
{
printf ("name: %s, pid: %d, ppid: %s\n", execname(), pid(), pp())
}
utrace2.stp:
// adapted from systemtap langref, show syscalls made by processes
//probe process("/usr/sbin/dropbear").syscall
probe process("/usr/sbin/dropbearmulti").syscall
{
printf ("syscall: %d, name: %s, pid: %d, ppid: %s\n", $syscall, execname(), pid(), pp())
}
Tracing and Profiling in Yocto
Yocto bundles a number of tracing and profiling tools - this 'HOWTO' describes their basic usage and more importantly shows by example how they fit together and how to make use of them to solve real-world problems.
The tools presented are for the most part completely open-ended and have quite good and/or extensive documentation of their own which can be used to solve just about any problem you might come across in Linux. Each section that describes a particular tool has links to that tool's documentation and website.
The purpose of this 'HOWTO' is to present a set of common and generally useful tracing and profiling idioms along with their application (as appropriate) to each tool, in the context of a general-purpose 'drill-down' methodology that can be applied to solving a large number (90%?) of problems. For help with more advanced usages and problems, please see the documentation and/or websites listed for each tool.
General Setup
Most of the tools are available only in 'sdk' images or in images built after adding 'tools-profile' to your local.conf. So, in order to be able to access all of the tools described here, please first build and boot an 'sdk' image e.g.
$ bitbake core-image-sato-sdk
or alternatively by adding 'tools-profile' to the EXTRA_IMAGE_FEATURES line in your local.conf:
EXTRA_IMAGE_FEATURES = "debug-tweaks tools-profile"
If you use the 'tools-profile' method, you don't need to build an sdk image - the tracing and profiling tools will be included in non-sdk images as well e.g.:
$ bitbake core-image-sato
Overall Architecture of the Linux Tracing and Profiling Tools
It may seem surprising to see a section covering an 'overall architecture' for what seems to be a random collection of tracing tools that together make up the Linux tracing and profiling space. The fact is, however, that in recent years this seemingly disparate set of tools has started to converge on a 'core' set of underlying mechanisms:
- static tracepoints
- dynamic tracepoints
- the perf_events subsystem
- debugfs
A Few Real-world Examples
Custom Top
Yocto Bug 3049
Slow write speed on live images with denzil
Autodidacting the Graphics Stack
Using ftrace, perf, and systemtap to learn about the i915 graphics stack.
Determining whether 3-D rendering is using the hardware (without special test-suites)
The standard (simple) 3-D graphics programs can't always be used to unequivocally determine whether hardware rendering or a fallback software rendering mode is being used e.g. PVR graphics. We can however use the tracing tools to unequivocally determine whether hardware or software rendering is being used regardless of what the test programs are telling us, or in spite of the fact that we may be using a proprietary stack.
This example will provide a simple yes/no test based on tracing output.
Basic Usage (with examples) for each of the Yocto Tracing Tools
perf
ftrace
trace-cmd/kernelshark
oprofile
sysprof
LTTng (Linux Trace Toolkit, next generation)
Setup
NOTE: The lttng support in Yocto 1.3 (danny) needs the following poky commits applied in order to work:
If you want to view the LTTng traces graphically, you also need to download and install/run the 'SR1' or later Juno release of eclipse e.g.:
http://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/release/juno/SR1/eclipse-cpp-juno-SR1-linux-gtk-x86_64.tar.gz
Collecting and Viewing a Trace in Eclipse
Once you've applied the above commits and built and booted your image (you need to build the core-image-sato-sdk image or the other methods described in the General Setup section), you're ready to start tracing.
First, start eclipse and open the 'LTTng Kernel' perspective by selecting the following menu item:
Window | Open Perspective | Other...
In the dialog box that opens, select 'LTTng Kernel' from the list.
Back at the main menu, select the following menu item:
File | New | Project...
In the dialog box that opens, select the 'Tracing | Tracing Project' wizard and press 'Next>'.
Give the project a name and press 'Finish'.
That should result in an entry in the 'Project' subwindow.
In the 'Control' subwindow just below it, press 'New Connection'.
Add a new connection, giving it the hostname or IP address of the target system.
Also provide the username and password of a qualified user (a member of the 'tracing' group) or root account on the target system.
Also, provide appropriate answers to whatever else is asked for, e.g. the 'secure storage password' can be anything you want.
blktrace
blktrace is a tool for tracing and reporting low-level disk I/O. blktrace provides the tracing half of the equation; its output can be piped into the blkparse program, which renders the data in a human-readable form and does some basic analysis:
$ blktrace /dev/sda -o - | blkparse -i -
systemtap
SystemTap is a system-wide script-based tracing and profiling tool.
SystemTap scripts are C-like programs that are executed in the kernel to gather/print/aggregate data extracted from the context they end up being invoked under.
For example, this probe from the SystemTap tutorial [1] simply prints a line every time any process on the system open()s a file. For each line, it prints the executable name of the program that opened the file, along with its pid, and the name of the file it opened (or tried to open), which it extracts from the open syscall's argstr.
probe syscall.open
{
printf ("%s(%d) open (%s)\n", execname(), pid(), argstr)
}
probe timer.ms(4000) # after 4 seconds
{
exit ()
}
Normally, to execute this probe, you'd simply install systemtap on the system you want to probe, and directly run the probe on that system e.g. assuming the name of the file containing the above text is trace_open.stp:
# stap trace_open.stp
What systemtap does under the covers to run this probe is 1) parse and convert the probe to an equivalent 'C' form, 2) compile the 'C' form into a kernel module, 3) insert the module into the kernel, which arms it, and 4) collect the data generated by the probe and display it to the user.
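If you're curious what those intermediate stages look like, stap can be told to stop after a given pass with the -p option; for example, on a system with a native systemtap install, this stops after pass 3 (translation to C) so you can inspect the generated source instead of building and running the module:
# stap -p3 trace_open.stp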
In order to accomplish steps 1 and 2, the 'stap' program needs access to the kernel build system that produced the kernel that the probed system is running. In the case of a typical embedded system (the 'target'), the kernel build system unfortunately isn't typically part of the image running on the target. It is normally available on the 'host' system that produced the target image however; in such cases, steps 1 and 2 are executed on the host system, and steps 3 and 4 are executed on the target system, using only the systemtap 'runtime'.
The systemtap support in Yocto assumes that only steps 3 and 4 are run on the target; it is possible to do everything on the target, but this section assumes only the typical embedded use-case.
So basically what you need to do in order to run a systemtap script on the target is to 1) on the host system, compile the probe into a kernel module that makes sense to the target, 2) copy the module onto the target system and 3) insert the module into the target kernel, which arms it, and 4) collect the data generated by the probe and display it to the user.
Unfortunately, the process detailed below isn't as simple as 'stap script.stp', but I have created a simple script that does simplify usage quite a bit (see the 'crosstap' script below).
$ cd /path/to/yocto
$ source oe-init-build-env
### Shell environment set up for builds. ###
You can now run 'bitbake <target>'
Common targets are:
core-image-minimal
core-image-sato
meta-toolchain
meta-toolchain-sdk
adt-installer
meta-ide-support
You can also run generated qemu images with a command like 'runqemu qemux86'
Once you've done that, you can cd to whatever directory contains your scripts and use 'crosstap' to run the script:
$ cd /path/to/my/systemap/script
$ crosstap root@192.168.7.2 trace_open.stp
If you get an error connecting to the target e.g.:
$ crosstap root@192.168.7.2 trace_open.stp
error establishing ssh connection on remote 'root@192.168.7.2'
Try ssh'ing to the target and see what happens:
$ ssh root@192.168.7.2
A lot of the time, connection problems are due to specifying a wrong IP address or having a 'host key verification error'.
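If it turns out to be a host key mismatch (common after rebuilding or reflashing the target image, since the target ends up with a new ssh host key), one way to clear the stale entry on the host is:
$ ssh-keygen -R 192.168.7.2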
If everything worked as planned, you should see something like this (enter the password when prompted, or press enter if it's set up to use no password):
$ crosstap root@192.168.7.2 trace_open.stp
root@192.168.7.2's password:
matchbox-termin(1036) open ("/tmp/vte3FS2LW", O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE, 0600)
matchbox-termin(1036) open ("/tmp/vteJMC7LW", O_RDWR|O_CREAT|O_EXCL|O_LARGEFILE, 0600)