Technical FAQ
Can I disable shared state?
You cannot, no. Shared state (sstate) is an intrinsic part of staging files into the sysroot. It is possible to construct a recipe that bypasses sstate for some tasks (the kernel does this), however this is quite difficult and if not done properly will lead to many other problems.
Almost always when you are having a problem with shared state the issue is either (a) you're adding/changing files in the sysroot directly (i.e. outside sstate control), or (b) what is being placed into the sysroot isn't relocatable. The solution for (a) is: do not do that - files should always be installed under ${D} within do_install and then a subset of those are staged into the sysroot automatically. For (b) you need to fix or adapt the hardcoded path(s) - if the program reads (or can be made to read) each path from an environment variable, then you can use the create_wrapper utility function to create a wrapper script that will set the path appropriately. Run git grep create_wrapper in the meta subdirectory to see examples.
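To illustrate the wrapper approach for case (b), here is a minimal, self-contained sketch of what a create_wrapper-style script does. The tool name, the MYTOOL_DATA_DIR variable and the /tmp paths are purely hypothetical - the real utility function generates something similar for you:

```shell
# Simulate an installed tool that reads a path from an environment variable
mkdir -p /tmp/wrapdemo/bin
cat > /tmp/wrapdemo/bin/mytool.real << 'EOF'
#!/bin/sh
echo "DATA_DIR=$MYTOOL_DATA_DIR"
EOF
chmod +x /tmp/wrapdemo/bin/mytool.real

# The wrapper takes the tool's original name, computes the path relative
# to its own (possibly relocated) location, then execs the real binary
cat > /tmp/wrapdemo/bin/mytool << 'EOF'
#!/bin/sh
here="$(dirname "$(readlink -f "$0")")"
export MYTOOL_DATA_DIR="$here/../share/mytool"
exec "$here/mytool.real" "$@"
EOF
chmod +x /tmp/wrapdemo/bin/mytool

/tmp/wrapdemo/bin/mytool
```

Because the data path is derived from the wrapper's own location at run time rather than hardcoded at build time, the staged files remain relocatable, which is exactly what sstate requires.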
Revision as of 01:15, 1 July 2020
NOTE: This is currently a draft. Not sure where this should end up but I've been gathering these based on my interactions with people on IRC and email over the years. - PaulEggleton (talk) 21:13, 27 June 2016 (PDT)
Basics
How do I figure out which version/codename/bitbake version matches up with which?
There is a table in the Releases page on the Yocto Project wiki.
How do I control what's in the final image?
Each image is defined by its own recipe, and that recipe specifies a list of packages that the image should contain. See Customising Images within the Yocto Project development manual for further details.
Note: if you're doing anything more than basic experimentation / testing then you almost certainly should create your own image recipe rather than using one of the example images e.g. core-image-minimal - though you can certainly start by copying one of the example images. This way you have easier control over what goes into the image.
Where do I find build logs?
For the overall build, the output of bitbake gets logged to tmp/log/cooker/<machine>.
For each individual recipe, there is a "temp" directory under the work directory for the recipe that contains log.<taskname> and run.<taskname> files - the logs and the runfiles respectively. Within the build system this directory is pointed to by the T variable, so if you need to you can find it using bitbake -e:

bitbake -e <recipename> | grep ^T=
How do I add a patch to a recipe?
There are two concerns - how the recipe can fetch the patch and how it can be applied. For fetching, patch files are usually placed in a subdirectory next to the recipe; by default this directory should be named "files" or the recipe name without any class prefix or suffix (for example, for both "xyz" and "xyz-native" the subdirectory would be "xyz"). A pointer to it then needs to be added to SRC_URI within the recipe, which usually takes the form file://<patchname>.patch - i.e. just the filename, no path. If more than one subdirectory needs to be stripped off the paths in the patch (i.e. you need more than the equivalent of the -p1 option to the patch command) then you can add ;striplevel=<number> to the end of the patch entry in SRC_URI (without any spaces).
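Putting that together, a SRC_URI in a recipe might look like this (the recipe, tarball URL and patch file names here are purely illustrative):

```
SRC_URI = "http://downloads.example.com/xyz-1.0.tar.gz \
           file://fix-build.patch \
           file://deep-paths.patch;striplevel=2"
```

The two .patch files would live in a "files" (or "xyz") subdirectory next to the recipe; the second entry shows the striplevel option for a patch whose paths need two leading components stripped.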
As with any modification, if the patch you are applying is a customisation that you do not intend to send to be incorporated in the layer you are modifying, then instead of adding the patch to the recipe directly you should consider applying it in a bbappend within your own custom layer. This makes things easier if you later want to update the layer in question and the recipe has been modified upstream - you avoid effectively forking the layer.
The devtool utility can help you modify the sources for a recipe and create a patch - basically devtool modify, edit the sources, commit the changes with git commit, then devtool finish (or devtool update-recipe in versions older than 2.2). Since devtool modify gives you a git tree to work with, you can of course use something like git am to apply existing patches this way. For more details see Use devtool modify to Enable Work on Code Associated with an Existing Recipe within the Yocto Project Development manual.
What does "native" mean?
The "native" suffix identifies recipes (and variants of recipes) that produce files intended for the build host, as opposed to the target machine. This is usually for tools that are needed during the build process (such as automake).
What does "nativesdk" mean?
The "nativesdk" prefix identifies recipes (and variants of recipes) that produce files intended for the host portion of the standard SDK, or for things which are constructed like an SDK such as buildtools-tarball. These are built for SDKMACHINE which may or may not be the same architecture as the build host.
I have two recipes and one needs to access files provided by another - how can that work?
Instead of providing direct access from a recipe to another's build tree (which wouldn't be practical with OpenEmbedded since the build tree (or "workdir") is temporary), we create a "sysroot" where files that are intended to be shared between recipes get copied. The sysroot is managed by the build system and you should not copy files in there directly - instead, you install files under ${D} as normal during do_install and then the build system will copy a subset of those to the sysroot. There is a separate sysroot for each machine being built for. In a recipe you can get the path of the sysroot and various standard directories under it using the STAGING_* variables.
Often, for commonly-used build systems such as autotools and cmake you don't need to worry about these details - those systems and the environment that OpenEmbedded sets up for them will ensure that files get installed and picked up in the correct locations. However if the software your recipe is building has custom build scripts / makefiles and it takes shortcuts that don't account for cross-compilation or the use of a sysroot, then you will need to make appropriate adjustments.
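For the custom-makefile case, the usual fix is to pass the staged paths in explicitly. A rough sketch of such a recipe task (the STAGING_* variables are standard OE ones; the assumption that the software's Makefile honours CFLAGS/LDFLAGS is exactly the kind of thing you'd need to verify):

```
# Point a hand-written Makefile at headers and libraries that
# dependencies have staged into the sysroot
do_compile() {
    oe_runmake CC="${CC}" \
        CFLAGS="${CFLAGS} -I${STAGING_INCDIR}" \
        LDFLAGS="${LDFLAGS} -L${STAGING_LIBDIR}"
}
```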
How do I enable package management in the final image?
Add package-management to IMAGE_FEATURES (or EXTRA_IMAGE_FEATURES). You should then be able to use dnf/rpm, opkg, or apt-get/dpkg from the running system depending on the packaging format you have selected through PACKAGE_CLASSES. For more information see Using Runtime Package Management in the Yocto Project Development manual.
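A typical local.conf sketch (the choice of rpm here is illustrative - ipk or deb work the same way):

```
# conf/local.conf - pick one packaging backend and enable the package
# manager in the resulting images
PACKAGE_CLASSES = "package_rpm"
EXTRA_IMAGE_FEATURES += "package-management"
```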
What do ?=, ??=, := etc. do within a recipe/config file?
See the Basic Syntax section of the BitBake manual for details.
Layers
See http://www.openembedded.org/Layers_FAQ
Troubleshooting
I've created a recipe but it's not showing up in my image, what's going on?
Creating a recipe (or adding a layer to your configuration with a desired recipe in it) only makes it available to the build system, it doesn't change what goes into the image. For that, see How do I control what's in the final image? above.
I set a variable but it doesn't seem to be having an effect, how do I fix this?
First, double-check that you haven't misspelled the variable name.
The main tool to help troubleshoot any variable-related issue is bitbake -e - this lists all the variables and the complete history of how each one has been set (use bitbake -e recipename if you're dealing with issues in a variable value within a recipe as opposed to the global level). Usually it's best to pipe this through less so you can easily see the history - within less you can press / to search for the variable name. Often you will be dealing with the behaviour of a variable within the context of a specific recipe, so specify that recipe on the bitbake -e command line to get the variables as set within the context of the recipe rather than the global context.
If you're setting a variable in a bbappend, double-check that the bbappend is actually being applied - see the next question.
I've created a bbappend for a recipe but what I'm setting there isn't having any effect, how do I fix this?
Here are some things to check:
- Check that the layer the bbappend is in is listed in bitbake-layers show-layers. If it isn't, you need to edit your bblayers.conf and ensure the path to the layer is included in the BBLAYERS value.
- Check that the bbappend is being picked up by running bitbake-layers show-appends - if your bbappend file isn't listed, it could be named incorrectly (such that it doesn't match the recipe name) or it may be that the BBFILES value in the conf/layer.conf for the layer containing the bbappend file doesn't include an expression that will match the bbappend files.
- If there are multiple versions of the recipe you have bbappended, it could be that the actual recipe being built is a different version than the one you have bbappended. bitbake-layers show-recipes <recipename> will list all the versions, with the first one listed being the one that will be built. If this is the case there are several possible solutions: (a) rename your bbappend to match the version being built, (b) use a % wildcard in your bbappend so it will apply to any version, or (c) set PREFERRED_VERSION_<recipename> in the configuration to select the version you want to be built.
- Finally, as with any other issue with setting variables, use bitbake -e recipename | less and search with / to see the history of how the variable has been set - you may find that the value you're trying to set is being overridden.
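For option (b), renaming the file to something like xyz_%.bbappend makes it match any version of the (hypothetical) recipe xyz. For option (c), the configuration entry looks like this (recipe name and version are illustrative):

```
# conf/local.conf or your distro configuration
PREFERRED_VERSION_xyz = "1.2.3"
```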
I'm getting warnings that a recipe is tainted - what does this mean?
Usually this happens because you have used bitbake's -f or -C option to force a task to re-execute. The assumption is that if you forced a task, it is possible that a rebuild from scratch would not include whatever changes you made that necessitated forcing (e.g. if you modified the source in the work directory for the recipe and then ran bitbake -c compile -f). Generally, forcing a task should be reserved for situations where the build system has failed to detect a change you made rather than for everyday usage - if you're finding yourself needing to do it regularly then either there's a bug, you're doing something wrong, or perhaps you're using -f or -C when it's not really needed. Running bitbake -c clean on the recipe will get rid of the taint flag.
There is one other situation where we apply a taint, and that is bitbake -c menuconfig on the kernel. In this case, the configuration has been saved into the work directory for the kernel, but that is temporary - any rebuild from scratch will use the default configuration, so it is a reminder that you need to take the configuration and apply it back to the metadata and then run bitbake -c clean on the kernel recipe.
I'm fetching from a git repository over ssh / http / https but it's not fetching properly, how do I fix this?
Bitbake expects the prefix of entries in SRC_URI to specify the fetcher to be used, not the actual protocol. Thus, instead of:
# This will NOT work
SRC_URI = "http://git.example.com/repository"
You should specify:
# This is better
SRC_URI = "git://git.example.com/repository;protocol=http"
The same applies for ssh and https.
I tried bitbake <some target package name> that I know exists and it told me that nothing PROVIDES this...?
There are two namespaces that bitbake concerns itself with - recipe names (a.k.a. build time targets) and package names (a.k.a. runtime targets). You can specify a build time target on the bitbake command line, but not a runtime target; you need to find the recipe that provides the package you are trying to build and build that instead (or simply add that package to your image and build the image). In current versions bitbake will at least tell you which recipes have matching or similar-sounding runtime provides (RPROVIDES) so that you'll usually get a hint on which recipe you need to build.
I've included a package in my image but files I expect to be there are missing, what's the issue?
Check the simple stuff: verify that the package is really in the image - look at the manifest file next to the image to ensure the package is listed. Also if you're flashing the image, double-check that you did indeed flash the right image and if there are multiple partitions / storage devices on your board or device that you're booting the one that you think you are.
Once you're sure of the above, it may be a matter of the package splitting - a lot of recipes split less commonly used components out into separate packages, so it's possible that the files you are looking for are in a different package. You can look at the recipe for this (look for PACKAGES and FILES statements) or, assuming the recipe has been built, you can use oe-pkgdata-util list-pkgs -p recipename and oe-pkgdata-util list-pkg-files to inspect the packages provided by the recipe and the files they contain. Once you find the right package you can add it to your image.
I'm required to set LIC_FILES_CHKSUM but the software I'm building doesn't have a license statement, what do I do?
Ideally, all software should come with some kind of license statement so that the terms of distribution are clearly stated (especially if its source code is made publicly available); if not a text file describing the license then at the very least a line or two in the accompanying documentation, README file or source header comments. Assuming there is a license statement somewhere but not in a form you can point to with LIC_FILES_CHKSUM as part of the source tree, you can point LIC_FILES_CHKSUM to one of the generic license files in ${COMMON_LICENSE_DIR} (meta/files/common-licenses/), or alternatively you can include a file containing the license statement in a "files" subdirectory next to the recipe (or subdirectory named the same as the recipe - see how such files are handled in other recipes), point to it in SRC_URI using file://, then add it to LIC_FILES_CHKSUM. It is worth noting however that LIC_FILES_CHKSUM is intended to give you a warning if upstream changes its license terms when you do an upgrade of the recipe, and by pointing it to this common license file that is part of the metadata, that mechanism will not function. You may wish to consider encouraging the upstream provider of the software your recipe is building to follow best practices and include a proper license statement, so that you can point to it in a future version. At minimum if you do use such workarounds, you will need to take extra care when upgrading the recipe in future in case the upstream provider changes the license terms.
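For the common case of pointing at a generic license file, a recipe fragment might look like this (the md5 shown is the one commonly used for the MIT file in common-licenses - verify it against the copy shipped with your release):

```
LICENSE = "MIT"
# Points at the generic MIT text in meta/files/common-licenses/; note the
# caveat above: this defeats the upstream-license-change warning mechanism
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
```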
If there really is no license stated at all anywhere for the software (and this is unfortunately not uncommon on github, for example) then you should really contact upstream - if there's no license, then technically you really shouldn't be distributing it until that's clarified with the original author(s).
I am getting a package QA error / warning when building a recipe, how do I solve it?
There are some general and specific recommendations in the QA Errors and Warnings section of the Yocto Project Reference Manual.
I am getting "taskhash mismatch" errors, what does this mean and how do I fix it?
Bitbake parses the metadata (recipes, classes and configuration) repeatedly during its operation, and this error means that the result of parsing changed between one parse and the next. Two situations that can cause this:
- One of the parsed files changed in between e.g. you edited a recipe or performed a git operation (e.g. git checkout) during the build. Do not make changes to the metadata while a build is running. If you run the build again the error should not recur.
- Alternatively, there is something in the metadata that results in a variable expanding to a different value each time it is parsed. This is often something time-related e.g. a timestamp which is calculated every time an expression is expanded. The solution is to ensure the value is calculated once per build and then the expression expands to the same value for the duration of the build.
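As an illustration of the second case, a value computed from the current time expands differently on every parse. One common mitigation (the variable name and the task it is attached to are illustrative) is to exclude it from the task signature:

```
# Expands to a different value on every parse - a taskhash mismatch risk
BUILD_TIMESTAMP = "${@time.strftime('%Y%m%d%H%M%S', time.gmtime())}"
# Excluding it from the signature of the task that uses it keeps the hash
# stable (at the cost of the task not re-running when the value changes)
do_deploy[vardepsexclude] += "BUILD_TIMESTAMP"
```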
Building on a system with a GRSec kernel doesn't work well, is that supported?
No, grsec isn't really supported. The list of distros that are supported (tested) is in the Yocto mega manual for each release. You can refer to the work-around given in this defect: https://bugzilla.yoctoproject.org/show_bug.cgi?id=10885
Working around Firejail
For users of Parrot OS and other hardened Linux distros, you may find that your bitbake fetch commands refuse to work, yet you can manually run wget and retrieve the packages with no problem. This is because Poky creates links to all the tools it requires - in particular wget, ssh and strings - from the entries in the /usr/local/bin/ directory, which on these distros all redirect to firejail. To fix the problem you can cd into the <your Yocto install directory>/poky/build/tmp/hosttools directory and replace these links with ones pointing to the actual executables under /usr/bin.
Dependencies
How do I find out why something is being built?
bitbake -g <recipe> will produce some .dot files that allow you to see the dependency relationships - usually pn-depends.dot holds the answers, although sometimes you may need to look at task-depends.dot if the dependency is only in the form of a task dependency. Note that these graphs are much too large for most graphviz visualisation tools to process, so you'll probably find it's easiest to view them with "less" or a text editor and search for the item you're looking for.
How do I find out why something is in my image?
Enable the buildhistory class and build the image again, and it will write out a depends.dot file containing the relationships between packages in the final image. If the package name isn't mentioned it is probably explicitly mentioned in IMAGE_INSTALL or being brought in via IMAGE_FEATURES.
See Maintaining Build Output Quality in the Yocto Project Reference manual which covers how to enable buildhistory and the output it produces.
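Enabling buildhistory typically looks like this in local.conf (committing each build's output to git is optional but makes diffing between builds easy):

```
# conf/local.conf
INHERIT += "buildhistory"
BUILDHISTORY_COMMIT = "1"
```

After the next image build, look for depends.dot under the buildhistory output directory for that image.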
How do I view the .dot files produced by bitbake -g or buildhistory?
The size of some of these .dot graphs (particularly those produced by bitbake -g) is a little large for most viewers / processing tools, and unfortunately this isn't something that can be fixed - it's just the nature of the dependency relationships between targets and tasks within OpenEmbedded. Usually if you're just after answering a simple dependency question you can figure it out by viewing the file with less and using its built-in search function (or alternatively your favourite text editor).
You can try xdot, which will work well for some of the graphs, but the task graph produced by bitbake -g for something like an image in particular is likely to be too large to view within it.
Why are all of these -native items being built when my host distro has some of these available?
It's complicated. In some cases the software in question isn't widely packaged by common Linux distributions. In other cases we need to apply patches to the software, use a more up-to-date version than commonly packaged or build it with a particular configuration. In general it just helps us isolate ourselves from potential problems caused by differences in host Linux distributions. For the most part the time spent building the native tools that are definitely provided by the host distro is dwarfed by the time spent building things that definitely aren't provided, such as the C library for the target and the cross-compiling toolchain.
I disabled runtime package management and yet it still seems to be building rpm/opkg, why?
The build system always uses a package manager on the host to assemble images, because it is usually the best tool for this job. This is completely independent of whether the package manager is available in the target image - "package-management" being in IMAGE_FEATURES (possibly indirectly via EXTRA_IMAGE_FEATURES) controls whether the package manager is used at runtime i.e. whether it (and its associated package database) will be present in the target image.
Why is opkg-native / opkg-utils being built when I don't have ipk packaging enabled?
opkg-utils provides update-alternatives which is the default tool used to manage the alternatives system (for selecting between multiple providers of the same file, e.g. busybox and bash both provide /bin/sh).
Why is rpm-native being built when I don't have rpm packaging enabled?
rpm-native is needed for two things in the generic packaging code implemented in the package class:
- Debug symbol splitting - rpm-native provides the debugedit tool which this code uses
- Per-file dependencies - although this was originally just feeding into rpm when rpm was being used, it also now gets verified by QA checks regardless of which packaging backend is in use.
I see a recipe built, but building an image containing the corresponding package fails at do_rootfs because it can't find the package. How does this happen?
(For ipk, the error is "Couldn't find anything to satisfy '<package>'"; for rpm it is "<package> not found in the base feeds (<architecture list>)".)
Usually this is because the recipe claimed to provide the specified package (via PACKAGES or PACKAGES_DYNAMIC) but it wasn't actually produced, possibly because it ended up empty (since by default empty packages aren't produced), but the image or some other package still has a dependency that pulls in the specified package. If this is a recipe you are writing yourself, the probable cause is that your recipe isn't installing any files and thus the main package for the recipe is empty. Fix do_install (or whatever do_install is already running, e.g. make install) so that files are installed into the correct location where they can subsequently be packaged, and then all should be well.
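A minimal do_install sketch for that fix - the tool name is hypothetical, but the install commands and the ${D}/${bindir} variables are the standard pattern:

```
# Install the built binary under ${D} so the main package is non-empty
do_install() {
    install -d ${D}${bindir}
    install -m 0755 mytool ${D}${bindir}/mytool
}
```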
In other situations the reference to the package in question is spurious and either it should be removed entirely or there's another package that should be used instead. For example, the avahi and dhcp recipes both have an empty main package since the client and server are split out into their own packages, and those are the ones you should be using instead (avahi-daemon, avahi-utils, dhcp-server, dhcp-client - there are other packages as well, please see How do I find out what packages are produced by a recipe?.) You could argue that these recipes shouldn't claim to provide the main package, or they should have a main package that depends on all the other packages (as some other recipes do).
X11 and various other items are being built but I'm only building core-image-minimal - why?
This is where it helps to understand the difference between build-time dependencies and runtime dependencies - often, a recipe will require things at build time (for example tools that help the build process, or to satisfy optional dependencies) that it doesn't necessarily need at runtime. The default configuration includes "x11" in DISTRO_FEATURES, and thus anything that can optionally support X11 will have its X11 support enabled; however when it comes to actually producing the image there won't be any X11 packages included as long as there are no hard dependencies and there aren't any X11 packages explicitly requested.
If you never intend to use X11, you can set your own DISTRO_FEATURES value that excludes x11 (note lower case, as with all feature names) and then X11 support will be disabled at build time and these items won't even be built.
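One way to do this in your distro configuration or local.conf (setting DISTRO_FEATURES outright to a value without x11 works too; see the note about the _remove operator later in this FAQ before using it in a shared layer):

```
# Disable X11 support across the whole build
DISTRO_FEATURES_remove = "x11"
```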
How do I avoid the kernel itself being pulled into my image when installing kernel modules?
By default, the kernel class sets a dependency on the kernel-base package (which kernel modules always depend on) onto kernel-image, which contains the actual kernel binary. If you don't want this, set the following either in your kernel recipe or at the configuration level:
RDEPENDS_${KERNEL_PACKAGE_NAME}-base = ""
Note: for older releases (pre-2.5) do this instead:
RDEPENDS_kernel-base = ""
Misc
How do I remove a value from a list variable?
For variables that are expected to contain a space-separated list of items, BitBake supports a _remove operator to remove items from it. See Removal (override style syntax) in the BitBake user manual.
NOTE: the _remove operation is final - you cannot "undo" it with other operations elsewhere, thus you should really only make use of it in your distro / local configuration and not in layers that you expect others to re-use for different purposes (and therefore they may need to undo your changes). An alternative way to effectively remove an item is to set the list outright to include all the items minus the one you want to remove.
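A brief sketch of both approaches (the feature names here are illustrative):

```
# Final removal - other configuration cannot add "touchscreen" back:
MACHINE_FEATURES_remove = "touchscreen"

# Alternative: set the list outright without the unwanted item, which
# leaves other layers free to modify it further
MACHINE_FEATURES = "usbhost wifi screen"
```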
How do I change how my recipe is built depending on what image I'm building?
The short answer is you cannot - the reason is that OpenEmbedded builds packages based on the overall configuration, and then the image only selects which of these packages should go into the final image. However, there are some solutions that do allow you to achieve the desired result:
- Have separate packages for the two different versions. This could take the form of different recipes or you could do it within the same recipe. The two packages do have to have different names however; this may create problems if you have other packages that depend on the package.
- Use a postprocessing function within the image(s) - within the image recipe, define a shell or python function that makes the desired changes to the files in the image and add a call to it to ROOTFS_POSTPROCESS_COMMAND within the image recipe. Note that this may not be appropriate if you have runtime package management enabled since the postprocessing will only happen at image creation time and not if the package is installed later on at runtime - you may need to use a postinstall script instead in this case.
- Use a postinstall script (pkg_postinst_<package> function) within the recipe. In order to work, the postinstall script will need to be able to determine what to do when it's run - this may not be practical depending on what you're trying to achieve.
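A sketch of the postprocessing approach from the list above - the function name and the file being modified are illustrative; ROOTFS_POSTPROCESS_COMMAND and ${IMAGE_ROOTFS} are the standard mechanism:

```
# In the image recipe: tweak the rootfs after package installation
my_image_tweak() {
    echo "customised for this image" >> ${IMAGE_ROOTFS}/etc/motd
}
ROOTFS_POSTPROCESS_COMMAND += "my_image_tweak; "
```

Note the trailing "; " inside the quotes - the commands in this variable are concatenated into a shell command list.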
Can I use a toolchain built by OE as the external toolchain?
In general, this is not recommended and not something that is tested or directly supported out of the box. If you are wanting to do this solely as a means of speeding up the build, it is strongly suggested that you use shared state instead.
There is a meta-sourcery layer available to enable support for the CodeSourcery toolchain; you may be able to use this as a template for bringing in an external toolchain, however there are no guarantees.
When I run bitbake -c devshell it looks like it's running as root! How is that possible?
It's not running as the actual root user, it's just pretending for the benefit of programs that run under it (including your shell) that it is, via pseudo. This is important, because you normally want any owner/group/permission values that you set on files to be reflected in files that the recipe installs and packages and thus reflected in the final image - without this mechanism the actual build would have to run as root which would be very risky. There are no actual elevated privileges through this mechanism however, so you need not be worried.
Why does OE use pseudo? Why not use fakeroot / fakechroot instead?
Splitting this up into two questions - we use pseudo (not to be confused with sudo!) because we want to be able to create images containing files that have the correct permissions and ownership, e.g. files owned by root, without the user running the build system having to have that privilege. By using LD_PRELOAD to intercept function calls, pseudo creates an environment for programs running underneath it where it appears as if the running user has those privileges (and the results of any operations persist within the pseudo environment, i.e. you can write a file as root and it will appear to be owned by root while still running under pseudo). This allows us to run builds entirely as a normal user without needing extra privileges. Without pseudo we would require running the build system under sudo or as root - which would be ill-advised for things such as "make install" in case it happened to be broken and tried to write to / instead of somewhere under the work directory for the recipe; a broken recipe could easily end up destroying your system in that case.
To answer the second part, why we use pseudo instead of fakeroot / fakechroot, see WhyNotFakeroot on the pseudo wiki.
How do I find out what packages are produced by a recipe?
The Toaster web UI provides easy ways to query this.
In the 1.8 (fido) release and newer you can use the following command, assuming the recipe has already been built:
oe-pkgdata-util list-pkgs -p recipename
Alternatively you can look in the "packages-split" subdirectory under the work directory for the recipe - each package produced by the recipe will have a subdirectory under that. If you're not sure how to find the work directory you can run the following command:
bitbake -e recipename | grep ^WORKDIR=
Before a recipe gets built it is a bit trickier, since the system often doesn't know exactly which packages will be produced until do_package time; this is particularly true for recipes that package plugins or modules (e.g. kernel modules). You can get a reasonable idea though by looking at the value of PACKAGES (and PACKAGES_DYNAMIC for recipes that produce plugins).
How do I find out which package contains a particular file (or python module)?
oe-pkgdata-util has a find-path subcommand that will tell you exactly this. For example:
$ oe-pkgdata-util find-path /etc/network/interfaces
init-ifupdown: /etc/network/interfaces
Wildcards are allowed anywhere in the path (but you should enclose such expressions in quotes to avoid the shell itself attempting to expand the wildcard):
$ oe-pkgdata-util find-path "*/fstrim"
util-linux-bash-completion: /usr/share/bash-completion/completions/fstrim
util-linux-ptest: /usr/lib/util-linux/ptest/fstrim
util-linux-dbg: /sbin/.debug/fstrim
util-linux-fstrim: /sbin/fstrim
As a specific example of where this can be useful, our Python packaging is a bit more granular than most typical distributions, allowing you to tune the contents of your image to just what you need. However, that does mean you may have trouble figuring out which package provides a particular module. oe-pkgdata-util find-path can also be used for this. For example, to find the package containing the "shutil" module, run this:
$ oe-pkgdata-util find-path "*/shutil.*"
python3-shell: /usr/lib/python3.5/shutil.py
python3-shell: /usr/lib/python3.5/__pycache__/shutil.cpython-35.opt-2.pyc
python3-shell: /usr/lib/python3.5/__pycache__/shutil.cpython-35.opt-1.pyc
python3-shell: /usr/lib/python3.5/__pycache__/shutil.cpython-35.pyc
Thus the package you are looking for is python3-shell. (Note that you could use */shutil.py, but if the module you are looking for is written in C as some of them are, that won't match it.)
I have a local source tree I want to build instead of the upstream source a recipe normally fetches, how do I do that?
If it's for development purposes i.e. you have your own local source tree you want to work on and have built, then run:
devtool modify -n <recipename> path/to/sourcetree/
Once you are done you can use devtool finish or devtool reset (depending on the situation) to return to building the source specified in the recipe.
Alternatively if it's more permanent, use the externalsrc class - you can inherit this in the original recipe or a bbappend:
inherit externalsrc
EXTERNALSRC = "/path/to/sources"
If you're going to use it across a number of recipes you can inherit it globally at the configuration level (perhaps via an inc file that you include/require there):
INHERIT += "externalsrc"
EXTERNALSRC_pn-<recipename> = "/path/to/sources"
How do I specify the default shell? (e.g. bash instead of busybox)
It depends what you mean. As far as which provides /bin/sh, this is controlled through the alternatives system, and by default bash has a higher priority than busybox, so simply installing bash into your image will automatically have /bin/sh link to bash rather than busybox.
If you mean you want a user's login shell to be a specific shell, you'll need to modify /etc/passwd. One fairly easy way to achieve this is to use the extrausers class in your image recipe:
inherit extrausers
EXTRA_USERS_PARAMS = "usermod -s /bin/bash <username>; "
How do I get "full" versions of typical shell commands?
Most of the shell commands in our images are provided by busybox by default, and are very much simplified compared to what you would have on a typical Linux system in order to save space. If you need the full versions, most of them are built and packaged by the coreutils recipe (for disk and other typical utilities) and procps (for ps, etc). You may also want to install bash for more typical shell built-in commands. There is also a core-image-full-cmdline image if you want a base image that is already set up to provide a more typical Linux command-line experience. (Note: these will of course use up more disk space and memory.)
How do I allow a variable's value through from the external environment?
Add the variable's name to the value of BB_ENV_EXTRAWHITE in the external environment before running bitbake. Note that the oe-init-build-env script sets a default for this which you will want to preserve, so add to the default value rather than overwriting it.
Alternatively if you just want to get the external value of a variable from python code within the metadata, you can use the BB_ORIGENV variable which itself contains a datastore of the original environment. For example to get the value of the DISPLAY variable from the environment within a python function you would do this:
display = d.getVar("BB_ORIGENV", False).getVar("DISPLAY")
Note that you must specify False for the expand parameter when getting the BB_ORIGENV variable, because its value is not a string and therefore cannot be expanded in the normal manner.
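As a fuller sketch, the following hypothetical task (the task name and message are illustrative only) logs the external DISPLAY value using BB_ORIGENV:
python do_show_display() {
    origenv = d.getVar("BB_ORIGENV", False)
    bb.note("DISPLAY in the external environment: %s" % origenv.getVar("DISPLAY"))
}
addtask show_display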
Why is bitbake showing "AUTOINC" in the version for some recipes?
Recipes where you see AUTOINC within the version in the console output during a build will be those that set PV to include "${SRCPV}" to get the SCM revision (e.g. the git hash) in the package version. In order to have the version increment properly, there needs to be a number in front of the revision which automatically increments each time the revision changes (assuming you have a PR server enabled), which is where AUTOINC comes in. During the build, AUTOINC is a stand-in for this auto-incrementing number, and later during do_package it gets replaced with the real number so that the packages produced at the end have the full version number.
Why are .so files in the -dev package instead of the main package for a recipe?
In standard Unix library packaging, non-versioned .so symlinks (e.g. /usr/lib/libgd.so) are intended for development purposes only. At runtime, binaries should be linked to the major-versioned .so file/symlink e.g. /usr/lib/libgd.so.3. This (theoretically) allows multiple major versions of the same library as well as binaries that depend upon each of them to coexist on the same system. If the library is versioned but you have a binary that links to the unversioned .so file, it has almost certainly been linked incorrectly.
Non-symlink .so files on the other hand are sometimes produced and are entirely legal - however these will be picked up in the -dev package in OpenEmbedded simply by virtue of their name, which is almost always not what you want. In this case you can do one of two things:
- Fix the build of the library so it gets versioned. This may not always be appropriate, especially not for things like plugins.
- Set FILES_${PN}-dev within the recipe so that it does not include ${FILES_SOLIBSDEV}. If the software the recipe is building also produces symlink .so files you'll need to set FILES_${PN}-dev such that those do still get packaged in the -dev package though, or you'll get a package QA warning.
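As an example of the second option, a recipe installing plugins might contain something like the following (the ${libdir}/myapp path is hypothetical - adjust it to wherever the plugins are actually installed):
FILES_${PN}-dev = "${includedir} ${libdir}/*.la"
FILES_${PN} += "${libdir}/myapp/*.so"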
Can I disable shared state?
You cannot, no. Shared state (sstate) is an intrinsic part of staging files into the sysroot. It is possible to construct a recipe that bypasses sstate for some tasks (the kernel does this), however this is quite difficult and if not done properly will lead to many other problems.
Almost always when you are having a problem with shared state the issue is either (a) you're adding/changing files in the sysroot directly (i.e. outside sstate control), or (b) what is being placed into the sysroot isn't relocatable. The solution for (a) is do not do that - files should always be installed under ${D} within do_install and then a subset of those are staged into the sysroot automatically. For (b) you need to fix or adapt the hardcoded path(s) - if the program reads (or can be made to read) each path from an environment variable, then you can use the create_wrapper utility function to create a wrapper script that will set the path appropriately. Run git grep create_wrapper in the meta subdirectory to see examples.
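A minimal sketch of using create_wrapper, assuming a hypothetical program mytool that reads its data directory from a MYTOOL_DATADIR environment variable:
do_install_append() {
    # Wrap the binary so it finds its data directory wherever the sysroot ends up
    create_wrapper ${D}${bindir}/mytool MYTOOL_DATADIR=${datadir}/mytool
}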
Files I installed into /opt or some other path never make it into the sysroot but I need them - how do I fix this?
OpenEmbedded only stages a subset of the files that are installed into ${D} by do_install so that the sysroot doesn't fill up with unneeded files. You have two choices in this situation:
- install the files into a more standard location which is part of the subset, or
- adjust the subset to include the paths you are installing to.
Usually option 1 is recommended. If you really do need to adjust the subset, you can append the path (more specifically, the part below ${D}) to SYSROOT_DIRS within your recipe. For example:
SYSROOT_DIRS += "/opt"
I have some software which needs to build a binary that it then runs as part of its own build process, how do I make this work?
Whilst it is possible to do this within a single recipe building for the target, it is tricky to do so because in that context everything is set up for cross-compiling for the target, and you would have to undo all of that to build host tools. The standard and much easier way of handling this is to create a native variant of the recipe using BBCLASSEXTEND and have your host tools built within that, and then have the target variant depend on the native variant. For example, assume your recipe were called xyz (xyz_1.1.bb), then you would include something like this in the recipe:
DEPENDS_append_class-target = " xyz-native"
...
BBCLASSEXTEND += "native"
The host tools will then be built and installed into the sysroot in the native variant ready for when the target variant starts building. If the software you are building didn't intend for those tools to be installed outside of the build tree then you may need to patch the build process (e.g. the makefile) in order to install them and possibly also for the target side to find them in the sysroot. Additionally, for performance since you only need the tools in the native variant, you may also choose to disable building everything except those tools there - e.g. by using _native overrides for variables such as EXTRA_OECONF or functions such as do_configure.
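For example, if the makefile had a separate target that builds only the tools (the "tools" target name here is hypothetical), you could restrict the native variant to building just that:
do_compile_class-native() {
    oe_runmake tools
}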
How do I fetch from two git repositories in the same recipe?
By default, sources fetched from git within a recipe are unpacked into ${WORKDIR}/git, however that only works for a single repository. If you want to fetch from more than one, you need to change the path each repository is unpacked to. This is easy to do, just add ;destsuffix=<subdir> to the end of each URL in SRC_URI (replacing <subdir> with the name of the subdirectory). You may then need to change S to match whichever of these you want to be considered the root of the source tree - or alternatively you can specify destsuffix such that repositories beyond the first go into a subdirectory under the default "git" subdirectory. For example, from the gst-libav recipe:
...
SRC_URI = " \
    git://anongit.freedesktop.org/gstreamer/gst-libav;branch=1.8;name=base \
    git://anongit.freedesktop.org/gstreamer/common;destsuffix=git/common;name=common \
    ...
"
...
S = "${WORKDIR}/git"
...
(Here we're using the default of "git" for the first repository, so we don't need to specify destsuffix for the first URL.)
I'm building a native recipe and I notice that the install path has the full path to the root directory repeated - why?
It does look a little odd, but the reason for doing this is that native targets are meant to run on the system they're built on and run in the location they're installed to. This means they install to a destination of "/" and PREFIX is inside the native sysroot directory. We install them to a DESTDIR to allow us to manipulate them before they then get moved to a final DESTDIR of "/".
Most Makefiles handle this correctly by doing:
DESTDIR ?=
prefix ?= /usr
bindir ?= $(prefix)/bin
and then, importantly, install in the form:
install -d $(DESTDIR)$(bindir)
so both prefix and DESTDIR are used. Whilst this is only a convention, it's a widely adopted and followed one. If a makefile doesn't follow the convention, you can call into it and set the variables manually.
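A sketch of setting the variables manually from do_install, assuming the makefile at least uses the conventional variable names internally:
do_install() {
    oe_runmake install DESTDIR=${D} prefix=${prefix} bindir=${bindir}
}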
How do I generate static libraries?
It's possible you have conf/distro/include/no-static-libs.inc included in your build - poky does this by default. The include list at the top of the bitbake -e output will tell you for certain.
If so, you can remove that or set:
DISABLE_STATIC = ""
as it would currently be set to this if that include file is included:
DISABLE_STATIC = " --disable-static"
Poky disables building static libraries by default as for the most part they're a waste of space/time.
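If you only need static libraries for particular recipes, you can instead override the setting per recipe in your configuration rather than removing the include entirely (replacing <recipename> accordingly):
DISABLE_STATIC_pn-<recipename> = ""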
Can I conditionally inherit a class in a recipe?
Yes, you can. What makes this possible is that the inherit keyword will not complain if what comes after it expands to being empty, so you can use in-line python to do something like this:
inherit ${@bb.utils.contains('PACKAGECONFIG', 'scripting', 'perlnative', '', d)}
The above example will inherit the perlnative class if "scripting" is in the value of the PACKAGECONFIG variable, otherwise it will do nothing.
You could of course put this into a variable if you prefer:
SOMEVAR = "${@bb.utils.contains('PACKAGECONFIG', 'scripting', 'perlnative', '', d)}"
inherit ${SOMEVAR}
How do I collect the source revisions fetched by each recipe?
If you have recipes where SRCREV = "${AUTOREV}" then you won't necessarily know exactly which revisions were built after the fact - it will be whatever was current at the time. You also might alternatively just want to record all of the revisions. Either way, to do this, enable buildhistory by setting the following in your local.conf:
INHERIT += "buildhistory"
BUILDHISTORY_COMMIT = "1"
(The last line is not required with version 2.5 and onwards as it is the default, but will do no harm.)
Once you have enabled buildhistory, you then need to build your image again so that buildhistory has a chance to record history data for it. Following that you can run buildhistory-collect-srcrevs (with -a if you want to see all revisions, not just the ones where AUTOREV was used) and it will output the revisions in a form you can use in a .inc file that you can require from your configuration if you want to fix the build to those revisions.
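A typical sequence might look like the following (the srcrevs.inc filename is arbitrary):
$ buildhistory-collect-srcrevs -a > conf/srcrevs.inc
and then in your local.conf:
require conf/srcrevs.inc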
For more information see the Maintaining Build Output Quality section of the Yocto Project Development manual, which covers the buildhistory class in detail.
How do I do an offline build with recipes that have SRCREV = "${AUTOREV}" set?
If you set BB_NO_NETWORK = "1" and you have recipes that have SRCREV = "${AUTOREV}", you will get an error because the build system will try to check the latest revision on startup and be immediately blocked by BB_NO_NETWORK. There are two ways to handle this:
A) See the previous question "How do I collect the source revisions fetched by each recipe?" and use the output generated by buildhistory-collect-srcrevs as a .inc file in your configuration in order to fix the revisions at the ones which were most recently built.
or
B) Set BB_SRCREV_POLICY = "cache" in your configuration. This will use the last cached revision. (The disadvantage of this method is that it is a little more difficult to preserve the fixed revisions or share them with others.)
Note that in either case if you later want to build the latest version again, you will of course need to undo the configuration changes.
Is it possible to append a bbclass file (like bbappends do for recipes)?
No, see the next question for details.
How do I override a bbclass file?
This is tricky - bbclass files are found via BBPATH, which is added to by each layer.conf either by prepending or appending. Assuming you are putting your bbclass in a custom layer, you will probably want to have your layer's layer.conf prepend to BBPATH, but then you will also need to make sure that your layer does not appear before any other layer that is also prepending and overriding the same class.
Another alternative is to have an additional class which makes the appropriate changes to the environment, and then you will need to inherit that class after (and in the same manner as) the original class. This is slightly cleaner but can be annoying to enable, particularly if the class is inherited by a number of recipes, and won't work if you want to alter the behaviour of a class inherited by recipes you don't control. (If you want a class to be inherited for all images (i.e. all recipes inheriting the image class) you can inject additional classes by setting IMAGE_CLASSES; similarly for the kernel there is KERNEL_CLASSES.)
Ultimately, overriding bbclass files is not good practice long term - you are opening yourself up to maintenance issues when the original class changes, and the override is fragile as hinted above. The best solution is to try to get whatever changes you need into the original class; this does of course require additional work and time though.
There's a bbappend in a layer I'm using that defines a do_something_append() and I want to append to that function also, how do I do this?
Simply create a bbappend in your layer and define your own do_something_append(), and your commands will be executed as well as those of the other bbappend.
You might assume that defining do_something_append() will overwrite any previously defined do_something_append(), as would be the case with do_something() in the same situation, but that is not the case - the key is that _append (and _prepend, _remove, etc.) are operators and they will be applied in sequence, where that sequence is the order in which they are parsed (which for bbappends will be in ascending layer priority order).