Quick-Tip: Use host ssh-agent in Docker

Introduction

As I described in another post, I usually do all my Yocto builds inside a docker container. This worked well until I ended up in a project where some of the recipes needed to clone git repositories using ssh keys, and I realized I needed a nice way to share my host system's keys automatically. Luckily, this is exactly what ssh-agent does, so all that had to be done was to make sure that the system inside the docker container could access the host's ssh-agent socket.

The Solution

The environment variable SSH_AUTH_SOCK holds the path to the socket used for communicating with ssh-agent. On the host system we can use it to find the socket so that it can be bind-mounted into the container, and then tell docker to set SSH_AUTH_SOCK inside the container to the path where it was mounted.
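
Before starting the container it can be worth checking that an agent is actually running on the host and has your keys loaded:

echo $SSH_AUTH_SOCK   # should print a socket path
ssh-add -l            # lists the keys the agent currently holds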

In the previous post I set up the alias I use to start the containers as:

alias pokydocker='docker run --rm -it -v ${PWD}:${PWD} pokyextended --workdir ${PWD}'

If we now pass an extra -v option for the bind-mount, and a -e for setting up SSH_AUTH_SOCK, all should be good.

alias pokydocker='docker run --rm -it -v ${PWD}:${PWD} -v ${SSH_AUTH_SOCK}:/ssh.socket -e SSH_AUTH_SOCK=/ssh.socket pokyextended --workdir ${PWD}'

The Aftermath

And with this, I lived happily ever after, right? Not really. All was working well until one day when I was working remotely and wanted to start a Yocto build over ssh. When I connect to my host system over ssh nothing starts an ssh-agent automatically, so SSH_AUTH_SOCK was empty and fetching the sources inside the container failed.

So I figured I should make sure ssh-agent is started on ssh logins, and then all would be good again. Said and done, and it worked fine until that time I started my build in a screen session and realized that the ssh-agent was killed when I logged out. Actually it took me a while to realize what was going on, and one or more bad words might have been uttered. In the end I realized that the easiest way was to just use the socket from my local session (I basically always have a local session running), which is available under /run/user/1000/keyring/ssh, by running export SSH_AUTH_SOCK=/run/user/1000/keyring/ssh after logging in via ssh, but before starting the docker container.
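
In practice the remote workflow then ends up looking something like this (the host name and build directory are just placeholders, and the socket path assumes a local session for uid 1000 as above):

ssh buildhost
export SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
cd my_yocto_build
pokydocker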

Using QtCreator for non-Qt CMake projects with Yocto generated SDKs

Introduction

Many of the embedded projects I work on use the Yocto Project to create an image for some embedded target. Usually there's a base system that doesn't change that often, and then there are one or more project-specific components that run on this base system.

In most cases those components can be decoupled from the actual target and target system enough that development can be done natively on the developers' machines and then integrated into the Yocto build once the changes are done. But sometimes it is just more convenient to build and test things directly on the target, or a bug that only shows up on target needs to be debugged, and then it can be quite nice to use the same IDE I use for most development, which is QtCreator. I think QtCreator is quite nice even for projects not using Qt.

The Problem

When using the Yocto Project to build Linux images for a target, you basically get SDK generation for free. It's just a matter of running the populate_sdk task for the image, and you get a nicely packaged SDK installer which contains everything you need to build for that system: toolchain, sysroot, and a selection of native tools that might be needed in the process. It also includes a script that sets the shell up for cross-compilation by setting environment variables like CC, CFLAGS, CONFIGURE_FLAGS etc.
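
For example, if the image recipe is called my-image (a placeholder name), generating the installer is a single command, and the result ends up under tmp/deploy/sdk/ in the build directory:

bitbake my-image -c populate_sdk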

This script, however, doesn't work out-of-the-box with QtCreator. The issue is that the script sets up CC/CXX/CPP so that the machine-specific flags are part of those variables instead of CFLAGS/CXXFLAGS/CPPFLAGS, and when you set up the SDK in QtCreator those flags are not picked up properly, so the compiler doesn't generate compatible binaries.
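
To make the problem concrete, the environment script typically exports something along these lines (the compiler prefix and the exact -m flags depend on the MACHINE, so treat the values as an illustration):

# The machine-specific flags end up inside CC itself...
export CC="arm-poky-linux-gnueabi-gcc -march=armv7-a -mfpu=neon -mfloat-abi=hard --sysroot=$SDKTARGETSYSROOT"
# ...while CFLAGS only carries generic optimization and debug flags
export CFLAGS=" -O2 -pipe -g -feliminate-unused-debug-types "

QtCreator ends up using the compiler binary from CC but not the extra flags embedded in it, which is why moving them into CFLAGS/CXXFLAGS/CPPFLAGS helps.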

The Solution

You can solve this by manually editing the script, but in the spirit of “fix things once” I prefer to address this directly in the SDK generation. The script that sets up the environment has some support for this kind of modification: it looks in two different environment-setup.d/ directories and pulls in all .sh files it finds there. So all that needs to be done is to provide a script that takes all -m<something> options set in CC/CXX/CPP and adds them to CFLAGS/CXXFLAGS/CPPFLAGS.

So to automate this I add a recipe to the project's meta-layer which installs this script, and then a small change is needed in the image recipe to pull the package into the SDK. And yes, the sed part can probably be done in a much nicer way, but it works.

/path/to/meta-project/recipes-devtools/sdk-qtcreator-fix/files/qt-creator-fixes.sh

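# Pick out the -m* flags from $CC/$CXX/$CPP: prepend a space, drop every word that is not an option, then drop every option that does not start with -m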
CC_MFLAGS=`echo $CC | sed -e 's/^/ /g' -e 's/ [^-][^ ]*//g' -e 's/ -[^m][^ ]*//g'`
export CFLAGS="$CC_MFLAGS $CFLAGS"

CXX_MFLAGS=`echo $CXX | sed -e 's/^/ /g' -e 's/ [^-][^ ]*//g' -e 's/ -[^m][^ ]*//g'`
export CXXFLAGS="$CXX_MFLAGS $CXXFLAGS"

CPP_MFLAGS=`echo $CPP | sed -e 's/^/ /g' -e 's/ [^-][^ ]*//g' -e 's/ -[^m][^ ]*//g'`
export CPPFLAGS="$CPP_MFLAGS $CPPFLAGS"

/path/to/meta-project/recipes-devtools/sdk-qtcreator-fix/nativesdk-qtcreator-fix.bb

DESCRIPTION = "Fixes for SDK use in QtCreator"
LICENSE = "CLOSED"

SRC_URI = "file://qt-creator-fixes.sh"

inherit nativesdk

do_configure[noexec] = "1"
do_compile[noexec] = "1"

do_install() {
    install -Dm0644 ${WORKDIR}/qt-creator-fixes.sh ${D}${SDKPATHNATIVE}/environment-setup.d/qt-creator-fixes.sh
}

FILES_${PN} = "${SDKPATHNATIVE}/environment-setup.d"

And then add the following line to the image recipe used for SDK generation, to make sure that the script is included in the generated SDKs.

TOOLCHAIN_HOST_TASK_append = " nativesdk-qtcreator-fix"
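
A quick way to check the result is to install the SDK and see that the machine flags now show up in CFLAGS after sourcing the environment script (the installer file name and install path depend on your image, machine and distro, so the paths below are placeholders):

./tmp/deploy/sdk/my-image-toolchain.sh
. /opt/my-distro/environment-setup-*
echo $CFLAGS    # should now start with the -m flags copied from $CC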

The End

I hope this can be of help for anyone wanting to use QtCreator with Yocto generated SDKs. I might do more tutorial-like follow-up posts on this, where I walk you through the process of building an SDK for Raspberry Pi that works with both qmake and CMake based projects. It's even possible to ship a script that helps you set up QtCreator, so you don't have to click around as much in the UI.

Simplify your Yocto builds using docker

Introduction

Yocto is a great set of tools which makes it easy to build and maintain custom Linux images for a big variety of hardware. But even if it is continually improved with regards to reproducibility etc., there are still always some dependencies on the host system used when building. As a user of Debian unstable I sometimes find that I have too new versions of packages, causing issues when building older Yocto releases and sometimes even the latest release. I could of course solve this by sticking to a stable Debian release, or maybe the latest Ubuntu LTS, but then I would have to wait longer for improvements in e.g. GNOME or other projects I use.

Using docker containers

Luckily there's a great way to address this, so that I can keep using a very modern distro and still work with older Yocto releases. A sub-project within the Yocto Project called CROPS develops tools that make Yocto more cross-platform. Among other things, they provide docker images which contain everything you need to run Yocto builds inside a container.

You can launch one of those base containers for Yocto builds using e.g.

docker run --rm -it crops/yocto:ubuntu-16.04-base

To make it a bit more useful I usually make sure that the current directory and all its content is available inside the container; that way I can easily access all the sources and build artifacts from my regular host system as well.

docker run --rm -it -v ${PWD}:${PWD} crops/yocto:ubuntu-16.04-base --workdir ${PWD}

User id mismatch

The solution above works well on most single-user systems, since the first user of the host and the first user of the docker container will likely share the same uid and thus be seen as the same user from a file system perspective. This will however be a problem if multiple users share a build server, but luckily the CROPS people have a solution for this as well. In addition to the crops/yocto docker images they have also created crops/poky, which is based on crops/yocto but adds some scripts that give you the same user id inside the container as outside. The way it works is that it sets the user id in the container to the owner of the directory passed in as --workdir.

To use this container instead, use:

docker run --rm -it -v ${PWD}:${PWD} crops/poky:ubuntu-16.04 --workdir ${PWD}
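
If you want to convince yourself that the mapping works, compare the owner of the directory you pass as --workdir with the uid you get inside the container shell (just an illustrative check):

stat -c '%u' ${PWD}    # on the host: owner uid of the build directory
id -u                  # inside the container: should print the same number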

Extending with additional tools

The images provided by CROPS are quite minimalistic, and I usually end up wanting some extra tools in there like vim for editing files and rpm for inspecting generated packages. Luckily this is quite easy to do using docker.

Here’s an example of a Dockerfile extending the crops/poky image with vim and rpm:

FROM crops/poky:ubuntu-16.04

USER root
RUN apt-get update && apt-get install -y vim rpm && rm -rf /var/lib/apt/lists/*

You can then build this image using:

docker build -t pokyextended /path/to/dir/of/dockerfile/

When the image is built you can use it by just replacing crops/poky:ubuntu-16.04 in the command from before:

docker run --rm -it -v ${PWD}:${PWD} pokyextended --workdir ${PWD}

In order to make it a bit more convenient I use an alias in my ~/.bash_aliases to do this:

alias pokydocker='docker run --rm -it -v ${PWD}:${PWD} pokyextended --workdir ${PWD}'

With this, my workflow for starting a build is quite simple (a full example follows the list):

  • Create a new directory: mkdir my_new_build && cd my_new_build
  • Get all sources; I usually use Google's “repo” tool to fetch multiple repositories.
  • Launch container: pokydocker
  • Source oe-init-build-env and start building
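
Put together, a full session looks roughly like this (the manifest URL, layer layout and image name are placeholders for whatever the project uses):

mkdir my_new_build && cd my_new_build
repo init -u https://example.com/manifests/my-project.git -b master
repo sync
pokydocker
# ...and inside the container:
source poky/oe-init-build-env build
bitbake my-image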

So the only overhead of using docker is running “pokydocker” from the directory to start the docker container, and I’m guaranteed to have the same environment regardless of my build host.