Welcome back to a new post at thoughts-on-coding.com. This time I would like to give an introduction to the structure of my customized GitHub project template for C++. The template consists of a dummy library and an application, both built with cmake. The project uses a customized, Debian-based docker image to run continuous integration builds (cmake) and execute tests (ctest) on CircleCI.
The dependencies of the project are resolved via vcpkg, which can be customized for each deriving project. Additionally, the project generates doxygen-based code documentation, which is then published to the gh-pages branch. As always, the repository can be found and forked on GitHub and is available as v1.1.0.
General Project Structure #
The project structure is rather simple. It contains:
- An app folder with the actual main function, and a lib folder with all project-related libraries. Both are built by cmake (a minimal sketch of the cmake wiring follows the project tree below).
- A buildutils directory which holds the python script necessary to convert ctest results into JUnit form.
- An images folder containing all project-related images.
- The CircleCI configuration file `.circleci/config.yml`.
- A Dockerfile which is used to build the docker image necessary to build the project on CircleCI.
- A Doxyfile which configures the documentation generation with doxygen.
Project tree structure with app, lib, build, buildutils, docs, and images directories
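To make the cmake wiring between the app and lib folders a bit more tangible, here is a minimal sketch of how the top-level CMakeLists.txt could tie things together. Only the app/lib layout, the fmt and doctest dependencies, and the vcpkg toolchain usage are taken from the template description; the target and file names are assumptions for illustration.

```cmake
# Minimal sketch of a possible top-level CMakeLists.txt (names are assumptions).
cmake_minimum_required(VERSION 3.15)
project(cpp_template CXX)

enable_testing()        # lets ctest discover the tests registered in the subdirectories

add_subdirectory(lib)   # the dummy library
add_subdirectory(app)   # the application using the library

# lib/CMakeLists.txt could then look roughly like this:
#   find_package(fmt CONFIG REQUIRED)       # resolved through the vcpkg toolchain file
#   find_package(doctest CONFIG REQUIRED)
#   add_library(greeter greeter.cpp)
#   target_link_libraries(greeter PUBLIC fmt::fmt)
#   add_executable(greeter_test greeter_test.cpp)
#   target_link_libraries(greeter_test PRIVATE greeter doctest::doctest)
#   add_test(NAME greeter_test COMMAND greeter_test)
```

Configured with `-DCMAKE_TOOLCHAIN_FILE=/vcpkg/scripts/buildsystems/vcpkg.cmake`, as in the CircleCI job below, `find_package` picks up the libraries installed via vcpkg.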
CircleCI Configuration #
Let's start with the CircleCI configuration file `.circleci/config.yml`. First, an executor (1) using a customized docker image (2) is defined. The executor defines the environment a specific job in CircleCI runs in; instead of defining a docker image for each job, an executor can be reused. The project is built upon two jobs, build and test. Each job contains several steps that have to be fulfilled for the job to succeed. The build job gets the sources from GitHub via the `checkout` command. Afterward, the project dependencies are set up by installing them with vcpkg (3). Then the project tooling is set up (4) and the build process is started (5). At the end of the build job, it is necessary to persist (6) the resulting artifacts in a workspace so that the build results can be used in the following test job. This is necessary because the jobs are independent of each other due to the volatile nature of docker.
The test job starts by loading the persisted workspace (7) into the job. The actual invocation of ctest (8) might appear a bit strange because of the `|| true`. The reason is that ctest returns a non-zero exit code as soon as a test fails, which would fail the CircleCI job and prevent the following job steps from running. At the end of the job, the test results generated by ctest need to be converted (9) into a JUnit-conformant XML structure before the test results get published (10).
The workflow defined at the end of the `.circleci/config.yml` file describes the order in which the jobs run: the build job is invoked first, followed by the test job, which depends on a successful build job.
version: 2.1

executors: #(1)
  exectr:
    docker:
      - image: dockerben/cpptemplate:latest #(2)

jobs:
  build:
    executor: exectr #(1)
    steps:
      - checkout
      - run:
          name: Install vcpkg dependencies
          command: ./../../vcpkg/vcpkg install fmt doctest #(3)
      - run:
          name: Create build directories
          command: |
            mkdir -p build
            mv buildutils build/buildutils
      - run:
          name: Setup cmake and build artifacts
          command: |
            cd build
            cmake -DCMAKE_TOOLCHAIN_FILE=/vcpkg/scripts/buildsystems/vcpkg.cmake .. #(4)
            cmake --build . --config Release #(5)
      - persist_to_workspace: #(6)
          root: .
          paths:
            - build
  test:
    executor: exectr #(1)
    steps:
      - attach_workspace: #(7)
          at: .
      - run:
          name: Create test directory
          command: |
            cd build
            mkdir -p Test
      - run:
          name: Execute Tests
          command: |
            cd build
            ctest --no-compress-output -T Test || true #(8)
      - run:
          name: Transform test results into JUnit conform notation
          command: |
            python3 build/buildutils/ctest2JUnit.py build build/buildutils/CTest2JUnit.xsl > build/Test/results.xml #(9)
      - store_test_results: #(10)
          path: build/Test

workflows:
  version: 2
  build-and-test:
    jobs:
      - build #(11)
      - test: #(12)
          requires:
            - build
CircleCI screenshot with a successful workflow
With the CircleCI CLI there is also a nice tool to test our CircleCI configuration locally, by running `sudo circleci local execute`, or `sudo circleci local execute JOBNAME` to run a specific job. The only drawback is that it doesn't support workflows right now, but it is still enough to test the configuration without the need to commit possibly broken code to our repositories.
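For example, with the jobs defined in the configuration above, a local run could look like the following sketch (assuming the CircleCI CLI and docker are installed locally; the `config validate` command is an extra check offered by the CLI, not part of the workflow above):

```sh
# quick syntax check of .circleci/config.yml
circleci config validate
# run the whole configuration locally (workflows are not executed)
sudo circleci local execute
# run only the build job defined in .circleci/config.yml
sudo circleci local execute build
```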
Custom Docker Image #
CircleCI uses docker images to provide the build environment, and therefore it can be completely customized to our needs. Unfortunately, they don't provide a C++-ready pre-built docker image yet, so we have to define our own. If you're not already familiar with the docker basics you should have a look at the docker overview.
To build a custom docker image we need to define a rather simple Dockerfile, which starts with a base image (1) we want to derive from. To build our C++ project we need to install several tools and libraries (2), such as gcc, clang, and so on. Additionally, python is necessary to convert the ctest results into JUnit XML. To resolve the project dependencies we install vcpkg (3) and clean up afterward (4) to reduce the resulting docker image size. At 649 MB it is still quite a big image, and there might be tricks to reduce its size further which I'm not aware of right now.
#(1)
FROM debian:stable-slim

LABEL maintainer="ben.mahr@gmail.com" \
      description="Image which consists of C++ related build tools." \
      version="1.0"

#(2)
RUN apt-get update -y && \
    apt-get install -y --no-install-recommends \
        git \
        openssh-server \
        curl \
        gcc \
        g++ \
        clang \
        build-essential \
        cmake \
        unzip \
        tar \
        gzip \
        sudo \
        python3 \
        python3-defusedxml \
        python3-lxml \
        libssl-dev \
        libffi-dev \
        ca-certificates && \
    apt-get autoclean && \
    apt-get autoremove && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

ENV VCPKG_ROOT /vcpkg
ENV CMAKE_TOOLCHAIN_FILE ${VCPKG_ROOT}/scripts/buildsystems/vcpkg.cmake

#(3)
RUN git clone https://github.com/microsoft/vcpkg.git && \
    .${VCPKG_ROOT}/bootstrap-vcpkg.sh -disableMetrics && \
    sudo .${VCPKG_ROOT}/vcpkg integrate install && \
    #(4)
    rm -rf ${VCPKG_ROOT}/buildtrees/* && \
    rm -rf ${VCPKG_ROOT}/downloads/* #(4)
The image can then be built (make sure you're in the same directory as the Dockerfile) with `sudo docker build -t IMAGENAME .`. Afterward, the resulting image needs to be tagged with `sudo docker tag IMAGENAME:TAG DOCKERHUBUSERNAME/IMAGENAME:TAG` and can then be pushed to Docker Hub with `sudo docker push DOCKERHUBUSERNAME/IMAGENAME:TAG`.
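As a concrete example, using the image name referenced in the CircleCI executor above (the local image name is an arbitrary choice):

```sh
# build the image from the directory containing the Dockerfile
sudo docker build -t cpptemplate .
# tag it for Docker Hub and push it
sudo docker tag cpptemplate:latest dockerben/cpptemplate:latest
sudo docker push dockerben/cpptemplate:latest
```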
Converting Test Results #
Because ctest results are not JUnit XML conform, which is required by CircleCI, we have to convert them. This is done by a python script and an XSL transformation (ref. post by sakra). Both can be found in the buildutils folder. The python script uses the ElementTree API of the XML processing package lxml to apply the XSLT file. The script starts by resolving the test results folder, which contains the `Test.xml`, by reading the first line of the `TAG` file (1). In my case, for example, the folder name, derived from the timestamp at which the tests have been invoked, is `20200419-1806`. Afterward, the `Test.xml` file gets parsed (2), the XSLT file is turned into an XML tree pointing to its root by the `etree.XML()` function (3), and that tree is converted into an XSLT object (4) which transforms the `Test.xml` (5) into JUnit XML.
from lxml import etree
import io
import sys
TAGfile = open(sys.argv[1]+"/Testing/TAG", 'r') #(1)
dirname = TAGfile.readline().strip()
xslfile = open(sys.argv[2], 'r')
xslcontent = xslfile.read()
xmldoc = etree.parse(sys.argv[1]+"/Testing/"+dirname+"/Test.xml") #(2)
xslt_root = etree.XML(xslcontent) #(3)
transform = etree.XSLT(xslt_root) #(4)
result_tree = transform(xmldoc) #(5)
print(result_tree)
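To give an idea of the target format, the transformed result is roughly of the following shape. This is a hand-written sketch, not actual output of the script; the test names are made up:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuite tests="2" failures="1">
  <testcase classname="cpptemplate" name="greeting_is_generated" time="0.01"/>
  <testcase classname="cpptemplate" name="empty_name_is_rejected" time="0.02">
    <failure message="REQUIRE( !greeting.empty() ) failed"/>
  </testcase>
</testsuite>
```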
Documentation Generation and Deployment #
For the generation and deployment of the documentation to a gh-pages branch, which is published by GitHub, we use GitHub Actions. The configuration `.github/workflows/doxygen.yml` of the GitHub Action starts by defining the events (1) which will invoke the action. In our case, we want to run the action when someone pushes or opens a pull request against the master branch, or when a change happens to a Pages-enabled branch. Then the `doxygen-action` is loaded (2), which simply needs the path to a Doxyfile (3). The Doxyfile itself configures, besides many other settings, the output format (in our case just HTML) with `GENERATE_HTML = YES`, the output path with `HTML_OUTPUT = docs`, and the path to related images with `IMAGE_PATH = images`; a short excerpt of these settings is shown below the workflow listing. It is clear that the doxygen configuration needs to be customized for each project. By annotating the `README.md` with `{#mainpage}` we can use this file as the start page of the documentation. The final step loads the `actions-gh-pages` action (4), which deploys to a gh-pages branch whatever can be found inside the `publish_dir` (5). To avoid the automatic build invocation by CircleCI of the gh-pages branch we add a `[ci skip]` statement to each commit message (6). Otherwise, we would always have a broken build caused by the gh-pages branch.
name: Documentation

on: #(1)
  push:
    branches:
      - master
  pull_request:
    branches:
      - master
  page_build:

jobs:
  doxygen:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: Build documentation
        uses: mattnotmitt/doxygen-action@v1 #(2)
        with:
          doxyfile-path: ./Doxyfile #(3)
      - name: Deploy documentation
        uses: peaceiris/actions-gh-pages@v3 #(4)
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./docs #(5)
          commit_message: '[ci skip] ${{ github.event.head_commit.message }}' #(6)
CircleCI screenshot with a master and a skipped branch
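For reference, the Doxyfile settings discussed above boil down to only a few lines. The `INPUT` line is an assumption about how the annotated README and the source folders are fed to doxygen; the other values are the ones mentioned in the text:

```
# excerpt of the relevant Doxyfile settings
GENERATE_HTML  = YES
HTML_OUTPUT    = docs
IMAGE_PATH     = images
# assumption: the README annotated with {#mainpage} and the source folders are part of the input
INPUT          = README.md app lib
```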
tl;dr #
With GitHub template repositories and a bit of customized tooling, including a CI pipeline and automated documentation generation and deployment, we have a good and easy-to-extend starting point for many C++ projects. Although GitHub template repositories can be used for all kinds of projects, they are extremely helpful for C++ projects, which usually are a bit more complex in their setup.
Since you've made it this far, sharing this article on your favorite social media network and giving feedback would be highly appreciated 💖!