Guest speakers: Jennifer Buchmüller and Terry Cojean (Karlsruhe Institute of Technology)
Most representative HPC community codes and infrastructures consist of large software components developed over long periods of time by many different developers. Many of them are also still focused on a single type of computing resource, e.g., large clusters of general-purpose CPUs built around a very fast interconnect network. A large number of active developers contributing to the same code base increases the probability that new bugs are introduced, because not all possible configurations are tested. This either decreases the quality of the code or requires significant human resources to be spent on integration tests before a new release. Continuous Integration (CI) can reduce both the time needed for these tests and provide near-instant feedback on bugs that cause build failures, by automatically building all possible configurations at given points in time (e.g., after every source code commit). In addition, Continuous Benchmarking (CB) extends this beyond unified testing to a continuous benchmarking environment that tracks performance over time. Finally, Continuous Deployment (CD) enables scientists to provide efficient, reliable, and sustainable HPC software.

We will introduce the operational CI/CB/CD (Cx) service located at KIT. It provides support for automated Cx workflows on the HPC systems and other systems within the HPC realm, e.g., the Future Technologies Partition (FTP). A detailed description can be found here: https://www.nhr.kit.edu/userdocs/ci/ The service is already available to current users from multiple research communities. As an example, we will present the Ginkgo library and how it realizes Cx.
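To make the Continuous Benchmarking idea concrete, here is a minimal sketch in Python of how a pipeline job might flag performance regressions against a stored baseline. All names, data, and the 10% threshold are illustrative assumptions for this sketch, not the actual tooling used by the KIT Cx service or Ginkgo:

```python
def check_regressions(baseline, current, tolerance=0.10):
    """Compare current benchmark timings (seconds) against a stored
    baseline and return benchmarks that slowed down by more than
    `tolerance` (a relative factor, e.g. 0.10 = 10%)."""
    regressions = []
    for name, base_time in baseline.items():
        new_time = current.get(name)
        if new_time is None:
            continue  # benchmark was not run in this pipeline
        if new_time > base_time * (1.0 + tolerance):
            regressions.append((name, base_time, new_time))
    return regressions

# Hypothetical usage: a CI job would load these dicts from stored
# benchmark results rather than hard-coding them.
baseline = {"spmv_csr": 1.00, "cg_solve": 5.00}
current = {"spmv_csr": 1.05, "cg_solve": 6.00}
for name, old, new in check_regressions(baseline, current):
    print(f"REGRESSION: {name}: {old:.2f}s -> {new:.2f}s")
```

In a real Cx setup, a job like this would run after every commit, and a non-empty regression list would fail the pipeline so the offending change is caught immediately.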