From b333b06772c89d96aacb5490d6a219fba7c09cc6 Mon Sep 17 00:00:00 2001
From: Mitja Felicijan
Date: Thu, 12 Feb 2026 20:57:17 +0100
Subject: Engage!

---
 llama.cpp/ci/README.md | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 llama.cpp/ci/README.md

diff --git a/llama.cpp/ci/README.md b/llama.cpp/ci/README.md
new file mode 100644
index 0000000..d25bdd2
--- /dev/null
+++ b/llama.cpp/ci/README.md
@@ -0,0 +1,33 @@
+# CI
+
+This CI implements heavy-duty workflows that run on self-hosted runners. Typically, the purpose of these workflows is
+to cover hardware configurations that are not available from GitHub-hosted runners and/or that require more
+computational resources than are normally available.
+
+It is good practice to execute the full CI locally on your machine before publishing changes. For example:
+
+```bash
+mkdir tmp
+
+# CPU-only build
+bash ./ci/run.sh ./tmp/results ./tmp/mnt
+
+# with CUDA support
+GG_BUILD_CUDA=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt
+
+# with SYCL support
+source /opt/intel/oneapi/setvars.sh
+GG_BUILD_SYCL=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt
+
+# with MUSA support
+GG_BUILD_MUSA=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt
+
+# etc.
+```
+
+# Adding self-hosted runners
+
+- Add a self-hosted `ggml-ci` workflow to [.github/workflows/build.yml](https://github.com/ggml-org/llama.cpp/blob/master/.github/workflows/build.yml) with an appropriate label
+- Request a runner token from `ggml-org` (for example, via a comment in the PR or email)
+- Set up a machine using the received token ([docs](https://docs.github.com/en/actions/how-tos/manage-runners/self-hosted-runners/add-runners))
+- Optionally, update [ci/run.sh](https://github.com/ggml-org/llama.cpp/blob/master/ci/run.sh) to build and run on the target platform by gating the implementation with a `GG_BUILD_...` environment variable (see the sketch below)
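+
+For illustration, such a gate inside `ci/run.sh` might look roughly like the following minimal sketch (the exact
+variable names and CMake flags used by the script may differ):
+
+```bash
+# hypothetical example: enable a backend-specific build only when its GG_BUILD_... flag is set
+if [ ! -z ${GG_BUILD_CUDA} ]; then
+    CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_CUDA=ON"
+fi
+```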