# CI

This CI implements heavy-duty workflows that run on self-hosted runners. Typically, the purpose of these workflows is to
cover hardware configurations that are not available from GitHub-hosted runners and/or require more computational
resources than are normally available.

It is good practice to execute the full CI locally on your machine before publishing changes. For example:

```bash
mkdir tmp

# CPU-only build
bash ./ci/run.sh ./tmp/results ./tmp/mnt

# with CUDA support
GG_BUILD_CUDA=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt

# with SYCL support
source /opt/intel/oneapi/setvars.sh
GG_BUILD_SYCL=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt

# with MUSA support
GG_BUILD_MUSA=1 bash ./ci/run.sh ./tmp/results ./tmp/mnt

# etc.
```
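The two positional arguments of `run.sh` are an output directory for results and logs, and a mount directory used to cache downloaded models between runs. The directory roles described below are an assumption based on typical usage; verify them against the script's own usage message. A minimal sketch:

```bash
# Prepare the two directories passed to ci/run.sh:
#   $1 - output directory for build artifacts, logs and results
#   $2 - mount directory caching downloaded models between runs
# (roles assumed; check the usage output of ./ci/run.sh)
mkdir -p ./tmp/results ./tmp/mnt

# CPU-only run (uncomment to execute; this takes a long time):
# bash ./ci/run.sh ./tmp/results ./tmp/mnt

# Both directories can be reused across runs to avoid re-downloading models.
ls -d ./tmp/results ./tmp/mnt
```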

# Adding self-hosted runners

- Add a self-hosted `ggml-ci` workflow to [.github/workflows/build.yml](https://github.com/ggml-org/llama.cpp/blob/master/.github/workflows/build.yml) with an appropriate label
- Request a runner token from `ggml-org` (for example, via a comment in the PR or by email)
- Set up a machine using the received token ([docs](https://docs.github.com/en/actions/how-tos/manage-runners/self-hosted-runners/add-runners))
- Optionally, update [ci/run.sh](https://github.com/ggml-org/llama.cpp/blob/master/ci/run.sh) to build and run on the target platform by gating the implementation with a `GG_BUILD_...` environment variable
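
The `GG_BUILD_...` gating mentioned in the last step can be sketched as follows. This is a hedged illustration of the pattern, not the actual contents of `ci/run.sh`; the function name and the CMake flags (`-DGGML_CUDA=ON`, etc.) are assumptions for the example:

```bash
# Hypothetical sketch of the GG_BUILD_* gating pattern (not the real ci/run.sh):
# each backend variable, when set, appends the matching CMake flag.
gg_cmake_extra() {
    extra=""
    if [ -n "${GG_BUILD_CUDA:-}" ]; then extra="${extra} -DGGML_CUDA=ON"; fi
    if [ -n "${GG_BUILD_SYCL:-}" ]; then extra="${extra} -DGGML_SYCL=ON"; fi
    if [ -n "${GG_BUILD_MUSA:-}" ]; then extra="${extra} -DGGML_MUSA=ON"; fi
    echo "${extra}"
}

# example: enable only the CUDA path
GG_BUILD_CUDA=1
echo "cmake flags:$(gg_cmake_extra)"
```

Keeping each backend behind its own opt-in variable means the default local run stays CPU-only, and a new platform can be added without touching the existing build paths.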