# Development and Testing

## Development

### Code Generation

The backend uses code generation from YAML configuration:

```bash
# Regenerate protocol code
cd ggml-virtgpu/
python regenerate_remoting.py
```

### Adding New Operations

1. Add the function definition to `ggmlremoting_functions.yaml`
2. Regenerate the code with `regenerate_remoting.py`
3. Implement guest-side forwarding in `virtgpu-forward-*.cpp`
4. Implement host-side handling in `backend-dispatched-*.cpp`

## Testing

This section provides instructions for building and testing the GGML-VirtGPU backend on macOS with containers.

### Prerequisites

The testing setup requires:

- macOS host system
- Container runtime with the `libkrun` provider (podman machine)
- Access to the development patchsets for virglrenderer

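Before going further, a quick shell check can confirm the host tooling is in place. This is a sketch; the tool list below is an assumption drawn from the build steps later in this section:

```bash
# Rough prerequisite check; tools listed are the ones used in the build steps below
for tool in podman git cmake meson ninja python3; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
[ "$(uname -s)" = "Darwin" ] || echo "warning: this host is not macOS"
```
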
### Required Patchsets

The backend requires patches that are currently under review:

- **Virglrenderer APIR upstream PR**: https://gitlab.freedesktop.org/virgl/virglrenderer/-/merge_requests/1590 (for reference)
- **macOS Virglrenderer (for krunkit)**: https://gitlab.freedesktop.org/kpouget/virglrenderer/-/tree/main-macos
- **Linux Virglrenderer (for krun)**: https://gitlab.freedesktop.org/kpouget/virglrenderer/-/tree/main-linux

### Build Instructions

#### 1. Build ggml-virtgpu-backend (Host-side, macOS)

```bash
# Build the backend that runs natively on macOS
mkdir llama.cpp
cd llama.cpp
git clone https://github.com/ggml-org/llama.cpp.git src
cd src

LLAMA_MAC_BUILD=$PWD/build/ggml-virtgpu-backend

cmake -S . -B $LLAMA_MAC_BUILD \
    -DGGML_NATIVE=OFF \
    -DLLAMA_CURL=ON \
    -DGGML_REMOTINGBACKEND=ONLY \
    -DGGML_METAL=ON

TARGETS="ggml-metal"
cmake --build $LLAMA_MAC_BUILD --parallel 8 --target $TARGETS

# Build additional tools for native benchmarking
EXTRA_TARGETS="llama-run llama-bench"
cmake --build $LLAMA_MAC_BUILD --parallel 8 --target $EXTRA_TARGETS
```

#### 2. Build virglrenderer (Host-side, macOS)

```bash
# Build virglrenderer with APIR support
mkdir virglrenderer
cd virglrenderer
git clone https://gitlab.freedesktop.org/kpouget/virglrenderer -b main-macos src
cd src

VIRGL_BUILD_DIR=$PWD/build

# -Dvenus=true (together with the VIRGL_ROUTE_VENUS_TO_APIR=1 environment
# variable) routes the APIR requests via the Venus backend, for easier
# testing without a patched hypervisor
meson setup $VIRGL_BUILD_DIR \
    -Dvenus=true \
    -Dapir=true

ninja -C $VIRGL_BUILD_DIR
```

#### 3. Build ggml-virtgpu (Guest-side, Linux)

Option A: Build from the command line:

```bash
# Inside a Linux container
mkdir llama.cpp
cd llama.cpp
git clone https://github.com/ggml-org/llama.cpp.git src
cd src

LLAMA_LINUX_BUILD=$PWD/build-virtgpu

cmake -S . -B $LLAMA_LINUX_BUILD \
    -DGGML_VIRTGPU=ON

cmake --build $LLAMA_LINUX_BUILD
```

Option B: Build a container image with the frontend:

```bash
cat << EOF > remoting.containerfile
FROM quay.io/fedora/fedora:43
USER 0

WORKDIR /app/remoting

ARG LLAMA_CPP_REPO="https://github.com/ggml-org/llama.cpp.git"
ARG LLAMA_CPP_VERSION="master"
ARG LLAMA_CPP_CMAKE_FLAGS="-DGGML_VIRTGPU=ON"
ARG LLAMA_CPP_CMAKE_BUILD_FLAGS="--parallel 4"

RUN dnf install -y git cmake gcc gcc-c++ libcurl-devel libdrm-devel

RUN git clone "\${LLAMA_CPP_REPO}" src \\
    && git -C src fetch origin \${LLAMA_CPP_VERSION} \\
    && git -C src reset --hard FETCH_HEAD

RUN mkdir -p build \\
    && cd src \\
    && set -o pipefail \\
    && cmake -S . -B ../build \${LLAMA_CPP_CMAKE_FLAGS} \\
    && cmake --build ../build/ \${LLAMA_CPP_CMAKE_BUILD_FLAGS}

ENTRYPOINT ["/app/remoting/build/bin/llama-server"]
EOF

mkdir -p empty_dir
podman build -f remoting.containerfile ./empty_dir -t localhost/llama-cpp.virtgpu
```

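Once the build finishes, a small check (generic podman usage, not specific to this setup) can confirm the image landed in local storage:

```bash
# Confirm the image exists locally; falls through quietly if podman is absent
if podman image exists localhost/llama-cpp.virtgpu 2>/dev/null; then
    echo "image present"
else
    echo "image not found (or podman unavailable)"
fi
```
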
### Environment Setup

#### Set krunkit Environment Variables

```bash
# Define the base directories (adapt these paths to your system)
VIRGL_BUILD_DIR=$HOME/remoting/virglrenderer/build
LLAMA_MAC_BUILD=$HOME/remoting/llama.cpp/build-backend

# For krunkit to load the custom virglrenderer library
export DYLD_LIBRARY_PATH=$VIRGL_BUILD_DIR/src

# For virglrenderer to load the ggml-virtgpu-backend library
export VIRGL_APIR_BACKEND_LIBRARY="$LLAMA_MAC_BUILD/bin/libggml-virtgpu-backend.dylib"

# For the llama.cpp virtgpu backend to load the ggml-metal backend
export APIR_LLAMA_CPP_GGML_LIBRARY_PATH="$LLAMA_MAC_BUILD/bin/libggml-metal.dylib"
export APIR_LLAMA_CPP_GGML_LIBRARY_REG=ggml_backend_metal_reg
```

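A typo in any of these paths only surfaces later as a hard-to-trace load failure, so it can help to verify the libraries exist before starting the machine. A sketch using the variables set above:

```bash
# Fail fast if any of the libraries referenced above is missing
for lib in "$DYLD_LIBRARY_PATH/libvirglrenderer.1.dylib" \
           "$VIRGL_APIR_BACKEND_LIBRARY" \
           "$APIR_LLAMA_CPP_GGML_LIBRARY_PATH"; do
    [ -f "$lib" ] && echo "OK: $lib" || echo "MISSING: $lib"
done
```
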
#### Launch Container Environment

```bash
# Set container provider to libkrun
export CONTAINERS_MACHINE_PROVIDER=libkrun
podman machine start
```

#### Verify Environment

Confirm that krunkit is using the correct virglrenderer library:

```bash
lsof -c krunkit | grep virglrenderer
# Expected output:
# krunkit 50574 user txt REG 1,14 2273912 10849442 ($VIRGL_BUILD_DIR/src)/libvirglrenderer.1.dylib
```

### Running Tests

#### Launch Test Container

```bash
# Optional model caching
mkdir -p models
PODMAN_CACHE_ARGS="-v $PWD/models:/models --user root:root --cgroupns host --security-opt label=disable -w /models"

podman run $PODMAN_CACHE_ARGS -it --rm --device /dev/dri localhost/llama-cpp.virtgpu
```

#### Test llama.cpp in Container

```bash
# Run the performance benchmark
/app/remoting/build/bin/llama-bench -m ./llama3.2
```

Expected output (performance may vary):

```
| model | size | params | backend | ngl | test | t/s |
| ------------------------------ | ---------: | ---------: | ------------ | --: | ------------: | -------------------: |
| llama 3B Q4_K - Medium | 1.87 GiB | 3.21 B | ggml-virtgpu | 99 | pp512 | 991.30 ± 0.66 |
| llama 3B Q4_K - Medium | 1.87 GiB | 3.21 B | ggml-virtgpu | 99 | tg128 | 85.71 ± 0.11 |
```

### Troubleshooting

#### SSH Environment Variable Issues

⚠️ **Warning**: Setting `DYLD_LIBRARY_PATH` from an SSH session doesn't work on macOS. Here is a workaround:

**Workaround: Replace the system library**

```bash
VIRGL_BUILD_DIR=$HOME/remoting/virglrenderer/build  # ⚠️ adapt to your system
BREW_VIRGL_DIR=/opt/homebrew/Cellar/virglrenderer/0.10.4d/lib
VIRGL_LIB=libvirglrenderer.1.dylib

cd $BREW_VIRGL_DIR
mv $VIRGL_LIB ${VIRGL_LIB}.orig   # keep the original library aside
ln -s $VIRGL_BUILD_DIR/src/$VIRGL_LIB
```
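
To undo the workaround later, remove the symlink and put the original library back. A sketch using the same paths as above:

```bash
# Restore the original Homebrew virglrenderer library
BREW_VIRGL_DIR=/opt/homebrew/Cellar/virglrenderer/0.10.4d/lib  # ⚠️ adapt to your system
VIRGL_LIB=libvirglrenderer.1.dylib

if cd "$BREW_VIRGL_DIR" 2>/dev/null; then
    rm -f "$VIRGL_LIB"                   # drop the symlink to the custom build
    mv "${VIRGL_LIB}.orig" "$VIRGL_LIB"  # restore the original library
fi
```
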