| author | Mitja Felicijan <mitja.felicijan@gmail.com> | 2026-02-12 20:57:17 +0100 |
|---|---|---|
| committer | Mitja Felicijan <mitja.felicijan@gmail.com> | 2026-02-12 20:57:17 +0100 |
| commit | b333b06772c89d96aacb5490d6a219fba7c09cc6 (patch) | |
| tree | 211df60083a5946baa2ed61d33d8121b7e251b06 /llama.cpp/examples/sycl/README.md | |
| download | llmnpc-b333b06772c89d96aacb5490d6a219fba7c09cc6.tar.gz | |
Engage!
Diffstat (limited to 'llama.cpp/examples/sycl/README.md')
| -rw-r--r-- | llama.cpp/examples/sycl/README.md | 41 |
1 file changed, 41 insertions, 0 deletions
diff --git a/llama.cpp/examples/sycl/README.md b/llama.cpp/examples/sycl/README.md
new file mode 100644
index 0000000..8819d87
--- /dev/null
+++ b/llama.cpp/examples/sycl/README.md
@@ -0,0 +1,41 @@
+# llama.cpp/example/sycl
+
+This example provides tools for running llama.cpp with SYCL on Intel GPUs.
+
+## Tool
+
+| Tool Name | Function | Status |
+|-|-|-|
+| llama-ls-sycl-device | List all SYCL devices with ID, compute capability, max work group size, etc. | Supported |
+
+### llama-ls-sycl-device
+
+Lists all SYCL devices with their ID, compute capability, max work group size, etc.
+
+1. Build llama.cpp for SYCL for the specified target *(using GGML_SYCL_TARGET)*.
+
+2. Enable the oneAPI runtime environment *(if GGML_SYCL_TARGET is set to INTEL, the default)*:
+
+```
+source /opt/intel/oneapi/setvars.sh
+```
+
+3. Execute:
+
+```
+./build/bin/llama-ls-sycl-device
+```
+
+Check the device IDs in the startup log, for example:
+
+```
+found 2 SYCL devices:
+|  |                   |                                        |       |Max    |        |Max  |Global |               |
+|  |                   |                                        |       |compute|Max work|sub  |mem    |               |
+|ID|        Device Type|                                    Name|Version|units  |group   |group|size   | Driver version|
+|--|-------------------|----------------------------------------|-------|-------|--------|-----|-------|---------------|
+| 0| [level_zero:gpu:0]|                 Intel Arc A770 Graphics|    1.3|    512|    1024|   32| 16225M|      1.3.29138|
+| 1| [level_zero:gpu:1]|                  Intel UHD Graphics 750|    1.3|     32|     512|   32| 62631M|      1.3.29138|
+```
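
Step 1 of the README above ("Build the llama.cpp for SYCL for the specified target") is not spelled out. A minimal sketch of the build, assuming a oneAPI installation under `/opt/intel/oneapi` and the default Intel target (the `GGML_SYCL` CMake option and the `icx`/`icpx` compilers come from the llama.cpp SYCL backend docs; adjust paths and flags to your environment):

```shell
# Sketch only: assumes the oneAPI toolkit is installed and provides
# the icx/icpx compilers once setvars.sh has been sourced.
source /opt/intel/oneapi/setvars.sh

# Configure with the SYCL backend enabled (GGML_SYCL_TARGET defaults to INTEL).
cmake -B build \
  -DGGML_SYCL=ON \
  -DCMAKE_C_COMPILER=icx \
  -DCMAKE_CXX_COMPILER=icpx

# Build; the resulting binaries (including llama-ls-sycl-device) land in build/bin/.
cmake --build build --config Release -j
```

After this, step 3's `./build/bin/llama-ls-sycl-device` should work from the repository root.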
