Diffstat (limited to 'llama.cpp/.github/ISSUE_TEMPLATE')
-rw-r--r--  llama.cpp/.github/ISSUE_TEMPLATE/010-bug-compilation.yml   88
-rw-r--r--  llama.cpp/.github/ISSUE_TEMPLATE/011-bug-results.yml      115
-rw-r--r--  llama.cpp/.github/ISSUE_TEMPLATE/019-bug-misc.yml         103
-rw-r--r--  llama.cpp/.github/ISSUE_TEMPLATE/020-enhancement.yml       51
-rw-r--r--  llama.cpp/.github/ISSUE_TEMPLATE/030-research.yml          52
-rw-r--r--  llama.cpp/.github/ISSUE_TEMPLATE/040-refactor.yml          28
-rw-r--r--  llama.cpp/.github/ISSUE_TEMPLATE/config.yml                11
7 files changed, 448 insertions, 0 deletions
diff --git a/llama.cpp/.github/ISSUE_TEMPLATE/010-bug-compilation.yml b/llama.cpp/.github/ISSUE_TEMPLATE/010-bug-compilation.yml
new file mode 100644
index 0000000..c106f47
--- /dev/null
+++ b/llama.cpp/.github/ISSUE_TEMPLATE/010-bug-compilation.yml
@@ -0,0 +1,88 @@
+name: Bug (compilation)
+description: Something goes wrong when trying to compile llama.cpp.
+title: "Compile bug: "
+labels: ["bug-unconfirmed", "compilation"]
+body:
+ - type: markdown
+ attributes:
+ value: >
+ Thanks for taking the time to fill out this bug report!
+ This issue template is intended for bug reports where the compilation of llama.cpp fails.
+ Before opening an issue, please confirm that the compilation still fails
+ after recreating the CMake build directory and with `-DGGML_CCACHE=OFF`.
+ If the compilation succeeds with ccache disabled, you should be able to fix the issue permanently
+ by clearing `~/.cache/ccache` (on Linux).
+ - type: textarea
+ id: commit
+ attributes:
+ label: Git commit
+ description: Which commit are you trying to compile?
+ placeholder: |
+ $ git rev-parse HEAD
+ 84a07a17b1b08cf2b9747c633a2372782848a27f
+ validations:
+ required: true
+ - type: dropdown
+ id: operating-system
+ attributes:
+ label: Operating systems
+ description: Which operating systems do you know to be affected?
+ multiple: true
+ options:
+ - Linux
+ - Mac
+ - Windows
+ - BSD
+ - Other? (Please let us know in description)
+ validations:
+ required: true
+ - type: dropdown
+ id: backends
+ attributes:
+ label: GGML backends
+ description: Which GGML backends do you know to be affected?
+ options: [AMX, BLAS, CPU, CUDA, HIP, Metal, Musa, RPC, SYCL, Vulkan, OpenCL, zDNN]
+ multiple: true
+ validations:
+ required: true
+ - type: textarea
+ id: info
+ attributes:
+ label: Problem description & steps to reproduce
+ description: >
+ Please give us a summary of the problem and tell us how to reproduce it.
+ If you can narrow down the bug to specific compile flags, we would very much appreciate that information.
+ placeholder: >
+ I'm trying to compile llama.cpp with CUDA support on a fresh install of Ubuntu and get error XY.
+ Here are the exact commands that I used: ...
+ validations:
+ required: true
+ - type: textarea
+ id: first_bad_commit
+ attributes:
+ label: First Bad Commit
+ description: >
+ If the bug was not present on an earlier version: when did it start appearing?
+ If possible, please do a git bisect and identify the exact commit that introduced the bug.
+ validations:
+ required: false
+ - type: textarea
+ id: command
+ attributes:
+ label: Compile command
+ description: >
+ Please provide the exact command you used to compile llama.cpp. For example: `cmake -B ...`.
+ This will be automatically formatted into code, so no need for backticks.
+ render: shell
+ validations:
+ required: true
+ - type: textarea
+ id: logs
+ attributes:
+ label: Relevant log output
+ description: >
+ Please copy and paste any relevant log output, including any generated text.
+ This will be automatically formatted into code, so no need for backticks.
+ render: shell
+ validations:
+ required: true
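The "First Bad Commit" field above asks reporters to bisect. A minimal sketch of that workflow on a throwaway repository (in a real report you would run this in your llama.cpp checkout, replacing the `grep` with your actual build-and-test command):

```shell
# Build a tiny repo with two good commits and one bad one, then let
# `git bisect run` find the first bad commit automatically.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo ok > state.txt && git add state.txt && git commit -qm good1
echo "ok again" > state.txt && git commit -qam good2
echo broken > state.txt && git commit -qam bad
git bisect start HEAD HEAD~2           # bad = HEAD, good = two commits back
git bisect run grep -q ok state.txt    # exit 0 = good commit, nonzero = bad
git bisect reset                       # leave bisect mode, back to the tip
```

`git bisect run` prints the first bad commit's hash, which is exactly what this issue field asks for.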
diff --git a/llama.cpp/.github/ISSUE_TEMPLATE/011-bug-results.yml b/llama.cpp/.github/ISSUE_TEMPLATE/011-bug-results.yml
new file mode 100644
index 0000000..31202df
--- /dev/null
+++ b/llama.cpp/.github/ISSUE_TEMPLATE/011-bug-results.yml
@@ -0,0 +1,115 @@
+name: Bug (model use)
+description: Something goes wrong when using a model (in general, not specific to a single llama.cpp module).
+title: "Eval bug: "
+labels: ["bug-unconfirmed", "model evaluation"]
+body:
+ - type: markdown
+ attributes:
+ value: >
+ Thanks for taking the time to fill out this bug report!
+ This issue template is intended for bug reports where the model evaluation results
+ (i.e. the generated text) are incorrect or llama.cpp crashes during model evaluation.
+ If you encountered the issue while using an external UI (e.g. ollama),
+ please reproduce your issue using one of the examples/binaries in this repository.
+ The `llama-completion` binary can be used for simple and reproducible model inference.
+ - type: textarea
+ id: version
+ attributes:
+ label: Name and Version
+ description: Which version of our software are you running? (use `--version` to get a version string)
+ placeholder: |
+ $ ./llama-cli --version
+ version: 2999 (42b4109e)
+ built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
+ validations:
+ required: true
+ - type: dropdown
+ id: operating-system
+ attributes:
+ label: Operating systems
+ description: Which operating systems do you know to be affected?
+ multiple: true
+ options:
+ - Linux
+ - Mac
+ - Windows
+ - BSD
+ - Other? (Please let us know in description)
+ validations:
+ required: true
+ - type: dropdown
+ id: backends
+ attributes:
+ label: GGML backends
+ description: Which GGML backends do you know to be affected?
+ options: [AMX, BLAS, CPU, CUDA, HIP, Metal, Musa, RPC, SYCL, Vulkan, OpenCL, zDNN]
+ multiple: true
+ validations:
+ required: true
+ - type: textarea
+ id: hardware
+ attributes:
+ label: Hardware
+ description: Which CPUs/GPUs are you using?
+ placeholder: >
+ e.g. Ryzen 5950X + 2x RTX 4090
+ validations:
+ required: true
+ - type: textarea
+ id: model
+ attributes:
+ label: Models
+ description: >
+ Which model(s) at which quantization were you using when encountering the bug?
+ If you downloaded a GGUF file from Hugging Face, please provide a link.
+ placeholder: >
+ e.g. Meta LLaMA 3.1 Instruct 8b q4_K_M
+ validations:
+ required: false
+ - type: textarea
+ id: info
+ attributes:
+ label: Problem description & steps to reproduce
+ description: >
+ Please give us a summary of the problem and tell us how to reproduce it.
+ If you can narrow down the bug to specific hardware, compile flags, or command line arguments,
+ we would very much appreciate that information.
+
+ If possible, please try to reproduce the issue using `llama-completion` with `-fit off`.
+ If you can only reproduce the issue with `-fit on`, please provide logs both with and without `--verbose`.
+ placeholder: >
+ e.g. when I run llama-completion with `-fa on` I get garbled outputs for very long prompts.
+ With short prompts or `-fa off` it works correctly.
+ Here are the exact commands that I used: ...
+ validations:
+ required: true
+ - type: textarea
+ id: first_bad_commit
+ attributes:
+ label: First Bad Commit
+ description: >
+ If the bug was not present on an earlier version: when did it start appearing?
+ If possible, please do a git bisect and identify the exact commit that introduced the bug.
+ validations:
+ required: false
+ - type: textarea
+ id: logs
+ attributes:
+ label: Relevant log output
+ description: >
+ Please copy and paste any relevant log output, including the command that you entered and any generated text.
+ For very long logs (thousands of lines), preferably upload them as files instead.
+ On Linux you can redirect console output into a file by appending ` > llama.log 2>&1` to your command.
+ value: |
+ <details>
+ <summary>Logs</summary>
+ <!-- Copy-pasted short logs go into the "console" area here -->
+
+ ```console
+
+ ```
+ </details>
+
+ <!-- Long logs that you upload as files go here, outside the "console" area -->
+ validations:
+ required: true
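The redirect tip in this template works with any command; a minimal sketch, using `echo` as a stand-in for the actual llama.cpp invocation:

```shell
# '>' redirects stdout into the file; '2>&1' sends stderr to the same
# place, so error messages end up in the log alongside normal output.
echo "llama.cpp sample output" > llama.log 2>&1
cat llama.log
```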
diff --git a/llama.cpp/.github/ISSUE_TEMPLATE/019-bug-misc.yml b/llama.cpp/.github/ISSUE_TEMPLATE/019-bug-misc.yml
new file mode 100644
index 0000000..8e867e7
--- /dev/null
+++ b/llama.cpp/.github/ISSUE_TEMPLATE/019-bug-misc.yml
@@ -0,0 +1,103 @@
+name: Bug (misc.)
+description: Something is not working the way it should (and it's not covered by any of the above cases).
+title: "Misc. bug: "
+labels: ["bug-unconfirmed"]
+body:
+ - type: markdown
+ attributes:
+ value: >
+ Thanks for taking the time to fill out this bug report!
+ This issue template is intended for miscellaneous bugs that don't fit into any other category.
+ If you encountered the issue while using an external UI (e.g. ollama),
+ please reproduce your issue using one of the examples/binaries in this repository.
+ - type: textarea
+ id: version
+ attributes:
+ label: Name and Version
+ description: Which version of our software is affected? (You can use `--version` to get a version string.)
+ placeholder: |
+ $ ./llama-cli --version
+ version: 2999 (42b4109e)
+ built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
+ validations:
+ required: true
+ - type: dropdown
+ id: operating-system
+ attributes:
+ label: Operating systems
+ description: Which operating systems do you know to be affected?
+ multiple: true
+ options:
+ - Linux
+ - Mac
+ - Windows
+ - BSD
+ - Other? (Please let us know in description)
+ validations:
+ required: false
+ - type: dropdown
+ id: module
+ attributes:
+ label: Which llama.cpp modules do you know to be affected?
+ multiple: true
+ options:
+ - Documentation/Github
+ - libllama (core library)
+ - llama-cli
+ - llama-server
+ - llama-bench
+ - llama-quantize
+ - Python/Bash scripts
+ - Test code
+ - Other (Please specify in the next section)
+ validations:
+ required: false
+ - type: textarea
+ id: command
+ attributes:
+ label: Command line
+ description: >
+ Please provide the exact commands you entered, if applicable. For example: `llama-server -m ... -c ...`, `llama-cli -m ...`, etc.
+ This will be automatically formatted into code, so no need for backticks.
+ render: shell
+ validations:
+ required: false
+ - type: textarea
+ id: info
+ attributes:
+ label: Problem description & steps to reproduce
+ description: >
+ Please give us a summary of the problem and tell us how to reproduce it (if applicable).
+ validations:
+ required: true
+ - type: textarea
+ id: first_bad_commit
+ attributes:
+ label: First Bad Commit
+ description: >
+ If the bug was not present on an earlier version and it's not trivial to track down: when did it start appearing?
+ If possible, please do a git bisect and identify the exact commit that introduced the bug.
+ validations:
+ required: false
+ - type: textarea
+ id: logs
+ attributes:
+ label: Relevant log output
+ description: >
+ If applicable, please copy and paste any relevant log output, including any generated text.
+ If you are encountering problems specifically with the `llama_params_fit` module, always upload `--verbose` logs as well.
+ For very long logs (thousands of lines), please upload them as files instead.
+ On Linux you can redirect console output into a file by appending ` > llama.log 2>&1` to your command.
+ value: |
+ <details>
+ <summary>Logs</summary>
+ <!-- Copy-pasted short logs go into the "console" area here -->
+
+ ```console
+
+ ```
+ </details>
+
+ <!-- Long logs that you upload as files go here, outside the "console" area -->
+ validations:
+ required: false
diff --git a/llama.cpp/.github/ISSUE_TEMPLATE/020-enhancement.yml b/llama.cpp/.github/ISSUE_TEMPLATE/020-enhancement.yml
new file mode 100644
index 0000000..cee1446
--- /dev/null
+++ b/llama.cpp/.github/ISSUE_TEMPLATE/020-enhancement.yml
@@ -0,0 +1,51 @@
+name: Enhancement
+description: Used to request enhancements for llama.cpp.
+title: "Feature Request: "
+labels: ["enhancement"]
+body:
+ - type: markdown
+ attributes:
+ value: |
+ If there is not yet a consensus for this enhancement request, please post your idea in the [Ideas section of Discussions](https://github.com/ggml-org/llama.cpp/discussions/categories/ideas) first. This helps keep the issue tracker focused on enhancements that the community has agreed need to be implemented.
+
+ - type: checkboxes
+ id: prerequisites
+ attributes:
+ label: Prerequisites
+ description: Please confirm the following before submitting your enhancement request.
+ options:
+ - label: I am running the latest code. Mention the version if possible as well.
+ required: true
+ - label: I carefully followed the [README.md](https://github.com/ggml-org/llama.cpp/blob/master/README.md).
+ required: true
+ - label: I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
+ required: true
+ - label: I reviewed the [Discussions](https://github.com/ggml-org/llama.cpp/discussions), and have a new and useful enhancement to share.
+ required: true
+
+ - type: textarea
+ id: feature-description
+ attributes:
+ label: Feature Description
+ description: Please provide a detailed written description of what you were trying to do, and what you expected `llama.cpp` to do as an enhancement.
+ placeholder: Detailed description of the enhancement
+ validations:
+ required: true
+
+ - type: textarea
+ id: motivation
+ attributes:
+ label: Motivation
+ description: Please provide a detailed written description of reasons why this feature is necessary and how it is useful to `llama.cpp` users.
+ placeholder: Explanation of why this feature is needed and its benefits
+ validations:
+ required: true
+
+ - type: textarea
+ id: possible-implementation
+ attributes:
+ label: Possible Implementation
+ description: If you have an idea as to how it can be implemented, please write a detailed description. Feel free to give links to external sources or share visuals that might be helpful to understand the details better.
+ placeholder: Detailed description of potential implementation
+ validations:
+ required: false
diff --git a/llama.cpp/.github/ISSUE_TEMPLATE/030-research.yml b/llama.cpp/.github/ISSUE_TEMPLATE/030-research.yml
new file mode 100644
index 0000000..e774550
--- /dev/null
+++ b/llama.cpp/.github/ISSUE_TEMPLATE/030-research.yml
@@ -0,0 +1,52 @@
+name: Research
+description: Track new technical research area.
+title: "Research: "
+labels: ["research 🔬"]
+body:
+ - type: markdown
+ attributes:
+ value: |
+ Don't forget to check for any [duplicate research issue tickets](https://github.com/ggml-org/llama.cpp/issues?q=is%3Aopen+is%3Aissue+label%3A%22research+%F0%9F%94%AC%22)
+
+ - type: checkboxes
+ id: research-stage
+ attributes:
+ label: Research Stage
+ description: Track the general state of this research ticket
+ options:
+ - label: Background Research (Let's try to avoid reinventing the wheel)
+ - label: Hypothesis Formed (How do you think this will work and what will its effect be?)
+ - label: Strategy / Implementation Forming
+ - label: Analysis of results
+ - label: Debrief / Documentation (So people in the future can learn from us)
+
+ - type: textarea
+ id: background
+ attributes:
+ label: Previous existing literature and research
+ description: What's the current state of the art, and what's the motivation for this research?
+
+ - type: textarea
+ id: hypothesis
+ attributes:
+ label: Hypothesis
+ description: How do you think this will work, and what will its effect be?
+
+ - type: textarea
+ id: implementation
+ attributes:
+ label: Implementation
+ description: Got an approach? e.g. a PR ready to go?
+
+ - type: textarea
+ id: analysis
+ attributes:
+ label: Analysis
+ description: How does the proposed implementation behave?
+
+ - type: textarea
+ id: logs
+ attributes:
+ label: Relevant log output
+ description: Please copy and paste any relevant log output. This will be automatically formatted into code, so no need for backticks.
+ render: shell
diff --git a/llama.cpp/.github/ISSUE_TEMPLATE/040-refactor.yml b/llama.cpp/.github/ISSUE_TEMPLATE/040-refactor.yml
new file mode 100644
index 0000000..2fe94e2
--- /dev/null
+++ b/llama.cpp/.github/ISSUE_TEMPLATE/040-refactor.yml
@@ -0,0 +1,28 @@
+name: Refactor (Maintainers)
+description: Used to track refactoring opportunities.
+title: "Refactor: "
+labels: ["refactor"]
+body:
+ - type: markdown
+ attributes:
+ value: |
+ Don't forget to [check for existing refactor issue tickets](https://github.com/ggml-org/llama.cpp/issues?q=is%3Aopen+is%3Aissue+label%3Arefactoring) in case it's already covered.
+ You may also want to check the [refactoring label on pull requests](https://github.com/ggml-org/llama.cpp/pulls?q=is%3Aopen+is%3Apr+label%3Arefactoring) for duplicates.
+
+ - type: textarea
+ id: background-description
+ attributes:
+ label: Background Description
+ description: Please provide a detailed written description of the pain points you are trying to solve.
+ placeholder: Detailed description behind your motivation to request refactor
+ validations:
+ required: true
+
+ - type: textarea
+ id: possible-approaches
+ attributes:
+ label: Possible Refactor Approaches
+ description: If you have ideas about possible approaches to solving this problem, describe them here. You may want to format them as a to-do list.
+ placeholder: Your ideas for possible refactoring approaches
+ validations:
+ required: false
diff --git a/llama.cpp/.github/ISSUE_TEMPLATE/config.yml b/llama.cpp/.github/ISSUE_TEMPLATE/config.yml
new file mode 100644
index 0000000..0d24653
--- /dev/null
+++ b/llama.cpp/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,11 @@
+blank_issues_enabled: true
+contact_links:
+ - name: Got an idea?
+ url: https://github.com/ggml-org/llama.cpp/discussions/categories/ideas
+ about: Pop it there. It may then become an enhancement ticket.
+ - name: Got a question?
+ url: https://github.com/ggml-org/llama.cpp/discussions/categories/q-a
+ about: Ask a question there!
+ - name: Want to contribute?
+ url: https://github.com/ggml-org/llama.cpp/wiki/contribute
+ about: Head to the contribution guide page of the wiki for areas you can help with
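All of the templates above follow GitHub's issue forms schema: top-level metadata plus a `body` list of typed elements, where every non-`markdown` element should carry an `id` and dropdowns need `options`. A minimal structural check, sketched in plain Python with a fragment of `010-bug-compilation.yml` inlined as a dict (to avoid a YAML-parser dependency; the `check` helper is hypothetical, not part of any GitHub tooling):

```python
# Minimal structural check for a GitHub issue-form definition.
# The dict below mirrors a fragment of 010-bug-compilation.yml; in
# practice you would load the real file with a YAML parser.
template = {
    "name": "Bug (compilation)",
    "description": "Something goes wrong when trying to compile llama.cpp.",
    "title": "Compile bug: ",
    "labels": ["bug-unconfirmed", "compilation"],
    "body": [
        {"type": "markdown",
         "attributes": {"value": "Thanks for taking the time!"}},
        {"type": "textarea", "id": "commit",
         "attributes": {"label": "Git commit"},
         "validations": {"required": True}},
        {"type": "dropdown", "id": "operating-system",
         "attributes": {"label": "Operating systems", "multiple": True,
                        "options": ["Linux", "Mac", "Windows"]},
         "validations": {"required": True}},
    ],
}

def check(template):
    """Return a list of schema problems; an empty list means the form looks valid."""
    problems = []
    for key in ("name", "description", "body"):
        if key not in template:
            problems.append(f"missing top-level key: {key}")
    for i, elem in enumerate(template.get("body", [])):
        if "type" not in elem:
            problems.append(f"body[{i}]: missing 'type'")
            continue
        if elem["type"] != "markdown" and "id" not in elem:
            problems.append(f"body[{i}]: input elements should have an 'id'")
        if elem["type"] == "dropdown" and not elem.get("attributes", {}).get("options"):
            problems.append(f"body[{i}]: dropdown needs non-empty 'options'")
    return problems

print(check(template))  # prints [] for the fragment above
```

Running a check like this before pushing catches the most common template mistakes (a dropdown with no options, an element GitHub cannot identify) without waiting for the form to render on github.com.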