Compare commits

...

11 Commits
b8872 ... b8883

Author SHA1 Message Date
Piotr Wilkin (ilintar)
134d6e54d4 common/chat, server: refactor, move all conversion functions to common, add tests (#20690)
* Refactor conversion functions
2026-04-22 10:28:45 +02:00
Chen Yuan
ca7f7b7b94 ggml-webgpu(shader): support conv2d kernels. (#21964)
* ggml(webgpu): fix the busy-polls in Emscripten in the waitAny after #20618, and remove the busy webgpu log

* Merge with upstream

* Fix GET_ROWS packed integer NaN when using f16 as memory buffer in shader quants

* Update Unary wgsl EXP and EXPM1 for f16 stability

* Fix GET_ROWS IQ4_XS struct for NaN f16 canonicalization

* Fix numerical precision for unary sqrt when working with f16

* Fix NaN canonicalization for packed integers using f16

* Update err threshold for binary div ops when using f16

* backend: Keep one Dawn/WebGPU instance alive for the lifetime of the static backend

* clean: uncomment existing code logs

* clean: clean the unnecessary debug info

* Refactor and generalize dequant helpers

* Remove deprecated quant structs

* Refactor shader defines to reduce repetition

* Remove error override for F16 type

* fix: fix the accidental removal of the proper initialization of ctx

* clean: clean legacy and format code

* fix: did not modify tests ops

* shader(conv2d): add conv2d shader kernels and pass f32 and f16 tests

* shader(conv2d): fix the out of bounds memory access in the weight indexing

* shader(conv2d): clean unused variables and optimize the computation

* merge: use the new entries function

* clean: address the formatting issues

* clean: address the warning issues

* clean: clean the shader editorconfig-checker issues

* clean: clean the shader editorconfig-checker with utf-8

---------

Co-authored-by: Jeremy J. Hartmann <jeremy@mtion.tv>
2026-04-21 20:18:57 -07:00
Aparna M P
0dedb9ef7a hexagon: add support for FILL op (#22198)
Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
2026-04-21 16:24:20 -07:00
Masashi Yoshimura
2799d933b5 ggml-webgpu: reset CPU/GPU profiling time when freeing context (#22050)
* Reset the CPU/GPU profiling time when freeing context.

* move GPU profiling time from global context to webgpu_context.
2026-04-21 16:05:21 -07:00
Xuan-Son Nguyen
04fe84b69d server: allow cancelling model loading (#21814) 2026-04-22 00:26:09 +02:00
Shreya Jain
5a4cd6741f Hexagon: DIAG op (#22195)
* hexagon: Add DIAG op

* hexagon: add HVX support and DMA double buffering

* hexagon: fix fatal error

* hexagon: remove as many pragma(s) as possible
2026-04-21 14:16:04 -07:00
Mengsheng Wu
2248799a58 hexagon: fix missing v79 entry in libggml-htp.inf (#22194) 2026-04-21 13:53:44 -07:00
Paul Dubs
72d693e4fb spec : reset i_last when low acceptance streak occurs (#22168)
By resetting i_last to zero, we will include the current context when rebuilding the speculative map.
2026-04-21 21:29:07 +03:00
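A minimal sketch of the reset behavior described here, mirroring the common/speculative.cpp hunk shown further down this page (the acceptance check and threshold names are assumptions for illustration):

    // after a streak of low-acceptance speculation rounds, reset the
    // n-gram module and rebuild the map from the current context
    if (n_accepted < n_accept_min) {   // low-acceptance round (name assumed)
        if (++n_low >= n_low_max) {    // streak threshold (name assumed)
            mod.reset();               // drop the stale n-gram state
            n_low  = 0;
            i_last = 0;                // re-index from the start of the context
        }
    } else {
        n_low = 0;                     // streak broken
    }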
Kwa Jie Hao
98d2d2884e mtmd: Add support for Reka Edge 2603 (#21616)
* feat: (vocab) fix stray text appended in llama_decode_text

Remove accidental concatenation of the full `text` string when
formatting UNK_BYTE hex escapes. Only the closing "]" should be appended.
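A hedged sketch of the bug class being fixed (the escape format follows the commit description; the surrounding context is illustrative):

    // before: the full `text` string was accidentally concatenated
    //   decoded_text += "[UNK_BYTE_0x" + hex + text + "]";
    // after: only the closing bracket is appended
    //   decoded_text += "[UNK_BYTE_0x" + hex + "]";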

* feat(mtmd): add Yasa2 vision encoder support

Add a Yasa2 (ConvNeXtV2-based) vision encoder for reka-edge:
- Register PROJECTOR_TYPE_YASA2 and tensor name definitions
- Add yasa2_block/yasa2_stage model structs
- Implement graph builder with ConvNeXt stages, GRN, adaptive pooling
- Wire into clip.cpp switch statements and mtmd.cpp init_vision
- Use mtmd_image_preprocessor_fixed_size for image preprocessing

* feat(chat): add reka-edge template handler (tools, thinking)

- Add chat-reka.cpp/h implementing PEG-based parser for reka-edge format
- Add Reka-Edge.jinja chat template
- Detect reka-edge template in try_specialized_template()
- Add LLAMA_EXAMPLE_MTMD to chat-template-file arg

* feat: add reka vlm to gguf conversion script

Converts Reka Yasa2 hf checkpoints to GGUF format:
- Text decoder: Llama-arch with tiktoken/BPE vocab
- Mmproj (--mmproj): ConvNeXt vision backbone + language_projection
- Generates 2D sincos positional embeddings for vision encoder

* test: add Reka Edge chat template and parser tests

- test-chat-template: oracle tests comparing Jinja engine output vs
  common_chat_templates_apply for text, tools, thinking, images, video
- test-chat: PEG parser tests for Reka Edge format, round-trip tests
  for image/video content parts, common path integration tests

* scripts: add Reka Edge mixed quantization helper

Q4_0 base quantization with Q8_0 override for the last 8 transformer
blocks (layers 24-31) via --tensor-type regex.
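For reference, a hypothetical llama-quantize invocation in this spirit (file names and the exact override pattern syntax are illustrative):

    ./llama-quantize \
        --tensor-type "blk\.(2[4-9]|3[01])\.=q8_0" \
        reka-edge-f16.gguf reka-edge-q4_0-mix.gguf q4_0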

* fix: adapt chat-reka and tests to upstream API

- Use autoparser::generation_params (not templates_params)
- Add p.prefix(generation_prompt) to PEG parser
- Simplify reasoning parser to match LFM2 pattern
- Remove image/video oracle tests (unsupported by oaicompat parser;
  no other multimodal models test this path)

* fix: avoid duplicate tensor loading in yasa2 vision encoder

TN_YASA_PATCH_W and TN_PATCH_EMBD both resolve to "v.patch_embd.weight",
causing the same tensor to be loaded twice into ctx_data and overflowing
the memory pool. Reuse the tensors already loaded by the common section.

* chore: update image pre-processing settings

The reka-edge model depends on the following settings in an older
fork of llama.cpp:
1. Fixed square resize
2. BICUBIC
3. add_padding=false

In current llama.cpp, this means setting:
- image_resize_algo = RESIZE_ALGO_BICUBIC
- image_resize_pad = false

* chore: remove reka gguf conversion script

* chore: remove reka quantization script

* chore: remove unnecessary changes from PR scope

This commit removes a couple of changes that are unnecessary for the PR scope:
1. BPE decoder bug fix - this affects reka edge because there's a bug
in our tokenization that doesn't represent <think> tokens as special
tokens. However, this isn't meant to be a thinking model, so when run
with --reasoning off the edge case does not affect us

2. --chat-template-file support from llama-mtmd-cli - the focus is on
llama-server and the reka edge gguf contains the necessary metadata
to detect the chat template

3. reka edge oracle test cases - no other model has similar test cases,
so I removed them for standardization

* chore: remove unnecessary ggml_cast

This commit removes unnecessary ggml_cast calls after updating the
reka vlm -> gguf conversion script on Hugging Face.

* chore: remove redundant code

* chore: remove unnecessary ggml_cont calls

This commit removes all ggml_cont calls except the four that
precede ggml_reshape_3d/ggml_reshape_4d. Those are necessary
because ggml_reshape recomputes strides assuming contiguous
layout and asserts ggml_is_contiguous.

Other operations (ggml_mean, ggml_add, ggml_mul, etc.) use
stride-based indexing and handle non-contiguous inputs
correctly, so it is safe to remove ggml_cont for those.
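A minimal sketch of the constraint described above, using the standard ggml API (tensor shapes illustrative):

    // ggml_reshape_* recomputes strides from the new shape and asserts
    // ggml_is_contiguous(a), so a permuted (non-contiguous) view must be
    // materialized with ggml_cont first:
    struct ggml_tensor * t = ggml_permute(ctx, x, 1, 0, 2, 3); // non-contiguous view
    t = ggml_cont(ctx, t);                                     // contiguous copy
    t = ggml_reshape_3d(ctx, t, ne0, ne1, ne2);                // safe to reshape

    // stride-based ops such as ggml_add or ggml_mean index via nb[] and
    // accept the non-contiguous view directly, so no ggml_cont is needed there.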

* chore: remove unnecessary ggml_repeat calls

This commit removes unnecessary ggml_repeat calls because the underlying
ops already broadcast automatically.

Every ggml_repeat in yasa2.cpp was expanding a smaller tensor to match
a larger one's shape before passing both to an elementwise op (ggml_add,
ggml_sub, ggml_mul, or ggml_div). This is unnecessary because all four
of these ops already support broadcasting internally.
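A minimal illustration of the broadcasting behavior relied on here (standard ggml API; shapes illustrative):

    struct ggml_tensor * a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 64, 32); // [64, 32]
    struct ggml_tensor * b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 64,  1); // [64, 1]

    // redundant: ggml_add(ctx, a, ggml_repeat(ctx, b, a));
    struct ggml_tensor * c = ggml_add(ctx, a, b); // src1 is broadcast across rows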

* chore: restore ggml_cont needed for cpu operations

* refactor: locate reka chat template handler in chat.cpp

* chore: remove unnecessary warmup tokens

* chore: add code comments on image_resize_pad

* chore: remove custom reka parsing code

* chore: revert common/chat.cpp

* Uncomment debug logging for PEG input parsing

---------

Co-authored-by: Piotr Wilkin (ilintar) <piotr.wilkin@syndatis.com>
2026-04-21 20:02:49 +02:00
Georgi Gerganov
84652b80cf arg : add --spec-default (#22223) 2026-04-21 19:52:02 +03:00
Zijun Yu
52f1096f21 openvino: driver setup, CI split, thread safety, and NPU optimizations (#21944)
* Thread safety per request only

* Fix ROPE yarn case

* Fix sticky stateful config

* Use i4/i8 directly for symmetric quant

* Use weightless caching

* Add WeightlessCacheAttribute to reduce NPU memory usage

* Gelu tanh support (#125)

* Imrope support (#126)

* fix(openvino): explicit ov::Tensor frees in ggml_backend_openvino_free

* add GPU,NPU support in OV Dockerfile

* add build-openvino.yml ci

* Fix sticky stateful config

* add concurrency to ov-gpu ci runs. Move OV CI to build-openvino.yml

* fix thread-safety of shared runtime context

* rope type abstraction for frontend translations

* fix editorconfig

---------

Co-authored-by: Mustafa Cavus <mustafa.cavus@intel.com>
Co-authored-by: Dan Hoffman <dhoff749@gmail.com>
Co-authored-by: Ravi Panchumarthy <ravi.panchumarthy@intel.com>
2026-04-21 18:58:34 +03:00
53 changed files with 2754 additions and 1195 deletions

View File

@@ -2,7 +2,19 @@ ARG OPENVINO_VERSION_MAJOR=2026.0
ARG OPENVINO_VERSION_FULL=2026.0.0.20965.c6d6a13a886
ARG UBUNTU_VERSION=24.04
# Optional proxy build arguments - empty by default
# Intel GPU driver versions. https://github.com/intel/compute-runtime/releases
ARG IGC_VERSION=v2.30.1
ARG IGC_VERSION_FULL=2_2.30.1+20950
ARG COMPUTE_RUNTIME_VERSION=26.09.37435.1
ARG COMPUTE_RUNTIME_VERSION_FULL=26.09.37435.1-0
ARG IGDGMM_VERSION=22.9.0
# Intel NPU driver versions. https://github.com/intel/linux-npu-driver/releases
ARG NPU_DRIVER_VERSION=v1.32.0
ARG NPU_DRIVER_FULL=v1.32.0.20260402-23905121947
ARG LIBZE1_VERSION=1.27.0-1~24.04~ppa2
# Optional proxy build arguments
ARG http_proxy=
ARG https_proxy=
@@ -78,13 +90,47 @@ ARG http_proxy
ARG https_proxy
RUN apt-get update \
&& apt-get install -y libgomp1 libtbb12 curl \
&& apt-get install -y libgomp1 libtbb12 curl wget ocl-icd-libopencl1 \
&& apt autoremove -y \
&& apt clean -y \
&& rm -rf /tmp/* /var/tmp/* \
&& find /var/cache/apt/archives /var/lib/apt/lists -not -name lock -type f -delete \
&& find /var/cache -type f -delete
# Install GPU drivers
ARG IGC_VERSION
ARG IGC_VERSION_FULL
ARG COMPUTE_RUNTIME_VERSION
ARG COMPUTE_RUNTIME_VERSION_FULL
ARG IGDGMM_VERSION
RUN mkdir /tmp/neo/ && cd /tmp/neo/ \
&& wget https://github.com/intel/intel-graphics-compiler/releases/download/${IGC_VERSION}/intel-igc-core-${IGC_VERSION_FULL}_amd64.deb \
&& wget https://github.com/intel/intel-graphics-compiler/releases/download/${IGC_VERSION}/intel-igc-opencl-${IGC_VERSION_FULL}_amd64.deb \
&& wget https://github.com/intel/compute-runtime/releases/download/${COMPUTE_RUNTIME_VERSION}/intel-ocloc-dbgsym_${COMPUTE_RUNTIME_VERSION_FULL}_amd64.ddeb \
&& wget https://github.com/intel/compute-runtime/releases/download/${COMPUTE_RUNTIME_VERSION}/intel-ocloc_${COMPUTE_RUNTIME_VERSION_FULL}_amd64.deb \
&& wget https://github.com/intel/compute-runtime/releases/download/${COMPUTE_RUNTIME_VERSION}/intel-opencl-icd-dbgsym_${COMPUTE_RUNTIME_VERSION_FULL}_amd64.ddeb \
&& wget https://github.com/intel/compute-runtime/releases/download/${COMPUTE_RUNTIME_VERSION}/intel-opencl-icd_${COMPUTE_RUNTIME_VERSION_FULL}_amd64.deb \
&& wget https://github.com/intel/compute-runtime/releases/download/${COMPUTE_RUNTIME_VERSION}/libigdgmm12_${IGDGMM_VERSION}_amd64.deb \
&& wget https://github.com/intel/compute-runtime/releases/download/${COMPUTE_RUNTIME_VERSION}/libze-intel-gpu1-dbgsym_${COMPUTE_RUNTIME_VERSION_FULL}_amd64.ddeb \
&& wget https://github.com/intel/compute-runtime/releases/download/${COMPUTE_RUNTIME_VERSION}/libze-intel-gpu1_${COMPUTE_RUNTIME_VERSION_FULL}_amd64.deb \
&& dpkg --install *.deb \
&& rm -rf /tmp/neo/
# Install NPU drivers
ARG NPU_DRIVER_VERSION
ARG NPU_DRIVER_FULL
ARG LIBZE1_VERSION
RUN mkdir /tmp/npu/ && cd /tmp/npu/ \
&& wget https://github.com/intel/linux-npu-driver/releases/download/${NPU_DRIVER_VERSION}/linux-npu-driver-${NPU_DRIVER_FULL}-ubuntu2404.tar.gz \
&& tar -xf linux-npu-driver-${NPU_DRIVER_FULL}-ubuntu2404.tar.gz \
&& dpkg --install *.deb \
&& rm -rf /tmp/npu/
RUN cd /tmp \
&& wget https://snapshot.ppa.launchpadcontent.net/kobuk-team/intel-graphics/ubuntu/20260324T100000Z/pool/main/l/level-zero-loader/libze1_${LIBZE1_VERSION}_amd64.deb \
&& dpkg --install libze1_${LIBZE1_VERSION}_amd64.deb \
&& rm libze1_${LIBZE1_VERSION}_amd64.deb
COPY --from=build /app/lib/ /app/
### Full (all binaries)

.github/workflows/build-openvino.yml (new file, 120 lines)
View File

@@ -0,0 +1,120 @@
name: CI (openvino)
on:
workflow_dispatch: # allows manual triggering
push:
branches:
- master
paths: [
'.github/workflows/build-openvino.yml',
'**/CMakeLists.txt',
'**/.cmake',
'**/*.h',
'**/*.hpp',
'**/*.c',
'**/*.cpp',
]
pull_request:
types: [opened, synchronize, reopened]
paths: [
'.github/workflows/build-openvino.yml',
'ggml/src/ggml-openvino/**'
]
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref && github.ref || github.run_id }}
cancel-in-progress: true
env:
GGML_NLOOP: 3
GGML_N_THREADS: 1
LLAMA_LOG_COLORS: 1
LLAMA_LOG_PREFIX: 1
LLAMA_LOG_TIMESTAMPS: 1
jobs:
ubuntu-24-openvino:
name: ubuntu-24-openvino-${{ matrix.openvino_device }}
concurrency:
group: openvino-${{ matrix.variant }}-${{ github.head_ref || github.ref }}
cancel-in-progress: false
strategy:
matrix:
include:
- variant: cpu
runner: '"ubuntu-24.04"'
openvino_device: "CPU"
- variant: gpu
runner: '["self-hosted","Linux","Intel","OpenVINO"]'
openvino_device: "GPU"
runs-on: ${{ fromJSON(matrix.runner) }}
env:
# Sync versions in build-openvino.yml, build-self-hosted.yml, release.yml, build-cache.yml, .devops/openvino.Dockerfile
OPENVINO_VERSION_MAJOR: "2026.0"
OPENVINO_VERSION_FULL: "2026.0.0.20965.c6d6a13a886"
steps:
- name: Clone
id: checkout
uses: actions/checkout@v6
- name: ccache
if: runner.environment == 'github-hosted'
uses: ggml-org/ccache-action@v1.2.21
with:
key: ubuntu-24-openvino-${{ matrix.variant }}-no-preset-v1
evict-old-files: 1d
save: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }}
- name: Dependencies
id: depends
run: |
sudo apt-get update
sudo apt-get install -y build-essential libssl-dev libtbb12 cmake ninja-build python3-pip
sudo apt-get install -y ocl-icd-opencl-dev opencl-headers opencl-clhpp-headers intel-opencl-icd
- name: Use OpenVINO Toolkit Cache
if: runner.environment == 'github-hosted'
uses: actions/cache@v5
id: cache-openvino
with:
path: ./openvino_toolkit
key: openvino-toolkit-v${{ env.OPENVINO_VERSION_FULL }}-${{ runner.os }}
- name: Setup OpenVINO Toolkit
if: steps.cache-openvino.outputs.cache-hit != 'true'
uses: ./.github/actions/linux-setup-openvino
with:
path: ./openvino_toolkit
version_major: ${{ env.OPENVINO_VERSION_MAJOR }}
version_full: ${{ env.OPENVINO_VERSION_FULL }}
- name: Install OpenVINO dependencies
run: |
cd ./openvino_toolkit
chmod +x ./install_dependencies/install_openvino_dependencies.sh
echo "Y" | sudo -E ./install_dependencies/install_openvino_dependencies.sh
- name: Build
id: cmake_build
run: |
source ./openvino_toolkit/setupvars.sh
cmake -B build/ReleaseOV -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DGGML_OPENVINO=ON
time cmake --build build/ReleaseOV --config Release -j $(nproc)
- name: Test
id: cmake_test
# TODO: fix and re-enable the `test-llama-archs` test below
run: |
cd ${{ github.workspace }}
if [ "${{ matrix.openvino_device }}" = "GPU" ]; then
export GGML_OPENVINO_DEVICE=GPU
fi
ctest --test-dir build/ReleaseOV -L main -E "test-llama-archs" --verbose --timeout 2000

View File

@@ -265,6 +265,10 @@ jobs:
ggml-ci-intel-openvino-gpu-low-perf:
runs-on: [self-hosted, Linux, Intel, OpenVINO]
concurrency:
group: openvino-gpu-${{ github.head_ref || github.ref }}
cancel-in-progress: false
env:
# Sync versions in build.yml, build-self-hosted.yml, release.yml, build-cache.yml, .devops/openvino.Dockerfile
OPENVINO_VERSION_MAJOR: "2026.0"

View File

@@ -656,86 +656,6 @@ jobs:
-DGGML_SYCL_F16=ON
time cmake --build build --config Release -j $(nproc)
ubuntu-24-openvino:
name: ubuntu-24-openvino-${{ matrix.openvino_device }}
strategy:
matrix:
include:
- variant: cpu
runner: '"ubuntu-24.04"'
openvino_device: "CPU"
- variant: gpu
runner: '["self-hosted","Linux","X64","Intel"]'
openvino_device: "GPU"
runs-on: ${{ fromJSON(matrix.runner) }}
env:
# Sync versions in build.yml, build-self-hosted.yml, release.yml, build-cache.yml, .devops/openvino.Dockerfile
OPENVINO_VERSION_MAJOR: "2026.0"
OPENVINO_VERSION_FULL: "2026.0.0.20965.c6d6a13a886"
steps:
- name: Clone
id: checkout
uses: actions/checkout@v6
- name: ccache
if: runner.environment == 'github-hosted'
uses: ggml-org/ccache-action@v1.2.21
with:
key: ubuntu-24-openvino-${{ matrix.variant }}-no-preset-v1
evict-old-files: 1d
save: ${{ github.event_name == 'push' && github.ref == 'refs/heads/master' }}
- name: Dependencies
id: depends
run: |
sudo apt-get update
sudo apt-get install -y build-essential libssl-dev libtbb12 cmake ninja-build python3-pip
sudo apt-get install -y ocl-icd-opencl-dev opencl-headers opencl-clhpp-headers intel-opencl-icd
- name: Use OpenVINO Toolkit Cache
if: runner.environment == 'github-hosted'
uses: actions/cache@v5
id: cache-openvino
with:
path: ./openvino_toolkit
key: openvino-toolkit-v${{ env.OPENVINO_VERSION_FULL }}-${{ runner.os }}
- name: Setup OpenVINO Toolkit
if: steps.cache-openvino.outputs.cache-hit != 'true'
uses: ./.github/actions/linux-setup-openvino
with:
path: ./openvino_toolkit
version_major: ${{ env.OPENVINO_VERSION_MAJOR }}
version_full: ${{ env.OPENVINO_VERSION_FULL }}
- name: Install OpenVINO dependencies
run: |
cd ./openvino_toolkit
chmod +x ./install_dependencies/install_openvino_dependencies.sh
echo "Y" | sudo -E ./install_dependencies/install_openvino_dependencies.sh
- name: Build
id: cmake_build
run: |
source ./openvino_toolkit/setupvars.sh
cmake -B build/ReleaseOV -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DGGML_OPENVINO=ON
time cmake --build build/ReleaseOV --config Release -j $(nproc)
- name: Test
id: cmake_test
# TODO: fix and re-enable the `test-llama-archs` test below
run: |
cd ${{ github.workspace }}
if [ "${{ matrix.openvino_device }}" = "GPU" ]; then
export GGML_OPENVINO_DEVICE=GPU
fi
ctest --test-dir build/ReleaseOV -L main -E "test-llama-archs" --verbose --timeout 2000
windows-latest:
runs-on: windows-2025

View File

@@ -3902,6 +3902,17 @@ common_params_context common_params_parser_init(common_params & params, llama_ex
}
).set_examples({LLAMA_EXAMPLE_SERVER, LLAMA_EXAMPLE_CLI}));
add_opt(common_arg(
{"--spec-default"},
string_format("enable default speculative decoding config"),
[](common_params & params) {
params.speculative.type = COMMON_SPECULATIVE_TYPE_NGRAM_MOD;
params.speculative.ngram_size_n = 24;
params.speculative.n_min = 48;
params.speculative.n_max = 64;
}
).set_examples({LLAMA_EXAMPLE_SERVER, LLAMA_EXAMPLE_CLI}));
return ctx_arg;
}
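For reference, hypothetical usage of the new flag (model path illustrative); it enables the preset ngram-mod speculative config shown above (ngram_size_n = 24, n_min = 48, n_max = 64):

    llama-server -m model.gguf --spec-default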

View File

@@ -397,6 +397,25 @@ json common_chat_msgs_to_json_oaicompat(const std::vector<common_chat_msg> & msg
return render_message_to_json(msgs, c);
}
json common_chat_tools_to_json_oaicompat(const std::vector<common_chat_tool> & tools) {
if (tools.empty()) {
return json();
}
auto result = json::array();
for (const auto & tool : tools) {
result.push_back({
{ "type", "function" },
{ "function", {
{ "name", tool.name },
{ "description", tool.description },
{ "parameters", json::parse(tool.parameters) },
}},
});
}
return result;
}
std::vector<common_chat_tool> common_chat_tools_parse_oaicompat(const json & tools) {
std::vector<common_chat_tool> result;
@@ -432,56 +451,6 @@ std::vector<common_chat_tool> common_chat_tools_parse_oaicompat(const json & too
return result;
}
json common_chat_tools_to_json_oaicompat(const std::vector<common_chat_tool> & tools) {
if (tools.empty()) {
return json();
}
auto result = json::array();
for (const auto & tool : tools) {
result.push_back({
{ "type", "function" },
{ "function",
{
{ "name", tool.name },
{ "description", tool.description },
{ "parameters", json::parse(tool.parameters) },
} },
});
}
return result;
}
json common_chat_msg_diff_to_json_oaicompat(const common_chat_msg_diff & diff) {
json delta = json::object();
if (!diff.reasoning_content_delta.empty()) {
delta["reasoning_content"] = diff.reasoning_content_delta;
}
if (!diff.content_delta.empty()) {
delta["content"] = diff.content_delta;
}
if (diff.tool_call_index != std::string::npos) {
json tool_call;
tool_call["index"] = diff.tool_call_index;
if (!diff.tool_call_delta.id.empty()) {
tool_call["id"] = diff.tool_call_delta.id;
tool_call["type"] = "function";
}
if (!diff.tool_call_delta.name.empty() || !diff.tool_call_delta.arguments.empty()) {
json function = json::object();
if (!diff.tool_call_delta.name.empty()) {
function["name"] = diff.tool_call_delta.name;
}
if (!diff.tool_call_delta.arguments.empty()) {
function["arguments"] = diff.tool_call_delta.arguments;
}
tool_call["function"] = function;
}
delta["tool_calls"] = json::array({ tool_call });
}
return delta;
}
bool common_chat_verify_template(const std::string & tmpl, bool use_jinja) {
if (use_jinja) {
try {

View File

@@ -256,14 +256,13 @@ bool common_chat_templates_support_enable_thinking(const common_chat_templates *
// Parses a JSON array of messages in OpenAI's chat completion API format.
std::vector<common_chat_msg> common_chat_msgs_parse_oaicompat(const nlohmann::ordered_json & messages);
std::vector<common_chat_tool> common_chat_tools_parse_oaicompat(const nlohmann::ordered_json & tools);
// DEPRECATED: only used in tests
nlohmann::ordered_json common_chat_msgs_to_json_oaicompat(const std::vector<common_chat_msg> & msgs, bool concat_typed_text = false);
std::vector<common_chat_tool> common_chat_tools_parse_oaicompat(const nlohmann::ordered_json & tools);
nlohmann::ordered_json common_chat_tools_to_json_oaicompat(const std::vector<common_chat_tool> & tools);
nlohmann::ordered_json common_chat_msg_diff_to_json_oaicompat(const common_chat_msg_diff & diff);
// get template caps, useful for reporting to server /props endpoint
std::map<std::string, bool> common_chat_templates_get_caps(const common_chat_templates * chat_templates);

View File

@@ -749,6 +749,7 @@ struct common_speculative_state_ngram_mod : public common_speculative_state {
mod.reset();
n_low = 0;
i_last = 0;
}
} else {
n_low = 0;

View File

@@ -244,7 +244,6 @@ build\ReleaseOV\bin\llama-cli.exe -m "C:\models\Llama-3.2-1B-Instruct-Q4_0.gguf"
- `-fa 1` is required when running llama-bench with the OpenVINO backend.
- `GGML_OPENVINO_STATEFUL_EXECUTION=1 GGML_OPENVINO_DEVICE=GPU ./llama-bench -fa 1`
- `llama-server` with OpenVINO backend supports only one chat session/thread, when `GGML_OPENVINO_STATEFUL_EXECUTION=1` is enabled.
- For Intel GPU, NPU detection in containers, GPU, NPU user-space drivers/libraries must be present inside the image. We will include in a future PR. Until then, you can use this reference Dockerfile: [openvino.Dockerfile](https://github.com/ravi9/llama.cpp/blob/ov-docker-update/.devops/openvino.Dockerfile)
> [!NOTE]
> The OpenVINO backend is actively under development. Fixes are underway, and this document will continue to be updated as issues are resolved.
@@ -274,8 +273,6 @@ docker build --build-arg http_proxy=$http_proxy --build-arg https_proxy=$https_p
Run llama.cpp with OpenVINO backend Docker container.
Save sample models in `~/models` as [shown above](#3-download-sample-model). It will be mounted to the container in the examples below.
> [!NOTE]
> Intel GPU, NPU detection in containers will be included in a future PR. Until then, you can use this reference Dockerfile: [openvino.Dockerfile](https://github.com/ravi9/llama.cpp/blob/ov-docker-update/.devops/openvino.Dockerfile).
```bash
# Run Docker container

View File

@@ -2596,6 +2596,29 @@ static bool ggml_hexagon_supported_cumsum(const struct ggml_hexagon_session * se
return true;
}
static bool ggml_hexagon_supported_diag(const struct ggml_hexagon_session * sess, const struct ggml_tensor * op) {
const struct ggml_tensor * src0 = op->src[0];
const struct ggml_tensor * dst = op;
// diag only supports F32 currently
if (src0->type != GGML_TYPE_F32 || dst->type != GGML_TYPE_F32) {
return false;
}
// Input must have ne[1] == 1 (vector input)
if (src0->ne[1] != 1) {
return false;
}
// Output must be square in first two dimensions
if (dst->ne[0] != dst->ne[1] || dst->ne[0] != src0->ne[0]) {
return false;
}
GGML_UNUSED(sess);
return true;
}
static const char * ggml_backend_hexagon_name(ggml_backend_t backend) {
auto sess = static_cast<ggml_hexagon_session *>(backend->context);
return sess->c_name();
@@ -2632,6 +2655,8 @@ static htp_op_code op_remap_to_htp(const ggml_tensor * t) {
case GGML_OP_ROPE: return HTP_OP_ROPE;
case GGML_OP_REPEAT: return HTP_OP_REPEAT;
case GGML_OP_CUMSUM: return HTP_OP_CUMSUM;
case GGML_OP_FILL: return HTP_OP_FILL;
case GGML_OP_DIAG: return HTP_OP_DIAG;
case GGML_OP_UNARY:
switch (ggml_get_unary_op(t)) {
@@ -3029,6 +3054,17 @@ static bool ggml_hexagon_supported_repeat(const struct ggml_hexagon_session * se
return true;
}
static bool ggml_hexagon_supported_fill(const struct ggml_hexagon_session * sess, const struct ggml_tensor * op) {
const struct ggml_tensor * dst = op;
if (dst->type != GGML_TYPE_F32 && dst->type != GGML_TYPE_F16) {
return false;
}
GGML_UNUSED(sess);
return true;
}
static bool ggml_backend_hexagon_device_supports_op(ggml_backend_dev_t dev, const struct ggml_tensor * op) {
auto sess = static_cast<ggml_hexagon_session *>(dev->context);
@@ -3159,6 +3195,14 @@ static bool ggml_backend_hexagon_device_supports_op(ggml_backend_dev_t dev, cons
supp = ggml_hexagon_supported_cumsum(sess, op);
break;
case GGML_OP_FILL:
supp = ggml_hexagon_supported_fill(sess, op);
break;
case GGML_OP_DIAG:
supp = ggml_hexagon_supported_diag(sess, op);
break;
default:
break;
}
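For context, a minimal sketch of the shape contract the new DIAG check enforces, using the standard ggml API (sizes illustrative):

    // GGML_OP_DIAG expands a vector into a square matrix with the vector
    // on the diagonal and zeros elsewhere; the check above mirrors this:
    // F32 only, src0->ne[1] == 1, and dst->ne[0] == dst->ne[1] == src0->ne[0].
    struct ggml_tensor * v = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 8); // [8]
    struct ggml_tensor * d = ggml_diag(ctx, v);                         // [8, 8]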

View File

@@ -34,6 +34,8 @@ add_library(${HTP_LIB} SHARED
argsort-ops.c
ssm-conv.c
cumsum-ops.c
fill-ops.c
diag-ops.c
)
target_compile_definitions(${HTP_LIB} PRIVATE

View File

@@ -0,0 +1,216 @@
#pragma clang diagnostic ignored "-Wunused-but-set-variable"
#include <HAP_farf.h>
#include <HAP_perf.h>
#define GGML_COMMON_DECL_C
#include "ggml-common.h"
#include "htp-ctx.h"
#include "htp-ops.h"
#include "hvx-types.h"
#include "hex-utils.h"
#include "hvx-copy.h"
#include "hex-dma.h"
#define htp_diag_tensors_preamble \
const struct htp_tensor * restrict src0 = octx->src[0]; \
const struct htp_tensor * restrict dst = octx->dst; \
\
const uint32_t ne02 = src0->ne[2]; \
\
const uint32_t ne0 = dst->ne[0]; \
const uint32_t ne1 = dst->ne[1]; \
\
const uint32_t nb02 = src0->nb[2]; \
const uint32_t nb03 = src0->nb[3]; \
\
const uint32_t nb1 = dst->nb[1]; \
const uint32_t nb2 = dst->nb[2]; \
const uint32_t nb3 = dst->nb[3];
struct htp_diag_context {
struct htp_ops_context * octx;
size_t src_batch_size;
size_t dst_row_size;
size_t src_batch_size_aligned;
size_t dst_row_size_aligned;
uint32_t batches_per_thread;
uint32_t total_batches;
};
#define htp_diag_preamble \
struct htp_diag_context * dctx = (struct htp_diag_context *) data; \
struct htp_ops_context * octx = dctx->octx; \
htp_diag_tensors_preamble;
static inline void hvx_diag_row_f32(const float * restrict src, float * restrict dst,
uint32_t row_idx, uint32_t n) {
hvx_splat_f32_a((uint8_t *) dst, 0.0f, n);
dst[row_idx] = src[row_idx];
}
// ---------------------------------------------------------------------------
// Per thread worker: DMA src fetch, compute in VTCM, DMA dst writeback
// ---------------------------------------------------------------------------
static void diag_thread_f32_dma(unsigned int nth, unsigned int ith, void * data) {
htp_diag_preamble;
dma_queue * dma_queue = octx->ctx->dma[ith];
uint64_t t1, t2;
t1 = HAP_perf_get_qtimer_count();
const uint32_t ib0 = dctx->batches_per_thread * ith;
const uint32_t ib1 = MIN(ib0 + dctx->batches_per_thread, dctx->total_batches);
if (ib0 >= ib1) {
return;
}
const size_t src_batch_size = dctx->src_batch_size;
const size_t dst_row_size = dctx->dst_row_size;
const size_t src_batch_size_aligned = dctx->src_batch_size_aligned;
const size_t dst_row_size_aligned = dctx->dst_row_size_aligned;
const uint8_t * src_data = (const uint8_t *) src0->data;
uint8_t * dst_data = (uint8_t *) dst->data;
// 1 src buffer + 1 dst row buffer per thread in VTCM
uint8_t * src_spad = octx->src0_spad.data + (ith * src_batch_size_aligned);
uint8_t * dst_spad = octx->dst_spad.data + (ith * dst_row_size_aligned);
for (uint32_t ib = ib0; ib < ib1; ib++) {
const uint32_t i3 = ib / ne02;
const uint32_t i2 = ib % ne02;
const uint8_t * src_batch = src_data + i3 * nb03 + i2 * nb02;
// Fetch source vector into VTCM
dma_queue_push_ddr_to_vtcm(dma_queue,
dma_make_ptr(src_spad, src_batch),
src_batch_size_aligned, src_batch_size, 1);
dma_queue_flush(dma_queue);
const float * src_spad_f32 = (const float *) src_spad;
float * dst_spad_f32 = (float *) dst_spad;
for (uint32_t i1 = 0; i1 < ne1; i1++) {
// Compute row in VTCM
hvx_diag_row_f32(src_spad_f32, dst_spad_f32, i1, ne0);
// Write completed row back to DDR
uint8_t * dst_row = dst_data + i3 * nb3 + i2 * nb2 + i1 * nb1;
dma_queue_push_vtcm_to_ddr(dma_queue,
dma_make_ptr(dst_row, dst_spad),
dst_row_size, dst_row_size_aligned, 1);
dma_queue_flush(dma_queue);
}
}
t2 = HAP_perf_get_qtimer_count();
FARF(HIGH, "diag-f32-dma %d/%d: %ux%ux%ux%u (%u:%u) -> %ux%ux%ux%u usec %u\n",
ith, nth, src0->ne[0], src0->ne[1], src0->ne[2], src0->ne[3], ib0, ib1,
dst->ne[0], dst->ne[1], dst->ne[2], dst->ne[3],
(unsigned) HAP_perf_qtimer_count_to_us(t2 - t1));
}
// ---------------------------------------------------------------------------
// Per thread worker: Direct HVX (no DMA)
// ---------------------------------------------------------------------------
static void diag_thread_f32(unsigned int nth, unsigned int ith, void * data) {
htp_diag_preamble;
uint64_t t1, t2;
t1 = HAP_perf_get_qtimer_count();
const uint8_t * src_data = (const uint8_t *) src0->data;
uint8_t * dst_data = (uint8_t *) dst->data;
const uint32_t ib0 = dctx->batches_per_thread * ith;
const uint32_t ib1 = MIN(ib0 + dctx->batches_per_thread, dctx->total_batches);
for (uint32_t ib = ib0; ib < ib1; ib++) {
const uint32_t i3 = ib / ne02;
const uint32_t i2 = ib % ne02;
const float * restrict src_batch = (const float *)(src_data + i3 * nb03 + i2 * nb02);
for (uint32_t i1 = 0; i1 < ne1; i1++) {
float * restrict dst_row = (float *)(dst_data + i3 * nb3 + i2 * nb2 + i1 * nb1);
hvx_diag_row_f32(src_batch, dst_row, i1, ne0);
}
}
t2 = HAP_perf_get_qtimer_count();
FARF(HIGH, "diag-f32 %d/%d: %ux%ux%ux%u (%u:%u) -> %ux%ux%ux%u usec %u\n",
ith, nth, src0->ne[0], src0->ne[1], src0->ne[2], src0->ne[3], ib0, ib1,
dst->ne[0], dst->ne[1], dst->ne[2], dst->ne[3],
(unsigned) HAP_perf_qtimer_count_to_us(t2 - t1));
}
int op_diag_f32(struct htp_ops_context * octx) {
const struct htp_tensor * src0 = octx->src[0];
const struct htp_tensor * dst = octx->dst;
if (octx->flags & HTP_OPFLAGS_SKIP_COMPUTE) {
return HTP_STATUS_OK;
}
const uint32_t total_batches = src0->ne[2] * src0->ne[3];
const uint32_t n_threads = MIN(octx->n_threads, total_batches);
const size_t src_batch_size = src0->ne[0] * sizeof(float);
const size_t dst_row_size = dst->ne[0] * sizeof(float);
const size_t src_batch_size_aligned = hex_round_up(src_batch_size, VLEN);
const size_t dst_row_size_aligned = hex_round_up(dst_row_size, VLEN);
// 1 src buffer + 1 dst row buffer per thread
const size_t spad_per_thread = src_batch_size_aligned + dst_row_size_aligned;
octx->src0_spad.size_per_thread = src_batch_size_aligned;
octx->dst_spad.size_per_thread = dst_row_size_aligned;
octx->src0_spad.size = n_threads * octx->src0_spad.size_per_thread;
octx->dst_spad.size = n_threads * octx->dst_spad.size_per_thread;
octx->src0_spad.data = octx->ctx->vtcm_base; octx->src0_spad.src = NULL;
octx->dst_spad.data = octx->src0_spad.data + octx->src0_spad.size; octx->dst_spad.src = NULL;
struct htp_diag_context dctx = {
.octx = octx,
.src_batch_size = src_batch_size,
.dst_row_size = dst_row_size,
.src_batch_size_aligned = src_batch_size_aligned,
.dst_row_size_aligned = dst_row_size_aligned,
.batches_per_thread = (total_batches + n_threads - 1) / n_threads,
.total_batches = total_batches,
};
if (octx->ctx->vtcm_size < spad_per_thread * n_threads) {
worker_pool_run_func(octx->ctx->worker_pool, diag_thread_f32, &dctx, n_threads);
} else {
worker_pool_run_func(octx->ctx->worker_pool, diag_thread_f32_dma, &dctx, n_threads);
}
return HTP_STATUS_OK;
}
int op_diag(struct htp_ops_context * octx) {
const struct htp_tensor * dst = octx->dst;
int err = HTP_STATUS_OK;
switch (dst->type) {
case HTP_TYPE_F32:
err = op_diag_f32(octx);
break;
default:
err = HTP_STATUS_NO_SUPPORT;
break;
}
return err;
}

View File

@@ -0,0 +1,123 @@
#pragma clang diagnostic ignored "-Wunused-variable"
#pragma clang diagnostic ignored "-Wunused-function"
#pragma clang diagnostic ignored "-Wunused-but-set-variable"
#include <HAP_farf.h>
#include <HAP_perf.h>
#include <string.h>
#include "hvx-copy.h"
#include "hvx-utils.h"
#define GGML_COMMON_DECL_C
#include "ggml-common.h"
#include "htp-ctx.h"
#include "htp-ops.h"
// ggml op_params layout for FILL:
// op_params[0] (as float) - the scalar fill value
#define fill_preamble \
const struct htp_tensor * dst = octx->dst; \
\
const uint32_t ne0 = dst->ne[0]; \
const uint32_t ne1 = dst->ne[1]; \
const uint32_t ne2 = dst->ne[2]; \
const uint32_t ne3 = dst->ne[3]; \
\
const uint32_t nb1 = dst->nb[1]; \
const uint32_t nb2 = dst->nb[2]; \
const uint32_t nb3 = dst->nb[3]; \
\
const uint32_t nr = ne1 * ne2 * ne3;
struct htp_fill_context {
struct htp_ops_context * octx;
uint32_t nrows_per_thread;
uint32_t total_rows; // ne1 * ne2 * ne3
bool opt_path;
HVX_Vector splat_vec;
uint32_t elem_size;
};
static void fill_thread(unsigned int nth, unsigned int ith, void * data) {
const struct htp_fill_context * fctx = (const struct htp_fill_context *) data;
struct htp_ops_context * octx = fctx->octx;
fill_preamble;
// Parallelise over the flat row index spanning ne1*ne2*ne3
const uint32_t ir0 = fctx->nrows_per_thread * ith;
const uint32_t ir1 = MIN(ir0 + fctx->nrows_per_thread, fctx->total_rows);
uint64_t t1 = HAP_perf_get_qtimer_count();
if (fctx->opt_path) {
// Opt path: tensor is fully contiguous, treat as flat array
const uint32_t elem_start = ir0 * ne0;
const uint32_t elem_end = ir1 * ne0;
uint8_t * dst_ptr = (uint8_t *) dst->data + elem_start * fctx->elem_size;
hvx_splat_u(dst_ptr, fctx->splat_vec, elem_end - elem_start, fctx->elem_size);
} else {
// Non-contiguous path: must respect strides
for (uint32_t ir = ir0; ir < ir1; ++ir) {
const uint32_t i1 = ir % ne1;
const uint32_t i2 = (ir / ne1) % ne2;
const uint32_t i3 = ir / (ne1 * ne2);
uint8_t * dst_ptr = (uint8_t *) dst->data + i1*nb1 + i2*nb2 + i3*nb3;
hvx_splat_u(dst_ptr, fctx->splat_vec, ne0, fctx->elem_size);
}
}
uint64_t t2 = HAP_perf_get_qtimer_count();
FARF(HIGH, "fill %u/%u: rows %u:%u usec %u\n",
ith, nth, ir0, ir1, (unsigned) HAP_perf_qtimer_count_to_us(t2 - t1));
}
int op_fill(struct htp_ops_context * octx) {
fill_preamble;
if (dst->type != HTP_TYPE_F32 && dst->type != HTP_TYPE_F16) {
return HTP_STATUS_NO_SUPPORT;
}
if (octx->flags & HTP_OPFLAGS_SKIP_COMPUTE) {
return HTP_STATUS_OK;
}
// nr = ne1*ne2*ne3 (flat row count across all outer dims); parallelise over it.
const uint32_t n_threads = MIN(nr, octx->n_threads);
// Optimize if fully contiguous: skip stride arithmetic, treat as flat array
const bool opt_path = (nb2 == nb1 * ne1) && (nb3 == nb2 * ne2);
FARF(HIGH, "fill: (%ux%ux%ux%u) type=%u opt=%d\n",
dst->ne[0], dst->ne[1], dst->ne[2], dst->ne[3], dst->type, (int) opt_path);
float val_f32 = 0.f;
memcpy(&val_f32, &octx->op_params[0], sizeof(float));
struct htp_fill_context fctx = {
.octx = octx,
.nrows_per_thread = (nr + n_threads - 1) / n_threads,
.total_rows = nr,
.opt_path = opt_path,
};
switch (dst->type) {
case HTP_TYPE_F32:
fctx.splat_vec = hvx_vec_splat_f32(val_f32);
fctx.elem_size = sizeof(float);
break;
case HTP_TYPE_F16:
fctx.splat_vec = hvx_vec_splat_f16((_Float16) val_f32);
fctx.elem_size = sizeof(_Float16);
break;
default:
return HTP_STATUS_NO_SUPPORT;
}
worker_pool_run_func(octx->ctx->worker_pool, fill_thread, &fctx, n_threads);
return HTP_STATUS_OK;
}
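A small standalone check mirroring op_fill's opt-path contiguity test above (plain C; values illustrative):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        const uint32_t ne0 = 4, ne1 = 3, ne2 = 2;  // extents (illustrative)
        const uint32_t nb1 = ne0 * sizeof(float);  // 16: row stride in bytes
        const uint32_t nb2 = nb1 * ne1;            // 48: plane stride
        const uint32_t nb3 = nb2 * ne2;            // 96: batch stride
        // same condition as the opt path: outer strides chain exactly,
        // so the tensor can be treated as one flat array
        const bool opt_path = (nb2 == nb1 * ne1) && (nb3 == nb2 * ne2);
        printf("contiguous: %d\n", opt_path);      // prints 1
        return 0;
    }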

View File

@@ -98,5 +98,7 @@ int op_repeat(struct htp_ops_context * octx);
int op_argsort(struct htp_ops_context * octx);
int op_ssm_conv(struct htp_ops_context * octx);
int op_cumsum(struct htp_ops_context * octx);
int op_fill(struct htp_ops_context * octx);
int op_diag(struct htp_ops_context * octx);
#endif /* HTP_CTX_H */

View File

@@ -80,6 +80,8 @@ enum htp_op_code {
HTP_OP_SSM_CONV,
HTP_OP_REPEAT,
HTP_OP_CUMSUM,
HTP_OP_FILL,
HTP_OP_DIAG,
HTP_OP_INVALID
};

View File

@@ -514,6 +514,12 @@ static int execute_op(struct htp_ops_context * octx) {
case HTP_OP_CUMSUM:
return op_cumsum(octx);
case HTP_OP_FILL:
return op_fill(octx);
case HTP_OP_DIAG:
return op_diag(octx);
case HTP_OP_INVALID:
break;

View File

@@ -18,6 +18,7 @@ libggml-htp-v68.so = 1
libggml-htp-v69.so = 1
libggml-htp-v73.so = 1
libggml-htp-v75.so = 1
libggml-htp-v79.so = 1
libggml-htp-v81.so = 1
[ControlFlags]
@@ -31,6 +32,7 @@ libggml-htp-v68.so,,,0x10 ;COPYFLG_NO_OVERWRITE
libggml-htp-v69.so,,,0x10 ;COPYFLG_NO_OVERWRITE
libggml-htp-v73.so,,,0x10 ;COPYFLG_NO_OVERWRITE
libggml-htp-v75.so,,,0x10 ;COPYFLG_NO_OVERWRITE
libggml-htp-v79.so,,,0x10 ;COPYFLG_NO_OVERWRITE
libggml-htp-v81.so,,,0x10 ;COPYFLG_NO_OVERWRITE
[Strings]

View File

@@ -19,7 +19,6 @@
#include <iomanip>
#include <map>
#include <memory>
#include <mutex>
#include <openvino/core/dimension.hpp>
#include <openvino/core/except.hpp>
#include <openvino/core/node.hpp>
@@ -207,8 +206,22 @@ int GgmlOvDecoder::compute_op_case(const ggml_tensor * node) const {
break;
}
case GGML_OP_ROPE: {
const int mode = node->op_params[2];
switch (mode) {
case GGML_ROPE_TYPE_NEOX: {
op_case = 0x00010000;
break;
}
case GGML_ROPE_TYPE_IMROPE: {
op_case = 0x00020000;
break;
}
default:
op_case = 0x00000000;
break;
}
if (node->src[0]->op == GGML_OP_VIEW) {
op_case = 2;
op_case = (op_case | 0x00000002);
}
break;
}
@@ -573,9 +586,6 @@ std::map<std::string, std::string> GgmlOvDecoder::get_kv_param_res_names() const
}
std::map<std::string, std::shared_ptr<ov::Node>> GgmlOvDecoder::create_weight_nodes(ggml_cgraph * cgraph, bool naive) {
static std::mutex weights_mutex;
std::lock_guard<std::mutex> lock(weights_mutex);
std::map<std::string, std::shared_ptr<ov::Node>> model_weights;
auto * nodes = cgraph->nodes;
auto n_nodes = cgraph->n_nodes;

View File

@@ -6,6 +6,7 @@
#include <cstring>
#include <openvino/runtime/intel_gpu/ocl/ocl.hpp>
#include <openvino/runtime/intel_npu/level_zero/level_zero.hpp>
#include <openvino/runtime/properties.hpp>
#include <optional>
ov::Core & ov_singleton_core() {
@@ -42,11 +43,13 @@ void ggml_openvino_device_config::init() {
{"NPUW_DQ", "YES" },
{"NPUW_DQ_FULL", "NO" },
};
if (cache_dir) {
if (cache_dir && strlen(cache_dir) > 0) {
compile_config["NPUW_CACHE_DIR"] = cache_dir;
compile_config.insert(ov::cache_mode(ov::CacheMode::OPTIMIZE_SIZE));
}
} else if (cache_dir) {
ov_singleton_core().set_property(ov::cache_dir(cache_dir));
} else if (cache_dir && strlen(cache_dir) > 0) {
compile_config.insert(ov::cache_dir(cache_dir));
compile_config.insert(ov::cache_mode(ov::CacheMode::OPTIMIZE_SIZE));
}
// Initialize remote context with queue sharing for GPU
@@ -259,10 +262,12 @@ ggml_openvino_extracted_layout ggml_openvino_get_extracted_layout(const ggml_ten
layout.weights_size = layout.is_u4 ? (n_elements / 2) : n_elements;
int64_t n_blocks = n_elements / layout.weights_per_block;
layout.scales_size = n_blocks * sizeof(uint16_t);
// For symmetric quantization, we only need one zp value (not one per block)
// Zero points are stored in U4 or U8 format matching the weight type
size_t n_zp_elements = layout.is_symmetric ? 1 : n_blocks;
layout.zp_size = layout.is_u4 ? ((n_zp_elements + 1) / 2) : n_zp_elements;
// For symmetric quantization, no zp needed (weights stored as signed)
if (layout.is_symmetric) {
layout.zp_size = 0;
} else {
layout.zp_size = layout.is_u4 ? ((n_blocks + 1) / 2) : n_blocks;
}
layout.weights_offset = 0;
layout.scales_offset = ((layout.weights_size + alignment - 1) / alignment) * alignment;
@@ -313,10 +318,12 @@ ggml_openvino_extracted_layout ggml_openvino_get_extracted_layout(const ggml_ten
// Scales: F16 per block
int64_t n_blocks = n_elements / layout.weights_per_block;
layout.scales_size = n_blocks * sizeof(uint16_t); // F16 = 2 bytes
// Zero points: U4 or U8 matching weight type
// For symmetric quantization, we only need one zp value (not one per block)
size_t n_zp_elements = layout.is_symmetric ? 1 : n_blocks;
layout.zp_size = layout.is_u4 ? ((n_zp_elements + 1) / 2) : n_zp_elements;
// For symmetric quantization, no zp needed (weights stored as signed)
if (layout.is_symmetric) {
layout.zp_size = 0;
} else {
layout.zp_size = layout.is_u4 ? ((n_blocks + 1) / 2) : n_blocks;
}
// Layout in buffer: [weights | scales | zp] with alignment
layout.weights_offset = 0;

View File

@@ -145,13 +145,18 @@ static void * ggml_backend_openvino_buffer_get_base(ggml_backend_buffer_t buffer
return ctx->data;
}
static bool is_stateful_enabled() {
static const auto * stateful = getenv("GGML_OPENVINO_STATEFUL_EXECUTION");
return stateful && *stateful != '\0' && strcmp(stateful, "0") != 0;
}
static enum ggml_status ggml_backend_openvino_buffer_init_tensor(ggml_backend_buffer_t buffer, ggml_tensor * tensor) {
// GGML_LOG_DEBUG("%s: buffer usage=%d, tensor name=%s\n", __func__, buffer->usage, tensor->name);
ggml_backend_openvino_buffer_context * ctx = (ggml_backend_openvino_buffer_context *) buffer->context;
// Put kvcache on device memory for GPU (NPU memory is too small even for kvcache)
if (strncmp(tensor->name, "cache_", 6) == 0 && !ctx->is_remote && ggml_openvino_get_device_name() == "GPU" &&
!getenv("GGML_OPENVINO_STATEFUL_EXECUTION")) {
!is_stateful_enabled()) {
GGML_ASSERT(ctx->tensor_extras.empty());
auto device = ctx->device;
auto size = ctx->size;
@@ -600,6 +605,14 @@ bool ggml_backend_buft_is_openvino_host(ggml_backend_buffer_type_t buft) {
static void ggml_backend_openvino_free(ggml_backend_t backend) {
ggml_backend_openvino_context * ctx = (ggml_backend_openvino_context *) backend->context;
if (ctx->runtime_context) {
auto r_ctx = std::static_pointer_cast<ov_runtime_context>(ctx->runtime_context);
if (--r_ctx->backend_count == 0) {
r_ctx->clear_caches();
}
}
delete ctx;
delete backend;
}
@@ -644,7 +657,12 @@ static ggml_guid_t ggml_backend_openvino_guid(void) {
}
static std::shared_ptr<ov_runtime_context> get_ov_runtime_context_ptr() {
static std::shared_ptr<ov_runtime_context> r_ctx = std::make_shared<ov_runtime_context>();
static std::shared_ptr<ov_runtime_context> r_ctx = [] {
auto ctx = std::make_shared<ov_runtime_context>();
ctx->device = ggml_openvino_get_device_name();
ctx->stateful = is_stateful_enabled() && !ggml_openvino_is_npu();
return ctx;
}();
return r_ctx;
}
@@ -669,8 +687,7 @@ GGML_BACKEND_API ggml_backend_t ggml_backend_openvino_init(int device) {
}
std::shared_ptr<ov_runtime_context> r_ctx = std::static_pointer_cast<ov_runtime_context>(ctx->runtime_context);
r_ctx->device = ggml_openvino_get_device_name();
r_ctx->stateful = getenv("GGML_OPENVINO_STATEFUL_EXECUTION") && !ggml_openvino_is_npu();
r_ctx->backend_count++;
ggml_backend_t openvino_backend = new ggml_backend{
/* .guid = */ ggml_backend_openvino_guid(),
@@ -883,7 +900,7 @@ static bool is_op_unsupported_case(const ggml_tensor * op) {
const int32_t * op_params = op->op_params;
const int n_dims = op_params[1];
const int mode = op_params[2];
if (mode != GGML_ROPE_TYPE_NORMAL && mode != GGML_ROPE_TYPE_NEOX) {
if (mode != GGML_ROPE_TYPE_NORMAL && mode != GGML_ROPE_TYPE_NEOX && mode != GGML_ROPE_TYPE_IMROPE) {
// GGML_LOG_WARN("OpenVINO backend does not support ROPE with mode %d\n", mode);
return true;
}
@@ -896,14 +913,6 @@ static bool is_op_unsupported_case(const ggml_tensor * op) {
// GGML_LOG_WARN("OpenVINO backend does not support ROPE with type %s\n", ggml_type_name(op->type));
return true;
}
float freq_scale;
float ext_factor;
memcpy(&freq_scale, op_params + 6, sizeof(float));
memcpy(&ext_factor, op_params + 7, sizeof(float));
if (ext_factor != 0.0f) {
// GGML_LOG_WARN("OpenVINO backend does not support ROPE with ext_factor %f != 0.0f\n", ext_factor);
return true;
}
if (op->src[0]->op == GGML_OP_VIEW) {
if (op->src[0]->view_src->ne[1] != op->src[0]->ne[2]) {
// GGML_LOG_WARN(
@@ -913,6 +922,12 @@ static bool is_op_unsupported_case(const ggml_tensor * op) {
return true;
}
}
if (mode == GGML_ROPE_TYPE_IMROPE &&
(op->src[2] != 0 || ((const float *) op_params)[6] != 1 || ((const float *) op_params)[7] != 0 ||
((const float *) op_params)[8] != 1)) {
// GGML_LOG_WARN("OpenVINO backend does not support IMROPE with freq_factors, freq_scale, ext_factor, and attn_factor\n");
return true;
}
break;
}
default:
@@ -942,6 +957,7 @@ static bool ggml_backend_openvino_device_supports_op(ggml_backend_dev_t dev, con
// GGML_OP_SOFT_MAX,
GGML_OP_SET_ROWS, GGML_OP_FLASH_ATTN_EXT, GGML_OP_CPY};
static const std::set<ggml_unary_op> supported_unary_ops{
GGML_UNARY_OP_GELU,
GGML_UNARY_OP_SILU,
};
static const std::set<ggml_glu_op> supported_glu_ops{

View File

@@ -46,6 +46,7 @@ void unpack_32_4(const uint8_t * data, uint8_t * dst) {
// Extracts (weight, scales, zp) from Q4_0 tensors.
// Data layout is: |16 bit scale|32 x 4bit weights|.
// When zp_arr is empty (symmetric), weights are stored as signed i4 (value - 8).
void extract_q4_0_data(const ggml_tensor * tensor,
ov::Tensor & weights_arr,
ov::Tensor & scales_arr,
@@ -55,28 +56,32 @@ void extract_q4_0_data(const ggml_tensor * tensor,
auto * data = static_cast<uint8_t *>(tensor->data);
auto * weights = static_cast<uint8_t *>(weights_arr.data());
auto * scales = scales_arr.data<ov::element_type_traits<ov::element::f16>::value_type>();
auto * zp = static_cast<uint8_t *>(zp_arr.data());
bool is_scalar_zp = (zp_arr.get_size() == 1); // Symmetric quantization
bool is_symmetric = (weights_arr.get_element_type() == ov::element::i4); // Signed i4 path
// For Q4_0, zero point is always 8
if (is_scalar_zp) {
zp[0] = 8 | (8 << 4); // Pack two 4-bit values
}
ov::parallel_for(scales_arr.get_size(), [&](size_t i) {
scales[i] = ov::float16::from_bits(*((uint16_t *) (data + i * bytes_per_block)));
// For asymmetric quantization, compute per-block zero points
if (!is_scalar_zp) {
if (!is_symmetric) {
auto * zp = static_cast<uint8_t *>(zp_arr.data());
ov::parallel_for(scales_arr.get_size(), [&](size_t i) {
scales[i] = ov::float16::from_bits(*((uint16_t *) (data + i * bytes_per_block)));
// Pack two 4-bit zero points per byte
if (i % 2 == 0) {
zp[i / 2] = 8; // Lower nibble
} else {
zp[i / 2] |= (8 << 4); // Upper nibble
}
}
unpack_32_4(data + i * bytes_per_block + 2, weights + i * 16);
});
unpack_32_4(data + i * bytes_per_block + 2, weights + i * 16);
});
} else {
// Symmetric: unpack as u4 then convert to i4 by subtracting 8 (XOR each nibble)
ov::parallel_for(scales_arr.get_size(), [&](size_t i) {
scales[i] = ov::float16::from_bits(*((uint16_t *) (data + i * bytes_per_block)));
unpack_32_4(data + i * bytes_per_block + 2, weights + i * 16);
// Convert u4 to i4: subtract 8 from each nibble. XOR 0x88 flips each nibble by 8.
for (int j = 0; j < 16; ++j) {
weights[i * 16 + j] ^= 0x88;
}
});
}
}
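A minimal standalone check of the nibble trick in the comment above: for v in [0, 15], the two's-complement 4-bit pattern of (v - 8) equals v XOR 0x8, hence XOR 0x88 on a byte holding two packed nibbles (plain C; illustrative):

    #include <assert.h>
    #include <stdint.h>

    static int8_t i4_from_u4(uint8_t v) {               // sign-extend the low nibble
        return (int8_t) ((v & 0x0F) << 4) >> 4;
    }

    int main(void) {
        for (uint8_t v = 0; v < 16; ++v) {
            assert(i4_from_u4(v ^ 0x8) == (int) v - 8); // (v - 8) as signed i4
        }
        return 0;
    }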
// Extracts (weight, scales, zp) from Q4_1 tensors.
@@ -123,6 +128,7 @@ void extract_q4_1_data(const ggml_tensor * tensor,
// Extracts (weight, scales, zp) from Q8_0 tensors.
// Data layout is: |16 bit scale|32 x 8bit weights|.
// When zp_arr is empty (symmetric), weights are stored as signed i8 directly.
void extract_q8_0_data(const ggml_tensor * tensor,
ov::Tensor & weights_arr,
ov::Tensor & scales_arr,
@@ -133,29 +139,30 @@ void extract_q8_0_data(const ggml_tensor * tensor,
auto * data = static_cast<uint8_t *>(tensor->data);
auto * weights = static_cast<uint8_t *>(weights_arr.data());
auto * scales = scales_arr.data<ov::element_type_traits<ov::element::f16>::value_type>();
auto * zp = static_cast<uint8_t *>(zp_arr.data());
bool is_scalar_zp = (zp_arr.get_size() == 1); // Symmetric quantization
bool is_symmetric = (weights_arr.get_element_type() == ov::element::i8); // Signed i8 path
// For Q8_0, zero point is always 128
if (is_scalar_zp) {
zp[0] = 128;
}
ov::parallel_for(scales_arr.get_size(), [&](size_t i) {
uint8_t * block_data = data + i * bytes_per_block;
scales[i] = ov::float16::from_bits(*(uint16_t *) block_data);
// For asymmetric quantization, store per-block zero points
if (!is_scalar_zp) {
if (!is_symmetric) {
auto * zp = static_cast<uint8_t *>(zp_arr.data());
ov::parallel_for(scales_arr.get_size(), [&](size_t i) {
uint8_t * block_data = data + i * bytes_per_block;
scales[i] = ov::float16::from_bits(*(uint16_t *) block_data);
zp[i] = 128;
}
for (size_t j = 0; j < weights_per_block; ++j) {
uint8_t x = block_data[j + 2]; // j+2 to skip the scale bytes.
// Original data is in int8_t, so we add a bias of -128 and invert the first bit.
x ^= 1 << 7;
weights[i * weights_per_block + j] = x;
}
});
for (size_t j = 0; j < weights_per_block; ++j) {
uint8_t x = block_data[j + 2];
x ^= 1 << 7; // Convert int8 to uint8 by flipping sign bit
weights[i * weights_per_block + j] = x;
}
});
} else {
// Symmetric: store original int8 values directly (no unsigned bias)
ov::parallel_for(scales_arr.get_size(), [&](size_t i) {
uint8_t * block_data = data + i * bytes_per_block;
scales[i] = ov::float16::from_bits(*(uint16_t *) block_data);
// Copy int8 weights as-is (the tensor element type is i8)
memcpy(weights + i * weights_per_block, block_data + 2, weights_per_block);
});
}
}
void unpack_256_4(const uint8_t * data, uint8_t * dst) {
@@ -256,44 +263,62 @@ void extract_q6_k_data(const ggml_tensor * tensor,
auto * data = static_cast<uint8_t *>(tensor->data);
auto * weights = static_cast<uint8_t *>(weights_arr.data());
auto * scales = scales_arr.data<ov::element_type_traits<ov::element::f16>::value_type>();
auto * zp = static_cast<uint8_t *>(zp_arr.data());
bool is_scalar_zp = (zp_arr.get_size() == 1); // Symmetric quantization
bool is_symmetric = (weights_arr.get_element_type() == ov::element::i8); // Signed i8 path
// For Q6_K, zero point is always 32
if (is_scalar_zp) {
zp[0] = 32;
}
ov::parallel_for(n_super_block, [&](size_t i) {
uint8_t * block_data = data + i * bytes_per_block;
float scale_factor =
static_cast<float>(ov::float16::from_bits(*((uint16_t *) block_data + 104))); // (128+64+16)/2
for (size_t j = 0; j < 16; j++) {
scales[j + i * 16] =
ov::float16(scale_factor * static_cast<float>(*((int8_t *) (block_data + 128 + 64 + j))));
// For asymmetric quantization, store per-block zero points
if (!is_scalar_zp) {
if (!is_symmetric) {
auto * zp = static_cast<uint8_t *>(zp_arr.data());
ov::parallel_for(n_super_block, [&](size_t i) {
uint8_t * block_data = data + i * bytes_per_block;
float scale_factor = static_cast<float>(ov::float16::from_bits(*((uint16_t *) block_data + 104)));
for (size_t j = 0; j < 16; j++) {
scales[j + i * 16] =
ov::float16(scale_factor * static_cast<float>(*((int8_t *) (block_data + 128 + 64 + j))));
zp[j + i * 16] = 32;
}
}
uint8_t * ql = block_data;
uint8_t * qh = block_data + 128;
for (int64_t j = 0; j < 32; ++j) {
weights[i * 256 + j] = (ql[j] & 0xF) | (((qh[j] >> 0) & 3) << 4);
weights[i * 256 + j + 32] = (ql[32 + j] & 0xF) | (((qh[j] >> 2) & 3) << 4);
weights[i * 256 + j + 64] = (ql[j] >> 4) | (((qh[j] >> 4) & 3) << 4);
weights[i * 256 + j + 96] = (ql[32 + j] >> 4) | (((qh[j] >> 6) & 3) << 4);
weights[i * 256 + j + 128] = (ql[64 + j] & 0xF) | (((qh[32 + j] >> 0) & 3) << 4);
weights[i * 256 + j + 160] = (ql[96 + j] & 0xF) | (((qh[32 + j] >> 2) & 3) << 4);
weights[i * 256 + j + 192] = (ql[64 + j] >> 4) | (((qh[32 + j] >> 4) & 3) << 4);
weights[i * 256 + j + 224] = (ql[96 + j] >> 4) | (((qh[32 + j] >> 6) & 3) << 4);
}
});
uint8_t * ql = block_data;
uint8_t * qh = block_data + 128;
for (int64_t j = 0; j < 32; ++j) {
weights[i * 256 + j] = (ql[j] & 0xF) | (((qh[j] >> 0) & 3) << 4);
weights[i * 256 + j + 32] = (ql[32 + j] & 0xF) | (((qh[j] >> 2) & 3) << 4);
weights[i * 256 + j + 64] = (ql[j] >> 4) | (((qh[j] >> 4) & 3) << 4);
weights[i * 256 + j + 96] = (ql[32 + j] >> 4) | (((qh[j] >> 6) & 3) << 4);
weights[i * 256 + j + 128] = (ql[64 + j] & 0xF) | (((qh[32 + j] >> 0) & 3) << 4);
weights[i * 256 + j + 160] = (ql[96 + j] & 0xF) | (((qh[32 + j] >> 2) & 3) << 4);
weights[i * 256 + j + 192] = (ql[64 + j] >> 4) | (((qh[32 + j] >> 4) & 3) << 4);
weights[i * 256 + j + 224] = (ql[96 + j] >> 4) | (((qh[32 + j] >> 6) & 3) << 4);
}
});
} else {
// Symmetric: subtract 32 from each weight to store as signed i8
ov::parallel_for(n_super_block, [&](size_t i) {
uint8_t * block_data = data + i * bytes_per_block;
float scale_factor = static_cast<float>(ov::float16::from_bits(*((uint16_t *) block_data + 104)));
for (size_t j = 0; j < 16; j++) {
scales[j + i * 16] =
ov::float16(scale_factor * static_cast<float>(*((int8_t *) (block_data + 128 + 64 + j))));
}
uint8_t * ql = block_data;
uint8_t * qh = block_data + 128;
auto * signed_weights = reinterpret_cast<int8_t *>(weights);
for (int64_t j = 0; j < 32; ++j) {
signed_weights[i * 256 + j] = static_cast<int8_t>((ql[j] & 0xF) | (((qh[j] >> 0) & 3) << 4)) - 32;
signed_weights[i * 256 + j + 32] =
static_cast<int8_t>((ql[32 + j] & 0xF) | (((qh[j] >> 2) & 3) << 4)) - 32;
signed_weights[i * 256 + j + 64] = static_cast<int8_t>((ql[j] >> 4) | (((qh[j] >> 4) & 3) << 4)) - 32;
signed_weights[i * 256 + j + 96] =
static_cast<int8_t>((ql[32 + j] >> 4) | (((qh[j] >> 6) & 3) << 4)) - 32;
signed_weights[i * 256 + j + 128] =
static_cast<int8_t>((ql[64 + j] & 0xF) | (((qh[32 + j] >> 0) & 3) << 4)) - 32;
signed_weights[i * 256 + j + 160] =
static_cast<int8_t>((ql[96 + j] & 0xF) | (((qh[32 + j] >> 2) & 3) << 4)) - 32;
signed_weights[i * 256 + j + 192] =
static_cast<int8_t>((ql[64 + j] >> 4) | (((qh[32 + j] >> 4) & 3) << 4)) - 32;
signed_weights[i * 256 + j + 224] =
static_cast<int8_t>((ql[96 + j] >> 4) | (((qh[32 + j] >> 6) & 3) << 4)) - 32;
}
});
}
}
static inline void get_scale_min_k4(int j, const uint8_t * q, uint8_t * d, uint8_t * m) {
@@ -389,11 +414,10 @@ ov::Output<ov::Node> make_int8_weights(ov::Tensor & weight,
size_t group_size,
bool use_bias) {
ov::Shape orig_shape = weight.get_shape();
bool is_signed = (weight.get_element_type() == ov::element::i8); // Symmetric: signed weights, no ZP
// Expand dimensions for scales and zp/bias
auto scale_shape = scales.get_shape();
auto zp_shape = zp.get_shape();
bool is_scalar_zp = zp_shape.empty(); // Symmetric quantization
ov::Shape packed_shape = {orig_shape[0], orig_shape[1] / group_size, group_size};
@@ -403,37 +427,48 @@ ov::Output<ov::Node> make_int8_weights(ov::Tensor & weight,
} else {
scale_shape.push_back(1);
scales.set_shape(scale_shape);
// For symmetric quantization, zp remains scalar (don't resize)
if (!is_scalar_zp) {
if (!is_signed && zp.get_size() > 0) {
auto zp_shape = zp.get_shape();
zp_shape.push_back(1);
zp.set_shape(zp_shape);
}
}
// Create graph nodes
auto weights_node = std::make_shared<ov::op::v0::Constant>(ov::element::u8, packed_shape,
static_cast<uint8_t *>(weight.data()), nullptr);
weights_node->get_rt_info()["__gguf_tensor_holder"] = weight;
auto scales_f16 = std::make_shared<ov::op::v0::Constant>(scales);
auto weights_f16 = std::make_shared<ov::op::v0::Convert>(weights_node, ov::element::f16);
ov::Output<ov::Node> result;
if (use_bias && !is_scalar_zp) {
// Bias path: w * s + b (zp tensor holds f16 bias values)
auto bias_f16 = std::make_shared<ov::op::v0::Constant>(zp);
auto w_s = std::make_shared<ov::op::v1::Multiply>(weights_f16, scales_f16, ov::op::AutoBroadcastType::NUMPY);
result = std::make_shared<ov::op::v1::Add>(w_s, bias_f16, ov::op::AutoBroadcastType::NUMPY);
if (is_signed) {
// Signed path: q * s (no zero point subtraction needed)
auto weights_node = std::make_shared<ov::op::v0::Constant>(ov::element::i8, packed_shape,
static_cast<uint8_t *>(weight.data()), nullptr);
weights_node->get_rt_info()["__gguf_tensor_holder"] = weight;
auto weights_f16 = std::make_shared<ov::op::v0::Convert>(weights_node, ov::element::f16);
result = std::make_shared<ov::op::v1::Multiply>(weights_f16, scales_f16, ov::op::AutoBroadcastType::NUMPY);
} else {
// Zero point path: (w - zp) * s
auto zero_point = std::make_shared<ov::op::v0::Constant>(zp);
float zp_value;
if (ov::op::util::get_single_value(zero_point, zp_value)) {
zero_point = ov::op::v0::Constant::create(zero_point->get_element_type(), {}, {zp_value});
// Unsigned path
auto weights_node = std::make_shared<ov::op::v0::Constant>(ov::element::u8, packed_shape,
static_cast<uint8_t *>(weight.data()), nullptr);
weights_node->get_rt_info()["__gguf_tensor_holder"] = weight;
auto weights_f16 = std::make_shared<ov::op::v0::Convert>(weights_node, ov::element::f16);
if (use_bias && zp.get_size() > 0) {
// Bias path: w * s + b (zp tensor holds f16 bias values)
auto bias_f16 = std::make_shared<ov::op::v0::Constant>(zp);
auto w_s =
std::make_shared<ov::op::v1::Multiply>(weights_f16, scales_f16, ov::op::AutoBroadcastType::NUMPY);
result = std::make_shared<ov::op::v1::Add>(w_s, bias_f16, ov::op::AutoBroadcastType::NUMPY);
} else {
// Zero point path: (w - zp) * s
auto zero_point = std::make_shared<ov::op::v0::Constant>(zp);
float zp_value;
if (ov::op::util::get_single_value(zero_point, zp_value)) {
zero_point = ov::op::v0::Constant::create(zero_point->get_element_type(), {}, {zp_value});
}
auto zero_point_f16 = std::make_shared<ov::op::v0::Convert>(zero_point, ov::element::f16);
auto w_zp =
std::make_shared<ov::op::v1::Subtract>(weights_f16, zero_point_f16, ov::op::AutoBroadcastType::NUMPY);
result = std::make_shared<ov::op::v1::Multiply>(w_zp, scales_f16, ov::op::AutoBroadcastType::NUMPY);
}
auto zero_point_f16 = std::make_shared<ov::op::v0::Convert>(zero_point, ov::element::f16);
auto w_zp =
std::make_shared<ov::op::v1::Subtract>(weights_f16, zero_point_f16, ov::op::AutoBroadcastType::NUMPY);
result = std::make_shared<ov::op::v1::Multiply>(w_zp, scales_f16, ov::op::AutoBroadcastType::NUMPY);
}
if (packed_shape.size() != 2) {
@@ -452,11 +487,10 @@ ov::Output<ov::Node> make_int4_weights(ov::Tensor & weight,
size_t group_size,
bool use_bias) {
ov::Shape orig_weight_shape = weight.get_shape();
bool is_signed = (weight.get_element_type() == ov::element::i4); // Symmetric: signed weights, no ZP
// Expand dimensions for scales and zp/bias
ov::Shape scale_shape = scales.get_shape();
auto zp_shape = zp.get_shape();
bool is_scalar_zp = zp_shape.empty(); // Symmetric quantization
// Create INT4 weight tensor
ov::Shape packed_shape = {orig_weight_shape[0], orig_weight_shape[1] / group_size, group_size};
@@ -467,36 +501,48 @@ ov::Output<ov::Node> make_int4_weights(ov::Tensor & weight,
} else {
scale_shape.push_back(1);
scales.set_shape(scale_shape);
// For symmetric quantization, zp remains scalar (don't resize)
if (!is_scalar_zp) {
if (!is_signed && zp.get_size() > 0) {
auto zp_shape = zp.get_shape();
zp_shape.push_back(1);
zp.set_shape(zp_shape);
}
}
auto weights_node = std::make_shared<ov::op::v0::Constant>(ov::element::u4, packed_shape,
static_cast<uint8_t *>(weight.data()), nullptr);
weights_node->get_rt_info()["__gguf_tensor_holder"] = weight;
auto weights_f16 = std::make_shared<ov::op::v0::Convert>(weights_node, ov::element::f16);
auto scales_f16 = std::make_shared<ov::op::v0::Constant>(scales);
ov::Output<ov::Node> result;
if (use_bias && !is_scalar_zp) {
// Bias path: w * s + b (zp tensor holds f16 bias values)
auto bias_f16 = std::make_shared<ov::op::v0::Constant>(zp);
auto w_s = std::make_shared<ov::op::v1::Multiply>(weights_f16, scales_f16, ov::op::AutoBroadcastType::NUMPY);
result = std::make_shared<ov::op::v1::Add>(w_s, bias_f16, ov::op::AutoBroadcastType::NUMPY);
if (is_signed) {
// Signed path: q * s (no zero point subtraction needed)
auto weights_node = std::make_shared<ov::op::v0::Constant>(ov::element::i4, packed_shape,
static_cast<uint8_t *>(weight.data()), nullptr);
weights_node->get_rt_info()["__gguf_tensor_holder"] = weight;
auto weights_f16 = std::make_shared<ov::op::v0::Convert>(weights_node, ov::element::f16);
result = std::make_shared<ov::op::v1::Multiply>(weights_f16, scales_f16, ov::op::AutoBroadcastType::NUMPY);
} else {
// Zero point path: (w - zp) * s
auto zero_points_node = std::make_shared<ov::op::v0::Constant>(zp);
float zp_value;
if (ov::op::util::get_single_value(zero_points_node, zp_value)) {
zero_points_node = ov::op::v0::Constant::create(zero_points_node->get_element_type(), {}, {zp_value});
// Unsigned path
auto weights_node = std::make_shared<ov::op::v0::Constant>(ov::element::u4, packed_shape,
static_cast<uint8_t *>(weight.data()), nullptr);
weights_node->get_rt_info()["__gguf_tensor_holder"] = weight;
auto weights_f16 = std::make_shared<ov::op::v0::Convert>(weights_node, ov::element::f16);
if (use_bias && zp.get_size() > 0) {
// Bias path: w * s + b (zp tensor holds f16 bias values)
auto bias_f16 = std::make_shared<ov::op::v0::Constant>(zp);
auto w_s =
std::make_shared<ov::op::v1::Multiply>(weights_f16, scales_f16, ov::op::AutoBroadcastType::NUMPY);
result = std::make_shared<ov::op::v1::Add>(w_s, bias_f16, ov::op::AutoBroadcastType::NUMPY);
} else {
// Zero point path: (w - zp) * s
auto zero_points_node = std::make_shared<ov::op::v0::Constant>(zp);
float zp_value;
if (ov::op::util::get_single_value(zero_points_node, zp_value)) {
zero_points_node = ov::op::v0::Constant::create(zero_points_node->get_element_type(), {}, {zp_value});
}
auto zero_points_f16 = std::make_shared<ov::op::v0::Convert>(zero_points_node, ov::element::f16);
auto w_zp =
std::make_shared<ov::op::v1::Subtract>(weights_f16, zero_points_f16, ov::op::AutoBroadcastType::NUMPY);
result = std::make_shared<ov::op::v1::Multiply>(w_zp, scales_f16, ov::op::AutoBroadcastType::NUMPY);
}
auto zero_points_f16 = std::make_shared<ov::op::v0::Convert>(zero_points_node, ov::element::f16);
auto w_zp =
std::make_shared<ov::op::v1::Subtract>(weights_f16, zero_points_f16, ov::op::AutoBroadcastType::NUMPY);
result = std::make_shared<ov::op::v1::Multiply>(w_zp, scales_f16, ov::op::AutoBroadcastType::NUMPY);
}
if (packed_shape.size() != 2) {
@@ -699,24 +745,32 @@ OvWeight process_weight_tensor(const ggml_tensor * tensor, const void * data, vo
// Quantized path (normal extraction or quantized requant)
// Create weight/scale/zp tensors - shared between both paths
ov::element::Type weight_type = layout.is_u4 ? ov::element::u4 : ov::element::u8;
// For symmetric quantization, use signed types (i4/i8) and no ZP tensor
ov::element::Type weight_type = layout.is_symmetric ? (layout.is_u4 ? ov::element::i4 : ov::element::i8) :
(layout.is_u4 ? ov::element::u4 : ov::element::u8);
ov::Shape scale_shape = {node_shape[0], node_shape[1] / layout.weights_per_block};
ov::Shape zp_shape = layout.is_symmetric ? ov::Shape{} : scale_shape;
if (output_base_ptr) {
uint8_t * buf_base = static_cast<uint8_t *>(output_base_ptr);
result.weights = ov::Tensor(weight_type, node_shape, buf_base + layout.weights_offset);
result.scales = ov::Tensor(ov::element::f16, scale_shape, buf_base + layout.scales_offset);
result.zp = ov::Tensor(weight_type, zp_shape, buf_base + layout.zp_offset);
if (!layout.is_symmetric) {
ov::element::Type zp_type = layout.is_u4 ? ov::element::u4 : ov::element::u8;
result.zp = ov::Tensor(zp_type, scale_shape, buf_base + layout.zp_offset);
}
// else: result.zp remains default-constructed (empty) for symmetric
} else {
result.weights = ov::Tensor(weight_type, node_shape);
result.scales = ov::Tensor(ov::element::f16, scale_shape);
if (use_bias && !layout.is_symmetric) {
// bias only has effect for asymmetric quant
result.zp = ov::Tensor(ov::element::f16, zp_shape);
} else {
result.zp = ov::Tensor(weight_type, zp_shape);
if (!layout.is_symmetric) {
if (use_bias) {
result.zp = ov::Tensor(ov::element::f16, scale_shape);
} else {
ov::element::Type zp_type = layout.is_u4 ? ov::element::u4 : ov::element::u8;
result.zp = ov::Tensor(zp_type, scale_shape);
}
}
// else: result.zp remains default-constructed (empty) for symmetric
}
if (layout.is_requant && layout.requant_type.has_value()) {
@@ -741,59 +795,75 @@ void quantize_q4_0(const float * x,
auto * weights = static_cast<uint8_t *>(weights_arr.data());
auto * scales = scales_arr.data<ov::element_type_traits<ov::element::f16>::value_type>();
auto * zp = static_cast<uint8_t *>(zp_arr.data());
bool is_scalar_zp = (zp_arr.get_size() == 1); // Symmetric quantization
bool is_symmetric = (weights_arr.get_element_type() == ov::element::i4); // Signed i4 path
// For Q4_0, zero point is always 8
if (is_scalar_zp) {
zp[0] = 8 | (8 << 4); // Pack two 4-bit values
}
for (int i = 0; i < nb; i++) {
float amax = 0.0f; // absolute max
float max = 0.0f;
for (int j = 0; j < qk; j++) {
const float v = x[i * qk + j];
if (amax < fabsf(v)) {
amax = fabsf(v);
max = v;
if (!is_symmetric) {
auto * zp = static_cast<uint8_t *>(zp_arr.data());
for (int i = 0; i < nb; i++) {
float amax = 0.0f;
float max = 0.0f;
for (int j = 0; j < qk; j++) {
const float v = x[i * qk + j];
if (amax < fabsf(v)) {
amax = fabsf(v);
max = v;
}
}
}
const float d = max / -8;
if (d == 0) {
scales[i] = ov::float16(1.0f);
// zp is already set to 8 for symmetric, or set per-block for asymmetric
if (!is_scalar_zp) {
const float d = max / -8;
if (d == 0) {
scales[i] = ov::float16(1.0f);
if (i % 2 == 0) {
zp[i / 2] = 8;
} else {
zp[i / 2] |= (8 << 4);
}
memset(weights + i * qk / 2, 8 | (8 << 4), qk / 2);
continue;
}
memset(weights + i * qk / 2, 8 | (8 << 4), qk / 2);
continue;
}
const float id = 1.0f / d;
scales[i] = ov::float16(d);
// For asymmetric quantization, store per-block zero points
if (!is_scalar_zp) {
const float id = 1.0f / d;
scales[i] = ov::float16(d);
if (i % 2 == 0) {
zp[i / 2] = 8;
} else {
zp[i / 2] |= (8 << 4);
}
for (int j = 0; j < qk / 2; ++j) {
const float x0 = x[i * qk + 2 * j] * id;
const float x1 = x[i * qk + 2 * j + 1] * id;
const uint8_t xi0 = MIN(15, (int8_t) (x0 + 8.5f));
const uint8_t xi1 = MIN(15, (int8_t) (x1 + 8.5f));
weights[i * qk / 2 + j] = xi0 | (xi1 << 4);
}
}
for (int j = 0; j < qk / 2; ++j) {
const float x0 = x[i * qk + 2 * j] * id;
const float x1 = x[i * qk + 2 * j + 1] * id;
const uint8_t xi0 = MIN(15, (int8_t) (x0 + 8.5f));
const uint8_t xi1 = MIN(15, (int8_t) (x1 + 8.5f));
weights[i * qk / 2 + j] = xi0 | (xi1 << 4);
} else {
// Symmetric: produce signed i4 values in [-8, 7]
for (int i = 0; i < nb; i++) {
float amax = 0.0f;
float max = 0.0f;
for (int j = 0; j < qk; j++) {
const float v = x[i * qk + j];
if (amax < fabsf(v)) {
amax = fabsf(v);
max = v;
}
}
const float d = max / -8;
if (d == 0) {
scales[i] = ov::float16(1.0f);
// i4 value 0 packed: 0x00
memset(weights + i * qk / 2, 0, qk / 2);
continue;
}
const float id = 1.0f / d;
scales[i] = ov::float16(d);
for (int j = 0; j < qk / 2; ++j) {
const float x0 = x[i * qk + 2 * j] * id;
const float x1 = x[i * qk + 2 * j + 1] * id;
// Signed i4: range [-8, 7]. Quantize as round(x*id), then pack as 4-bit two's complement.
int8_t si0 = (int8_t) std::max(-8, std::min(7, (int) roundf(x0)));
int8_t si1 = (int8_t) std::max(-8, std::min(7, (int) roundf(x1)));
weights[i * qk / 2 + j] = (si0 & 0x0F) | ((si1 & 0x0F) << 4);
}
}
}
}
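The symmetric branch packs two signed 4-bit values per byte in two's complement. A minimal sketch of that round trip, assuming low-nibble-first packing as above (helper names are illustrative):
#include <cstdint>
// Pack two signed 4-bit values (each in [-8, 7]) into one byte, low nibble first.
static uint8_t pack_i4(int8_t lo, int8_t hi) {
    return static_cast<uint8_t>((lo & 0x0F) | ((hi & 0x0F) << 4));
}
// Sign-extend a 4-bit two's-complement nibble back to int8.
static int8_t unpack_i4(uint8_t byte, bool high) {
    int8_t v = high ? (byte >> 4) : (byte & 0x0F);
    return (v & 0x08) ? static_cast<int8_t>(v | 0xF0) : v;  // extend the sign bit
}
With this, pack_i4(-8, 7) unpacks back to (-8, 7), matching the (si0 & 0x0F) | ((si1 & 0x0F) << 4) packing used above.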
@@ -809,36 +879,42 @@ void quantize_q8_0(const float * x,
auto * weights = static_cast<uint8_t *>(weights_arr.data());
auto * scales = scales_arr.data<ov::element_type_traits<ov::element::f16>::value_type>();
auto * zp = static_cast<uint8_t *>(zp_arr.data());
bool is_scalar_zp = (zp_arr.get_size() == 1); // Symmetric quantization
bool is_symmetric = (weights_arr.get_element_type() == ov::element::i8); // Signed i8 path
// For Q8_0, zero point is always 128
if (is_scalar_zp) {
zp[0] = 128;
}
for (int i = 0; i < nb; i++) {
float amax = 0.0f; // absolute max
for (int j = 0; j < qk; j++) {
const float v = x[i * qk + j];
if (amax < fabsf(v)) {
amax = fabsf(v);
if (!is_symmetric) {
auto * zp = static_cast<uint8_t *>(zp_arr.data());
for (int i = 0; i < nb; i++) {
float amax = 0.0f;
for (int j = 0; j < qk; j++) {
const float v = x[i * qk + j];
amax = std::max(amax, fabsf(v));
}
const float d = amax / 127.0f;
const float id = d ? 1.0f / d : 0.0f;
scales[i] = ov::float16(d);
zp[i] = 128;
for (int j = 0; j < qk; ++j) {
const float x0 = x[i * qk + j] * id;
const int8_t xi0 = roundf(x0);
weights[i * qk + j] = (uint8_t) (xi0 + 128);
}
}
const float d = amax / 127.0f;
const float id = d ? 1.0f / d : 0.0f;
scales[i] = ov::float16(d);
// For asymmetric quantization, store per-block zero points
if (!is_scalar_zp) {
zp[i] = 128;
}
for (int j = 0; j < qk; ++j) {
const float x0 = x[i * qk + j] * id;
const int8_t xi0 = roundf(x0);
weights[i * qk + j] = (uint8_t) (xi0 + 128);
} else {
// Symmetric: store signed int8 values directly
auto * signed_weights = reinterpret_cast<int8_t *>(weights);
for (int i = 0; i < nb; i++) {
float amax = 0.0f;
for (int j = 0; j < qk; j++) {
const float v = x[i * qk + j];
amax = std::max(amax, fabsf(v));
}
const float d = amax / 127.0f;
const float id = d ? 1.0f / d : 0.0f;
scales[i] = ov::float16(d);
for (int j = 0; j < qk; ++j) {
const float x0 = x[i * qk + j] * id;
signed_weights[i * qk + j] = (int8_t) roundf(x0);
}
}
}
}
@@ -861,12 +937,8 @@ void quantize_q8_1(const float * x,
for (int j = 0; j < qk; j++) {
const float v = x[i * qk + j];
if (v < min) {
min = v;
}
if (v > max) {
max = v;
}
min = std::min(v, min);
max = std::max(v, max);
}
const float d = (max - min) / ((1 << 8) - 1);

View File

@@ -9,12 +9,17 @@
#include <openvino/op/add.hpp>
#include <openvino/op/concat.hpp>
#include <openvino/op/constant.hpp>
#include <openvino/op/convert.hpp>
#include <openvino/op/cos.hpp>
#include <openvino/op/gather.hpp>
#include <openvino/op/multiply.hpp>
#include <openvino/op/reshape.hpp>
#include <openvino/op/shape_of.hpp>
#include <openvino/op/sin.hpp>
#include <openvino/op/slice.hpp>
#include <openvino/op/split.hpp>
#include <openvino/op/subtract.hpp>
#include <openvino/op/transpose.hpp>
#include <openvino/op/unsqueeze.hpp>
#include <vector>
@@ -33,6 +38,12 @@ OutputVector translate_rope(const NodeContext & context) {
auto data_node = context.get_input(0).get_node_shared_ptr();
auto output_shape = context.get_output_shape().to_shape();
int32_t * op_params = context.get_output_op_params();
const int mode = (op_case & 0xFFFF0000) >> 16;
op_case = (op_case & 0x0000FFFF);
constexpr int TYPE_NORMAL = 0;
constexpr int TYPE_NEOX = 1;
constexpr int TYPE_IMROPE = 2;
Output<Node> cos_theta_node;
Output<Node> sin_theta_node;
@@ -45,7 +56,7 @@ OutputVector translate_rope(const NodeContext & context) {
if (context.get_input_size() == 3) {
rope_freqs_weight = context.get_input(2).get_node_shared_ptr();
}
auto sin_cos = make_sin_cos(op_params, inp_pos, rope_freqs_weight);
auto sin_cos = make_sin_cos(op_params, inp_pos, rope_freqs_weight, mode == TYPE_IMROPE);
sin_theta_node = sin_cos.first;
cos_theta_node = sin_cos.second;
}
@@ -65,11 +76,7 @@ OutputVector translate_rope(const NodeContext & context) {
}
}
const int mode = op_params[2];
constexpr int ROPE_TYPE_NORMAL = 0;
constexpr int ROPE_TYPE_NEOX = 2;
if (mode == ROPE_TYPE_NORMAL) {
if (mode == TYPE_NORMAL) {
auto neg_one = ov::op::v0::Constant::create(ov::element::i64, {1}, {-1});
auto zero = ov::op::v0::Constant::create(ov::element::i64, {1}, {0});
auto one = ov::op::v0::Constant::create(ov::element::i64, {1}, {1});
@@ -97,7 +104,7 @@ OutputVector translate_rope(const NodeContext & context) {
auto data_shape = ov::op::v0::Constant::create(
ov::element::i64, {4}, std::vector<int64_t>{1, -1, (int64_t) output_shape[2], (int64_t) output_shape[3]});
res = std::make_shared<ov::op::v1::Reshape>(stack, data_shape, false);
} else if (mode == ROPE_TYPE_NEOX) {
} else if (mode == TYPE_NEOX) {
auto data_split = std::make_shared<ov::op::v1::Split>(
data_node, ov::op::v0::Constant::create(ov::element::i64, ov::Shape{}, {-1}), 2);
Output<Node> slice_data_node_0 = data_split->outputs()[0];
@@ -112,6 +119,25 @@ OutputVector translate_rope(const NodeContext & context) {
std::make_shared<ov::op::v1::Multiply>(slice_data_node_1, cos_theta_node));
res = std::make_shared<ov::op::v0::Concat>(ov::OutputVector{first_half_node, second_half_node}, -1);
} else if (mode == TYPE_IMROPE) {
int64_t n_dims = data_node->get_shape()[3];
auto cos_sin_shape = std::make_shared<ov::op::v0::Constant>(ov::element::i64, ov::Shape{4}, std::vector<int64_t>{1, -1, 1, n_dims >> 1});
auto cos_reshaped = std::make_shared<ov::op::v1::Reshape>(cos_theta_node, cos_sin_shape, true);
auto sin_reshaped = std::make_shared<ov::op::v1::Reshape>(sin_theta_node, cos_sin_shape, true);
auto split_axis = ov::op::v0::Constant::create(ov::element::i64, ov::Shape{}, {3});
auto split_a = std::make_shared<ov::op::v1::Split>(data_node, split_axis, 2);
auto x0 = split_a->output(0);
auto x1 = split_a->output(1);
auto mul_a = std::make_shared<ov::op::v1::Multiply>(x0, cos_reshaped);
auto mul_b = std::make_shared<ov::op::v1::Multiply>(x1, sin_reshaped);
auto sub = std::make_shared<ov::op::v1::Subtract>(mul_a, mul_b);
auto mul_c = std::make_shared<ov::op::v1::Multiply>(x0, sin_reshaped);
auto mul_d = std::make_shared<ov::op::v1::Multiply>(x1, cos_reshaped);
auto add = std::make_shared<ov::op::v1::Add>(mul_c, mul_d);
res = std::make_shared<ov::op::v0::Concat>(ov::OutputVector{sub, add}, 3);
}
return rename_outputs_with_suffix({res}, context.get_name());
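The TYPE_IMROPE branch above is the half-split rotation: the last dimension is split into (x0, x1) and rotated by the per-dimension angles. A scalar sketch of the same math, assuming contiguous halves (not the backend's code):
#include <cmath>
#include <cstddef>
// out = concat(x0*cos(theta) - x1*sin(theta), x0*sin(theta) + x1*cos(theta))
static void imrope_rotate(const float * x0, const float * x1,
                          const float * theta, float * out, size_t half) {
    for (size_t j = 0; j < half; ++j) {
        const float c = std::cos(theta[j]);
        const float s = std::sin(theta[j]);
        out[j]        = x0[j] * c - x1[j] * s;  // first half
        out[half + j] = x0[j] * s + x1[j] * c;  // second half
    }
}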

View File

@@ -0,0 +1,25 @@
#include "../node_context.h"
#include "../op_table.h"
#include "../utils.h"
#include <openvino/core/node_output.hpp>
#include <openvino/op/gelu.hpp>
namespace ov {
namespace frontend {
namespace ggml {
namespace op {
OutputVector translate_unary_gelu(const NodeContext & context) {
num_inputs_check(context, 1, 1);
auto input = context.get_input(0);
auto res = std::make_shared<ov::op::v7::Gelu>(input);
return rename_outputs_with_suffix({res}, context.get_name());
}
} // namespace op
} // namespace ggml
} // namespace frontend
} // namespace ov
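For reference, a scalar sketch of the erf-based GELU, which is the formulation ov::op::v7::Gelu defaults to when no approximation mode is passed (per the OpenVINO operator spec):
#include <cmath>
// GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
static float gelu_ref(float x) {
    return 0.5f * x * (1.0f + std::erf(x / std::sqrt(2.0f)));
}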

View File

@@ -31,6 +31,7 @@ std::unordered_map<std::string, CreatorFunction> get_supported_ops() {
{"GGML_OP_SOFT_MAX", op::translate_soft_max },
{"GGML_OP_SUB", op::translate_1to1_match_2_inputs<v1::Subtract>},
{"GGML_OP_TRANSPOSE", op::translate_transpose },
{"GGML_UNARY_OP_GELU", op::translate_unary_gelu },
{"GGML_UNARY_OP_SILU", op::translate_unary_silu },
{"GGML_OP_VIEW", op::translate_view },
{"GGML_GLU_OP_SWIGLU", op::translate_glu_swiglu },

View File

@@ -21,6 +21,7 @@ GGML_OP_CONVERTER(translate_rms_norm);
GGML_OP_CONVERTER(translate_rope);
GGML_OP_CONVERTER(translate_scale);
GGML_OP_CONVERTER(translate_unary_silu);
GGML_OP_CONVERTER(translate_unary_gelu);
GGML_OP_CONVERTER(translate_soft_max);
GGML_OP_CONVERTER(translate_transpose);
GGML_OP_CONVERTER(translate_view);

View File

@@ -1,123 +0,0 @@
#include "eliminate_zp.h"
#include <openvino/core/graph_util.hpp>
#include <openvino/core/parallel.hpp>
#include <openvino/core/rt_info.hpp>
#include <openvino/op/constant.hpp>
#include <openvino/op/convert.hpp>
#include <openvino/op/multiply.hpp>
#include <openvino/op/subtract.hpp>
#include <openvino/pass/pattern/op/label.hpp>
#include <openvino/pass/pattern/op/pattern.hpp>
#include <openvino/pass/pattern/op/wrap_type.hpp>
namespace ov {
namespace frontend {
namespace ggml {
namespace pass {
EliminateZeroPoints::EliminateZeroPoints() {
// Find pattern:
// (Multiply Any(scale)
// (Subtract (Convert Constant(data)))
// (Convert Constant(zero_point)))
// where zero_point is a scalar
// If data is u4 and zp value is 8 (q4_0), Replace the Subtract with an i4 Constant whose value is data - zp_val
// If data is u8 and zp value is 128 (q8_0) or 32 (q6_k), Replace the Subtract with an i8 Constant
auto m_data_constant = ov::pass::pattern::wrap_type<ov::op::v0::Constant>();
auto m_data_convert = ov::pass::pattern::wrap_type<ov::op::v0::Convert>({m_data_constant});
auto m_zp_constant = ov::pass::pattern::wrap_type<ov::op::v0::Constant>();
auto m_zp_convert = ov::pass::pattern::wrap_type<ov::op::v0::Convert>({m_zp_constant});
auto m_subtract = ov::pass::pattern::wrap_type<ov::op::v1::Subtract>({m_data_convert, m_zp_convert});
auto m_scale = ov::pass::pattern::any_input();
auto m_multiply = ov::pass::pattern::wrap_type<ov::op::v1::Multiply>({m_scale, m_subtract});
const auto callback = [=](ov::pass::pattern::Matcher & m) {
const auto & pattern_map = m.get_pattern_value_map();
auto multiply_node =
std::dynamic_pointer_cast<ov::op::v1::Multiply>(pattern_map.at(m_multiply).get_node_shared_ptr());
auto subtract_node =
std::dynamic_pointer_cast<ov::op::v1::Subtract>(pattern_map.at(m_subtract).get_node_shared_ptr());
auto data_constant =
std::dynamic_pointer_cast<ov::op::v0::Constant>(pattern_map.at(m_data_constant).get_node_shared_ptr());
auto zp_constant =
std::dynamic_pointer_cast<ov::op::v0::Constant>(pattern_map.at(m_zp_constant).get_node_shared_ptr());
if (!multiply_node || !subtract_node || !data_constant || !zp_constant) {
return false;
}
if (ov::shape_size(zp_constant->get_shape()) != 1) {
return false;
}
auto data_type = data_constant->get_element_type();
auto zp_data = zp_constant->cast_vector<int>();
if (zp_data.empty()) {
return false;
}
int zp_value = zp_data[0];
bool should_eliminate = false;
ov::element::Type target_type;
if (data_type == ov::element::u4 && zp_value == 8) {
should_eliminate = true;
target_type = ov::element::i4;
} else if (data_type == ov::element::u8 && (zp_value == 128 || zp_value == 32)) {
should_eliminate = true;
target_type = ov::element::i8;
}
if (!should_eliminate) {
return false;
}
auto data_shape = data_constant->get_shape();
size_t total_elements = ov::shape_size(data_shape);
std::shared_ptr<ov::op::v0::Constant> new_constant;
// TODO improve performance
if (data_type == ov::element::u4) {
auto data_values = data_constant->cast_vector<uint8_t>();
std::vector<int8_t> adjusted_values(total_elements);
ov::parallel_for(total_elements, [&](size_t i) {
adjusted_values[i] = static_cast<int8_t>(static_cast<int>(data_values[i]) - 8);
});
new_constant = std::make_shared<ov::op::v0::Constant>(target_type, data_shape, adjusted_values);
} else if (data_type == ov::element::u8) {
auto data_values = data_constant->cast_vector<uint8_t>();
std::vector<int8_t> adjusted_values(total_elements);
ov::parallel_for(total_elements, [&, zp_value](size_t i) {
adjusted_values[i] = static_cast<int8_t>(static_cast<int>(data_values[i]) - zp_value);
});
new_constant = std::make_shared<ov::op::v0::Constant>(target_type, data_shape, adjusted_values);
}
auto new_convert =
std::make_shared<ov::op::v0::Convert>(new_constant, subtract_node->get_output_element_type(0));
ov::replace_node(subtract_node, new_convert);
return true;
};
register_matcher(
std::make_shared<ov::pass::pattern::Matcher>(m_multiply, "ov::frontend::ggml::pass::EliminateZeroPoints"),
callback);
}
} // namespace pass
} // namespace ggml
} // namespace frontend
} // namespace ov
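With the quantizers above now emitting signed i4/i8 weights directly, the (w - zp) * s pattern this pass folded no longer appears in the graph. For reference, the fold it performed amounts to the following (a sketch, not the removed implementation):
#include <cstdint>
#include <vector>
// Fold a scalar zero point into the data: (u - zp) * s  ==>  q * s with q = u - zp.
static std::vector<int8_t> fold_zero_point(const std::vector<uint8_t> & u, int zp) {
    std::vector<int8_t> q(u.size());
    for (size_t i = 0; i < u.size(); ++i) {
        // zp = 8 for q4_0, 128 for q8_0, 32 for q6_k, per the pattern comment above
        q[i] = static_cast<int8_t>(static_cast<int>(u[i]) - zp);
    }
    return q;
}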

View File

@@ -1,17 +0,0 @@
#include "openvino/pass/matcher_pass.hpp"
namespace ov {
namespace frontend {
namespace ggml {
namespace pass {
class EliminateZeroPoints : public ov::pass::MatcherPass {
public:
OPENVINO_MATCHER_PASS_RTTI("ov::frontend::ggml::pass::EliminateZeroPoints")
EliminateZeroPoints();
};
} // namespace pass
} // namespace ggml
} // namespace frontend
} // namespace ov

View File

@@ -0,0 +1,41 @@
// Copyright (C) 2018-2026 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <openvino/core/core_visibility.hpp>
#include <openvino/core/node.hpp>
#include <openvino/core/runtime_attribute.hpp>
namespace ov {
/**
* @brief Holds weightless caching attributes of a single constant.
*
* WeightlessCacheAttribute class represents a runtime info attribute that holds
* the original size of the constant in bytes and the binary offset of the
* constant's data in the weights file used by the weightless caching mechanism. It is
* not copyable in case the data was changed (the original node was replaced by a new
* one produced during the transformation pipeline); in that case weightless caching
* can't be used for that constant.
*/
class OPENVINO_API WeightlessCacheAttribute : public RuntimeAttribute {
public:
OPENVINO_RTTI("WeightlessCacheAttribute", "0", RuntimeAttribute)
WeightlessCacheAttribute() = delete;
WeightlessCacheAttribute(size_t original_size, size_t bin_offset, ov::element::Type original_dtype)
: original_size(original_size),
bin_offset(bin_offset),
original_dtype(original_dtype) {}
bool is_copyable() const override;
size_t original_size;
size_t bin_offset;
ov::element::Type original_dtype;
};
} // namespace ov

View File

@@ -3,15 +3,16 @@
#include "ggml-openvino/openvino/node_context.h"
#include "ggml-openvino/openvino/utils.h"
#include "input_model.h"
#include "pass/eliminate_zp.h"
#include "pass/mark_decompression_convert_constant_folding.h"
#include "pass/squeeze_matmul.h"
#include "rt_info/weightless_caching_attributes.hpp"
#include <cstdint>
#include <cstdlib>
#include <map>
#include <memory>
#include <openvino/core/node.hpp>
#include <openvino/core/preprocess/pre_post_process.hpp>
#include <openvino/op/add.hpp>
#include <openvino/op/broadcast.hpp>
#include <openvino/op/concat.hpp>
@@ -33,7 +34,6 @@
#include <openvino/op/unsqueeze.hpp>
#include <openvino/pass/constant_folding.hpp>
#include <openvino/pass/make_stateful.hpp>
#include <openvino/core/preprocess/pre_post_process.hpp>
namespace ov {
namespace frontend {
@@ -240,6 +240,31 @@ std::shared_ptr<Model> TranslateSession::translate_graph(const frontend::InputMo
resulting_model = std::make_shared<Model>(results, used_params);
apply_transformations(resulting_model);
// Set WeightlessCacheAttribute on large constants to avoid unnecessary memory copies
// in the NPUW plugin. Without this attribute, NPUW's LazyTensor constructor
// (lazy_tensor.cpp, op::Const::Const) will memcpy every constant "in case export
// occurs", doubling memory usage per compile_model call.
//
// The bin_offset field serves as a unique key (not a real file offset) — this is
// the same convention the GPU plugin uses for non-IR models (see
// Plugin::set_weightless_cache_attributes in intel_gpu/src/plugin/plugin.cpp).
// Each constant must have a distinct bin_offset, otherwise GPU's weightless cache
// import will map multiple constants to the same data.
//
// Small constants (< 16 elements) are excluded since they may be introduced by
// optimization patterns and the overhead is negligible.
size_t offset = 0;
for (auto & node : resulting_model->get_ordered_ops()) {
if (auto cnst = ov::as_type_ptr<ov::op::v0::Constant>(node);
cnst && cnst->get_byte_size() / cnst->get_element_type().size() >= 16) {
auto & rt_info = cnst->get_rt_info();
if (rt_info.find(ov::WeightlessCacheAttribute::get_type_info_static()) == rt_info.end()) {
rt_info[ov::WeightlessCacheAttribute::get_type_info_static()] =
ov::WeightlessCacheAttribute(cnst->get_byte_size(), offset++, cnst->get_element_type());
}
}
}
return resulting_model;
}
@@ -257,7 +282,6 @@ std::shared_ptr<Model> TranslateSession::apply_transformations(std::shared_ptr<M
}
if (ggml_model_decoder->is_static()) {
manager.register_pass<pass::EliminateZeroPoints>();
manager.register_pass<pass::SqueezeMatmul>();
}
manager.run_passes(model);

View File

@@ -2,6 +2,7 @@
#include "ggml-impl.h"
#include <cmath>
#include <cstddef>
#include <ctime>
#include <memory>
@@ -13,6 +14,7 @@
#include <openvino/op/gather.hpp>
#include <openvino/op/maximum.hpp>
#include <openvino/op/multiply.hpp>
#include <openvino/op/reshape.hpp>
#include <openvino/op/shape_of.hpp>
#include <openvino/op/sin.hpp>
#include <openvino/op/squeeze.hpp>
@@ -87,8 +89,11 @@ ov::Output<ov::Node> rope_yarn_ramp_mix(int n_dims, const float corr_dims[2], fl
auto ramp_y =
std::make_shared<ov::op::v1::Divide>(std::make_shared<ov::op::v1::Subtract>(dim_ids, corr_low), denom);
auto ramp_clamped = std::make_shared<ov::op::v0::Clamp>(ramp_y, 0.0f, 1.0f);
// rope_yarn_ramp returns (1 - clamp(y)), so invert before scaling
auto one = ov::op::v0::Constant::create(ov::element::f32, Shape{1, 1, 1, 1}, {1.0f});
auto ramp_inverted = std::make_shared<ov::op::v1::Subtract>(one, ramp_clamped);
auto ext_factor_node = ov::op::v0::Constant::create(ov::element::f32, Shape{}, {ext_factor});
auto ramp_mix = std::make_shared<ov::op::v1::Multiply>(ramp_clamped, ext_factor_node);
auto ramp_mix = std::make_shared<ov::op::v1::Multiply>(ramp_inverted, ext_factor_node);
return ramp_mix;
}
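The fix above matches the scalar YaRN ramp, which is an inverted clamp scaled by ext_factor. A sketch of the intended formula (here i is the rotary dimension index; ggml's reference form uses i0/2):
#include <algorithm>
// ramp(i) = 1 - clamp((i - low) / (high - low), 0, 1); mix = ramp(i) * ext_factor
static float yarn_ramp_mix(float i, float low, float high, float ext_factor) {
    const float y = (i - low) / std::max(0.001f, high - low);
    return (1.0f - std::clamp(y, 0.0f, 1.0f)) * ext_factor;
}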
@@ -115,6 +120,7 @@ void ggml_rope_yarn_corr_dims(int n_dims,
std::pair<ov::Output<Node>, ov::Output<Node>> make_sin_cos(int32_t * rope_params,
std::shared_ptr<ov::Node> inp_pos,
std::shared_ptr<ov::Node> rope_freqs_weight,
bool imrope,
bool stateful) {
if (stateful) {
inp_pos = std::make_shared<ov::op::v0::Squeeze>(inp_pos, ov::op::v0::Constant::create(ov::element::i64, {1}, {0}));
@@ -122,6 +128,13 @@ std::pair<ov::Output<Node>, ov::Output<Node>> make_sin_cos(int32_t * rope_params
auto pos_perm =
std::make_shared<ov::op::v0::Constant>(ov::element::i64, ov::Shape{3}, std::vector<int64_t>{2, 1, 0});
inp_pos = std::make_shared<ov::op::v1::Transpose>(inp_pos, pos_perm);
} else if (imrope) {
inp_pos = std::make_shared<ov::op::v0::Convert>(inp_pos, ov::element::f32);
auto pos_shape = ov::op::v0::Constant::create(ov::element::i64, ov::Shape{5}, {0, 0, 0, 4, -1});
inp_pos = std::make_shared<ov::op::v1::Reshape>(inp_pos, pos_shape, true);
auto pos_transpose_shape =
std::make_shared<ov::op::v0::Constant>(ov::element::i64, ov::Shape{5}, std::vector<int64_t>{0, 1, 2, 4, 3});
inp_pos = std::make_shared<ov::op::v1::Transpose>(inp_pos, pos_transpose_shape);
} else {
inp_pos = std::make_shared<ov::op::v0::Convert>(inp_pos, ov::element::f32);
auto pos_perm =
@@ -136,6 +149,7 @@ std::pair<ov::Output<Node>, ov::Output<Node>> make_sin_cos(int32_t * rope_params
float beta_fast;
float beta_slow;
const int n_dims = rope_params[1];
const size_t n_dims_half = n_dims >> 1;
const int n_ctx_orig = rope_params[4];
memcpy(&freq_base, rope_params + 5, sizeof(float));
memcpy(&freq_scale, rope_params + 6, sizeof(float));
@@ -146,57 +160,74 @@ std::pair<ov::Output<Node>, ov::Output<Node>> make_sin_cos(int32_t * rope_params
const float theta_scale = powf(freq_base, -2.0f / n_dims);
float corr_dims[2];
ggml_rope_yarn_corr_dims(n_dims, n_ctx_orig, freq_base, beta_fast, beta_slow, corr_dims);
std::vector<float> factor(n_dims / 2);
factor[0] = 1.0f;
for (size_t i = 1; i < factor.size(); i++) {
factor[i] = theta_scale * factor[i - 1];
}
std::vector<float> factor(n_dims_half);
Output<Node> freq_factors;
if (stateful) {
freq_factors =
std::make_shared<ov::op::v0::Constant>(ov::element::f32, ov::Shape{1, 1, factor.size()}, factor);
} else {
freq_factors =
std::make_shared<ov::op::v0::Constant>(ov::element::f32, ov::Shape{1, 1, 1, factor.size()}, factor);
}
if (rope_freqs_weight) {
freq_factors = std::make_shared<ov::op::v1::Divide>(freq_factors, rope_freqs_weight);
}
auto theta_extrap = std::make_shared<ov::op::v1::Multiply>(freq_factors, inp_pos);
auto theta_interp = std::make_shared<ov::op::v1::Multiply>(
theta_extrap, ov::op::v0::Constant::create(ov::element::f32, {1}, {freq_scale}));
Output<Node> theta;
float mscale = attn_factor;
if (ext_factor == 0.0f) {
theta = theta_interp;
} else {
auto ramp_mix = rope_yarn_ramp_mix(n_dims, corr_dims, ext_factor);
Output<Node> one;
if (stateful) {
one = ov::op::v0::Constant::create(ov::element::f32, Shape{1, 1, 1}, {1.0f});
} else {
one = ov::op::v0::Constant::create(ov::element::f32, Shape{1, 1, 1, 1}, {1.0f});
if (imrope) {
std::vector<int64_t> gather_indices(n_dims_half);
for (size_t j = 0; j < n_dims_half; j++) {
gather_indices[j] = j % 3;
factor[j] = std::pow(theta_scale, j);
}
auto gather_indices_const =
std::make_shared<ov::op::v0::Constant>(ov::element::i64, ov::Shape{n_dims_half}, gather_indices);
auto gather_axis = ov::op::v0::Constant::create(ov::element::i32, ov::Shape{}, {4});
inp_pos = std::make_shared<ov::op::v8::Gather>(inp_pos, gather_indices_const, gather_axis);
auto factor_const = std::make_shared<ov::op::v0::Constant>(ov::element::f32, ov::Shape{n_dims_half}, factor);
theta = std::make_shared<ov::op::v1::Multiply>(inp_pos, factor_const);
} else {
float corr_dims[2];
ggml_rope_yarn_corr_dims(n_dims, n_ctx_orig, freq_base, beta_fast, beta_slow, corr_dims);
factor[0] = 1.0f;
for (size_t i = 1; i < factor.size(); i++) {
factor[i] = theta_scale * factor[i - 1];
}
if (stateful) {
freq_factors =
std::make_shared<ov::op::v0::Constant>(ov::element::f32, ov::Shape{1, 1, factor.size()}, factor);
} else {
freq_factors =
std::make_shared<ov::op::v0::Constant>(ov::element::f32, ov::Shape{1, 1, 1, factor.size()}, factor);
}
if (rope_freqs_weight) {
freq_factors = std::make_shared<ov::op::v1::Divide>(freq_factors, rope_freqs_weight);
}
auto one_minus_ramp = std::make_shared<ov::op::v1::Subtract>(one, ramp_mix);
theta = std::make_shared<ov::op::v1::Add>(std::make_shared<ov::op::v1::Multiply>(theta_interp, one_minus_ramp),
std::make_shared<ov::op::v1::Multiply>(theta_extrap, ramp_mix));
mscale *= (1.0f + 0.1f * std::log(1.0f / freq_scale));
auto theta_extrap = std::make_shared<ov::op::v1::Multiply>(freq_factors, inp_pos);
auto theta_interp = std::make_shared<ov::op::v1::Multiply>(
theta_extrap, ov::op::v0::Constant::create(ov::element::f32, {1}, {freq_scale}));
if (ext_factor == 0.0f) {
theta = theta_interp;
} else {
auto ramp_mix = rope_yarn_ramp_mix(n_dims, corr_dims, ext_factor);
Output<Node> one;
if (stateful) {
one = ov::op::v0::Constant::create(ov::element::f32, Shape{1, 1, 1}, {1.0f});
} else {
one = ov::op::v0::Constant::create(ov::element::f32, Shape{1, 1, 1, 1}, {1.0f});
}
auto one_minus_ramp = std::make_shared<ov::op::v1::Subtract>(one, ramp_mix);
theta = std::make_shared<ov::op::v1::Add>(std::make_shared<ov::op::v1::Multiply>(theta_interp, one_minus_ramp),
std::make_shared<ov::op::v1::Multiply>(theta_extrap, ramp_mix));
mscale *= (1.0f + 0.1f * std::log(1.0f / freq_scale));
}
}
Output<Node> cos_theta = std::make_shared<ov::op::v0::Cos>(theta);
Output<Node> sin_theta = std::make_shared<ov::op::v0::Sin>(theta);
auto mscale_node = ov::op::v0::Constant::create(ov::element::f32, Shape{}, {mscale});
if (!imrope) {
auto mscale_node = ov::op::v0::Constant::create(ov::element::f32, Shape{}, {mscale});
cos_theta = std::make_shared<ov::op::v1::Multiply>(cos_theta, mscale_node);
sin_theta = std::make_shared<ov::op::v1::Multiply>(sin_theta, mscale_node);
}
cos_theta = std::make_shared<ov::op::v1::Multiply>(cos_theta, mscale_node);
sin_theta = std::make_shared<ov::op::v1::Multiply>(sin_theta, mscale_node);
return std::make_pair(sin_theta, cos_theta);
}
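In the imrope branch above, rotary frequency j is assigned position section j % 3, cycling over the t/h/w streams exposed by the {0, 0, 0, 4, -1} reshape (the fourth stream is left unused), with angle factor theta_scale^j. A scalar sketch under those assumptions:
#include <cmath>
#include <cstddef>
// pos_sections: the 4 per-token position streams; theta[j] = pos_sections[j % 3] * theta_scale^j
static void imrope_theta(const float pos_sections[4], float theta_scale,
                         float * theta, size_t n_dims_half) {
    for (size_t j = 0; j < n_dims_half; ++j) {
        theta[j] = pos_sections[j % 3] * std::pow(theta_scale, static_cast<float>(j));
    }
}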

View File

@@ -67,6 +67,7 @@ OutputVector rename_outputs_with_suffix(const OutputVector& outputs, const std::
std::pair<ov::Output<Node>, ov::Output<Node>> make_sin_cos(int32_t* rope_params,
std::shared_ptr<ov::Node> inp_pos,
std::shared_ptr<ov::Node> rope_freqs_weight = nullptr,
bool imrope = false,
bool stateful = false);
ov::Output<ov::Node> process_view_input(const NodeContext& context, int input_index, int slice_len = 0);

View File

@@ -81,8 +81,8 @@ ov::Tensor create_ov_output_tensor(std::shared_ptr<GgmlOvDecoder> ggml_decoder,
enum ggml_status ov_graph_compute_dynamic(ggml_cgraph * cgraph, std::shared_ptr<ov_runtime_context> r_ctx) {
auto & core = ov_singleton_core();
const auto & config = ggml_openvino_get_compile_config();
auto device = r_ctx->device;
bool stateful = r_ctx->stateful;
const auto & device = r_ctx->device;
const auto & stateful = r_ctx->stateful;
static auto is_static = false;
if (is_naive(cgraph)) {
@@ -106,14 +106,26 @@ enum ggml_status ov_graph_compute_dynamic(ggml_cgraph * cgraph, std::shared_ptr<
int64_t infer_end_time;
{
std::lock_guard<std::mutex> lock(r_ctx->ov_compute_mutex);
auto it = r_ctx->decoder_cache.find(key);
cache_hit = it != r_ctx->decoder_cache.end();
std::shared_ptr<decoder_runtime_ctx> entry;
ModelParams old_m_params;
{
std::lock_guard<std::mutex> map_lock(r_ctx->ctx_mutex);
auto it = r_ctx->decoder_cache.find(key);
cache_hit = it != r_ctx->decoder_cache.end();
if (cache_hit) {
entry = it->second;
} else {
auto mutex = std::make_shared<std::mutex>();
entry = std::make_shared<decoder_runtime_ctx>(mutex);
r_ctx->decoder_cache[key] = entry;
}
}
std::lock_guard<std::mutex> lock(*(entry->mutex));
if (cache_hit) {
ggml_decoder = it->second;
ggml_decoder = entry->ptr;
old_m_params = ggml_decoder->get_model_params();
cache_hit = old_m_params.can_reuse_dynamically(m_params);
}
@@ -126,7 +138,10 @@ enum ggml_status ov_graph_compute_dynamic(ggml_cgraph * cgraph, std::shared_ptr<
ggml_decoder->update_io(cgraph);
}
ggml_decoder->add_extra_inputs();
infer_request = r_ctx->infer_request_cache.at(key);
{
std::lock_guard<std::mutex> map_lock(r_ctx->ctx_mutex);
infer_request = r_ctx->infer_request_cache.at(key);
}
if (stateful) {
const auto * inp_pos = get_inp_pos_tensor(cgraph);
@@ -170,7 +185,10 @@ enum ggml_status ov_graph_compute_dynamic(ggml_cgraph * cgraph, std::shared_ptr<
conversion_end_time = decoder_end_time;
compile_end_time = decoder_end_time;
} else {
r_ctx->infer_request_cache.erase(key);
{
std::lock_guard<std::mutex> map_lock(r_ctx->ctx_mutex);
r_ctx->infer_request_cache.erase(key);
}
std::shared_ptr<ov::Model> model;
auto model_weights = GgmlOvDecoder::create_weight_nodes(cgraph);
@@ -199,8 +217,7 @@ enum ggml_status ov_graph_compute_dynamic(ggml_cgraph * cgraph, std::shared_ptr<
}
compile_end_time = ggml_time_us();
infer_request = std::make_shared<ov::InferRequest>(compiled_model.create_infer_request());
r_ctx->infer_request_cache[key] = infer_request;
r_ctx->decoder_cache[key] = ggml_decoder;
entry->ptr = ggml_decoder;
std::vector<std::string> ov_input_names;
std::vector<std::string> ov_output_names;
@@ -210,8 +227,13 @@ enum ggml_status ov_graph_compute_dynamic(ggml_cgraph * cgraph, std::shared_ptr<
for (const auto & ov_output : model->get_results()) {
ov_output_names.push_back(ov_output->get_friendly_name());
}
r_ctx->ov_input_names_cache[key] = std::move(ov_input_names);
r_ctx->ov_output_names_cache[key] = std::move(ov_output_names);
{
std::lock_guard<std::mutex> map_lock(r_ctx->ctx_mutex);
r_ctx->infer_request_cache[key] = infer_request;
r_ctx->ov_input_names_cache[key] = std::move(ov_input_names);
r_ctx->ov_output_names_cache[key] = std::move(ov_output_names);
}
if (stateful) {
const auto * inp_pos = get_inp_pos_tensor(cgraph);
@@ -224,8 +246,13 @@ enum ggml_status ov_graph_compute_dynamic(ggml_cgraph * cgraph, std::shared_ptr<
}
}
auto ov_input_names = r_ctx->ov_input_names_cache[key];
auto ov_output_names = r_ctx->ov_output_names_cache[key];
std::vector<std::string> ov_input_names;
std::vector<std::string> ov_output_names;
{
std::lock_guard<std::mutex> map_lock(r_ctx->ctx_mutex);
ov_input_names = r_ctx->ov_input_names_cache[key];
ov_output_names = r_ctx->ov_output_names_cache[key];
}
for (size_t i = 0; i < ov_input_names.size(); i++) {
auto param_name = ov_input_names[i];
@@ -306,12 +333,26 @@ enum ggml_status ov_graph_compute_static(ggml_cgraph * cgraph, std::shared_ptr<o
int64_t compile_end_time;
int64_t infer_end_time;
auto it = r_ctx->decoder_cache.find(key);
cache_hit = it != r_ctx->decoder_cache.end();
std::shared_ptr<decoder_runtime_ctx> entry;
ModelParams old_m_params;
{
std::lock_guard<std::mutex> map_lock(r_ctx->ctx_mutex);
auto it = r_ctx->decoder_cache.find(key);
cache_hit = it != r_ctx->decoder_cache.end();
if (cache_hit) {
entry = it->second;
} else {
auto mutex = std::make_shared<std::mutex>();
entry = std::make_shared<decoder_runtime_ctx>(mutex);
r_ctx->decoder_cache[key] = entry;
}
}
std::lock_guard<std::mutex> lock(*(entry->mutex));
if (cache_hit) {
ggml_decoder = it->second;
ggml_decoder = entry->ptr;
old_m_params = ggml_decoder->get_model_params();
cache_hit = old_m_params.can_reuse_statically(m_params);
}
@@ -325,14 +366,21 @@ enum ggml_status ov_graph_compute_static(ggml_cgraph * cgraph, std::shared_ptr<o
ggml_decoder->update_io(cgraph);
}
ggml_decoder->add_extra_inputs();
infer_request = is_prefill ? r_ctx->infer_request_cache_prefill.at(key) : r_ctx->infer_request_cache.at(key);
{
std::lock_guard<std::mutex> map_lock(r_ctx->ctx_mutex);
infer_request =
is_prefill ? r_ctx->infer_request_cache_prefill.at(key) : r_ctx->infer_request_cache.at(key);
}
decoder_end_time = ggml_time_us();
conversion_end_time = decoder_end_time;
compile_end_time = decoder_end_time;
} else {
r_ctx->infer_request_cache.erase(key);
r_ctx->infer_request_cache_prefill.erase(key);
{
std::lock_guard<std::mutex> map_lock(r_ctx->ctx_mutex);
r_ctx->infer_request_cache.erase(key);
r_ctx->infer_request_cache_prefill.erase(key);
}
std::shared_ptr<ov::Model> model;
auto model_weights = GgmlOvDecoder::create_weight_nodes(cgraph);
@@ -372,16 +420,14 @@ enum ggml_status ov_graph_compute_static(ggml_cgraph * cgraph, std::shared_ptr<o
compiled_model_decode = core.compile_model(model_decode, device, config);
}
r_ctx->infer_request_cache_prefill[key] =
std::make_shared<ov::InferRequest>(compiled_model_prefill.create_infer_request());
r_ctx->infer_request_cache[key] =
std::make_shared<ov::InferRequest>(compiled_model_decode.create_infer_request());
auto infer_request_prefill = std::make_shared<ov::InferRequest>(compiled_model_prefill.create_infer_request());
auto infer_request_decode = std::make_shared<ov::InferRequest>(compiled_model_decode.create_infer_request());
compile_end_time = ggml_time_us();
model = is_prefill ? model_prefill : model_decode;
ggml_decoder = is_prefill ? ggml_decoder_prefill : ggml_decoder_decode;
infer_request = is_prefill ? r_ctx->infer_request_cache_prefill[key] : r_ctx->infer_request_cache[key];
r_ctx->decoder_cache[key] = ggml_decoder;
infer_request = is_prefill ? infer_request_prefill : infer_request_decode;
entry->ptr = ggml_decoder;
std::vector<std::string> ov_input_names;
std::vector<std::string> ov_output_names;
@@ -391,18 +437,29 @@ enum ggml_status ov_graph_compute_static(ggml_cgraph * cgraph, std::shared_ptr<o
for (const auto & ov_output : model->get_results()) {
ov_output_names.push_back(ov_output->get_friendly_name());
}
r_ctx->ov_input_names_cache[key] = std::move(ov_input_names);
r_ctx->ov_output_names_cache[key] = std::move(ov_output_names);
{
std::lock_guard<std::mutex> map_lock(r_ctx->ctx_mutex);
r_ctx->infer_request_cache_prefill[key] = infer_request_prefill;
r_ctx->infer_request_cache[key] = infer_request_decode;
r_ctx->ov_input_names_cache[key] = std::move(ov_input_names);
r_ctx->ov_output_names_cache[key] = std::move(ov_output_names);
}
}
auto ov_input_names = r_ctx->ov_input_names_cache[key];
auto ov_output_names = r_ctx->ov_output_names_cache[key];
std::vector<std::string> ov_input_names_local;
std::vector<std::string> ov_output_names_local;
{
std::lock_guard<std::mutex> map_lock(r_ctx->ctx_mutex);
ov_input_names_local = r_ctx->ov_input_names_cache[key];
ov_output_names_local = r_ctx->ov_output_names_cache[key];
}
if (is_prefill) {
auto inp_len = inp_pos->ne[0];
for (int chunk_index = 0; chunk_index * prefill_chunk_size < inp_len; chunk_index++) {
for (size_t i = 0; i < ov_input_names.size(); i++) {
auto param_name = ov_input_names[i];
for (size_t i = 0; i < ov_input_names_local.size(); i++) {
auto param_name = ov_input_names_local[i];
auto input_tensor = get_ov_input_tensor_static_prefill(ggml_decoder, param_name, chunk_index);
infer_request->set_input_tensor(i, input_tensor);
@@ -412,8 +469,8 @@ enum ggml_status ov_graph_compute_static(ggml_cgraph * cgraph, std::shared_ptr<o
}
}
for (size_t i = 0; i < ov_output_names.size(); i++) {
auto * ggml_tensor = ggml_decoder->get_model_outputs().at(ov_output_names[i]);
for (size_t i = 0; i < ov_output_names_local.size(); i++) {
auto * ggml_tensor = ggml_decoder->get_model_outputs().at(ov_output_names_local[i]);
auto output_tensor = create_ov_output_tensor(ggml_decoder, infer_request, i, ggml_tensor);
infer_request->set_output_tensor(i, output_tensor);
}
@@ -421,16 +478,16 @@ enum ggml_status ov_graph_compute_static(ggml_cgraph * cgraph, std::shared_ptr<o
infer_request->infer();
if (getenv("GGML_OPENVINO_DEBUG_OUTPUT")) {
for (size_t i = 0; i < ov_output_names.size(); i++) {
for (size_t i = 0; i < ov_output_names_local.size(); i++) {
const auto output_tensor = infer_request->get_output_tensor(i);
print_output_tensor_info(ov_output_names[i], output_tensor, output_tensor.data());
print_output_tensor_info(ov_output_names_local[i], output_tensor, output_tensor.data());
}
}
}
infer_end_time = ggml_time_us();
} else {
for (size_t i = 0; i < ov_input_names.size(); i++) {
auto param_name = ov_input_names[i];
for (size_t i = 0; i < ov_input_names_local.size(); i++) {
auto param_name = ov_input_names_local[i];
auto input_tensor = get_ov_input_tensor_static_decode(ggml_decoder, param_name);
infer_request->set_input_tensor(i, input_tensor);
@@ -440,8 +497,8 @@ enum ggml_status ov_graph_compute_static(ggml_cgraph * cgraph, std::shared_ptr<o
}
}
for (size_t i = 0; i < ov_output_names.size(); i++) {
auto * ggml_tensor = ggml_decoder->get_model_outputs().at(ov_output_names[i]);
for (size_t i = 0; i < ov_output_names_local.size(); i++) {
auto * ggml_tensor = ggml_decoder->get_model_outputs().at(ov_output_names_local[i]);
auto output_tensor = create_ov_output_tensor(ggml_decoder, infer_request, i, ggml_tensor);
infer_request->set_output_tensor(i, output_tensor);
}
@@ -450,9 +507,9 @@ enum ggml_status ov_graph_compute_static(ggml_cgraph * cgraph, std::shared_ptr<o
infer_end_time = ggml_time_us();
if (getenv("GGML_OPENVINO_DEBUG_OUTPUT")) {
for (size_t i = 0; i < ov_output_names.size(); i++) {
for (size_t i = 0; i < ov_output_names_local.size(); i++) {
const auto output_tensor = infer_request->get_output_tensor(i);
print_output_tensor_info(ov_output_names[i], output_tensor, output_tensor.data());
print_output_tensor_info(ov_output_names_local[i], output_tensor, output_tensor.data());
}
}
}

View File

@@ -3,12 +3,15 @@
#include "ggml-impl.h"
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <memory>
#include <mutex>
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/infer_request.hpp>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>
struct graph_key {
@@ -40,11 +43,17 @@ struct graph_key_hash {
}
};
struct decoder_runtime_ctx {
decoder_runtime_ctx(std::shared_ptr<std::mutex> mutex) : mutex(std::move(mutex)) {}
std::shared_ptr<std::mutex> mutex;
std::shared_ptr<GgmlOvDecoder> ptr;
};
struct ov_runtime_context {
std::mutex ov_compute_mutex;
mutable std::mutex ctx_mutex;
std::string device;
bool stateful;
std::unordered_map<graph_key, std::shared_ptr<GgmlOvDecoder>, graph_key_hash> decoder_cache;
std::unordered_map<graph_key, std::shared_ptr<decoder_runtime_ctx>, graph_key_hash> decoder_cache;
std::unordered_map<graph_key, std::shared_ptr<ov::InferRequest>, graph_key_hash> infer_request_cache;
std::unordered_map<graph_key, std::shared_ptr<ov::InferRequest>, graph_key_hash> infer_request_cache_prefill;
std::unordered_map<graph_key, std::vector<std::string>, graph_key_hash> ov_input_names_cache;
@@ -53,11 +62,22 @@ struct ov_runtime_context {
// Simultaneous stateful inference request support to be added.
size_t stateful_kv_size;
std::map<std::string, std::string> kv_state_input_name_map;
std::atomic<int> backend_count;
ov_runtime_context() :
device("CPU"),
stateful(false),
stateful_kv_size(0) {}
stateful_kv_size(0),
backend_count(0) {}
void clear_caches() {
std::lock_guard<std::mutex> lock(ctx_mutex);
decoder_cache.clear();
infer_request_cache.clear();
infer_request_cache_prefill.clear();
ov_input_names_cache.clear();
ov_output_names_cache.clear();
}
};
enum ggml_status ov_graph_compute(struct ggml_cgraph * cgraph, ggml_backend_t backend);
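The decoder_cache now stores a per-key entry with its own mutex, so the global lock is held only for the map lookup while compilation and inference serialize per graph. The access pattern reduces to the following (a generic sketch, not the backend's exact types):
#include <memory>
#include <mutex>
#include <unordered_map>
// Look up or create the per-key entry under the short map lock; callers then
// lock the entry's own mutex (not map_mutex) around the expensive compile/infer work.
template <typename Key, typename Entry>
std::shared_ptr<Entry> get_or_create(std::unordered_map<Key, std::shared_ptr<Entry>> & cache,
                                     std::mutex & map_mutex, const Key & key) {
    std::lock_guard<std::mutex> lock(map_mutex);
    auto & slot = cache[key];  // default-constructs an empty shared_ptr if absent
    if (!slot) {
        slot = std::make_shared<Entry>();
    }
    return slot;
}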

View File

@@ -240,6 +240,27 @@ struct ggml_webgpu_ssm_conv_pipeline_key {
}
};
/** CONV 2D */
struct ggml_webgpu_conv2d_pipeline_key {
ggml_type weight_type;
ggml_type input_type;
ggml_type output_type;
bool operator==(const ggml_webgpu_conv2d_pipeline_key & other) const {
return weight_type == other.weight_type && input_type == other.input_type && output_type == other.output_type;
}
};
struct ggml_webgpu_conv2d_pipeline_key_hash {
size_t operator()(const ggml_webgpu_conv2d_pipeline_key & key) const {
size_t seed = 0;
ggml_webgpu_hash_combine(seed, key.weight_type);
ggml_webgpu_hash_combine(seed, key.input_type);
ggml_webgpu_hash_combine(seed, key.output_type);
return seed;
}
};
/** Gated Delta Net **/
struct ggml_webgpu_gated_delta_net_pipeline_key {
int type;
@@ -789,6 +810,8 @@ class ggml_webgpu_shader_lib {
rope_pipelines;
std::unordered_map<ggml_webgpu_soft_max_pipeline_key, webgpu_pipeline, ggml_webgpu_soft_max_pipeline_key_hash>
soft_max_pipelines;
std::unordered_map<ggml_webgpu_conv2d_pipeline_key, webgpu_pipeline, ggml_webgpu_conv2d_pipeline_key_hash>
conv2d_pipelines;
public:
ggml_webgpu_shader_lib(wgpu::Device device) { this->device = device; }
@@ -2382,6 +2405,46 @@ class ggml_webgpu_shader_lib {
return soft_max_pipelines[key];
}
webgpu_pipeline get_conv2d_pipeline(const ggml_webgpu_shader_lib_context & context) {
ggml_webgpu_conv2d_pipeline_key key = {};
key.weight_type = context.src0->type;
key.input_type = context.src1->type;
key.output_type = context.dst->type;
auto it = conv2d_pipelines.find(key);
if (it != conv2d_pipelines.end()) {
return it->second;
}
std::vector<std::string> defines;
std::string variant = "conv_2d";
auto push_type_defines = [&](const char * prefix, ggml_type type) {
std::string s_prefix = prefix;
if (type == GGML_TYPE_F32) {
defines.push_back(s_prefix + "_F32");
} else if (type == GGML_TYPE_F16) {
defines.push_back(s_prefix + "_F16");
} else {
GGML_ABORT("Unsupported type for CONV_2D shader");
}
};
push_type_defines("WEIGHT", key.weight_type);
push_type_defines("INPUT", key.input_type);
push_type_defines("OUTPUT", key.output_type);
defines.push_back(std::string("WG_SIZE=") + std::to_string(context.max_wg_size));
auto processed = preprocessor.preprocess(wgsl_conv2d, defines);
auto decisions = std::make_shared<ggml_webgpu_generic_shader_decisions>();
decisions->wg_size = context.max_wg_size;
webgpu_pipeline pipeline = ggml_webgpu_create_pipeline(device, processed, variant);
pipeline.context = decisions;
conv2d_pipelines[key] = pipeline;
return conv2d_pipelines[key];
}
private:
static webgpu_pipeline ggml_webgpu_create_pipeline(wgpu::Device & device,
std::string shader_code,

View File

@@ -8,6 +8,7 @@
#include "ggml-backend-impl.h"
#include "ggml-impl.h"
#include "ggml-webgpu-shader-lib.hpp"
#include "ggml.h"
#ifdef __EMSCRIPTEN__
# include <emscripten/emscripten.h>
@@ -211,6 +212,7 @@ struct webgpu_global_context_struct {
wgpu::Buffer memset_params_buf;
webgpu_pipeline memset_pipeline;
// TODO: We should rework the CPU profiling time handling to make it more useful. ref: https://github.com/ggml-org/llama.cpp/pull/22050
#ifdef GGML_WEBGPU_CPU_PROFILE
// Profiling: labeled CPU time in ms (total)
std::unordered_map<std::string, double> cpu_time_ms;
@@ -218,11 +220,6 @@ struct webgpu_global_context_struct {
std::unordered_map<std::string, double> cpu_detail_ms;
#endif
#ifdef GGML_WEBGPU_GPU_PROFILE
// Profiling: per-shader GPU time in ms
std::unordered_map<std::string, double> shader_gpu_time_ms;
#endif
#ifdef GGML_WEBGPU_DEBUG
wgpu::Buffer debug_host_buf;
wgpu::Buffer debug_dev_buf;
@@ -268,10 +265,12 @@ struct webgpu_context_struct {
size_t memset_bytes_per_thread;
#ifdef GGML_WEBGPU_GPU_PROFILE
wgpu::Buffer profile_timestamp_dev_buf;
wgpu::Buffer profile_timestamp_host_buf;
wgpu::QuerySet profile_timestamp_query_set;
uint32_t profile_timestamp_query_count = 0;
// Profiling: per-shader GPU time in ms
std::unordered_map<std::string, double> shader_gpu_time_ms;
wgpu::Buffer profile_timestamp_dev_buf;
wgpu::Buffer profile_timestamp_host_buf;
wgpu::QuerySet profile_timestamp_query_set;
uint32_t profile_timestamp_query_count = 0;
#endif
~webgpu_context_struct() {
@@ -713,12 +712,12 @@ static void ggml_backend_webgpu_free(ggml_backend_t backend) {
#ifdef GGML_WEBGPU_GPU_PROFILE
std::cout << "\n[ggml_webgpu gpu profiling summary]\n";
double total_gpu = 0.0;
for (const auto & kv : ctx->webgpu_ctx->global_ctx->shader_gpu_time_ms) {
for (const auto & kv : ctx->webgpu_ctx->shader_gpu_time_ms) {
total_gpu += kv.second;
}
std::cout << "ggml_webgpu: total gpu time (all shaders): " << total_gpu << " ms\n";
std::cout << "\nggml_webgpu: gpu breakdown:\n";
for (const auto & kv : ctx->webgpu_ctx->global_ctx->shader_gpu_time_ms) {
for (const auto & kv : ctx->webgpu_ctx->shader_gpu_time_ms) {
double pct = (total_gpu > 0.0) ? (kv.second / total_gpu * 100.0) : 0.0;
std::cout << "ggml_webgpu: " << kv.first << ": " << kv.second << " ms (" << std::fixed << std::setprecision(2)
<< pct << "%)\n";
@@ -923,6 +922,87 @@ static webgpu_encoded_op ggml_webgpu_solve_tri(webgpu_context & ctx,
return ggml_backend_webgpu_build(ctx, pipeline, params, entries, wg_x, wg_y);
}
static webgpu_encoded_op ggml_webgpu_conv_2d(webgpu_context & ctx,
ggml_tensor * src0,
ggml_tensor * src1,
ggml_tensor * dst) {
const int32_t s0 = ggml_get_op_params_i32(dst, 0);
const int32_t s1 = ggml_get_op_params_i32(dst, 1);
const int32_t p0 = ggml_get_op_params_i32(dst, 2);
const int32_t p1 = ggml_get_op_params_i32(dst, 3);
const int32_t d0 = ggml_get_op_params_i32(dst, 4);
const int32_t d1 = ggml_get_op_params_i32(dst, 5);
std::vector<uint32_t> params = {
(uint32_t) (ggml_webgpu_tensor_misalignment(ctx, src0) / ggml_type_size(src0->type)),
(uint32_t) (ggml_webgpu_tensor_misalignment(ctx, src1) / ggml_type_size(src1->type)),
(uint32_t) (ggml_webgpu_tensor_misalignment(ctx, dst) / ggml_type_size(dst->type)),
(uint32_t) (src0->nb[0] / ggml_type_size(src0->type)),
(uint32_t) (src0->nb[1] / ggml_type_size(src0->type)),
(uint32_t) (src0->nb[2] / ggml_type_size(src0->type)),
(uint32_t) (src0->nb[3] / ggml_type_size(src0->type)),
(uint32_t) (src1->nb[0] / ggml_type_size(src1->type)),
(uint32_t) (src1->nb[1] / ggml_type_size(src1->type)),
(uint32_t) (src1->nb[2] / ggml_type_size(src1->type)),
(uint32_t) (src1->nb[3] / ggml_type_size(src1->type)),
(uint32_t) (dst->nb[0] / ggml_type_size(dst->type)),
(uint32_t) (dst->nb[1] / ggml_type_size(dst->type)),
(uint32_t) (dst->nb[2] / ggml_type_size(dst->type)),
(uint32_t) (dst->nb[3] / ggml_type_size(dst->type)),
(uint32_t) src0->ne[0],
(uint32_t) src0->ne[1],
(uint32_t) src0->ne[2],
(uint32_t) src1->ne[0],
(uint32_t) src1->ne[1],
(uint32_t) dst->ne[0],
(uint32_t) dst->ne[1],
(uint32_t) dst->ne[2],
(uint32_t) dst->ne[3],
(uint32_t) s0,
(uint32_t) s1,
(uint32_t) p0,
(uint32_t) p1,
(uint32_t) d0,
(uint32_t) d1,
};
std::vector<wgpu::BindGroupEntry> entries = {
ggml_webgpu_make_tensor_bind_group_entry(ctx, 0, src0),
ggml_webgpu_make_tensor_bind_group_entry(ctx, 1, src1),
ggml_webgpu_make_tensor_bind_group_entry(ctx, 2, dst),
};
uint32_t max_wg_size =
std::min((uint32_t) WEBGPU_MAX_WG_SIZE, ctx->global_ctx->capabilities.limits.maxComputeWorkgroupSizeX);
uint32_t wg_size =
std::min((uint32_t) ctx->global_ctx->capabilities.limits.maxComputeInvocationsPerWorkgroup, max_wg_size);
ggml_webgpu_shader_lib_context shader_lib_ctx = {};
shader_lib_ctx.src0 = src0;
shader_lib_ctx.src1 = src1;
shader_lib_ctx.dst = dst;
shader_lib_ctx.max_wg_size = wg_size;
webgpu_pipeline pipeline = ctx->shader_lib->get_conv2d_pipeline(shader_lib_ctx);
auto * decisions = static_cast<ggml_webgpu_generic_shader_decisions *>(pipeline.context.get());
uint32_t n_out = ggml_nelements(dst);
uint32_t total_wg = CEIL_DIV(n_out, decisions->wg_size);
uint32_t max_wg = ctx->global_ctx->capabilities.limits.maxComputeWorkgroupsPerDimension;
uint32_t wg_x = std::min(total_wg, max_wg);
uint32_t wg_y = CEIL_DIV(total_wg, wg_x);
return ggml_backend_webgpu_build(ctx, pipeline, params, entries, wg_x, wg_y);
}
static webgpu_encoded_op ggml_webgpu_ssm_conv(webgpu_context & ctx,
ggml_tensor * src0,
ggml_tensor * src1,
@@ -2479,6 +2559,8 @@ static std::optional<webgpu_encoded_op> ggml_webgpu_encode_node(webgpu_context c
case GGML_OP_SUM:
case GGML_OP_SUM_ROWS:
return ggml_webgpu_sum_rows(ctx, src0, node);
case GGML_OP_CONV_2D:
return ggml_webgpu_conv_2d(ctx, src0, src1, node);
default:
return std::nullopt;
}
@@ -2511,7 +2593,7 @@ static void ggml_backend_webgpu_collect_profile_results(webgpu_context &
for (size_t i = 0; i < pipeline_names.size(); ++i) {
// WebGPU timestamps are in ns; convert to ms.
const double elapsed_ms = double(ts_data[2 * i + 1] - ts_data[2 * i]) * 1e-6;
ctx->global_ctx->shader_gpu_time_ms[pipeline_names[i]] += elapsed_ms;
ctx->shader_gpu_time_ms[pipeline_names[i]] += elapsed_ms;
}
ctx->profile_timestamp_host_buf.Unmap();
@@ -3497,6 +3579,11 @@ static bool ggml_backend_webgpu_device_supports_op(ggml_backend_dev_t dev, const
case GGML_OP_SOLVE_TRI:
supports_op = op->type == GGML_TYPE_F32 && src0->type == GGML_TYPE_F32 && src1->type == GGML_TYPE_F32;
break;
case GGML_OP_CONV_2D:
supports_op = (op->type == GGML_TYPE_F32 || op->type == GGML_TYPE_F16) &&
(src0->type == GGML_TYPE_F32 || src0->type == GGML_TYPE_F16) &&
(src1->type == GGML_TYPE_F32 || src1->type == GGML_TYPE_F16);
break;
case GGML_OP_SSM_CONV:
supports_op = op->type == GGML_TYPE_F32;
break;

View File

@@ -0,0 +1,165 @@
#include "common_decls.tmpl"
enable f16;
@group(0) @binding(0)
#if defined(WEIGHT_F32)
var<storage, read_write> weights: array<f32>;
#elif defined(WEIGHT_F16)
var<storage, read_write> weights: array<f16>;
#endif
@group(0) @binding(1)
#if defined(INPUT_F32)
var<storage, read_write> input: array<f32>;
#elif defined(INPUT_F16)
var<storage, read_write> input: array<f16>;
#endif
@group(0) @binding(2)
#if defined(OUTPUT_F32)
var<storage, read_write> output: array<f32>;
#elif defined(OUTPUT_F16)
var<storage, read_write> output: array<f16>;
#endif
struct Params {
offset_w: u32,
offset_i: u32,
offset_o: u32,
// element strides
sw0: u32, sw1: u32, sw2: u32, sw3: u32,
si0: u32, si1: u32, si2: u32, si3: u32,
so0: u32, so1: u32, so2: u32, so3: u32,
// kernel dimensions
KW: u32, KH: u32, IC: u32,
// input dimensions
IW: u32, IH: u32,
// output dimensions
OW: u32, OH: u32, OC_out: u32, N_out: u32,
// stride
s0: u32, s1: u32,
// padding
p0: u32, p1: u32,
// dilation
d0: u32, d1: u32,
};
@group(0) @binding(3)
var<uniform> params: Params;
fn load_weight(idx: u32) -> f32 {
#if defined(WEIGHT_F32)
return weights[idx];
#elif defined(WEIGHT_F16)
return f32(weights[idx]);
#endif
}
fn load_input(idx: u32) -> f32 {
#if defined(INPUT_F32)
return input[idx];
#elif defined(INPUT_F16)
return f32(input[idx]);
#endif
}
fn store_output(idx: u32, val: f32) {
#if defined(OUTPUT_F32)
output[idx] = val;
#elif defined(OUTPUT_F16)
output[idx] = f16(val);
#endif
}
fn ceil_div_u32(x: u32, y: u32) -> u32 {
return (x + y - 1) / y;
}
// returns the first valid kernel index k such that base + k * step >= 0
fn first_valid_k(base: i32, step: u32) -> u32 {
if (base >= 0) {
return 0;
}
return ceil_div_u32(u32(-base), step);
}
// returns the first invalid kernel index k such that base + k * step >= limit, so valid k lie in [0, end_valid_k)
fn end_valid_k(base: i32, step: u32, limit: u32, k_max: u32) -> u32 {
let remaining = i32(limit) - base;
if (remaining <= 0) {
return 0;
}
return min(k_max, ceil_div_u32(u32(remaining), step));
}
@compute @workgroup_size(WG_SIZE)
fn main(
@builtin(global_invocation_id) gid: vec3<u32>,
@builtin(num_workgroups) num_wg: vec3<u32>
) {
let threads_per_group = u32(WG_SIZE);
let i_out = gid.x + (num_wg.x * threads_per_group) * gid.y;
let n_out = params.OW * params.OH * params.OC_out * params.N_out;
var sum: f32 = 0.0;
if (i_out >= n_out) {
return;
}
// Kernel layout: [KW, KH, IC, ..]
// Input layout: [IW, IH, .., ..]
// Output layout: [OW, OH, OC, N]
var i = i_out;
let n = i / (params.OC_out * params.OH * params.OW);
i = i % (params.OC_out * params.OH * params.OW);
let oc = i / (params.OH * params.OW);
i = i % (params.OH * params.OW);
let oh = i / params.OW;
let ow = i % params.OW;
let ow_base = i32(ow * params.s0) - i32(params.p0);
let oh_base = i32(oh * params.s1) - i32(params.p1);
// clip the valid kernel window once
let kw_begin = first_valid_k(ow_base, params.d0);
let kw_end = end_valid_k(ow_base, params.d0, params.IW, params.KW);
let kh_begin = first_valid_k(oh_base, params.d1);
let kh_end = end_valid_k(oh_base, params.d1, params.IH, params.KH);
// entire receptive field is out of bounds
if (kw_begin >= kw_end || kh_begin >= kh_end) {
let out_idx = params.offset_o + ow * params.so0 + oh * params.so1 + oc * params.so2 + n * params.so3;
store_output(out_idx, 0.0);
return;
}
let weight_oc_base = params.offset_w + oc * params.sw3;
let input_n_base = params.offset_i + n * params.si3;
for (var ic: u32 = 0; ic < params.IC; ic += 1) {
let w_base_ic = ic * params.sw2 + weight_oc_base;
let in_base = ic * params.si2 + input_n_base;
for (var kh: u32 = kh_begin; kh < kh_end; kh += 1) {
let ih = u32(oh_base + i32(kh * params.d1));
let w_row_base = w_base_ic + kh * params.sw1;
let in_row_base = in_base + ih * params.si1;
for (var kw: u32 = kw_begin; kw < kw_end; kw += 1) {
let iw = u32(ow_base + i32(kw * params.d0));
let w_idx = w_row_base + kw * params.sw0;
let in_idx = in_row_base + iw * params.si0;
sum += load_weight(w_idx) * load_input(in_idx);
}
}
}
let out_idx = params.offset_o + ow * params.so0 + oh * params.so1 + oc * params.so2 + n * params.so3;
store_output(out_idx, sum);
}
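The window clipping above replaces a per-tap bounds check with two closed-form bounds. A quick host-side sketch (mirroring the WGSL helpers; the sweep ranges are arbitrary) confirming that [first_valid_k, end_valid_k) accepts exactly the taps a naive `0 <= base + k * step < limit` test would:
// Host-side mirror of the WGSL helpers above, checked against the naive
// per-tap bounds test over a sweep of bases, dilations and input widths.
#include <algorithm>
#include <cassert>
#include <cstdint>
static uint32_t ceil_div_u32(uint32_t x, uint32_t y) { return (x + y - 1) / y; }
static uint32_t first_valid_k(int32_t base, uint32_t step) {
    return base >= 0 ? 0 : ceil_div_u32((uint32_t) -base, step);
}
static uint32_t end_valid_k(int32_t base, uint32_t step, uint32_t limit, uint32_t k_max) {
    const int32_t remaining = (int32_t) limit - base;
    if (remaining <= 0) {
        return 0;
    }
    return std::min(k_max, ceil_div_u32((uint32_t) remaining, step));
}
int main() {
    const uint32_t k_max = 7; // kernel size, illustrative
    for (int32_t base = -16; base <= 16; ++base) {       // ow*s0 - p0 can be negative
        for (uint32_t d = 1; d <= 4; ++d) {              // dilation
            for (uint32_t iw = 1; iw <= 20; ++iw) {      // input width
                const uint32_t kb = first_valid_k(base, d);
                const uint32_t ke = end_valid_k(base, d, iw, k_max);
                for (uint32_t k = 0; k < k_max; ++k) {
                    const int32_t pos     = base + (int32_t) (k * d);
                    const bool naive_ok   = pos >= 0 && pos < (int32_t) iw;
                    const bool clipped_ok = k >= kb && k < ke;
                    assert(naive_ok == clipped_ok);
                }
            }
        }
    }
    return 0;
}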

View File

@@ -155,6 +155,8 @@ if (NOT WIN32 OR NOT BUILD_SHARED_LIBS)
llama_build_and_test(test-grammar-integration.cpp)
llama_build_and_test(test-llama-grammar.cpp)
llama_build_and_test(test-chat.cpp WORKING_DIRECTORY ${PROJECT_SOURCE_DIR})
target_include_directories(test-chat PRIVATE ${PROJECT_SOURCE_DIR}/tools/server)
target_link_libraries(test-chat PRIVATE server-context)
# TODO: disabled on loongarch64 because the ggml-ci node lacks Python 3.8
if (NOT ${CMAKE_SYSTEM_PROCESSOR} MATCHES "loongarch64")
llama_build_and_test(test-json-schema-to-grammar.cpp WORKING_DIRECTORY ${PROJECT_SOURCE_DIR})

View File

@@ -7,6 +7,7 @@
//
#include "../src/llama-grammar.h"
#include "../src/unicode.h"
#include "../tools/server/server-chat.h"
#include "chat-auto-parser.h"
#include "chat.h"
#include "common.h"
@@ -1514,6 +1515,117 @@ static void test_tools_oaicompat_json_conversion() {
common_chat_tools_to_json_oaicompat({ special_function_tool }).dump(2));
}
static void test_convert_responses_to_chatcmpl() {
LOG_DBG("%s\n", __func__);
// Test basic conversion with input messages (user/assistant alternating)
{
json input = json::parse(R"({
"input": [
{
"type": "message",
"role": "user",
"content": "hi wassup"
},
{
"type": "message",
"role": "assistant",
"content": "Hey! 👋 Not much, just here ready to chat. What's up with you? Anything I can help you with today?"
},
{
"type": "message",
"role": "user",
"content": "hi"
}
],
"model": "gpt-5-mini",
"stream": false,
"text": {},
"reasoning": {
"effort": "medium"
}
})");
json result = server_chat_convert_responses_to_chatcmpl(input);
// Verify messages were converted correctly
assert_equals(true, result.contains("messages"));
assert_equals(true, result.at("messages").is_array());
assert_equals((size_t)3, result.at("messages").size());
// Check first message (user)
const auto & msg0 = result.at("messages")[0];
assert_equals(std::string("user"), msg0.at("role").get<std::string>());
assert_equals(true, msg0.at("content").is_array());
assert_equals(std::string("text"), msg0.at("content")[0].at("type").get<std::string>());
assert_equals(std::string("hi wassup"), msg0.at("content")[0].at("text").get<std::string>());
// Check second message (assistant)
const auto & msg1 = result.at("messages")[1];
assert_equals(std::string("assistant"), msg1.at("role").get<std::string>());
assert_equals(true, msg1.at("content").is_array());
assert_equals(std::string("text"), msg1.at("content")[0].at("type").get<std::string>());
assert_equals(std::string("Hey! 👋 Not much, just here ready to chat. What's up with you? Anything I can help you with today?"), msg1.at("content")[0].at("text").get<std::string>());
// Check third message (user)
const auto & msg2 = result.at("messages")[2];
assert_equals(std::string("user"), msg2.at("role").get<std::string>());
assert_equals(true, msg2.at("content").is_array());
assert_equals(std::string("text"), msg2.at("content")[0].at("type").get<std::string>());
assert_equals(std::string("hi"), msg2.at("content")[0].at("text").get<std::string>());
// Verify other fields preserved
assert_equals(std::string("gpt-5-mini"), result.at("model").get<std::string>());
assert_equals(false, result.at("stream").get<bool>());
}
// Test string input
{
json input = json::parse(R"({
"input": "Hello, world!",
"model": "test-model"
})");
json result = server_chat_convert_responses_to_chatcmpl(input);
assert_equals((size_t)1, result.at("messages").size());
const auto & msg = result.at("messages")[0];
assert_equals(std::string("user"), msg.at("role").get<std::string>());
assert_equals(std::string("Hello, world!"), msg.at("content").get<std::string>());
}
// Test with instructions (system message)
{
json input = json::parse(R"({
"input": "Hello",
"instructions": "You are a helpful assistant.",
"model": "test-model"
})");
json result = server_chat_convert_responses_to_chatcmpl(input);
assert_equals((size_t)2, result.at("messages").size());
const auto & sys_msg = result.at("messages")[0];
assert_equals(std::string("system"), sys_msg.at("role").get<std::string>());
assert_equals(std::string("You are a helpful assistant."), sys_msg.at("content").get<std::string>());
}
// Test with max_output_tokens conversion
{
json input = json::parse(R"({
"input": "Hello",
"model": "test-model",
"max_output_tokens": 100
})");
json result = server_chat_convert_responses_to_chatcmpl(input);
assert_equals(true, result.contains("max_tokens"));
assert_equals(false, result.contains("max_output_tokens"));
assert_equals(100, result.at("max_tokens").get<int>());
}
}
static void test_template_output_peg_parsers(bool detailed_debug) {
LOG_DBG("%s\n", __func__);
@@ -3595,6 +3707,51 @@ static void test_template_output_peg_parsers(bool detailed_debug) {
.run();
}
// Reka Edge
{
auto tst = peg_tester("models/templates/Reka-Edge.jinja", detailed_debug);
tst.test("Hello, world!\nWhat's up?")
.enable_thinking(false)
.expect(message_assist)
.run();
tst.test("I'm\nthinking</think>\n\nHello, world!\nWhat's up?")
.enable_thinking(true)
.reasoning_format(COMMON_REASONING_FORMAT_DEEPSEEK)
.expect(message_assist_thoughts)
.run();
tst.test("<tool_call>\n{\"name\": \"special_function\", \"arguments\": {\"arg1\": 1}}\n</tool_call>")
.enable_thinking(false)
.tools({ special_function_tool })
.expect(message_assist_call)
.run();
tst.test("Hello, world!\nWhat's up?\n<tool_call>\n{\"name\": \"special_function\", \"arguments\": {\"arg1\": 1}}\n</tool_call>")
.enable_thinking(false)
.tools({ special_function_tool })
.expect(message_assist_call_content)
.run();
tst.test("I'm\nthinking</think>\n\n<tool_call>\n{\"name\": \"special_function\", \"arguments\": {\"arg1\": 1}}\n</tool_call>")
.enable_thinking(true)
.reasoning_format(COMMON_REASONING_FORMAT_DEEPSEEK)
.tools({ special_function_tool })
.expect(message_assist_call_thoughts)
.run();
tst.test("<tool_call>\n{\"name\": \"special_function\", \"arguments\": {\"arg1\": 1}}\n</tool_call>\n<tool_call>\n{\"name\": \"special_function_with_opt\", \"arguments\": {\"arg1\": 1, \"arg2\": 2}}\n</tool_call>")
.enable_thinking(false)
.parallel_tool_calls(true)
.tools({ special_function_tool, special_function_tool_with_optional_param })
.expect_tool_calls({
{ "special_function", R"({"arg1": 1})", {} },
{ "special_function_with_opt", R"({"arg1": 1, "arg2": 2})", {} },
})
.run();
tst.test("<tool_call>\n{\"name\": \"special_function\", \"arguments\": {\"arg")
.enable_thinking(false)
.tools({ special_function_tool })
.is_partial(true)
.expect(message_assist_call_cutoff_args)
.run();
}
// Apriel 1.5
{
auto tst = peg_tester("models/templates/unsloth-Apriel-1.5.jinja", detailed_debug);
@@ -4077,6 +4234,55 @@ static void test_template_output_peg_parsers(bool detailed_debug) {
}
}
static void test_reka_edge_common_path() {
auto tmpls = read_templates("models/templates/Reka-Edge.jinja");
{
common_chat_templates_inputs inputs;
common_chat_msg system_msg;
system_msg.role = "system";
system_msg.content = "Use tools when needed.";
common_chat_msg tool_call_msg = simple_assist_msg("", "", "special_function", "{\"arg1\": 1}");
common_chat_msg tool_msg;
tool_msg.role = "tool";
tool_msg.tool_name = "special_function";
tool_msg.tool_call_id = "call0";
tool_msg.content = "Sunny";
inputs.messages = { system_msg, message_user, tool_call_msg, tool_msg, message_user };
inputs.tools = { special_function_tool };
inputs.enable_thinking = true;
inputs.add_generation_prompt = true;
auto params = common_chat_templates_apply(tmpls.get(), inputs);
if (params.prompt.find("<tool_response>\nSunny\n</tool_response><sep>") == std::string::npos) {
throw std::runtime_error("Reka Edge prompt did not render tool response history");
}
if (params.prompt.rfind("assistant: <think>\n") == std::string::npos) {
throw std::runtime_error("Reka Edge prompt did not render thinking generation prompt");
}
}
{
common_chat_templates_inputs inputs;
inputs.messages = {
message_user,
simple_assist_msg("The first point is")
};
inputs.add_generation_prompt = false;
inputs.enable_thinking = false;
inputs.chat_template_kwargs["continue_final_message"] = "true";
auto params = common_chat_templates_apply(tmpls.get(), inputs);
if (string_ends_with(params.prompt, "<sep>")) {
throw std::runtime_error("Reka Edge continue_final_message unexpectedly closed the assistant turn");
}
}
}
// Test the developer role to system workaround with a simple mock template
static void test_developer_role_to_system_workaround() {
LOG_DBG("%s\n", __func__);
@@ -4197,7 +4403,7 @@ int main(int argc, char ** argv) {
bool detailed_debug = false;
bool only_run_filtered = false;
// Check for --template flag
// Check for --template and --detailed flags
for (int i = 1; i < argc; i++) {
std::string arg = argv[i];
if (arg == "--template" && i + 1 < argc) {
@@ -4222,7 +4428,20 @@ int main(int argc, char ** argv) {
}
#ifndef _WIN32
if (argc > 1) {
// Check if any argument is a .jinja file (for template format detection mode)
bool has_jinja_files = false;
for (int i = 1; i < argc; i++) {
std::string arg = argv[i];
if (arg == "--detailed") {
continue;
}
if (arg.size() >= 6 && arg.rfind(".jinja") == arg.size() - 6) {
has_jinja_files = true;
break;
}
}
if (has_jinja_files) {
common_chat_templates_inputs inputs;
common_chat_msg msg;
msg.role = "user";
@@ -4255,7 +4474,9 @@ int main(int argc, char ** argv) {
test_msg_diffs_compute();
test_msgs_oaicompat_json_conversion();
test_tools_oaicompat_json_conversion();
test_convert_responses_to_chatcmpl();
test_developer_role_to_system_workaround();
test_reka_edge_common_path();
test_template_output_peg_parsers(detailed_debug);
std::cout << "\n[chat] All tests passed!" << '\n';
}

View File

@@ -40,6 +40,7 @@ add_library(mtmd
models/deepseekocr.cpp
models/mobilenetv5.cpp
models/youtuvl.cpp
models/yasa2.cpp
)
set_target_properties(mtmd PROPERTIES

View File

@@ -242,6 +242,15 @@
#define TN_STD_BIAS "v.std_bias"
#define TN_STD_SCALE "v.std_scale"
// yasa2
#define TN_YASA_PATCH_LN_W "v.patch_ln.weight"
#define TN_YASA_PATCH_LN_B "v.patch_ln.bias"
#define TN_YASA_BACKBONE_LN_W "v.backbone_ln.weight"
#define TN_YASA_BACKBONE_LN_B "v.backbone_ln.bias"
#define TN_YASA_POS_EMBD "v.vision_pos_embed"
#define TN_YASA_STAGE_DOWN_LN "v.stage.%d.down.ln.%s"
#define TN_YASA_STAGE_DOWN_CONV "v.stage.%d.down.conv.%s"
#define TN_YASA_STAGE_BLK "v.stage.%d.blk.%d.%s.%s"
// align x to upper multiple of n
#define CLIP_ALIGN(x, n) ((((x) + (n) - 1) / (n)) * (n))
@@ -290,6 +299,7 @@ enum projector_type {
PROJECTOR_TYPE_LFM2A,
PROJECTOR_TYPE_GLM4V,
PROJECTOR_TYPE_YOUTUVL,
PROJECTOR_TYPE_YASA2,
PROJECTOR_TYPE_KIMIK25,
PROJECTOR_TYPE_NEMOTRON_V2_VL,
PROJECTOR_TYPE_HUNYUANOCR,
@@ -335,6 +345,7 @@ static std::map<projector_type, std::string> PROJECTOR_TYPE_NAMES = {
{ PROJECTOR_TYPE_LFM2A, "lfm2a"},
{ PROJECTOR_TYPE_GLM4V, "glm4v"},
{ PROJECTOR_TYPE_YOUTUVL, "youtuvl"},
{ PROJECTOR_TYPE_YASA2, "yasa2"},
{ PROJECTOR_TYPE_KIMIK25, "kimik25"},
{ PROJECTOR_TYPE_NEMOTRON_V2_VL, "nemotron_v2_vl"},
{ PROJECTOR_TYPE_HUNYUANOCR, "hunyuanocr"},

View File

@@ -268,6 +268,27 @@ struct mobilenetv5_block {
ggml_tensor * attn_norm_w = nullptr;
};
struct yasa2_block {
ggml_tensor * dw_w = nullptr;
ggml_tensor * dw_b = nullptr;
ggml_tensor * ln_w = nullptr;
ggml_tensor * ln_b = nullptr;
ggml_tensor * pw1_w = nullptr;
ggml_tensor * pw1_b = nullptr;
ggml_tensor * grn_w = nullptr;
ggml_tensor * grn_b = nullptr;
ggml_tensor * pw2_w = nullptr;
ggml_tensor * pw2_b = nullptr;
};
struct yasa2_stage {
ggml_tensor * down_ln_w = nullptr;
ggml_tensor * down_ln_b = nullptr;
ggml_tensor * down_conv_w = nullptr;
ggml_tensor * down_conv_b = nullptr;
std::vector<yasa2_block> blocks;
};
struct clip_model {
clip_modality modality = CLIP_MODALITY_VISION;
projector_type proj_type = PROJECTOR_TYPE_MLP;
@@ -402,6 +423,15 @@ struct clip_model {
ggml_tensor * msfa_ffn_expand_bn = nullptr;
ggml_tensor * msfa_ffn_project_bn = nullptr;
// yasa2
ggml_tensor * yasa_patch_w = nullptr;
ggml_tensor * yasa_patch_b = nullptr;
ggml_tensor * yasa_patch_ln_w = nullptr;
ggml_tensor * yasa_patch_ln_b = nullptr;
ggml_tensor * yasa_backbone_ln_w = nullptr;
ggml_tensor * yasa_backbone_ln_b = nullptr;
ggml_tensor * yasa_vision_pos_embed = nullptr;
std::vector<yasa2_stage> yasa_stages;
// pixtral, glm4v
ggml_tensor * token_embd_img_break = nullptr;

View File

@@ -947,6 +947,10 @@ static ggml_cgraph * clip_image_build_graph(clip_ctx * ctx, const clip_image_f32
{
builder = std::make_unique<clip_graph_youtuvl>(ctx, img);
} break;
case PROJECTOR_TYPE_YASA2:
{
builder = std::make_unique<clip_graph_yasa2>(ctx, img);
} break;
default:
GGML_ABORT("missing cgraph builder");
}
@@ -1389,6 +1393,16 @@ struct clip_model_loader {
hparams.set_limit_image_tokens(1, 62500);
hparams.set_warmup_n_tokens(16*16); // avoid OOM on warmup
} break;
case PROJECTOR_TYPE_YASA2:
{
hparams.ffn_op = FFN_GELU_ERF;
log_ffn_op = "gelu_erf";
hparams.image_resize_algo = RESIZE_ALGO_BICUBIC;
// the Reka model performs better with bicubic resizing, which stretches
// the image to fit a fixed square size
hparams.image_resize_pad = false;
} break;
case PROJECTOR_TYPE_GLM4V:
{
hparams.rope_theta = 10000.0f;
@@ -1839,6 +1853,55 @@ struct clip_model_loader {
model.mm_1_w = get_tensor(string_format(TN_LLAVA_PROJ, 2, "weight")); // merger.mlp.2
model.mm_1_b = get_tensor(string_format(TN_LLAVA_PROJ, 2, "bias"));
} break;
case PROJECTOR_TYPE_YASA2:
{
// reuse tensors already loaded by the common section
// (yasa2 uses the same tensor names as TN_PATCH_EMBD and TN_PATCH_BIAS)
GGML_ASSERT(model.patch_embeddings_0 && "yasa2 requires v.patch_embd.weight");
model.yasa_patch_w = model.patch_embeddings_0;
model.yasa_patch_b = model.patch_bias;
model.yasa_patch_ln_w = get_tensor(TN_YASA_PATCH_LN_W, false);
model.yasa_patch_ln_b = get_tensor(TN_YASA_PATCH_LN_B, false);
model.yasa_backbone_ln_w = get_tensor(TN_YASA_BACKBONE_LN_W, false);
model.yasa_backbone_ln_b = get_tensor(TN_YASA_BACKBONE_LN_B, false);
model.yasa_vision_pos_embed = get_tensor(TN_YASA_POS_EMBD, false);
model.mm_0_w = get_tensor(string_format(TN_LLAVA_PROJ, 0, "weight"));
model.mm_0_b = get_tensor(string_format(TN_LLAVA_PROJ, 0, "bias"), false);
model.mm_2_w = get_tensor(string_format(TN_LLAVA_PROJ, 2, "weight"));
model.mm_2_b = get_tensor(string_format(TN_LLAVA_PROJ, 2, "bias"), false);
model.yasa_stages.clear();
for (int s = 0; ; ++s) {
yasa2_stage stage;
stage.down_ln_w = get_tensor(string_format(TN_YASA_STAGE_DOWN_LN, s, "weight"), false);
stage.down_ln_b = get_tensor(string_format(TN_YASA_STAGE_DOWN_LN, s, "bias"), false);
stage.down_conv_w = get_tensor(string_format(TN_YASA_STAGE_DOWN_CONV, s, "weight"), false);
stage.down_conv_b = get_tensor(string_format(TN_YASA_STAGE_DOWN_CONV, s, "bias"), false);
for (int bi = 0; ; ++bi) {
yasa2_block blk;
blk.dw_w = get_tensor(string_format(TN_YASA_STAGE_BLK, s, bi, "dw", "weight"), false);
if (!blk.dw_w) {
break;
}
blk.dw_b = get_tensor(string_format(TN_YASA_STAGE_BLK, s, bi, "dw", "bias"), false);
blk.ln_w = get_tensor(string_format(TN_YASA_STAGE_BLK, s, bi, "ln", "weight"), false);
blk.ln_b = get_tensor(string_format(TN_YASA_STAGE_BLK, s, bi, "ln", "bias"), false);
blk.pw1_w = get_tensor(string_format(TN_YASA_STAGE_BLK, s, bi, "pw1", "weight"), false);
blk.pw1_b = get_tensor(string_format(TN_YASA_STAGE_BLK, s, bi, "pw1", "bias"), false);
blk.grn_w = get_tensor(string_format(TN_YASA_STAGE_BLK, s, bi, "grn", "weight"), false);
blk.grn_b = get_tensor(string_format(TN_YASA_STAGE_BLK, s, bi, "grn", "bias"), false);
blk.pw2_w = get_tensor(string_format(TN_YASA_STAGE_BLK, s, bi, "pw2", "weight"), false);
blk.pw2_b = get_tensor(string_format(TN_YASA_STAGE_BLK, s, bi, "pw2", "bias"), false);
stage.blocks.push_back(blk);
}
if (!stage.down_conv_w && stage.blocks.empty()) {
break;
}
model.yasa_stages.push_back(std::move(stage));
}
} break;
case PROJECTOR_TYPE_GLM4V:
{
model.mm_fc_w = get_tensor(string_format(TN_MM_PROJECTOR, "weight"));
@@ -2843,6 +2906,10 @@ int clip_n_output_tokens(const struct clip_ctx * ctx, struct clip_image_f32 * im
{
// do nothing
} break;
case PROJECTOR_TYPE_YASA2:
{
n_patches = 64; // adaptive average pooling to 8x8 tokens
} break;
case PROJECTOR_TYPE_LDP:
case PROJECTOR_TYPE_LDPV2:
case PROJECTOR_TYPE_GLM_EDGE:
@@ -3463,6 +3530,7 @@ bool clip_image_batch_encode(clip_ctx * ctx, const int n_threads, const clip_ima
case PROJECTOR_TYPE_PHI4:
case PROJECTOR_TYPE_COGVLM:
case PROJECTOR_TYPE_HUNYUANOCR:
case PROJECTOR_TYPE_YASA2:
{
// do nothing
} break;
@@ -3689,6 +3757,7 @@ int clip_n_mmproj_embd(const struct clip_ctx * ctx) {
case PROJECTOR_TYPE_KIMIVL:
case PROJECTOR_TYPE_PADDLEOCR:
case PROJECTOR_TYPE_KIMIK25:
case PROJECTOR_TYPE_YASA2:
return ctx->model.mm_2_w->ne[1];
case PROJECTOR_TYPE_HUNYUANOCR:
return ctx->model.mm_model_proj->ne[1];

View File

@@ -43,6 +43,14 @@ struct clip_graph_youtuvl : clip_graph {
ggml_cgraph * build() override;
};
struct clip_graph_yasa2 : clip_graph {
clip_graph_yasa2(clip_ctx * ctx, const clip_image_f32 & img) : clip_graph(ctx, img) {}
ggml_cgraph * build() override;
ggml_tensor * layer_norm_channels(ggml_tensor * inp, ggml_tensor * w, ggml_tensor * b, float eps = 1e-6f);
ggml_tensor * convnext_grn(ggml_tensor * inp, ggml_tensor * w, ggml_tensor * b);
};
struct clip_graph_minicpmv : clip_graph {
clip_graph_minicpmv(clip_ctx * ctx, const clip_image_f32 & img) : clip_graph(ctx, img) {}
ggml_cgraph * build() override;

191 tools/mtmd/models/yasa2.cpp Normal file
View File

@@ -0,0 +1,191 @@
// ABOUTME: Yasa2 vision encoder graph builder for ConvNeXt-based architecture.
// ABOUTME: Implements patch embedding, ConvNeXt stages with GRN, and adaptive pooling.
#include "models.h"
static ggml_tensor * add_channel_bias(
ggml_context * ctx0,
ggml_tensor * x_whcb,
ggml_tensor * b_c) {
if (!b_c) {
return x_whcb;
}
ggml_tensor * b4 = ggml_reshape_4d(ctx0, b_c, 1, 1, b_c->ne[0], 1);
return ggml_add(ctx0, x_whcb, b4);
}
static ggml_tensor * mul_channel_weight(
ggml_context * ctx0,
ggml_tensor * x_whcb,
ggml_tensor * w_c) {
if (!w_c) {
return x_whcb;
}
ggml_tensor * w4 = ggml_reshape_4d(ctx0, w_c, 1, 1, w_c->ne[0], 1);
return ggml_mul(ctx0, x_whcb, w4);
}
ggml_tensor * clip_graph_yasa2::layer_norm_channels(ggml_tensor * inp, ggml_tensor * w, ggml_tensor * b, float eps) {
// Match HF ConvNextLayerNorm(channels_first):
// u = mean_c(x), s = mean_c((x-u)^2), x = (x-u)/sqrt(s+eps)
// cast back to input dtype before affine.
ggml_tensor * cur = ggml_permute(ctx0, inp, 2, 1, 0, 3); // [W,H,C,B] -> [C,H,W,B]
cur = ggml_cont(ctx0, cur);
ggml_tensor * u = ggml_mean(ctx0, cur); // [1,H,W,B]
ggml_tensor * xm = ggml_sub(ctx0, cur, u); // [C,H,W,B]
ggml_tensor * s = ggml_mul(ctx0, xm, xm); // [C,H,W,B]
s = ggml_mean(ctx0, s); // [1,H,W,B]
s = ggml_clamp(ctx0, s, eps, 1e30f); // avoid div-by-zero in no-alloc warmup
s = ggml_sqrt(ctx0, s); // [1,H,W,B]
ggml_tensor * xhat = ggml_div(ctx0, xm, s); // [C,H,W,B]
xhat = ggml_permute(ctx0, xhat, 2, 1, 0, 3); // [W,H,C,B]
xhat = ggml_cont(ctx0, xhat);
xhat = mul_channel_weight(ctx0, xhat, w);
xhat = add_channel_bias(ctx0, xhat, b);
return xhat;
}
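ggml_mean reduces along dim 0, which is why layer_norm_channels permutes the channel axis into dim 0 before taking per-position statistics. A scalar reference of the same computation, assuming a [T][C] layout with T = W*H*B and the same clamp-instead-of-add eps handling:
// Scalar sketch of channels-first LayerNorm as built above (assumption:
// x laid out as [T][C] with T = W*H*B positions, eps clamped as in the graph).
#include <algorithm>
#include <cmath>
#include <vector>
static void layer_norm_channels_ref(std::vector<float> & x, int t_count, int c,
                                    const std::vector<float> & w,
                                    const std::vector<float> & b, float eps) {
    for (int t = 0; t < t_count; ++t) {
        float u = 0.0f;
        for (int ch = 0; ch < c; ++ch) u += x[t * c + ch];
        u /= c;                                   // u = mean_c(x)
        float s = 0.0f;
        for (int ch = 0; ch < c; ++ch) {
            const float d = x[t * c + ch] - u;
            s += d * d;
        }
        s = std::sqrt(std::max(s / c, eps));      // clamp replaces the usual +eps
        for (int ch = 0; ch < c; ++ch) {
            x[t * c + ch] = ((x[t * c + ch] - u) / s) * w[ch] + b[ch];
        }
    }
}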
ggml_tensor * clip_graph_yasa2::convnext_grn(ggml_tensor * inp, ggml_tensor * w, ggml_tensor * b) {
// Exact ConvNeXtV2 GRN:
// Gx = ||x||_2 over spatial dims (W,H), Nx = Gx / (mean_c(Gx) + eps)
// y = w * (x * Nx) + b + x
const int64_t wdim = inp->ne[0];
const int64_t hdim = inp->ne[1];
const int64_t cdim = inp->ne[2];
const int64_t bdim = inp->ne[3];
// Keep GRN math in fp32 for stability; fp16/bf16 accumulation can drift.
ggml_tensor * sq = ggml_mul(ctx0, inp, inp);
ggml_tensor * sq_flat = ggml_reshape_4d(ctx0, sq, wdim * hdim, cdim, 1, bdim); // [WH,C,1,B]
ggml_tensor * gx = ggml_sum_rows(ctx0, sq_flat); // [1,C,1,B]
gx = ggml_sqrt(ctx0, gx); // [1,C,1,B]
ggml_tensor * gx_ch_first = ggml_permute(ctx0, gx, 1, 0, 2, 3); // [C,1,1,B]
gx_ch_first = ggml_cont(ctx0, gx_ch_first);
ggml_tensor * gx_mean = ggml_mean(ctx0, gx_ch_first); // [1,1,1,B]
gx_mean = ggml_clamp(ctx0, gx_mean, 1e-6f, 1e30f); // approx +eps, warmup-safe
ggml_tensor * nx = ggml_div(ctx0, gx, gx_mean); // [1,C,1,B]
nx = ggml_permute(ctx0, nx, 0, 2, 1, 3); // [1,1,C,B]
nx = ggml_cont(ctx0, nx);
ggml_tensor * xnx = ggml_mul(ctx0, inp, nx);
xnx = mul_channel_weight(ctx0, xnx, w);
xnx = add_channel_bias(ctx0, xnx, b);
return ggml_add(ctx0, inp, xnx);
}
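Likewise, a scalar reference of the GRN above can help when eyeballing the graph output; this sketch assumes a single batch, a [T][C] layout, and the 1e-6 clamp standing in for the usual +eps:
// Scalar reference sketch of the ConvNeXtV2 GRN formula above (assumption:
// single batch, x laid out as [T][C] with T = W*H spatial positions).
#include <algorithm>
#include <cmath>
#include <vector>
static void grn_reference(std::vector<float> & x, int wh, int c,
                          const std::vector<float> & w, const std::vector<float> & b) {
    std::vector<float> gx(c, 0.0f);
    for (int ch = 0; ch < c; ++ch) {              // Gx = L2 norm over spatial dims
        for (int t = 0; t < wh; ++t) {
            const float v = x[t * c + ch];
            gx[ch] += v * v;
        }
        gx[ch] = std::sqrt(gx[ch]);
    }
    float mean = 0.0f;                            // mean_c(Gx)
    for (int ch = 0; ch < c; ++ch) mean += gx[ch];
    mean = std::max(mean / c, 1e-6f);             // clamped like the graph code
    for (int ch = 0; ch < c; ++ch) {
        const float nx = gx[ch] / mean;           // Nx = Gx / (mean_c(Gx) + eps)
        for (int t = 0; t < wh; ++t) {
            float & v = x[t * c + ch];
            v = v + (w[ch] * (v * nx) + b[ch]);   // y = x + (w * x * Nx + b)
        }
    }
}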
ggml_cgraph * clip_graph_yasa2::build() {
ggml_tensor * cur = build_inp_raw();
// Patch embedding Conv2d(kernel=4, stride=4)
cur = ggml_conv_2d(ctx0, model.yasa_patch_w, cur, patch_size, patch_size, 0, 0, 1, 1);
cur = add_channel_bias(ctx0, cur, model.yasa_patch_b);
ggml_set_name(cur, "yasa2_patch_conv_out");
cb(cur, "yasa2_patch_conv_out", -1);
cur = layer_norm_channels(cur, model.yasa_patch_ln_w, model.yasa_patch_ln_b, eps);
ggml_set_name(cur, "yasa2_patch_ln_out");
cb(cur, "yasa2_patch_ln_out", -1);
// ConvNeXt stages
for (size_t s = 0; s < model.yasa_stages.size(); ++s) {
const auto & stage = model.yasa_stages[s];
if (stage.down_conv_w) {
cur = layer_norm_channels(cur, stage.down_ln_w, stage.down_ln_b, eps);
cur = ggml_conv_2d(ctx0, stage.down_conv_w, cur, 2, 2, 0, 0, 1, 1);
cur = add_channel_bias(ctx0, cur, stage.down_conv_b);
ggml_format_name(cur, "yasa2_stage%zu_down_out", s);
}
for (size_t bi = 0; bi < stage.blocks.size(); ++bi) {
const auto & blk = stage.blocks[bi];
ggml_tensor * res = cur;
ggml_tensor * x = ggml_conv_2d_dw(ctx0, blk.dw_w, cur, 1, 1, 3, 3, 1, 1);
x = add_channel_bias(ctx0, x, blk.dw_b);
x = layer_norm_channels(x, blk.ln_w, blk.ln_b, eps);
// pwconv1/pwconv2 are HF Linear layers over channels; implement via matmul on tokens.
const int64_t w = x->ne[0];
const int64_t h = x->ne[1];
const int64_t b = x->ne[3];
ggml_tensor * tok = ggml_reshape_3d(ctx0, x, w * h, x->ne[2], b); // [T,C,B]
tok = ggml_permute(ctx0, tok, 1, 0, 2, 3); // [C,T,B]
tok = ggml_cont(ctx0, tok);
tok = ggml_mul_mat(ctx0, blk.pw1_w, tok); // [4C,T,B]
if (blk.pw1_b) {
ggml_tensor * b1 = ggml_reshape_3d(ctx0, blk.pw1_b, blk.pw1_b->ne[0], 1, 1); // [4C,1,1]
tok = ggml_add(ctx0, tok, b1);
}
x = ggml_permute(ctx0, tok, 1, 0, 2, 3); // [T,4C,B]
x = ggml_cont(ctx0, x);
x = ggml_reshape_4d(ctx0, x, w, h, tok->ne[0], b); // [W,H,4C,B]
x = ggml_gelu_erf(ctx0, x);
x = convnext_grn(x, blk.grn_w, blk.grn_b);
tok = ggml_reshape_3d(ctx0, x, w * h, x->ne[2], b); // [T,4C,B]
tok = ggml_permute(ctx0, tok, 1, 0, 2, 3); // [4C,T,B]
tok = ggml_cont(ctx0, tok);
tok = ggml_mul_mat(ctx0, blk.pw2_w, tok); // [C,T,B]
if (blk.pw2_b) {
ggml_tensor * b2 = ggml_reshape_3d(ctx0, blk.pw2_b, blk.pw2_b->ne[0], 1, 1); // [C,1,1]
tok = ggml_add(ctx0, tok, b2);
}
x = ggml_permute(ctx0, tok, 1, 0, 2, 3); // [T,C,B]
x = ggml_cont(ctx0, x);
x = ggml_reshape_4d(ctx0, x, w, h, tok->ne[0], b); // [W,H,C,B]
cur = ggml_add(ctx0, res, x);
ggml_format_name(cur, "yasa2_stage%zu_blk%zu_out", s, bi);
}
}
// HF path adds vision position embeddings BEFORE adaptive pooling.
const int64_t pre_w = cur->ne[0];
const int64_t pre_h = cur->ne[1];
ggml_tensor * tokens_pre = ggml_reshape_3d(ctx0, cur, pre_w * pre_h, cur->ne[2], cur->ne[3]); // [T,C,B]
tokens_pre = ggml_permute(ctx0, tokens_pre, 1, 0, 2, 3); // [C,T,B]
tokens_pre = ggml_cont(ctx0, tokens_pre);
if (model.yasa_vision_pos_embed && tokens_pre->ne[1] == model.yasa_vision_pos_embed->ne[1]) {
const int64_t n_ch = model.yasa_vision_pos_embed->ne[0];
const int64_t n_tokens = model.yasa_vision_pos_embed->ne[1];
ggml_tensor * pos = ggml_reshape_3d(ctx0, model.yasa_vision_pos_embed, (int) n_ch, (int) n_tokens, 1);
tokens_pre = ggml_add(ctx0, tokens_pre, pos);
}
cur = ggml_permute(ctx0, tokens_pre, 1, 0, 2, 3); // [T,C,B]
cur = ggml_cont(ctx0, cur);
cur = ggml_reshape_4d(ctx0, cur, pre_w, pre_h, cur->ne[1], cur->ne[2]); // [W,H,C,B]
// AdaptiveAvgPool2d target is 8x8 for real inputs, but warmup can use tiny images.
const int pooled_w = std::min(8, (int) cur->ne[0]);
const int pooled_h = std::min(8, (int) cur->ne[1]);
const int kw = std::max(1, (int) cur->ne[0] / pooled_w);
const int kh = std::max(1, (int) cur->ne[1] / pooled_h);
cur = ggml_pool_2d(ctx0, cur, GGML_OP_POOL_AVG, kw, kh, kw, kh, 0, 0);
// [W,H,C,B] -> [C,T,B]
ggml_tensor * tokens = ggml_reshape_3d(ctx0, cur, cur->ne[0] * cur->ne[1], cur->ne[2], cur->ne[3]);
tokens = ggml_permute(ctx0, tokens, 1, 0, 2, 3);
tokens = ggml_cont(ctx0, tokens);
cb(tokens, "yasa2_tokens", -1);
GGML_ASSERT(model.mm_0_w && model.mm_2_w);
ggml_tensor * embeddings = build_ffn(
tokens,
model.mm_0_w, model.mm_0_b,
nullptr, nullptr,
model.mm_2_w, model.mm_2_b,
FFN_GELU_ERF,
-1);
cb(embeddings, "yasa2_emb", -1);
ggml_build_forward_expand(gf, embeddings);
return gf;
}
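ggml has no adaptive pooling, so the build approximates AdaptiveAvgPool2d(8x8) with a fixed kernel = stride = side/8. Under standard pooling arithmetic (out = (side - k)/k + 1, no padding) this is exact only when the map side divides evenly by 8, which the fixed-size preprocessing path is assumed to guarantee; a tiny sketch:
// Sketch of the fixed-kernel approximation to AdaptiveAvgPool2d(8x8) above.
// The 28x28 case shows where the approximation would drift (9x9, not 8x8),
// hence the reliance on fixed-size preprocessing; 2x2 mimics a warmup image.
#include <algorithm>
#include <cstdio>
int main() {
    const int sides[] = { 32, 28, 2 };
    for (int side : sides) {
        const int pooled = std::min(8, side);
        const int k      = std::max(1, side / pooled);
        const int out    = (side - k) / k + 1; // pool_2d with kernel = stride = k
        std::printf("map %2dx%-2d -> kernel %d -> pooled %dx%d\n", side, side, k, out, out);
    }
    return 0;
}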

View File

@@ -316,6 +316,19 @@ struct mtmd_context {
img_end = "<|vision_end|>";
image_preproc = std::make_unique<mtmd_image_preprocessor_youtuvl>(ctx_v);
} break;
case PROJECTOR_TYPE_YASA2:
{
img_beg = "<image>";
img_end = "</image>";
// Currently only supports single-tile preprocessing: any input is downscaled
// to one image_size x image_size tile (64 output tokens via 8x8 adaptive avg
// pool).
// However, the model itself supports llava-uhd multi-tile tiling for high-res
// images. This will be implemented in a future PR, which will dispatch on
// has_pinpoints (see the LDP/COGVLM branch above) and emit image_grid_pinpoints
// from the conversion script.
image_preproc = std::make_unique<mtmd_image_preprocessor_fixed_size>(ctx_v);
} break;
case PROJECTOR_TYPE_GEMMA3:
case PROJECTOR_TYPE_GEMMA3NV:
{

View File

@@ -5,6 +5,8 @@ include_directories(${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR})
set(TARGET server-context)
add_library(${TARGET} STATIC
server-chat.cpp
server-chat.h
server-task.cpp
server-task.h
server-queue.cpp

View File

@@ -0,0 +1,588 @@
#include "server-chat.h"
#include "server-common.h"
#include <sstream>
json server_chat_convert_responses_to_chatcmpl(const json & response_body) {
if (!response_body.contains("input")) {
throw std::invalid_argument("'input' is required");
}
if (!json_value(response_body, "previous_response_id", std::string{}).empty()) {
throw std::invalid_argument("llama.cpp does not support 'previous_response_id'.");
}
const json input_value = response_body.at("input");
json chatcmpl_body = response_body;
chatcmpl_body.erase("input");
std::vector<json> chatcmpl_messages;
if (response_body.contains("instructions")) {
chatcmpl_messages.push_back({
{"role", "system"},
{"content", json_value(response_body, "instructions", std::string())},
});
chatcmpl_body.erase("instructions");
}
if (input_value.is_string()) {
// #responses_create-input-text_input
chatcmpl_messages.push_back({
{"role", "user"},
{"content", input_value},
});
} else if (input_value.is_array()) {
// #responses_create-input-input_item_list
static auto exists_and_is_array = [](const json & j, const char * key) -> bool {
return j.contains(key) && j.at(key).is_array();
};
static auto exists_and_is_string = [](const json & j, const char * key) -> bool {
return j.contains(key) && j.at(key).is_string();
};
for (json item : input_value) {
bool merge_prev = !chatcmpl_messages.empty() && chatcmpl_messages.back().value("role", "") == "assistant";
if (exists_and_is_string(item, "content")) {
// #responses_create-input-input_item_list-input_message-content-text_input
// Only "Input message" contains item["content"]::string
// After converting item["content"]::string to item["content"]::array,
// we can treat "Input message" as sum of "Item-Input message" and "Item-Output message"
item["content"] = json::array({
json {
{"text", item.at("content")},
{"type", "input_text"}
}
});
}
if (exists_and_is_array(item, "content") &&
exists_and_is_string(item, "role") &&
(item.at("role") == "user" ||
item.at("role") == "system" ||
item.at("role") == "developer")
) {
// #responses_create-input-input_item_list-item-input_message
std::vector<json> chatcmpl_content;
for (const json & input_item : item.at("content")) {
const std::string type = json_value(input_item, "type", std::string());
if (type == "input_text") {
if (!input_item.contains("text")) {
throw std::invalid_argument("'Input text' requires 'text'");
}
chatcmpl_content.push_back({
{"text", input_item.at("text")},
{"type", "text"},
});
} else if (type == "input_image") {
// While `detail` is marked as required,
// it has default value("auto") and can be omitted.
if (!input_item.contains("image_url")) {
throw std::invalid_argument("'image_url' is required");
}
chatcmpl_content.push_back({
{"image_url", json {
{"url", input_item.at("image_url")}
}},
{"type", "image_url"},
});
} else if (type == "input_file") {
throw std::invalid_argument("'input_file' is not supported by llamacpp at this moment");
} else {
throw std::invalid_argument("'type' must be one of 'input_text', 'input_image', or 'input_file'");
}
}
if (item.contains("type")) {
item.erase("type");
}
if (item.contains("status")) {
item.erase("status");
}
item["content"] = chatcmpl_content;
chatcmpl_messages.push_back(item);
} else if (exists_and_is_string(item, "role") &&
item.at("role") == "assistant" &&
exists_and_is_string(item, "type") &&
item.at("type") == "message"
) {
// #responses_create-input-input_item_list-item-output_message
auto chatcmpl_content = json::array();
// Handle both string content and array content
if (item.contains("content") && item.at("content").is_string()) {
// String content - convert to text content part
chatcmpl_content.push_back({
{"text", item.at("content")},
{"type", "text"},
});
} else if (exists_and_is_array(item, "content")) {
// Array content - process each item
for (const auto & output_text : item.at("content")) {
const std::string type = json_value(output_text, "type", std::string());
if (type == "output_text" || type == "input_text") {
// Accept both output_text and input_text (string content gets converted to input_text)
if (!exists_and_is_string(output_text, "text")) {
throw std::invalid_argument("'Output text' requires 'text'");
}
chatcmpl_content.push_back({
{"text", output_text.at("text")},
{"type", "text"},
});
} else if (type == "refusal") {
if (!exists_and_is_string(output_text, "refusal")) {
throw std::invalid_argument("'Refusal' requires 'refusal'");
}
chatcmpl_content.push_back({
{"refusal", output_text.at("refusal")},
{"type", "refusal"},
});
} else {
throw std::invalid_argument("'type' must be one of 'output_text' or 'refusal'");
}
}
}
if (merge_prev) {
auto & prev_msg = chatcmpl_messages.back();
if (!exists_and_is_array(prev_msg, "content")) {
prev_msg["content"] = json::array();
}
auto & prev_content = prev_msg["content"];
prev_content.insert(prev_content.end(), chatcmpl_content.begin(), chatcmpl_content.end());
} else {
item.erase("status");
item.erase("type");
item["content"] = chatcmpl_content;
chatcmpl_messages.push_back(item);
}
} else if (exists_and_is_string(item, "arguments") &&
exists_and_is_string(item, "call_id") &&
exists_and_is_string(item, "name") &&
exists_and_is_string(item, "type") &&
item.at("type") == "function_call"
) {
// #responses_create-input-input_item_list-item-function_tool_call
json tool_call = {
{"function", json {
{"arguments", item.at("arguments")},
{"name", item.at("name")},
}},
{"id", item.at("call_id")},
{"type", "function"},
};
if (merge_prev) {
auto & prev_msg = chatcmpl_messages.back();
if (!exists_and_is_array(prev_msg, "tool_calls")) {
prev_msg["tool_calls"] = json::array();
}
prev_msg["tool_calls"].push_back(tool_call);
} else {
chatcmpl_messages.push_back(json {
{"role", "assistant"},
{"tool_calls", json::array({tool_call})}
});
}
} else if (exists_and_is_string(item, "call_id") &&
(exists_and_is_string(item, "output") || exists_and_is_array(item, "output")) &&
exists_and_is_string(item, "type") &&
item.at("type") == "function_call_output"
) {
// #responses_create-input-input_item_list-item-function_tool_call_output
if (item.at("output").is_string()) {
chatcmpl_messages.push_back(json {
{"content", item.at("output")},
{"role", "tool"},
{"tool_call_id", item.at("call_id")},
});
} else {
json chatcmpl_outputs = item.at("output");
for (json & chatcmpl_output : chatcmpl_outputs) {
if (!chatcmpl_output.contains("type") || chatcmpl_output.at("type") != "input_text") {
throw std::invalid_argument("Output of tool call should be 'Input text'");
}
chatcmpl_output["type"] = "text";
}
chatcmpl_messages.push_back(json {
{"content", chatcmpl_outputs},
{"role", "tool"},
{"tool_call_id", item.at("call_id")},
});
}
} else if (exists_and_is_array(item, "summary") &&
exists_and_is_string(item, "type") &&
item.at("type") == "reasoning") {
// #responses_create-input-input_item_list-item-reasoning
if (!exists_and_is_array(item, "content")) {
throw std::invalid_argument("item['content'] is not an array");
}
if (item.at("content").empty()) {
throw std::invalid_argument("item['content'] is empty");
}
if (!exists_and_is_string(item.at("content")[0], "text")) {
throw std::invalid_argument("item['content']['text'] is not a string");
}
if (merge_prev) {
auto & prev_msg = chatcmpl_messages.back();
prev_msg["reasoning_content"] = item.at("content")[0].at("text");
} else {
chatcmpl_messages.push_back(json {
{"role", "assistant"},
{"content", json::array()},
{"reasoning_content", item.at("content")[0].at("text")},
});
}
} else {
throw std::invalid_argument("Cannot determine type of 'item'");
}
}
} else {
throw std::invalid_argument("'input' must be a string or array of objects");
}
chatcmpl_body["messages"] = chatcmpl_messages;
if (response_body.contains("tools")) {
if (!response_body.at("tools").is_array()) {
throw std::invalid_argument("'tools' must be an array of objects");
}
std::vector<json> chatcmpl_tools;
for (json resp_tool : response_body.at("tools")) {
json chatcmpl_tool;
if (json_value(resp_tool, "type", std::string()) != "function") {
throw std::invalid_argument("'type' of tool must be 'function'");
}
resp_tool.erase("type");
chatcmpl_tool["type"] = "function";
if (!resp_tool.contains("strict")) {
resp_tool["strict"] = true;
}
chatcmpl_tool["function"] = resp_tool;
chatcmpl_tools.push_back(chatcmpl_tool);
}
chatcmpl_body.erase("tools");
chatcmpl_body["tools"] = chatcmpl_tools;
}
if (response_body.contains("max_output_tokens")) {
chatcmpl_body.erase("max_output_tokens");
chatcmpl_body["max_tokens"] = response_body["max_output_tokens"];
}
return chatcmpl_body;
}
json server_chat_convert_anthropic_to_oai(const json & body) {
json oai_body;
// Convert system prompt
json oai_messages = json::array();
auto system_param = json_value(body, "system", json());
if (!system_param.is_null()) {
std::string system_content;
if (system_param.is_string()) {
system_content = system_param.get<std::string>();
} else if (system_param.is_array()) {
for (const auto & block : system_param) {
if (json_value(block, "type", std::string()) == "text") {
system_content += json_value(block, "text", std::string());
}
}
}
oai_messages.push_back({
{"role", "system"},
{"content", system_content}
});
}
// Convert messages
if (!body.contains("messages")) {
throw std::runtime_error("'messages' is required");
}
const json & messages = body.at("messages");
if (messages.is_array()) {
for (const auto & msg : messages) {
std::string role = json_value(msg, "role", std::string());
if (!msg.contains("content")) {
if (role == "assistant") {
continue;
}
oai_messages.push_back(msg);
continue;
}
const json & content = msg.at("content");
if (content.is_string()) {
oai_messages.push_back(msg);
continue;
}
if (!content.is_array()) {
oai_messages.push_back(msg);
continue;
}
json tool_calls = json::array();
json converted_content = json::array();
json tool_results = json::array();
std::string reasoning_content;
bool has_tool_calls = false;
for (const auto & block : content) {
std::string type = json_value(block, "type", std::string());
if (type == "text") {
converted_content.push_back(block);
} else if (type == "thinking") {
reasoning_content += json_value(block, "thinking", std::string());
} else if (type == "image") {
json source = json_value(block, "source", json::object());
std::string source_type = json_value(source, "type", std::string());
if (source_type == "base64") {
std::string media_type = json_value(source, "media_type", std::string("image/jpeg"));
std::string data = json_value(source, "data", std::string());
std::ostringstream ss;
ss << "data:" << media_type << ";base64," << data;
converted_content.push_back({
{"type", "image_url"},
{"image_url", {
{"url", ss.str()}
}}
});
} else if (source_type == "url") {
std::string url = json_value(source, "url", std::string());
converted_content.push_back({
{"type", "image_url"},
{"image_url", {
{"url", url}
}}
});
}
} else if (type == "tool_use") {
tool_calls.push_back({
{"id", json_value(block, "id", std::string())},
{"type", "function"},
{"function", {
{"name", json_value(block, "name", std::string())},
{"arguments", json_value(block, "input", json::object()).dump()}
}}
});
has_tool_calls = true;
} else if (type == "tool_result") {
std::string tool_use_id = json_value(block, "tool_use_id", std::string());
auto result_content = json_value(block, "content", json());
std::string result_text;
if (result_content.is_string()) {
result_text = result_content.get<std::string>();
} else if (result_content.is_array()) {
for (const auto & c : result_content) {
if (json_value(c, "type", std::string()) == "text") {
result_text += json_value(c, "text", std::string());
}
}
}
tool_results.push_back({
{"role", "tool"},
{"tool_call_id", tool_use_id},
{"content", result_text}
});
}
}
if (!converted_content.empty() || has_tool_calls || !reasoning_content.empty()) {
json new_msg = {{"role", role}};
if (!converted_content.empty()) {
new_msg["content"] = converted_content;
} else if (has_tool_calls || !reasoning_content.empty()) {
new_msg["content"] = "";
}
if (!tool_calls.empty()) {
new_msg["tool_calls"] = tool_calls;
}
if (!reasoning_content.empty()) {
new_msg["reasoning_content"] = reasoning_content;
}
oai_messages.push_back(new_msg);
}
for (const auto & tool_msg : tool_results) {
oai_messages.push_back(tool_msg);
}
}
}
oai_body["messages"] = oai_messages;
// Convert tools
if (body.contains("tools")) {
const json & tools = body.at("tools");
if (tools.is_array()) {
json oai_tools = json::array();
for (const auto & tool : tools) {
oai_tools.push_back({
{"type", "function"},
{"function", {
{"name", json_value(tool, "name", std::string())},
{"description", json_value(tool, "description", std::string())},
{"parameters", tool.contains("input_schema") ? tool.at("input_schema") : json::object()}
}}
});
}
oai_body["tools"] = oai_tools;
}
}
// Convert tool_choice
if (body.contains("tool_choice")) {
const json & tc = body.at("tool_choice");
if (tc.is_object()) {
std::string type = json_value(tc, "type", std::string());
if (type == "auto") {
oai_body["tool_choice"] = "auto";
} else if (type == "any" || type == "tool") {
oai_body["tool_choice"] = "required";
}
}
}
// Convert stop_sequences to stop
if (body.contains("stop_sequences")) {
oai_body["stop"] = body.at("stop_sequences");
}
// Handle max_tokens (required in Anthropic, but we're permissive)
if (body.contains("max_tokens")) {
oai_body["max_tokens"] = body.at("max_tokens");
} else {
oai_body["max_tokens"] = 4096;
}
// Pass through common params
for (const auto & key : {"temperature", "top_p", "top_k", "stream"}) {
if (body.contains(key)) {
oai_body[key] = body.at(key);
}
}
// Handle Anthropic-specific thinking param
if (body.contains("thinking")) {
json thinking = json_value(body, "thinking", json::object());
std::string thinking_type = json_value(thinking, "type", std::string());
if (thinking_type == "enabled") {
int budget_tokens = json_value(thinking, "budget_tokens", 10000);
oai_body["thinking_budget_tokens"] = budget_tokens;
}
}
// Handle Anthropic-specific metadata param
if (body.contains("metadata")) {
json metadata = json_value(body, "metadata", json::object());
std::string user_id = json_value(metadata, "user_id", std::string());
if (!user_id.empty()) {
oai_body["__metadata_user_id"] = user_id;
}
}
return oai_body;
}
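A minimal usage sketch for the Anthropic conversion above, assuming a build that links the server-context library (the payload is illustrative): the system block becomes a leading system message, the text block passes through, and max_tokens is copied verbatim.
// Minimal usage sketch (assumes a build linking server-context / nlohmann json).
#include "server-chat.h"
#include <iostream>
int main() {
    json body = json::parse(R"({
        "system": "Be terse.",
        "messages": [
            { "role": "user", "content": [ { "type": "text", "text": "What is 2+2?" } ] }
        ],
        "max_tokens": 32
    })");
    json oai = server_chat_convert_anthropic_to_oai(body);
    std::cout << oai.dump(2) << std::endl; // system msg + user msg + max_tokens
    return 0;
}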
json server_chat_msg_diff_to_json_oaicompat(const common_chat_msg_diff & diff) {
json delta = json::object();
if (!diff.reasoning_content_delta.empty()) {
delta["reasoning_content"] = diff.reasoning_content_delta;
}
if (!diff.content_delta.empty()) {
delta["content"] = diff.content_delta;
}
if (diff.tool_call_index != std::string::npos) {
json tool_call;
tool_call["index"] = diff.tool_call_index;
if (!diff.tool_call_delta.id.empty()) {
tool_call["id"] = diff.tool_call_delta.id;
tool_call["type"] = "function";
}
if (!diff.tool_call_delta.name.empty() || !diff.tool_call_delta.arguments.empty()) {
json function = json::object();
if (!diff.tool_call_delta.name.empty()) {
function["name"] = diff.tool_call_delta.name;
}
if (!diff.tool_call_delta.arguments.empty()) {
function["arguments"] = diff.tool_call_delta.arguments;
}
tool_call["function"] = function;
}
delta["tool_calls"] = json::array({ tool_call });
}
return delta;
}
json convert_transcriptions_to_chatcmpl(
const json & inp_body,
const std::map<std::string, raw_buffer> & in_files,
std::vector<raw_buffer> & out_files) {
// TODO @ngxson : this function may need to be improved in the future
// handle input files
out_files.clear();
auto it = in_files.find("file");
if (it != in_files.end()) {
out_files.push_back(it->second);
} else {
throw std::invalid_argument("No input file found for transcription");
}
// handle input data
std::string prompt = json_value(inp_body, "prompt", std::string());
std::string language = json_value(inp_body, "language", std::string());
std::string response_format = json_value(inp_body, "response_format", std::string("json"));
if (response_format != "json") {
throw std::invalid_argument("Only 'json' response_format is supported for transcription");
}
if (prompt.empty()) {
prompt = "Transcribe audio to text";
}
if (!language.empty()) {
prompt += string_format(" (language: %s)", language.c_str());
}
prompt += get_media_marker();
json chatcmpl_body = inp_body; // copy all fields
chatcmpl_body["messages"] = json::array({
{
{"role", "user"},
{"content", prompt},
},
});
// because the input comes from form-data, everything is a string; we need to correct the types here
std::string stream = json_value(inp_body, "stream", std::string("false"));
chatcmpl_body["stream"] = stream == "true";
if (inp_body.contains("max_tokens")) {
std::string inp = inp_body["max_tokens"].get<std::string>();
chatcmpl_body["max_tokens"] = std::stoul(inp);
}
if (inp_body.contains("temperature")) {
std::string inp = inp_body["temperature"].get<std::string>();
chatcmpl_body["temperature"] = std::stof(inp);
}
return chatcmpl_body;
}

View File

@@ -0,0 +1,24 @@
// Chat conversion functions for server (Responses API, Anthropic API, OAI streaming diffs)
#pragma once
#include "chat.h"
#include "server-common.h"
#include <nlohmann/json_fwd.hpp>
using json = nlohmann::ordered_json;
// Convert OpenAI Responses API format to OpenAI Chat Completions API format
json server_chat_convert_responses_to_chatcmpl(const json & body);
// Convert Anthropic Messages API format to OpenAI Chat Completions API format
json server_chat_convert_anthropic_to_oai(const json & body);
// convert OpenAI transcriptions API format to OpenAI Chat Completions API format
json convert_transcriptions_to_chatcmpl(
const json & body,
const std::map<std::string, raw_buffer> & in_files,
std::vector<raw_buffer> & out_files);
json server_chat_msg_diff_to_json_oaicompat(const common_chat_msg_diff & diff);

View File

@@ -1164,573 +1164,6 @@ json oaicompat_chat_params_parse(
return llama_params;
}
json convert_responses_to_chatcmpl(const json & response_body) {
if (!response_body.contains("input")) {
throw std::invalid_argument("'input' is required");
}
if (!json_value(response_body, "previous_response_id", std::string{}).empty()) {
throw std::invalid_argument("llama.cpp does not support 'previous_response_id'.");
}
const json input_value = response_body.at("input");
json chatcmpl_body = response_body;
chatcmpl_body.erase("input");
std::vector<json> chatcmpl_messages;
if (response_body.contains("instructions")) {
chatcmpl_messages.push_back({
{"role", "system"},
{"content", json_value(response_body, "instructions", std::string())},
});
chatcmpl_body.erase("instructions");
}
if (input_value.is_string()) {
// #responses_create-input-text_input
chatcmpl_messages.push_back({
{"role", "user"},
{"content", input_value},
});
} else if (input_value.is_array()) {
// #responses_create-input-input_item_list
static auto exists_and_is_array = [](const json & j, const char * key) -> bool {
return j.contains(key) && j.at(key).is_array();
};
static auto exists_and_is_string = [](const json & j, const char * key) -> bool {
return j.contains(key) && j.at(key).is_string();
};
for (json item : input_value) {
bool merge_prev = !chatcmpl_messages.empty() && chatcmpl_messages.back().value("role", "") == "assistant";
if (exists_and_is_string(item, "content")) {
// #responses_create-input-input_item_list-input_message-content-text_input
// Only "Input message" contains item["content"]::string
// After converting item["content"]::string to item["content"]::array,
// we can treat "Input message" as sum of "Item-Input message" and "Item-Output message"
item["content"] = json::array({
json {
{"text", item.at("content")},
{"type", "input_text"}
}
});
}
if (exists_and_is_array(item, "content") &&
exists_and_is_string(item, "role") &&
(item.at("role") == "user" ||
item.at("role") == "system" ||
item.at("role") == "developer")
) {
// #responses_create-input-input_item_list-item-input_message
std::vector<json> chatcmpl_content;
for (const json & input_item : item.at("content")) {
const std::string type = json_value(input_item, "type", std::string());
if (type == "input_text") {
if (!input_item.contains("text")) {
throw std::invalid_argument("'Input text' requires 'text'");
}
chatcmpl_content.push_back({
{"text", input_item.at("text")},
{"type", "text"},
});
} else if (type == "input_image") {
// While `detail` is marked as required,
// it has default value("auto") and can be omitted.
if (!input_item.contains("image_url")) {
throw std::invalid_argument("'image_url' is required");
}
chatcmpl_content.push_back({
{"image_url", json {
{"url", input_item.at("image_url")}
}},
{"type", "image_url"},
});
} else if (type == "input_file") {
throw std::invalid_argument("'input_file' is not supported by llamacpp at this moment");
// if (input_item.contains("file_url")) {
// // chat completion API does not support file_url
// throw std::invalid_argument("'file_url' is not supported");
// }
// if (!input_item.contains("file_data") || !input_item.contains("filename")) {
// throw std::invalid_argument("Both 'file_data' and 'filename' are required");
// }
// chatcmpl_content.push_back({
// {"file", json {
// {"file_data", input_item.at("file_data")},
// {"filename", input_item.at("filename")},
// }},
// {"type", "file"},
// });
} else {
throw std::invalid_argument("'type' must be one of 'input_text', 'input_image', or 'input_file'");
}
}
if (item.contains("type")) {
item.erase("type");
}
if (item.contains("status")) {
item.erase("status");
}
item["content"] = chatcmpl_content;
chatcmpl_messages.push_back(item);
} else if (exists_and_is_array(item, "content") &&
exists_and_is_string(item, "role") &&
item.at("role") == "assistant" &&
// exists_and_is_string(item, "status") &&
// (item.at("status") == "in_progress" ||
// item.at("status") == "completed" ||
// item.at("status") == "incomplete") &&
// item["status"] not sent by codex-cli
exists_and_is_string(item, "type") &&
item.at("type") == "message"
) {
// #responses_create-input-input_item_list-item-output_message
auto chatcmpl_content = json::array();
for (const auto & output_text : item.at("content")) {
const std::string type = json_value(output_text, "type", std::string());
if (type == "output_text") {
if (!exists_and_is_string(output_text, "text")) {
throw std::invalid_argument("'Output text' requires 'text'");
// Ignore annotations and logprobs for now
chatcmpl_content.push_back({
{"text", output_text.at("text")},
{"type", "text"},
});
}
} else if (type == "refusal") {
if (!exists_and_is_string(output_text, "refusal")) {
throw std::invalid_argument("'Refusal' requires 'refusal'");
// Ignore annotations and logprobs for now
chatcmpl_content.push_back({
{"refusal", output_text.at("refusal")},
{"type", "refusal"},
});
}
} else {
throw std::invalid_argument("'type' must be one of 'output_text' or 'refusal'");
}
}
if (merge_prev) {
auto & prev_msg = chatcmpl_messages.back();
if (!exists_and_is_array(prev_msg, "content")) {
prev_msg["content"] = json::array();
}
auto & prev_content = prev_msg["content"];
prev_content.insert(prev_content.end(), chatcmpl_content.begin(), chatcmpl_content.end());
} else {
item.erase("status");
item.erase("type");
item["content"] = chatcmpl_content;
chatcmpl_messages.push_back(item);
}
} else if (exists_and_is_string(item, "arguments") &&
exists_and_is_string(item, "call_id") &&
exists_and_is_string(item, "name") &&
exists_and_is_string(item, "type") &&
item.at("type") == "function_call"
) {
// #responses_create-input-input_item_list-item-function_tool_call
json tool_call = {
{"function", json {
{"arguments", item.at("arguments")},
{"name", item.at("name")},
}},
{"id", item.at("call_id")},
{"type", "function"},
};
if (merge_prev) {
auto & prev_msg = chatcmpl_messages.back();
if (!exists_and_is_array(prev_msg, "tool_calls")) {
prev_msg["tool_calls"] = json::array();
}
prev_msg["tool_calls"].push_back(tool_call);
} else {
chatcmpl_messages.push_back(json {
{"role", "assistant"},
{"tool_calls", json::array({tool_call})}
});
}
} else if (exists_and_is_string(item, "call_id") &&
(exists_and_is_string(item, "output") || exists_and_is_array(item, "output")) &&
exists_and_is_string(item, "type") &&
item.at("type") == "function_call_output"
) {
// #responses_create-input-input_item_list-item-function_tool_call_output
if (item.at("output").is_string()) {
chatcmpl_messages.push_back(json {
{"content", item.at("output")},
{"role", "tool"},
{"tool_call_id", item.at("call_id")},
});
} else {
json chatcmpl_outputs = item.at("output");
for (json & chatcmpl_output : chatcmpl_outputs) {
if (!chatcmpl_output.contains("type") || chatcmpl_output.at("type") != "input_text") {
throw std::invalid_argument("Output of tool call should be 'Input text'");
}
chatcmpl_output["type"] = "text";
}
chatcmpl_messages.push_back(json {
{"content", chatcmpl_outputs},
{"role", "tool"},
{"tool_call_id", item.at("call_id")},
});
}
} else if (// exists_and_is_string(item, "id") &&
// item["id"] not sent by codex-cli
exists_and_is_array(item, "summary") &&
exists_and_is_string(item, "type") &&
item.at("type") == "reasoning") {
// #responses_create-input-input_item_list-item-reasoning
if (!exists_and_is_array(item, "content")) {
throw std::invalid_argument("item['content'] is not an array");
}
if (item.at("content").empty()) {
throw std::invalid_argument("item['content'] is empty");
}
if (!exists_and_is_string(item.at("content")[0], "text")) {
throw std::invalid_argument("item['content']['text'] is not a string");
}
if (merge_prev) {
auto & prev_msg = chatcmpl_messages.back();
prev_msg["reasoning_content"] = item.at("content")[0].at("text");
} else {
chatcmpl_messages.push_back(json {
{"role", "assistant"},
{"content", json::array()},
{"reasoning_content", item.at("content")[0].at("text")},
});
}
} else {
throw std::invalid_argument("Cannot determine type of 'item'");
}
}
} else {
throw std::invalid_argument("'input' must be a string or array of objects");
}
chatcmpl_body["messages"] = chatcmpl_messages;
if (response_body.contains("tools")) {
if (!response_body.at("tools").is_array()) {
throw std::invalid_argument("'tools' must be an array of objects");
}
std::vector<json> chatcmpl_tools;
for (json resp_tool : response_body.at("tools")) {
json chatcmpl_tool;
if (json_value(resp_tool, "type", std::string()) != "function") {
throw std::invalid_argument("'type' of tool must be 'function'");
}
resp_tool.erase("type");
chatcmpl_tool["type"] = "function";
if (!resp_tool.contains("strict")) {
resp_tool["strict"] = true;
}
chatcmpl_tool["function"] = resp_tool;
chatcmpl_tools.push_back(chatcmpl_tool);
}
chatcmpl_body.erase("tools");
chatcmpl_body["tools"] = chatcmpl_tools;
}
if (response_body.contains("max_output_tokens")) {
chatcmpl_body.erase("max_output_tokens");
chatcmpl_body["max_tokens"] = response_body["max_output_tokens"];
}
return chatcmpl_body;
}
json convert_transcriptions_to_chatcmpl(
const json & inp_body,
const std::map<std::string, raw_buffer> & in_files,
std::vector<raw_buffer> & out_files) {
// TODO @ngxson : this function may need to be improved in the future
// handle input files
out_files.clear();
auto it = in_files.find("file");
if (it != in_files.end()) {
out_files.push_back(it->second);
} else {
throw std::invalid_argument("No input file found for transcription");
}
// handle input data
std::string prompt = json_value(inp_body, "prompt", std::string());
std::string language = json_value(inp_body, "language", std::string());
std::string response_format = json_value(inp_body, "response_format", std::string("json"));
if (response_format != "json") {
throw std::invalid_argument("Only 'json' response_format is supported for transcription");
}
if (prompt.empty()) {
prompt = "Transcribe audio to text";
}
if (!language.empty()) {
prompt += string_format(" (language: %s)", language.c_str());
}
prompt += get_media_marker();
json chatcmpl_body = inp_body; // copy all fields
chatcmpl_body["messages"] = json::array({
{
{"role", "user"},
{"content", prompt},
},
});
// because input from form-data, everything is string, we need to correct the types here
std::string stream = json_value(inp_body, "stream", std::string("false"));
chatcmpl_body["stream"] = stream == "true";
if (inp_body.contains("max_tokens")) {
std::string inp = inp_body["max_tokens"].get<std::string>();
chatcmpl_body["max_tokens"] = std::stoul(inp);
}
if (inp_body.contains("temperature")) {
std::string inp = inp_body["temperature"].get<std::string>();
chatcmpl_body["temperature"] = std::stof(inp);
}
return chatcmpl_body;
}
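// Usage sketch (illustrative only; `example_convert_transcriptions` is hypothetical and
// kept out of any call path). Transcription requests arrive as multipart form-data, so
// scalar fields come in as strings and the audio bytes come in through `in_files`.
[[maybe_unused]] static void example_convert_transcriptions() {
    std::map<std::string, raw_buffer> in_files;
    in_files["file"] = raw_buffer(); // audio bytes would go here
    std::vector<raw_buffer> out_files;
    json body = {
        {"language", "en"},
        {"temperature", "0.2"}, // a string on purpose: everything in form-data is a string
        {"stream", "true"},
    };
    json out = convert_transcriptions_to_chatcmpl(body, in_files, out_files);
    // out["stream"] == true (bool), out["temperature"] == 0.2f (float),
    // out["messages"][0]["content"] is the prompt with the media marker appended
    (void) out;
}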
json convert_anthropic_to_oai(const json & body) {
json oai_body;
// Convert system prompt
json oai_messages = json::array();
auto system_param = json_value(body, "system", json());
if (!system_param.is_null()) {
std::string system_content;
if (system_param.is_string()) {
system_content = system_param.get<std::string>();
} else if (system_param.is_array()) {
for (const auto & block : system_param) {
if (json_value(block, "type", std::string()) == "text") {
system_content += json_value(block, "text", std::string());
}
}
}
oai_messages.push_back({
{"role", "system"},
{"content", system_content}
});
}
// Convert messages
if (!body.contains("messages")) {
throw std::runtime_error("'messages' is required");
}
const json & messages = body.at("messages");
if (messages.is_array()) {
for (const auto & msg : messages) {
std::string role = json_value(msg, "role", std::string());
if (!msg.contains("content")) {
if (role == "assistant") {
continue;
}
oai_messages.push_back(msg);
continue;
}
const json & content = msg.at("content");
if (content.is_string()) {
oai_messages.push_back(msg);
continue;
}
if (!content.is_array()) {
oai_messages.push_back(msg);
continue;
}
json tool_calls = json::array();
json converted_content = json::array();
json tool_results = json::array();
std::string reasoning_content;
bool has_tool_calls = false;
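// Each Anthropic content block maps onto a chat-completions counterpart:
//   text        -> kept as-is in the converted content array
//   thinking    -> accumulated into reasoning_content
//   image       -> {"type": "image_url"} with a data: URL (base64 source) or a plain URL
//   tool_use    -> an entry in tool_calls ({"type": "function", ...})
//   tool_result -> a separate {"role": "tool"} message appended after this one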
for (const auto & block : content) {
std::string type = json_value(block, "type", std::string());
if (type == "text") {
converted_content.push_back(block);
} else if (type == "thinking") {
reasoning_content += json_value(block, "thinking", std::string());
} else if (type == "image") {
json source = json_value(block, "source", json::object());
std::string source_type = json_value(source, "type", std::string());
if (source_type == "base64") {
std::string media_type = json_value(source, "media_type", std::string("image/jpeg"));
std::string data = json_value(source, "data", std::string());
std::ostringstream ss;
ss << "data:" << media_type << ";base64," << data;
converted_content.push_back({
{"type", "image_url"},
{"image_url", {
{"url", ss.str()}
}}
});
} else if (source_type == "url") {
std::string url = json_value(source, "url", std::string());
converted_content.push_back({
{"type", "image_url"},
{"image_url", {
{"url", url}
}}
});
}
} else if (type == "tool_use") {
tool_calls.push_back({
{"id", json_value(block, "id", std::string())},
{"type", "function"},
{"function", {
{"name", json_value(block, "name", std::string())},
{"arguments", json_value(block, "input", json::object()).dump()}
}}
});
has_tool_calls = true;
} else if (type == "tool_result") {
std::string tool_use_id = json_value(block, "tool_use_id", std::string());
auto result_content = json_value(block, "content", json());
std::string result_text;
if (result_content.is_string()) {
result_text = result_content.get<std::string>();
} else if (result_content.is_array()) {
for (const auto & c : result_content) {
if (json_value(c, "type", std::string()) == "text") {
result_text += json_value(c, "text", std::string());
}
}
}
tool_results.push_back({
{"role", "tool"},
{"tool_call_id", tool_use_id},
{"content", result_text}
});
}
}
if (!converted_content.empty() || has_tool_calls || !reasoning_content.empty()) {
json new_msg = {{"role", role}};
if (!converted_content.empty()) {
new_msg["content"] = converted_content;
} else if (has_tool_calls || !reasoning_content.empty()) {
new_msg["content"] = "";
}
if (!tool_calls.empty()) {
new_msg["tool_calls"] = tool_calls;
}
if (!reasoning_content.empty()) {
new_msg["reasoning_content"] = reasoning_content;
}
oai_messages.push_back(new_msg);
}
for (const auto & tool_msg : tool_results) {
oai_messages.push_back(tool_msg);
}
}
}
oai_body["messages"] = oai_messages;
// Convert tools
if (body.contains("tools")) {
const json & tools = body.at("tools");
if (tools.is_array()) {
json oai_tools = json::array();
for (const auto & tool : tools) {
oai_tools.push_back({
{"type", "function"},
{"function", {
{"name", json_value(tool, "name", std::string())},
{"description", json_value(tool, "description", std::string())},
{"parameters", tool.contains("input_schema") ? tool.at("input_schema") : json::object()}
}}
});
}
oai_body["tools"] = oai_tools;
}
}
// Convert tool_choice
if (body.contains("tool_choice")) {
const json & tc = body.at("tool_choice");
if (tc.is_object()) {
std::string type = json_value(tc, "type", std::string());
if (type == "auto") {
oai_body["tool_choice"] = "auto";
} else if (type == "any" || type == "tool") {
oai_body["tool_choice"] = "required";
}
}
}
// Convert stop_sequences to stop
if (body.contains("stop_sequences")) {
oai_body["stop"] = body.at("stop_sequences");
}
// Handle max_tokens (required in Anthropic, but we're permissive)
if (body.contains("max_tokens")) {
oai_body["max_tokens"] = body.at("max_tokens");
} else {
oai_body["max_tokens"] = 4096;
}
// Pass through common params
for (const auto & key : {"temperature", "top_p", "top_k", "stream"}) {
if (body.contains(key)) {
oai_body[key] = body.at(key);
}
}
// Handle Anthropic-specific thinking param
if (body.contains("thinking")) {
json thinking = json_value(body, "thinking", json::object());
std::string thinking_type = json_value(thinking, "type", std::string());
if (thinking_type == "enabled") {
int budget_tokens = json_value(thinking, "budget_tokens", 10000);
oai_body["thinking_budget_tokens"] = budget_tokens;
}
}
// Handle Anthropic-specific metadata param
if (body.contains("metadata")) {
json metadata = json_value(body, "metadata", json::object());
std::string user_id = json_value(metadata, "user_id", std::string());
if (!user_id.empty()) {
oai_body["__metadata_user_id"] = user_id;
}
}
return oai_body;
}
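// Usage sketch (illustrative only; `example_convert_anthropic` is hypothetical and kept
// out of any call path). A minimal Anthropic Messages body with a tool_use block, run
// through the converter above.
[[maybe_unused]] static void example_convert_anthropic() {
    json body = {
        {"system", "You are terse."},
        {"max_tokens", 256},
        {"messages", json::array({
            {
                {"role", "assistant"},
                {"content", json::array({
                    {{"type", "tool_use"}, {"id", "tu_1"}, {"name", "get_weather"},
                     {"input", {{"city", "Paris"}}}},
                })},
            },
        })},
    };
    json out = convert_anthropic_to_oai(body);
    // out["messages"][0] is the system message; out["messages"][1] has content "" and
    // tool_calls[0]["function"]["arguments"] == "{\"city\":\"Paris\"}" (serialized to a string)
    (void) out;
}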
json format_embeddings_response_oaicompat(
const json & request,
const std::string & model_name,


@@ -307,18 +307,6 @@ json oaicompat_chat_params_parse(
const server_chat_params & opt,
std::vector<raw_buffer> & out_files);
-// convert OpenAI Responses API format to OpenAI Chat Completions API format
-json convert_responses_to_chatcmpl(const json & body);
-// convert OpenAI transcriptions API format to OpenAI Chat Completions API format
-json convert_transcriptions_to_chatcmpl(
-    const json & body,
-    const std::map<std::string, raw_buffer> & in_files,
-    std::vector<raw_buffer> & out_files);
-// convert Anthropic Messages API format to OpenAI Chat Completions API format
-json convert_anthropic_to_oai(const json & body);
// TODO: move it to server-task.cpp
json format_embeddings_response_oaicompat(
const json & request,


@@ -1,5 +1,6 @@
#include "server-context.h"
#include "server-chat.h"
#include "server-common.h"
#include "server-http.h"
#include "server-task.h"
@@ -3774,7 +3775,7 @@ void server_routes::init_routes() {
this->post_responses_oai = [this](const server_http_req & req) {
auto res = create_response();
std::vector<raw_buffer> files;
-json body = convert_responses_to_chatcmpl(json::parse(req.body));
+json body = server_chat_convert_responses_to_chatcmpl(json::parse(req.body));
SRV_DBG("%s\n", "Request converted: OpenAI Responses -> OpenAI Chat Completions");
SRV_DBG("converted request: %s\n", body.dump().c_str());
json body_parsed = oaicompat_chat_params_parse(
@@ -3819,7 +3820,7 @@ void server_routes::init_routes() {
this->post_anthropic_messages = [this](const server_http_req & req) {
auto res = create_response();
std::vector<raw_buffer> files;
-json body = convert_anthropic_to_oai(json::parse(req.body));
+json body = server_chat_convert_anthropic_to_oai(json::parse(req.body));
SRV_DBG("%s\n", "Request converted: Anthropic -> OpenAI Chat Completions");
SRV_DBG("converted request: %s\n", body.dump().c_str());
json body_parsed = oaicompat_chat_params_parse(
@@ -3837,7 +3838,7 @@ void server_routes::init_routes() {
this->post_anthropic_count_tokens = [this](const server_http_req & req) {
auto res = create_response();
std::vector<raw_buffer> files;
-json body = convert_anthropic_to_oai(json::parse(req.body));
+json body = server_chat_convert_anthropic_to_oai(json::parse(req.body));
SRV_DBG("%s\n", "Request converted: Anthropic -> OpenAI Chat Completions");
SRV_DBG("converted request: %s\n", body.dump().c_str());
json body_parsed = oaicompat_chat_params_parse(


@@ -712,6 +712,11 @@ void server_models::unload(const std::string & name) {
if (it->second.meta.is_running()) {
SRV_INF("stopping model instance name=%s\n", name.c_str());
stopping_models.insert(name);
+if (it->second.meta.status == SERVER_MODEL_STATUS_LOADING) {
+    // special case: if model is in loading state, unloading means force-killing it
+    SRV_WRN("model name=%s is still loading, force-killing\n", name.c_str());
+    subprocess_terminate(it->second.subproc.get());
+}
cv_stop.notify_all();
// status change will be handled by the managing thread
} else {


@@ -1,6 +1,7 @@
#include "server-task.h"
#include "build-info.h"
#include "server-chat.h"
#include "chat.h"
#include "common.h"
#include "json-schema-to-grammar.h"
@@ -873,7 +874,7 @@ json server_task_result_cmpl_final::to_json_oaicompat_chat_stream() {
json {
{"finish_reason", nullptr},
{"index", index},
{"delta", common_chat_msg_diff_to_json_oaicompat(diff)},
{"delta", server_chat_msg_diff_to_json_oaicompat(diff)},
},
})},
{"created", t},
@@ -1522,7 +1523,7 @@ json server_task_result_cmpl_partial::to_json_oaicompat_chat() {
}
for (const auto & diff : oaicompat_msg_diffs) {
-add_delta(common_chat_msg_diff_to_json_oaicompat(diff));
+add_delta(server_chat_msg_diff_to_json_oaicompat(diff));
}
if (!deltas.empty()) {