Llama-architecture q_proj/k_proj weights need an axis-0 row permutation
to match GGML's RoPE convention. The BF16 path applies this in
LlamaModel.modify_tensors via LlamaModel.permute, but the NVFP4 path
bypasses modify_tensors and writes weights directly through
ModelBase._repack_nvfp4. Without the permutation, attention heads end
up scrambled at inference and the model produces gibberish.
This change overrides _repack_nvfp4 on LlamaModel and applies the same
permutation to both the nibble-packed weight and the per-block scale
before delegating to ModelBase._repack_nvfp4 via super(). Reuses the
existing LlamaModel.permute static helper and respects the existing
undo_permute flag, so subclasses (Mistral, Granite, Llama4, etc.)
inherit the fix automatically.
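For reference, a minimal sketch of the row permutation being reused
(Python/numpy). It mirrors the existing LlamaModel.permute helper;
applying it as-is to the nibble-packed rows and the per-block scales
assumes both keep the same axis-0 row <-> output-channel mapping as the
unpacked weight:

    import numpy as np

    def permute_rows(weights: np.ndarray, n_head: int, n_head_kv: int | None) -> np.ndarray:
        # Axis-0 row reordering that converts the HF q/k layout to GGML's RoPE
        # convention; only rows move, so the same index shuffle works for the
        # BF16 weight, the packed NVFP4 weight, and the per-block scales.
        if n_head_kv is not None and n_head != n_head_kv:
            n_head = n_head_kv
        return (weights.reshape(n_head, 2, weights.shape[0] // n_head // 2, *weights.shape[1:])
                       .swapaxes(1, 2)
                       .reshape(weights.shape))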
Verified on a TinyLlama-1.1B reproducer: perplexity drops from 4419
(gibberish) to 43.9, matching the BF16-dequantized baseline (44.0).
Also verified end-to-end on ALIA-40b-instruct-2601 (BSC, Llama
architecture): multilingual generation in Spanish, Catalan, Basque and
Galician is coherent with the fix applied.
Co-authored-by: Chema <chema@montevive.ai>
* hmx: extract shared interleave headers and unify matmul batched
* hmx: add HMX-accelerated flash attention for prefill
* hmx: replace asm wrappers with Q6_ intrinsics in hmx-utils.h
Switches three single-instruction helpers from inline asm to the matching
Q6_ intrinsics, following the style established by aizip f8737609a and
used by the upstream PR #21554 hmx-matmul-ops.c rewrite:
hmx_set_output_scales asm "bias=mxmem2" -> Q6_bias_mxmem2_A
hmx_load_tile_pair_fp16 asm packet -> Q6_activation_hf_mxmem_RR
+ Q6_weight_hf_mxmem_RR
hmx_consume_accumulator_fp16 asm "mxmem=acc" -> Q6_mxmem_AR_after_hf
hmx_load_tiles_fp16 stays on inline asm: it uses ":deep" activation
streaming, and the mixed Q6_activation_hf_mxmem_RR_deep + non-deep
Q6_weight_hf_mxmem_RR pair fails the HMX backend constraint check
("activate weight pair (1) exceeds limit (1)"). The asm bundle keeps
both halves in one VLIW packet and avoids the diagnostic.
Functionally equivalent — same instructions emitted; the Q6_ intrinsics
just give the compiler more visibility for scheduling.
* hmx: drop the duplicate interleave_fp16_weight_chunk_to_tiles
* hmx: apply upstream optimization to hmx-flash-attn-ops.c
Apply restrict, __builtin_assume, and pointer accumulation to the three
HMX workers (qk_dot, o_update, o_norm) and to the matching inline HMX
loops in op_hmx_flash_attn_ext.
* hmx: unify interleave helper
* hmx: multi-thread Q load / O store and enable prefill FA dispatch
Extract inline Q-load and O-store loops into worker_pool-parallel helpers
(fa_phase_q_load, fa_phase_o_store) so HVX threads split the F32↔F16
conversion work across row ranges. Also relax the softmax threading
gate from n_row_vec_cnt >= n_threads to >= 2, which was unnecessarily
forcing single-thread fallback when n_rows_g < 512.
On the dispatch side, remove the ne[2] != 1 guard that blocked multi-head
(prefill) FA from reaching the HTP backend — GQA is already handled
internally by both the HMX and HVX flash-attention paths.
* hmx: relax matmul pipeline gate to cover k > n shapes (e.g. FFN_down)
* hmx: optimize FA softmax mask phase (no-ALiBi fast path + GQA dedup)
* hmx: Add an asm memory clobber at the phase boundary to prevent a reordering bug
* [experimental]: fp16 softmax (EXP2_HF) to accelerate fa
Bake log2(e) into qk_scale and use hvx_exp2_hf directly for P and m_diff
(base-2 consistent, matches htp-ops-lib). ~22 ALU ops for 64 lanes vs
~44 for the F32 round-trip path.
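In scalar form, the base-2 reformulation (a numpy sketch under the
commit's assumptions; hvx_exp2_hf itself is the HVX primitive and is not
shown here):

    import numpy as np

    LOG2_E = 1.4426950408889634  # log2(e), folded into qk_scale once

    def softmax_rows_base2(qk: np.ndarray, scale: float) -> np.ndarray:
        # exp(scale*x - m) == 2**(scale*LOG2_E*x - m2) when m2 is the row max
        # taken in the already-scaled base-2 domain, so both P and the m_diff
        # correction only ever need exp2, never exp().
        s = qk * (scale * LOG2_E)
        m2 = s.max(axis=-1, keepdims=True)
        p = np.exp2(s - m2)
        return p / p.sum(axis=-1, keepdims=True)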
* hmx flash-attn: refine cost model coefficients based on profiling data
* hmx flash-attn: replace asm clobber with targeted volatile reads on vtcm_d_tiles
* hmx flash-attn: fix prefill correctness (dst indexing, softmax reduce, V stride)
* hmx flash-attn: fix p_tiles dual-tile OOB race; enable MT + pipeline
* hmx flash-attn: preserve additive mask bias in no-ALiBi fast path
The no-ALiBi fast path (max_bias==0) was skipping mask add entirely on
the assumption that mask values are only {0, -inf}. This is wrong when
the mask carries additive positional bias — those terms were silently
dropped. Keep the slope-mul skip (slope≡1.0) but add mask back so the
bias survives; vmux still clamps below -16 to -inf.
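A scalar sketch of the corrected fast path (numpy; whether the -16 clamp
is applied to the mask or to the masked logits is an assumption here):

    import numpy as np

    def masked_logits_no_alibi(logits: np.ndarray, mask: np.ndarray) -> np.ndarray:
        # max_bias == 0, so slope == 1.0 and the slope multiply is still
        # skipped, but the mask is added back so additive positional bias
        # survives instead of being silently dropped.
        s = logits + mask
        # emulate the vmux clamp: anything below -16 becomes -inf
        return np.where(s < -16.0, -np.inf, s)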
Also add HMX FA coverage to test-backend-ops: prefill shapes (nb=64,
nb=32) × {mask on/off} × {ALiBi on/off} × {softcap on/off}, F16 KV,
hs ∈ {64, 128}.
* hmx: fix softcap+EXP2_HF interaction, tighten matmul pipeline gate, add FA tests
- flash-attn: when EXP2_HF is on AND logit_softcap is active, fold
log2(e) into the post-tanh multiplier (v_cap) instead of pre-baking
it into qk_scale. Pre-baking shifted the tanh knee from x≈c to
x≈c/log2(e) and produced numerically wrong softcapped outputs
whenever both knobs were enabled (see the sketch after this list).
- flash-attn softmax (fa_softmax_thread): replace the union+memcpy
scalar extract pattern with HVX vmux-based per-row accumulators on
rowmax/rowsum. Add hvx_vec_get_f16 helper in hvx-base.h. Functional
parity, less scalar code, clearer hf/qf16 lane-format contract.
- matmul (hmx_mat_mul_permuted_qk_0_d16a32): pick pipeline vs sequential
layout based on whether the chunker actually yields >=2 n-chunks,
instead of the static (m>=128 && n>=256) gate. Avoids paying for
output double-buffer + worker dispatch when there is no HMX/HVX
overlap to gain (e.g. shapes that collapse to one n-chunk).
- tests: add HMX flash-attention coverage over the
{mask, ALiBi (max_bias), logit_softcap} cross-product for the prefill
path — head_dim 64/128, GQA 4×4, kv=512/nb=64 plus a kv=113/nb=32
non-aligned case.
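Scalar sketch of the log2(e)/softcap fold from the first bullet (numpy;
the v_cap naming follows the commit text, normalization is omitted):

    import numpy as np

    LOG2_E = 1.4426950408889634

    def softcapped_exp(qk: np.ndarray, scale: float, cap: float) -> np.ndarray:
        # Correct order: softcap sees the log2(e)-free argument, then log2(e)
        # is folded into the post-tanh multiplier (v_cap) before exp2, so the
        # result equals exp(cap * tanh(qk*scale / cap)).
        s = qk * scale
        v_cap = cap * LOG2_E
        return np.exp2(v_cap * np.tanh(s / cap))

    def softcapped_exp_prebaked(qk: np.ndarray, scale: float, cap: float) -> np.ndarray:
        # What pre-baking log2(e) into qk_scale did: the tanh knee moves from
        # x ~ cap to x ~ cap/log2(e), giving numerically wrong softcapped output.
        s = qk * scale * LOG2_E
        return np.exp2(cap * np.tanh(s / cap))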
* [Help Wanted]: refactor D matrix computation into separate function for clarity and maintainability
* format code
* hexagon: looks like -O3 is causing issues with the large code base, switch to -O2 and -flto instead
* hexagon: use hex_ prefix for swap_ptr
* hexagon: move vtcm_seq_alloc into vtcm-utils.h
More vtcm allocator updates are coming, so it makes sense to start a separate header for it.
* hmx-utils: add hmx_prefix for layout converters
* hmx-mm: move main hmx_mm functions to the end, remove unused fwd decls, etc
* hmx-mm: remove unused qweight_fetch_task_state_t and minor alignment fixes
* hmx-fa: minor alignment fixes
* hmx-fa: move hmx_flash_atten into hmx-ops.h
* hmx-fa: remove redundant workpool pointer in the hmx_fa_ctx, plus minor alignment updates
* hmx-fa: minor alignment and simplifications
* hexagon: move FA_EXP_F16 option to hostside CMake file
* hmx-fa: use hvx_vec_splat_f16 instead of fp16_to_bits
* hmx-fa: add hvx_splat_u16/u8 and use that in the fa instead of the custom hvx_fill
* hmx-fa: some more alignment updates in the core fa function
* hmx-fa: keep slopes in vtcm in fp16
Saves malloc/free and removes the need for float -> fp16 downcast on every use.
* hexagon: consistent noinline usage (after static)
* hex-hmx: consistent use of FARF_HIGH to enable debug output
* hmx-utils: no need for always_inline attr
* hex-hmx: consistent noinline usage (static noinline ...)
* hex-hmx: simplify init_col_scales
* hexagon: fix editorconfig errors
* hmx-mm: minor alignment fixes
---------
Co-authored-by: Max Krasnyansky <maxk@qti.qualcomm.com>
fix #22544 (my fault!)
Credit to @Anai-Guo, ref #22559 - since that one was closed due to the
new contributor policy I am taking the liberty of re-submitting that PR
here.
* vulkan: Support asymmetric FA in coopmat2 path
There has been some recent interest/experimentation with mixed quantization
types for FA. I had originally designed the cm2 FA shader with this in mind
(because I didn't realize it wasn't supported at the time!), this change
adds the missing pieces and enables it.
Also support Q1_0 since people have been trying that out (seems crazy, but
who knows).
We should be able to do similar things in the coopmat1/scalar path, but
there's another change open against the scalar path and I don't want to
conflict.
* reorder cases
* Add mat-vec fast path for MUL_MAT_ID.
* Add shared accumulation vec logic and support for the other types.
* Add i-quant mat-mat for MUL_MAT_ID and fix some parts
* Remove n_experts from shader_lib_context.
* scripts : add wc2wt.sh - create worktree from current HEAD
Add a script to create a git worktree on a new branch from the current
HEAD. Similar to pr2wt.sh but for local development branches instead of
PRs.
Usage:
./scripts/wc2wt.sh gg/new-feature
./scripts/wc2wt.sh gg/new-feature "bash -l"
Assisted-by: llama.cpp:local pi
* cont : no need to try to delete the branch
* port #22358 PR to examples/speculative/speculative.cpp
* use vocab_[tgt,dft] instead of ctx_[tgt,dft] when logging on draft
model / target model vocabulary mismatch
Co-authored-by: Petros Sideris <petros.sideris@nokia.com>
* hexagon: allow host to set max vmem size
We use a sane default but it's helpful to allow for an override if needed.
* hexagon: add support for measuring vmem space and move pinned mmap management to the host
* hexagon: update vmem checks to use uint64
* hexagon: bump op buffers to 16 (matches max mmaps)
* hexagon: bump default vmem to 3.2GB
* hexagon: add support for autodetecting vmem space and some logging cleanup in that area
* hexagon: fix whitespace warnings
* Update scripts/snapdragon/adb/run-cli.sh
Co-authored-by: Pascal <admin@serveurperso.com>
* hex-adb: fix run-completion script
---------
Co-authored-by: Pascal <admin@serveurperso.com>
* ggml-cpu: cmake: append xsmtvdotii march for SpacemiT IME
When GGML_CPU_RISCV64_SPACEMIT=ON is set, ime1_kernels.cpp contains
inline asm for the vmadot family, which requires the xsmtvdotii custom
extension (the issue is described in several blog posts and was
confirmed on the K3 platform).
The current CMakeLists does not include xsmtvdotii, so any toolchain
that honours the explicit -march (tested with SpacemiT GCC 15.2) fails
at the assembler stage:
Error: unrecognized opcode `vmadot v16,v14,v0',
extension `xsmtvdotii' required
Append _xsmtvdotii to MARCH_STR when GGML_CPU_RISCV64_SPACEMIT is
enabled so the IME path can actually build with a capable toolchain.
No effect on builds that leave GGML_CPU_RISCV64_SPACEMIT off.
toolchain from https://www.spacemit.com/community/resources-download/Tools
* Update ggml/src/ggml-cpu/CMakeLists.txt
Co-authored-by: alex-spacemit <jinghui.huang@spacemit.com>
---------
Co-authored-by: alex-spacemit <jinghui.huang@spacemit.com>
* Changed to leak logger singleton to prevent hanging on Windows
* Fix comment
* Stopped using static vector
Using a static std::vector causes g_col to be released before the logger thread exits, so the logger thread touches freed memory and crashes
* Change so all logs are output before exit
* Added debug logging
* added more logging
* Added logging
* Explicitly free logger to avoid hanging on Win
* Reverted to leak logger instance again
* Removed debug log and fixed comment
* Fixed comment
---------
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
* Added SVE-tuned code for the gemm_q8_0_4x8_q8_0() kernel
* Change arrays to static const in repack.cpp
---------
Co-authored-by: Vithulep <prashant.vithule@fujitsu.com>
* ggml-cuda: add flash-attn support for DKQ=320/DV=256 with ncols2=32 (GQA=32)
Adds MMA-f16 and tile kernel configs, dispatch logic, template instances,
and tile .cu file for Mistral Small 4 (head sizes 320/256), restricting to
ncols2=32 to support GQA ratio 32 only.
* Add a check to return BEST_FATTN_KERNEL_NONE when GQA != 32
* Apply suggestions from code review
Address review comments
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Address review comments and make the kernel config default to DQK=512, DV=512 instead of DQK=256, DV=256
* Fixed a bug with sinks: with ncols=32 two warp-groups are created, but the sinks index was the same (0,...,15) for both groups, so with sinks=1 the output did not match the CPU output. Added sink_base as the base index for each warp-group (threadIdx.y / np).
* Apply suggestions from code review
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
* Update ggml/src/ggml-cuda/template-instances/generate_cu_files.py
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
---------
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
The DONE state absorbs all tokens, including a new start tag, so any think blocks after the first run unbudgeted. Observed on unsloth/Qwen3.6-27B-GGUF, which interleaves multiple <think> blocks per response.
Fixed by advancing start_matcher in the DONE branch and re-arming to COUNTING with a fresh budget on a match. Adds a regression test (test-reasoning-budget: test 6).
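A minimal, self-contained sketch of the re-arm logic (Python; the
COUNTING/DONE states and the start_matcher idea come from the commit
text, everything else is hypothetical, and plain token equality stands
in for the incremental start matcher):

    COUNTING, DONE = "COUNTING", "DONE"

    class ThinkBudget:
        def __init__(self, budget: int, start_tag: str = "<think>", end_tag: str = "</think>"):
            self.budget = budget
            self.start_tag, self.end_tag = start_tag, end_tag
            self.state, self.remaining = COUNTING, budget

        def feed(self, token: str) -> None:
            if self.state == COUNTING:
                if token == self.end_tag:
                    self.state = DONE          # budget stops ticking down
                else:
                    self.remaining -= 1        # enforcement on exhaustion omitted
            elif self.state == DONE:
                # Old behaviour: swallow everything, so a second <think> block ran
                # unbudgeted. New behaviour: keep matching the start tag and re-arm
                # COUNTING with a fresh budget when it appears.
                if token == self.start_tag:
                    self.state, self.remaining = COUNTING, self.budget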