* ggml-cuda: add internal AllReduce provider for tensor parallelism
Introduces an NCCL-free AllReduce implementation for LLAMA_SPLIT_MODE_TENSOR
using a CUDA kernel that pipelines the D2H copy, the cross-GPU handshake via
pinned-memory volatile flags, and the reduction in a single kernel launch
per GPU.
New files:
- ggml/src/ggml-cuda/comm.cuh — ggml_cuda_allreduce_provider enum
- ggml/src/ggml-cuda/allreduce.cuh — pipeline API declarations
- ggml/src/ggml-cuda/allreduce.cu — kernel + pipeline init/dispatch
ggml-cuda.cu changes:
- ggml_backend_cuda_comm_context gains ar_pipeline field
- Provider selection via GGML_CUDA_ALLREDUCE env var ("nccl" / "internal")
- INTERNAL provider initialises the pipeline at comm_init time
- Dispatch routes to ggml_cuda_ar_allreduce(); falls back to meta-backend
CPU reduce for unsupported sizes or GPU counts (> 2)
Current scope: 2 GPUs, FP32, tensors <= 256 KB. Notes in NOTES-allreduce.md.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
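A minimal sketch of the pinned-memory setup such a pipeline needs (variable names and sizes here are illustrative, not the actual allreduce.cu symbols; CUDA_CHECK is ggml-cuda's usual error-check macro): one portable, mapped host allocation holds the handshake flags, and each GPU gets a device pointer into it, which is what makes the cross-GPU volatile-flag handshake possible.

    // Sketch only: shared pinned host flags visible to both GPUs.
    // cudaHostAllocPortable makes the allocation usable from every device
    // context; cudaHostAllocMapped lets each device map it for in-kernel access.
    const size_t flag_bytes = 2 * sizeof(unsigned int);   // one arrival flag per GPU (illustrative)
    void * host_flags = nullptr;
    CUDA_CHECK(cudaHostAlloc(&host_flags, flag_bytes, cudaHostAllocPortable | cudaHostAllocMapped));
    memset(host_flags, 0, flag_bytes);

    void * dev_flags[2] = { nullptr, nullptr };
    for (int d = 0; d < 2; ++d) {
        CUDA_CHECK(cudaSetDevice(d));
        CUDA_CHECK(cudaHostGetDevicePointer(&dev_flags[d], host_flags, 0));
    }
    // dev_flags[d] is passed to the kernel launched on device d; volatile
    // reads/writes through it implement the cross-GPU arrival handshake.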
* llama-bench: add --allreduce flag to select AllReduce provider
Adds --allreduce <auto|nccl|internal> to llama-bench, wired up via the shared
field pattern so it stays consistent with other multi-value flags. Useful for
isolating hangs or regressions in tensor-parallel mode: pass --allreduce nccl
to force NCCL and bypass the internal provider.
Also fixes ggml_cuda_select_allreduce_provider() to treat an empty
GGML_CUDA_ALLREDUCE env var the same as unset (avoids spurious warning when
llama-bench sets it to "" for the "auto" case).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* llama-bench: rename --allreduce to --reduction-provider / -rp
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* llama-bench: pass WARN/ERROR log messages through in non-verbose mode
The null log callback was silently dropping all messages. WARN and ERROR
should always be visible since they indicate legitimate issues (e.g. a
requested reduction provider not being available).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* cmake: improve NCCL detection for source-tree builds, add static/dynamic switch
FindNCCL.cmake now searches the cmake source-build layout used by the Windows
NCCL port (cmake/lib/Release for static, cmake/src/Release for dynamic import
lib) and also checks src/include for the generated nccl.h header.
New option GGML_CUDA_NCCL_STATIC (default OFF) selects static vs dynamic
linking and controls which paths and library names are searched.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* ggml-cuda: add AllReduce hang watchdog (GGML_CUDA_AR_WATCHDOG)
When compiled with -DGGML_CUDA_AR_WATCHDOG=ON, the AllReduce path uses a
debug kernel variant that writes per-GPU spin diagnostics to pinned host
memory. A host-side blocking poll (cudaEventQuery + volatile reads) detects
hangs and logs WARN with the last observed arrival counters and spin counts.
The timeout (ms) and the kernel bailout threshold are controlled at runtime
by the GGML_CUDA_AR_WATCHDOG and GGML_CUDA_AR_MAX_SPIN env vars.
Zero overhead on the production path — all debug code is behind #ifdef.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
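For reference, the runtime knobs mentioned above could be read with something like the following (the env var names are from this commit; the parsing helper and default values are assumptions):

    // Sketch: runtime watchdog configuration via environment variables.
    #include <cstdlib>

    static int ar_env_int(const char * name, int def) {
        const char * val = getenv(name);
        return (val != nullptr && *val != '\0') ? atoi(val) : def;
    }

    // hang-detection timeout in milliseconds (default here is illustrative)
    const int wdog_timeout_ms = ar_env_int("GGML_CUDA_AR_WATCHDOG", 1000);
    // spin count at which the kernel bails out; 0 = never bail (illustrative)
    const int max_spin        = ar_env_int("GGML_CUDA_AR_MAX_SPIN", 0);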
* ggml-cuda: fix intermittent AllReduce hang on Blackwell PCIe
Add __threadfence_system() before the arrival signal write in
signal_set to ensure D2H data is globally visible before the peer
observes the arrival flag. Without this fence, the peer could enter
Phase 3 host reads before the data had fully landed, causing an
intermittent deadlock on RTX 5090 (Blackwell, PCIe-only).
Also redesign the watchdog from a blocking dispatch-thread poll to a
non-blocking background thread, eliminating the ~20ms per-slot
latency the old design added.
Verified: 30/30 soak test runs clean at ~50 t/s (previously ~1-in-15
hang rate).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
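A minimal sketch of the ordering this fix enforces (function and parameter names are illustrative, not the real allreduce.cu signal path): the system-wide fence must precede the arrival store so a peer can never observe the flag before the data it guards, and the flag carries an incrementing token so stale values from an earlier slot cannot satisfy the wait.

    // Sketch: signal/wait pair over a flag in pinned, mapped host memory.
    __device__ static void ar_signal(volatile unsigned int * flag, unsigned int token) {
        __threadfence_system();   // make the preceding D2H stores visible system-wide first
        *flag = token;            // only then publish the arrival
    }

    __device__ static void ar_wait(volatile unsigned int * flag, unsigned int token) {
        while (*flag < token) {
    #if __CUDA_ARCH__ >= 700
            __nanosleep(100);     // back off instead of hammering the PCIe link
    #endif
        }
    }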
* ggml-cuda: fix watchdog shutdown ordering and pipeline_free drain
- Stop watchdog thread BEFORE destroying GPU resources (events, streams)
to prevent polling destroyed handles → spurious "busy" readings
- Add cudaStreamSynchronize in pipeline_free to drain in-flight kernels
before freeing pinned host buffers they may still be reading
- Sleep-first watchdog polling: no +0ms noise, only logs when a kernel
is genuinely stuck past the poll interval
- Check wdog_stop in both outer and inner loops so join() returns
promptly instead of draining the entire queue
- Add Phase 3 breadcrumbs to debug[3] for hang localization
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
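A rough sketch of the teardown order this commit establishes (struct layout and names are hypothetical; the real state lives in the pipeline struct in allreduce.cu):

    #include <atomic>
    #include <thread>
    #include <cuda_runtime.h>

    struct ar_pipeline_sketch {
        std::atomic<bool> wdog_stop{false};
        std::thread       wdog_thread;
        cudaStream_t      stream[2];
        cudaEvent_t       event[2];
        void            * host_buf = nullptr;   // pinned host memory
    };

    static void ar_pipeline_free_sketch(ar_pipeline_sketch * p) {
        // 1. stop the watchdog before any handle it might poll is destroyed
        p->wdog_stop.store(true);
        if (p->wdog_thread.joinable()) {
            p->wdog_thread.join();
        }
        for (int d = 0; d < 2; ++d) {
            cudaSetDevice(d);
            // 2. drain in-flight kernels that may still read the pinned buffers
            cudaStreamSynchronize(p->stream[d]);
            // 3. only then destroy per-device handles
            cudaEventDestroy(p->event[d]);
            cudaStreamDestroy(p->stream[d]);
        }
        cudaFreeHost(p->host_buf);
    }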
* ggml-cuda: replace event-based watchdog with per-GPU ring buffer
Completely rework the GGML_CUDA_AR_WATCHDOG system:
- Replace the shared debug_buf + event-polling + queue design with
per-GPU ring buffers in pinned host memory
- Kernel writes a debug record only on spin-limit bailout: claims a
ring slot via atomicAdd (single-GPU host atomics work on RTX 5090),
writes fields, fences, sets completion flag, then all threads exit
- Watchdog thread simply polls ring head counters every 1ms and prints
any new complete records — no CUDA event queries, no mutex, no queue
- Zero overhead on the dispatch path (no queue posting, no memset)
- Watchdog shutdown returns within ~1ms (atomic bool, no drain)
- On bailout the kernel skips Phase 3 entirely and exits cleanly
Verified: 20/20 prefill soak test clean at ~1112 t/s, no hangs.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
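A sketch of the device side of that ring (record layout, ring size, and names are assumptions; the point is the claim/fence/flag ordering the commit describes):

    // Sketch: per-GPU debug ring in pinned, mapped host memory.
    #define AR_DBG_RING_SIZE 16

    struct ar_dbg_record {
        unsigned int arrival[2];            // last observed arrival counters
        unsigned int spins;                 // spin count at bailout
        volatile unsigned int complete;     // set last; host polls this
    };

    struct ar_dbg_ring {
        unsigned int  head;                 // claimed with atomicAdd from the device
        ar_dbg_record rec[AR_DBG_RING_SIZE];
    };

    __device__ static void ar_dbg_report(ar_dbg_ring * ring, unsigned int a0, unsigned int a1, unsigned int spins) {
        if (threadIdx.x != 0) {
            return;                          // one record per bailing block
        }
        const unsigned int slot = atomicAdd(&ring->head, 1u) % AR_DBG_RING_SIZE;  // claim a slot
        ring->rec[slot].arrival[0] = a0;
        ring->rec[slot].arrival[1] = a1;
        ring->rec[slot].spins      = spins;
        __threadfence_system();              // record body must land before the flag
        ring->rec[slot].complete   = 1;      // now visible to the polling host thread
    }
    // Host side: the watchdog thread wakes every ~1 ms, compares a cached head
    // against ring->head, and prints any record whose 'complete' flag is set.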
* fix: normalize line endings to LF (undo Windows CRLF conversion)
Five files were inadvertently converted to CRLF by the Windows
development environment, causing every line to show as changed in
diffs against master.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* .gitattributes: force LF line endings to prevent Windows CRLF conversion
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* ggml-cuda: move GGML_CUDA_AR_WATCHDOG from CMake option to local define
The watchdog is development-only; a global CMake option is overkill.
Move the toggle to a #define at the top of allreduce.cu (set to 0 by
default) and remove the option from ggml/CMakeLists.txt and the CUDA
CMakeLists.txt add_compile_definitions block.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* unify kernel debug paths
* use __threadfence_system explicitly (not in ggml_cuda_ar_signal_set)
* preferentially use internal reduction for <=2 GPUs
* templatize the main kernel to support fp16/bf16
* restore llama-bench.cpp changes
* revert CMakeLists changes
* remove notes from repo
* remove dead warmup code
* fix comments
* improve reduction provider fallback code
* add messages for allreduce fallback
* rework reduction provider init to not call ncclCommInitAll if using the internal provider
* fix case where a given tensor has not been computed
* add chunked mode to the kernel for unlimited vector size
* rework a few checks/fallbacks
* various small cleanups
* allow disabling CUDA reductions completely (falling back to the non-CUDA butterfly mode)
* simplify reduction provider selection
* minor simplifications
* more cleanups/fixes
* prototype alternate path for large reductions
* chunked version of large reduction path
* use bf16 for large reductions
* experimental reduction using cudaMemcpyPeerAsync (slightly slower)
* revert experimental change
* add combined conversion/reduction kernel
* add bf16 wire format for single kernel mode
* experimental on-stream small reduction kernel
* double buffer arrival slots, use token (incrementing) method
* double buffer host_buf for small reductions
* add waits around host_mem use in the large-reduction case (prevents stomping on in-use memory)
* remove watchdog code
* various cleanups / dead code removal
* fix fp16 mode
* fix some comments/logging statements
* use increasing token scheme for arrival signals
* add top-level comment to allreduce.cu
* improve top-level comment in allreduce.cu
* fix comments in ggml_cuda_ar_kernel
* improve event handling for hostmem buffer usage tracking
* change ev_pool to fixed 2D array
* add chunked memcpy fallback for extra-large reductions (>32 MB)
* change thresholds for copy-engine path and bf16 demotion
* multi-block kernel test
* more fine-tuning for chunk size, etc.
* various fixes for PR review
* more PR fixes
* fix semantics of all host mappings
* require ampere+
* small cleanups
* properly use host pointer for src/dst in cudaMemcpy calls
* allreduce: lazy-init the internal pipeline on first use
A config that lives entirely on NCCL never needs the chunked-kernel
pipeline (host_buf, host_large, dev_tmp, streams, events, arrival ring).
Defer pipeline creation to the first try_allreduce_internal call using the
same std::call_once pattern as ensure_nccl, so those resources stay
unallocated when only NCCL is in use.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
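The deferred-init shape, sketched with stand-in names (the real code reuses the same std::call_once pattern as ensure_nccl):

    #include <mutex>

    // stand-ins for the real pipeline type and its init/launch entry points
    struct ar_pipeline;
    ar_pipeline * ar_pipeline_init_sketch(int n_devices);
    bool          ar_launch_sketch(ar_pipeline * p);

    struct comm_context_sketch {
        std::once_flag ar_once;
        ar_pipeline  * pipeline = nullptr;   // stays null until the first internal AR
    };

    static bool try_allreduce_internal_sketch(comm_context_sketch & ctx, int n_devices) {
        // a pure-NCCL run never reaches this, so host_buf/host_large/dev_tmp,
        // streams, events and the arrival ring are never allocated for it
        std::call_once(ctx.ar_once, [&] {
            ctx.pipeline = ar_pipeline_init_sketch(n_devices);
        });
        if (ctx.pipeline == nullptr) {
            return false;   // init failed or unsupported; caller falls back
        }
        return ar_launch_sketch(ctx.pipeline);
    }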
* allreduce: assert n_backends == 2 instead of soft-fallback
ar_pipeline_init already requires n_devices == 2 and bails before any AR can
get here, so by the time we reach try_allreduce_internal we know we have
exactly two backends. Replace the runtime-debug-log fallback with a hard
assert.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* rework reduction provider selection. internal/nccl is OS dependent; most fallbacks are removed
* remove unneeded Turing arch check (llama.cpp doesn't even compile pre-Turing anyway)
* allreduce: ASCII-only comments and ggml_cuda_cast for value conversions
Replace non-ASCII characters in comments (em dashes, right arrows) with
ASCII equivalents (--, ->) so the source stays in the ggml/upstream norm.
In the kernel-side code, replace static_cast<Twire>/static_cast<Tdst>
with ggml_cuda_cast<...> so the BF16 conversions go through the fast
__float2bfloat16 / __bfloat162float intrinsics from convert.cuh. Pure
pointer and integer casts stay as static_cast.
Also drops two stray garbage tokens that snuck in from earlier merges
(a duplicated 'return ok; }' tail in allreduce.cu and a leftover '_reg)'
fragment in ggml-cuda.cu).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: use ggml_cuda_memcpy_1 for the chunked-kernel vector copies
The chunked kernel's two 16-byte register<->host transfers (Phase 1 store
and Phase 3 load) used reinterpret_cast<float4 *> on both sides. Replace
with ggml_cuda_memcpy_1<sizeof(wire)>, which is the canonical helper for
this pattern and emits the same int4 LD/ST under the hood.
Conformance passes; 5x reruns of 70b internal pp512 show 1832-1836 t/s,
matching the prior matrix value of 1831 t/s -- no perf change as expected.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: assert cuda_ctx->device matches the pipeline's device
Both ggml_cuda_ar_pipeline and ggml_backend_cuda_context carry the device
they were created for; if they ever disagree, every cuda call that follows
runs on the wrong device. Add GGML_ASSERT at each cuda_ctx retrieval site
in the AR path so the misuse fails fast rather than silently corrupting.
Also: rename __nv_bfloat16 -> nv_bfloat16 (typedef alias) for consistency
with the rest of the file, and tighten one cudaGetLastError check to fire
only after the to_bf16 call that can actually fail.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: expand one-liner for loops to braced bodies
Code-style preference -- match the rest of the file by writing every for
loop with the body on its own braced line. Three sites in the copy-engine
typed dispatch.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: rename template parameters Tdst/Twire/Tsrc -> T_dst/T_wire/T_src
Code-style preference per PR review -- T_dst/T_wire/T_src is more
consistent with surrounding code. Whole-word rename across all 58 sites
in allreduce.cu (kernel definitions, internal uses, and comment text).
Realigned the parameter columns in three function signatures whose
T_src/T_dst lines shifted by 1 char relative to their non-templated
neighbors.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: drop hyphen in 'chunked-kernel' across comments
Per PR review feedback -- 'chunked kernel' (no hyphen) reads more naturally
in running prose, especially for ESL readers. Pure comment-only change;
all 10 occurrences in allreduce.cu updated.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: use ggml_cuda_get_max_cpy_bytes() instead of hardcoded 16
The chunked kernel hardcoded a 16-byte vector unit; replace with the
ggml_cuda_get_max_cpy_bytes() helper that fattn-common.cuh uses for the
same purpose, so ELEMS_PER_VEC self-adjusts to the arch's widest
single-instruction copy.
Perf-neutral on supported targets (Volta+ returns 16).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* ggml-cuda: PR review fixes -- annotate #endif, fix stale comment, assert nbytes alignment
Three separate but minor changes from PR #22299 review feedback:
1. Annotate the five GGML_USE_NCCL #endif lines with the matching condition
so the pairing is visible without scrolling back.
2. The comment block on ggml_backend_cuda_comm_context claimed NCCL is
lazy-initialised; that was true at one point but the dispatch refactor
(727b141c0) made both NCCL and the internal pipeline eager. Rewrite
the comment to match current behaviour.
3. Assert in ggml_backend_cuda_comm_allreduce_internal that the tensor's
byte size is a 16-byte multiple. The chunked kernel issues full-width
vector loads/stores, so this is a precondition; tensor-parallel splits
of hidden-dim-multiples satisfy it trivially, but a hard assert turns
any caller-side bug into a clear failure rather than UB.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
device's new AR
records its ev.ker -- otherwise the second device's wait sees the first
device's just-recorded event (the in-flight new AR) and creates a circular
dependency with the in-kernel peer signal. Two-pass dispatch (all waits,
then all launches) avoids this.
Bump POOL_SIZE 2 -> 8 (small memory cost, more breathing room for the
GPU's view of the event chain) and add a runtime env override for the
hybrid kernel chunk size (GGML_CUDA_AR_HYBRID_CHUNK_BYTES) for tuning.
One-shot stderr diagnostic at first AR prints the chosen path + sizing.
Result on 2x RTX 5090 Linux, 70b ub_sweep:
ub=64 (1 MB AR): 913 -> 1036 t/s (+13.5% vs old, +1.8% vs NCCL)
ub=128 (2 MB AR): 1056 -> 1181 (+11.9%, +3.7% vs NCCL)
ub=256 (4 MB AR): 1212 -> 1424 (+17.5%, +3.5% vs NCCL)
Internal now beats NCCL at every size (+1.8% to +15.6%), recovering all
ground in the 1-4 MB regime that was previously a 10-12% loss.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* simplify the init logic
* address some other PR requests
* ggml-cuda: stub internal AllReduce on HIP/MUSA, drop pre-Ampere mention, gate NCCL fallback warning on !HIP
The internal AllReduce relies on cudaHostAllocPortable/Mapped,
cudaHostGetDevicePointer, and __nanosleep -- none of which the HIP or
MUSA shims expose -- so wrap the implementation in
!defined(GGML_USE_HIP) && !defined(GGML_USE_MUSA) and provide
nullptr/no-op/false stubs in the #else branch. The dispatcher already
treats a null pipeline as init failure and silently falls back to the
meta backend's generic AllReduce, so HIP/MUSA builds compile clean and
behave correctly without further call-site changes.
PR review follow-ups:
- drop "or pre-Ampere?" from the internal-init failure warning -- the
kernel doesn't require Ampere or newer.
- guard the "NCCL not compiled in" fallback warning behind
!defined(GGML_USE_HIP); the suggestion to install NCCL only makes
sense on NVIDIA builds.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
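The guard shape this describes, with simplified (hypothetical) signatures for the stubs:

    #if !defined(GGML_USE_HIP) && !defined(GGML_USE_MUSA)

    // ... real implementation: portable/mapped pinned host memory,
    //     cudaHostGetDevicePointer, __nanosleep spin-wait ...

    #else

    // No-op stubs; the dispatcher treats a null pipeline as init failure and
    // silently falls back to the meta backend's generic AllReduce, so the
    // call sites compile and behave correctly unchanged.
    struct ggml_cuda_ar_pipeline;

    static ggml_cuda_ar_pipeline * ar_pipeline_init_stub(int /*n_devices*/) {
        return nullptr;
    }
    static void ar_pipeline_free_stub(ggml_cuda_ar_pipeline * /*pipeline*/) {
        // nothing was allocated
    }
    static bool ar_allreduce_stub(ggml_cuda_ar_pipeline * /*pipeline*/, void * /*data*/, size_t /*nbytes*/) {
        return false;
    }

    #endif // !defined(GGML_USE_HIP) && !defined(GGML_USE_MUSA)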
* allreduce: guard __nanosleep on Volta+ and reject pre-Volta devices at init
__nanosleep is the only Volta-specific intrinsic in the kernel; wrap it
in #if __CUDA_ARCH__ >= GGML_CUDA_CC_VOLTA / NO_DEVICE_CODE so the file
still compiles cleanly when targeting older arches (the dispatcher's
init check below ensures the kernel is never actually launched on
pre-Volta).
Add a per-device compute-capability check in pipeline_init that returns
nullptr if any device is below sm70. The dispatcher already treats
nullptr as init failure and silently falls back to the meta backend's
generic AllReduce.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
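The arch guard in isolation (GGML_CUDA_CC_VOLTA and NO_DEVICE_CODE are the existing ggml-cuda helpers from common.cuh; the wrapper function itself is illustrative):

    __device__ static void ar_spin_pause_sketch() {
    #if __CUDA_ARCH__ >= GGML_CUDA_CC_VOLTA
        __nanosleep(100);   // the only sm_70+ intrinsic in the kernel
    #else
        NO_DEVICE_CODE;     // keeps older-arch compiles clean; pipeline_init
                            // rejects < sm70 so this path is never launched
    #endif // __CUDA_ARCH__ >= GGML_CUDA_CC_VOLTA
    }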
* allreduce: fix CI -Werror warnings (sign-compare, format, restrict alias, maybe-uninitialized)
The CUDA CI builds with -Werror -Wsign-compare -Wformat -Wrestrict
-Wmaybe-uninitialized. Address each:
- n_devices is size_t; change `int i; i < n_devices` to size_t in the
three init loops, and the matching GGML_LOG_INFO format from %d to %zu.
- ggml_cuda_ar_kernel was launched with sendbuf == recvbuf (in-place
reduction), so the __restrict__ qualifiers on those parameters were
technically UB. Drop __restrict__ from sendbuf and recvbuf; an A/B
sweep showed <0.6% perf delta (within noise) on Linux.
- The buf/src/dst pointer arrays in ggml_cuda_ar_allreduce and the
per-iteration arrays in ggml_cuda_ar_allreduce_copy_outer were
declared with size GGML_CUDA_MAX_DEVICES but the loop only writes
indices [0, n_devices); zero-initialise so the compiler sees the
tail elements as defined.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
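Illustrative before/after for the first and third items (the log text and local names are made up; GGML_CUDA_MAX_DEVICES and GGML_LOG_INFO are the existing ggml symbols):

    // sign-compare + format: index with size_t and log with %zu
    const size_t n_devices = 2;                              // illustrative
    for (size_t i = 0; i < n_devices; ++i) {                 // was: for (int i = 0; ...)
        GGML_LOG_INFO("allreduce: init device %zu\n", i);    // was: %d
    }

    // maybe-uninitialized: the loop below only writes [0, n_devices), so the
    // fixed-size arrays are zero-initialised up front
    void * bufs[GGML_CUDA_MAX_DEVICES] = { nullptr };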
* ggml-cuda: drop unused-function warning by guarding try_allreduce_nccl behind GGML_USE_NCCL
The only call site (in init_nccl) is already inside #ifdef GGML_USE_NCCL,
so the function is unreferenced in non-NCCL builds and trips
nvcc's -Werror=unused-function check. Move the guard from inside the
function body to around the entire definition.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
llama.cpp
LLM inference in C/C++
Recent API changes
Hot topics
- Hugging Face cache migration: models downloaded with -hf are now stored in the standard Hugging Face cache directory, enabling sharing with other HF tools.
- guide : using the new WebUI of llama.cpp
- guide : running gpt-oss with llama.cpp
- [FEEDBACK] Better packaging for llama.cpp to support downstream consumers 🤗
- Support for the gpt-oss model with native MXFP4 format has been added | PR | Collaboration with NVIDIA | Comment
- Multimodal support arrived in llama-server: #12898 | documentation
- VS Code extension for FIM completions: https://github.com/ggml-org/llama.vscode
- Vim/Neovim plugin for FIM completions: https://github.com/ggml-org/llama.vim
- Hugging Face Inference Endpoints now support GGUF out of the box! https://github.com/ggml-org/llama.cpp/discussions/9669
- Hugging Face GGUF editor: discussion | tool
Quick start
Getting started with llama.cpp is straightforward. Here are several ways to install it on your machine:
- Install llama.cpp using brew, nix or winget
- Run with Docker - see our Docker documentation
- Download pre-built binaries from the releases page
- Build from source by cloning this repository - check out our build guide
Once installed, you'll need a model to work with. Head to the Obtaining and quantizing models section to learn more.
Example command:
# Use a local model file
llama-cli -m my_model.gguf
# Or download and run a model directly from Hugging Face
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
# Launch OpenAI-compatible API server
llama-server -hf ggml-org/gemma-3-1b-it-GGUF
Description
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide
range of hardware - locally and in the cloud.
- Plain C/C++ implementation without any dependencies
- Apple silicon is a first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks
- AVX, AVX2, AVX512 and AMX support for x86 architectures
- RVV, ZVFH, ZFH, ZICBOP and ZIHINTPAUSE support for RISC-V architectures
- 1.5-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit integer quantization for faster inference and reduced memory use
- Custom CUDA kernels for running LLMs on NVIDIA GPUs (support for AMD GPUs via HIP and Moore Threads GPUs via MUSA)
- Vulkan and SYCL backend support
- CPU+GPU hybrid inference to partially accelerate models larger than the total VRAM capacity
The llama.cpp project is the main playground for developing new features for the ggml library.
Models
Typically finetunes of the base models below are supported as well.
Instructions for adding support for new models: HOWTO-add-model.md
Text-only
- LLaMA 🦙
- LLaMA 2 🦙🦙
- LLaMA 3 🦙🦙🦙
- Mistral 7B
- Mixtral MoE
- DBRX
- Jamba
- Falcon
- Chinese LLaMA / Alpaca and Chinese LLaMA-2 / Alpaca-2
- Vigogne (French)
- BERT
- Koala
- Baichuan 1 & 2 + derivations
- Aquila 1 & 2
- Starcoder models
- Refact
- MPT
- Bloom
- Yi models
- StableLM models
- Deepseek models
- Qwen models
- PLaMo-13B
- Phi models
- PhiMoE
- GPT-2
- Orion 14B
- InternLM2
- CodeShell
- Gemma
- Mamba
- Grok-1
- Xverse
- Command-R models
- SEA-LION
- GritLM-7B + GritLM-8x7B
- OLMo
- OLMo 2
- OLMoE
- Granite models
- GPT-NeoX + Pythia
- Snowflake-Arctic MoE
- Smaug
- Poro 34B
- Bitnet b1.58 models
- Flan T5
- Open Elm models
- ChatGLM3-6b + ChatGLM4-9b + GLMEdge-1.5b + GLMEdge-4b
- GLM-4-0414
- SmolLM
- EXAONE-3.0-7.8B-Instruct
- FalconMamba Models
- Jais
- Bielik-11B-v2.3
- RWKV-7
- RWKV-6
- QRWKV-6
- GigaChat-20B-A3B
- Trillion-7B-preview
- Ling models
- LFM2 models
- Hunyuan models
- BailingMoeV2 (Ring/Ling 2.0) models
Multimodal
Bindings
- Python: ddh0/easy-llama
- Python: abetlen/llama-cpp-python
- Go: go-skynet/go-llama.cpp
- Node.js: withcatai/node-llama-cpp
- JS/TS (llama.cpp server client): lgrammel/modelfusion
- JS/TS (Programmable Prompt Engine CLI): offline-ai/cli
- JavaScript/Wasm (works in browser): tangledgroup/llama-cpp-wasm
- Typescript/Wasm (nicer API, available on npm): ngxson/wllama
- Ruby: yoshoku/llama_cpp.rb
- Rust (more features): edgenai/llama_cpp-rs
- Rust (nicer API): mdrokz/rust-llama.cpp
- Rust (more direct bindings): utilityai/llama-cpp-rs
- Rust (automated build from crates.io): ShelbyJenkins/llm_client
- C#/.NET: SciSharp/LLamaSharp
- C#/VB.NET (more features - community license): LM-Kit.NET
- Scala 3: donderom/llm4s
- Clojure: phronmophobic/llama.clj
- React Native: mybigday/llama.rn
- Java: kherud/java-llama.cpp
- Java: QuasarByte/llama-cpp-jna
- Zig: deins/llama.cpp.zig
- Flutter/Dart: netdur/llama_cpp_dart
- Flutter: xuegao-tzx/Fllama
- PHP (API bindings and features built on top of llama.cpp): distantmagic/resonance (more info)
- Guile Scheme: guile_llama_cpp
- Swift srgtuszy/llama-cpp-swift
- Swift ShenghaiWang/SwiftLlama
- Delphi Embarcadero/llama-cpp-delphi
- Go (no CGo needed): hybridgroup/yzma
- Android: llama.android
UIs
(to have a project listed here, it should clearly state that it depends on llama.cpp)
- AI Sublime Text plugin (MIT)
- BonzAI App (proprietary)
- cztomsik/ava (MIT)
- Dot (GPL)
- eva (MIT)
- iohub/collama (Apache-2.0)
- janhq/jan (AGPL)
- johnbean393/Sidekick (MIT)
- KanTV (Apache-2.0)
- KodiBot (GPL)
- llama.vim (MIT)
- LARS (AGPL)
- Llama Assistant (GPL)
- LlamaLib (Apache-2.0)
- LLMFarm (MIT)
- LLMUnity (MIT)
- LMStudio (proprietary)
- LocalAI (MIT)
- LostRuins/koboldcpp (AGPL)
- MindMac (proprietary)
- MindWorkAI/AI-Studio (FSL-1.1-MIT)
- Mobile-Artificial-Intelligence/maid (MIT)
- Mozilla-Ocho/llamafile (Apache-2.0)
- nat/openplayground (MIT)
- nomic-ai/gpt4all (MIT)
- ollama/ollama (MIT)
- oobabooga/text-generation-webui (AGPL)
- PocketPal AI (MIT)
- psugihara/FreeChat (MIT)
- ptsochantaris/emeltal (MIT)
- pythops/tenere (AGPL)
- ramalama (MIT)
- semperai/amica (MIT)
- withcatai/catai (MIT)
- Autopen (GPL)
Tools
- akx/ggify – download PyTorch models from Hugging Face Hub and convert them to GGML
- akx/ollama-dl – download models from the Ollama library to be used directly with llama.cpp
- crashr/gppm – launch llama.cpp instances utilizing NVIDIA Tesla P40 or P100 GPUs with reduced idle power consumption
- gpustack/gguf-parser - review/check the GGUF file and estimate the memory usage
- Styled Lines (proprietary licensed, async wrapper of inference part for game development in Unity3d with pre-built Mobile and Web platform wrappers and a model example)
- unslothai/unsloth – 🦥 exports/saves fine-tuned and trained models to GGUF (Apache-2.0)
Infrastructure
- Paddler - Open-source LLMOps platform for hosting and scaling AI in your own infrastructure
- GPUStack - Manage GPU clusters for running LLMs
- llama_cpp_canister - llama.cpp as a smart contract on the Internet Computer, using WebAssembly
- llama-swap - transparent proxy that adds automatic model switching with llama-server
- Kalavai - Crowdsource end to end LLM deployment at any scale
- llmaz - ☸️ Easy, advanced inference platform for large language models on Kubernetes.
- LLMKube - Kubernetes operator for llama.cpp with multi-GPU and Apple Silicon Metal support
Games
- Lucy's Labyrinth - A simple maze game where agents controlled by an AI model will try to trick you.
Supported backends
| Backend | Target devices |
|---|---|
| Metal | Apple Silicon |
| BLAS | All |
| BLIS | All |
| SYCL | Intel and Nvidia GPU |
| OpenVINO [In Progress] | Intel CPUs, GPUs, and NPUs |
| MUSA | Moore Threads GPU |
| CUDA | Nvidia GPU |
| HIP | AMD GPU |
| ZenDNN | AMD CPU |
| Vulkan | GPU |
| CANN | Ascend NPU |
| OpenCL | Adreno GPU |
| IBM zDNN | IBM Z & LinuxONE |
| WebGPU [In Progress] | All |
| RPC | All |
| Hexagon [In Progress] | Snapdragon |
| VirtGPU | VirtGPU APIR |
Obtaining and quantizing models
The Hugging Face platform hosts a number of LLMs compatible with llama.cpp:
You can either manually download the GGUF file or directly use any llama.cpp-compatible models from Hugging Face or other model hosting sites, by using this CLI argument: -hf <user>/<model>[:quant]. For example:
llama-cli -hf ggml-org/gemma-3-1b-it-GGUF
By default, the CLI downloads from Hugging Face; you can switch to another source with the MODEL_ENDPOINT environment variable, which must point to a Hugging Face compatible API endpoint.
After downloading a model, use the CLI tools to run it locally - see below.
llama.cpp requires the model to be stored in the GGUF file format. Models in other data formats can be converted to GGUF using the convert_*.py Python scripts in this repo.
The Hugging Face platform provides a variety of online tools for converting, quantizing and hosting models with llama.cpp:
- Use the GGUF-my-repo space to convert to GGUF format and quantize model weights to smaller sizes
- Use the GGUF-my-LoRA space to convert LoRA adapters to GGUF format (more info: https://github.com/ggml-org/llama.cpp/discussions/10123)
- Use the GGUF-editor space to edit GGUF meta data in the browser (more info: https://github.com/ggml-org/llama.cpp/discussions/9268)
- Use the Inference Endpoints to directly host llama.cpp in the cloud (more info: https://github.com/ggml-org/llama.cpp/discussions/9669)
To learn more about model quantization, read this documentation
llama-cli
A CLI tool for accessing and experimenting with most of llama.cpp's functionality.
- Run in conversation mode
  Models with a built-in chat template will automatically activate conversation mode. If this doesn't occur, you can manually enable it by adding -cnv and specifying a suitable chat template with --chat-template NAME
  llama-cli -m model.gguf
  # > hi, who are you?
  # Hi there! I'm your helpful assistant! I'm an AI-powered chatbot designed to assist and provide information to users like you. I'm here to help answer your questions, provide guidance, and offer support on a wide range of topics. I'm a friendly and knowledgeable AI, and I'm always happy to help with anything you need. What's on your mind, and how can I assist you today?
  #
  # > what is 1+1?
  # Easy peasy! The answer to 1+1 is... 2!
- Run in conversation mode with custom chat template
  # use the "chatml" template (use -h to see the list of supported templates)
  llama-cli -m model.gguf -cnv --chat-template chatml
  # use a custom template
  llama-cli -m model.gguf -cnv --in-prefix 'User: ' --reverse-prompt 'User:'
- Constrain the output with a custom grammar
  llama-cli -m model.gguf -n 256 --grammar-file grammars/json.gbnf -p 'Request: schedule a call at 8pm; Command:'
  # {"appointmentTime": "8pm", "appointmentDetails": "schedule a a call"}
  The grammars/ folder contains a handful of sample grammars. To write your own, check out the GBNF Guide.
For authoring more complex JSON grammars, check out https://grammar.intrinsiclabs.ai/
llama-server
A lightweight, OpenAI API compatible, HTTP server for serving LLMs.
- Start a local HTTP server with default configuration on port 8080
  llama-server -m model.gguf --port 8080
  # Basic web UI can be accessed via browser: http://localhost:8080
  # Chat completion endpoint: http://localhost:8080/v1/chat/completions
- Support multiple users and parallel decoding
  # up to 4 concurrent requests, each with 4096 max context
  llama-server -m model.gguf -c 16384 -np 4
- Enable speculative decoding
  # the draft.gguf model should be a small variant of the target model.gguf
  llama-server -m model.gguf -md draft.gguf
- Serve an embedding model
  # use the /embedding endpoint
  llama-server -m model.gguf --embedding --pooling cls -ub 8192
- Serve a reranking model
  # use the /reranking endpoint
  llama-server -m model.gguf --reranking
- Constrain all outputs with a grammar
  # custom grammar
  llama-server -m model.gguf --grammar-file grammar.gbnf
  # JSON
  llama-server -m model.gguf --grammar-file grammars/json.gbnf
llama-perplexity
A tool for measuring the perplexity (and other quality metrics) of a model over a given text.
- Measure the perplexity over a text file
  llama-perplexity -m model.gguf -f file.txt
  # [1]15.2701,[2]5.4007,[3]5.3073,[4]6.2965,[5]5.8940,[6]5.6096,[7]5.7942,[8]4.9297, ...
  # Final estimate: PPL = 5.4007 +/- 0.67339
- Measure KL divergence
  # TODO
llama-bench
Benchmark the performance of the inference for various parameters.
- Run default benchmark
  llama-bench -m model.gguf
  # Output:
  # | model | size | params | backend | threads | test | t/s |
  # | ------------------- | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |
  # | qwen2 1.5B Q4_0 | 885.97 MiB | 1.54 B | Metal,BLAS | 16 | pp512 | 5765.41 ± 20.55 |
  # | qwen2 1.5B Q4_0 | 885.97 MiB | 1.54 B | Metal,BLAS | 16 | tg128 | 197.71 ± 0.81 |
  #
  # build: 3e0ba0e60 (4229)
llama-simple
A minimal example for implementing apps with llama.cpp. Useful for developers.
- Basic text completion
  llama-simple -m model.gguf
  # Hello my name is Kaitlyn and I am a 16 year old girl. I am a junior in high school and I am currently taking a class called "The Art of
Contributing
- Contributors can open PRs
- Collaborators will be invited based on contributions
- Maintainers can push to branches in the llama.cpp repo and merge PRs into the master branch
- Any help with managing issues, PRs and projects is very appreciated!
- See good first issues for tasks suitable for first contributions
- Read the CONTRIBUTING.md for more information
- Make sure to read this: Inference at the edge
- A bit of backstory for those who are interested: Changelog podcast
Other documentation
Development documentation
- How to build
- Running on Docker
- Build on Android
- Multi-GPU usage
- Performance troubleshooting
- GGML tips & tricks
Seminal papers and background on the models
If your issue is with model generation quality, then please at least scan the following links and papers to understand the limitations of LLaMA models. This is especially important when choosing an appropriate model size and appreciating both the significant and subtle differences between LLaMA models and ChatGPT:
- LLaMA:
- GPT-3
- GPT-3.5 / InstructGPT / ChatGPT:
XCFramework
The XCFramework is a precompiled version of the library for iOS, visionOS, tvOS, and macOS. It can be used in Swift projects without the need to compile the library from source. For example:
// swift-tools-version: 5.10
// The swift-tools-version declares the minimum version of Swift required to build this package.
import PackageDescription
let package = Package(
name: "MyLlamaPackage",
targets: [
.executableTarget(
name: "MyLlamaPackage",
dependencies: [
"LlamaFramework"
]),
.binaryTarget(
name: "LlamaFramework",
url: "https://github.com/ggml-org/llama.cpp/releases/download/b5046/llama-b5046-xcframework.zip",
checksum: "c19be78b5f00d8d29a25da41042cb7afa094cbf6280a225abe614b03b20029ab"
)
]
)
The above example uses an intermediate build b5046 of the library; it can be
modified to use a different version by changing the URL and checksum.
Completions
Command-line completion is available for some environments.
Bash Completion
$ build/bin/llama-cli --completion-bash > ~/.llama-completion.bash
$ source ~/.llama-completion.bash
Optionally this can be added to your .bashrc or .bash_profile to load it
automatically. For example:
$ echo "source ~/.llama-completion.bash" >> ~/.bashrc
Dependencies
- yhirose/cpp-httplib - Single-header HTTP server, used by llama-server - MIT license
- stb-image - Single-header image format decoder, used by multimodal subsystem - Public domain
- nlohmann/json - Single-header JSON library, used by various tools/examples - MIT License
- miniaudio.h - Single-header audio format decoder, used by multimodal subsystem - Public domain
- subprocess.h - Single-header process launching solution for C and C++ - Public domain