* optimize flash attention kernel by improving score computation and online softmax update
* wip
* Refactor online softmax update in flash attention kernel for improved performance
* Optimize flash attention kernel by replacing float array with HVX_Vector for score computation
* wip
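
For reference, the online softmax update these commits optimize follows the flash-attention pattern: keep a running maximum, a running denominator, and a rescaled accumulator, so scores can be folded in one at a time without materializing the full row. The sketch below is a minimal scalar C version of that update; the actual kernel vectorizes it with HVX_Vector on Hexagon, and the names here (`OnlineSoftmaxState`, `online_softmax_step`) are illustrative, not taken from llama.cpp.

```c
#include <math.h>

/* Scalar sketch of the online softmax update; the real kernel processes
 * lanes of scores in parallel with HVX_Vector instead of single floats. */
typedef struct {
    float m;   /* running maximum of the scores seen so far            */
    float l;   /* running softmax denominator, rescaled to max m       */
    float acc; /* running weighted sum of values, rescaled to max m    */
} OnlineSoftmaxState;

/* Initialize with m = -INFINITY, l = 0, acc = 0 before the first step. */
static void online_softmax_step(OnlineSoftmaxState *st, float score, float value) {
    float m_new = st->m > score ? st->m : score;  /* updated running max     */
    float scale = expf(st->m - m_new);            /* rescale previous terms  */
    float p     = expf(score - m_new);            /* weight of the new score */
    st->l   = st->l * scale + p;
    st->acc = st->acc * scale + p * value;
    st->m   = m_new;
}

/* After all scores are consumed, the attention output is st->acc / st->l. */
```

Folding the `expf(st->m - m_new)` rescale into both `l` and `acc` in the same step is what makes the update single-pass, and it is this per-element loop that the commits replace with an HVX_Vector-based version.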