sdgoij/llama.cpp
mirror of https://github.com/ggml-org/llama.cpp.git synced 2026-05-15 05:24:06 +00:00
Path: llama.cpp/common
Commit: 5261aee8d86e71b116cc8f1031687e27a50c2b5f
Author: Georgi Gerganov
Message: sampling : one sequence per sampling context (ggml-ci)
Date: 2023-10-12 20:36:44 +03:00
File                 | Last commit                                                                  | Date
---------------------|------------------------------------------------------------------------------|---------------------------
CMakeLists.txt       | common : fix mirostat state when using multiple sequences (#3543)            | 2023-10-11 22:35:46 +03:00
common.cpp           | examples: support LLaVA v1.5 (multimodal model) (#3436)                      | 2023-10-12 18:23:18 +03:00
common.h             | examples: support LLaVA v1.5 (multimodal model) (#3436)                      | 2023-10-12 18:23:18 +03:00
console.cpp          | check C++ code with -Wmissing-declarations (#3184)                           | 2023-09-15 15:38:27 -04:00
console.h            | gguf : new file format with flexible meta data (beta) (#2398)                | 2023-08-21 23:07:43 +03:00
grammar-parser.cpp   | check C++ code with -Wmissing-declarations (#3184)                           | 2023-09-15 15:38:27 -04:00
grammar-parser.h     | gguf : new file format with flexible meta data (beta) (#2398)                | 2023-08-21 23:07:43 +03:00
log.h                | build : enable more non-default compiler warnings (#3200)                    | 2023-09-28 17:41:44 -04:00
sampling.cpp         | sampling : one sequence per sampling context                                 | 2023-10-12 20:36:44 +03:00
sampling.h           | sampling : one sequence per sampling context                                 | 2023-10-12 20:36:44 +03:00
stb_image.h          | examples: support LLaVA v1.5 (multimodal model) (#3436)                      | 2023-10-12 18:23:18 +03:00
train.cpp            | llama.cpp : split llama_context_params into model and context params (#3301) | 2023-09-28 22:42:38 +03:00
train.h              | train : finetune LORA (#2632)                                                | 2023-09-28 21:40:11 +03:00