EngineX-Ascend/enginex-ascend-910-llama.cpp
177461104b454163473dced2a5038f4e016cdb7e
enginex-ascend-910-llama.cpp/common
Latest commit: 177461104b "common : print that one line of the syntax help *also* to standard output (#3823)" by Henk Poley, 2023-10-28 13:16:33 +03:00
| File | Last commit | Date |
| --- | --- | --- |
| CMakeLists.txt | common : fix mirostat state when using multiple sequences (#3543) | 2023-10-11 22:35:46 +03:00 |
| common.cpp | common : print that one line of the syntax help *also* to standard output (#3823) | 2023-10-28 13:16:33 +03:00 |
| common.h | sampling : refactor init to use llama_sampling_params (#3696) | 2023-10-20 21:07:23 +03:00 |
| console.cpp | check C++ code with -Wmissing-declarations (#3184) | 2023-09-15 15:38:27 -04:00 |
| console.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| grammar-parser.cpp | ggml : fix rope + llama minor optimizations (#3560) | 2023-10-20 13:02:12 +03:00 |
| grammar-parser.h | gguf : new file format with flexible meta data (beta) (#2398) | 2023-08-21 23:07:43 +03:00 |
| log.h | log : disable pid in log filenames | 2023-10-25 10:09:16 +03:00 |
| sampling.cpp | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00 |
| sampling.h | sampling : refactor init to use llama_sampling_params (#3696) | 2023-10-20 21:07:23 +03:00 |
| stb_image.h | examples: support LLaVA v1.5 (multimodal model) (#3436) | 2023-10-12 18:23:18 +03:00 |
| train.cpp | llama : remove token functions with context args in favor of model (#3720) | 2023-10-23 22:40:03 +03:00 |
| train.h | train : finetune LORA (#2632) | 2023-09-28 21:40:11 +03:00 |