EngineX-Ascend / enginex-ascend-910-llama.cpp
Files in enginex-ascend-910-llama.cpp / common at commit 97af49fa395df77e4c18af0e1655b2fee67c9686
Latest commit: 97af49fa39 by Jhen-Jie Hong, 2023-10-06 15:44:24 +03:00
server : reuse llama_sample_token common util (#3494)
  * server : reuse llama_sample_token common function
  * common : use n_probs for temperature sampling
CMakeLists.txt      train : finetune LORA (#2632)                                                 2023-09-28 21:40:11 +03:00
common.cpp          server : reuse llama_sample_token common util (#3494)                         2023-10-06 15:44:24 +03:00
common.h            infill : add new example + extend server API (#3296)                          2023-10-02 10:42:02 +03:00
console.cpp         check C++ code with -Wmissing-declarations (#3184)                            2023-09-15 15:38:27 -04:00
console.h           gguf : new file format with flexible meta data (beta) (#2398)                 2023-08-21 23:07:43 +03:00
grammar-parser.cpp  check C++ code with -Wmissing-declarations (#3184)                            2023-09-15 15:38:27 -04:00
grammar-parser.h    gguf : new file format with flexible meta data (beta) (#2398)                 2023-08-21 23:07:43 +03:00
log.h               build : enable more non-default compiler warnings (#3200)                     2023-09-28 17:41:44 -04:00
train.cpp           llama.cpp : split llama_context_params into model and context params (#3301)  2023-09-28 22:42:38 +03:00
train.h             train : finetune LORA (#2632)                                                 2023-09-28 21:40:11 +03:00