Georgi Gerganov
bcc0eb4591
llama : per-layer KV cache + quantum K cache (#4309)
* per-layer KV
* remove unnecessary copies
* less code duplication, offload k and v separately
* llama : offload KV cache per-layer
* llama : offload K shift tensors
* llama : offload for rest of the model arches
* llama : enable offload debug temporarily
* llama : keep the KV related layers on the device
* llama : remove mirrors, perform Device -> Host when partial offload
* common : add command-line arg to disable KV cache offloading
* llama : update session save/load
* llama : support quantum K cache (#4312)
* llama : support quantum K cache (wip)
* metal : add F32 -> Q8_0 copy kernel
* cuda : add F32 -> Q8_0 copy kernel
ggml-ci
* cuda : use mmv kernel for quantum cache ops
* llama : pass KV cache type through API
* llama : fix build
ggml-ci
* metal : add F32 -> Q4_0 copy kernel
* metal : add F32 -> Q4_1 copy kernel
* cuda : wip
* cuda : add F32 -> Q4_0 and F32 -> Q4_1 copy kernels
* llama-bench : support type_k/type_v
* metal : use mm kernel only for quantum KV cache
* cuda : add comment
* llama : remove memory_f16 and kv_f16 flags
---------
Co-authored-by: slaren <slarengh@gmail.com>
* readme : add API change notice
2023-12-07 13:03:17 +02:00