* llama: use max. GPU layers by default, auto -fa
* ggml-backend: abort instead of segfault
* llama_vocab
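
The first entry changes the default loading behavior: instead of staying on the CPU unless GPU offload is requested, the maximum number of layers is offloaded by default, and flash attention (`-fa`) is chosen automatically. Below is a minimal sketch of what this means for API callers, assuming the public `llama.h` names `llama_model_default_params`, `n_gpu_layers`, and `llama_model_load_from_file`; the model path is a placeholder and the exact default value is not taken from the commit:

```c
#include "llama.h"

int main(void) {
    llama_backend_init();

    // Default parameters; per the entry above, n_gpu_layers now defaults
    // to the maximum, so layers are offloaded without any explicit flag.
    struct llama_model_params mp = llama_model_default_params();

    // Callers that want the old CPU-only behavior now have to opt out:
    // mp.n_gpu_layers = 0;

    struct llama_model * model = llama_model_load_from_file("model.gguf", mp);
    if (model == NULL) {
        llama_backend_free();
        return 1;
    }

    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```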
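
The ggml-backend entry is a hardening change: an invalid input that previously led to a NULL dereference (a segfault with no message) now aborts with a diagnostic. A generic sketch of the pattern follows, using a hypothetical function name rather than the actual ggml-backend code:

```c
#include <stdio.h>
#include <stdlib.h>

// Hypothetical backend entry point, for illustration only.
static void backend_set_tensor(void * buffer, const void * data, size_t size) {
    if (buffer == NULL || data == NULL) {
        // Before: the NULL pointer was dereferenced below, crashing with an
        // uninformative segfault. After: fail loudly at the point of error.
        fprintf(stderr, "%s: NULL buffer or data\n", __func__);
        abort();
    }
    // ... copy `size` bytes into the buffer ...
    (void) size;
}
```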