* llama : minor indentation fix during tensor loading ggml-ci
* llama : use int for layer iterators [no ci]