Initialize project; model provided by the ModelHub XC community
Model: matrixportalx/layerskip-llama3.2-1B-GGUF Source: Original Platform
---
license: llama3.2
datasets:
- teknium/OpenHermes-2.5
- NousResearch/hermes-function-calling-v1
base_model:
- minpeter/QLoRA-Llama-3.2-1B-chatml-tool-v4
- meta-llama/Llama-3.2-1B
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- axolotl
- merge
---

# layerskip-llama3.2-1B GGUF Quantized Models

## Technical Details

- **Quantization Tool:** llama.cpp
- **Version:** 5092 (d3bd7193)

## Model Information

- **Base Model:** [facebook/layerskip-llama3.2-1B](https://huggingface.co/facebook/layerskip-llama3.2-1B)
- **Quantized by:** [matrixportal](https://huggingface.co/matrixportal)

## Available Files

- [`layerskip-llama3.2-1b.q2_k.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q2_k.gguf) (553.96MB)
- [`layerskip-llama3.2-1b.q3_k_s.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q3_k_s.gguf) (611.96MB)
- [`layerskip-llama3.2-1b.q3_k_m.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q3_k_m.gguf) (658.84MB)
- [`layerskip-llama3.2-1b.q3_k_l.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q3_k_l.gguf) (698.59MB)
- [`layerskip-llama3.2-1b.q4_0.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q4_0.gguf) (735.21MB)
- [`layerskip-llama3.2-1b.q4_k_s.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q4_k_s.gguf) (739.71MB)
- [`layerskip-llama3.2-1b.q4_k_m.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q4_k_m.gguf) (770.27MB)
- [`layerskip-llama3.2-1b.q5_0.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q5_0.gguf) (851.21MB)
- [`layerskip-llama3.2-1b.q5_k_s.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q5_k_s.gguf) (851.21MB)
- [`layerskip-llama3.2-1b.q5_k_m.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q5_k_m.gguf) (869.27MB)
- [`layerskip-llama3.2-1b.q6_k.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q6_k.gguf) (974.46MB)
- [`layerskip-llama3.2-1b.q8_0.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.q8_0.gguf) (1259.88MB)
- [`layerskip-llama3.2-1b.f16.gguf`](https://huggingface.co/matrixportal/layerskip-llama3.2-1B-GGUF/resolve/main/layerskip-llama3.2-1b.f16.gguf) (2364.72MB)

💡 **Q4_K_M** offers the best balance of size and quality for most use cases.
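As a quick sanity check on the size/quality trade-off, the file sizes listed above imply rough compression ratios relative to the unquantized f16 baseline. A minimal sketch (the sizes are copied from the list above; the selection of quant levels is illustrative):

```python
# Compression ratios implied by the listed GGUF file sizes,
# relative to the f16 baseline (2364.72 MB).
sizes_mb = {
    "q2_k": 553.96,
    "q4_k_m": 770.27,
    "q8_0": 1259.88,
    "f16": 2364.72,
}

baseline = sizes_mb["f16"]
ratios = {name: round(baseline / mb, 2) for name, mb in sizes_mb.items()}

for name, ratio in ratios.items():
    print(f"{name}: {ratio}x smaller than f16")
```

At roughly a 3x reduction, Q4_K_M keeps most of the model's quality while fitting comfortably in memory on modest hardware, which is why it is the usual default recommendation.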