Initialize project; model provided by the ModelHub XC community

Model: matrixportalx/gemma-2-2b-it-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-04-19 03:00:34 +08:00
commit 78302aeaaf
8 changed files with 118 additions and 0 deletions

41
.gitattributes vendored Normal file

@@ -0,0 +1,41 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
gemma-2-2b-it-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
gemma-2-2b-it-f16.gguf filter=lfs diff=lfs merge=lfs -text
gemma-2-2b-it-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
gemma-2-2b-it-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
gemma-2-2b-it-q5_k_s.gguf filter=lfs diff=lfs merge=lfs -text
gemma-2-2b-it-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
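Every rule in this file routes matching paths through Git LFS: `filter=lfs` stores a small pointer file in Git and the real bytes in LFS storage, `diff=lfs`/`merge=lfs` keep those tools from operating on the binary content, and `-text` disables newline conversion. Each rule follows the same one-line format, shown here with a hypothetical catch-all `*.gguf` pattern (this repo instead lists each GGUF file explicitly):

```
# <pattern> <attributes> — one rule per line in .gitattributes
*.gguf filter=lfs diff=lfs merge=lfs -text
```

In practice such a line is usually appended by running `git lfs track "*.gguf"` rather than edited by hand.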

59
README.md Normal file

@@ -0,0 +1,59 @@
---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you're required to review and
agree to Google's usage license. To do this, please ensure you're logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
tags:
- conversational
- llama-cpp
- gguf-my-repo
base_model: google/gemma-2-2b-it
---
# matrixportal/gemma-2-2b-it-GGUF
This model was converted to GGUF format from [`google/gemma-2-2b-it`](https://huggingface.co/google/gemma-2-2b-it) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/google/gemma-2-2b-it) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportal/gemma-2-2b-it-GGUF --hf-file gemma-2-2b-it-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportal/gemma-2-2b-it-GGUF --hf-file gemma-2-2b-it-q5_k_m.gguf -c 2048
```
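Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API (port 8080 by default). A minimal sketch of a chat request using only the standard library; the host, port, and `max_tokens` value here are assumptions, so adjust them to your setup:

```python
import json
from urllib import request

# Build an OpenAI-style chat completion request for a local llama-server.
payload = {
    "messages": [{"role": "user", "content": "What is GGUF?"}],
    "max_tokens": 64,
}
body = json.dumps(payload).encode("utf-8")
req = request.Request(
    "http://127.0.0.1:8080/v1/chat/completions",  # llama-server's default port
    data=body,
    headers={"Content-Type": "application/json"},
)
# With a server running, the reply follows the OpenAI response shape:
# reply = json.loads(request.urlopen(req).read())
# print(reply["choices"][0]["message"]["content"])
```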
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo matrixportal/gemma-2-2b-it-GGUF --hf-file gemma-2-2b-it-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo matrixportal/gemma-2-2b-it-GGUF --hf-file gemma-2-2b-it-q5_k_m.gguf -c 2048
```

3
gemma-2-2b-it-f16.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f3020c905bffdc3e48d3e403177ba0546d72b5e5e93322d0eb8150e166d92e5a
size 5235214176
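Pointer files like the one above are plain text, which makes it easy to verify a download against the recorded `oid`. A minimal Python sketch (not part of this repo) that parses a Git LFS v1 pointer and hashes a local file for comparison:

```python
import hashlib

def parse_lfs_pointer(text):
    """Parse a Git LFS v1 pointer file into its version, oid, and size fields."""
    fields = dict(line.partition(" ")[::2] for line in text.strip().splitlines())
    algo, _, digest = fields["oid"].partition(":")
    return {"version": fields["version"], "algo": algo, "digest": digest,
            "size": int(fields["size"])}

def file_matches_pointer(path, info):
    """Hash a local file in chunks and compare against the pointer's digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == info["digest"]

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:f3020c905bffdc3e48d3e403177ba0546d72b5e5e93322d0eb8150e166d92e5a\n"
    "size 5235214176\n"
)
info = parse_lfs_pointer(pointer)
```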


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a9de40ca88a0f9a0c229d8ee97f9db1a4f533ef5d9aa955cb6cfba814fb96b29
size 1708582752


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1b514d24747a725660e84ed7778dc0a374fd1155659cd8e1064e57882d57d786
size 1923278688


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e50f490839ae38697165464e6b459966ac70d66c62160cb5778ac5c70357e58b
size 1882543968

3
gemma-2-2b-it-q6_k.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a32524a1eec1931ebafb5ed24f19dcc2561b1c86d3fa02a578fca4bd34def1c
size 2151393120

3
gemma-2-2b-it-q8_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d69e1c933f8297fc3e5d915ac964cb37f9b19ffa42392c81bdfaa37d770d6ee
size 2784495456