Initialize the project; model provided by the ModelHub XC community
Model: YoussefElsafi/PlayerAI-1.2B-GGUF Source: Original Platform
49 .gitattributes vendored Normal file
@@ -0,0 +1,49 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-F16.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-BF16.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
PlayerAI-1.2B-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
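All of the patterns above route matching files through Git LFS instead of storing them directly in the repository. As a rough illustration of what they cover, here is a minimal Python sketch; note that `fnmatch` glob semantics only approximate the full gitattributes matching rules, and `is_lfs_tracked` is a hypothetical helper, not part of any Git tooling:

```python
from fnmatch import fnmatch

# A subset of the patterns tracked in the .gitattributes above.
LFS_PATTERNS = ["*.7z", "*.bin", "*.safetensors", "PlayerAI-1.2B-Q4_K_M.gguf"]

def is_lfs_tracked(path: str) -> bool:
    """Return True if the file name matches any LFS-tracked pattern."""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("PlayerAI-1.2B-Q4_K_M.gguf"))  # True
print(is_lfs_tracked("README.md"))                  # False
```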
3 PlayerAI-1.2B-BF16.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a26179d816c5c33eb3c5aa592297593a2d7edef373d24f21f323295b4c35dc24
size 2343326080

3 PlayerAI-1.2B-F16.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:31105b789df17bfb396caad77f0631a8baf4bb2c94e6595f680e39c901bfda08
size 2343326080

3 PlayerAI-1.2B-IQ4_NL.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a407f1209ca263fdd480dd78948f53138f9d67c09f04a482634fa2f4076e64e7
size 699945344

3 PlayerAI-1.2B-IQ4_XS.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0f91091f979d5a93acb8b454d0115b78f7b43c89fb1d34d51a26612972b8aed4
size 668619136

3 PlayerAI-1.2B-Q2_K.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59b140a6cc9cc98050152af675c40636a834b665c10f76e3dd62047b2f7b2f42
size 483398016

3 PlayerAI-1.2B-Q3_K_L.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b2e8a303c1c903f47bd0330eff8c746b70a1f6c561b87a133ea3038e1e640433
size 635474304

3 PlayerAI-1.2B-Q3_K_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4e10312f85e7a160d5abdc21d152c71a2c56ff90d25aadfe371a5a7a5e6a2b38
size 600347008

3 PlayerAI-1.2B-Q3_K_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6538fa9dd99bb9c75717336cd8455905286dd0ad3b51e89d045624afb0c7dd2e
size 558158208

3 PlayerAI-1.2B-Q4_K_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:34cf1262a600a621af377f2dbf7c1aca27324a9135dfb45ac6083070f768cce1
size 730894720

3 PlayerAI-1.2B-Q4_K_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ab8e88f58920e9addbd1e68e46a034d934c823d999313e8b166ae3bbc6e4b1c
size 700469632

3 PlayerAI-1.2B-Q5_K_M.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:801e1b1cb5bcd48c2d9d67050ea9fcebdcb3153591767511a34f6d185e520deb
size 843354496

3 PlayerAI-1.2B-Q5_K_S.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:026bf53a2d3b96ef0edf10e5ef4206816161401b37866471ca3c4866d8fb2b7f
size 825250176

3 PlayerAI-1.2B-Q6_K.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f89ae65550e98a07a7668e721990e0c22df733f98faff46b82329811892db11
size 962843008

3 PlayerAI-1.2B-Q8_0.gguf Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f365d6d4a70db908cb4b571a74b9a5d715be00c2eb514e8ce7af67052c105cd
size 1246253440
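Each of the `.gguf` entries above is a Git LFS pointer file, not the model weights themselves: three `key value` lines giving the spec version, the SHA-256 of the real object, and its size in bytes. A minimal sketch of reading one (the `parse_lfs_pointer` helper name is my own, not part of the LFS tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (one 'key value' pair per line) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:a26179d816c5c33eb3c5aa592297593a2d7edef373d24f21f323295b4c35dc24
size 2343326080
"""
info = parse_lfs_pointer(pointer)
print(info["oid"])        # the hash of the actual BF16 weights
print(int(info["size"]))  # 2343326080 bytes, about 2.34 GB
```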
207 README.md Normal file
@@ -0,0 +1,207 @@
---
license: apache-2.0
language:
- en
base_model:
- LiquidAI/LFM2.5-1.2B-Instruct
- YoussefElsafi/PlayerAI-1.2B
tags:
- gguf
- quantized
- conversational
- llama-cpp
pipeline_tag: text-generation
---

**PlayerAI-1.2B-GGUF** contains GGUF quantized versions of [YoussefElsafi/PlayerAI-1.2B](https://huggingface.co/YoussefElsafi/PlayerAI-1.2B), a fine-tuned conversational language model designed for immersive, human-like interaction in multiplayer social environments.

---

## Available Quantizations

| File | Quant | Size | Quality | Recommended For |
|------|-------|------|---------|-----------------|
| `PlayerAI-1.2B-Q2_K.gguf` | Q2_K | 483 MB | Lowest | Very limited RAM |
| `PlayerAI-1.2B-Q3_K_S.gguf` | Q3_K_S | 558 MB | Very Low | Minimal RAM |
| `PlayerAI-1.2B-Q3_K_M.gguf` | Q3_K_M | 600 MB | Low | Low RAM |
| `PlayerAI-1.2B-Q3_K_L.gguf` | Q3_K_L | 635 MB | Low-Med | Low RAM |
| `PlayerAI-1.2B-IQ4_XS.gguf` | IQ4_XS | 669 MB | Medium | Better than Q4 at same size |
| `PlayerAI-1.2B-IQ4_NL.gguf` | IQ4_NL | 700 MB | Medium | Better than Q4 at same size |
| `PlayerAI-1.2B-Q4_K_S.gguf` | Q4_K_S | 700 MB | Medium | Balanced |
| `PlayerAI-1.2B-Q4_K_M.gguf` | Q4_K_M | 731 MB | Medium | ⭐ Recommended |
| `PlayerAI-1.2B-Q5_K_S.gguf` | Q5_K_S | 825 MB | Good | High quality |
| `PlayerAI-1.2B-Q5_K_M.gguf` | Q5_K_M | 843 MB | Good | High quality |
| `PlayerAI-1.2B-Q6_K.gguf` | Q6_K | 963 MB | High | Near lossless |
| `PlayerAI-1.2B-Q8_0.gguf` | Q8_0 | 1.25 GB | Very High | Best quality |
| `PlayerAI-1.2B-BF16.gguf` | BF16 | 2.34 GB | Native precision | Reference |
| `PlayerAI-1.2B-F16.gguf` | F16 | 2.34 GB | Full | Reference / conversion |
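The file sizes in the table map almost directly to bits per weight for a model of this scale. A quick sanity check, assuming an approximate parameter count of 1.2B (the exact count is not stated in this repository):

```python
PARAMS = 1.2e9  # approximate parameter count (assumption)

# On-disk sizes in bytes, taken from the LFS pointers in this commit.
sizes = {
    "Q2_K": 483_398_016,
    "Q4_K_M": 730_894_720,
    "Q8_0": 1_246_253_440,
    "F16": 2_343_326_080,
}

for quant, size in sizes.items():
    bpw = size * 8 / PARAMS  # effective bits per weight, incl. metadata overhead
    print(f"{quant}: ~{bpw:.1f} bits/weight")
```

The ratios come out close to each format's nominal bit width (roughly 3.2 for Q2_K, 4.9 for Q4_K_M, 8.3 for Q8_0, 15.6 for F16), which is a useful way to cross-check a download against the table.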

---

## Which One Should I Pick?

Since this is only a **1.2B model**, every quantization is very lightweight. Even the highest-quality Q8_0 is only **1.25 GB**.

```
Any device with 1GB+ RAM → Q4_K_M ⭐ (only 731 MB)
Want best quality?       → Q8_0 (only 1.25 GB)
Absolute minimum size?   → Q2_K (only 483 MB)
Running on anything?     → Q3_K_M (only 600 MB)
No limits at all?        → BF16 or F16
```

> **Bottom line:** For a 1.2B model, even a basic laptop or phone can run Q4_K_M or higher with no issues.

---

## How to Use

### With llama.cpp CLI

```bash
# Download (example: Q4_K_M)
hf download YoussefElsafi/PlayerAI-1.2B-GGUF \
  PlayerAI-1.2B-Q4_K_M.gguf \
  --local-dir ./PlayerAI-GGUF

# Run
./llama.cpp/build/bin/llama-cli \
  -m ./PlayerAI-GGUF/PlayerAI-1.2B-Q4_K_M.gguf \
  -p "User: hi\nAI:" \
  -n 100 \
  --temp 0.8 \
  --top-p 0.9
```

### With llama-cpp-python

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="YoussefElsafi/PlayerAI-1.2B-GGUF",
    filename="PlayerAI-1.2B-Q4_K_M.gguf",
    n_ctx=512,
    verbose=False,
)

SYSTEM_PROMPT = (
    "You are a human player in a game called Player or AI. "
    "Act like a real human teenager chatting casually. "
    "Use slang, typos, short responses. Never admit you are an AI."
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "hi wsp"},
    ],
    max_tokens=80,
    temperature=0.8,
    top_p=0.9,
)

print(response["choices"][0]["message"]["content"])
```

---

## Model Overview

- **Base Model:** LiquidAI/LFM2.5-1.2B-Instruct
- **Full-Precision Model:** [YoussefElsafi/PlayerAI-1.2B](https://huggingface.co/YoussefElsafi/PlayerAI-1.2B)
- **Parameters:** ~1.2B
- **Architecture:** Decoder-only Transformer
- **Training Type:** Supervised fine-tuning (full model)
- **Context Style:** Multi-turn conversational sequences
- **Primary Objective:** Social realism in dialogue generation

---

## Intended Use

This model is intended for research and experimental use cases involving:

- Multiplayer conversational agents
- Social simulation environments
- NPC dialogue systems
- Human-like chat behavior modeling
- Interactive roleplay systems

It is not intended for:

- Factual question answering
- Structured instruction following
- Safety-critical systems
- Deterministic reasoning tasks

---

## Example Interactions

**Note:** All the white-colored messages are fully generated by **PlayerAI-1.2B**.

### Example 1 — Single Turn


### Example 2 — Short Conversation


### Example 3 — Extended Context Chain


### Example 4 — Nonsense Interaction


### Example 5 — Accusation and Denial


---

## Behavior Characteristics

The model exhibits:

- Informal conversational tone
- Short, adaptive responses
- Occasional ambiguity or inconsistency
- Strong dependence on recent dialogue context
- Variability in emotional and linguistic style

These properties are intentional and aligned with the social-simulation objective.

---

## Limitations

- Not suitable for factual reasoning tasks
- May produce inconsistent outputs in long contexts
- Limited stability in structured instruction formats
- Not optimized for deterministic responses
- Can exhibit unpredictable conversational drift

---

## Ethical Considerations

This model is intended for research and simulation purposes. Developers should be aware that:

- Outputs may appear human-like in social contexts
- Behavior is optimized for realism, not correctness
- Conversational ambiguity is an intentional feature

Appropriate safeguards should be applied depending on the deployment context.

---

## Attribution

If you use PlayerAI in a project, attribution is appreciated but not required:

"**Powered by PlayerAI**"

---

## License

This project is licensed under the Apache 2.0 License.