Initialize repository; model provided by the ModelHub XC community

Model: prithivMLmods/Piaget-4B-GGUF
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-02 17:02:29 +08:00
commit 12a3d4a8c0
17 changed files with 135 additions and 0 deletions

Piaget-4B.BF16.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c26e7228cf6e02490932b9ebac57a1fce408c389c116751eb94fda67f84a0a72
size 8051284704

Piaget-4B.F16.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f3382c86d8dd27abccf71b44f0d2129081d7a128f5ba458438a4b20545f242cd
size 8051284704

Piaget-4B.F32.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b3745172852d6c5df496c959d1b5e6c58f8a4ebee181393562946778bbd7291b
size 16095828704

Piaget-4B.Q2_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2c72639a0bceda8bed28f95bc44502d25053eae971d2843f722066b02e71d130
size 1669499104

Piaget-4B.Q3_K_L.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f164a8df16ea3980518b9d14d5564fedb836e3bc6f0d145b6a7c3a2c0071ecd2
size 2239785184

Piaget-4B.Q3_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0eaa70335cebecf8c9c84b8bee224d2e008ea13a66c46be0d4b53a19a79c298d
size 2075617504

Piaget-4B.Q3_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8cfdc09f8a3c9d9aafd28dbfca13a860b74a0be932802f974fa7e6acea22bbfe
size 1886996704

Piaget-4B.Q4_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a9f91f3846e36022999059d8661b77a6da4e9f43b0e44e9ea21d30ac8e7d8498
size 2497280224

Piaget-4B.Q4_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8b164c15c18ae5a3018ee9b98701fc766a616a8446a55d8fe71753a11adb1326
size 2383309024

Piaget-4B.Q5_K_M.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d1b5082b402f560491fdb3585ef6a2799e471616679bab5604352e3121b40122
size 2889513184

Piaget-4B.Q5_K_S.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d557e032a70c001ec384c90f908c73eeb6cfbaf927626873e2fffc5f5dfbc649
size 2823710944

Piaget-4B.Q6_K.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a8306ec008bd5924da43b22d80cb137f99f613153941b742ecf79fecd059b006
size 3306260704

Piaget-4B.Q8_0.gguf Normal file

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d5b6b5889e9cc084ff50385659cde18a337958bb4cc7cbb486a150e6ea4c226d
size 4280404704

.gitattributes vendored Normal file

@@ -0,0 +1,47 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

README.md Normal file

@@ -0,0 +1,45 @@
---
license: apache-2.0
base_model:
- gustavecortal/Piaget-4B
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---
# **Piaget-4B-GGUF**
> Piaget is a language model finetuned on 15k psychological and philosophical reasoning traces. Piaget is based on Qwen3 and was finetuned on a subset of open reasoning traces from Dolphin R1 and General Reasoning. Domain filtering was performed on Dolphin R1 and General Reasoning: prompts were embedded, clustered with k-means (k=20,000), and majority-voted for domain labels using Qwen3-1.7B, following the Intelligent Internet pipeline. Clusters tagged psychology or philosophy were retained for LoRA finetuning (rank=8, alpha=16, max length=2048, epochs=1, batch size=16). Piaget aims to reason about psychological and philosophical concepts such as self-image, emotion, and existence. Piaget was inspired by my position paper on emotion analysis: Improving Language Models for Emotion Analysis: Insights from Cognitive Science.
## Model files
| File | Size | Format |
|------|------|--------|
| Piaget-4B.BF16.gguf | 8.05 GB | BF16 |
| Piaget-4B.F16.gguf | 8.05 GB | F16 |
| Piaget-4B.F32.gguf | 16.1 GB | F32 |
| Piaget-4B.Q2_K.gguf | 1.67 GB | Q2_K |
| Piaget-4B.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| Piaget-4B.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| Piaget-4B.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| Piaget-4B.Q4_K_M.gguf | 2.50 GB | Q4_K_M |
| Piaget-4B.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| Piaget-4B.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| Piaget-4B.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| Piaget-4B.Q6_K.gguf | 3.31 GB | Q6_K |
| Piaget-4B.Q8_0.gguf | 4.28 GB | Q8_0 |
| .gitattributes | 2.4 kB | - |
| README.md | 65 Bytes | - |
| config.json | 29 Bytes | - |
## Quants Usage
The list above is sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants.
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
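The quant files in this commit are stored as git-lfs pointers, each recording only a `version` line, a `sha256` object id, and a byte `size`. After downloading a quant, the file can be checked against that pointer metadata. A minimal sketch (pointer parsing per the git-lfs pointer spec; file paths here are illustrative, not part of this repo):

```python
import hashlib

def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file (version/oid/size lines) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # The oid is stored as "sha256:<hex digest>".
    algo, _, digest = fields["oid"].partition(":")
    return {"algo": algo, "digest": digest, "size": int(fields["size"])}

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-GB GGUFs need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

A downloaded file matches when `sha256_of_file("Piaget-4B.Q4_K_M.gguf")` equals the pointer's `digest` and its on-disk size equals `size`.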

config.json Normal file

@@ -0,0 +1,3 @@
{
"model_type": "qwen3"
}

configuration.json Normal file

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}