Initialize project; model provided by the ModelHub XC community

Model: ryancook/chromadb-context-1-gguf
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-06 07:58:38 +08:00
commit 5b8a25a4d7
30 changed files with 259 additions and 0 deletions

.gitattributes · vendored · Normal file · 63 lines

@@ -0,0 +1,63 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-MXFP4_MOE.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q5_1.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-TQ1_0.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-TQ2_0.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-bf16.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-BF16.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-F16.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
chromadb-context-1-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text

README.md · Normal file · 112 lines

@@ -0,0 +1,112 @@
---
license: apache-2.0
library_name: gguf
base_model:
- chromadb/context-1
pipeline_tag: text-generation
language: en
tags:
- gguf
- llama.cpp
- gpt-oss
- chromadb
- chroma
- moe
- text-generation
- quantized
---
# Chroma Context-1 — GGUF (llama.cpp)
**GGUF weights for [Chroma Context-1](https://huggingface.co/chromadb/context-1),** converted for **[llama.cpp](https://github.com/ggml-org/llama.cpp)** and any runtime that loads GGUF (LM Studio, Ollama with compatible import paths, local servers, etc.).
This repository exists because **the upstream model is distributed in PyTorch / safetensors form only**. These files are the same weights in **GGUF**, with a range of **llama-quantize** presets so you can trade quality for VRAM and disk.
---
## Upstream (source of truth)
| | Link |
|---|------|
| **Original weights & model card** | [**`chromadb/context-1`**](https://huggingface.co/chromadb/context-1) |
| **Architecture family** | gpt-oss MoE (see upstream card; base traceable to OpenAI **[`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b)**) |
| **License** | **Apache 2.0** (unchanged; you must comply with upstream terms) |
**Attribution:** All tensors are derived from **[chromadb/context-1](https://huggingface.co/chromadb/context-1)**. This repo is a **community conversion** and is **not** affiliated with or endorsed by Chroma. For behavior, safety, and intended use, read the **official** model card first.
---
## Quick start
**1. Install** a recent [llama.cpp](https://github.com/ggml-org/llama.cpp) build (or use a GUI that bundles it).
**2. Download** this repository:
```bash
huggingface-cli download ryancook/chromadb-context-1-gguf --local-dir ./chromadb-context-1-gguf
```
**3. Run** (example — adjust paths and context length to your hardware):
```bash
llama-cli -m ./chromadb-context-1-gguf/chromadb-context-1-Q4_0.gguf -cnv --color -ngl 99
```
Swap the filename for any published `chromadb-context-1-*.gguf` from the **Files** tab (for example `Q4_K_M` or `MXFP4_MOE` when available).
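To serve the model over an OpenAI-compatible HTTP API instead of the interactive CLI, `llama-server` from the same llama.cpp build loads these files too. The path, context size, and port below are placeholder choices, not tested defaults:

```shell
# Serve the quantized model over HTTP (OpenAI-compatible /v1 endpoints).
# -c sets the context window; -ngl 99 offloads all layers to the GPU.
llama-server \
  -m ./chromadb-context-1-gguf/chromadb-context-1-Q4_0.gguf \
  -c 8192 --host 127.0.0.1 --port 8080 -ngl 99
```

Once it is up, any OpenAI-style client pointed at `http://127.0.0.1:8080/v1` can send chat-completion requests.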
---
## Choosing a file
**Start here (good defaults for most people):**
| Priority | File pattern | When to use |
|----------|----------------|-------------|
| 1 | **`…-Q4_K_M.gguf`** or **`…-Q5_K_M.gguf`** | Best general-purpose balance of quality and size (if present in this repo). |
| 2 | **`…-MXFP4_MOE.gguf`** | Smaller MoE-oriented layout; strong choice when supported by your llama.cpp build/GPU stack. |
| 3 | **`…-Q4_0.gguf`** / **`…-Q5_0.gguf`** | Simpler legacy-style quants; predictable tradeoffs. |
| 4 | **`…-bf16.gguf`** | Full **BF16** fidelity (~40GiB class); for reference or maximum quality when you have RAM/VRAM. |
**Other presets** (IQ*, TQ*, Q2_K, Q3_K*, Q6_K, Q8_0, F16, …) may appear in the **Files** tab as they are published. Lower-bit and ternary formats trade noticeable quality for size and should be treated as **experimental**; profile them on your workload before relying on them.
> **Tip:** The **Files and versions** view on Hugging Face is authoritative for what is available in each commit. Filenames follow `chromadb-context-1-<PRESET>.gguf`.
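Since each quant is tens of GiB, it is usually cheaper to fetch only the preset you settled on rather than the whole repository. The filename below is an example; substitute one that actually appears in the **Files** tab:

```shell
# Download a single quant file instead of the full repository.
huggingface-cli download ryancook/chromadb-context-1-gguf \
  chromadb-context-1-Q4_K_M.gguf \
  --local-dir ./chromadb-context-1-gguf
```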
---
## Conversion pipeline
Reproducible high-level steps:
1. **Obtain** weights from [**chromadb/context-1**](https://huggingface.co/chromadb/context-1) (Apache 2.0).
2. **Convert** to GGUF with llama.cpp **`convert_hf_to_gguf.py`** (BF16 output from upstream bf16 checkpoint).
3. **Quantize** with **`llama-quantize`** using the preset named in each filename (`Q4_0`, `Q4_K_M`, `MXFP4_MOE`, etc.).
### Reproducibility
Conversions for this collection were produced with **[ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp)** at commit **`07ba6d275`** (short SHA; matches upstream `convert_hf_to_gguf.py` / `llama-quantize` from that tree). Newer llama.cpp revisions are generally backward compatible for GGUF loading, but you may see small numerical differences if you re-quantize.
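The three steps above correspond roughly to the following commands. Directory and output names are placeholders, and flags should be checked against the llama.cpp revision you actually build:

```shell
# 1) Fetch the upstream safetensors checkpoint.
huggingface-cli download chromadb/context-1 --local-dir ./context-1

# 2) Convert to a BF16 GGUF with llama.cpp's converter script.
python convert_hf_to_gguf.py ./context-1 \
  --outfile chromadb-context-1-bf16.gguf --outtype bf16

# 3) Quantize to the preset named in each published filename.
llama-quantize chromadb-context-1-bf16.gguf \
  chromadb-context-1-Q4_K_M.gguf Q4_K_M
```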
---
## Hardware & context
- **VRAM / RAM:** MoE models route only a subset of experts per token, but all expert weights still need to be resident in RAM/VRAM; treat published file sizes as a guide and monitor peak usage at your target context length.
- **Context length:** Upstream supports a very long context window; practical limits depend on **KV cache size** and quant. Start with a smaller **`-c`** / context setting and increase only after you confirm stability.
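As a back-of-the-envelope check before raising the context setting: an f16 KV cache grows linearly with context length. The attention dimensions below are illustrative placeholders, so read the real ones from the GGUF header (for example with llama.cpp's `gguf_dump.py`) before trusting the number:

```shell
# Hypothetical attention dims -- replace with the values from the GGUF
# header of the file you downloaded (block_count, head_count_kv, etc.).
layers=24 kv_heads=8 head_dim=64 ctx=8192
bytes_per_elem=2   # f16 K and V entries
# K and V caches: 2 tensors per layer, each ctx * kv_heads * head_dim elements.
kv_bytes=$(( 2 * layers * kv_heads * head_dim * ctx * bytes_per_elem ))
echo "approx KV cache: $(( kv_bytes / 1024 / 1024 )) MiB"
```

Doubling `ctx` doubles this figure, and at long context the KV cache, not the weights, is usually what exhausts memory first.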
---
## License
Same as upstream: **Apache 2.0**. Keep **[chromadb/context-1](https://huggingface.co/chromadb/context-1)** attribution visible when you redistribute or ship products built on these files.
---
## More from Chroma
- **Official model (safetensors):** [chromadb/context-1](https://huggingface.co/chromadb/context-1)
- **Chroma:** [trychroma.com](https://www.trychroma.com/)

UPSTREAM.md · Normal file · 3 lines

@@ -0,0 +1,3 @@
**Canonical source:** [chromadb/context-1](https://huggingface.co/chromadb/context-1) on Hugging Face (Apache 2.0).
This directory is a derived GGUF distribution. Report model-quality or safety questions against the **official** upstream release, not this conversion, unless the issue reproduces only in GGUF form.

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fa1b5cbeb6cfe608542886957ef6b661001450da946eb1782c6dd497890ecabe
size 41860886816

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:91cd51746e41264b75be709b5fb06ca9dbb37021337c9643c021d9ecea3ca622
size 12065512736

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:58f94a155f6108ab7922bdeb415266f74a70964079a9824c511368c7b28b63e8
size 12065512736

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a61f9fadb9248e7848e8d797127f13eb25560f067ee6a5e5d0c71cd8cca2f3fa
size 12202646816

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:89446baef009e3403d52c790d14abeb2afba1d3a8f4ae290394c5b916f742e12
size 12065512736

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cb22e76f25667fa55d000ac92eb690385863fda258c729c713438ea4195aa1d6
size 12065512736

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c3e6ab92ca9d0765e47fcf64334e73fb29eaaa3c6764cdf38f593a792706ad77
size 12165045536

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9210bf34b48af00191c78be4974f37a2918a1dd95606e57a16850095956acc17
size 12254625056

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ddeeffcd58e9aa34e0dfeecefc68b990229342d8e57f5b1d5b2bf7dd809507c
size 12245777696

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b9254b99312acb069e0a30220d2ab03a9c1565127b74a3591b9856072b6a063c
size 12109565216

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:29fa22c11bcfbe94031e219f58a0081290fedbd0b762d8216004a4c897bfc403
size 12065512736

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2b9f2cacf67946794bb6b3faf5dd9511eb29843c261fce8c0a17f85e7b0bbde6
size 13335108896

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6fe792c77b2cfe2b8878165920be236fce4fb84dff8a6de09f8595cb311dd819
size 12916149536

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:315e79e8d59aafa42cf5012c51c190c53c1df13b212b292fe3e7c6b07a84d690
size 12061089056

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4f15b93b92e9774d72421c9e7463292fb3faaf75430a855a584cb7fc6862615d
size 12098690336

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fd651b543444b048f8eea1c880f34ca2934d204baa25f8bbf11872fdf82d84a9
size 13369092896

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f8e99f2e30ddb64f1ab832a76a1845b8e13a6e0ad1a059d1391cb73bacb4177
size 15805135136

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:08f03f3fcd27f5b0fa840112e9784648fab1a2a1423956f153c9d269755ec690
size 14654241056

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fb6a4138d04090f719547c4c5cb7e11e48803c16e438c08205713261ad517b8a
size 14639495456

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7a59ea704dc7169cc52439110fd45caaed309b7c38489fbcc9746ad3ffade233
size 15909898016

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d6c2f100db5d0ba21be15f312e306b2a26aeafa18a88638cdd3d4b02fb439e19
size 16893060896

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d6ed6a3e9bbbea0885085fbf39ff44991c67ae53d82fe96898b8404566187443
size 15892203296

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1f3c2390684042d239d37e8ff002ce31fa6ce8e26666fe4516f56e1c9831bc74
size 22193343776

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:78f9fb360fca5c1919367e2217f8b15e9720936134483d3f0ec22e48cffdf998
size 22261910816

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:60defe83ac3ad99f81bc2c96596126aec284e0c79b04e18391655d7b89dccb0c
size 12071549216

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b5b02501cb94836ac218ea07bd5be68fe318c062e86aa176e994e72e3117b635
size 12084820256

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0c977ee963df556190fbaf7f720ae7822b8554611093b84e2779c5553b9ffb46
size 41860886816