---
base_model: artificialguybr/Gemma2-2B-OpenHermes2.5
datasets:
- teknium/OpenHermes-2.5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: artificialguybr
tags:
- gemma
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
---
## About
<!-- ### quantize_version: 2 -->
GGUF quantization of [artificialguybr/Gemma2-2B-OpenHermes2.5](https://huggingface.co/artificialguybr/Gemma2-2B-OpenHermes2.5).

---
### 🌐 Website
You can find more of my models, projects, and information on my official website:
- **[artificialguy.com](https://artificialguy.com/)**

### 🚀 Prompt Hub
Need high-quality prompts for image models and LLMs? Explore **[findgoodprompt.com](https://findgoodprompt.com)**.

### 💖 Support My Work
If you find this model useful, please consider supporting my work. It helps me cover server costs and dedicate more time to new open-source projects.
- **Patreon:** [Support on Patreon](https://www.patreon.com/user?u=81570187)
- **Ko-fi:** [Buy me a Ko-fi](https://ko-fi.com/artificialguybr)
- **Buy Me a Coffee:** [Buy me a Coffee](https://buymeacoffee.com/jvkape)

<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
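Not needed for normal use, but as a quick illustration of what a GGUF file is: it is a single-file binary format that begins with the ASCII magic `GGUF` followed by a little-endian `uint32` format version. A minimal sketch of a header check (the file name and helper are hypothetical, and a dummy header stands in for a real model file):

```python
import struct

def is_gguf(path: str) -> bool:
    """Check the 4-byte magic and read the format version of a GGUF file."""
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":       # every GGUF file starts with this magic
            return False
        (version,) = struct.unpack("<I", f.read(4))  # little-endian uint32
        return version >= 1

# Demonstration with a dummy header (real files continue with tensor count,
# metadata key/value count, and the metadata itself).
with open("dummy.gguf", "wb") as f:
    f.write(b"GGUF" + struct.pack("<I", 3))

print(is_gguf("dummy.gguf"))  # → True
```

A check like this is handy for catching truncated downloads before handing the file to a runtime.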
## How to use
If you are unsure how to use GGUF files, see one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/CodeLlama-70B-Python-GGUF) for
more details, including how to concatenate multi-part files.
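The multi-part case mentioned above can be sketched as follows. Parts produced by plain `split`-style tooling are raw byte slices, so joining them is an in-order `cat` (dummy data stands in for the real, multi-gigabyte parts; all file names here are hypothetical):

```shell
# Dummy stand-ins for the parts of a split GGUF download.
printf 'GGUF-part-one-' > model.gguf.part1
printf 'part-two'       > model.gguf.part2

# split-style parts are raw byte slices: concatenating them
# in order restores the original single file.
cat model.gguf.part1 model.gguf.part2 > model.gguf

cat model.gguf   # → GGUF-part-one-part-two
```

Note that parts produced by llama.cpp's `gguf-split` tool carry their own headers and are not raw slices; those should be merged with that tool rather than with `cat`.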