Initialize project; model provided by the ModelHub XC community
Model: akumaburn/llama-3-8b-bnb-4bit-GGUF Source: Original Platform
This commit adds README.md (new file, 24 lines):
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---

# Uploaded model

- **Developed by:** akumaburn
- **License:** apache-2.0
- **Base model (not fine-tuned):** unsloth/llama-3-8b-bnb-4bit

This Llama model was quantized to GGUF from https://huggingface.co/unsloth/llama-3-8b-bnb-4bit/tree/main without any fine-tuning.

Quantized using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
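A GGUF file begins with a small self-describing binary header, so after downloading you can sanity-check the artifact with nothing but the standard library. The sketch below assumes the GGUF v2+ header layout (little-endian: 4-byte magic `GGUF`, uint32 version, uint64 tensor count, uint64 metadata key/value count); the filename in the usage line is hypothetical.

```python
import struct

def read_gguf_header(path):
    """Read the fixed-size GGUF header: magic, version, tensor count, metadata KV count."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        # uint32 format version, then two uint64 counts (GGUF v2+ layout)
        (version,) = struct.unpack("<I", f.read(4))
        n_tensors, n_kv = struct.unpack("<QQ", f.read(16))
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}
```

Example use (file name assumed): `read_gguf_header("llama-3-8b-q4.gguf")` returns the version and counts, which should be non-zero for a valid model file.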
|
||||