---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2-1.5B
language:
- en
pipeline_tag: text-generation
tags:
- generated_from_trainer
- instruction-tuning
model-index:
- name: outputs/qwen2.5-1.5b-ft-synthia15-ii
  results: []
---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

# Qwen2-1.5B Fine-tuned on Synthia v1.5-II

A special thanks to Redmond.ai for sponsoring the GPU resources for this fine-tuning process.

This model is a fine-tuned version of [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B) on the Synthia v1.5-II dataset, which contains over 20.7k instruction-following examples.
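
For a quick start, here is a minimal inference sketch using Hugging Face Transformers. The repo id is assumed from this model's upload name, and the plain-text prompt is illustrative, as the card does not document a chat template.

```python
# Minimal inference sketch. Assumptions: the repo id matches the upload name,
# and plain-text prompting is acceptable (no chat template is documented).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "artificialguybr/QWEN-2-1.5B-Synthia-II-Redmond"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # roughly 3 GB of weights at 1.5B parameters
    device_map="auto",
)

prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```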

## Model Description

Qwen2-1.5B is part of the latest Qwen2 series of large language models. The base model brings significant improvements in:

- Language understanding and generation
- Structured data processing
- Support for multiple languages
- Long context handling

This fine-tuned version enhances the base model's instruction-following capabilities through training on the Synthia v1.5-II dataset.

### Model Architecture

- Type: Causal Language Model
- Parameters: 1.5B
- Training Framework: Transformers 4.45.0.dev0

---

### 🌐 Website

You can find more of my models, projects, and information on my official website:

- **[artificialguy.com](https://artificialguy.com/)**

### 🚀 Prompt Hub

Need high-quality prompts for image models and LLMs? Explore **[findgoodprompt.com](https://findgoodprompt.com)**.

### 💖 Support My Work

If you find this model useful, please consider supporting my work. It helps me cover server costs and dedicate more time to new open-source projects.

- **Patreon:** [Support on Patreon](https://www.patreon.com/user?u=81570187)
- **Ko-fi:** [Buy me a Ko-fi](https://ko-fi.com/artificialguybr)
- **Buy Me a Coffee:** [Buy me a Coffee](https://buymeacoffee.com/jvkape)

## Intended Uses & Limitations

This model is intended for:

- Instruction following and task completion
- Text generation and completion
- Conversational AI applications

The model inherits the capabilities of the base Qwen2-1.5B model while being specifically tuned for instruction following.

## Training Procedure

### Training Data

The model was fine-tuned on the Synthia v1.5-II dataset, containing 20.7k instruction-following examples.

### Training Hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto Transformers `TrainingArguments` follows the list):

- Learning rate: 1e-05
- Train batch size: 5 (per device)
- Eval batch size: 5
- Seed: 42
- Gradient accumulation steps: 8
- Total train batch size: 40 (5 per device × 8 accumulation steps)
- Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- LR scheduler type: cosine
- LR scheduler warmup steps: 100
- Number of epochs: 3
- Sequence length: 4096
- Sample packing: enabled
- Pad to sequence length: enabled
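
For reference, here is a minimal sketch of how the values above map onto Hugging Face Transformers `TrainingArguments`. This is illustrative rather than the actual training script: the run was driven by Axolotl, and the `output_dir` is taken from the model-index name.

```python
# Illustrative only: the actual run used Axolotl, not a hand-written Trainer script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs/qwen2.5-1.5b-ft-synthia15-ii",  # name from the model-index entry
    learning_rate=1e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    seed=42,
    gradient_accumulation_steps=8,  # effective train batch size: 5 * 8 = 40
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# Sequence length (4096), sample packing, and pad-to-sequence-length are handled
# by Axolotl's data pipeline rather than by TrainingArguments.
```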

## Framework Versions

- Transformers 4.45.0.dev0
- PyTorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

</details>