---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.3
---

__This version has problems, use if you dare, or wait for 1.4.__

### Overview

This is a QLoRA fine-tuned 65B parameter LLaMA model, using completely synthetic training data created by GPT-4 via https://github.com/jondurbin/airoboros

This is mostly an extension of [1.2](https://huggingface.co/jondurbin/airoboros-65b-gpt4-1.2) with a few enhancements:

- All coding instructions have an equivalent " PLAINFORMAT" version now (see the example below).
- Thousands of new orca-style reasoning instructions, this time with the reasoning given first, then the answer.
- A few more random items of various types, including a first attempt at multi-character interactions, with asterisked actions and quoted speech.
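
For example, a PLAINFORMAT instruction might look like the following (an illustrative prompt, not taken from the actual training data):

```
Write a python function that returns the nth fibonacci number. PLAINFORMAT
```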

This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which, among other things, was updated to use a slightly modified Vicuna template to remain compatible with the previous full fine-tune versions.

```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```

So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after the colon), then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
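
As a minimal sketch of assembling that string in Python (the helper name and example prompt are just for illustration):

```python
# Assemble a prompt in the modified vicuna format described above.
SYSTEM = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_prompt: str) -> str:
    # preamble + single space + "USER: " + prompt + single space + "ASSISTANT: "
    return f"{SYSTEM} USER: {user_prompt} ASSISTANT: "

print(build_prompt("Give me three uses for a paperclip."))
```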

### Usage

To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.

```
pip install git+https://github.com/jondurbin/FastChat
```

Be sure you are pulling the latest branch!

Then, you can invoke it like so (after downloading the model):

```
python -m fastchat.serve.cli \
    --model-path airoboros-65b-gpt4-1.3 \
    --temperature 0.5 \
    --max-new-tokens 2048 \
    --no-history
```
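
Alternatively, if you prefer to call the model directly, here is a minimal sketch using the Hugging Face `transformers` API (the local path and generation settings are assumptions, not part of the original instructions; `build_prompt` is the helper from the sketch above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "airoboros-65b-gpt4-1.3"  # local directory with the downloaded weights
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path,
    torch_dtype=torch.float16,
    device_map="auto",  # shard across available GPUs
)

prompt = build_prompt("Explain the difference between a list and a tuple.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=2048,
    temperature=0.5,
    do_sample=True,
)
# Strip the prompt tokens and decode only the generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```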

### Training details

Fine-tuned with my fork of qlora: https://github.com/jondurbin/qlora

Using:

```
export WANDB_PROJECT=airoboros-65b-gpt4-1.3

python qlora.py \
    --model_name_or_path ./llama-65b-hf \
    --output_dir ./airoboros-65b-gpt4-1.3-peft \
    --max_steps 2520 \
    --logging_steps 1 \
    --save_strategy steps \
    --data_seed 11422 \
    --save_steps 75 \
    --save_total_limit 3 \
    --evaluation_strategy "no" \
    --eval_dataset_size 2 \
    --max_new_tokens 2800 \
    --dataloader_num_workers 3 \
    --logging_strategy steps \
    --remove_unused_columns False \
    --do_train \
    --lora_r 64 \
    --lora_alpha 16 \
    --lora_modules all \
    --double_quant \
    --quant_type nf4 \
    --bf16 \
    --bits 4 \
    --warmup_ratio 0.03 \
    --lr_scheduler_type constant \
    --gradient_checkpointing \
    --dataset instructions.jsonl \
    --dataset_format airoboros \
    --model_max_len 2800 \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 16 \
    --learning_rate 0.0001 \
    --adam_beta2 0.999 \
    --max_grad_norm 0.3 \
    --lora_dropout 0.05 \
    --weight_decay 0.0 \
    --seed 11422 \
    --report_to wandb
```
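
(For reference: with these settings, the effective batch size works out to per_device_train_batch_size × gradient_accumulation_steps = 2 × 16 = 32 examples per optimizer step per GPU; the number of GPUs used isn't stated here.)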

Three file modifications to the base LLaMA:

- llama-65b-hf/tokenizer_config.json (see this repo's version; updated to a 4096 max sequence length during training to accommodate the training data)
- llama-65b-hf/special_tokens_map.json (see this repo's version)
- llama-65b-hf/config.json (updated to temporarily allow a max model length of 4096 to accommodate the training data)

Afterwards, the max model length and sequence length were reduced back to 2048 to avoid ... issues ...
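
A hypothetical sketch of that temporary bump and restore, assuming the relevant fields are `model_max_length` in tokenizer_config.json and `max_position_embeddings` in config.json (the card doesn't name the exact keys):

```python
import json

def set_value(path: str, key: str, value: int) -> None:
    """Load a JSON config, overwrite one key, and write it back."""
    with open(path) as f:
        cfg = json.load(f)
    cfg[key] = value
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)

# Before training: widen the limits to 4096 so long examples aren't truncated.
set_value("llama-65b-hf/tokenizer_config.json", "model_max_length", 4096)
set_value("llama-65b-hf/config.json", "max_position_embeddings", 4096)

# After training: restore 2048, the context length the base model was trained at.
set_value("llama-65b-hf/tokenizer_config.json", "model_max_length", 2048)
set_value("llama-65b-hf/config.json", "max_position_embeddings", 2048)
```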

### Usage and License Notices

All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because:

- the base model is LLaMA, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI

So, to reiterate: this model (and datasets) cannot be used commercially.