---
license: other
tags:
- axolotl
- finetune
- qlora
base_model: openchat/openchat-3.5-0106
datasets:
- hendrycks/competition_math
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- STEM-AI-mtl/Electrical-engineering
- openbookqa
- piqa
- metaeval/reclor
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- sciq
- TIGER-Lab/ScienceEval
---

# 🔬👩‍🔬 Newton-7B

This model is a fine-tuned version of [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) on datasets related to science.

It was fine-tuned using [QLoRA](https://arxiv.org/abs/2305.14314) and [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).

This model's training was sponsored by [sablo.ai](https://sablo.ai).

<details><summary>See axolotl config</summary>

axolotl version: `0.3.0`
```yaml
base_model: openchat/openchat-3.5-0106
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: true
strict: false


datasets:
  - path: merged_all.json
    type:
      field_instruction: instruction
      field_output: output

      format: "GPT4 Correct User: {instruction}<|end_of_turn|>GPT4 Correct Assistant:"
      no_input_format: "GPT4 Correct User: {instruction}<|end_of_turn|>GPT4 Correct Assistant:"


dataset_prepared_path: last_run_prepared
val_set_size: 0.01 # not sure
output_dir: ./newton

adapter: qlora
lora_model_dir:

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true

lora_r: 128
lora_alpha: 64
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
  - gate_proj
  - down_proj
  - up_proj
  - q_proj
  - v_proj
  - k_proj
  - o_proj
lora_modules_to_save:
  - embed_tokens
  - lm_head

wandb_project: huggingface
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

hub_model_id: Weyaxi/newton-lora
save_safetensors: true

# change #
gradient_accumulation_steps: 12
micro_batch_size: 6
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
# change #

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10 # not sure

saves_per_epoch: 2

evals_per_epoch: 4
eval_table_size:
eval_table_max_new_tokens: 128

debug:
deepspeed:
weight_decay: 0.1 # not sure
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
tokens:
  - "<|end_of_turn|>"
  - "<|pad_0|>"
```

</details><br>
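
Since the config pushes the trained adapter to `Weyaxi/newton-lora` (see `hub_model_id` above), the LoRA weights can in principle be attached back onto the base model with PEFT instead of using this merged repo. A minimal sketch, assuming the adapter repo is accessible and `peft`/`transformers` are installed:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tokenizer from this repo already carries the added tokens
# ("<|end_of_turn|>", "<|pad_0|>").
tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Newton-7B")

base = AutoModelForCausalLM.from_pretrained(
    "openchat/openchat-3.5-0106",
    torch_dtype=torch.bfloat16,  # training ran in bf16
    device_map="auto",
)

# The adapter also stores embed_tokens / lm_head (lora_modules_to_save),
# so the base embedding size must match before attaching it.
base.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(base, "Weyaxi/newton-lora")
```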

# 📊 Datasets

You can find the datasets I used, along with the work I am doing with them, here:

https://huggingface.co/datasets/Weyaxi/sci-datasets

The following datasets were used to train this model:

- 📐 [MATH](https://huggingface.co/datasets/hendrycks/competition_math)

- 🧠 [ARC](https://huggingface.co/datasets/allenai/ai2_arc) (Note: only the **train** split)

- 🧲 [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)

- ⚗️ [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)

- 🦠 [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)

- 📊 [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)

- ⚡ [STEM-AI-mtl/Electrical-engineering](https://huggingface.co/datasets/STEM-AI-mtl/Electrical-engineering)

- 📚 [openbookqa](https://huggingface.co/datasets/openbookqa)

- 🧠 [piqa](https://huggingface.co/datasets/piqa)

- 🎨 [reclor](https://huggingface.co/datasets/metaeval/reclor)

- 🔬 [scibench](https://github.com/mandyyyyii/scibench)

- 🧪 [ScienceQA](https://huggingface.co/datasets/derek-thomas/ScienceQA)

- 🧬 [sciq](https://huggingface.co/datasets/sciq)

- 📝 [ScienceEval](https://huggingface.co/datasets/TIGER-Lab/ScienceEval)

## 🛠️ Multiple Choice Question & Answer Datasets Conversion Process

For the multiple-choice datasets, I used [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) to generate reasoned, logical answers by providing it with the question and the answer key.

I used the [Together AI](https://www.together.ai) API for this task.
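
The exact prompts and parameters are not published; the sketch below only illustrates the shape of this conversion step against Together AI's OpenAI-compatible endpoint. The function name and prompt wording are assumptions, not the actual pipeline:

```python
from openai import OpenAI

# Together AI exposes an OpenAI-compatible chat completions endpoint.
client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key="YOUR_TOGETHER_API_KEY",  # placeholder
)

def mcq_to_explanation(question: str, choices: list[str], answer_key: str) -> str:
    """Illustrative: ask Mixtral to turn a question + answer key into a reasoned answer."""
    formatted = "\n".join(f"{k}. {c}" for k, c in zip("ABCD", choices))
    prompt = (
        f"Question: {question}\n{formatted}\n\n"
        f"The correct answer is {answer_key}. "
        "Explain step by step why this answer is correct."
    )
    response = client.chat.completions.create(
        model="mistralai/Mixtral-8x7B-Instruct-v0.1",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```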

The following datasets were converted using this method:

- 🧠 [ARC](https://huggingface.co/datasets/allenai/ai2_arc) (Note: only the **train** split)

- 📚 [openbookqa](https://huggingface.co/datasets/openbookqa)

- 🎨 [reclor](https://huggingface.co/datasets/metaeval/reclor)

- 🧬 [sciq](https://huggingface.co/datasets/sciq)

# 💬 Prompt Template

You can use this prompt template while using the model:

### GPT4 Correct [(Openchat)](https://huggingface.co/openchat/openchat-3.5-0106#conversation-templates)

```
GPT4 Correct User: {user}<|end_of_turn|>GPT4 Correct Assistant: {assistant}<|end_of_turn|>GPT4 Correct User: {user}<|end_of_turn|>GPT4 Correct Assistant:
```

You can also apply the chat template from the tokenizer config, like this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Weyaxi/Newton-7B")

messages = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Hi"},
    {"role": "user", "content": "How are you today?"}
]
# Returns token ids with the generation prompt appended
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
```
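
From there, generation is standard `transformers` usage. A minimal sketch, assuming a `model` and `tokenizer` loaded as in the PEFT example above:

```python
import torch

# `tokens` is the list of ids from apply_chat_template above.
input_ids = torch.tensor([tokens], device=model.device)
output = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated part of the sequence.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```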

# 🤝 Acknowledgments

Thanks to the [openchat](https://huggingface.co/openchat) team for fine-tuning the excellent model I used as a base.

Thanks to [@jondurbin](https://huggingface.co/jondurbin) for the reformatting code for some of the datasets: [bagel/data_sources](https://github.com/jondurbin/bagel/tree/main/bagel/data_sources)

Thanks to [Together AI](https://www.together.ai) for providing everyone with free credits, which I used to convert the multiple-choice datasets into an explanation format.

Thanks to [Tim Dettmers](https://huggingface.co/timdettmers) for his excellent [QLoRA](https://arxiv.org/abs/2305.14314) work.

Thanks to all the dataset authors mentioned in the datasets section.

Thanks to [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for the training framework I used to make this model.

Overall, thanks to all of the open source AI community! 🚀

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

If you would like to support me:

[☕ Buy Me a Coffee](https://www.buymeacoffee.com/weyaxi)