---
library_name: transformers
license: other
datasets:
- Locutusque/hercules-v4.0
language:
- en
inference:
  parameters:
    do_sample: true
    temperature: 1
    top_p: 0.7
    top_k: 4
    max_new_tokens: 250
    repetition_penalty: 1.1
---
# Hercules-Mini-1.8B

<!-- Provide a quick summary of what the model is/does. -->

We fine-tuned Qwen1.5-1.8B on Locutusque's Hercules-v4.
## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->
This model has capabilities in math, coding, function calling, roleplay, and more. We fine-tuned it on 700,000 examples from Hercules-v4.

- **Developed by:** M4-ai
- **Language(s) (NLP):** English; the Qwen1.5 base model may retain some Chinese capability
- **License:** tongyi-qianwen license
- **Finetuned from model:** [Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B)
## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

General-purpose assistant, question answering, chain-of-thought reasoning, etc. A minimal usage sketch follows.
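Below is a minimal inference sketch with the `transformers` library, using the sampling parameters from this card's metadata. The repo id `M4-ai/Hercules-Mini-1.8B` and the presence of a ChatML chat template on the tokenizer are assumptions, not confirmed details of this release:

```python
# Hedged usage sketch; repo id and chat template are assumed, not confirmed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/Hercules-Mini-1.8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Qwen1.5 tokenizers ship a ChatML template; we assume this fine-tune kept it.
messages = [{"role": "user", "content": "Write a Python function that checks if a number is prime."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings mirror the inference.parameters block in the metadata above.
output_ids = model.generate(
    input_ids,
    do_sample=True,
    temperature=1.0,
    top_p=0.7,
    top_k=4,
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```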
## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The eos token was not set up properly, so to prevent infinite generation you will need to implement a stopping criterion that halts when the model generates the `<|im_end|>` token. A sketch of one approach follows.
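One hedged way to do this with `transformers` (reusing `tokenizer`, `model`, and `input_ids` from the usage sketch above):

```python
# Sketch of a stopping criterion that halts generation at <|im_end|>.
# tokenizer, model, and input_ids are assumed from the usage sketch above.
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnToken(StoppingCriteria):
    """Returns True (stop) once the most recently generated token equals stop_id."""
    def __init__(self, stop_id: int):
        self.stop_id = stop_id

    def __call__(self, input_ids, scores, **kwargs):
        return input_ids[0, -1].item() == self.stop_id

im_end_id = tokenizer.convert_tokens_to_ids("<|im_end|>")
output_ids = model.generate(
    input_ids,
    max_new_tokens=250,
    stopping_criteria=StoppingCriteriaList([StopOnToken(im_end_id)]),
)
```

Equivalently, passing `eos_token_id=im_end_id` to `model.generate()` achieves the same cutoff without a custom class.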
### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## Evaluation

Coming soon.
## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

https://huggingface.co/datasets/Locutusque/hercules-v4.0
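A minimal sketch of pulling the training data with the `datasets` library; the `train` split name is an assumption:

```python
# Hedged sketch: load Hercules-v4.0; the "train" split name is assumed.
from datasets import load_dataset

dataset = load_dataset("Locutusque/hercules-v4.0", split="train")
print(dataset[0])  # inspect one example
```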
#### Training Hyperparameters

- **Training regime:** bf16 non-mixed precision

## Technical Specifications

#### Hardware

We used 8 Kaggle TPUs and trained at a global batch size of 256 with a sequence length of 1536.
## Contributions

Thanks to @Tonic, @aloobun, @fhai50032, and @Locutusque for their contributions to this model.