Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

# SambaLingo-Russian-Chat - GGUF

- Model creator: https://huggingface.co/sambanovasystems/
- Original model: https://huggingface.co/sambanovasystems/SambaLingo-Russian-Chat/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SambaLingo-Russian-Chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q2_K.gguf) | Q2_K | 2.47GB |
| [SambaLingo-Russian-Chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q3_K_S.gguf) | Q3_K_S | 2.87GB |
| [SambaLingo-Russian-Chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q3_K.gguf) | Q3_K | 3.19GB |
| [SambaLingo-Russian-Chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q3_K_M.gguf) | Q3_K_M | 3.19GB |
| [SambaLingo-Russian-Chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q3_K_L.gguf) | Q3_K_L | 3.47GB |
| [SambaLingo-Russian-Chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.IQ4_XS.gguf) | IQ4_XS | 3.53GB |
| [SambaLingo-Russian-Chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q4_0.gguf) | Q4_0 | 3.7GB |
| [SambaLingo-Russian-Chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.IQ4_NL.gguf) | IQ4_NL | 3.72GB |
| [SambaLingo-Russian-Chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q4_K_S.gguf) | Q4_K_S | 3.73GB |
| [SambaLingo-Russian-Chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q4_K.gguf) | Q4_K | 3.94GB |
| [SambaLingo-Russian-Chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q4_K_M.gguf) | Q4_K_M | 3.94GB |
| [SambaLingo-Russian-Chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q4_1.gguf) | Q4_1 | 4.09GB |
| [SambaLingo-Russian-Chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q5_0.gguf) | Q5_0 | 4.48GB |
| [SambaLingo-Russian-Chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q5_K_S.gguf) | Q5_K_S | 4.48GB |
| [SambaLingo-Russian-Chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q5_K.gguf) | Q5_K | 4.6GB |
| [SambaLingo-Russian-Chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q5_K_M.gguf) | Q5_K_M | 4.6GB |
| [SambaLingo-Russian-Chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q5_1.gguf) | Q5_1 | 4.87GB |
| [SambaLingo-Russian-Chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q6_K.gguf) | Q6_K | 5.31GB |
| [SambaLingo-Russian-Chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/sambanovasystems_-_SambaLingo-Russian-Chat-gguf/blob/main/SambaLingo-Russian-Chat.Q8_0.gguf) | Q8_0 | 6.88GB |
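
These files are standard GGUF quantizations, so any llama.cpp-compatible runtime can load them. A minimal sketch using the `llama-cpp-python` bindings follows; the local file path, context size, and sampling values are illustrative assumptions, not part of this repo:

```python
# Minimal sketch: running a downloaded GGUF quant with llama-cpp-python.
# Assumes the Q4_K_M file from this repo is in the working directory.
from llama_cpp import Llama

llm = Llama(model_path="SambaLingo-Russian-Chat.Q4_K_M.gguf", n_ctx=2048)

# The model expects the chat template documented in the original card below.
prompt = "<|user|>\nПривет! Расскажи о себе.</s>\n<|assistant|>\n"
out = llm(prompt, max_tokens=256, temperature=0.8, top_p=0.9)
print(out["choices"][0]["text"])
```

Smaller quants (Q2_K–Q3_K) trade answer quality for memory; Q8_0 stays closest to the original weights.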

Original model description:

---
license: llama2
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/cai-conversation-harmless
language:
- ru
- en
---

# SambaLingo-Russian-Chat

<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Russian-Chat is a human-aligned chat model trained in Russian and English. It was trained using direct preference optimization on top of the base model [SambaLingo-Russian-Base](https://huggingface.co/sambanovasystems/SambaLingo-Russian-Base). The base model adapts [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) to Russian by training on 63 billion tokens from the Russian split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).

## Model Description
<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Russian, English
- **Finetuned from model:** [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- **Try this model:** [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post:** [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)

## Getting Started

### Loading Model With Hugging Face
Please make sure to set `use_fast=False` when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Russian-Chat", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Russian-Chat", device_map="auto", torch_dtype="auto")
```
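
Once the model and tokenizer are loaded, generation follows the standard Transformers pattern. A minimal sketch continuing the snippet above (the question is a placeholder; the sampling values mirror the suggested inference parameters below):

```python
# Sketch: one chat turn with the model and tokenizer loaded above.
messages = [{"role": "user", "content": "Привет! Как дела?"}]  # placeholder question
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.9)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```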

### Interacting With Model Pipeline
Please make sure to set `use_fast=False` when loading the tokenizer.
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Russian-Chat", device_map="auto", use_fast=False)
messages = [
    {"role": "user", "content": "YOUR_QUESTION"},  # replace with your question
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]
```

### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
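
As a sketch, these defaults map directly onto the generation arguments of the pipeline call shown above (`max_new_tokens` is an assumed cap, not specified by the card):

```python
# Sketch: passing the suggested defaults through the pipeline call above.
outputs = pipe(
    prompt,
    max_new_tokens=256,      # assumed; the card does not specify a length cap
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.0,
)[0]["generated_text"]
```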

### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```
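
Equivalently, if you build prompts by hand instead of using `apply_chat_template`, the template expands as in this sketch (the question is a placeholder):

```python
# Sketch: constructing the chat prompt manually from the template above.
question = "Какой город является столицей России?"  # placeholder question
prompt = f"<|user|>\n{question}</s>\n<|assistant|>\n"
```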

### Example Prompts and Generations

```
<|user|>
Завершите параграф: Родион Раскольников - главный персонаж романа "Преступление и наказание" Федора Достоевского. Его характер очень сложен и эволюционирует на протяжении всей истории. Он обладает выдающимися умственными способностями и амбициями, но</s>
<|assistant|>
его внутренняя борьба и моральные сомнения мешают ему действовать в соответствии со своими желаниями. Раскольников — бедный студент, который терпит финансовые трудности и чувствует себя бессильным изменить свою жизнь. Он приходит к теории, согласно которой некоторые люди имеют право совершать преступления ради достижения великих целей, и решает проверить свою теорию, убив старуху-процентщицу. Однако его действия приводят к глубокому чувству вины и психологическому срыву, который заставляет его искать искупления и искупления. На протяжении всего романа Раскольников проходит путь самопознания и личностного роста, осознавая важность сочувствия, сострадания и ответственности перед другими. В конечном итоге он принимает на себя ответственность за свои действия и находит искупление через раскаяние и прощение.
```

## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).

The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with the Google-translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and a max sequence length of 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.

The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and the [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with 10% of the data Google-translated. It was trained with global batch size 32 for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup, and β=0.1 as the regularization factor for DPO.
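
As a rough illustration only (not the authors' training code), these DPO hyperparameters could be expressed with a recent version of the TRL library's `DPOConfig`; the output path and the batch/accumulation split are assumptions:

```python
# Sketch: the reported DPO hyperparameters expressed as a TRL DPOConfig.
# Illustrative only; this is not the authors' actual training setup.
from trl import DPOConfig

config = DPOConfig(
    output_dir="sambalingo-ru-dpo",     # hypothetical output path
    per_device_train_batch_size=1,      # chosen so devices x accumulation = 32
    gradient_accumulation_steps=32,     # global batch size 32
    num_train_epochs=3,
    learning_rate=5e-7,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    beta=0.1,                           # DPO regularization factor
)
```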

## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
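
The general mechanics of such an extension in Transformers look roughly like the sketch below; the token list here is a hypothetical stand-in for the roughly 25,000 tokens the authors derived from Russian data:

```python
# Sketch: extending a Llama tokenizer's vocabulary and resizing embeddings.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

new_russian_tokens = ["пример", "токен"]       # placeholder for ~25,000 real tokens
num_added = tokenizer.add_tokens(new_russian_tokens)
model.resize_token_embeddings(len(tokenizer))  # grow the embedding matrix to match
print(f"Added {num_added} tokens; vocab size is now {len(tokenizer)}")
```

The new embedding rows start untrained and must be learned, here during the continued pretraining on CulturaX.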

## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)

## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:

- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.

## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.

We would like to give a special thanks to the following groups:
- Meta for open-sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open-sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open-sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- The Hugging Face H4 team for open-sourcing the Zephyr training recipe and alignment handbook repo

## Cite SambaLingo
```
@misc{csaki2024sambalingo,
      title={SambaLingo: Teaching Large Language Models New Languages},
      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
      year={2024},
      eprint={2404.05829},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```