Initialize project; model provided by the ModelHub XC community
Model: TheBloke/Nous-Capybara-34B-AWQ Source: Original Platform
37
.gitattributes
vendored
Normal file
@@ -0,0 +1,37 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
model-00001-of-00002.safetensors filter=lfs diff=lfs merge=lfs -text
model-00002-of-00002.safetensors filter=lfs diff=lfs merge=lfs -text
448
README.md
Normal file
@@ -0,0 +1,448 @@
---
base_model: NousResearch/Nous-Capybara-34B
datasets:
- LDJnr/LessWrong-Amplify-Instruct
- LDJnr/Pure-Dove
- LDJnr/Verified-Camel
inference: false
language:
- eng
license:
- mit
model_creator: NousResearch
model_name: Nous Capybara 34B
model_type: yi
prompt_template: 'USER: {prompt} ASSISTANT:'
quantized_by: TheBloke
tags:
- sft
- StableLM
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Nous Capybara 34B - AWQ
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [Nous Capybara 34B](https://huggingface.co/NousResearch/Nous-Capybara-34B)

<!-- description start -->
## Description

This repo contains AWQ model files for [NousResearch's Nous Capybara 34B](https://huggingface.co/NousResearch/Nous-Capybara-34B).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Nous-Capybara-34B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Nous-Capybara-34B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Nous-Capybara-34B-GGUF)
* [NousResearch's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/NousResearch/Nous-Capybara-34B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: User-Assistant

```
USER: {prompt} ASSISTANT:
```

<!-- prompt-template end -->
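
The template is a plain `{prompt}` placeholder string, so it can be filled in with ordinary Python string formatting. A minimal sketch (the example message is illustrative, not from the card):

```python
# Minimal sketch: fill in the User-Assistant template with str.format().
# The template matches the prompt_template metadata above; the message is
# just an example.
prompt_template = "USER: {prompt} ASSISTANT:"

formatted = prompt_template.format(prompt="Tell me about AI")
print(formatted)  # USER: Tell me about AI ASSISTANT:
```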

<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Nous-Capybara-34B-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 19.23 GB |

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Nous-Capybara-34B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Nous-Capybara-34B-AWQ`.
7. Select **Loader: AutoAWQ**.
8. Click **Load**; the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
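
If you prefer to fetch the files outside the web UI, the repo can also be downloaded programmatically. A minimal sketch using `huggingface_hub`'s `snapshot_download` (the destination path is an example; point it at whatever directory your client scans for models, e.g. text-generation-webui's `models/` folder):

```python
# Minimal sketch: download the whole AWQ repo with huggingface_hub.
# local_dir below is an example destination, not a required path.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/Nous-Capybara-34B-AWQ",
    local_dir="models/Nous-Capybara-34B-AWQ",  # example path
)
```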

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Nous-Capybara-34B-AWQ --quantization awq --dtype auto
```
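
Once the server is running it can be queried over HTTP. A minimal sketch, assuming the default port (8000) and the demo `/generate` endpoint that `vllm.entrypoints.api_server` exposes; the sampling fields follow vLLM's `SamplingParams` names:

```python
# Minimal sketch: query the vLLM api_server started above.
# Assumes it is listening locally on the default port 8000 and exposes
# the demo /generate endpoint.
import requests

payload = {
    "prompt": "USER: Tell me about AI ASSISTANT:",
    "max_tokens": 128,
    "temperature": 0.8,
}
response = requests.post("http://localhost:8000/generate", json=payload)
print(response.json())
```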

- When using vLLM from Python code, again set `quantization=awq`.

For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# A plain template string (not an f-string), filled in per prompt below.
prompt_template = '''USER: {prompt}
ASSISTANT:
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Nous-Capybara-34B-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Nous-Capybara-34B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:
'''

client = InferenceClient(endpoint_url)
# Send the formatted prompt (not the raw message) so the model sees the
# USER:/ASSISTANT: template it was trained with.
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print("Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/Nous-Capybara-34B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template = f'''USER: {prompt}
ASSISTANT:
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: NousResearch's Nous Capybara 34B

## **Nous-Capybara-34B V1.9**

**This is trained on the Yi-34B model with 200K context length, for 3 epochs on the Capybara dataset!**

**First 34B Nous model and first 200K context length Nous model!**

The Capybara series is the first Nous collection of models made by fine-tuning mostly on data created by Nous in-house.

We leverage our novel data synthesis technique called Amplify-Instruct (paper coming soon). The seed distribution and synthesis method combine top-performing existing data synthesis techniques and distributions used for SOTA models such as Airoboros, Evol-Instruct (WizardLM), Orca, Vicuna, Know_Logic, Lamini, FLASK and others, all in one lean, holistically formed methodology for the dataset and model. The seed instructions used to start the synthesized conversations are largely based on high-quality datasets like Airoboros, Know_Logic, EverythingLM and GPTeacher, plus entirely new seed instructions derived from posts on the website LessWrong, supplemented with certain in-house multi-turn datasets like Dove (a successor to Puffin).

While performing great in its current state, the dataset used for fine-tuning is entirely contained within 20K training examples. This is 10 times smaller than many similarly performing current models, which is significant for the scaling implications of our next generation of models, once we scale our novel synthesis methods to significantly more examples.

## Process of creation and special thank yous!

This model was fine-tuned by Nous Research as part of the Capybara/Amplify-Instruct project led by Luigi D. (LDJ) (paper coming soon), with significant dataset formation contributions by J-Supha and general compute and experimentation management by Jeffrey Q. during ablations.

Special thank you to **A16Z** for sponsoring our training, as well as **Yield Protocol** for their support in financially sponsoring resources during the R&D of this project.

## Thank you to those of you that have indirectly contributed!

While most of the tokens within Capybara are newly synthesized and part of datasets like Puffin/Dove, we would like to credit the single-turn datasets we leveraged as seeds that are used to generate the multi-turn data as part of the Amplify-Instruct synthesis.

The datasets shown in green below are datasets that we sampled from to curate seeds that are used during Amplify-Instruct synthesis for this project.

Datasets in Blue are in-house curations that previously existed prior to Capybara.

![]()

## Prompt Format

The recommended model usage is:

```
USER:

ASSISTANT:
```

## Multi-Modality!

- We currently have a multi-modal model based on Capybara V1.9: https://huggingface.co/NousResearch/Obsidian-3B-V0.5. It is currently only available as a 3B-sized model, but larger versions are coming!

## Notable Features:

- Uses the Yi-34B model as the base, which is trained for 200K context length!
- Over 60% of the dataset is comprised of multi-turn conversations. (Most models are still only trained for single-turn conversations, with no back-and-forths!)
- Over 1,000 tokens average per conversation example! (Most models are trained on conversation data that is less than 300 tokens per example.)
- Able to effectively do complex summaries of advanced topics and studies. (Trained on hundreds of advanced, difficult summary tasks developed in-house.)
- Ability to recall information up to late 2022 without internet.
- Includes a portion of conversational data synthesized from LessWrong posts, discussing very in-depth details and philosophies about the nature of reality, reasoning, rationality, self-improvement and related concepts.

## Example Outputs from Capybara V1.9 7B version! (examples from 34B coming soon):

![]()

![]()

![]()

## Benchmarks! (Coming soon!)

## Future model sizes

Capybara V1.9 currently has 3B, 7B and 34B sizes, and we plan to eventually have a 13B and 70B version in the future, as well as a potential 1B version based on phi-1.5 or TinyLlama.

## How you can help!

In the near future we plan on leveraging the help of domain-specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from our training curations.

If you have at least a bachelor's in mathematics, physics, biology or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on Discord!

## Dataset contamination.

We have checked the Capybara dataset for contamination against several of the most popular benchmarks and can confirm that there is no contamination found.

We leveraged MinHash to check for 100%, 99%, 98% and 97% similarity matches between our data and the questions and answers in benchmarks; we found no exact matches, nor did we find any matches down to the 97% similarity level.
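
As an illustration of what such a check can look like (a sketch under assumptions, not the unpublished Nous pipeline), MinHash similarity between a training example and a benchmark item can be estimated with the `datasketch` library; the shingle size and permutation count here are illustrative choices:

```python
# Sketch of a MinHash similarity check, assuming the datasketch library.
# Shingle size (5 words) and num_perm (128) are illustrative, not the
# settings used for the Capybara contamination scan.
from datasketch import MinHash

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    words = text.lower().split()
    # Hash overlapping 5-word shingles of the text.
    for i in range(max(1, len(words) - 4)):
        m.update(" ".join(words[i:i + 5]).encode("utf-8"))
    return m

train_example = "What is 291 - 150? The answer is 141."
benchmark_item = "What is 291 minus 150?"

similarity = minhash(train_example).jaccard(minhash(benchmark_item))
# Flag anything at or above the 97% threshold described above.
if similarity >= 0.97:
    print("possible contamination:", similarity)
```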

The following are benchmarks we checked for contamination against our dataset:

- HumanEval
- AGIEval
- TruthfulQA
- MMLU
- GPT4All
35
config.json
Normal file
@@ -0,0 +1,35 @@
{
  "_name_or_path": "/workspace/process/nousresearch_nous-capybara-34b/source",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 7168,
  "initializer_range": 0.02,
  "intermediate_size": 20480,
  "max_position_embeddings": 200000,
  "model_type": "llama",
  "num_attention_heads": 56,
  "num_hidden_layers": 60,
  "num_key_value_heads": 8,
  "pad_token_id": 0,
  "pretraining_tp": 1,
  "quantization_config": {
    "bits": 4,
    "group_size": 128,
    "quant_method": "awq",
    "version": "gemm",
    "zero_point": true
  },
  "rms_norm_eps": 1e-05,
  "rope_scaling": null,
  "rope_theta": 5000000.0,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.35.0",
  "use_cache": true,
  "vocab_size": 64000
}
1
configuration.json
Normal file
@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-generation", "allow_remote": true}
7
generation_config.json
Normal file
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "transformers_version": "4.34.1"
}
3
model-00001-of-00002.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3a1e93df938dbbf1420c9172a329d2e54b2b6b38de0aa434ccc2a037364fa9f2
size 9963803400
3
model-00002-of-00002.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:da618bb376b53250aeb9435ed4411a01f9a2e11e26e70f6e6853d3d6c8dfb03b
size 9262091504
1390
model.safetensors.index.json
Normal file
File diff suppressed because it is too large
6
quant_config.json
Normal file
@@ -0,0 +1,6 @@
{
  "zero_point": true,
  "q_group_size": 128,
  "w_bit": 4,
  "version": "GEMM"
}
30
special_tokens_map.json
Normal file
@@ -0,0 +1,30 @@
{
  "bos_token": {
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
255
tokenization_yi.py
Normal file
@@ -0,0 +1,255 @@
import os
from shutil import copyfile
from typing import Any, Dict, List, Optional, Tuple

import sentencepiece as spm
from transformers.tokenization_utils import AddedToken, PreTrainedTokenizer
from transformers.utils import logging

logger = logging.get_logger(__name__)

VOCAB_FILES_NAMES = {"vocab_file": "tokenizer.model"}

PRETRAINED_VOCAB_FILES_MAP = {
    "vocab_file": {},
    "tokenizer_file": {},
}
PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {}


class YiTokenizer(PreTrainedTokenizer):
    """
    Construct a Yi tokenizer. Based on byte-level Byte-Pair-Encoding.

    Args:
        vocab_file (`str`):
            Path to the vocabulary file.
    """

    vocab_files_names = VOCAB_FILES_NAMES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
    model_input_names = ["input_ids", "attention_mask"]

    def __init__(
        self,
        vocab_file,
        unk_token="<unk>",
        bos_token="<|startoftext|>",
        eos_token="<|endoftext|>",
        pad_token="<unk>",
        sp_model_kwargs: Optional[Dict[str, Any]] = None,
        add_bos_token=True,
        add_eos_token=False,
        clean_up_tokenization_spaces=False,
        **kwargs,
    ):
        self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs
        bos_token = (
            AddedToken(bos_token, lstrip=False, rstrip=False)
            if isinstance(bos_token, str)
            else bos_token
        )
        eos_token = (
            AddedToken(eos_token, lstrip=False, rstrip=False)
            if isinstance(eos_token, str)
            else eos_token
        )
        unk_token = (
            AddedToken(unk_token, lstrip=False, rstrip=False)
            if isinstance(unk_token, str)
            else unk_token
        )
        pad_token = (
            AddedToken(pad_token, lstrip=False, rstrip=False)
            if isinstance(pad_token, str)
            else pad_token
        )
        self.vocab_file = vocab_file
        self.add_bos_token = add_bos_token
        self.add_eos_token = add_eos_token
        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(vocab_file)
        super().__init__(
            bos_token=bos_token,
            eos_token=eos_token,
            unk_token=unk_token,
            pad_token=pad_token,
            add_bos_token=add_bos_token,
            add_eos_token=add_eos_token,
            sp_model_kwargs=self.sp_model_kwargs,
            clean_up_tokenization_spaces=clean_up_tokenization_spaces,
            **kwargs,
        )

    def __getstate__(self):
        # Drop the unpicklable SentencePiece processor; it is rebuilt in __setstate__.
        state = self.__dict__.copy()
        state["sp_model"] = None
        return state

    def __setstate__(self, d):
        self.__dict__ = d
        self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs)
        self.sp_model.Load(self.vocab_file)

    @property
    def vocab_size(self):
        """Returns vocab size"""
        return self.sp_model.get_piece_size()

    def get_vocab(self):
        """Returns vocab as a dict"""
        vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
        vocab.update(self.added_tokens_encoder)
        return vocab

    def _tokenize(self, text):
        """Returns a tokenized string."""
        return self.sp_model.encode(text, out_type=str)

    def _convert_token_to_id(self, token):
        """Converts a token (str) to an id using the vocab."""
        return self.sp_model.piece_to_id(token)

    def _convert_id_to_token(self, index):
        """Converts an index (integer) to a token (str) using the vocab."""
        token = self.sp_model.IdToPiece(index)
        return token

    def convert_tokens_to_string(self, tokens):
        """Converts a sequence of tokens (string) into a single string."""
        current_sub_tokens = []
        out_string = ""
        prev_is_special = False
        for i, token in enumerate(tokens):
            # make sure that special tokens are not decoded using sentencepiece model
            if token in self.all_special_tokens:
                if not prev_is_special and i != 0:
                    out_string += " "
                out_string += self.sp_model.decode(current_sub_tokens) + token
                prev_is_special = True
                current_sub_tokens = []
            else:
                current_sub_tokens.append(token)
                prev_is_special = False
        out_string += self.sp_model.decode(current_sub_tokens)
        return out_string

    def save_vocabulary(
        self, save_directory, filename_prefix: Optional[str] = None
    ) -> Tuple[str]:
        """
        Save the vocabulary and special tokens file to a directory.

        Args:
            save_directory (`str`):
                The directory in which to save the vocabulary.

        Returns:
            `Tuple(str)`: Paths to the files saved.
        """
        if not os.path.isdir(save_directory):
            logger.error(f"Vocabulary path ({save_directory}) should be a directory")
            return
        out_vocab_file = os.path.join(
            save_directory,
            (filename_prefix + "-" if filename_prefix else "")
            + VOCAB_FILES_NAMES["vocab_file"],
        )

        if os.path.abspath(self.vocab_file) != os.path.abspath(
            out_vocab_file
        ) and os.path.isfile(self.vocab_file):
            copyfile(self.vocab_file, out_vocab_file)
        elif not os.path.isfile(self.vocab_file):
            # No vocab file on disk: serialize the in-memory model instead.
            with open(out_vocab_file, "wb") as fi:
                content_spiece_model = self.sp_model.serialized_model_proto()
                fi.write(content_spiece_model)

        return (out_vocab_file,)

    def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
        bos_token_id = [self.bos_token_id] if self.add_bos_token else []
        eos_token_id = [self.eos_token_id] if self.add_eos_token else []

        output = bos_token_id + token_ids_0 + eos_token_id

        if token_ids_1 is not None:
            output = output + bos_token_id + token_ids_1 + eos_token_id

        return output

    def get_special_tokens_mask(
        self,
        token_ids_0: List[int],
        token_ids_1: Optional[List[int]] = None,
        already_has_special_tokens: bool = False,
    ) -> List[int]:
        """
        Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
        special tokens using the tokenizer `prepare_for_model` method.

        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.
            already_has_special_tokens (`bool`, *optional*, defaults to `False`):
                Whether or not the token list is already formatted with special tokens for the model.

        Returns:
            `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
        """
        if already_has_special_tokens:
            return super().get_special_tokens_mask(
                token_ids_0=token_ids_0,
                token_ids_1=token_ids_1,
                already_has_special_tokens=True,
            )

        bos_token_id = [1] if self.add_bos_token else []
        eos_token_id = [1] if self.add_eos_token else []

        if token_ids_1 is None:
            return bos_token_id + ([0] * len(token_ids_0)) + eos_token_id
        return (
            bos_token_id
            + ([0] * len(token_ids_0))
            + eos_token_id
            + bos_token_id
            + ([0] * len(token_ids_1))
            + eos_token_id
        )

    def create_token_type_ids_from_sequences(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT
        sequence pair mask has the following format:

        ```
        0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
        | first sequence    | second sequence |
        ```

        if token_ids_1 is None, only returns the first portion of the mask (0s).

        Args:
            token_ids_0 (`List[int]`):
                List of ids.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.

        Returns:
            `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s).
        """
        bos_token_id = [self.bos_token_id] if self.add_bos_token else []
        eos_token_id = [self.eos_token_id] if self.add_eos_token else []

        output = [0] * len(bos_token_id + token_ids_0 + eos_token_id)

        if token_ids_1 is not None:
            output += [1] * len(bos_token_id + token_ids_1 + eos_token_id)

        return output
3
tokenizer.model
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:386c49cf943d71aa110361135338c50e38beeff0a66593480421f37b319e1a39
size 1033105
44
tokenizer_config.json
Normal file
@@ -0,0 +1,44 @@
{
  "add_bos_token": false,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<|startoftext|>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "auto_map": {
    "AutoTokenizer": [
      "tokenization_yi.YiTokenizer",
      null
    ]
  },
  "bos_token": "<|startoftext|>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "model_max_length": 200000,
  "pad_token": "<unk>",
  "sp_model_kwargs": {},
  "tokenizer_class": "YiTokenizer",
  "unk_token": "<unk>"
}