Initialize the project; model provided by the ModelHub XC community
Model: afrideva/evolvedSeeker_1_3-GGUF (Source: Original Platform)
---
base_model: TokenBender/evolvedSeeker_1_3
inference: false
model-index:
- name: evolvedSeeker-1_3_v_0_0_1
  results: []
model_creator: TokenBender
model_name: evolvedSeeker_1_3
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- generated_from_trainer
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# TokenBender/evolvedSeeker_1_3-GGUF

Quantized GGUF model files for [evolvedSeeker_1_3](https://huggingface.co/TokenBender/evolvedSeeker_1_3) from [TokenBender](https://huggingface.co/TokenBender).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [evolvedseeker_1_3.fp16.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.fp16.gguf) | fp16 | 2.69 GB |
| [evolvedseeker_1_3.q2_k.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q2_k.gguf) | q2_k | 631.71 MB |
| [evolvedseeker_1_3.q3_k_m.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q3_k_m.gguf) | q3_k_m | 704.97 MB |
| [evolvedseeker_1_3.q4_k_m.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q4_k_m.gguf) | q4_k_m | 873.58 MB |
| [evolvedseeker_1_3.q5_k_m.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q5_k_m.gguf) | q5_k_m | 1.00 GB |
| [evolvedseeker_1_3.q6_k.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q6_k.gguf) | q6_k | 1.17 GB |
| [evolvedseeker_1_3.q8_0.gguf](https://huggingface.co/afrideva/evolvedSeeker_1_3-GGUF/resolve/main/evolvedseeker_1_3.q8_0.gguf) | q8_0 | 1.43 GB |
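As a minimal usage sketch (not part of the original card: it assumes `llama-cpp-python` is installed via `pip install llama-cpp-python` and that one of the files above has been downloaded locally), a quantized file can be loaded like this:

```python
# Minimal sketch: run a downloaded GGUF file with llama-cpp-python.
# Assumptions: `pip install llama-cpp-python`; the q4_k_m file from the
# table above sits in the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="evolvedseeker_1_3.q4_k_m.gguf",  # any file from the table works
    n_ctx=4096,  # context window size
)

# create_chat_completion applies the chat template stored in the GGUF metadata
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```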
## Original Model Card:
# evolvedSeeker-1_3

EvolvedSeeker v0.0.1 (first phase)

This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base), trained on 50k instructions for 3 epochs.

I have mostly curated the instructions from evolInstruct datasets, along with some portions of glaive coder.

Around 3k answers were modified via self-instruct.

Collaborate or consult me - [Twitter](https://twitter.com/4evaBehindSOTA), [Discord](https://discord.gg/ftEM63pzs2)

*The recommended prompt format is ChatML; Alpaca will work, but take care with the EOT token.*
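For reference, a sketch of the standard ChatML layout (an assumption based on the note above; the authoritative format comes from the model's own chat template, so `tokenizer.apply_chat_template` in the snippet below is the safer path):

```python
# Assumed ChatML layout with the usual <|im_start|>/<|im_end|> markers;
# prefer tokenizer.apply_chat_template, as in the inference snippet below.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful coding assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Reverse the letters in each word of a sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```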
#### Chat Model Inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the full-precision chat model (the GGUF files above are for llama.cpp-style runtimes)
tokenizer = AutoTokenizer.from_pretrained("TokenBender/evolvedSeeker_1_3", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("TokenBender/evolvedSeeker_1_3", trust_remote_code=True).cuda()

messages = [
    {"role": "user", "content": "write a program to reverse letters in each word in a sentence without reversing order of words in the sentence."}
]

# apply_chat_template formats the conversation using the model's chat template
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# 32021 is the id of the <|EOT|> token; with do_sample=False decoding is greedy,
# so top_k/top_p have no effect here
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95,
                         num_return_sequences=1, eos_token_id=32021)

# Decode only the newly generated tokens (everything after the prompt)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
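For reference, a minimal example of the kind of answer the prompt above asks for:

```python
def reverse_letters(sentence: str) -> str:
    # Reverse the letters of each word while keeping word order intact
    return " ".join(word[::-1] for word in sentence.split())

print(reverse_letters("hello world"))  # -> "olleh dlrow"
```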
## Model description

First model of Project PIC (Partner-in-Crime) in the 1.3B range.
Almost all of the work on this model is still pending, hence v0.0.1.


## Intended uses & limitations

- Superfast copilot
- Near-lossless quantized inference in about 1 GB of RAM
- Code dataset curation and evaluation

Limitations - this is a smol model, so smol brain; it may have crammed a few things.
Reasoning tests may fail beyond a certain point.
## Training procedure

SFT (supervised fine-tuning)

### Training results

HumanEval score - 68.29%



### Framework versions

- Transformers 4.35.2
- Pytorch 2.0.1
- Datasets 2.15.0
- Tokenizers 0.15.0