Initialize the project; model provided by the ModelHub XC community
Model: bigcode/tiny_starcoder_py · Source: Original Platform

.gitattributes (vendored) · 34 lines · Normal file
@@ -0,0 +1,34 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

README.md · 91 lines · Normal file
@@ -0,0 +1,91 @@
---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
metrics:
- code_eval
library_name: transformers
tags:
- code
model-index:
- name: Tiny-StarCoder-Py
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 7.84%
      verified: false
---

# TinyStarCoderPy

This is a 164M-parameter model with the same architecture as [StarCoder](https://huggingface.co/bigcode/starcoder) (8k context length, MQA & FIM). It was trained on the Python data from [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) for ~6 epochs, which amounts to 100B tokens.

## Use

### Intended use

The model was trained on GitHub code to assist with tasks such as [Assisted Generation](https://huggingface.co/blog/assisted-generation). For pure code completion, we advise using our 15B models [StarCoder](https://huggingface.co/bigcode/starcoder) or [StarCoderBase](https://huggingface.co/bigcode/starcoderbase).

### Generation

```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/tiny_starcoder_py"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
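
By default `generate` returns only a very short continuation. A sketch with explicit limits (the parameter values below are illustrative choices, not from the model card):

```python
# Illustrative settings (not from the model card): cap the completion length,
# use greedy decoding, and set pad_token_id to eos to avoid a warning.
outputs = model.generate(
    inputs,
    max_new_tokens=64,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0]))
```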

### Fill-in-the-middle

Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix parts of the input and output:

```python
input_text = "<fim_prefix>def print_one_two_three():\n    print('one')\n    <fim_suffix>\n    print('three')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
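
To reuse this pattern, the prompt can be assembled from an arbitrary prefix and suffix. The helper below is a hypothetical convenience wrapper (the function name and defaults are ours, not part of the model card); it assumes the `tokenizer`, `model`, and `device` from the Generation example:

```python
# Hypothetical helper (not from the model card): build a FIM prompt from a
# prefix and suffix, generate, and return only the predicted middle part.
def fim_generate(prefix: str, suffix: str, max_new_tokens: int = 64) -> str:
    prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
    inputs = tokenizer.encode(prompt, return_tensors="pt").to(device)
    outputs = model.generate(
        inputs,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    text = tokenizer.decode(outputs[0])
    # Everything after <fim_middle> is the model's infill; drop the eos marker.
    return text.split("<fim_middle>", 1)[-1].replace("<|endoftext|>", "")
```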

# Training

## Model

- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Pretraining steps:** 50k
- **Pretraining tokens:** 100 billion (see the sketch after this list)
- **Precision:** bfloat16
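
The step and token counts above pin down the average tokens per step and, assuming fully packed 8k-token sequences (an assumption; the card does not state this), an implied global batch size:

```python
# Back-of-the-envelope from the figures above; full-sequence packing is assumed.
tokens_total = 100e9   # pretraining tokens
steps = 50_000         # pretraining steps
seq_len = 8192         # context length, assumed fully packed
tokens_per_step = tokens_total / steps    # 2,000,000 tokens per step
print(int(tokens_per_step // seq_len))    # ~244 sequences per global batch
```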

## Hardware

- **GPUs:** 32 Tesla A100
- **Training time:** 18 hours

## Software

- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex)

# License

The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).

config.json · 39 lines · Normal file
@@ -0,0 +1,39 @@
{
  "_name_or_path": "/fsx/bigcode/tinystarcoder/saves/large-model",
  "activation_function": "gelu_pytorch_tanh",
  "architectures": [
    "GPTBigCodeForCausalLM"
  ],
  "attention_softmax_in_fp32": true,
  "multi_query": true,
  "attn_pdrop": 0.1,
  "bos_token_id": 0,
  "embd_pdrop": 0.1,
  "eos_token_id": 0,
  "inference_runner": 0,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "max_batch_size": null,
  "max_sequence_length": null,
  "model_type": "gpt_bigcode",
  "n_embd": 768,
  "n_head": 12,
  "n_inner": 3072,
  "n_layer": 20,
  "n_positions": 8192,
  "pad_key_length": true,
  "pre_allocate_kv_cache": false,
  "resid_pdrop": 0.1,
  "scale_attention_softmax_in_fp32": true,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "torch_dtype": "float32",
  "transformers_version": "4.28.1",
  "use_cache": true,
  "validate_runner_input": true,
  "vocab_size": 49152
}
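
These hyperparameters can be used to sanity-check the "164M parameters" figure in the README. A back-of-the-envelope count (a rough sketch that omits bias and LayerNorm terms, which are comparatively tiny):

```python
# Rough GPT-BigCode (GPT-2 + multi-query attention) parameter count from
# config.json; bias and LayerNorm terms are omitted for simplicity.
n_embd, n_head, n_inner, n_layer = 768, 12, 3072, 20
n_positions, vocab_size = 8192, 49152
head_dim = n_embd // n_head  # 64

embed = vocab_size * n_embd + n_positions * n_embd  # token + position embeddings
attn = (
    n_embd * n_embd            # Q projection
    + n_embd * (2 * head_dim)  # single shared K/V head (multi-query attention)
    + n_embd * n_embd          # output projection
)
mlp = 2 * n_embd * n_inner     # up + down projections
total = embed + n_layer * (attn + mlp)
print(f"{total / 1e6:.0f}M")   # ~164M, matching the README
# total * 4 bytes/param is ~656 MB, in line with the checkpoint sizes below.
```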

generation_config.json · 6 lines · Normal file
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "eos_token_id": 0,
  "transformers_version": "4.28.1"
}

merges.txt · 48892 lines · Normal file
File diff suppressed because it is too large

model.safetensors · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:15fa942f055b618d5ca6283f5c27278a475ff12e53dc704b9658ffd5160d4021
size 656601304

pytorch_model.bin · 3 lines · Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55e857a0305b1975b3db160253c8ad9604c6cea2f176cc8e41f82b70025cb884
size 656652573

special_tokens_map.json · 26 lines · Normal file
@@ -0,0 +1,26 @@
{
  "additional_special_tokens": [
    "<|endoftext|>",
    "<fim_prefix>",
    "<fim_middle>",
    "<fim_suffix>",
    "<fim_pad>",
    "<filename>",
    "<gh_stars>",
    "<issue_start>",
    "<issue_comment>",
    "<issue_closed>",
    "<jupyter_start>",
    "<jupyter_text>",
    "<jupyter_code>",
    "<jupyter_output>",
    "<empty_output>",
    "<commit_before>",
    "<commit_msg>",
    "<commit_after>",
    "<reponame>"
  ],
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "unk_token": "<|endoftext|>"
}
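
A quick way to confirm these control tokens are registered in the vocabulary, reusing the `tokenizer` from the README's Generation example (a small sketch; the printed ids are not asserted here):

```python
# Sketch: each declared special token should map to a real vocabulary id
# rather than the unknown-token id.
for tok in ["<fim_prefix>", "<fim_middle>", "<fim_suffix>", "<fim_pad>"]:
    print(tok, tokenizer.convert_tokens_to_ids(tok))
```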

tokenizer.json · 98256 lines · Normal file
File diff suppressed because it is too large

tokenizer_config.json · 30 lines · Normal file
@@ -0,0 +1,30 @@
{
  "add_prefix_space": false,
  "additional_special_tokens": [
    "<|endoftext|>",
    "<fim_prefix>",
    "<fim_middle>",
    "<fim_suffix>",
    "<fim_pad>",
    "<filename>",
    "<gh_stars>",
    "<issue_start>",
    "<issue_comment>",
    "<issue_closed>",
    "<jupyter_start>",
    "<jupyter_text>",
    "<jupyter_code>",
    "<jupyter_output>",
    "<empty_output>",
    "<commit_before>",
    "<commit_msg>",
    "<commit_after>",
    "<reponame>"
  ],
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "model_max_length": 1000000000000000019884624838656,
  "tokenizer_class": "GPT2Tokenizer",
  "unk_token": "<|endoftext|>",
  "vocab_size": 49152
}

vocab.json · 1 line · Normal file
File diff suppressed because one or more lines are too long