---
license: apache-2.0
datasets:
  - JetBrains/KExercises
base_model: deepseek-ai/deepseek-coder-6.7b-base
results:
  - task:
      type: text-generation
    dataset:
      name: MultiPL-HumanEval (Kotlin)
      type: openai_humaneval
    metrics:
      - name: pass@1
        type: pass@1
        value: 55.28
tags:
  - code
---

# Kexer models

Kexer models are a collection of open-source generative text models fine-tuned on the [Kotlin Exercises](https://huggingface.co/datasets/JetBrains/KExercises) dataset.
This is a repository for the fine-tuned **Deepseek-coder-6.7b** model in the *Hugging Face Transformers* format.

# How to use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load pre-trained model and tokenizer
model_name = 'JetBrains/deepseek-coder-6.7B-kexer'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to('cuda')

# Create and encode input
input_text = """\
This function takes an integer n and returns factorial of a number:
fun factorial(n: Int): Int {\
"""
input_ids = tokenizer.encode(
    input_text, return_tensors='pt'
).to('cuda')

# Generate
output = model.generate(
    input_ids, max_length=60, num_return_sequences=1,
    early_stopping=True, pad_token_id=tokenizer.eos_token_id,
)

# Decode output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```

As with the base model, we can use fill-in-the-middle (FIM) completion. To do this, the prompt must follow this format:
```
'<|fim▁begin|>' + prefix + '<|fim▁hole|>' + suffix + '<|fim▁end|>'
```
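
For example, here is a minimal FIM sketch that reuses the `tokenizer` and `model` loaded above; the Kotlin prefix and suffix are illustrative, not taken from the original evaluation:

```python
# A minimal FIM sketch, reusing the `tokenizer` and `model` loaded above.
# The Kotlin prefix/suffix below are illustrative examples.
prefix = "fun sum(numbers: List<Int>): Int {\n"
suffix = "\n}"
fim_prompt = '<|fim▁begin|>' + prefix + '<|fim▁hole|>' + suffix + '<|fim▁end|>'

fim_ids = tokenizer.encode(fim_prompt, return_tensors='pt').to('cuda')
fim_output = model.generate(
    fim_ids, max_new_tokens=40, pad_token_id=tokenizer.eos_token_id,
)
# Strip the prompt tokens so only the infilled middle is printed
print(tokenizer.decode(fim_output[0][fim_ids.shape[1]:], skip_special_tokens=True))
```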

# Training setup

The model was trained on one A100 GPU with the following hyperparameters:

| **Hyperparameter** | **Value** |
|:---------------------------:|:----------------------------------------:|
| `warmup` | 10% |
| `max_lr` | 1e-4 |
| `scheduler` | linear |
| `total_batch_size` | 256 (~130K tokens per step) |
| `num_epochs` | 4 |
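
As a rough illustration, the table maps onto Hugging Face `TrainingArguments` as in the sketch below. This is not the actual training script; the per-device/accumulation split, `bf16`, and output path are assumptions, and only the totals come from the table:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above as Hugging Face TrainingArguments.
# The batch-size split and output_dir are assumptions; only the totals
# (256 sequences/step, lr 1e-4, linear schedule, 10% warmup, 4 epochs)
# come from the table.
args = TrainingArguments(
    output_dir='deepseek-coder-6.7B-kexer',  # hypothetical path
    learning_rate=1e-4,                      # max_lr
    lr_scheduler_type='linear',              # scheduler
    warmup_ratio=0.1,                        # warmup = 10%
    num_train_epochs=4,                      # num_epochs
    per_device_train_batch_size=8,           # assumed split ...
    gradient_accumulation_steps=32,          # ... 8 * 32 = 256 total
    bf16=True,                               # common choice on A100
)
```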

More details about fine-tuning can be found in the technical report (coming soon!).

# Fine-tuning data

For tuning this model, we used 15K examples from the synthetically generated [Kotlin Exercises](https://huggingface.co/datasets/JetBrains/KExercises) dataset. Every example follows the HumanEval format. In total, the dataset contains about 3.5M tokens.
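
As a quick back-of-the-envelope check derived from these figures (our arithmetic, not numbers from the report):

```python
# Rough scale estimates derived from the figures above; these are
# back-of-the-envelope numbers, not official statistics.
total_tokens = 3.5e6        # dataset size from the card
num_examples = 15_000       # number of examples from the card
tokens_per_step = 130_000   # ~tokens per optimizer step from the card

print(f"avg tokens/example: {total_tokens / num_examples:.0f}")     # ~233
print(f"steps per epoch:    {total_tokens / tokens_per_step:.0f}")  # ~27
print(f"total steps (4 ep): {4 * total_tokens / tokens_per_step:.0f}")  # ~108
```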

# Evaluation

For evaluation, we used the [Kotlin HumanEval](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval) dataset, which contains all 161 tasks from HumanEval translated into Kotlin by human experts. You can find more details about the pre-processing necessary to obtain our results, including the code for running the evaluation, on the [dataset's page](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval).
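
As background on the metric (standard HumanEval methodology, not code from this repository), pass@k is typically computed with the unbiased estimator of Chen et al. (2021); a minimal sketch:

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021).

    n: total samples generated per task; c: samples that passed the tests.
    """
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# With one sample per task (n=1, k=1), pass@1 reduces to the fraction
# of tasks whose single completion passes all tests.
print(pass_at_k(n=1, c=1, k=1))  # 1.0 for a solved task
```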

Here are the results of our evaluation:

| **Model name** | **Kotlin HumanEval Pass Rate** |
|:---------------------------:|:----------------------------------------:|
| `Deepseek-coder-6.7B` | 40.99 |
| `Deepseek-coder-6.7B-kexer` | **55.28** |

# Ethical considerations and limitations

Deepseek-coder-6.7B-kexer is a new technology that carries risks with use. The testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Deepseek-coder-6.7B-kexer cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. The model was fine-tuned on a specific data format (Kotlin tasks), and deviation from this format can also lead to inaccurate or undesirable responses to user queries. Therefore, before deploying any applications of Deepseek-coder-6.7B-kexer, developers should perform safety testing and tuning tailored to their specific applications of the model.