---
base_model: stabilityai/stable-code-3b
inference: false
language:
- en
license: other
license_link: https://huggingface.co/stabilityai/stable-code-3b/blob/main/LICENSE
model_creator: stabilityai
model_name: stable-code-3b
pipeline_tag: text-generation
datasets:
- tiiuae/falcon-refinedweb
- bigcode/the-stack-github-issues
- bigcode/commitpackft
- bigcode/starcoderdata
- EleutherAI/proof-pile-2
- meta-math/MetaMathQA
tags:
- causal-lm
- code
quantized_by: brittlewis12
---

# stable-code-3b GGUF

Original model: [stable-code-3b](https://huggingface.co/stabilityai/stable-code-3b)
Model creator: [StabilityAI](https://huggingface.co/stabilityai)

This repo contains GGUF format model files for StabilityAI’s stable-code-3b with 16k context.

> stable-code-3b is a 2.7 billion parameter decoder-only language model pre-trained on 1.3 trillion tokens of diverse textual and code datasets. stable-code-3b is trained on 18 programming languages (selected based on the 2023 StackOverflow Developer Survey) and demonstrates state-of-the-art performance (compared to models of similar size) on the MultiPL-E metrics across multiple programming languages, tested using BigCode's Evaluation Harness.

### What is GGUF?

GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Converted using llama.cpp build 1897 (revision [2b3a665](https://github.com/ggerganov/llama.cpp/commit/2b3a665))

## Prompt template: Completion or Fill-in-Middle

### Completion

```
{{prompt}}
```

### Fill-in-Middle (FIM)

```
<fim_prefix>{{prefix code}}<fim_suffix>{{suffix code}}<fim_middle>
```

Example prompt with special prefix, suffix, and middle tokens in context:

```
<fim_prefix>def fib(n):
<fim_suffix>
    else:
        return fib(n - 2) + fib(n - 1)
<fim_middle>
```
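The FIM template above can also be assembled programmatically before being sent to the model. A minimal sketch in Python (the helper name and example fragments are illustrative, not part of this repo):

```python
# Minimal sketch of assembling a stable-code-3b fill-in-middle prompt.
# The special tokens come from the FIM template above; the helper name
# `build_fim_prompt` is ours, not an API from this repo or llama.cpp.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Wrap the code before and after the gap in FIM special tokens.

    The model generates the missing middle after <fim_middle>.
    """
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Ask the model to fill in the body between the function header and the
# recursive else-branch (same example as above).
prompt = build_fim_prompt(
    "def fib(n):\n",
    "    else:\n        return fib(n - 2) + fib(n - 1)\n",
)
print(prompt)
```

The resulting string would then be passed as the raw prompt to a GGUF runtime such as llama.cpp; the exact stop tokens to use depend on your runtime configuration.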

---

## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac!

![cnvrs](https://pbs.twimg.com/media/GGfvNbLXYAAkDFd?format=png&name=small)

[cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device:

- create & save **Characters** with custom system prompts & temperature settings
- download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)!
- make it your own with custom **Theme colors**
- powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming!
- **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)!
- follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date

---
# Original Model Evaluation

Results on MultiPL-E, as reported on the original model card:

| Model            | Size | Python | C++   | Javascript | Java  | PHP   | Rust  |
|------------------|------|--------|-------|------------|-------|-------|-------|
| **Stable Code**  | 3B   | 32.4%  | 30.9% | 32.1%      | 32.1% | 24.2% | 23.0% |
| CodeLLama        | 7B   | 30.0%  | 28.2% | 32.5%      | 31.1% | 25.7% | 26.3% |
| Deepseek Coder   | 1.3B | 28.6%  | 29.2% | 28.7%      | 29.0% | 23.6% | 18.5% |
| Wizard Coder     | 3B   | 31.6%  | 25.6% | 26.2%      | 25.8% | 25.3% | 20.4% |
| StarCoder        | 3B   | 21.6%  | 19.8% | 21.5%      | 20.5% | 19.0% | 16.9% |
| Replit Code V1.5 | 3B   | 23.0%  | 25.9% | 26.2%      | 23.6% | 23.2% | 21.5% |
| Deci Coder       | 1B   | 19.1%  | 6.8%  | 18.4%      | 16.7% | 2.1%  | 1.7%  |