---
inference: false
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---

# Model Card for Codestral-22B-v0.1

Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:

- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code)

> [!WARNING]
> 🚫
> The `transformers` tokenizer is not properly configured. Make sure that your encoding and decoding are correct by using `mistral-common` as shown below:

## Encode and Decode with `mistral_common`

```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

mistral_models_path = "MISTRAL_MODELS_PATH"

tokenizer = MistralTokenizer.v3()

completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens
```
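The warning above boils down to a round-trip property: decoding the encoded prompt must reproduce it exactly, token for token. A toy illustration of that property with a stand-in byte-level codec (not the real tokenizer):

```py
# Toy "tokenizer" illustrating the encode/decode round-trip that the
# warning above asks you to preserve. Stand-in only, not the real vocab.
def encode(text: str) -> list[int]:
    return list(text.encode("utf-8"))

def decode(tokens: list[int]) -> str:
    return bytes(tokens).decode("utf-8")

prompt = "Explain Machine Learning to me in a nutshell."
assert decode(encode(prompt)) == prompt
```

If a tokenizer breaks this property, generations can look plausible while silently drifting from what the model was trained on.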

## Inference with `mistral_inference`

```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate

model = Transformer.from_folder(mistral_models_path)
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)

result = tokenizer.decode(out_tokens[0])

print(result)
```
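Setting `temperature=0.0` above makes decoding greedy: at every step the single highest-scoring next token is taken, so the output is deterministic. A toy sketch of that selection rule (the scores dict is a stand-in for model logits, not real values):

```py
# Toy illustration of greedy decoding (temperature=0.0): each step just
# takes the argmax over next-token scores. Scores are made up.
def greedy_pick(scores: dict[str, float]) -> str:
    return max(scores, key=scores.get)

step_scores = {"cat": 0.1, "dog": 2.3, "fish": -0.5}
print(greedy_pick(step_scores))  # dog
```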

## Inference with Hugging Face `transformers`

```py
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Codestral-22B-v0.1")
model.to("cuda")

# `tokens` is the plain Python list produced by mistral-common above;
# generate() expects a batched tensor on the model's device
input_ids = torch.tensor([tokens]).to("cuda")
generated_ids = model.generate(input_ids, max_new_tokens=1000, do_sample=True)

# decode with the mistral-common tokenizer, not the transformers one
result = tokenizer.decode(generated_ids[0].tolist())
print(result)
```
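Note that `generate` on a decoder-only model returns the prompt tokens followed by the newly generated ones, so `result` above includes the prompt text. If you only want the completion, slice the prompt off before decoding. A sketch with stand-in token IDs:

```py
# generate() output = prompt ids + newly generated ids.
# Slicing off len(prompt_ids) keeps only the completion.
prompt_ids = [1, 734, 1058]          # stand-in ids, not real vocab
generated = prompt_ids + [42, 7, 2]  # what generate() conceptually returns
completion = generated[len(prompt_ids):]
print(completion)  # [42, 7, 2]
```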

> [!TIP]
> PRs to correct the `transformers` tokenizer so that it gives 1-to-1 the same results as the `mistral_common` reference implementation are very welcome!
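As a rough sketch of the Fill in the Middle (FIM) mode mentioned earlier: the model is given a suffix and a prefix and predicts the tokens in between. The `[SUFFIX]`/`[PREFIX]` markers below are illustrative placeholders only; the real special tokens are inserted by `mistral_common`, not by string concatenation:

```py
# Toy sketch of how a FIM prompt is assembled: suffix first, then prefix,
# so the model completes the gap right after the prefix. The bracketed
# markers are placeholders, not the model's actual special tokens.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    return f"[SUFFIX]{suffix}[PREFIX]{prefix}"

prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(2, 3))")
print(prompt.endswith("return "))  # True
```

This is why FIM works so well for editor add-ons: the code before and after the cursor become the prefix and suffix, and the model fills the cursor position.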