Initialize the project; model provided by the ModelHub XC community

Model: corre-social/Drummond-1b1-Instruct
Source: Original Platform
This commit is contained in:
ModelHub XC
2026-05-04 10:25:47 +08:00
commit 9df6af561f
28 changed files with 486354 additions and 0 deletions

35
.gitattributes vendored Normal file
View File

@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text

111
README.md Normal file
View File

@@ -0,0 +1,111 @@
---
library_name: transformers
base_model: TucanoBR/Tucano-1b1-Instruct
tags:
- trl
- sft
- portuguese
- pt-br
- reasoning
- chain-of-thought
license: apache-2.0
language:
- pt
datasets:
- corre-social/s1_dataset_ptbr_1k_tokenized
---
# Drummond-1b1-Instruct

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers_logo_name.png" alt="Transformers" width="200"/>

## Model Summary

**Drummond-1b1-Instruct** is a language model focused on instruction following and reasoning in Portuguese (PT-BR). It is a fine-tune of [Tucano-1b1-Instruct](https://huggingface.co/TucanoBR/Tucano-1b1-Instruct), trained specifically to generate a chain of thought (a "thinking process") before giving its final answer.

The model inherits the Tucano architecture and is optimized for tasks that demand structured reasoning at low computational cost.

- **Developed by:** Corre Social
- **Base model:** TucanoBR/Tucano-1b1-Instruct
- **Language:** Portuguese (PT-BR)
- **Context size:** 2048 tokens
- **License:** Apache 2.0 (check the base model's license)

## Training Details

The model was trained with modern Supervised Fine-Tuning (SFT) techniques focused on efficiency and instruction quality.

### Technologies Used

Training used the Hugging Face ecosystem and PyTorch:

* **Training library:** [TRL (Transformer Reinforcement Learning)](https://github.com/huggingface/trl) version 0.12.0.
* **Memory optimization:** `bitsandbytes` for 8-bit optimizers.
* **Monitoring:** Weights & Biases (WandB).
* **Hardware:** Trained on a GPU with `bfloat16` support.

### Training Techniques

1. **Supervised Fine-Tuning (SFT):** The model was tuned on an instruction dataset to align its response behavior.
2. **Completion-only loss:** We use `DataCollatorForCompletionOnlyLM`. This technique is crucial: the model does **not** learn to predict the user's instruction, only the response and the reasoning. This keeps the model from "hallucinating" instructions and focuses the loss exclusively on useful generation (see the sketch after this list).
   * *Instruction template:* `<instruction>`
   * *Response template:* `<|im_start|>think`
3. **Special tokens & ChatML:** Special tokens (`<|im_start|>`, `<|im_end|>`) and the thought-trigger token `think` were added to structure the Chain-of-Thought format.
4. **Precision optimization:**
   * **BF16 (BFloat16)** for numerical stability during training.
   * The **AdamW 8-bit** optimizer to reduce VRAM consumption.
   * **Gradient checkpointing** enabled, allowing larger batch sizes or larger models on limited GPUs.
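
A minimal sketch of that collator setup under TRL 0.12; the base-tokenizer id and the token additions are assumptions reconstructed from this card, not the authors' actual code:

```python
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

# Assumed starting point: the base model's tokenizer, extended with the
# ChatML-style markers described above.
tokenizer = AutoTokenizer.from_pretrained("TucanoBR/Tucano-1b1-Instruct")
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|im_start|>", "<|im_end|>"]}
)

# Tokens up to and including the response template are masked out of the
# loss, so only the reasoning and the final answer are learned.
collator = DataCollatorForCompletionOnlyLM(
    instruction_template="<instruction>",
    response_template="<|im_start|>think",
    tokenizer=tokenizer,
)
```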
### Hyperparameters

| Parameter | Value |
| :--- | :--- |
| **Epochs** | 5 |
| **Learning rate** | 1e-5 |
| **Batch size (effective)** | 4 (1 per device × 4 accumulation steps) |
| **Context window** | 2048 tokens |
| **Optimizer** | adamw_8bit |
| **Precision** | bf16 |
| **LR scheduler** | Linear (10 warmup steps) |
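
Reconstructed as a TRL 0.12 `SFTConfig`, the table maps to roughly the following sketch; the `output_dir` is a hypothetical placeholder, and every other value is copied from the table rather than from the authors' actual script:

```python
from trl import SFTConfig

config = SFTConfig(
    output_dir="drummond-1b1-instruct-sft",  # hypothetical path
    num_train_epochs=5,
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,  # effective batch size of 4
    max_seq_length=2048,
    optim="adamw_8bit",             # bitsandbytes 8-bit AdamW
    bf16=True,
    lr_scheduler_type="linear",
    warmup_steps=10,
    gradient_checkpointing=True,
)
```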
### Training Data

* **Dataset:** `corre-social/s1_dataset_ptbr_1k_tokenized`
* **Size:** ~1,000 high-quality examples.
* **Focus:** The dataset contains structured examples that push the model to "think" (`think`) before answering.

## How to Use

To use the model, apply the prompt format below so that reasoning mode is triggered:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "corre-social/Drummond-1b1-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The exact template used during training
prompt = """<instruction>Explique como funciona a gravidade de forma simples.</instruction>
<|im_start|>think"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id,
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
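
The repo also ships a `chat_template.jinja` (shown further down) that wraps user turns in `<instruction>` tags, so the prompt can equivalently be built with `apply_chat_template`. Continuing the snippet above; appending the `<|im_start|>think` trigger manually is an assumption, since the template itself stops at the closing instruction tag:

```python
messages = [
    {"role": "user", "content": "Explique como funciona a gravidade de forma simples."}
]
# The chat template renders: <instruction>...</instruction>
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
prompt += "<|im_start|>think"  # thought trigger, appended by hand
```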

1
chat_template.jinja Normal file
View File

@@ -0,0 +1 @@
{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '<instruction>' + message['content'].strip() + '</instruction>'}}{% elif message['role'] == 'assistant' %}{{ message['content'].strip() + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}

1
checkpoint-1200/chat_template.jinja Normal file
View File

@@ -0,0 +1 @@
{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '<instruction>' + message['content'].strip() + '</instruction>'}}{% elif message['role'] == 'assistant' %}{{ message['content'].strip() + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}

30
checkpoint-1200/config.json Normal file
View File

@@ -0,0 +1,30 @@
{
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 1,
"dtype": "float32",
"eos_token_id": 2,
"head_dim": 64,
"hidden_act": "silu",
"hidden_size": 2048,
"initializer_range": 0.02,
"intermediate_size": 5632,
"max_position_embeddings": 2048,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 22,
"num_key_value_heads": 4,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"transformers_version": "4.57.1",
"use_cache": false,
"vocab_size": 32004
}

13
checkpoint-1200/generation_config.json Normal file
View File

@@ -0,0 +1,13 @@
{
"bos_token_id": 1,
"do_sample": true,
"eos_token_id": [
2
],
"max_new_tokens": 1024,
"pad_token_id": 0,
"renormalize_logits": true,
"repetition_penalty": 1.2,
"temperature": 0.1,
"transformers_version": "4.57.1"
}

3
checkpoint-1200/model.safetensors Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7067c56fa01a8667547ba397d77d88c61c5bb864a67c9bceea3b865307698346
size 4400282072

40
checkpoint-1200/special_tokens_map.json Normal file
View File

@@ -0,0 +1,40 @@
{
"additional_special_tokens": [
{
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
{
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
],
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</stop>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

156149
checkpoint-1200/tokenizer.json Normal file

File diff suppressed because it is too large Load Diff

91
checkpoint-1200/tokenizer_config.json Normal file
View File

@@ -0,0 +1,91 @@
{
"add_bos_token": false,
"add_eos_token": false,
"add_prefix_space": null,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"3": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"32000": {
"content": "<instruction>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"32001": {
"content": "</instruction>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"32002": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"32003": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>"
],
"bos_token": "<s>",
"bos_token_id": 1,
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"eos_token_id": 2,
"extra_special_tokens": {},
"legacy": false,
"model_max_length": 2048,
"pad_token": "</stop>",
"pad_token_id": 0,
"padding_side": "right",
"sp_model_kwargs": {},
"tokenizer_class": "LlamaTokenizerFast",
"unk_token": "<unk>",
"unk_token_id": 0,
"use_default_system_prompt": false
}

checkpoint-1200/trainer_state.json Normal file

File diff suppressed because it is too large Load Diff

3
checkpoint-1200/training_args.bin Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a25e6e63e554236463541ca8e1a87175eaa07d06a0381977b13d4275a55cf89
size 6097

1
checkpoint-1250/chat_template.jinja Normal file
View File

@@ -0,0 +1 @@
{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '<instruction>' + message['content'].strip() + '</instruction>'}}{% elif message['role'] == 'assistant' %}{{ message['content'].strip() + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}

30
checkpoint-1250/config.json Normal file
View File

@@ -0,0 +1,30 @@
{
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 1,
"dtype": "float32",
"eos_token_id": 2,
"head_dim": 64,
"hidden_act": "silu",
"hidden_size": 2048,
"initializer_range": 0.02,
"intermediate_size": 5632,
"max_position_embeddings": 2048,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 22,
"num_key_value_heads": 4,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"transformers_version": "4.57.1",
"use_cache": false,
"vocab_size": 32004
}

13
checkpoint-1250/generation_config.json Normal file
View File

@@ -0,0 +1,13 @@
{
"bos_token_id": 1,
"do_sample": true,
"eos_token_id": [
2
],
"max_new_tokens": 1024,
"pad_token_id": 0,
"renormalize_logits": true,
"repetition_penalty": 1.2,
"temperature": 0.1,
"transformers_version": "4.57.1"
}

3
checkpoint-1250/model.safetensors Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:097568f37db5d3514ddf9d1d45b5d18a351fd56a746bbe3d0940eb5e9490064c
size 4400282072

40
checkpoint-1250/special_tokens_map.json Normal file
View File

@@ -0,0 +1,40 @@
{
"additional_special_tokens": [
{
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
{
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
],
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</stop>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

156149
checkpoint-1250/tokenizer.json Normal file

File diff suppressed because it is too large Load Diff

91
checkpoint-1250/tokenizer_config.json Normal file
View File

@@ -0,0 +1,91 @@
{
"add_bos_token": false,
"add_eos_token": false,
"add_prefix_space": null,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"3": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"32000": {
"content": "<instruction>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"32001": {
"content": "</instruction>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"32002": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"32003": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>"
],
"bos_token": "<s>",
"bos_token_id": 1,
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"eos_token_id": 2,
"extra_special_tokens": {},
"legacy": false,
"model_max_length": 2048,
"pad_token": "</stop>",
"pad_token_id": 0,
"padding_side": "right",
"sp_model_kwargs": {},
"tokenizer_class": "LlamaTokenizerFast",
"unk_token": "<unk>",
"unk_token_id": 0,
"use_default_system_prompt": false
}

checkpoint-1250/trainer_state.json Normal file

File diff suppressed because it is too large Load Diff

3
checkpoint-1250/training_args.bin Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a25e6e63e554236463541ca8e1a87175eaa07d06a0381977b13d4275a55cf89
size 6097

30
config.json Normal file
View File

@@ -0,0 +1,30 @@
{
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 1,
"dtype": "float32",
"eos_token_id": 2,
"head_dim": 64,
"hidden_act": "silu",
"hidden_size": 2048,
"initializer_range": 0.02,
"intermediate_size": 5632,
"max_position_embeddings": 2048,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 22,
"num_key_value_heads": 4,
"pad_token_id": 0,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 10000.0,
"tie_word_embeddings": false,
"transformers_version": "4.57.1",
"use_cache": false,
"vocab_size": 32004
}
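
The config describes a Llama-style network with grouped-query attention: 32 query heads share 4 key/value heads, each of dimension 64. A quick sanity check of that geometry, assuming only the values above and access to the repo id from the README:

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("corre-social/Drummond-1b1-Instruct")

# 2048 = 32 heads * 64 head_dim
assert cfg.hidden_size == cfg.num_attention_heads * cfg.head_dim
# Grouped-query attention: 8 query heads per key/value head
assert cfg.num_attention_heads // cfg.num_key_value_heads == 8
```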

13
generation_config.json Normal file
View File

@@ -0,0 +1,13 @@
{
"bos_token_id": 1,
"do_sample": true,
"eos_token_id": [
2
],
"max_new_tokens": 1024,
"pad_token_id": 0,
"renormalize_logits": true,
"repetition_penalty": 1.2,
"temperature": 0.1,
"transformers_version": "4.57.1"
}

3
model.safetensors Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:097568f37db5d3514ddf9d1d45b5d18a351fd56a746bbe3d0940eb5e9490064c
size 4400282072

40
special_tokens_map.json Normal file
View File

@@ -0,0 +1,40 @@
{
"additional_special_tokens": [
{
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
{
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
],
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": "</stop>",
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

156149
tokenizer.json Normal file

File diff suppressed because it is too large Load Diff

91
tokenizer_config.json Normal file
View File

@@ -0,0 +1,91 @@
{
"add_bos_token": false,
"add_eos_token": false,
"add_prefix_space": null,
"added_tokens_decoder": {
"0": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"1": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"2": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"3": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"32000": {
"content": "<instruction>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"32001": {
"content": "</instruction>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false,
"special": true
},
"32002": {
"content": "<|im_start|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
},
"32003": {
"content": "<|im_end|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false,
"special": true
}
},
"additional_special_tokens": [
"<|im_start|>",
"<|im_end|>"
],
"bos_token": "<s>",
"bos_token_id": 1,
"clean_up_tokenization_spaces": false,
"eos_token": "</s>",
"eos_token_id": 2,
"extra_special_tokens": {},
"legacy": false,
"model_max_length": 2048,
"pad_token": "</stop>",
"pad_token_id": 0,
"padding_side": "right",
"sp_model_kwargs": {},
"tokenizer_class": "LlamaTokenizerFast",
"unk_token": "<unk>",
"unk_token_id": 0,
"use_default_system_prompt": false
}
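
`tokenizer_config.json` shows four tokens appended after the base 32000-entry Llama vocabulary, matching `vocab_size: 32004` in `config.json`. A small check of those ids, assuming the repo id from the README:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("corre-social/Drummond-1b1-Instruct")
# Expected ids 32000..32003, as listed in added_tokens_decoder above.
for t in ["<instruction>", "</instruction>", "<|im_start|>", "<|im_end|>"]:
    print(t, tok.convert_tokens_to_ids(t))
```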

3
training_args.bin Normal file
View File

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a25e6e63e554236463541ca8e1a87175eaa07d06a0381977b13d4275a55cf89
size 6097