ModelHub XC 08efe4ff32 Initial commit; model provided by the ModelHub XC community
Model: recogna-nlp/internlm-chatbode-7b
Source: Original Platform
2026-05-07 10:58:53 +08:00

---
library_name: transformers
language:
- pt
pipeline_tag: text-generation
model-index:
- name: internlm-chatbode-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 63.05
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-7b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 51.46
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-7b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 42.32
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-7b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 91.33
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-7b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 80.69
      name: pearson
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-7b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 79.8
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-7b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 87.99
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-7b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 68.09
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-7b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia/tweetsentbr_fewshot
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 61.11
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm-chatbode-7b
      name: Open Portuguese LLM Leaderboard
---

internlm-chatbode-7b

ChatBode Logo

InternLm-ChatBode is a language model fine-tuned for Portuguese, built on the InternLM2 model. It was refined by fine-tuning on the UltraAlpaca dataset.

Main Features

  • Base model: internlm/internlm2-chat-7b
  • Fine-tuning dataset: UltraAlpaca
  • Training: fine-tuned from internlm2-chat-7b using QLoRA.
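The card does not publish the exact QLoRA hyperparameters. As a rough illustration of why QLoRA fine-tuning is cheap, the sketch below estimates the trainable-parameter count of a low-rank adapter on one projection matrix; the dimensions and rank are assumptions for a 7B-scale model, not the values used to train this model:

```python
# Illustrative only: estimate how few parameters a LoRA adapter trains.
# The 4096x4096 projection size and rank 16 are assumptions, not the
# actual internlm-chatbode-7b training configuration.

def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """A LoRA adapter adds two low-rank matrices: A (d_in x r) and B (r x d_out)."""
    return d_in * rank + rank * d_out

full = 4096 * 4096                                # frozen base weight
adapter = lora_trainable_params(4096, 4096, 16)   # trainable adapter weights
print(f"adapter params per layer: {adapter}")     # 131072
print(f"fraction of full weight:  {adapter / full:.2%}")  # 0.78%
```

With the base weights frozen (and, in QLoRA, quantized to 4 bits), only these small adapter matrices receive gradients, which is what makes fine-tuning a 7B model feasible on a single GPU.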

Usage example

The following code shows how to load and use the model:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code is required: chat() comes from the model's custom code.
tokenizer = AutoTokenizer.from_pretrained("recogna-nlp/internlm-chatbode-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("recogna-nlp/internlm-chatbode-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()

# chat() returns the reply plus the updated conversation history.
response, history = model.chat(tokenizer, "Olá", history=[])
print(response)
response, history = model.chat(tokenizer, "O que é o Teorema de Pitágoras? Me dê um exemplo", history=history)
print(response)
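The `chat` helper hides the prompt templating: it formats the history and query into a single string before generation. As an illustration of the ChatML-style format used by InternLM2-family chat models (the exact `<|im_start|>`/`<|im_end|>` markers are an assumption here; the model's own remote code is authoritative):

```python
def build_prompt(history, query):
    """Assemble a ChatML-style prompt from (user, assistant) turn pairs.

    Illustrative sketch: InternLM2-family chat models use <|im_start|> /
    <|im_end|> turn markers, but check the model's custom code for the
    exact system prompt and tokens.
    """
    parts = []
    for user_msg, assistant_msg in history:
        parts.append(f"<|im_start|>user\n{user_msg}<|im_end|>\n")
        parts.append(f"<|im_start|>assistant\n{assistant_msg}<|im_end|>\n")
    # The final assistant header is left open for the model to complete.
    parts.append(f"<|im_start|>user\n{query}<|im_end|>\n<|im_start|>assistant\n")
    return "".join(parts)

print(build_prompt([("Olá", "Olá! Como posso ajudar?")], "Quem é você?"))
```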

Responses can also be generated as a stream using the stream_chat method:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "recogna-nlp/internlm-chatbode-7b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model = model.eval()
length = 0
# Each iteration yields the full response so far; print only the new suffix.
for response, history in model.stream_chat(tokenizer, "Olá", history=[]):
    print(response[length:], flush=True, end="")
    length = len(response)
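The `length` bookkeeping exists because `stream_chat` yields the cumulative response on every step rather than just the new tokens. A minimal pure-Python simulation of that yield behaviour (an assumption based on the loop above, not the model's actual code):

```python
def fake_stream(chunks):
    """Mimic a stream_chat-style generator: yield the full text so far."""
    text = ""
    for chunk in chunks:
        text += chunk
        yield text

printed = []
length = 0
for response in fake_stream(["O Teorema", " de Pitágoras", " afirma..."]):
    printed.append(response[length:])  # slice off what was already shown
    length = len(response)

print("".join(printed))  # the deltas reassemble the full response
```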

Open Portuguese LLM Leaderboard Evaluation Results

Detailed results can be found here and on the 🚀 Open Portuguese LLM Leaderboard

| Metric | Value |
|---|---|
| Average | 69.54 |
| ENEM Challenge (No Images) | 63.05 |
| BLUEX (No Images) | 51.46 |
| OAB Exams | 42.32 |
| Assin2 RTE | 91.33 |
| Assin2 STS | 80.69 |
| FaQuAD NLI | 79.80 |
| HateBR Binary | 87.99 |
| PT Hate Speech Binary | 68.09 |
| tweetSentBR | 61.11 |
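The Average row is the arithmetic mean of the nine task scores, which a quick check confirms:

```python
# Leaderboard scores from the table above.
scores = {
    "ENEM Challenge (No Images)": 63.05,
    "BLUEX (No Images)": 51.46,
    "OAB Exams": 42.32,
    "Assin2 RTE": 91.33,
    "Assin2 STS": 80.69,
    "FaQuAD NLI": 79.80,
    "HateBR Binary": 87.99,
    "PT Hate Speech Binary": 68.09,
    "tweetSentBR": 61.11,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 69.54
```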

Citation

If you wish to use ChatBode in your research, please cite it as follows:

@misc{chatbode_2024,
	author       = { Gabriel Lino Garcia and Pedro Henrique Paiola and João Paulo Papa },
	title        = { Chatbode },
	year         = { 2024 },
	url          = { https://huggingface.co/recogna-nlp/internlm-chatbode-7b/ },
	doi          = { 10.57967/hf/3317 },
	publisher    = { Hugging Face }
}