Model: AI-ModelScope/WizardLM-7B-V1.0
Source: Original Platform

This repository provides the WizardLM delta weights.
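Delta weights are offsets from the base LLaMA parameters rather than a finished model: to obtain usable WizardLM-7B weights, each delta tensor is added to the matching base tensor. The upstream WizardLM project provides its own conversion tooling for this, so treat the following as an illustrative sketch only. It assumes the deltas were produced by elementwise subtraction (delta = finetuned − base); the paths are placeholders, and `transformers` is used here although ModelScope's equivalent classes work the same way:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder paths: point these at your local LLaMA-7B base weights
# and at the delta checkpoint.
BASE_PATH = "path/to/llama-7b"
DELTA_PATH = "path/to/WizardLM-7B-V1.0"

base = AutoModelForCausalLM.from_pretrained(BASE_PATH, torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained(DELTA_PATH, torch_dtype=torch.float16)

# Assumption: delta = finetuned - base, so adding each delta tensor to the
# matching base tensor reconstructs the finetuned weights. (Embedding rows
# added for new special tokens, if any, would need extra handling.)
base_state = base.state_dict()
with torch.no_grad():
    for name, delta_param in delta.state_dict().items():
        base_state[name] += delta_param

base.save_pretrained("WizardLM-7B-merged")
AutoTokenizer.from_pretrained(DELTA_PATH).save_pretrained("WizardLM-7B-merged")
```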

WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions

🤗 HF Repo · 🐦 Twitter · 📃 [WizardLM] · 📃 [WizardCoder] · 📃 [WizardMath]

👋 Join our Discord

| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| --- | --- | --- | --- | --- | --- | --- |
| WizardCoder-Python-34B-V1.0 | 🤗 HF Link | 📃 [WizardCoder] | 73.2 | 61.2 | Demo | Llama2 |
| WizardCoder-15B-V1.0 | 🤗 HF Link | 📃 [WizardCoder] | 59.8 | 50.6 | -- | OpenRAIL-M |
| WizardCoder-Python-13B-V1.0 | 🤗 HF Link | 📃 [WizardCoder] | 64.0 | 55.6 | -- | Llama2 |
| WizardCoder-3B-V1.0 | 🤗 HF Link | 📃 [WizardCoder] | 34.8 | 37.4 | Demo | OpenRAIL-M |
| WizardCoder-1B-V1.0 | 🤗 HF Link | 📃 [WizardCoder] | 23.8 | 28.6 | -- | OpenRAIL-M |
| Model | Checkpoint | Paper | GSM8k | MATH | Online Demo | License |
| --- | --- | --- | --- | --- | --- | --- |
| WizardMath-70B-V1.0 | 🤗 HF Link | 📃 [WizardMath] | 81.6 | 22.7 | Demo | Llama 2 |
| WizardMath-13B-V1.0 | 🤗 HF Link | 📃 [WizardMath] | 63.9 | 14.0 | Demo | Llama 2 |
| WizardMath-7B-V1.0 | 🤗 HF Link | 📃 [WizardMath] | 54.9 | 10.7 | Demo | Llama 2 |
| Model | Checkpoint | Paper | MT-Bench | AlpacaEval | WizardEval | HumanEval | License |
| --- | --- | --- | --- | --- | --- | --- | --- |
| WizardLM-13B-V1.2 | 🤗 HF Link | -- | 7.06 | 89.17% | 101.4% | 36.6 pass@1 | Llama 2 License |
| WizardLM-13B-V1.1 | 🤗 HF Link | -- | 6.76 | 86.32% | 99.3% | 25.0 pass@1 | Non-commercial |
| WizardLM-30B-V1.0 | 🤗 HF Link | -- | 7.01 | -- | 97.8% | 37.8 pass@1 | Non-commercial |
| WizardLM-13B-V1.0 | 🤗 HF Link | -- | 6.35 | 75.31% | 89.1% | 24.0 pass@1 | Non-commercial |
| WizardLM-7B-V1.0 | 🤗 HF Link | 📃 [WizardLM] | -- | -- | 78.0% | 19.1 pass@1 | Non-commercial |
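The HumanEval column above reports pass@1. For reference, pass@k is conventionally computed with the unbiased estimator from the HumanEval paper: generate n samples per problem, count the c that pass the unit tests, and average 1 − C(n−c, k)/C(n, k) over problems. A small sketch of that estimator (the function name is our own):

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate for one problem: n samples drawn, c correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable running product
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Averaging over all problems gives the benchmark score, e.g.:
print(pass_at_k(n=20, c=3, k=1))  # 0.15
```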

Example code

```python
import torch
from modelscope import AutoModelForCausalLM, AutoTokenizer

# Load the model in float16 and shard it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "AI-ModelScope/WizardLM-7B-V1.0",
    revision='v1.0.1',
    device_map='auto',
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("AI-ModelScope/WizardLM-7B-V1.0", revision='v1.0.1')

# WizardLM-7B expects a Vicuna-style conversation prompt.
prompt = """A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Who are you?
ASSISTANT:
"""
inputs = tokenizer(prompt, padding=False, add_special_tokens=False, return_tensors="pt")

# Generate a completion with light sampling.
generate_ids = model.generate(
    inputs.input_ids.to(model.device),
    attention_mask=inputs['attention_mask'].to(model.device),
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
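Note that `batch_decode` returns the prompt together with the completion, so the printed output repeats the input. For multi-turn use, the prompt also has to be rebuilt each turn in the same USER/ASSISTANT format. A minimal chat-loop sketch that reuses the `model` and `tokenizer` loaded above; the helper names are our own, and the `</s>` turn separator follows the Vicuna v1.1 convention, which this model is assumed to use:

```python
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions. ")

def build_prompt(history):
    """history: list of (user, assistant) pairs; the last assistant entry may be None."""
    prompt = SYSTEM
    for user, assistant in history:
        prompt += f"USER: {user} ASSISTANT:"
        if assistant is not None:
            prompt += f" {assistant}</s>"
    return prompt

def chat(user_message, history):
    history.append((user_message, None))
    inputs = tokenizer(build_prompt(history), return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.1,
        top_p=0.95,
        max_new_tokens=256,
        eos_token_id=tokenizer.eos_token_id,
    )
    # Slice off the prompt tokens so only the new completion remains.
    reply = tokenizer.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True).strip()
    history[-1] = (user_message, reply)
    return reply

history = []
print(chat("Who are you?", history))
```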
Description
Model synced from source: AI-ModelScope/WizardLM-7B-V1.0