---
base_model: tiiuae/Falcon3-10B-Base
library_name: transformers
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
tags:
- falcon3
model-index:
- name: Falcon3-10B-Instruct
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 78.17
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 44.82
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 25.91
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 10.51
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.61
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Instruct
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 38.1
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Instruct
      name: Open LLM Leaderboard
---

# Falcon3-10B-Instruct

The Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

This repository contains Falcon3-10B-Instruct. It achieves state-of-the-art results (at the time of release) on reasoning, language understanding, instruction following, code, and mathematics tasks. Falcon3-10B-Instruct supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K tokens.

## Model Details

- Architecture (see the config check after this list)
  - Transformer-based causal decoder-only architecture
  - 40 decoder blocks
  - Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
  - Wider head dimension: 256
  - High RoPE value to support long-context understanding: 1000042
  - Uses SwiGLU and RMSNorm
  - 32K context length
  - 131K vocab size
- Depth up-scaled from Falcon3-7B-Base and trained on 2 teratokens of data comprising web, code, STEM, high-quality, and multilingual content, using 1024 H100 GPU chips
- Post-trained on 1.2 million samples of STEM, conversational, code, safety, and function-call data
- Supports EN, FR, ES, PT
- Developed by Technology Innovation Institute
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024
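As a quick sanity check, the hyperparameters listed above can be read directly off the model config, without downloading the weights. A minimal sketch, assuming the Llama-style config attributes that Falcon3 exposes through `transformers` (attribute names such as `head_dim` may vary across `transformers` versions):

```python
from transformers import AutoConfig

# Fetch only the config file; no model weights are downloaded.
config = AutoConfig.from_pretrained("tiiuae/Falcon3-10B-Instruct")

print(config.num_hidden_layers)        # 40 decoder blocks
print(config.num_attention_heads)      # 12 query heads (GQA)
print(config.num_key_value_heads)      # 4 key-value heads
print(config.head_dim)                 # 256 head dimension
print(config.rope_theta)               # 1000042 RoPE base
print(config.max_position_embeddings)  # 32K context length
print(config.vocab_size)               # 131K vocabulary
```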

## Getting started

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/Falcon3-10B-Instruct"

# Load the model with automatic dtype selection and device placement.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]
# Render the conversation with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
# Keep only the newly generated tokens, dropping the prompt.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
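For quick experiments, recent `transformers` releases also accept chat messages directly in the text-generation pipeline. A minimal sketch under that assumption (the nested `generated_text` return format is version-dependent, and `max_new_tokens=256` is illustrative):

```python
from transformers import pipeline

# Build a chat-capable text-generation pipeline (weights are loaded once).
pipe = pipeline(
    "text-generation",
    model="tiiuae/Falcon3-10B-Instruct",
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": "How many hours in one day?"},
]

# The pipeline applies the chat template internally and returns the
# conversation with the assistant's reply appended as the last message.
out = pipe(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])
```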

## Benchmarks

We report the official normalized evaluation results from the HuggingFace Open LLM Leaderboard in the following table.

| Benchmark | Yi-1.5-9B-Chat | Mistral-Nemo-Instruct-2407 (12B) | Gemma-2-9b-it | Falcon3-10B-Instruct |
|:--|:--|:--|:--|:--|
| IFEval | 60.46 | 63.80 | 74.36 | 78.17 |
| BBH (3-shot) | 36.95 | 29.68 | 42.14 | 44.82 |
| MATH Lvl-5 (4-shot) | 12.76 | 6.50 | 0.23 | 25.91 |
| GPQA (0-shot) | 11.30 | 5.37 | 14.77 | 10.51 |
| MUSR (0-shot) | 12.84 | 8.48 | 9.74 | 13.61 |
| MMLU-PRO (5-shot) | 33.06 | 27.97 | 31.95 | 38.10 |

We also report our internal pipeline benchmarks in the following table.

- We use the lm-evaluation-harness (see the reproduction sketch after the table).
- We report raw scores obtained by applying the chat template and fewshot_as_multiturn.
- We use the same batch size across all models.

| Category | Benchmark | Yi-1.5-9B-Chat | Mistral-Nemo-Instruct-2407 (12B) | Falcon3-10B-Instruct |
|:--|:--|:--|:--|:--|
| General | MMLU (5-shot) | 68.8 | 66.0 | 73.9 |
| | MMLU-PRO (5-shot) | 38.8 | 34.3 | 44 |
| | IFEval | 57.8 | 63.4 | 78 |
| Math | GSM8K (5-shot) | 77.1 | 77.6 | 84.9 |
| | GSM8K (8-shot, COT) | 76 | 80.4 | 84.6 |
| | MATH Lvl-5 (4-shot) | 3.3 | 5.9 | 22.1 |
| Reasoning | Arc Challenge (25-shot) | 58.3 | 63.4 | 66.2 |
| | GPQA (0-shot) | 35.6 | 33.2 | 33.5 |
| | GPQA (0-shot, COT) | 16 | 12.7 | 32.6 |
| | MUSR (0-shot) | 41.9 | 38.1 | 41.1 |
| | BBH (3-shot) | 50.6 | 47.5 | 58.4 |
| CommonSense Understanding | PIQA (0-shot) | 76.4 | 78.2 | 78.4 |
| | SciQ (0-shot) | 61.7 | 76.4 | 90.4 |
| | Winogrande (0-shot) | - | - | 71 |
| | OpenbookQA (0-shot) | 43.2 | 47.4 | 48.2 |
| Instructions following | MT-Bench (avg) | 8.3 | 8.6 | 8.2 |
| | Alpaca (WC) | 25.8 | 45.4 | 24.7 |
| Tool use | BFCL AST (avg) | 48.4 | 74.2 | 90.5 |
| Code | EvalPlus (0-shot) (avg) | 69.4 | 58.9 | 74.7 |
| | Multipl-E (0-shot) (avg) | - | 34.5 | 45.8 |
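For reference, scores in the style above could be reproduced along the following lines with the lm-evaluation-harness Python API. This is a sketch, not the exact internal pipeline: the kwargs assume the v0.4.x `simple_evaluate` interface, and the task selection and batch size shown are illustrative assumptions.

```python
from lm_eval import simple_evaluate

# Sketch of a chat-template evaluation run (lm-evaluation-harness v0.4.x API;
# the task and batch size here are illustrative, not the internal setup).
results = simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/Falcon3-10B-Instruct,dtype=auto",
    tasks=["gsm8k"],
    num_fewshot=5,
    batch_size=8,               # kept identical across all compared models
    apply_chat_template=True,   # raw scores reported with chat template applied
    fewshot_as_multiturn=True,  # few-shot examples rendered as a multi-turn dialogue
)
print(results["results"]["gsm8k"])
```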

## Technical Report

Coming soon.

## Citation

If the Falcon3 family of models was helpful in your work, feel free to cite it.

```bibtex
@misc{Falcon3,
    title = {The Falcon 3 family of Open Models},
    author = {TII Team},
    month = {December},
    year = {2024}
}
```

## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=tiiuae/Falcon3-10B-Instruct).

| Metric | Value |
|:--|--:|
| Avg. | 35.19 |
| IFEval (0-Shot) | 78.17 |
| BBH (3-Shot) | 44.82 |
| MATH Lvl 5 (4-Shot) | 25.91 |
| GPQA (0-shot) | 10.51 |
| MuSR (0-shot) | 13.61 |
| MMLU-PRO (5-shot) | 38.10 |