---
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
language:
- en
- fr
- es
- pt
base_model:
- tiiuae/Falcon3-3B-Instruct
pipeline_tag: text-generation
tags:
- Falcon
- Falcon3
- tiiuae
- 3b
- Instruct
- Heretic
- Uncensored
- Abliterated
- GGUF
---

## Falcon3-3B-Instruct-Heretic-GGUF

A decensored version of [Falcon3-3B-Instruct](https://huggingface.co/tiiuae/Falcon3-3B-Instruct), made with [Heretic](https://github.com/p-e-w/heretic) v1.0.1.

| | Falcon3-3B-Instruct-Heretic | Original model ([Falcon3-3B-Instruct](https://huggingface.co/tiiuae/Falcon3-3B-Instruct)) |
| --- | --- | --- |
| **Refusals** | 11/100 | 100/100 |
| **KL divergence** | 0.04 | 0 *(by definition)* |

## Heretic Abliteration Parameters

| Parameter | Value |
| :-------- | :---: |
| **direction_index** | 13.81 |
| **attn.o_proj.max_weight** | 1.35 |
| **attn.o_proj.max_weight_position** | 13.39 |
| **attn.o_proj.min_weight** | 1.02 |
| **attn.o_proj.min_weight_distance** | 11.94 |
| **mlp.down_proj.max_weight** | 1.25 |
| **mlp.down_proj.max_weight_position** | 14.44 |
| **mlp.down_proj.min_weight** | 0.29 |
| **mlp.down_proj.min_weight_distance** | 12.59 |

## Safetensors Version

A safetensors version is available at [ChiKoi7/Falcon3-3B-Instruct-Heretic](https://huggingface.co/ChiKoi7/Falcon3-3B-Instruct-Heretic).

---
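The KL divergence figure above measures how far the modified model's next-token distribution drifts from the original's (0 means identical behavior). As a minimal sketch of the metric itself, with toy hand-made distributions rather than real model outputs:

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats between two discrete next-token distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions over a 4-token vocabulary (illustrative only).
original = [0.70, 0.20, 0.05, 0.05]
abliterated = [0.65, 0.24, 0.06, 0.05]

print(kl_divergence(original, original))     # 0 by definition
print(kl_divergence(abliterated, original))  # small but nonzero drift
```

A low value like the 0.04 reported above indicates the abliterated model's outputs stay close to the original's on non-refusal prompts.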
# Falcon3-3B-Instruct

The **Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.

**Falcon3-3B-Instruct** achieves strong results on reasoning, language understanding, instruction following, code, and mathematics tasks.
Falcon3-3B-Instruct supports four languages (English, French, Spanish, Portuguese) and a context length of up to 32K tokens.

## Model Details
- Architecture
  - Transformer-based causal decoder-only architecture
  - 22 decoder blocks
  - Grouped Query Attention (GQA) for faster inference: 12 query heads and 4 key-value heads
  - Wider head dimension: 256
  - High RoPE base to support long-context understanding: 1000042
  - Uses SwiGLU and RMSNorm
  - 32K context length
  - 131K vocab size
- Pruned and healed from Falcon3-7B-Base on only 100 gigatokens of datasets comprising web, code, STEM, high-quality, and multilingual data, using 1024 H100 GPU chips
- Post-trained on 1.2 million samples of STEM, conversational, code, safety, and function-call data
- Supports EN, FR, ES, PT
- Developed by [Technology Innovation Institute](https://www.tii.ae)
- License: TII Falcon-LLM License 2.0
- Model Release Date: December 2024

## Getting started
<details>
<summary>Click to expand</summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "tiiuae/Falcon3-3B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many hours in one day?"
messages = [
    {"role": "system", "content": "You are a helpful friendly assistant Falcon3 from TII, try to follow instructions as much as possible."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)

# Strip the prompt tokens, keeping only the newly generated completion.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

</details>
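The GQA layout listed under Model Details (12 query heads sharing 4 key-value heads) means each key-value head serves a group of 3 query heads. A minimal NumPy sketch of how the KV heads are broadcast at attention time (toy shapes only, not the model's actual code):

```python
import numpy as np

# Toy dimensions echoing the card: 12 query heads, 4 KV heads, head_dim 256.
n_q_heads, n_kv_heads, head_dim, seq_len = 12, 4, 256, 8
group_size = n_q_heads // n_kv_heads  # 3 query heads share each KV head

q = np.random.randn(n_q_heads, seq_len, head_dim)
k = np.random.randn(n_kv_heads, seq_len, head_dim)

# Repeat each KV head group_size times so shapes line up with the query heads.
k_expanded = np.repeat(k, group_size, axis=0)
assert k_expanded.shape == q.shape

# Query heads 0, 1, 2 all attend against the same (first) KV head.
assert np.array_equal(k_expanded[0], k_expanded[2])

scores = q @ k_expanded.transpose(0, 2, 1)  # per-head attention logits
print(scores.shape)  # (12, 8, 8)
```

Storing 4 KV heads instead of 12 shrinks the KV cache by 3x, which is where the faster-inference claim comes from.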
## Benchmarks
We report in the following table our internal pipeline benchmarks:
- We use [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).
- We report **raw scores** obtained by applying the chat template and `fewshot_as_multiturn`.
- We use the same batch size across all models.

| Category | Benchmark | Llama-3.2-3B-Instruct | Qwen2.5-3B-Instruct | Nemotron-Mini-4B-Instruct | Falcon3-3B-Instruct |
| --- | --- | --- | --- | --- | --- |
| General | MMLU (5-shot) | 61.2 | 65.4 | 57.3 | 56.9 |
| | MMLU-PRO (5-shot) | 27.7 | 32.6 | 26.0 | 29.7 |
| | IFEval | 74.7 | 64.1 | 66.3 | 68.3 |
| Math | GSM8K (5-shot) | 76.8 | 56.7 | 29.8 | 74.8 |
| | GSM8K (8-shot, COT) | 78.8 | 60.8 | 35.0 | 78.0 |
| | MATH Lvl-5 (4-shot) | 14.6 | 0.0 | 0.0 | 19.9 |
| Reasoning | Arc Challenge (25-shot) | 50.9 | 55.0 | 56.2 | 55.5 |
| | GPQA (0-shot) | 32.2 | 29.2 | 27.0 | 29.6 |
| | GPQA (0-shot, COT) | 11.3 | 11.0 | 12.2 | 26.5 |
| | MUSR (0-shot) | 35.0 | 40.2 | 38.7 | 39.0 |
| | BBH (3-shot) | 41.8 | 44.5 | 39.5 | 45.4 |
| CommonSense Understanding | PIQA (0-shot) | 74.6 | 73.8 | 74.6 | 75.6 |
| | SciQ (0-shot) | 77.2 | 60.7 | 71.0 | 95.5 |
| | Winogrande (0-shot) | - | - | - | 65.0 |
| | OpenbookQA (0-shot) | 40.8 | 41.2 | 43.2 | 42.2 |
| Instruction following | MT-Bench (avg) | 7.1 | 8.0 | 6.7 | 7.2 |
| | Alpaca (WC) | 19.4 | 19.4 | 9.6 | 15.5 |
| Tool use | BFCL AST (avg) | 85.2 | 84.8 | 59.8 | 59.3 |
| Code | EvalPlus (0-shot) (avg) | 55.2 | 69.4 | 40.0 | 52.9 |
| | Multipl-E (0-shot) (avg) | 31.6 | 29.2 | 19.6 | 32.9 |
## Useful links
- View our [release blogpost](https://huggingface.co/blog/falcon3).
- Feel free to join [our discord server](https://discord.gg/fwXpMyGc) if you have any questions or want to interact with our researchers and developers.

## Technical Report
Coming soon.

## Citation
If the Falcon3 family of models was helpful to your work, feel free to cite:

```
@misc{Falcon3,
    title = {The Falcon 3 Family of Open Models},
    url = {https://huggingface.co/blog/falcon3},
    author = {Falcon-LLM Team},
    month = {December},
    year = {2024}
}
```