Initialize the project; model provided by the ModelHub XC community

Model: trendmicro-ailab/Llama-Primus-Merged
Source: Original Platform
ModelHub XC
2026-05-11 12:40:28 +08:00
commit cef86053ad
11 changed files with 412921 additions and 0 deletions

36
.gitattributes vendored Normal file

@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
overview.png filter=lfs diff=lfs merge=lfs -text

172
README.md Normal file

@@ -0,0 +1,172 @@
---
license: mit
language:
- en
- ja
base_model:
- trendmicro-ailab/Llama-Primus-Base
pipeline_tag: text-generation
extra_gated_fields:
Affiliation: text
Country: country
I want to use this model for:
type: select
options:
- Research
- Commercial
- label: Other
value: other
Job title:
type: select
options:
- Student
- Research graduate
- AI researcher
- AI developer/engineer
- Cybersecurity researcher
- Reporter
- Other
geo: ip_location
library_name: transformers
datasets:
- trendmicro-ailab/Primus-Seed
- trendmicro-ailab/Primus-FineWeb
- trendmicro-ailab/Primus-Instruct
---
# Primus: A Pioneering Collection of Open-Source Datasets for Cybersecurity LLM Training
<img src="https://i.imgur.com/PtqeTZw.png" alt="Llama-Primus-Merged Overview" width="60%">
> TL;DR: Llama-Primus-Merged was first pre-trained on a large cybersecurity corpus (2.77B tokens; _Primus-Seed_ and _Primus-FineWeb_), then instruction fine-tuned on around 1,000 carefully curated cybersecurity QA tasks (_Primus-Instruct_) to restore its instruction-following ability. Finally, it was merged with Llama-3.1-8B-Instruct, retaining the same instruction-following capability while achieving a 🚀**14.84%** improvement in aggregated score across multiple cybersecurity benchmarks.

**🔥 For more details, please refer to the paper: [[📄Paper]](https://arxiv.org/abs/2502.11191).**
## Introduction
Large Language Models (LLMs) have demonstrated remarkable versatility in recent years, with promising applications in specialized domains such as finance, law, and biomedicine. However, in the domain of cybersecurity, we noticed a lack of open-source datasets specifically designed for LLM pre-training—even though much research has shown that LLMs acquire their knowledge during pre-training. To fill this gap, we present a collection of datasets covering multiple stages of cybersecurity LLM training, including pre-training (_Primus-Seed_ and _Primus-FineWeb_), instruction fine-tuning (_Primus-Instruct_), and reasoning data for distillation (_Primus-Reasoning_). Based on these datasets and Llama-3.1-8B-Instruct, we developed _Llama-Primus-Base_, _Llama-Primus-Merged_, and _Llama-Primus-Reasoning_. This model card is **Llama-Primus-Merged**.
> **Note:** No Trend Micro customer information is included.
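As a quick start, here is a minimal loading sketch using Hugging Face `transformers`. It is not from the original card: the prompt is illustrative, and the sampling defaults come from this repository's `generation_config.json`.

```python
# Minimal sketch: load Llama-Primus-Merged and ask a cybersecurity question.
# Requires: pip install torch transformers accelerate
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "trendmicro-ailab/Llama-Primus-Merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Which CWE does CVE-2021-44228 (Log4Shell) map to?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# do_sample / temperature / top_p are picked up from generation_config.json (0.6 / 0.9).
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```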
## Benchmark Results
- [Cybersecurity](#cybersecurity)
- [Function Calling](#function-calling)
- [Safety & Toxicity](#safety--toxicity)
- [Multilingual](#multilingual)
- [General Chat Performance](#general-chat-performance)
- [Long-Context](#long-context)
#### Cybersecurity
| **Metric** (5-shot, w/o CoT) | **Llama-3.1-8B-Instruct** | **Llama-Primus-Merged** |
|---------------------------------|---------------------------|------------------------------|
| **CTI-Bench (MCQ)** | 0.6420 | 0.6656 |
| **CTI-Bench (CVE → CWE)** | 0.5910 | 0.6620 |
| **CTI-Bench (CVSS, _lower is better_)** | 1.2712 | 1.1233 |
| **CTI-Bench (ATE)** | 0.2721 | 0.3387 |
| **CyberMetric (500)** | 0.8560 | 0.8660 |
| **SecEval** | 0.4966 | 0.5062 |
| **CISSP (exams in book)** | 0.7073 | 0.7191 |
| **_Agg._** | 2.29 | 2.63 ↑**14.84%** 🔥 |
CTI-Bench (CVSS) is scored by Mean Absolute Deviation (_lower is better_), CTI-Bench (ATE) by F1 score, and the others by accuracy. The aggregate score (_Agg._) is the sum of all benchmarks, with CTI-Bench (CVSS) negated.
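As a sanity check, the aggregate can be reproduced from the table above (a small illustrative script; values copied verbatim):

```python
# Reproduce the Agg. row: sum of all benchmark scores, with CTI-Bench (CVSS)
# negated because Mean Absolute Deviation is better when lower.
base   = 0.6420 + 0.5910 + 0.2721 + 0.8560 + 0.4966 + 0.7073 - 1.2712  # ~2.29
primus = 0.6656 + 0.6620 + 0.3387 + 0.8660 + 0.5062 + 0.7191 - 1.1233  # ~2.63
print(f"{base:.2f} -> {primus:.2f}, +{(primus - base) / base:.2%}")    # +14.84%
```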
References:
- **CyberMetric**: [CyberMetric: A Benchmark Dataset based on Retrieval-Augmented...](https://arxiv.org/abs/2402.07688)
- **CTI-Bench**: [CTIBench: A Benchmark for Evaluating LLMs in Cyber Threat Intelligence](https://arxiv.org/abs/2406.07599)
- **SecEval**: [SecEval: A Comprehensive Benchmark for Evaluating Cybersecurity Knowledge of Foundation Models](https://xuanwuai.github.io/SecEval/)
#### Function Calling
| **Metric** | **Llama-3.1-8B-Instruct** | **Llama-Primus-Merged**|
|---------------|---------------------------|------------------------------|
| **BFCL (V2)** | 73.02 (prompt) | 74.77 (prompt) |
Reference:
- [BFCL (V2)](https://gorilla.cs.berkeley.edu/blogs/12_bfcl_v2_live.html)
#### Safety & Toxicity
| **Metric** | **Llama-3.1-8B-Instruct** | **Llama-Primus-Merged** |
|----------------------------------|---------------------------|------------------------------|
| **dan (Jailbreak)** | 28.98% | 41.70% |
| **encoding (Jailbreak)** | 100.00% | 100.00% |
| **goodside (Hallucination/Injection)** | 77.08% | 72.10% |
| **latentinjection (Injection)** | 75.55% | 74.00% |
| **leakreplay (Copyright)** | 95.71% | 96.90% |
| **malwaregen (Disallowed)** | 14.34% | 29.00% |
| **realtoxicityprompts (Disallowed)** | 90.03% | 85.40% |
| **snowball (Hallucination)** | 59.67% | 84.20% |
| **xss (Injection)** | 100.00% | 98.30% |
| **XSTest (over-refusal)** | 93.20% | 83.20% |
References:
- **Garak**: [Garak Repository](https://github.com/leondz/garak)
- **XSTest**: [XSTest Repository](https://github.com/paul-rottger/exaggerated-safety)
#### Multilingual
| **Language** | **Llama-3.1-8B-Instruct** | **Llama-Primus-Merged** |
|---------------|---------------------------|------------------------------|
| **MMLU (English)** | 68.16% | 67.36% |
| **MMLU (Japanese)** | 49.22% | 47.85% |
| **MMLU (French)** | 58.91% | 58.14% |
| **MMLU (German)** | 57.70% | 56.68% |
References:
- **English**: [MMLU Dataset](https://arxiv.org/abs/2009.03300)
- **German/French**: [MLMM Evaluation](https://github.com/nlp-uoregon/mlmm-evaluation?tab=readme-ov-file)
- **Japanese**: [Freedom Intelligence MMLU Japanese](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Japanese)
#### General Chat Performance
| **Metric** | **Llama-3.1-8B-Instruct** | **Llama-Primus-Merged** |
|-----------------|---------------------------|------------------------------|
| **MT Bench** | 8.3491 | 8.29375 |
Reference:
- [MT Bench](https://arxiv.org/abs/2306.05685)
#### Long-Context
| **Length** | **Llama-3.1-8B-Instruct** | **Llama-Primus-Merged** |
|------------|---------------------------|------------------------------|
| **8K+** | 51.08 | 50.66 |
| **16K+** | 29.18 | 27.13 |
Reference:
- [LongBench](https://arxiv.org/abs/2308.14508)
## About _Primus_
_Primus_ is Trend Micro's pioneering family of lightweight, state-of-the-art open cybersecurity language models and datasets. Developed through our cutting-edge research initiatives and advanced technology, these resources share the innovative foundation that powers our enterprise-class [Trend Cybertron](https://newsroom.trendmicro.com/2025-02-25-Trend-Micro-Puts-Industry-Ahead-of-Cyberattacks-with-Industrys-First-Proactive-Cybersecurity-AI) solution. As an industry leader in cybersecurity, Trend Micro is proud to contribute these powerful, efficiency-optimized models and datasets to the community, while maintaining the excellence and reliability that define our global security standards.
## License
This model is released under the MIT License, but you must also comply with the Llama 3.1 Community License Agreement.

38
config.json Normal file

@@ -0,0 +1,38 @@
{
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": [
128001,
128008,
128009
],
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 131072,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": {
"factor": 8.0,
"low_freq_factor": 1.0,
"high_freq_factor": 4.0,
"original_max_position_embeddings": 8192,
"rope_type": "llama3"
},
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.42.3",
"use_cache": true,
"vocab_size": 128256
}
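A few quantities implied by this config, as a hedged back-of-the-envelope check (standard Llama-3.1-8B arithmetic; none of these derived numbers appear in the file itself):

```python
# Derived from config.json: head dim, GQA ratio, and a rough parameter count.
hidden, heads, kv_heads = 4096, 32, 8
layers, inter, vocab = 32, 14336, 128256

head_dim = hidden // heads      # 128
gqa_ratio = heads // kv_heads   # 4 query heads share each KV head

attn = 2 * hidden * hidden + 2 * hidden * kv_heads * head_dim  # q,o + k,v projections
mlp = 3 * hidden * inter                                       # gate, up, down
per_layer = attn + mlp + 2 * hidden                            # + two RMSNorm weights
# tie_word_embeddings is false, so embeddings and lm_head are counted separately.
total = layers * per_layer + 2 * vocab * hidden + hidden       # + final norm
print(f"~{total / 1e9:.2f}B parameters")                       # ~8.03B
```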

12
generation_config.json Normal file

@@ -0,0 +1,12 @@
{
"bos_token_id": 128000,
"do_sample": true,
"eos_token_id": [
128001,
128008,
128009
],
"temperature": 0.6,
"top_p": 0.9,
"transformers_version": "4.42.3"
}
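These sampling defaults are applied automatically by `model.generate`; they can also be read directly with the standard `transformers` API (a brief illustration):

```python
from transformers import GenerationConfig

cfg = GenerationConfig.from_pretrained("trendmicro-ailab/Llama-Primus-Merged")
print(cfg.do_sample, cfg.temperature, cfg.top_p)  # True 0.6 0.9
```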

14
mergekit_config.yml Normal file

@@ -0,0 +1,14 @@
models:
- model: /home/azureuser/weights3/nemo/NeMo/cybertron/models/Meta-Llama-3.1-8B-Instruct
parameters:
density: 0.53
weight: 0.25
- model: /home/azureuser/weights2/sft/sft-pt_from_fineweb_V2_2_77b_14100_llama31/checkpoint-264
parameters:
density: 0.53
weight: 0.75
merge_method: dare_ties
base_model: /home/azureuser/weights/meta-llama/Meta-Llama-3.1-8B
parameters:
int8_mask: true
dtype: bfloat16
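For reproducibility, a merge recipe like this is typically executed with [mergekit](https://github.com/arcee-ai/mergekit) (e.g. its `mergekit-yaml` CLI). The sketch below only parses the config and reports the effective DARE-TIES settings, so it assumes nothing beyond PyYAML; note the absolute model paths above are build-machine specific and would need replacing:

```python
# Inspect the merge recipe: method, base model, and per-model DARE-TIES knobs.
import yaml  # pip install pyyaml

with open("mergekit_config.yml") as f:
    cfg = yaml.safe_load(f)

print("method:", cfg["merge_method"])   # dare_ties
print("base:  ", cfg["base_model"])     # Meta-Llama-3.1-8B
for entry in cfg["models"]:
    p = entry["parameters"]
    print(f"{entry['model']}: density={p['density']} weight={p['weight']}")
```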


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b3eac3c5d20220d72cef504af5366dd35be0e419ce1ed038105ac8bad1d38ac1
size 9976501400


@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2c9e306115f32cdad114f2aba334f71ceff32ec4444cda6f5cd67ee592ed659
size 6084055000
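The two LFS-tracked safetensors shards add up to roughly 16.06 GB, which is consistent with an ~8.03B-parameter model stored in bfloat16; a quick check:

```python
# Shard sizes copied from the LFS pointers above (bytes).
total_bytes = 9_976_501_400 + 6_084_055_000  # 16,060,556,400
params = total_bytes / 2                     # bfloat16 = 2 bytes per weight
print(f"~{params / 1e9:.2f}B parameters")    # ~8.03B, matching Llama-3.1-8B
```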

File diff suppressed because one or more lines are too long

16
special_tokens_map.json Normal file

@@ -0,0 +1,16 @@
{
"bos_token": {
"content": "<|begin_of_text|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|eot_id|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
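These two special tokens are the delimiters the chat template emits; a small illustration using the standard `transformers` API (the prompt content is arbitrary):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("trendmicro-ailab/Llama-Primus-Merged")
text = tok.apply_chat_template(
    [{"role": "user", "content": "hello"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(text)  # begins with <|begin_of_text|>; each turn ends with <|eot_id|>
```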

410563
tokenizer.json Normal file

File diff suppressed because it is too large

2063
tokenizer_config.json Normal file

File diff suppressed because it is too large