Initialize project; model provided by the ModelHub XC community

Model: prithivMLmods/Rapeto-ReDistill-14B-GOP
Source: Original Platform
Commit 848f71a8f8 by ModelHub XC, 2026-04-12 10:46:56 +08:00
14 changed files with 303550 additions and 0 deletions

.gitattributes (vendored, new file, 53 lines)

@@ -0,0 +1,53 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*.tfevents* filter=lfs diff=lfs merge=lfs -text
*.db* filter=lfs diff=lfs merge=lfs -text
*.ark* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*data* filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.meta filter=lfs diff=lfs merge=lfs -text
**/*ckpt*.index filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.gguf* filter=lfs diff=lfs merge=lfs -text
*.ggml filter=lfs diff=lfs merge=lfs -text
*.llamafile* filter=lfs diff=lfs merge=lfs -text
*.pt2 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
model-00001-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00002-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00003-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00004-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00005-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text
model-00006-of-00006.safetensors filter=lfs diff=lfs merge=lfs -text

README.md (new file, 123 lines)

@@ -0,0 +1,123 @@
---
license: apache-2.0
library_name: transformers
language:
- en
base_model:
- Qwen/Qwen2.5-14B-Instruct-1M
pipeline_tag: text-generation
tags:
- text-generation-inference
- GOP
- Code
- RL
- Math
---
![7.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/YkdzjiIwFhhju5WVNL2vP.png)
# **Rapeto-ReDistill-14B-GOP**
> **Rapeto-ReDistill-14B-GOP** is built on the Qwen2.5-14B architecture and is designed to optimize performance for mathematical reasoning, general-purpose problem solving, and robust policy optimization using distributed reinforcement learning (RL). The model excels at contextual understanding, logical deduction, multi-step reasoning, and optimization-based tasks. It has been fine-tuned on long chain-of-thought datasets, optimization problem-solving corpora, and structured reasoning datasets to improve comprehension, structured responses, and intelligent decision-making.
## **Key Improvements**
1. **Advanced Mathematical and Logical Reasoning**:
Enhanced capabilities for solving complex equations, optimization tasks, symbolic computation, theorem proving, and step-by-step math problem-solving.
2. **Robust Policy Optimization**:
Fine-tuned for distributed reinforcement learning (RL) tasks, improving decision-making robustness and solution generalization across complex optimization problems.
3. **General Knowledge and Problem Solving**:
Strong foundation across diverse domains, excelling in answering factual questions and executing structured multi-step reasoning processes.
4. **Instruction Following and Adaptability**:
Improved performance in understanding complex instructions and adapting to diverse prompts, maintaining coherence across extended conversations.
5. **Long-Context Understanding**:
Supports up to 128K input tokens and can generate up to 8K output tokens, ideal for deep multi-turn dialogues, mathematical derivations, and long-chain logical reasoning.
6. **Coding and Algorithmic Mastery**:
Excels in code generation, debugging, algorithm design, refactoring, and analysis across multiple programming languages, with a special focus on optimization algorithms.
## **Quickstart with transformers**
Here's how to load and use the model with the `transformers` library and `apply_chat_template`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Rapeto-ReDistill-14B-GOP"

# Load the weights and tokenizer; device_map="auto" places layers on available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the key techniques used in robust policy optimization."
messages = [
    {"role": "system", "content": "You are an expert assistant in optimization, reinforcement learning, and general-purpose reasoning."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template and tokenize it.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)

# Keep only the newly generated tokens before decoding.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
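The repository's `generation_config.json` (added in this same commit) enables sampling with `temperature` 0.6 and `top_p` 0.95. Below is a minimal sketch that passes those settings explicitly, reusing `model`, `tokenizer`, and `model_inputs` from the quickstart above; the `max_new_tokens` value is an illustrative choice, not a model requirement.

```python
# Sampling values mirror generation_config.json shipped with this repository;
# max_new_tokens is chosen only for illustration.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
# Keep only the newly generated tokens before decoding.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```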
## **Intended Use**
1. **Optimization Problem Solving**:
Specialized for solving and explaining general optimization problems, including convex, non-convex, and combinatorial optimization.
2. **Mathematical and Logical Reasoning**:
Excels at solving equations, mathematical proofs, symbolic manipulations, and structured logical reasoning.
3. **Reinforcement Learning Applications**:
Useful for designing, analyzing, and explaining RL algorithms, particularly robust and distributed RL.
4. **Educational and Research Assistance**:
Suitable for providing detailed explanations, mathematical derivations, and research-oriented insights for students, educators, and researchers.
5. **Coding and Algorithm Development**:
Ideal for writing, improving, debugging, and explaining code, with a strong emphasis on optimization algorithms and computational logic.
6. **Conversational AI and Chatbots**:
Supports intelligent, context-aware dialogue generation for technical domains, education, and professional assistance.
7. **Long-Form Technical Content Generation**:
Capable of producing extensive, coherent articles, reports, and tutorials, especially for technical and mathematical content.
8. **Structured Data Processing**:
Analyzes and generates structured outputs such as JSON, tables, and formal proofs, which is useful for data science and automation (see the sketch after this list).
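For the structured-data use case above, one simple approach is to ask for JSON in the prompt and parse the reply. The sketch below reuses `model` and `tokenizer` from the quickstart; the prompt and the requested keys are invented for illustration, and because the model does not guarantee valid JSON, the parse is guarded.

```python
import json

# Hypothetical prompt; the requested keys are illustrative, not enforced by the model.
messages = [
    {"role": "system", "content": "You are a precise assistant. Reply with valid JSON only."},
    {"role": "user", "content": (
        "Summarize the 0/1 knapsack problem as JSON with keys "
        "'problem', 'decision_variables', and 'objective'."
    )},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
reply = tokenizer.batch_decode(
    [output_ids[0][inputs.input_ids.shape[1]:]], skip_special_tokens=True
)[0]

try:
    data = json.loads(reply)  # model output is not guaranteed to be valid JSON
except json.JSONDecodeError:
    data = None  # fall back to treating the reply as plain text
```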
## **Limitations**
1. **High Hardware Requirements**:
Requires substantial memory and high-performance GPUs or TPUs due to the large parameter count and long-context processing (a quantized-loading sketch follows this list).
2. **Potential Training Biases**:
May reflect biases present in optimization-specific datasets or mathematical corpora.
3. **Creative Generation Limitations**:
Less optimized for freeform creative writing or storytelling compared to technical reasoning.
4. **No Real-Time Awareness**:
Lacks knowledge of real-world events or developments post-training cutoff.
5. **Error Propagation in Long-Chain Tasks**:
Small early errors in long mathematical or optimization tasks may propagate in extended outputs.
6. **Prompt Sensitivity**:
The quality of outputs can be sensitive to prompt clarity and structure, especially for complex optimization or technical questions.
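Regarding the hardware limitation above, memory pressure can often be reduced with quantized loading. The sketch below uses 4-bit weights via `bitsandbytes`; that library and a CUDA GPU are assumptions of this example rather than requirements stated in this card, and the quality impact on long reasoning chains should be validated separately.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Rapeto-ReDistill-14B-GOP"

# 4-bit NF4 quantization roughly quarters weight memory versus bf16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```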

config.json (new file, 28 lines)

@@ -0,0 +1,28 @@
{
"architectures": [
"Qwen2ForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 151643,
"eos_token_id": 151643,
"hidden_act": "silu",
"hidden_size": 5120,
"initializer_range": 0.02,
"intermediate_size": 13824,
"max_position_embeddings": 131072,
"max_window_layers": 48,
"model_type": "qwen2",
"num_attention_heads": 40,
"num_hidden_layers": 48,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 1000000.0,
"sliding_window": 131072,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.51.3",
"use_cache": true,
"use_sliding_window": false,
"vocab_size": 152064
}
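The fields above can be read back with `transformers`' `AutoConfig`; a minimal sketch, assuming network access to the repository or a local copy of these files:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("prithivMLmods/Rapeto-ReDistill-14B-GOP")

# These values correspond to the config.json shown above.
print(config.model_type)               # "qwen2"
print(config.num_hidden_layers)        # 48
print(config.max_position_embeddings)  # 131072
```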

configuration.json (new file, 1 line)

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "others", "allow_remote": true}

generation_config.json (new file, 9 lines)

@@ -0,0 +1,9 @@
{
"_from_model_config": true,
"bos_token_id": 151646,
"eos_token_id": 151643,
"do_sample": true,
"temperature": 0.6,
"top_p": 0.95,
"transformers_version": "4.51.3"
}

model-00001-of-00006.safetensors (new file, LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:80b176d66cd18208b362ffa13d10b3308ab7ed204370e3106ffdbd8c61baee36
size 4907454960

model-00002-of-00006.safetensors (new file, LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5426b22a70d5b20293aa939536b3cf75555cd40c62726565ca82632cefa17c5c
size 4954847384

model-00003-of-00006.safetensors (new file, LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2d148eba84ae3c2ebc56593acea659ea67b7b93e0a2b9f1e6e197c60819d6a44
size 4954847376

model-00004-of-00006.safetensors (new file, LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:16935e75014d6a634028dbba0a18dc84a4a78c8e6078240f76cbcbf463f495f5
size 4954847376

model-00005-of-00006.safetensors (new file, LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:98387888f10ab31683acb0f38d98e6719764173c5ea81fa2eb2730dcf5fd6566
size 4954847376

model-00006-of-00006.safetensors (new file, LFS pointer)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95c4bbfd091a4aa5d649bd5a55f91e1660e901b75da878b5dc1c7b30b7c9ca54
size 4813289432

model.safetensors.index.json (new file; diff suppressed because one or more lines are too long)

tokenizer.json (new file, 303282 lines; diff suppressed because it is too large)

tokenizer_config.json (new file, 35 lines)

@@ -0,0 +1,35 @@
{
"add_bos_token": true,
"add_eos_token": false,
"bos_token": {
"__type": "AddedToken",
"content": "<begin▁of▁sentence>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"clean_up_tokenization_spaces": false,
"eos_token": {
"__type": "AddedToken",
"content": "<end▁of▁sentence>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"legacy": true,
"model_max_length": 16384,
"pad_token": {
"__type": "AddedToken",
"content": "<end▁of▁sentence>",
"lstrip": false,
"normalized": true,
"rstrip": false,
"single_word": false
},
"sp_model_kwargs": {},
"unk_token": null,
"tokenizer_class": "LlamaTokenizerFast",
"chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<User>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<Assistant><tool▁calls▁begin><tool▁call▁begin>' + tool['type'] + '<tool▁sep>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<tool▁call▁end>'}}{%- set ns.is_first = true -%}{%- else %}{{'\\n' + '<tool▁call▁begin>' + tool['type'] + '<tool▁sep>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<tool▁call▁end>'}}{{'<tool▁calls▁end><end▁of▁sentence>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<tool▁outputs▁end>' + message['content'] + '<end▁of▁sentence>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<Assistant>' + content + '<end▁of▁sentence>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<tool▁outputs▁begin><tool▁output▁begin>' + message['content'] + '<tool▁output▁end>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\\n<tool▁output▁begin>' + message['content'] + '<tool▁output▁end>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<tool▁outputs▁end>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<Assistant><think>\\n'}}{% endif %}"
}