Initialize the project; model provided by the ModelHub XC community
Model: S-miguel/The-Trinity-Coder-7B Source: Original Platform
35
.gitattributes
vendored
Normal file
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
98
README.md
Normal file
@@ -0,0 +1,98 @@
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- Code Generation
- Logical Reasoning
- Problem Solving
- Text Generation
- AI Programming Assistant
---
<h1>The-Trinity-Coder-7B: 3 Blended Coder Models - Unified Coding Intelligence</h1>



<p><strong>Overview</strong></p>

<p>The-Trinity-Coder-7B derives from the fusion of three distinct AI models, each specializing in different aspects of coding and programming challenges. It unifies the capabilities of beowolx_CodeNinja-1.0-OpenChat-7B, NeuralExperiment-7b-MagicCoder, and Speechless-Zephyr-Code-Functionary-7B into a single, versatile model. The three were combined with a merging technique chosen to harmonize their strengths and mitigate their individual weaknesses.</p>

<h2>The Blend</h2>

<ul>
<li><strong>Comprehensive Coding Knowledge:</strong> TrinityAI combines coding-instruction knowledge across a wide array of programming languages, including Python, C, C++, Rust, Java, JavaScript, and more, making it a versatile assistant for coding projects of any scale.</li>
<li><strong>Advanced Code Completion:</strong> With its extensive context window, TrinityAI excels at project-level code completion, offering suggestions that are contextually relevant and syntactically accurate.</li>
<li><strong>Specialized Skills Integration:</strong> Beyond code completion, The-Trinity-Coder performs well for its size at logical reasoning, mathematical problem solving, and understanding complex programming concepts.</li>
</ul>

<h2>Model Synthesis Approach</h2>

<p>Blending the three models into TrinityAI used a merging technique focused on preserving the core strengths of each component model:</p>

<ul>
<li><strong>beowolx_CodeNinja-1.0-OpenChat-7B:</strong> This model brings an expansive base of coding instructions, refined through supervised fine-tuning, making it an advanced coding assistant.</li>
<li><strong>NeuralExperiment-7b-MagicCoder:</strong> Trained on datasets focused on logical reasoning, mathematics, and programming, this model strengthens TrinityAI's problem-solving and logical-reasoning capabilities.</li>
<li><strong>Speechless-Zephyr-Code-Functionary-7B:</strong> Part of the Moloras experiments, this model contributes enhanced coding proficiency and dynamic skill integration through its LoRA modules.</li>
</ul>

<h2>Usage and Implementation</h2>

<pre><code>from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "YourRepository/The-Trinity-Coder-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Your prompt here"
inputs = tokenizer(prompt, return_tensors="pt")
# Without max_new_tokens, generate() stops after a short default budget.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
</code></pre>

<h2>Acknowledgments</h2>

<p>Special thanks to the creators and contributors of CodeNinja, NeuralExperiment-7b-MagicCoder, and Speechless-Zephyr-Code-Functionary-7B for providing the base models for blending.</p>

---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# merged_folder

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using uukuguy_speechless-zephyr-code-functionary-7b as a base.
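In outline, TIES works on task vectors (fine-tuned weights minus base weights): trim each task vector to its highest-magnitude entries (the `density` parameter below), elect a per-parameter sign from the dominant direction, then average only the entries that agree with the elected sign. The following is a toy NumPy sketch of that idea for illustration only; it is not mergekit's actual implementation, and the simplified sign election and weighting are assumptions:

```python
import numpy as np

def ties_merge(base, finetuned, densities, weights):
    """Toy TIES-style merge of fine-tuned weight vectors into a base vector."""
    deltas = []
    for ft, density, w in zip(finetuned, densities, weights):
        delta = (ft - base) * w
        # Trim: keep only the top-`density` fraction of entries by magnitude.
        k = max(int(round(density * delta.size)), 1)
        threshold = np.sort(np.abs(delta))[::-1][k - 1]
        delta = np.where(np.abs(delta) >= threshold, delta, 0.0)
        deltas.append(delta)
    stacked = np.stack(deltas)
    # Elect a sign per parameter from the summed deltas.
    sign = np.sign(stacked.sum(axis=0))
    # Keep only entries that agree with the elected sign, then average them.
    agree = np.sign(stacked) == sign
    kept = np.where(agree, stacked, 0.0)
    counts = np.maximum(agree.sum(axis=0), 1)
    return base + kept.sum(axis=0) / counts

base = np.zeros(4)
ft_a = np.array([1.0, -2.0, 0.1, 0.0])
ft_b = np.array([3.0,  2.0, 0.2, 0.0])
merged = ties_merge(base, [ft_a, ft_b], densities=[0.5, 0.5], weights=[1.0, 1.0])
# The conflicting second parameter (-2 vs +2) cancels; the agreeing first
# parameter survives as the mean of its kept values.
```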

### Models Merged

The following models were included in the merge:
* uukuguy_speechless-zephyr-code-functionary-7b
* Kukedlc_NeuralExperiment-7b-MagicCoder-v7.5
* beowolx_CodeNinja-1.0-OpenChat-7B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: X:/text-generation-webui-main/models/uukuguy_speechless-zephyr-code-functionary-7b
models:
  - model: X:/text-generation-webui-main/models/beowolx_CodeNinja-1.0-OpenChat-7B
    parameters:
      density: 0.5
      weight: 0.4
  - model: X:/text-generation-webui-main/models/Kukedlc_NeuralExperiment-7b-MagicCoder-v7.5
    parameters:
      density: 0.5
      weight: 0.4
merge_method: ties
parameters:
  normalize: true
dtype: float16
```
26
config.json
Normal file
@@ -0,0 +1,26 @@
{
  "_name_or_path": "X:/text-generation-webui-main/models/uukuguy_speechless-zephyr-code-functionary-7b",
  "architectures": [
    "MistralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 14336,
  "max_position_embeddings": 32768,
  "model_type": "mistral",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_theta": 10000.0,
  "sliding_window": 4096,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.39.0.dev0",
  "use_cache": true,
  "vocab_size": 32000
}
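These dimensions pin down the parameter count. For a Mistral-style decoder (grouped-query attention with 8 KV heads, SwiGLU MLP, untied embeddings), a back-of-the-envelope count from the config values above:

```python
# Approximate parameter count implied by the config above (Mistral-7B shape).
hidden = 4096
inter = 14336
layers = 32
vocab = 32000
heads = 32
kv_heads = 8
head_dim = hidden // heads  # 128

# Attention: Q and O are hidden x hidden; K and V are hidden x (kv_heads * head_dim).
attn = 2 * hidden * hidden + 2 * hidden * kv_heads * head_dim
# SwiGLU MLP: gate and up (hidden -> inter) plus down (inter -> hidden).
mlp = 3 * hidden * inter
# Two RMSNorm weight vectors per layer.
norms = 2 * hidden

per_layer = attn + mlp + norms
# Input embeddings and LM head are separate matrices since
# tie_word_embeddings is false, plus one final RMSNorm.
total = layers * per_layer + 2 * vocab * hidden + hidden
print(total)  # 7241732096 -> the "7B" in the model name
```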
14
mergekit_config.yml
Normal file
@@ -0,0 +1,14 @@
base_model: X:/text-generation-webui-main/models/uukuguy_speechless-zephyr-code-functionary-7b
models:
  - model: X:/text-generation-webui-main/models/beowolx_CodeNinja-1.0-OpenChat-7B
    parameters:
      density: 0.5
      weight: 0.4
  - model: X:/text-generation-webui-main/models/Kukedlc_NeuralExperiment-7b-MagicCoder-v7.5
    parameters:
      density: 0.5
      weight: 0.4
merge_method: ties
parameters:
  normalize: true
dtype: float16
3
model-00001-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c88085cfca031db257627fee2e37f9acddb20b9942348f7ca394d18dd142ee39
size 1979773096
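Each weight shard is stored in the repository as a Git LFS pointer like the one above: three key-value text lines (`version`, `oid`, `size`) standing in for the binary blob. A small sketch of parsing such a pointer file's text (the helper name is ours, not part of any library):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:c88085cfca031db257627fee2e37f9acddb20b9942348f7ca394d18dd142ee39
size 1979773096
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 1979773096 -- about 2 GB for this shard
```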
3
model-00002-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2fab56b49a41b7b4167bb3e39ecab33165a6efb8f3647c83186590911ffa272c
size 1946235600
3
model-00003-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9da9f307ca659b816134deebc140f4f9009ac3be2f8639c7d8aa9e90190c5b9b
size 1973490176
3
model-00004-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:06de084b7b4d268738e26a9da4ebb40f807e0908048b92b47c4f6bb6562538da
size 1979781432
3
model-00005-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:be887b2cff34bf224c1ddc614ff43a0a9ccb681d56538be9880206352a392af4
size 1946243944
3
model-00006-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:15d616d1ae9138ac8236ac9a93c9a65b662f35a93f8743d70b95bc52c2ee69b5
size 1923166008
3
model-00007-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:88d260ffd60ec26b599411ab6c12943d4d81bc116b848dcdb72a7ca1ca42fdde
size 1946243944
3
model-00008-of-00008.safetensors
Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ada78bf63bba80282885623a7aee7f4f22b51b88462d72c9b338a48422dc8c6
size 788563536
1
model.safetensors.index.json
Normal file
File diff suppressed because one or more lines are too long
23
special_tokens_map.json
Normal file
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
91122
tokenizer.json
Normal file
File diff suppressed because it is too large
BIN
tokenizer.model
(Stored with Git LFS)
Normal file
Binary file not shown.
42
tokenizer_config.json
Normal file
@@ -0,0 +1,42 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [],
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": true,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}
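The odd-looking `model_max_length` is not arbitrary: it is `int(1e30)`, the sentinel transformers writes when a tokenizer declares no intrinsic length limit, and the trailing digits are just the exact integer value of the nearest 64-bit float to 10^30. The practical context limit comes from the model config (`max_position_embeddings: 32768`), not the tokenizer. A quick check in plain Python:

```python
# Converting the float 1e30 to an integer yields the exact value of the
# nearest representable double -- the strange number stored in the file.
sentinel = int(1e30)
print(sentinel)  # 1000000000000000019884624838656
```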