Initialize project; model provided by the ModelHub XC community
Model: Quaxicron/test5 Source: Original Platform
35  .gitattributes  vendored  Normal file
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
184  README.md  Normal file
@@ -0,0 +1,184 @@
---
library_name: transformers
model_name: test5
tags:
- A
licence: license
datasets:
- datatune/LogiCoT
language:
- en
---

# Model Card for test5

This is an AI assistant model built for CESK.

## Training procedure

This model was trained with pretraining followed by supervised fine-tuning (SFT).
Training finished in 30 minutes on a single H100 80GB GPU.
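
The exact training recipe is not published in this repository (only `training_args.bin` is uploaded), but a minimal sketch of the SFT stage might look like the following. It assumes TRL's `SFTTrainer`, the `datatune/LogiCoT` dataset listed in the card metadata, and a hypothetical pretrained base checkpoint; the hyperparameters, split name, and dataset schema are illustrative assumptions, not the values actually used.

```python
# Hedged sketch of the SFT stage; base checkpoint path, split, and
# hyperparameters are assumptions, not taken from this repository.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("datatune/LogiCoT", split="train")  # dataset named in the card metadata

trainer = SFTTrainer(
    model="path/to/pretrained-base",   # hypothetical output of the pretraining stage
    train_dataset=dataset,             # depending on the dataset schema, a formatting step may be needed
    args=SFTConfig(
        output_dir="test5-sft",        # illustrative values only
        per_device_train_batch_size=8,
        num_train_epochs=1,
    ),
)
trainer.train()
trainer.save_model("test5-sft")
```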

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Quaxicron/test5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Example with a system prompt

```python
from transformers import pipeline

question = "What's your name?"
generator = pipeline("text-generation", model="Quaxicron/test5", device="cuda")

sys = """
You are CESK, serving as the sole technical mentor, guide, strategist, and intern for a professional who handles *all* technology-related responsibilities at their company. Your role is to provide **objective, accurate, and practical assistance** across a wide range of software, automation, and business-technology projects.

## CORE DIRECTIVES
1. **Objectivity & Accuracy**
   - Prioritize correctness and truthfulness above all else.
   - Minimize hallucinations by explicitly verifying reasoning and assumptions.
   - When uncertainty exists, clearly label it and suggest ways to validate information externally.
   - Never provide misleading confidence — honesty is more valuable than speculation.

2. **Critical Guidance**
   - Do not be afraid to say “this approach won’t work” or “this may waste your time.”
   - Proactively flag potential pitfalls, dead ends, or better alternatives.
   - Balance constructive critique with actionable guidance.

3. **Problem-Solving Framework**
   For every technical question or project:
   - **Direct Recommendation** → The single best path forward.
   - **Reasoning** → Why this is the best approach (with evidence, logic, and trade-offs).
   - **Alternative Options** → At least 1–2 viable alternatives, with pros/cons.
   - **Clear Next Steps** → Actionable instructions the user can implement immediately.

4. **Adaptive Role-Switching**
   - **Mentor:** Teach concepts clearly, providing reasoning and broader context.
   - **Guide:** Help frame problems, evaluate approaches, and steer toward efficient solutions.
   - **Intern:** Assist with boilerplate coding, documentation, repetitive tasks, and implementation details.
   - **Strategist:** Zoom out to suggest better architectures, tools, or workflows when relevant.

5. **Context-Aware Explanations**
   - Adjust detail level: concise for experienced tasks, in-depth for unfamiliar topics.
   - Provide both “quick solution” summaries and deeper explanations when complexity warrants.
   - Break down complex solutions step-by-step, avoiding overwhelming jargon unless explicitly requested.

6. **Correctness Over Completeness**
   - Do not try to answer *everything* — focus on correctness and usefulness.
   - If unsure, state limitations and suggest external validation.
   - Prioritize saving time and avoiding wasted effort over surface-level thoroughness.

---

## RESPONSE STRUCTURE (DEFAULT FORMAT)
Unless the user specifies otherwise, structure responses as:

1. **Direct Recommendation**
2. **Reasoning & Justification**
3. **Alternative Options (with pros/cons)**
4. **Clear Next Steps (action items)**
5. **Optional Add-ons** (e.g., example code, pseudo-code, diagrams, or best-practice notes)

---

### END OF SYSTEM PROMPT
"""

SYSTEM_PROMPT = {"role": "system", "content": sys}

output = generator([SYSTEM_PROMPT, {"role": "user", "content": question}], return_full_text=False)[0]
print(output["generated_text"])
```

## Chat Example

```python
import gradio as gr
from transformers import pipeline

sys = """
You are CESK, serving as the sole technical mentor, guide, strategist, and intern for a professional who handles *all* technology-related responsibilities at their company. Your role is to provide **objective, accurate, and practical assistance** across a wide range of software, automation, and business-technology projects.

## CORE DIRECTIVES
1. **Objectivity & Accuracy**
   - Prioritize correctness and truthfulness above all else.
   - Minimize hallucinations by explicitly verifying reasoning and assumptions.
   - When uncertainty exists, clearly label it and suggest ways to validate information externally.
   - Never provide misleading confidence — honesty is more valuable than speculation.

2. **Critical Guidance**
   - Do not be afraid to say “this approach won’t work” or “this may waste your time.”
   - Proactively flag potential pitfalls, dead ends, or better alternatives.
   - Balance constructive critique with actionable guidance.

3. **Problem-Solving Framework**
   For every technical question or project:
   - **Direct Recommendation** → The single best path forward.
   - **Reasoning** → Why this is the best approach (with evidence, logic, and trade-offs).
   - **Alternative Options** → At least 1–2 viable alternatives, with pros/cons.
   - **Clear Next Steps** → Actionable instructions the user can implement immediately.

4. **Adaptive Role-Switching**
   - **Mentor:** Teach concepts clearly, providing reasoning and broader context.
   - **Guide:** Help frame problems, evaluate approaches, and steer toward efficient solutions.
   - **Intern:** Assist with boilerplate coding, documentation, repetitive tasks, and implementation details.
   - **Strategist:** Zoom out to suggest better architectures, tools, or workflows when relevant.

5. **Context-Aware Explanations**
   - Adjust detail level: concise for experienced tasks, in-depth for unfamiliar topics.
   - Provide both “quick solution” summaries and deeper explanations when complexity warrants.
   - Break down complex solutions step-by-step, avoiding overwhelming jargon unless explicitly requested.

6. **Correctness Over Completeness**
   - Do not try to answer *everything* — focus on correctness and usefulness.
   - If unsure, state limitations and suggest external validation.
   - Prioritize saving time and avoiding wasted effort over surface-level thoroughness.

---

## RESPONSE STRUCTURE (DEFAULT FORMAT)
Unless the user specifies otherwise, structure responses as:

1. **Direct Recommendation**
2. **Reasoning & Justification**
3. **Alternative Options (with pros/cons)**
4. **Clear Next Steps (action items)**
5. **Optional Add-ons** (e.g., example code, pseudo-code, diagrams, or best-practice notes)

---

### END OF SYSTEM PROMPT
"""

generator = pipeline("text-generation", model="Quaxicron/test5", device="cuda")

SYSTEM_PROMPT = [{"role": "system", "content": sys}]

def chat_with_memory(message, history):
    output = generator(
        SYSTEM_PROMPT + history + [{"role": "user", "content": message}],
        return_full_text=False,
        max_new_tokens=512,
    )
    return output[0]["generated_text"]

gr.ChatInterface(
    chat_with_memory,
    title="cesk",
    type="messages",
    save_history=True,
).launch(share=True, debug=True)
```

### Framework versions

- Transformers: 4.57.6
- PyTorch: 2.9.0
- Datasets: 4.5.0
- Tokenizers: 0.22.2
6  chat_template.jinja  Normal file
@@ -0,0 +1,6 @@
{% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system
You are a helpful AI assistant named SmolLM, trained by Hugging Face<|im_end|>
' }}{% endif %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
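
The template is standard ChatML and injects a default SmolLM system message whenever the conversation does not start with a system turn. A quick sanity check (a sketch, not part of the repository, assuming the tokenizer picks up `chat_template.jinja`) is to render it with `apply_chat_template`:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Quaxicron/test5")

# Render a single-turn conversation as text, appending the assistant prompt.
text = tok.apply_chat_template(
    [{"role": "user", "content": "Hi"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(text)
# Per the template above: a default SmolLM system block, then the user turn,
# then an opening '<|im_start|>assistant' line for generation.
```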
38  config.json  Normal file
@@ -0,0 +1,38 @@
{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "dtype": "float32",
  "eos_token_id": 2,
  "head_dim": 64,
  "hidden_act": "silu",
  "hidden_size": 960,
  "initializer_range": 0.02,
  "intermediate_size": 2560,
  "is_llama_config": true,
  "max_position_embeddings": 8192,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 15,
  "num_hidden_layers": 32,
  "num_key_value_heads": 5,
  "pad_token_id": 2,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_interleaved": false,
  "rope_scaling": null,
  "rope_theta": 100000,
  "tie_word_embeddings": true,
  "transformers.js_config": {
    "kv_cache_dtype": {
      "fp16": "float16",
      "q4f16": "float16"
    }
  },
  "transformers_version": "4.57.6",
  "use_cache": true,
  "vocab_size": 49152
}
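
This is a small Llama-style decoder with grouped-query attention: 15 query heads share 5 key/value heads, and 15 heads of dimension 64 give the hidden size of 960. A hedged inspection sketch (assuming network access to the Quaxicron/test5 repo):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Quaxicron/test5")

# Grouped-query attention: 15 query heads share 5 key/value heads (3 queries per KV head).
print(cfg.num_attention_heads, cfg.num_key_value_heads)            # 15 5
print(cfg.hidden_size, cfg.num_hidden_layers)                      # 960 32
print(cfg.num_attention_heads * cfg.head_dim == cfg.hidden_size)   # True: 15 * 64 = 960
```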
9  generation_config.json  Normal file
@@ -0,0 +1,9 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": [
    2
  ],
  "pad_token_id": 2,
  "transformers_version": "4.57.6"
}
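
These defaults mirror the token IDs in `config.json`. A minimal sketch of loading them directly (the repo name is the one from this commit; nothing else is assumed):

```python
from transformers import GenerationConfig

gen = GenerationConfig.from_pretrained("Quaxicron/test5")
print(gen.bos_token_id, gen.eos_token_id, gen.pad_token_id)  # 1 [2] 2
```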
48901  merges.txt  Normal file
(File diff suppressed because it is too large.)
3  model.safetensors  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9ee356e39a8dd0d3c178dba949d527b5f9ee8724136418cb76a9ce2fac42fd3b
size 1447317080
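
The weights are stored as a Git LFS pointer. The 1,447,317,080-byte size is consistent with roughly 362M float32 parameters (4 bytes each), which lines up with the small Llama-style configuration above. A rough back-of-the-envelope check:

```python
# Rough sanity check: safetensors size vs. float32 parameter count.
# Ignores the small safetensors header, so the estimate is approximate.
size_bytes = 1_447_317_080           # from the LFS pointer above
bytes_per_param = 4                   # config.json reports dtype float32
print(size_bytes / bytes_per_param)   # ~3.62e8 parameters
```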
34  special_tokens_map.json  Normal file
@@ -0,0 +1,34 @@
{
  "additional_special_tokens": ["<|im_start|>", "<|im_end|>"],
  "bos_token": {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "eos_token": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "pad_token": {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "unk_token": {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}
}
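
A quick way to confirm how these map at runtime is to load the tokenizer and print its special tokens (a sketch, assuming access to the Quaxicron/test5 repo):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Quaxicron/test5")

# Should mirror special_tokens_map.json: ChatML markers for bos/eos, <|im_end|> reused as pad.
print(tok.bos_token, tok.eos_token, tok.pad_token, tok.unk_token)
print(tok.additional_special_tokens)   # ['<|im_start|>', '<|im_end|>']
```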
244949  tokenizer.json  Normal file
(File diff suppressed because it is too large.)
154  tokenizer_config.json  Normal file
@@ -0,0 +1,154 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "0":  {"content": "<|endoftext|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "1":  {"content": "<|im_start|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "2":  {"content": "<|im_end|>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "3":  {"content": "<repo_name>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "4":  {"content": "<reponame>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "5":  {"content": "<file_sep>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "6":  {"content": "<filename>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "7":  {"content": "<gh_stars>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "8":  {"content": "<issue_start>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "9":  {"content": "<issue_comment>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "10": {"content": "<issue_closed>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "11": {"content": "<jupyter_start>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "12": {"content": "<jupyter_text>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "13": {"content": "<jupyter_code>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "14": {"content": "<jupyter_output>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "15": {"content": "<jupyter_script>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true},
    "16": {"content": "<empty_output>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false, "special": true}
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "bos_token": "<|im_start|>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "extra_special_tokens": {},
  "model_max_length": 8192,
  "pad_token": "<|im_end|>",
  "tokenizer_class": "GPT2Tokenizer",
  "unk_token": "<|endoftext|>",
  "vocab_size": 49152
}
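
A small check (a sketch, not part of the repo) that the GPT2-style BPE tokenizer exposes these settings as expected, with the added special tokens sitting at the low IDs listed in `added_tokens_decoder`:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Quaxicron/test5")

# IDs 0-2 per added_tokens_decoder above; matches bos/eos/pad ids in config.json.
print(tok.convert_tokens_to_ids(["<|endoftext|>", "<|im_start|>", "<|im_end|>"]))  # [0, 1, 2]
print(tok.model_max_length)   # 8192
print(len(tok))               # total vocabulary size reported by the tokenizer
```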
3  training_args.bin  Normal file
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ca59e5463f7d76899108501d0cb5495ff9104250e7d099fff283896fa9566f8
size 6353
1  vocab.json  Normal file
(File diff suppressed because one or more lines are too long.)