---
#model-type:
## e.g. gpt, phi, llama, chatglm, baichuan
#- gpt

#domain:
## e.g. nlp, cv, audio, multi-modal
#- nlp

#language:
## list of language codes: https://help.aliyun.com/document_detail/215387.html?spm=a2c4g.11186623.0.0.9f8d7467kni6Aa
#- cn

#metrics:
## e.g. CIDEr, BLEU, ROUGE
#- CIDEr

#tags:
## any custom tags, including training methods such as pretrained, fine-tuned, instruction-tuned, RL-tuned, and others
#- pretrained

#tools:
## e.g. vllm, fastchat, llamacpp, AdaSeq
#- vllm
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- trl
- coder
- 7B
---

### The contributors of this model have not provided a more detailed model description. Model files and weights can be found on the "Model Files" page.

#### You can download the model with the git clone command below, or via the ModelScope SDK.

# **Viper-Coder-HybridMini-v1.3**

Viper-Coder-HybridMini-v1.3 is based on the Qwen 2.5 7B architecture and is designed to excel at coding and reasoning tasks. It has been fine-tuned on a synthetic dataset leveraging the latest coding logits and CoT datasets, further optimizing its **chain-of-thought (CoT) reasoning** and **logical problem-solving** abilities. The model demonstrates significant improvements in **context understanding, structured data processing, and long-context comprehension**, making it ideal for **complex coding tasks, instruction-following, and text generation**.

### **Key Improvements**
1. **Best-in-Class Coding Proficiency**: Enhanced understanding of programming languages, debugging, and code generation.
2. **Fine-Tuned Instruction Following**: Optimized for precise responses, structured outputs (e.g., JSON, YAML), and extended text generation (**8K+ tokens**); a structured-output sketch follows this list.
3. **Advanced Logical & Mathematical Reasoning**: Improved multi-step problem-solving and theorem proving.
4. **Long-Context Mastery**: Handles up to **128K tokens** of input, with an output capability of **8K tokens** per response.
5. **Multilingual Code Support**: Excels in **Python, JavaScript, C++, Java, SQL**, and other major programming languages, with documentation in **29+ languages**.
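
To illustrate item 2, here is a minimal sketch (not from the original card) that prompts the model for a JSON-formatted answer with the standard Transformers chat workflow; the system prompt and `max_new_tokens` value are illustrative assumptions rather than official recommendations:

```python
# Minimal sketch (assumptions: prompt wording and generation settings are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Viper-Coder-HybridMini-v1.3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "Reply only with valid JSON."},
    {"role": "user", "content": "Describe quicksort's average and worst-case time complexity as a JSON object."},
]
# apply_chat_template can tokenize directly and return tensors ready for generate().
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens (everything after the prompt).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
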
### **Quickstart with Transformers**

SDK download

```bash
# Install ModelScope
pip install modelscope
```

```python
# Download the model with the ModelScope SDK
from modelscope import snapshot_download

model_dir = snapshot_download('prithivMLmods/Viper-Coder-HybridMini-v1.3')
```
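
If you downloaded the weights with the ModelScope SDK as above, the returned local directory can be passed to Transformers in place of the Hub ID. This is a minimal sketch (not part of the original card), assuming the downloaded snapshot layout is compatible with `from_pretrained`:

```python
# Minimal sketch (assumption): point Transformers at the local ModelScope snapshot.
from modelscope import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = snapshot_download('prithivMLmods/Viper-Coder-HybridMini-v1.3')
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype="auto", device_map="auto")
```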

Git download

```bash
# Download the model via git
git clone https://www.modelscope.cn/prithivMLmods/Viper-Coder-HybridMini-v1.3.git
```

Then load the model and run a coding prompt with Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Viper-Coder-HybridMini-v1.3"

# Load the model and tokenizer; device_map="auto" places weights on available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Python function to merge two sorted lists."
messages = [
    {"role": "system", "content": "You are an advanced AI assistant with expert-level coding and reasoning abilities."},
    {"role": "user", "content": prompt}
]

# Build the chat prompt and tokenize it.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Keep only the newly generated tokens (drop the prompt portion).
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
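
For the longer generations highlighted above, it can be more convenient to stream tokens as they are produced instead of waiting for the full completion. Below is a minimal sketch using Transformers' `TextStreamer`, reusing `model`, `tokenizer`, and `model_inputs` from the quickstart; the `max_new_tokens` value is an illustrative choice:

```python
# Minimal sketch: stream a long completion to stdout as it is generated.
# Assumes `model`, `tokenizer`, and `model_inputs` from the quickstart above.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    **model_inputs,
    max_new_tokens=4096,  # illustrative; the card states up to 8K output tokens per response
    streamer=streamer,
)
```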

<p style="color: lightgrey;">If you are a contributor to this model, we invite you to promptly complete the model card according to the <a href="https://modelscope.cn/docs/ModelScope%E6%A8%A1%E5%9E%8B%E6%8E%A5%E5%85%A5%E6%B5%81%E7%A8%8B%E6%A6%82%E8%A7%88" style="color: lightgrey; text-decoration: underline;">model contribution documentation</a>.</p>

### **Intended Use**
- **Elite Coding & Debugging**: Best-in-class model for writing, analyzing, and optimizing code.
- **Complex Algorithmic Reasoning**: Solves intricate logic problems and algorithm-based challenges.
- **Scientific & Mathematical Computation**: Advanced support for formulas, equations, and theorem verification.
- **Structured Data Processing**: Seamlessly handles JSON, XML, SQL, and data pipeline automation.
- **Multilingual Programming Support**: Proficient in Python, JavaScript, C++, Java, Go, and more.
- **Extended Technical Content Generation**: Ideal for writing documentation, research papers, and technical blogs.

### **Limitations**
1. **Moderate Computational Demand**: Requires GPUs/TPUs for smooth inference due to its **7B parameters**, though it is more lightweight than larger models; a quantized-loading sketch follows this list.
2. **Language-Specific Variability**: Performance may vary across programming languages.
3. **Possible Error Propagation**: Extended text outputs might introduce logical inconsistencies.
4. **Limited Real-World Awareness**: The model does not have access to real-time information from the internet.
5. **Prompt Sensitivity**: Performance depends on how well the prompt is structured.
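
Regarding limitation 1, one common way to reduce GPU memory requirements is 4-bit quantized loading with bitsandbytes. This is a hedged sketch, not an official recommendation from the model card; it assumes the `bitsandbytes` package is installed and a CUDA GPU is available:

```python
# Minimal sketch (assumption): 4-bit quantized loading to lower inference memory.
# Requires `pip install bitsandbytes` and a CUDA-capable GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Viper-Coder-HybridMini-v1.3"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
```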