---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-0.6B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- general-reasoning
- moe
- math
---

![Lynx.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/GY-ZNgRUANl0ZLrGSFn_2.png)

# **Lynx-TinySync-0.6B**

> **Lynx-TinySync-0.6B** is a lightweight, high-performance model designed for **mathematical reasoning**, **code generation**, and **general-purpose inference**. Built on a custom modular dataset and an efficient architecture, it delivers structured, accurate outputs even in resource-constrained environments. Despite its compact **0.6B parameter** size, it demonstrates strong proficiency in math, code, and technical language understanding.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Lynx-TinySync-0.6B-GGUF](https://huggingface.co/prithivMLmods/Lynx-TinySync-0.6B-GGUF)
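
For CPU-only or very low-memory setups, the GGUF build can be run with `llama-cpp-python`. A minimal sketch, assuming the repo contains a Q8_0 quant; the filename pattern below is an assumption, so check the GGUF repo for the actual quant filenames:

```python
# Sketch: run the GGUF build with llama-cpp-python (pip install llama-cpp-python).
# NOTE: the filename pattern is an assumption -- verify the actual quant
# filenames in the GGUF repo before running.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="prithivMLmods/Lynx-TinySync-0.6B-GGUF",
    filename="*Q8_0.gguf",  # fnmatch pattern; downloads the matching quant
    n_ctx=4096,             # context window to allocate
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a step-by-step math tutor."},
        {"role": "user", "content": "Solve: 2(x - 4) + 3 = 11. Show all steps."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```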

---

## **Key Features**

1. **Custom Modular Dataset Training**
   Fine-tuned on a handcrafted blend of math, code, and reasoning datasets, ensuring high performance on symbolic tasks and general queries.

2. **Mathematical Reasoning**
   Handles algebra, calculus, geometry, and symbolic logic with clarity, making it well suited to tutoring, educational support, and math competitions.

3. **Compact Code Assistant**
   Generates clean, efficient code in Python, JavaScript, and more, complete with explanations and bug-fix breakdowns.

4. **Structured Output Generation**
   Outputs JSON, Markdown, LaTeX, and tabular formats, well suited to documentation, structured data templates, and technical content (see the structured-output example after the quickstart below).

5. **Multilingual Technical Reasoning**
   Supports math and code queries in 20+ languages with consistent output, enabling multilingual academic and professional use cases.

6. **Optimized for Low-Resource Deployment**
   With only 0.6B parameters, it is well suited to inference on edge devices, local machines, and GPU-constrained environments (a quantized-loading sketch follows this list).
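
Following up on item 6: for GPU-constrained machines, the model can also be loaded in 4-bit through Transformers' `bitsandbytes` integration. A minimal sketch; the quantization settings are illustrative assumptions, not tuned recommendations:

```python
# Sketch: 4-bit quantized loading to cut memory further.
# Assumes bitsandbytes is installed and a CUDA GPU is available;
# the settings below are illustrative, not tuned values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "prithivMLmods/Lynx-TinySync-0.6B"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights as 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # do matmuls in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```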

---

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Lynx-TinySync-0.6B"

# Load the model and tokenizer; dtype and device placement are picked automatically.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Solve the equation: 2(x - 4) + 3 = 11. Show all steps."

messages = [
    {"role": "system", "content": "You are a step-by-step math tutor."},
    {"role": "user", "content": prompt}
]

# Render the chat messages through the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
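
To exercise the structured-output capability from the feature list, the same `model` and `tokenizer` can be reused with a system prompt that pins the response format. A small sketch; the system prompt and extraction task are illustrative, not a documented interface:

```python
# Sketch: request strict JSON output, reusing `model` and `tokenizer` from above.
# The system prompt and the key names are illustrative assumptions.
import json

messages = [
    {"role": "system", "content": "Respond only with valid JSON. No prose."},
    {
        "role": "user",
        "content": 'Extract the name and year from: "Ada Lovelace wrote the first program in 1843." '
                   'Use the keys "name" and "year".',
    },
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=128)
generated_ids = [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)]
raw = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Check that the model actually produced parseable JSON.
try:
    print(json.loads(raw))
except json.JSONDecodeError:
    print("Model output was not valid JSON:", raw)
```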

---

## **Intended Use**

* Mathematical problem solving and symbolic logic
* Lightweight code generation and debugging
* Generation of structured content (e.g., JSON, LaTeX, Markdown)
* Educational support across languages and domains
* Low-resource deployment in academic or field settings

---

## **Limitations**

* May underperform on long-form creative generation tasks
* A smaller context window may limit deep multi-turn reasoning
* Less capable on adversarial or abstract reasoning queries
* Focused on technical multilingual use; general dialogue fluency is limited

---

## **References**

1. [Qwen2.5 Technical Report](https://arxiv.org/pdf/2412.15115)
2. [YaRN: Efficient Context Window Extension of Large Language Models](https://arxiv.org/pdf/2309.00071)