Initialize project; model provided by the ModelHub XC community
Model: prithivMLmods/Ross-640-BMath-1.5B Source: Original Platform
---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- math
- trl
- SFT
---

# **Ross-640-BMath-1.5B**

> **Ross-640-BMath-1.5B** is an **experimental, high-precision math explanation model** fine-tuned from **Qwen2.5-1.5B-Instruct**, designed to provide **step-by-step mathematical derivations** and **detailed concept explanations** across a wide range of mathematical domains. It is **not optimized for general reasoning or conversation**, and focuses primarily on **structured, non-reasoning math workflows** including algebra, calculus, number theory, and combinatorics.

> [!note]
> GGUF: [https://huggingface.co/prithivMLmods/Ross-640-BMath-1.5B-GGUF](https://huggingface.co/prithivMLmods/Ross-640-BMath-1.5B-GGUF)
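
For local, CPU-friendly inference with the GGUF weights, one option (not covered by the original card) is `llama-cpp-python`; the filename below is a placeholder assumption, so substitute whichever quantization you download from the repo linked above:

```python
from llama_cpp import Llama

# Placeholder filename: use the actual GGUF file downloaded from the repo above.
llm = Llama(model_path="Ross-640-BMath-1.5B.Q8_0.gguf", n_ctx=4096)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant skilled in solving complex math problems with clear and structured steps."},
        {"role": "user", "content": "Solve x^2 - 5x + 6 = 0 step by step."},
    ],
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```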

---

## **Key Features**

1. **Hard Math Concept Focus**
   Specializes in **algebra**, **calculus**, **combinatorics**, **linear algebra**, **number theory**, and more, delivering precise, low-latency outputs ideal for **math-intensive applications**.

2. **Step-by-Step Explanations**
   Emphasizes **procedural clarity** over abstract reasoning, offering structured, educational breakdowns of mathematical problems and derivations.

3. **Symbolic Computation & Annotation**
   Outputs include LaTeX-compatible syntax, inline math symbols, and clear annotations to support academic and technical workflows (a worked sketch follows this list).

4. **Educational Utility**
   Optimized for **learning and teaching**, providing clear responses to mathematical queries with minimal noise or conversational drift.

5. **Lightweight Architecture**
   Built on Qwen2.5-1.5B-Instruct and fine-tuned for **efficiency and precision**, making it suitable for deployment in **resource-constrained environments**, educational tools, or math-centric chat interfaces.
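
To illustrate item 3, here is the kind of LaTeX-annotated, step-by-step derivation the card describes, hand-worked for the sample integral used in the quickstart below (a reference sketch, not captured model output):

```latex
% Substitute u = x^3 + 3x, so du = (3x^2 + 3)\,dx = 3(x^2 + 1)\,dx
\begin{aligned}
\int \frac{x^2 + 1}{x^3 + 3x}\,dx
  &= \frac{1}{3} \int \frac{3(x^2 + 1)}{x^3 + 3x}\,dx
   = \frac{1}{3} \int \frac{du}{u} \\
  &= \frac{1}{3} \ln\lvert u \rvert + C
   = \frac{1}{3} \ln\lvert x^3 + 3x \rvert + C
\end{aligned}
```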

---

## **Quickstart with Transformers**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Ross-640-BMath-1.5B"

# Load the model with automatic dtype selection and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain step-by-step how to integrate (x^2 + 1)/(x^3 + 3x) dx."

messages = [
    {"role": "system", "content": "You are a helpful assistant skilled in solving complex math problems with clear and structured steps."},
    {"role": "user", "content": prompt}
]

# Render the chat messages into a single prompt string using the model's template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated answer remains
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
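
If you prefer to watch a derivation appear token by token, `transformers` ships a `TextStreamer` that drops into the same `generate` call; `do_sample=False` (greedy decoding) is an optional choice here, not something the card prescribes, that tends to make math outputs more reproducible:

```python
from transformers import TextStreamer

# Reuses `tokenizer`, `model`, and `model_inputs` from the quickstart above.
# skip_prompt hides the echoed input; tokens print to stdout as they arrive.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=False,  # greedy decoding for more reproducible derivations
    streamer=streamer,
)
```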

---

## **Intended Use**

* Detailed mathematical explanations and problem-solving
* Education-focused tutoring and math derivation tools
* Math-focused applications and formula documentation
* Symbolic derivations and LaTeX generation
* Integration with learning platforms and academic software (a minimal sketch follows this list)
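
For embedding the model in a tutoring app or learning platform, the high-level `pipeline` API offers a minimal integration sketch (the chat-message input relies on the model's bundled chat template; the prompt and parameters here are illustrative):

```python
from transformers import pipeline

# High-level wrapper: handles tokenization, chat templating, and decoding.
math_tutor = pipeline(
    "text-generation",
    model="prithivMLmods/Ross-640-BMath-1.5B",
    torch_dtype="auto",
    device_map="auto",
)

out = math_tutor(
    [{"role": "user", "content": "Derive the quadratic formula step by step."}],
    max_new_tokens=512,
)
# The pipeline returns the full conversation; the last message is the reply.
print(out[0]["generated_text"][-1]["content"])
```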

---

## **Limitations**

* Not suitable for general-purpose conversation or reasoning tasks
* Context length constraints may limit effectiveness on long proofs
* May struggle with non-mathematical or open-ended creative tasks
* Experimental: fine-tuned primarily for **explanation clarity**, not for deep symbolic reasoning or formal proof validation