---
license: llama3.2
datasets:
- prithivMLmods/PyThagoreans-Merged
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- math
- coder
- problem-solve
- open_coder
---

# **PyThagorean-1B**
PyThagorean [Python + Math] is a Python- and mathematics-focused model designed to solve mathematical problems through Python libraries and code. It has been fine-tuned on 1.5 million entries and is built on the Llama architecture. The model is available in several parameter sizes, including 10B, 3B, and 1B (Tiny). These instruction-tuned, text-only models are optimized for multilingual dialogue use cases, including agent-based retrieval and summarization tasks. PyThagorean is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions employ supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
# **Use with transformers**
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the Auto classes with the `generate()` function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
```python
import transformers
import torch

model_id = "prithivMLmods/PyThagorean-Tiny"

# Load the model with bfloat16 weights and automatic device placement.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Use a raw string for the user prompt so LaTeX backslashes (e.g. \frac) survive intact.
messages = [
    {"role": "system", "content": "You are a helpful assistant. Solve the mathematical problem in Python programming."},
    {"role": "user", "content": r"Find all real numbers $x$ such that \[\frac{x^3+2x^2}{x^2+3x+2} + x = -6.\] Enter all the solutions, separated by commas."},
]

outputs = pipeline(
    messages,
    max_new_tokens=256,
)
# The last message in the returned conversation is the assistant's reply.
print(outputs[0]["generated_text"][-1])
```
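
Alternatively, the same conversation can be run with the Auto classes and `generate()` directly. The following is a minimal sketch under the assumption that the repository ships a chat template; the prompt and `max_new_tokens` value mirror the pipeline example above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "prithivMLmods/PyThagorean-Tiny"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant. Solve the mathematical problem in Python programming."},
    {"role": "user", "content": r"Find all real numbers $x$ such that \[\frac{x^3+2x^2}{x^2+3x+2} + x = -6.\] Enter all the solutions, separated by commas."},
]

# Build the chat-formatted prompt and move it to the model's device.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```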
# **Intended Use**
1. **Mathematical Problem Solving**:
   PyThagorean is designed for solving complex mathematical problems, including algebra, calculus, trigonometry, and more, by leveraging Python-based libraries. It is ideal for educational tools, tutoring platforms, and automated math assistants.

2. **Python Code Generation**:
   The model generates Python code snippets for mathematical computations, simulations, and problem-solving, making it valuable for developers, researchers, and students (a short sketch of such output follows this list).

3. **Multilingual Dialogue Systems**:
   With support for multiple languages, PyThagorean can assist users worldwide in understanding and solving mathematical problems through dialogue-based interfaces.

4. **Instruction-Following Tasks**:
   The model excels at adhering to precise mathematical instructions and delivering accurate, step-by-step solutions for problems embedded in text.

5. **Agent-Based Knowledge Retrieval**:
   PyThagorean can retrieve and summarize mathematical concepts or problem-solving techniques, enabling quick access to relevant knowledge for educational and research purposes.

6. **Educational Content Creation**:
   It generates educational content such as example problems, solutions, and Python-based tutorials, aiding teachers and content creators.

7. **Summarization and Explanation**:
   The model provides clear explanations and breakdowns of mathematical solutions, helping users understand the rationale and process behind the answers.
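
As a concrete illustration of items 1 and 2, the kind of Python the model is expected to produce for the worked example above looks like the following sympy sketch (illustrative only, not actual model output):

```python
from sympy import symbols, Eq, solve

x = symbols("x")
denominator = x**2 + 3*x + 2
# Solve (x^3 + 2x^2)/(x^2 + 3x + 2) + x = -6.
lhs = (x**3 + 2*x**2) / denominator + x
# Keep only roots that do not zero the original denominator (x = -2 is spurious).
solutions = [s for s in solve(Eq(lhs, -6), x) if denominator.subs(x, s) != 0]
print(solutions)  # [-3/2]
```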
# **Limitations**
1. **Performance on Ambiguous Instructions**:
   The model may struggle with ambiguous, vague, or poorly framed mathematical instructions, potentially leading to incorrect or incomplete solutions.

2. **Edge Cases and Special Scenarios**:
   For highly specialized or niche mathematical problems, especially those rarely seen in the training data, the model's performance may degrade.

3. **Errors in Multi-Step Reasoning**:
   While trained on reasoning datasets, the model may sometimes produce incorrect results for multi-step or highly complex reasoning tasks, particularly if intermediate steps are not explicitly defined.

4. **Bias Toward Common Solutions**:
   The model may favor standard or commonly used approaches to mathematical problems, potentially missing creative or less conventional solution methods.

5. **Resource Intensity**:
   As a large-scale model, PyThagorean requires significant computational resources, including high-end GPUs, for efficient inference and deployment.

6. **Context Window Limitations**:
   The model's finite context window may lead to incomplete understanding or truncated responses for problems requiring extensive context or lengthy input.

7. **Handling of Non-Mathematical Queries**:
   While capable of engaging in general conversation, its performance on non-mathematical tasks may not match that of models tuned for broader use cases.

8. **Dependency on Python Libraries**:
   Generated solutions may rely on specific Python libraries or functions, which users must have installed and configured correctly to run the code successfully (a minimal dependency check appears after this list).
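
Before executing model-generated code, a lightweight guard such as the following sketch can catch missing packages; the `required` list here is hypothetical and should match the imports in the generated solution:

```python
import importlib.util

# Hypothetical packages a generated solution might import.
required = ["sympy", "numpy"]
missing = [pkg for pkg in required if importlib.util.find_spec(pkg) is None]
if missing:
    raise SystemExit("Missing packages, run: pip install " + " ".join(missing))
```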