---
tags:
- Coder
- Math
- qwen2
- thinking
- reasoning
model-index:
- name: Palmyra-mini-thinking-a
  results: []
license: apache-2.0
pipeline_tag: text-generation
language:
- en
library_name: transformers
---

<div align="center">
<h1>Palmyra-mini-thinking-a</h1>
</div>

### Model Description

- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** Qwen/Qwen2.5-1.5B
- **Context window:** 131,072 tokens
- **Parameters:** 1.7 billion

## Model Details

The palmyra-mini-thinking-a model performs strongly on advanced mathematical reasoning and competitive programming. Its score of 0.886 on the MATH500 benchmark shows a robust ability to solve complex mathematical problems, and its 0.8287 on gsm8k (strict-match) demonstrates proficiency in multi-step arithmetic reasoning. The model further proves its aptitude for olympiad-level problem solving with a score of 0.8 on AMC23 and 0.5481 on Olympiadbench (extractive_match). In the coding domain, it achieves 0.5631 on Codeforces (pass_rate), indicating competence in generating correct solutions to programming challenges.

## Benchmark Performance

This section provides a detailed breakdown of the palmyra-mini-thinking-a model's performance across a standardized set of industry benchmarks. The scores are presented in their original order from the source evaluation.

| Benchmark | Score |
|:-----------------------------------------------------------------|---------:|
| gsm8k (strict-match) | 0.8287 |
| minerva_math (exact_match) | 0.3842 |
| mmlu_pro (exact_match) | 0.2748 |
| hendrycks_math | 0.0054 |
| ifeval (inst_level_loose_acc) | 0.3657 |
| mathqa (acc) | 0.4171 |
| humaneval (pass@1) | 0.2378 |
| BBH (get-answer) (exact_match) | 0.462 |
| mbpp | 0.304 |
| leaderboard_musr (acc_norm) | 0.3413 |
| gpqa lighteval gpqa diamond_pass@1:8_samples | 0.3826 |
| AIME24 (pass@1) (avg-of-1) | 0.4333 |
| AIME25 (pass@1) (avg-of-1) | 0.3667 |
| Livecodebench-codegen (livecodebench/code_generation_lite v4_v5) | 0.1784 |
| AMC23 | 0.8 |
| MATH500 | 0.886 |
| Minerva | 0.3493 |
| Olympiadbench (extractive_match) | 0.5481 |
| Codecontests (pass_rate) | 0.1778 |
| Codeforces (pass_rate) | 0.5631 |
| Taco (pass_rate) | 0.3083 |
| APPS (all_levels) | 0.0447 |
| HMMT23 (extractive_match) | 0.1 |
| Average | 0.380839 |
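
Several of the task names above (gsm8k strict-match, ifeval inst_level_loose_acc, leaderboard_musr acc_norm) match task names from EleutherAI's lm-evaluation-harness. Assuming that harness was the evaluation source, here is a minimal sketch for reproducing a single row of the table; exact scores may differ with batch size, dtype, and harness version:

```py
# pip install lm-eval
import lm_eval

# Evaluate the model on one harness task using the Hugging Face backend.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Writer/palmyra-mini-thinking-a,dtype=float16",
    tasks=["gsm8k"],  # reported above as gsm8k (strict-match)
    batch_size=8,
)

# Per-task metrics, including the strict-match accuracy, live under "results".
print(results["results"]["gsm8k"])
```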
### Use with transformers

You can run conversational inference using the Transformers Auto classes with the `generate()` function. Here's an example:

```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Writer/palmyra-mini-thinking-a"

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    # Requires the flash-attn package; remove this line to fall back to
    # the default attention implementation.
    attn_implementation="flash_attention_2",
)

messages = [
    {
        "role": "user",
        "content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?",
    }
]

# Build the prompt with the model's chat template and move it to the
# device the weights were loaded onto.
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

gen_conf = {
    "max_new_tokens": 256,  # increase for long reasoning traces
    "eos_token_id": tokenizer.eos_token_id,
    "do_sample": True,  # required for temperature/top_p to take effect
    "temperature": 0.3,
    "top_p": 0.9,
}

with torch.inference_mode():
    output_id = model.generate(input_ids, **gen_conf)

# Decode only the newly generated tokens, skipping the prompt.
output_text = tokenizer.decode(output_id[0][input_ids.shape[1]:], skip_special_tokens=True)

print(output_text)
```
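
Since this is a thinking-style model, responses can include a long reasoning trace before the final answer, so streaming tokens as they arrive is often more convenient. A minimal sketch using transformers' `TextStreamer`, reusing the `model`, `tokenizer`, `input_ids`, and `gen_conf` objects from the example above:

```py
from transformers import TextStreamer

# Prints tokens to stdout as they are generated; skip_prompt avoids
# re-printing the input conversation.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.inference_mode():
    model.generate(input_ids, streamer=streamer, **gen_conf)
```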
## Running with vLLM

```sh
vllm serve Writer/palmyra-mini-thinking-a
```

```sh
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Writer/palmyra-mini-thinking-a",
    "messages": [
      {
        "role": "user",
        "content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?"
      }
    ],
    "max_tokens": 8000,
    "temperature": 0.2
  }'
```
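
Because vLLM exposes an OpenAI-compatible API, you can also query the server from Python. A minimal sketch using the official `openai` client, assuming the default server address from the command above:

```py
from openai import OpenAI

# vLLM does not check the API key by default; any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Writer/palmyra-mini-thinking-a",
    messages=[
        {
            "role": "user",
            "content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?",
        }
    ],
    max_tokens=8000,
    temperature=0.2,
)

print(response.choices[0].message.content)
```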
## Ethical Considerations

As with any language model, there is a potential for generating biased or inaccurate information. Users should be aware of these limitations and use the model responsibly.

### Citation and Related Information

To cite this model:

```bibtex
@misc{Palmyra-mini-thinking-a,
  author = {Writer Engineering team},
  title = {{Palmyra-mini: A powerful LLM designed for math and coding}},
  howpublished = {\url{https://dev.writer.com}},
  year = 2025,
  month = sep
}
```

Contact: Hello@writer.com