---
base_model: unsloth/Qwen2.5-Coder-3B-Instruct
library_name: transformers
pipeline_tag: text-generation
tags:
- qwen2.5
- sft
- transformers
- ilograph
license: mit
datasets:
- Brigham-Young-University/Ilograph_dataset
language:
- en
new_version: Brigham-Young-University/Qwen3-Coder-30B-A3B-Ilograph-Instruct
---

# Model Card for Qwen2.5-Coder-3B-Instruct (fine-tuned model)

A fully fine-tuned version of **Qwen2.5-Coder-3B-Instruct**, trained with LoRA using Unsloth and then merged into a standalone model. This checkpoint can be used directly as a regular Transformers causal language model. It is specialized for **Ilograph diagrams**: it generates valid **Ilograph Diagram Language (IDL)** specifications from natural-language instructions.
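
For illustration only, a minimal IDL specification has the shape below. This is a sketch based on Ilograph's public documentation, not an output of this model, and the resource and relation names are invented:

```yaml
# Top-level resources define the entities in the diagram.
resources:
- name: Web App
  subtitle: Frontend
- name: API Server
- name: Database

# Perspectives describe how resources relate in a given view.
perspectives:
- name: Dependencies
  relations:
  - from: Web App
    to: API Server
    label: calls
  - from: API Server
    to: Database
    label: reads/writes
```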

The repository includes a **system prompt** you can pass to the model and an **IDL schema** (JSON) that describes the expected output format.

## Model Details

- **Developed by:** Chris Mijangos (AI student architect at BYU)
- **Shared by:** Brigham Young University (BYU)
- **Model type:** Causal language model (decoder-only), fine-tuned Qwen2.5-Coder-3B-Instruct (trained with LoRA, merged into base weights)
- **Language(s):** Primarily English; capabilities depend on the base model and fine-tuning data
- **License:** Same as the base model; verify the [Qwen2.5-Coder-3B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-3B-Instruct) license terms before use
- **Finetuned from:** [unsloth/Qwen2.5-Coder-3B-Instruct](https://huggingface.co/unsloth/Qwen2.5-Coder-3B-Instruct)

### Model Sources

- **Repository:** This model card and the weights are shared via the associated Hugging Face repo
- **Demo:** N/A (under construction)

## Uses

### Direct Use

Use the model to generate **Ilograph (IDL)** diagram specifications from natural-language instructions, together with the system prompt and schema provided in the repository (see below). See the "How to Get Started" section below for loading the model.

### Out-of-Scope Use

This model is not intended for high-risk or safety-critical applications without further evaluation. Do not use it to generate misleading, harmful, or illegal content. Users are responsible for complying with applicable laws and the base model's license.

## Bias, Risks, and Limitations

As with other language models, this model may reflect biases present in the base model and in the fine-tuning data. Outputs should be validated for your use case. No formal bias or safety evaluation is provided with this release.

Due to limited, focused training data and the small size of the base model, this model is primarily suited for relatively simple Ilograph diagrams centered on **resources, relationships, and sequences**. For more complex, large-scale, or highly customized diagram structures, it may not perform as well, and additional fine-tuning or a larger base model may be required. If you need more complex diagrams, consider our newer Qwen3-Coder-30B-A3B-Ilograph-Instruct version.

### Recommendations

Users should evaluate the model on their own data and tasks and be aware of potential biases and limitations before deployment.

## How to Get Started with the Model

The merged checkpoint loads as a regular Transformers model. Install dependencies:

```bash
pip install transformers peft accelerate
```

Load the fine-tuned model directly from this repository:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Brigham-Young-University/Qwen2.5-Coder-3B-Ilograph-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)

inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Ilograph (IDL) system prompt and schema

The repository includes a **system prompt** and an **IDL schema** (JSON). Use the schema to fill in the placeholder in the prompt, then append your instruction. Example system prompt:

```
You are an expert in the Ilograph Diagram Language (IDL). You have been trained on data that is formatted in the following way:

<insert the schema JSON here>

Your task is to create a valid IDL specification for the diagram. You will be given an instruction of what to create, and you will need to create a valid IDL specification for the diagram.

CRITICAL RULES:
- NEVER use JSON format
- NEVER use Mermaid syntax
- NEVER use any format except ilograph YAML
- Use YAML syntax with proper indentation

Here is the instruction:
```

The schema is provided in the repository; inject its contents (e.g. as formatted JSON) where indicated above, then add your diagram instruction after "Here is the instruction:".
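
Putting the pieces together, a minimal sketch of assembling the chat messages. The schema dict and instruction below are placeholders (the actual schema JSON ships in the repository), and `build_messages` is a hypothetical helper, not part of this repo:

```python
import json

# The system prompt from the repository, with a {schema} placeholder
# where the schema JSON should be injected.
SYSTEM_TEMPLATE = """You are an expert in the Ilograph Diagram Language (IDL). You have been trained on data that is formatted in the following way:

{schema}

Your task is to create a valid IDL specification for the diagram. You will be given an instruction of what to create, and you will need to create a valid IDL specification for the diagram.

CRITICAL RULES:
- NEVER use JSON format
- NEVER use Mermaid syntax
- NEVER use any format except ilograph YAML
- Use YAML syntax with proper indentation

Here is the instruction:"""


def build_messages(schema: dict, instruction: str) -> list[dict]:
    """Fill the schema placeholder and pair the system prompt with the user instruction."""
    system_prompt = SYSTEM_TEMPLATE.format(schema=json.dumps(schema, indent=2))
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": instruction},
    ]


# Placeholder schema; load the real one from the repository's JSON file.
messages = build_messages(
    {"resources": "..."},
    "Create a diagram of a web app with a database.",
)
# Pass `messages` to tokenizer.apply_chat_template(..., add_generation_prompt=True)
# before calling model.generate.
```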

## Evaluation

No formal evaluation results are provided with this release. Users are encouraged to evaluate the model on their own benchmarks and tasks.
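
As a starting point for such evaluation, one cheap sanity check is whether generated output parses as YAML with an Ilograph-style top level. This is a minimal sketch; the `resources` key follows the public Ilograph documentation, and you should extend the check using the schema JSON shipped with this repository:

```python
import yaml  # PyYAML


def looks_like_idl(text: str) -> bool:
    """Return True if `text` parses as YAML into a mapping with a `resources` key."""
    try:
        doc = yaml.safe_load(text)
    except yaml.YAMLError:
        return False
    return isinstance(doc, dict) and "resources" in doc
```

This only checks surface validity (YAML parses, expected top-level key present); rendering the output in Ilograph remains the real test.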

## Model Card Authors

- Chris Mijangos (BYU)

## Model Card Contact

For questions about this model card or the model, please open an issue on the associated Hugging Face repository or contact the author through BYU.

### Framework versions

- PEFT 0.18.1