Initialize project; model provided by the ModelHub XC community
Model: chipcraftx-io/chipcraftx-rtlgen-7b Source: Original Platform
---
license: cc-by-nc-4.0
language:
- en
library_name: transformers
tags:
- verilog
- rtl
- hardware-design
- eda
- code-generation
- chip-design
base_model: Qwen/Qwen2.5-Coder-7B-Instruct
pipeline_tag: text-generation
model-index:
- name: chipcraftx-rtlgen-7b
  results:
  - task:
      type: text-generation
      name: Verilog Code Generation
    dataset:
      name: VerilogEval-Human
      type: verilogeval
    metrics:
    - name: Functional Pass Rate (standalone)
      type: pass@1
      value: 36.5
      verified: true
    - name: Functional Pass Rate (ChipCraftX hybrid system)
      type: pass@1
      value: 98.7
      verified: true
---

# chipcraftx-rtlgen-7b

**The local RTL generation engine powering [ChipCraftX](https://chipcraftx.io)** -- an AI platform that converts natural language specifications into synthesizable Verilog.

`chipcraftx-rtlgen-7b` handles first-pass Verilog generation at zero API cost. Within the full ChipCraftX hybrid pipeline, the system achieves a **98.7% functional pass rate on VerilogEval-Human (154/156)**.

## Benchmark Results

### VerilogEval-Human (156 problems, functional simulation)

| Model | Parameters | Pass Rate |
|-------|-----------|-----------|
| VeriGen | 16B | 26.0% |
| **chipcraftx-rtlgen-7b (standalone)** | **7B** | **36.5%** |
| RTLCoder | 7B | 37.0% |
| CodeV | 7B | 53.2% |
| **ChipCraftX hybrid system** | **7B + Claude** | **98.7%** |
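
The scores above are pass@1 numbers. For reference, and assuming the evaluation follows the standard HumanEval-style pass@k methodology (this is an assumption; the card does not spell out the sampling protocol), the per-problem estimator looks like the sketch below; the benchmark score is its mean over the 156 problems, and at k = 1 it reduces to the plain pass rate (e.g. 154/156 ≈ 98.7%).

```python
# Hedged sketch of the standard unbiased pass@k estimator (HumanEval/Codex
# methodology); whether this exact protocol was used here is an assumption.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n = samples drawn for one problem, c = samples that pass, k = budget."""
    if n - c < k:
        return 1.0  # a passing sample is guaranteed within the budget
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# At k = 1 this reduces to c / n per problem; the benchmark-level score is the
# mean over problems (154 passing problems out of 156 gives ~0.987).
print(pass_at_k(n=1, c=1, k=1))  # 1.0 for a problem whose single sample passes
```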
## Model Details

- **Base model**: [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)
- **Fine-tuning**: QLoRA (rank 64, alpha 128) on 76,811 Verilog training samples (sketched below)
- **Training**: 3 epochs, learning rate 2e-4, batch size 4
- **Architecture**: 28 layers, 3584 hidden size, 28 attention heads
- **Context window**: 4,096 tokens (generation), 32,768 (max position embeddings)
- **Precision**: bfloat16
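
For readers who want to reproduce a comparable setup, here is a minimal, hedged sketch of a QLoRA configuration matching the hyperparameters above, using `transformers` and `peft`. The 4-bit NF4 quantization, LoRA target modules, dropout, and output directory are assumptions not stated in this card, and the proprietary training data and trainer wiring are intentionally omitted.

```python
# Hedged sketch of a QLoRA setup consistent with the Model Details above.
# NF4 quantization, target modules, and dropout are assumptions; the training
# corpus and Trainer/SFTTrainer wiring are not part of this card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "Qwen/Qwen2.5-Coder-7B-Instruct"
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=64,                 # rank 64 (from the card)
    lora_alpha=128,       # alpha 128 (from the card)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    lora_dropout=0.05,    # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

args = TrainingArguments(
    output_dir="rtlgen-7b-qlora",   # placeholder
    num_train_epochs=3,             # from the card
    learning_rate=2e-4,             # from the card
    per_device_train_batch_size=4,  # from the card
    bf16=True,                      # bfloat16 precision (from the card)
)
# A Trainer (or trl's SFTTrainer) would consume `model`, `args`, and the dataset here.
```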
## Usage

### Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ChipCraftX/chipcraftx-rtlgen-7b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

system_prompt = """You are ChipCraft-RTL, an expert Verilog design engineer.
Generate synthesizable, lint-clean RTL that exactly matches the specification.
Rules:
- Output ONLY Verilog code (no prose, no markdown fences).
- Use reg and wire types ONLY -- NEVER use logic.
- Use always @(posedge clk) and always @(*) -- NEVER use always_ff or always_comb.
- Module name MUST be TopModule."""

spec = """Implement a module named TopModule with the following interface.
- input clk
- input reset
- output [3:0] count

The module should implement a 4-bit up counter with synchronous reset."""

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": spec},
]

# Build the chat prompt, generate at low temperature, and decode only the new tokens.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=4096, temperature=0.2, do_sample=True)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

### Ollama

See [chipcraftx-rtlgen-7b-GGUF](https://huggingface.co/ChipCraftX/chipcraftx-rtlgen-7b-GGUF) for quantized GGUF versions compatible with Ollama and llama.cpp.
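
For a purely local workflow, one option (a hedged sketch, not an official integration) is the `ollama` Python client after creating a local model from one of those GGUF files; the tag `chipcraftx-rtlgen` below is hypothetical and depends on how you name the model when you create it.

```python
# Hedged sketch: assumes you have already created a local Ollama model named
# "chipcraftx-rtlgen" (hypothetical tag) from a GGUF file in the repo above.
import ollama

response = ollama.chat(
    model="chipcraftx-rtlgen",
    messages=[
        {"role": "system", "content": "You are ChipCraft-RTL, an expert Verilog design engineer."},
        {"role": "user", "content": "Implement TopModule: a 4-bit up counter with synchronous reset."},
    ],
    options={"temperature": 0.2},
)
print(response["message"]["content"])
```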
## Training Data

The model was fine-tuned on a proprietary dataset of 76,811 Verilog samples.
## Intended Use

This model is designed for:

- First-pass RTL/Verilog code generation from natural language specs
- Integration into automated EDA pipelines with validation feedback loops
- Educational use in digital design courses
- Rapid prototyping of hardware modules
### Limitations

- The standalone pass rate (36.5%) means roughly 2 out of 3 problems need iteration or human review
- Strongest on combinational logic; weaker on FSMs and sequential designs
- Outputs target Verilog-2001 / Icarus Verilog compatibility (not full SystemVerilog)
- Should always be paired with EDA validation (iverilog, Yosys) before use in production; a minimal validation sketch follows below
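
As a minimal illustration of that last point, the sketch below wraps any generator function (for example, the Transformers snippet above) in a compile-check loop using Icarus Verilog. The retry policy and error-feedback prompt are illustrative assumptions, not the actual ChipCraftX pipeline.

```python
# Hedged sketch of pairing generation with EDA validation (Icarus Verilog).
# The feedback/retry policy is illustrative, not the ChipCraftX pipeline.
import subprocess
import tempfile
from pathlib import Path
from typing import Callable

def compiles_with_iverilog(verilog_source: str) -> tuple[bool, str]:
    """Compile-only check with iverilog; returns (ok, compiler log)."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "top.v"
        src.write_text(verilog_source)
        result = subprocess.run(
            ["iverilog", "-o", str(Path(tmp) / "a.out"), str(src)],
            capture_output=True, text=True,
        )
        return result.returncode == 0, result.stderr

def generate_with_validation(generate_fn: Callable[[str], str], spec: str,
                             max_attempts: int = 3) -> str:
    """Call generate_fn (e.g. the Transformers snippet above wrapped in a
    function) and retry with the compiler log appended to the spec."""
    prompt = spec
    for _ in range(max_attempts):
        rtl = generate_fn(prompt)
        ok, log = compiles_with_iverilog(rtl)
        if ok:
            return rtl
        prompt = f"{spec}\n\nThe previous attempt failed to compile:\n{log}\nFix the errors."
    raise RuntimeError("No compiling RTL produced within the attempt budget")
```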

## About ChipCraftX

[ChipCraftX](https://chipcraftx.io) is an AI-powered platform that converts natural language specifications into verified, synthesizable hardware descriptions. The platform combines local and cloud models with automated EDA validation to achieve near-perfect scores on standard RTL benchmarks.
## Citation

```bibtex
@misc{chipcraftx-rtlgen-7b,
  title={chipcraftx-rtlgen-7b: Local RTL Generation Engine for ChipCraftX},
  author={Eryilmaz, Cagri},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/ChipCraftX/chipcraftx-rtlgen-7b}
}
```