---
license: mit
language:
- en
tags:
- pql
- compliance
- governance
- phi-3
base_model: microsoft/Phi-3-mini-4k-instruct
---
# NISHKA GKC
A Governance Knowledge Corpus (GKC) model fine-tuned on 1.12M tokens of regulatory content spanning 15 compliance frameworks.
## Model Details
- **Base Model**: microsoft/Phi-3-mini-4k-instruct
- **Architecture**: Phi-3 (3.8B parameters)
- **Training**: LoRA fine-tuning, with the adapter merged into the base model (see the sketch after this list)
- **Format**: Full model weights (no adapter needed)
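
The merge step itself isn't documented here, but a merged release like this is typically produced with peft's `merge_and_unload`; a minimal sketch, assuming a hypothetical local adapter path (not the actual training artifact):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, attach the trained LoRA adapter, then fold the
# low-rank deltas into the base weights so no adapter is needed at inference.
base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
peft_model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical path
merged = peft_model.merge_and_unload()
merged.save_pretrained("nishka-gkc-merged")
```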
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged weights; device_map="auto" places layers on available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "openpql/nishka-gkc",
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("openpql/nishka-gkc")

# Generate: move inputs to the model's device and cap only the new tokens
# (max_length would count the prompt against the budget).
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
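Since the base model is instruction-tuned, prompts generally work better when wrapped in the Phi-3 chat template that ships with the tokenizer. A minimal sketch, continuing from the snippet above (the prompt content is illustrative):

```python
# Build a chat-formatted prompt and decode only the newly generated tokens.
messages = [
    {"role": "user", "content": "Summarize the key data-retention obligations under GDPR."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```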
## Deployment
Because the weights are fully merged, the model can be served directly with vLLM, TGI, or other standard inference servers; no adapter loading is required.
```bash
# vLLM example
vllm serve openpql/nishka-gkc --dtype float16
```
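Once running, `vllm serve` exposes an OpenAI-compatible API (port 8000 by default), so any OpenAI client can query the model. A minimal sketch, assuming `pip install openai` and the default endpoint:

```python
from openai import OpenAI

# Point the client at the local vLLM server; the API key is unused locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="openpql/nishka-gkc",
    messages=[{"role": "user", "content": "Which controls govern access management?"}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```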