---
license: mit
language:
- en
tags:
- pql
- compliance
- governance
- phi-3
base_model: microsoft/Phi-3-mini-4k-instruct
---

# NISHKA GKC

NISHKA GKC is a Governance Knowledge Corpus model trained on 1.12M tokens of regulatory content spanning 15 compliance frameworks.

## Model Details

- **Base Model:** microsoft/Phi-3-mini-4k-instruct
- **Architecture:** Phi-3 (3.8B parameters)
- **Training:** LoRA adapter merged into the base model (see the sketch below)
- **Format:** Full model weights (no separate adapter needed)
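
For reference, a minimal sketch of how a LoRA adapter is typically folded into base weights with peft's `merge_and_unload()`; the adapter path and output directory below are hypothetical, not part of this repository:

```python
# Illustrative LoRA merge sketch; adapter path and output dir are hypothetical.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
# Load the trained LoRA adapter on top of the base model, then fold its
# low-rank updates into the base weights so no adapter is needed at inference.
merged = PeftModel.from_pretrained(base, "path/to/lora-adapter").merge_and_unload()
merged.save_pretrained("nishka-gkc-merged")
```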

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "openpql/nishka-gkc",
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("openpql/nishka-gkc")

# Generate: move inputs to the model's device so this also works
# when device_map places the model on GPU.
inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
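
Since the base model is instruct-tuned, prompts generally work best when wrapped in Phi-3's chat template. A sketch, using a hypothetical compliance question:

```python
# The question is a hypothetical example; apply_chat_template wraps it in
# the Phi-3 instruct format inherited from the base model.
messages = [{"role": "user", "content": "Summarize the access-control requirements in SOC 2."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
# Slice off the prompt tokens so only the model's answer is printed.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```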

## Deployment

This model is ready for deployment with vLLM, TGI, or other inference servers.

```bash
# vLLM example
vllm serve openpql/nishka-gkc --dtype float16
```
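
Once running, vLLM exposes an OpenAI-compatible API (port 8000 by default), so the server can be queried with the standard `openai` client; the prompt below is a hypothetical example:

```python
# Query the vLLM server through its OpenAI-compatible endpoint;
# the prompt is a hypothetical example.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.completions.create(
    model="openpql/nishka-gkc",
    prompt="List the breach-notification deadlines under GDPR.",
    max_tokens=256,
)
print(response.choices[0].text)
```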