---
license: cc-by-sa-4.0
library_name: transformers
---
# ⚖️ Subrit's Legal AI (Quecto V1)

**Model:** subrit-legal-gpt2-quecto-v1
**Author:** Subrit Dikshit
**License:** CC BY-SA 4.0
This is a specialized miniature **Legal AI** trained from scratch and fine-tuned on the **Indian Penal Code (IPC)**, the **CrPC**, and the **Constitution**. It runs efficiently on consumer hardware (CPUs) using GGUF quantization.
## ⚠️ Limitations & Disclaimer
- **Model Architecture:** This model uses a custom GPT-2 configuration defined from scratch. It is trained from scratch rather than being a direct fine-tune of the gpt2-small checkpoint, but it uses the standard `GPT2LMHeadModel` structure for compatibility. It performs best on simple definition and punishment questions.
- **Reasoning Limits:** Due to its small size, it is **not** capable of complex reasoning, multi-turn logic, or lawyer-level argumentation.
- **Hallucinations:** Like all Small Language Models (SLMs), this model can hallucinate (generate plausible-sounding but incorrect information). **Always verify specific section numbers and punishments against official legal texts** (a toy post-check is sketched after this list).
- **Usage:** This is a research prototype for educational purposes. It is **not** a substitute for professional legal advice.
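As a hedged illustration of that verification advice, the sketch below flags section numbers in a generated answer that are missing from a small hand-curated lookup of official texts. The `OFFICIAL_SECTIONS` table and the matching logic are hypothetical placeholders for illustration, not part of this repository:

```python
import re

# Hypothetical, hand-curated excerpts from official texts (illustrative only).
OFFICIAL_SECTIONS = {
    "302": "Punishment for murder: death, or imprisonment for life, and fine.",
    "420": "Cheating and dishonestly inducing delivery of property.",
}

def flag_unverified_sections(answer: str) -> list[str]:
    """Return section numbers mentioned in the answer that are absent
    from the curated lookup and therefore need manual verification."""
    mentioned = re.findall(r"[Ss]ection\s+(\d+)", answer)
    return [s for s in mentioned if s not in OFFICIAL_SECTIONS]

print(flag_unverified_sections("Section 302 and Section 999 apply here."))
# -> ['999']  (999 is not in the lookup, so check it by hand)
```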
## 📦 Model Details
- **Architecture:** Custom GPT-2 configuration (trained from scratch).
- **Training Data:** Indian legal texts (IPC, CrPC, Constitution).
- **Formats Included** (a snippet for listing the repository files follows this list):
  - **PyTorch:** Standard non-quantized weights for GPU inference or further fine-tuning (~500 MB).
  - **GGUF (Q8_0):** 8-bit quantized for fast CPU/edge inference (~130 MB).
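To see which of these files are actually present before downloading anything large, one option is `huggingface_hub`'s file listing. A minimal sketch, assuming the repo id used elsewhere in this card:

```python
from huggingface_hub import list_repo_files

# List every file in the model repository without downloading any weights.
for filename in list_repo_files("subrit/subrit-legal-gpt2-quecto-v1"):
    print(filename)
```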
## 👨‍💻 Credits & Attribution
This model was trained by **Subrit Dikshit**.

- **Training Data:** [Techmaestro369/indian-legal-texts-finetuning](https://huggingface.co/datasets/Techmaestro369/indian-legal-texts-finetuning) (CC BY-SA 4.0).
- **Base Model:** Custom GPT-2 configuration (trained from scratch).
## 🚀 How to Run (Demo Script)
This repository contains **two versions** of the model. Choose the one that fits your needs.
### 🔧 Prerequisites

- **Python:** 3.10 or 3.11 is recommended (a quick version check is sketched after this list).
- **OS:** Windows, macOS, or Linux.
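If you want to confirm the interpreter version from inside Python, a one-liner like the following works; the version bounds simply mirror the recommendation above:

```python
import sys

# Warn if the running interpreter is outside the recommended range.
if not (3, 10) <= sys.version_info[:2] <= (3, 11):
    print(f"Warning: Python {sys.version.split()[0]} detected; 3.10 or 3.11 is recommended.")
```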
### Option 1: Run the PyTorch Version (Standard HF)

Use this if you are using the standard `transformers` library or have a GPU.

**Requires:** `pip install transformers torch`
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# 1. Load from Hugging Face
model_name = "subrit/subrit-legal-gpt2-quecto-v1"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# 2. Ask a Question
input_text = "Question: What is Article 14 of the Constitution?\nAnswer:"
inputs = tokenizer(input_text, return_tensors="pt")

# 3. Generate Answer
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
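By default `generate` decodes greedily; if the output loops or reads flat, sampling can help. A minimal sketch continuing from the block above; the temperature and top-p values are illustrative assumptions, not tuned settings from the author:

```python
import torch

# Move to GPU when one is available (optional).
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
inputs = {k: v.to(device) for k, v in inputs.items()}

# Sampled generation; parameter values are illustrative, not tuned.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # silences GPT-2's missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```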
### Option 2: Run the GGUF Version (Recommended for Speed/CPU)

Use this if you want to run the model on a laptop CPU without a GPU.

**Requires:** `pip install llama-cpp-python huggingface_hub`
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# 1. Download the GGUF file
model_path = hf_hub_download(
    repo_id="subrit/subrit-legal-gpt2-quecto-v1",
    filename="subrit_legal_gpt2_q8.gguf"
)

# 2. Load the Engine
llm = Llama(model_path=model_path, n_ctx=512, verbose=False)

# 3. Ask a Question
question = "What is the punishment for murder under Section 302?"
output = llm(f"Question: {question}\nAnswer:", max_tokens=60, stop=["Question:", "\n"])
print(output['choices'][0]['text'])
```
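Since the `Llama` object stays loaded in memory, reusing it across questions is cheap. A minimal interactive loop built on the block above; the prompt format simply mirrors the one shown there:

```python
# Simple REPL reusing the already-loaded llm object from the previous block.
while True:
    question = input("Ask a legal question (or 'quit'): ").strip()
    if question.lower() == "quit":
        break
    output = llm(f"Question: {question}\nAnswer:", max_tokens=60, stop=["Question:", "\n"])
    print(output["choices"][0]["text"].strip())
```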
If you use this model, please cite it as follows:
```bibtex
@misc{dikshit2025legalgpt2,
  author       = {Dikshit, Subrit},
  title        = {Subrit's Legal AI (Quecto V1): A Quantized GPT-2 Fine-Tune on Indian Law},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face Model Hub},
  howpublished = {\url{https://huggingface.co/subrit/subrit-legal-gpt2-quecto-v1}}
}
```
**Acknowledgements:**
```bibtex
@dataset{indian_legal_texts,
  author    = {Gupta, Akshat (Techmaestro369)},
  title     = {Indian Legal Texts Finetuning Dataset},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/Techmaestro369/indian-legal-texts-finetuning}
}

@article{radford2019language,
  title  = {Language Models are Unsupervised Multitask Learners},
  author = {Radford, Alec and Wu, Jeffrey and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year   = {2019}
}
```