Initialize project; model provided by the ModelHub XC community.
Model: ShahriarFerdoush/llama-3.2-1b-code-instruct (source: original platform)
.gitattributes · vendored · new file · 36 lines
@@ -0,0 +1,36 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md · new file · 169 lines
@@ -0,0 +1,169 @@
---
library_name: transformers
license: apache-2.0
datasets:
- sahil2801/CodeAlpaca-20k
base_model:
- meta-llama/Llama-3.2-1B
---

# 🧠 Llama-3.2-1B Code Solver (QLoRA Fine-Tuned)

A lightweight yet capable **code-focused language model** fine-tuned from **Meta Llama-3.2-1B** using **QLoRA (4-bit)** on the **CodeAlpaca-20K** dataset.
Designed for **efficient code generation, reasoning, and problem-solving** on limited GPU resources.

> 🚀 Trained on a single Tesla P100 GPU
> ⚡ Optimized for Kaggle, Colab, and low-VRAM environments
> 🧩 Ideal for research, education, and rapid prototyping

---
## 🔍 Model Overview

| Attribute | Value |
|-----------|-------|
| **Base Model** | `meta-llama/Llama-3.2-1B` |
| **Model Type** | Decoder-only causal language model |
| **Fine-Tuning Method** | QLoRA (4-bit quantization + LoRA) |
| **LoRA Rank** | 16 |
| **Task Domain** | Code generation & code reasoning |
| **Training Samples** | 10,000 |
| **Training Time** | ~5 hours |
| **Hardware** | NVIDIA Tesla P100 |
| **Precision** | 4-bit (NF4) |
| **Frameworks** | Hugging Face Transformers, PEFT, BitsAndBytes |

---
## 🎯 What This Model Is Good At

- 🧑‍💻 Code generation (Python-focused, but generalizable)
- 🧠 Step-by-step coding reasoning
- 🧪 Algorithmic problem solving
- 📘 Educational coding assistance
- ⚙️ Running efficiently on **low-VRAM GPUs**

---
## 📚 Training Dataset

### **CodeAlpaca-20K**

A high-quality instruction-tuning dataset derived from the Alpaca format and specialized for coding tasks.

- **Total dataset size**: 20,000 samples
- **Used for training**: 10,000 samples (50%)
- **Data format**:

```json
{
  "instruction": "Describe the coding task",
  "input": "Optional context or input code",
  "output": "Expected code solution"
}
```

* **Task Types**:

  * Algorithm implementation
  * Code completion
  * Debugging
  * Function writing
  * Problem solving
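Records in this shape are typically flattened into a single prompt string before tokenization. A minimal sketch of that step (the exact template used for this model's training is not stated in the card, so this standard Alpaca-style template is an assumption):

```python
def format_sample(sample: dict) -> str:
    """Flatten a CodeAlpaca record into an Alpaca-style prompt string."""
    if sample.get("input"):
        return (
            f"### Instruction:\n{sample['instruction']}\n\n"
            f"### Input:\n{sample['input']}\n\n"
            f"### Response:\n{sample['output']}"
        )
    # Records with an empty "input" field skip the Input section entirely.
    return (
        f"### Instruction:\n{sample['instruction']}\n\n"
        f"### Response:\n{sample['output']}"
    )

sample = {
    "instruction": "Write a function that adds two numbers.",
    "input": "",
    "output": "def add(a, b):\n    return a + b",
}
print(format_sample(sample))
```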
---

## 🏗️ Training Methodology

This model was fine-tuned using **QLoRA**, enabling efficient adaptation of large language models on limited hardware.

### Key Techniques Used

* **4-bit Quantization (NF4)** via BitsAndBytes
* **LoRA adapters** applied to attention layers
* **Frozen base model weights**
* **Low-rank updates only**
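To make "low-rank updates only" concrete, here is a back-of-the-envelope count of the trainable parameters, assuming the adapters target the q/k/v/o attention projections (an assumption; the card does not list the exact target modules) with the dimensions from this repo's `config.json`:

```python
# Rough LoRA trainable-parameter count for Llama-3.2-1B with r = 16,
# ASSUMING adapters on the q/k/v/o attention projections only.
r = 16
hidden = 2048       # hidden_size
kv_dim = 8 * 64     # num_key_value_heads * head_dim = 512 (grouped-query attention)
layers = 16         # num_hidden_layers

def lora_params(d_in: int, d_out: int, r: int) -> int:
    # Each adapter is two low-rank matrices: A (d_in x r) and B (r x d_out).
    return r * (d_in + d_out)

per_layer = (
    lora_params(hidden, hidden, r)    # q_proj
    + lora_params(hidden, kv_dim, r)  # k_proj
    + lora_params(hidden, kv_dim, r)  # v_proj
    + lora_params(hidden, hidden, r)  # o_proj
)
total = per_layer * layers
print(total)  # → 3407872: only ~3.4M trainable parameters against a frozen ~1.2B base
```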
### Why QLoRA?

* 🔻 Drastically reduces GPU memory usage
* ⚡ Enables training on consumer-grade GPUs
* 📈 Maintains strong downstream performance

---
## ⚙️ Training Configuration
|
||||
|
||||
| Parameter | Value |
|
||||
| --------------------- | ----------------------- |
|
||||
| Max Sequence Length | 1024 |
|
||||
| LoRA Rank (r) | 16 |
|
||||
| LoRA Alpha | 32 |
|
||||
| LoRA Dropout | 0.05 |
|
||||
| Optimizer | AdamW |
|
||||
| Learning Rate | 2e-4 |
|
||||
| Batch Size | Small (GPU-constrained) |
|
||||
| Gradient Accumulation | Enabled |
|
||||
| Quantization | 4-bit |
|
||||
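The LoRA hyperparameters in the table map onto a `peft.LoraConfig` roughly as follows. This is a sketch, not the training script: the target modules are an assumption (the card does not list them), and the batch-size and accumulation settings are left out because the card gives no exact values.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,            # LoRA Rank (r)
    lora_alpha=32,   # LoRA Alpha
    lora_dropout=0.05,
    # Assumed target modules; not stated in the model card.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```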
---

## 🚀 Usage

### Load the Model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "YOUR_USERNAME/llama-3.2-1b-code-solver"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Recent transformers releases expect quantization settings via a
# BitsAndBytesConfig; passing load_in_4bit=True directly is deprecated.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)
```
### Example Inference

```python
prompt = "Write a Python function to check if a number is prime."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
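For reference, a correct solution to the example prompt above looks like the following. This is a hand-written sketch of the target behavior, not actual model output:

```python
def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
    if n < 2:
        return False
    if n < 4:
        return True  # 2 and 3 are prime
    if n % 2 == 0:
        return False
    # Only odd divisors up to sqrt(n) need to be checked.
    i = 3
    while i * i <= n:
        if n % i == 0:
            return False
        i += 2
    return True

print([x for x in range(20) if is_prime(x)])  # → [2, 3, 5, 7, 11, 13, 17, 19]
```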
## 🧪 Evaluation Notes

* This model is **instruction-tuned**, not benchmark-optimized
* No formal benchmarks (HumanEval / MBPP) were run
* Best evaluated through **qualitative code generation**
## ⚠️ Limitations

* 1B parameters → limited long-context reasoning
* Not optimized for natural language chat
* May hallucinate on complex or ambiguous prompts
* English-centric training data

## 🧭 Intended Use

✅ **Allowed**

* Research and experimentation
* Coding assistants
* Educational tools
* Prototyping LLM systems
## 🙏 Acknowledgements

* **Meta AI** for Llama 3.2
* **CodeAlpaca** dataset creators
* **Hugging Face** ecosystem
* **QLoRA & PEFT** authors
config.json · new file · 35 lines
@@ -0,0 +1,35 @@
{
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 128000,
  "dtype": "float16",
  "eos_token_id": 128001,
  "head_dim": 64,
  "hidden_act": "silu",
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 8192,
  "max_position_embeddings": 131072,
  "mlp_bias": false,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 16,
  "num_key_value_heads": 8,
  "pretraining_tp": 1,
  "rms_norm_eps": 1e-05,
  "rope_scaling": {
    "factor": 8.0,
    "high_freq_factor": 4.0,
    "low_freq_factor": 1.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3"
  },
  "rope_theta": 500000.0,
  "tie_word_embeddings": true,
  "transformers_version": "4.57.1",
  "use_cache": true,
  "vocab_size": 128256
}
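A couple of the config values above are derived from each other; the following quick sanity check (illustrative only, not part of the repository) shows how:

```python
# Sanity checks on the config.json values above.
hidden_size = 2048
num_attention_heads = 32
num_key_value_heads = 8

# head_dim is hidden_size split evenly across the attention heads.
head_dim = hidden_size // num_attention_heads
print(head_dim)  # → 64, matching the "head_dim" entry

# Grouped-query attention: each of the 8 KV heads serves this many query heads.
gqa_group = num_attention_heads // num_key_value_heads
print(gqa_group)  # → 4
```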
generation_config.json · new file · 6 lines
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "transformers_version": "4.57.1"
}
model.safetensors · new file · 3 lines (Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:400682e2bd7bd96ed45bdfc70940fe2455299d93ce7ceab47eace79817b93454
size 2471645464
special_tokens_map.json · new file · 17 lines
@@ -0,0 +1,17 @@
{
  "bos_token": {
    "content": "<|begin_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|end_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<|end_of_text|>"
}
tokenizer.json · new file · 3 lines (Git LFS pointer)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ade1dac458f86f9bea8bf35b713f14e1bbed24228429534038e9f7e54ea3e8b6
size 17208712
tokenizer_config.json · new file · 2063 lines
File diff suppressed because it is too large