Model: ansh0x/ace-0.5b-gguf
| license | base_model | tags | language | library_name | pipeline_tag |
|---|---|---|---|---|---|
| cc-by-nc-sa-4.0 | Qwen/Qwen2.5-0.5B-Instruct | | | llama-cpp | text-generation |
# ACE 0.5B - Task Automation Model
Fine-tuned Qwen 0.5B for local task automation. Detects task types and generates execution plans.
Code: GitHub
## Model Description
ACE is a 0.5B parameter language model fine-tuned for task automation. It can:
- Classify tasks (atomic, repetitive, clarification needed)
- Generate CLI commands for file operations
- Create execution plans with hotkeys
- Handle repetitive bulk operations
All inference runs on CPU - no GPU required.
## Model Files

| File | Size | Quant | Use Case |
|---|---|---|---|
| ace-bf16.gguf | 940MB | BF16 | Recommended: slightly slower inference, better quality |
| ace-q4-k-m.gguf | 385MB | Q4_K_M | Faster inference |
## Training Details

- **Base Model:** Qwen/Qwen2-0.5B
- **Method:** LoRA fine-tuning (r=16, alpha=32)
- **Dataset:** ~1000 custom task examples
- **Training:** 500-700 steps, learning_rate=3e-5
- **Quantization:** GGUF Q4_K_M with imatrix
Task Types:
- Atomic tasks (single operations)
- Repetitive tasks (bulk processing)
- Clarification requests (ambiguous inputs)
**Data Format:**

Input:
```json
{"task": "...", "directory": [...], "available_hotkeys": [...]}
```

Output:
```json
{"task_type": "atomic", "output": {"execution_plan": {...}}}
```
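The request/response shapes above can be round-tripped with a small helper. The field names come from the examples in this card; the function names and sample values are illustrative only.

```python
import json

def build_request(task, directory, available_hotkeys):
    """Serialize a task request in the input format shown above."""
    return json.dumps({
        "task": task,
        "directory": directory,
        "available_hotkeys": available_hotkeys,
    })

def parse_response(raw):
    """Extract the task type and execution plan from the model's JSON output."""
    data = json.loads(raw)
    return data.get("task_type"), data.get("output", {}).get("execution_plan")

req = build_request("compress logs", ["app.log"], ["ctrl+shift+z"])
task_type, plan = parse_response(
    '{"task_type": "atomic", "output": {"execution_plan": {"command": "gzip app.log"}}}'
)
# task_type == "atomic"; plan == {"command": "gzip app.log"}
```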
## Usage

- The model is currently unstable and intended for experimental use only.
- See the GitHub repo for installation and usage instructions.
## Limitations

- Requires explicit file paths (no smart file search)
- Optimized for Linux commands (should also work on Windows)
- CPU inference only (3-10 seconds on i3/i5)
- No visual understanding (text-only)
- English language only
## Performance
Hardware benchmarks:
- Intel i5 (2018+): 3-5 seconds per task
- Intel i3 (2015+): 5-10 seconds per task
- Older hardware: 30-90 seconds per task
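To reproduce these numbers on your own hardware, a plain wall-clock wrapper is enough. The workload below is a stand-in; in practice you would time a call into the model instead.

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn once, returning its result and wall-clock seconds elapsed."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in workload; swap in a model call to benchmark inference latency.
result, elapsed = timed(sum, range(1_000_000))
```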
## Bias and Ethics
Known biases:
- Training data focused on common developer workflows
- Linux command bias (more Linux than Windows examples)
- English-only (no multilingual support)
Ethical considerations:
- Model can generate destructive commands (file deletion)
- Users should review plans before execution
- No built-in safety checks for harmful operations
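Because the model ships with no built-in safety checks, callers may want to gate generated commands behind a review step. The sketch below is a minimal illustration of that idea; the denylist is a hypothetical example and is nowhere near exhaustive, so it does not replace reading the plan yourself.

```python
import shlex

# Commands whose generated plans should be flagged for manual review.
# Illustrative denylist only; not a substitute for human review.
DESTRUCTIVE = {"rm", "rmdir", "dd", "mkfs", "shred", "truncate"}

def needs_review(command):
    """Return True if the first token of a CLI command looks destructive."""
    tokens = shlex.split(command)
    return bool(tokens) and tokens[0] in DESTRUCTIVE
```

For example, `needs_review("rm -rf build/")` flags the command, while `needs_review("ls -la")` lets it pass.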
## License
CC BY-NC-SA 4.0 (Non-commercial)
- ✅ Free for personal/research use
- ❌ Commercial use requires separate license
- ✅ Must provide attribution
- ✅ Derivatives must use same license
Additional Restriction: Training of AI/ML models using these weights is prohibited without explicit written permission.
## Contact
- Issues: GitHub Issues
- Discussions: GitHub Discussions
More info: GitHub Repository