---
license: apache-2.0
language:
- en
base_model:
- janhq/Jan-v1-4B
pipeline_tag: text-generation
---

# Jan-v1: Advanced Agentic Language Model

[GitHub](https://github.com/menloresearch/deep-research)
[License: Apache 2.0](https://opensource.org/licenses/Apache-2.0)
[Jan App](https://jan.ai/)

## Overview

**Jan-v1** is the first release in the **Jan Family**, designed for agentic reasoning and problem-solving within the [Jan App](https://jan.ai/). Building on our [**Lucy**](https://huggingface.co/Menlo/Lucy) model, Jan-v1 achieves improved performance through model scaling.

Jan-v1 is built on [Qwen3-4B-Thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507), which provides enhanced reasoning and tool use and delivers stronger performance on complex agentic tasks.

## Performance

### Question Answering (SimpleQA)

For question answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.1% accuracy.

<!-- Figure: SimpleQA benchmark results -->

*The 91.1% SimpleQA accuracy represents a significant milestone in factual question answering for models of this scale, demonstrating the effectiveness of our scaling and fine-tuning approach.*

### Chat Benchmarks

These benchmarks evaluate the model's conversational and instruction-following capabilities.

<!-- Figure: chat benchmark results -->

## Quick Start

### Integration with Jan App

Jan-v1 is optimized for direct integration with the [Jan App](https://jan.ai/). Simply select the model from the Jan App interface for immediate access to its full capabilities.

<!-- Figure: selecting Jan-v1 in the Jan App -->

### Local Deployment

**Using vLLM:**

```bash
vllm serve janhq/Jan-v1-4B \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```
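
With `--enable-auto-tool-choice` and the Hermes tool-call parser, the vLLM server exposes OpenAI-compatible tool calling on the port above. A minimal sketch of a tool-calling request using only the standard library; the `get_weather` tool, the prompt, and the `send` helper are illustrative assumptions, not part of this card:

```python
import json
import urllib.request

# Hypothetical tool definition in the OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# OpenAI-compatible chat-completions payload for the local vLLM server.
payload = {
    "model": "janhq/Jan-v1-4B",
    "messages": [{"role": "user", "content": "What's the weather in Hanoi?"}],
    "tools": tools,
    "tool_choice": "auto",
}

def send(body, url="http://localhost:1234/v1/chat/completions"):
    """POST a chat-completions request; raises URLError if no server is listening."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

When the model decides to call the tool, the response's `choices[0].message.tool_calls` carries the parsed call rather than plain text.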

**Using llama.cpp:**

```bash
llama-server --model jan-v1.gguf \
  --host 0.0.0.0 \
  --port 1234 \
  --jinja \
  --no-context-shift
```
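
llama.cpp's `llama-server` exposes a `/health` endpoint you can poll before sending requests, which is useful because large GGUF files take a moment to load. A small stdlib-only sketch (the base URL matches the serve command above; the helper name is ours):

```python
import urllib.request
import urllib.error

def server_ready(base="http://localhost:1234", timeout=2.0):
    """Return True once the local server answers its health check."""
    try:
        with urllib.request.urlopen(base + "/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: server not up (or model still loading).
        return False
```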

### Recommended Parameters

```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```
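
These parameters map directly onto a chat-completions request to either server above. Note that `top_k` and `min_p` are not standard OpenAI fields, though both vLLM and llama.cpp accept them in the request body; if your client library rejects them, pass them as extra/body parameters. A sketch with an illustrative prompt:

```python
import json
import urllib.request

# Recommended sampling parameters from this card.
params = {
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
    "max_tokens": 2048,
}

payload = {
    "model": "janhq/Jan-v1-4B",
    "messages": [{"role": "user", "content": "Explain what an agentic model does."}],
    **params,
}

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# With one of the servers above running, send it with:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```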

## 🤝 Community & Support

- **Discussions**: [HuggingFace Community](https://huggingface.co/janhq/Jan-v1-4B/discussions)
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## 📄 Citation

```bibtex
Updated Soon
```

---