---
license: apache-2.0
language:
- en
base_model:
- Menlo/Jan-v1-4B
pipeline_tag: text-generation
---
# Jan-v1: Advanced Agentic Language Model

[Deep Research](https://github.com/menloresearch/deep-research)
[License: Apache-2.0](https://opensource.org/licenses/Apache-2.0)
[Jan App](https://jan.ai/)

<!-- Optional: If you have a GIF for Jan-v1, include it here like Lucy's. -->
<!--  -->
## Overview

Introducing **Jan-v1**, the first release in the **Jan Family**, designed for advanced agentic reasoning and complex problem-solving within the [Jan App](https://jan.ai/). Building on the agentic capabilities of our earlier [**Lucy**](https://huggingface.co/Menlo/Lucy) model, Jan-v1 represents a significant step forward through strategic model scaling.

Jan-v1 leverages the newly released [Qwen3-4B-Thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) model to deliver significantly enhanced reasoning capabilities and effective tool utilization. This architectural evolution is designed to deliver superior performance on complex agentic tasks, setting a new benchmark for accessible, high-performance AI.
## Performance

### Question Answering (SimpleQA)

For question answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.2% accuracy on SimpleQA.



*The 91.2% SimpleQA accuracy represents a milestone in factual question answering for models of this scale, demonstrating the effectiveness of our scaling and fine-tuning approach.*
### Report Generation & Factuality

Evaluated on a benchmark testing factual report generation from web sources, scored by an LLM-as-judge. The benchmark includes our proprietary `Jan Exam - Longform` and the `DeepResearchBench`.

| Model | Average Overall Score |
| :--- | :--- |
| o4-mini | 7.30 |
| **Jan-v1-4B (Ours)** | **7.17** |
| gpt-4.1 | 6.90 |
| Qwen3-4B-Thinking-2507 | 6.84 |
| 4o-mini | 6.60 |
| Jan-nano-128k | 5.63 |
## Quick Start

### Integration with Jan App

Jan-v1 is optimized for direct integration with the Jan App. Simply select the model from the Jan App interface for immediate access to its full capabilities.

### Local Deployment

**Using vLLM:**
```bash
vllm serve Menlo/Jan-v1 \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```
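Once the server is running, it exposes an OpenAI-compatible API on port 1234. The sketch below constructs a chat request with a tool definition of the shape the hermes tool-call parser expects; the `get_weather` tool and its schema are purely illustrative assumptions, not part of Jan-v1, and the snippet only builds the payload rather than calling a live server.

```python
import json

# Sketch: an OpenAI-compatible chat request with one tool, as the vLLM
# server above would accept at http://localhost:1234/v1/chat/completions.
# The get_weather tool here is a hypothetical example for illustration.
payload = {
    "model": "Menlo/Jan-v1",
    "messages": [
        {"role": "user", "content": "What is the weather in Hanoi right now?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "temperature": 0.6,
    "max_tokens": 2048,
}

# Serialize as the HTTP request body.
body = json.dumps(payload)
print(len(body) > 0)
```

To actually send it, POST `payload` to `http://localhost:1234/v1/chat/completions`, or point an OpenAI-style client at `base_url="http://localhost:1234/v1"`.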
**Using llama.cpp:**

```bash
llama-server --model jan-v1.gguf \
  --host 0.0.0.0 \
  --port 1234
```
### Recommended Parameters

```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```
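As a sketch of how these values map onto a request, the snippet below merges them into an OpenAI-compatible request body for the servers shown above. Note that `top_k` and `min_p` are not part of the standard OpenAI schema; vLLM accepts them as extra sampling parameters, an assumption you should verify against your server version.

```python
# The recommended sampling parameters from the YAML block above.
recommended = {
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,   # non-standard field; accepted by vLLM as an extra param
    "min_p": 0.0,  # likewise non-standard in the OpenAI schema
    "max_tokens": 2048,
}

# Merge into a chat-completion request body.
request_body = {
    "model": "Menlo/Jan-v1",
    "messages": [{"role": "user", "content": "Summarize today's task list."}],
    **recommended,
}
print(sorted(request_body.keys()))
```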
## 🤝 Community & Support

- **Discussions**: [HuggingFace Community](https://huggingface.co/Menlo/Jan-v1/discussions) <!-- Update with your HF model ID -->
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## 📄 Citation

```bibtex
Updated Soon
```

---