---
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-4B-Thinking-2507
pipeline_tag: text-generation
---

# Jan-v1: Advanced Agentic Language Model

[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/menloresearch/deep-research)
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0)
[![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/)

## Overview

Introducing **Jan-v1**, the first release in the **Jan Family**, designed for advanced agentic reasoning and complex problem-solving within the [Jan App](https://jan.ai/). Building on the agentic capabilities of our earlier [**Lucy**](https://huggingface.co/Menlo/Lucy) model, Jan-v1 represents a significant leap forward through strategic model scaling.

Jan-v1 leverages the newly released [Qwen3-4B-Thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) model to deliver significantly enhanced reasoning capabilities and effective tool utilization. This architectural evolution is designed to deliver superior performance on complex agentic tasks, setting a new benchmark for accessible, high-performance AI.

## Performance

### Question Answering (SimpleQA)

On question answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.2% accuracy on SimpleQA.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/abEitIjvszFm7Z8mRHQz-.png)

*The 91.2% SimpleQA accuracy represents a significant milestone in factual question answering for models of this scale, demonstrating the effectiveness of our scaling and fine-tuning approach.*

### Chat Benchmarks

These benchmarks evaluate the model's conversational and instruction-following capabilities.
| Benchmark | JanV1 (Ours) | Qwen3-4B-Thinking-2507 | GPT-OSS-20B (High) | GPT-OSS-20B (Low) |
| :--- | :--- | :--- | :--- | :--- |
| EQBench | **83.61** | 82.61 | 78.35 | 78.35 |
| CreativeWriting | **72.08** | 65.74 | 30.23 | 26.38 |
| IFBench | **Prompt:** 0.3537<br>**Instruction:** 0.3910 | Prompt: 0.4490<br>Instruction: **0.4806** | Prompt: 0.5646<br>Instruction: 0.6000 | Prompt: 0.5034<br>Instruction: 0.5403 |
| ArenaHardv2 | **25.3** | - | - | - |

## Quick Start

### Integration with Jan App

Jan-v1 is optimized for direct integration with the Jan App. Simply select the model from the Jan App interface for immediate access to its full capabilities.

### Local Deployment

**Using vLLM:**
```bash
vllm serve Menlo/Jan-v1 \
  --host 0.0.0.0 \
  --port 1234 \
  --enable-auto-tool-choice \
  --tool-call-parser hermes
```

**Using llama.cpp:**
```bash
llama-server --model jan-v1.gguf \
  --host 0.0.0.0 \
  --port 1234
```

### Recommended Parameters

```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```

## 🤝 Community & Support

- **Discussions**: [HuggingFace Community](https://huggingface.co/Menlo/Jan-v1/discussions)
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## 📄 Citation

```bibtex
Updated Soon
```

---
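
## Example Client

Once a local server is running via one of the commands above, requests can be sent to its OpenAI-compatible chat endpoint. Below is a minimal Python sketch using only the standard library; it assumes the server is reachable at `http://localhost:1234` (matching the `--port 1234` flag above) and applies the recommended sampling parameters. The function names (`build_chat_request`, `chat`) are illustrative, not part of any official SDK.

```python
import json
import urllib.request

def build_chat_request(prompt: str) -> dict:
    """Build an OpenAI-compatible /v1/chat/completions payload
    using the recommended sampling parameters."""
    return {
        "model": "Menlo/Jan-v1",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.6,
        "top_p": 0.95,
        "top_k": 20,
        "min_p": 0.0,
        "max_tokens": 2048,
    }

def chat(prompt: str, base_url: str = "http://localhost:1234") -> str:
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a running server):
# print(chat("Which planet is known as the Red Planet?"))
```

Note that some servers (e.g. llama.cpp's `llama-server`) also accept extended sampling fields such as `top_k` and `min_p` in the request body; servers that follow the strict OpenAI schema may ignore them.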