From 868d5588ac1a85858846a71fddfdb1f30a6ef179 Mon Sep 17 00:00:00 2001
From: "Jan (Homebrew Research)"
Date: Mon, 11 Aug 2025 14:54:17 +0000
Subject: [PATCH] Create README.md

---
license: apache-2.0
language:
- en
base_model:
- Menlo/Jan-v1-4B
pipeline_tag: text-generation
---
# Jan-v1: Advanced Agentic Language Model

[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/menloresearch/deep-research)
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0)
[![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/)

## Overview

**Jan-v1** is the first release in the **Jan Family**, designed for advanced agentic reasoning and complex problem-solving within the [Jan App](https://jan.ai/). Building on the agentic capabilities of our earlier [**Lucy**](https://huggingface.co/Menlo/Lucy) model, Jan-v1 represents a significant step forward through strategic model scaling.

Jan-v1 leverages the newly released [Qwen3-4B-Thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) model to deliver enhanced reasoning capabilities and effective tool utilization. This architectural evolution is designed to deliver superior performance on complex agentic tasks, setting a new benchmark for accessible, high-performance AI.

## Performance

### Question Answering (SimpleQA)
For question answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.2% accuracy.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/xuDDHjPnqzS_eziwShmBq.png)

*The 91.2% SimpleQA accuracy represents a significant milestone in factual question answering for models of this scale, demonstrating the effectiveness of our scaling and fine-tuning approach.*

### Report Generation & Factuality
We evaluated factual report generation from web sources using an LLM-as-judge. The benchmark combines our proprietary `Jan Exam - Longform` with the `DeepResearchBench`.

| Model | Average Overall Score |
| :--- | :--- |
| o4-mini | 7.30 |
| **Jan-v1-4B (Ours)** | **7.17** |
| gpt-4.1 | 6.90 |
| Qwen3-4B-Thinking-2507 | 6.84 |
| 4o-mini | 6.60 |
| Jan-nano-128k | 5.63 |

## Quick Start

### Integration with Jan App

Jan-v1 is optimized for direct integration with the Jan App. Simply select the model from the Jan App interface for immediate access to its full capabilities.

### Local Deployment

**Using vLLM:**
```bash
vllm serve Menlo/Jan-v1 \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes
```

**Using llama.cpp:**
```bash
llama-server --model jan-v1.gguf \
    --host 0.0.0.0 \
    --port 1234
```

### Recommended Parameters

```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```

## 🤝 Community & Support

- **Discussions**: [HuggingFace Community](https://huggingface.co/Menlo/Jan-v1/discussions)
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## 📄 Citation
```bibtex
Updated Soon
```
---
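
Both launch commands above expose an OpenAI-compatible `/v1/chat/completions` endpoint on port 1234. A minimal Python sketch of querying that local server with the recommended sampling parameters follows; the `build_chat_request` and `ask` helpers, the model name, and the `localhost` URL are illustrative assumptions, not part of the official API.

```python
import json
import urllib.request

# Recommended sampling parameters from the model card above.
RECOMMENDED_PARAMS = {
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,    # vLLM and llama.cpp accept these extra sampling fields
    "min_p": 0.0,   # alongside the standard OpenAI ones
    "max_tokens": 2048,
}


def build_chat_request(prompt: str, model: str = "Menlo/Jan-v1") -> dict:
    """Assemble an OpenAI-style chat-completion payload with the recommended parameters."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    payload.update(RECOMMENDED_PARAMS)
    return payload


def ask(prompt: str, base_url: str = "http://localhost:1234") -> str:
    """POST the payload to a locally running vLLM or llama.cpp server and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With one of the servers above running, `ask("What is the capital of France?")` returns the model's reply as a string; the same payload shape works against either backend since both implement the OpenAI chat-completions schema.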