Initialize project; model provided by the ModelHub XC community
Model: Neura-Tech-AI/Neuron-14B Source: Original Platform
---
language:
- en
- hi
tags:
- neuron
- neura-tech
- 14B
- text-generation
- qwen2
- neural-networks
license: apache-2.0
datasets:
- custom-neura-tech-data
metrics:
- accuracy
---

# 🧠 Neuron-14B: The Official Intelligence of Neura Tech

**Neuron-14B** is a high-performance Large Language Model (LLM) developed by **Neura Tech**. It serves as the flagship model for advanced reasoning, creative synthesis, and multilingual communication.

---

## 🏢 Organization Identity

* **Company**: Neura Tech
* **Project Name**: Neuron
* **Lead Architect**: Anandnrnnffn

## 📊 Model Specifications

* **Architecture**: Optimized Transformer (fine-tuned from Qwen2)
* **Parameters**: ~15 billion
* **Precision**: BF16 (bfloat16)
* **Context Window**: 131,072 tokens
* **License**: Apache-2.0 (open source)

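As a rough sizing check (an illustrative estimate, not an official Neura Tech figure): ~15 billion BF16 parameters, at 2 bytes each, occupy roughly 28 GiB for the weights alone, before activations or KV cache:

```python
# Rough weight-memory estimate for a ~15B-parameter BF16 model.
# Illustrative back-of-the-envelope math, not a measured figure.
num_params = 15e9      # ~15 billion parameters
bytes_per_param = 2    # bfloat16 = 16 bits = 2 bytes
weight_gib = num_params * bytes_per_param / 1024**3
print(f"~{weight_gib:.0f} GiB of weights")  # → ~28 GiB of weights
```

In practice, plan for additional headroom on top of this for the KV cache, which grows with batch size and sequence length.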
## 🎯 Core Capabilities

* **Advanced Reasoning**: Capable of solving complex logical and mathematical queries.
* **Multilingual Proficiency**: Highly optimized for English and Hindi (including Hinglish).
* **Instruction Following**: Specifically tuned to follow complex user prompts with high precision.
* **Creative Synthesis**: Exceptional at generating scripts, stories, and technical documentation.

## 📜 License & Usage

This model is licensed under the Apache-2.0 License: you are free to use, modify, and distribute it, provided you retain the required attribution to Neura Tech as the original creator.

## 🛠️ Quick Start (Python)

To use **Neuron-14B**, load it via the Hugging Face `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Anandnrnnffn/Neura-Tech-14B-Weights"

# Load the Neuron-14B tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load the model weights, letting transformers pick the native dtype (BF16)
# and shard the model across available devices
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",
)
```
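Because Neuron-14B is fine-tuned from Qwen2, its chat template presumably follows the Qwen2 family's ChatML-style layout. In practice you would call `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` rather than build the prompt by hand; the sketch below (my own illustration, not code shipped with the model) shows the assumed underlying format:

```python
# Hand-rolled sketch of the ChatML-style prompt layout assumed for
# Qwen2-derived tokenizers; normally tokenizer.apply_chat_template does this.
def build_chatml_prompt(messages):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
             for m in messages]
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are Neuron-14B, a helpful assistant."},
    {"role": "user", "content": "Explain bfloat16 in one sentence."},
])
print(prompt)
```

The resulting string is then tokenized and passed to `model.generate`. The exact special tokens are defined by the tokenizer's own chat template, so treat the literals above as an assumption and prefer `apply_chat_template` in real code.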
## © 2026 Neura Tech. All Rights Reserved.