---
library_name: transformers
license: cc-by-sa-4.0
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
pipeline_tag: text-generation
---

# Helium-1-2b

<img src="https://huggingface.co/kyutai/moshi-1-2b/resolve/main/helium_sticker.png" width="400">

## Model Description

<!-- Provide a longer summary of what this model is. -->

Helium-1 is a lightweight language model with 2B parameters, targeting edge and mobile devices.
It supports the 24 official languages of the European Union.

⚠️ Helium-1 is a base model that was not fine-tuned to follow instructions or align with human preferences.
For most downstream use cases, it should be aligned with supervised fine-tuning, RLHF, or related methods.

- **Developed by:** Kyutai
- **Model type:** Large Language Model
- **Language(s) (NLP):** Bulgarian, Czech, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Irish, Croatian, Hungarian, Italian, Lithuanian, Latvian, Maltese, Dutch, Polish, Portuguese, Romanian, Slovak, Slovenian, Swedish.
- **License:** CC-BY-SA 4.0
- **Terms of use:** As a model distilled from Gemma 2, Helium 1 is subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms

<!-- ### Model Sources [optional]

Provide the basic links for the model.

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed] -->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

The intended use of the Helium model is research and development of natural language processing systems, including but not limited to language generation and understanding.
The model can be used in Bulgarian, Czech, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Irish, Croatian, Hungarian, Italian, Lithuanian, Latvian, Maltese, Dutch, Polish, Portuguese, Romanian, Slovak, Slovenian and Swedish.
For most downstream use cases, the model should be aligned with supervised fine-tuning, RLHF, or related methods.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

The model should not be used in languages other than those on which it was trained.
The model is not intended to be used for any malicious or illegal activities of any kind.
The model was not fine-tuned to follow instructions and should not be used as an instruction-following model.

## Bias, Risks, and Limitations

Helium-1 is a base language model that has not been aligned to human preferences.
As such, the model can generate incorrect, biased, harmful or generally unhelpful content.
Thus, the model should not be used for downstream applications without further alignment, evaluation and risk mitigation.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import pipeline

model_id = "kyutai/helium-1-2b"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

text = pipe("Hello, today is a great day to")
```

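The pipeline returns a list of dicts whose `generated_text` field holds the prompt plus its completion, so the result can be displayed with `print(text[0]["generated_text"])`. For finer control over decoding, the checkpoint can also be loaded directly with the standard `transformers` APIs; a minimal sketch (the sampling settings here are illustrative, not taken from this model card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kyutai/helium-1-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Tokenize a prompt and generate a continuation.
inputs = tokenizer("Hello, today is a great day to", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=64,   # illustrative generation length
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative sampling temperature
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
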
## Training Details

### Training Data

Helium-1 was trained on data from Common Crawl, which was preprocessed with the dactory library.

<!--#### Training Hyperparameters

- **Training regime:** [More Information Needed] -->

<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

#### Testing Data

The model was evaluated on MMLU, TriviaQA, NaturalQuestions, ARC Easy & Challenge, Open Book QA, Common Sense QA, Physical Interaction QA, Social Interaction QA, HellaSwag, WinoGrande, Multilingual Knowledge QA and FLORES 200.

#### Metrics

We report accuracy on MMLU, ARC, OBQA, CSQA, PIQA, SIQA, HellaSwag and WinoGrande.
We report exact match on TriviaQA, NQ and MKQA.
We report BLEU on FLORES.

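Accuracy and BLEU follow their standard definitions. For exact match, a minimal sketch of a common scoring convention is below; the SQuAD-style answer normalization is an assumption, since the card does not specify the exact protocol:

```python
import re
import string

def normalize(text: str) -> str:
    # SQuAD-style normalization (assumed here, not stated in the card):
    # lowercase, strip punctuation, drop English articles, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, references: list[str]) -> bool:
    # A prediction scores 1 if it matches any reference answer
    # after normalization, 0 otherwise.
    return any(normalize(prediction) == normalize(ref) for ref in references)

# Example: exact_match("The Eiffel Tower.", ["Eiffel Tower"]) -> True
```
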
#### English Results

| Benchmark | Helium-1 | HF SmolLM2 (1.7B) | Gemma-2 (2.6B) | Llama-3.2 (3B) | Qwen2.5 (1.5B) |
|-----------|:--------:|:-----------------:|:--------------:|:--------------:|:--------------:|
| MMLU | 52.0 | 50.4 | 53.1 | 56.6 | 61.0 |
| NQ | 16.5 | 15.1 | 17.7 | 22.0 | 13.1 |
| TQA | 46.5 | 45.4 | 49.9 | 53.6 | 35.9 |
| ARC E | 82.2 | 81.8 | 81.1 | 84.6 | 89.7 |
| ARC C | 64.6 | 64.7 | 66.0 | 69.0 | 77.2 |
| OBQA | 65.4 | 61.4 | 64.6 | 68.4 | 73.8 |
| CSQA | 63.6 | 59.0 | 64.4 | 65.4 | 72.4 |
| PIQA | 78.5 | 77.7 | 79.8 | 78.9 | 76.0 |
| SIQA | 62.3 | 57.5 | 61.9 | 63.8 | 68.7 |
| HS | 73.6 | 73.2 | 74.7 | 76.9 | 67.5 |
| WG | 66.9 | 65.6 | 71.2 | 72.0 | 64.8 |
| Average | 61.1 | 59.3 | 62.2 | 64.7 | 63.6 |

#### Multilingual Results

| Benchmark | Helium-1 | Gemma-2 (2.6B) | Llama-3.2 (3B) |
|-----------|:--------:|:--------------:|:--------------:|
| ARC E | 71.1 | 65.8 | 68.2 |
| ARC C | 54.8 | 51.1 | 52.6 |
| MMLU | 44.8 | 43.1 | 45.3 |
| HS | 51.9 | 49.9 | 48.4 |
| FLORES | 20.6 | 21.9 | 19.8 |
| MKQA | 16.5 | 17.2 | 19.7 |
| Average | 43.3 | 41.5 | 42.3 |

## Technical Specifications

### Model Architecture and Objective

| Hyperparameter | Value |
|----------------|:-----:|
| Model dimension | 2048 |
| MLP dimension | 8192 |
| Layers | 28 |
| Heads | 16 |
| RoPE theta | 20,000 |
| Context size | 4096 |
| Max learning rate | 2.4e-04 |
| Total steps | 500,000 |
| Weight decay | 0.1 |
| Gradient clip | 1.0 |

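These dimensions are consistent with the stated 2B parameter count. Here is a back-of-the-envelope sketch of the arithmetic, assuming a Llama-style decoder (multi-head attention, gated MLP, untied embeddings) and a hypothetical vocabulary size; neither detail is stated in the card:

```python
# Rough parameter count implied by the table above. The block design
# and vocabulary size are assumptions, not taken from the card.
d_model, d_mlp, n_layers = 2048, 8192, 28
vocab_size = 48_000  # hypothetical vocabulary size

attn = 4 * d_model * d_model           # Q, K, V and output projections
mlp = 3 * d_model * d_mlp              # gate, up and down projections
per_layer = attn + mlp                 # about 67M parameters per layer
embeddings = 2 * vocab_size * d_model  # input + output embeddings (untied)

total = n_layers * per_layer + embeddings
print(f"{total / 1e9:.2f}B parameters")  # prints 2.08B, consistent with "2B"
```
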
#### Hardware

The model was trained on 64 NVIDIA H100 Tensor Core GPUs.

#### Software

The model was trained using JAX.

## Citation

Blog post: [Helium 1: a modular and multilingual LLM](https://kyutai.org/2025/04/30/helium.html).