Initialize project; model provided by the ModelHub XC community
Model: Finisha-F-scratch/Learnia-gemini-test Source: Original Platform
.gitattributes (vendored, Normal file, 35 lines)
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md (Normal file, 73 lines)
@@ -0,0 +1,73 @@
---
tags:
- text-generation-inference
- text-generation
library_name: transformers
base_model: Finisha-LLM/Learnia
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
license: other
datasets:
- TeichAI/gemini-3-pro-preview-high-reasoning-1000x
pipeline_tag: text-generation
---

# 📜 Documentation: Learnia-Gemini-Test 🧬

![Learnia-Gemini-Test](https://cdn-uploads.huggingface.co/production/uploads/6887595465ed3a945a6f0b63/JvFDLzwkZfmGm678o6Wkp.png)

### 🏗️ Project Genesis

**Learnia-Gemini-Test** is not a mere iteration. It is a syntactic "stress test". The base model, **Learnia (52M)**, built entirely *from scratch*, had a specific behavioral layer injected into it through targeted fine-tuning.

> **The goal:** observe how a lightweight, original architecture absorbs, digests, and spits back the response patterns of a massive model like Gemini.

---

### 📊 Technical Specifications

| Parameter | Detail |
| --- | --- |
| **Base Model** | Learnia (original architecture) |
| **Size** | 52 million parameters |
| **Nature** | Decoder-only Transformer |
| **Fine-tuning** | Hugging Face public dataset (Gemini outputs) |
| **Intended use** | Research, log simulation, hybrid-syntax exploration |
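
The 52M figure can be cross-checked against the `config.json` and the safetensors LFS pointer further down in this commit: a GPT-2-style decoder with `n_layer=8`, `n_embd=512`, `n_positions=1350`, and `vocab_size=50258` works out to roughly 51.6M parameters, which at float32 matches the ~206 MB `model.safetensors`. A minimal back-of-the-envelope sketch (the usual GPT-2 estimate, ignoring biases and layer norms):

```python
# Rough GPT-2 parameter estimate from the values in config.json below.
# Ignores biases and layer-norm weights, so it slightly undercounts.
n_layer, n_embd, n_positions, vocab_size = 8, 512, 1350, 50258

embeddings = vocab_size * n_embd       # token embeddings (tied with the LM head)
positions = n_positions * n_embd       # learned positional embeddings
per_block = 12 * n_embd ** 2           # 4*d^2 attention + 8*d^2 MLP per block
total = embeddings + positions + n_layer * per_block

print(f"~{total / 1e6:.1f}M parameters")        # ~51.6M
print(f"~{total * 4 / 1e6:.0f} MB at float32")  # close to the 206,583,536-byte file
```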

---

### 🧠 Behavior & Texture

Unlike smoother models, **Learnia-Gemini-Test** keeps the "nervousness" of the Learnia architecture.

* **Hybridization:** the model blends Learnia's raw structure with Gemini's formal verbal tics.
* **Output:** it generates entire paragraphs structured like assistant responses, but with the unique thermal signature of a 52M model.
* **Usage:** ideal for creative projects that want an "AI imitating an AI", producing a mise en abyme effect.

---

### 🛠️ Installation & Inference

To load the model with the `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the Learnia checkpoint ("ton-path" is a placeholder; use the actual repo id)
tokenizer = AutoTokenizer.from_pretrained("ton-path/learnia-gemini-test")
model = AutoModelForCausalLM.from_pretrained("ton-path/learnia-gemini-test")

# Generation test
prompt = "System Log: Analysis of..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150, pad_token_id=tokenizer.pad_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
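
The commit also ships a ChatML-style chat template (`chat_template.jinja`, shown further down), so chat-formatted prompts can be built with `tokenizer.apply_chat_template`. A minimal sketch reusing the `tokenizer` and `model` loaded above; the sample message comes from the model card widget:

```python
# Chat-style generation using the bundled chat template (continues the snippet above).
messages = [{"role": "user", "content": "What is your favorite condiment?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(
    input_ids, max_new_tokens=150, pad_token_id=tokenizer.pad_token_id
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```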

---

### ⚠️ A Note on the "From Scratch" DNA

This model is not a copy. It is an **interpretation**. Occasional syntax errors or breaks in rhythm are not bugs; they are proof of an independent language engine that refuses to be fully smoothed over by the fine-tuning dataset.
added_tokens.json (Normal file, 3 lines)
@@ -0,0 +1,3 @@
{
  "[PAD]": 50257
}
chat_template.jinja (Normal file, 14 lines)
@@ -0,0 +1,14 @@
{% for message in messages %}
{% if message['role'] == 'system' %}
{{ '<|im_start|>system\n' + message['content'] + '<|im_end|>' }}
{% elif message['role'] == 'user' %}
{{ '\n<|im_start|>user\n' + message['content'] + '<|im_end|>' }}
{% elif message['role'] == 'assistant' %}
{{ '\n<|im_start|>assistant\n' + message['content'] + '<|im_end|>' }}
{% endif %}
{% endfor %}
{% if add_generation_prompt %}
{{ '\n<|im_start|>assistant\n' }}
{% else %}
{{ '<|im_end|>' }}
{% endif %}
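
This template wraps every message in ChatML-style `<|im_start|>…<|im_end|>` markers and, with `add_generation_prompt`, leaves an assistant turn open. One way to inspect the rendered prompt without generating anything (note that `tokenizer_config.json` below embeds its own `chat_template` string, so which template a given `transformers` version picks up is an assumption here):

```python
# Render the chat template to a plain string (no tokenization) to inspect the format.
rendered = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(rendered)
# The user turn comes out wrapped as <|im_start|>user ... <|im_end|>,
# followed by an opened <|im_start|>assistant block for generation.
```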
config.json (Normal file, 35 lines)
@@ -0,0 +1,35 @@
{
  "_name_or_path": "Finisha-LLM/Learnia",
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "dtype": "float32",
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 1350,
  "n_embd": 512,
  "n_head": 8,
  "n_inner": null,
  "n_layer": 8,
  "n_positions": 1350,
  "pad_token_id": 50257,
  "reorder_and_upcast_attn": false,
  "resid_pdrop": 0.1,
  "scale_attn_by_inverse_layer_idx": false,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "torch_dtype": "float32",
  "transformers_version": "4.48.0",
  "use_cache": true,
  "vocab_size": 50258
}
generation_config.json (Normal file, 9 lines)
@@ -0,0 +1,9 @@
{
  "_from_model_config": true,
  "bos_token_id": 50256,
  "eos_token_id": [
    50256
  ],
  "pad_token_id": 50257,
  "transformers_version": "4.48.0"
}
merges.txt (Normal file, 50001 lines)
File diff suppressed because it is too large
model.safetensors (Normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:752536f7e7e0ada76039ad8932528a3cbd0b6afe3a6fc8b82c466436777ef38b
size 206583536
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c3044c0a41c36132b1e811fc41f554b6bd1f772d59536373160a51f67d85d1e6
size 20099
special_tokens_map.json (Normal file, 30 lines)
@@ -0,0 +1,30 @@
{
  "bos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<|endoftext|>",
    "lstrip": false,
    "normalized": true,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json (Normal file, 250315 lines)
File diff suppressed because it is too large
tokenizer_config.json (Normal file, 30 lines)
@@ -0,0 +1,30 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "50256": {
      "content": "<|endoftext|>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50257": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<|endoftext|>",
  "chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|endoftext|>",
  "extra_special_tokens": {},
  "model_max_length": 1024,
  "pad_token": "[PAD]",
  "tokenizer_class": "GPT2Tokenizer",
  "unk_token": "<|endoftext|>"
}
training_args.bin (Normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b959b47b578c6498c29c719457c81681efef05fd47f883f0f5388a8bbfe3a33d
size 5688
training_params.json (Normal file, 49 lines)
@@ -0,0 +1,49 @@
{
  "model": "Finisha-LLM/Learnia",
  "project_name": "Learnia-think-test",
  "data_path": "TeichAI/gemini-3-pro-preview-high-reasoning-1000x",
  "train_split": "train",
  "valid_split": null,
  "add_eos_token": true,
  "block_size": 512,
  "model_max_length": 1350,
  "padding": "right",
  "trainer": "sft",
  "use_flash_attention_2": false,
  "log": "tensorboard",
  "disable_gradient_checkpointing": false,
  "logging_steps": -1,
  "eval_strategy": "epoch",
  "save_total_limit": 1,
  "auto_find_batch_size": false,
  "mixed_precision": "fp16",
  "lr": 3e-05,
  "epochs": 3,
  "batch_size": 2,
  "warmup_ratio": 0.1,
  "gradient_accumulation": 4,
  "optimizer": "adamw_torch",
  "scheduler": "linear",
  "weight_decay": 0.0,
  "max_grad_norm": 1.0,
  "seed": 42,
  "chat_template": "none",
  "quantization": "int4",
  "target_modules": "all-linear",
  "merge_adapter": false,
  "peft": false,
  "lora_r": 16,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "model_ref": null,
  "dpo_beta": 0.1,
  "max_prompt_length": 128,
  "max_completion_length": null,
  "prompt_text_column": "prompt",
  "text_column": "messages",
  "rejected_text_column": "rejected_text",
  "push_to_hub": true,
  "username": "Clemylia",
  "unsloth": false,
  "distributed_backend": "ddp"
}
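
These look like AutoTrain-style SFT parameters. Most of them map directly onto `transformers.TrainingArguments`; the effective batch size is 2 × 4 = 8 sequences. A hedged sketch of a roughly equivalent configuration (illustrative only: `output_dir` is hypothetical and this is not the exact trainer invocation used):

```python
from transformers import TrainingArguments

# Approximate equivalent of training_params.json above; the original run used
# an SFT trainer pipeline, so this mapping is illustrative, not the exact call.
args = TrainingArguments(
    output_dir="Learnia-think-test",   # hypothetical, taken from project_name
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,     # effective batch size: 2 * 4 = 8
    learning_rate=3e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    optim="adamw_torch",
    weight_decay=0.0,
    max_grad_norm=1.0,
    fp16=True,                         # "mixed_precision": "fp16"
    gradient_checkpointing=True,       # "disable_gradient_checkpointing": false
    save_total_limit=1,
    seed=42,
    report_to="tensorboard",
    push_to_hub=True,
)
```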
vocab.json (Normal file, 1 line)
File diff suppressed because one or more lines are too long