Initialize project; model provided by the ModelHub XC community

Model: disham993/electrical-embeddinggemma-ir_finetune_16bit
Source: Original Platform
ModelHub XC
2026-05-14 12:27:42 +08:00
commit dc99db43b9
19 changed files with 51744 additions and 0 deletions

.gitattributes vendored Normal file (+38)

@@ -0,0 +1,38 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
tokenizer.json filter=lfs diff=lfs merge=lfs -text
eeir-models-retrieval-comparison.png filter=lfs diff=lfs merge=lfs -text
poster.png filter=lfs diff=lfs merge=lfs -text

1_Pooling/config.json Normal file (+10)

@@ -0,0 +1,10 @@
{
"word_embedding_dimension": 768,
"pooling_mode_cls_token": false,
"pooling_mode_mean_tokens": true,
"pooling_mode_max_tokens": false,
"pooling_mode_mean_sqrt_len_tokens": false,
"pooling_mode_weightedmean_tokens": false,
"pooling_mode_lasttoken": false,
"include_prompt": true
}
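For clarity, `pooling_mode_mean_tokens: true` means the sentence embedding is the attention-mask-weighted mean of the token embeddings. A minimal PyTorch sketch of that operation (the function name is ours, not part of this repo):

```python
import torch

def masked_mean_pool(token_embeddings: torch.Tensor,
                     attention_mask: torch.Tensor) -> torch.Tensor:
    """Mean over non-padding tokens, as pooling_mode_mean_tokens=true implies."""
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)  # (B, T, 1)
    summed = (token_embeddings * mask).sum(dim=1)                   # (B, H)
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # avoid divide-by-zero
    return summed / counts                                          # (B, H=768)
```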

2_Dense/config.json Normal file (+6)

@@ -0,0 +1,6 @@
{
"in_features": 768,
"out_features": 3072,
"bias": false,
"activation_function": "torch.nn.modules.linear.Identity"
}

2_Dense/model.safetensors Normal file (+3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1a96e74fabcdf6b81dc67562b923e9f9c6ab4a0b7e75d1a2a705debfd0d704b1
size 4718680

3_Dense/config.json Normal file (+6)

@@ -0,0 +1,6 @@
{
"in_features": 3072,
"out_features": 768,
"bias": false,
"activation_function": "torch.nn.modules.linear.Identity"
}
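Taken together, 2_Dense and 3_Dense are bias-free linear maps with identity activations: 768 → 3072, then back to 768. A shape-only PyTorch sketch (random weights, purely illustrative, not the trained parameters):

```python
import torch
from torch import nn

# The projection pair implied by the two Dense configs above:
# 768 -> 3072 -> 768, no bias, Identity activation (plain linear maps).
dense_up = nn.Linear(768, 3072, bias=False)    # 2_Dense
dense_down = nn.Linear(3072, 768, bias=False)  # 3_Dense

pooled = torch.randn(4, 768)        # a batch of pooled sentence vectors
out = dense_down(dense_up(pooled))  # back to 768-d before the final Normalize
assert out.shape == (4, 768)
```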

3_Dense/model.safetensors Normal file (+3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e52d25716b14b0cf5142e1743dee3d5caeef0b42d4567ec510124cdb2896ca49
size 4718680

README.md Normal file (+164)

@@ -0,0 +1,164 @@
---
language:
- en
library_name: sentence-transformers
pipeline_tag: feature-extraction
base_model: unsloth/embeddinggemma-300m
datasets:
- disham993/ElectricalElectronicsIR
tags:
- embedding
- retrieval
- electrical-engineering
- unsloth
- safetensors
- sentence-transformers
- information-retrieval
- rag
- semantic-search
- arxiv:2509.20354
license: mit
---
# electrical-embeddinggemma-ir_finetune_16bit
## Model Description
This model is a **fully merged fp16 checkpoint** fine-tuned from [`unsloth/embeddinggemma-300m`](https://huggingface.co/unsloth/embeddinggemma-300m) — Unsloth's optimized mirror of Google's [EmbeddingGemma-300M](https://huggingface.co/google/embeddinggemma-300m) — for feature-extraction tasks, specifically dense Information Retrieval (IR) in the electrical and electronics engineering domain. The LoRA adapter weights have been merged into the base model and saved as full fp16 `.safetensors` weights, making this the most compatible variant for the Hugging Face ecosystem (Sentence Transformers, vLLM, Text Embeddings Inference, etc.).
This repository contains the complete model weights (~1.2 GB) and does **not** require a `llama.cpp` backend.
<p align="center"><img src="https://huggingface.co/disham993/electrical-embeddinggemma-ir_finetune_16bit/resolve/main/poster.png" width="340"/></p>
## Training Data
The model was trained on the [`disham993/ElectricalElectronicsIR`](https://huggingface.co/datasets/disham993/ElectricalElectronicsIR) dataset — 20,000 question-passage pairs covering electrical engineering, electronics, power systems, and communications.
- **16k train / 2k validation / 2k test**
- Queries: 133,822 characters; passages: 5,865,590 characters
- Topics include phased array antennas, IEC 61850 protocols, Josephson junctions, OTDR measurements, MIMO channel estimation, FPGA partial reconfiguration, and more
## Model Details
| | |
|---|---|
| **Base Model** | `unsloth/embeddinggemma-300m` (308M params) |
| **Format** | Merged fp16 (`.safetensors`) |
| **Task** | Feature Extraction (Dense IR / Semantic Search) |
| **Language** | English (en) |
| **Dataset** | `disham993/ElectricalElectronicsIR` |
| **Model size** | ~1.2 GB |
| **License** | MIT |
## Training Procedure
### Training Hyperparameters
| | |
|---|---|
| **Method** | LoRA via Unsloth's `FastSentenceTransformer`, merged to fp16 |
| **LoRA rank / alpha** | r=32, α=64 |
| **Target modules** | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| **Loss** | `MultipleNegativesRankingLoss` (in-batch negatives) |
| **Batch size** | 128 per device × 2 gradient accumulation = 256 effective |
| **Learning rate** | 2e-5 (linear schedule, 3% warmup) |
| **Max steps** | 100 |
| **Max sequence length** | 1024 |
| **Precision** | bf16 (training) → fp16 (saved) |
| **Batch sampler** | `NO_DUPLICATES` |
| **Hardware** | NVIDIA RTX 5090 |
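The card trains LoRA adapters through Unsloth's `FastSentenceTransformer`; as a rough plain-Sentence-Transformers equivalent of the recipe in the table above (no LoRA, and the dataset's column names are assumed to match what the loss expects):

```python
from datasets import load_dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("unsloth/embeddinggemma-300m")
train = load_dataset("disham993/ElectricalElectronicsIR", split="train")  # question-passage pairs

args = SentenceTransformerTrainingArguments(
    output_dir="eeir-finetune",                # hypothetical output path
    per_device_train_batch_size=128,
    gradient_accumulation_steps=2,             # 256 effective batch size
    learning_rate=2e-5,
    warmup_ratio=0.03,                         # 3% warmup; linear schedule is the default
    max_steps=100,
    bf16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train,
    loss=MultipleNegativesRankingLoss(model),  # in-batch negatives
)
trainer.train()
```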
## Evaluation Results
Evaluated on the held-out test split (2,000 queries) of `disham993/ElectricalElectronicsIR` using `sentence_transformers.evaluation.InformationRetrievalEvaluator`.
| Model | MAP@100 | NDCG@10 | MRR@10 | Recall@10 |
|---|---|---|---|---|
| `unsloth/embeddinggemma-300m` (baseline) | 0.5753 | 0.6221 | 0.5682 | 0.7925 |
| `electrical-embeddinggemma-ir_lora` | 0.9795 | 0.9847 | 0.9795 | 1.0000 |
| **`electrical-embeddinggemma-ir_finetune_16bit` (this model)** | **0.9797** | **0.9849** | **0.9797** | **1.0000** |
| `electrical-embeddinggemma-ir_f16` | 0.9849 | 0.9887 | 0.9849 | 0.9995 |
| `electrical-embeddinggemma-ir_q8_0` | 0.9844 | 0.9883 | 0.9844 | 0.9995 |
| `electrical-embeddinggemma-ir_q4_k_m` | 0.9841 | 0.9879 | 0.9840 | 0.9990 |
| `electrical-embeddinggemma-ir_q5_k_m` | 0.9824 | 0.9866 | 0.9823 | 0.9990 |
**+40 pp MAP@100 and +72% relative MRR@10 improvement over the general-purpose baseline. Recall@10 = 1.0000: perfect top-10 coverage.**
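To reproduce this protocol on your own split, the evaluator takes id-keyed dicts of queries, corpus passages, and relevance sets. A toy-sized sketch (ids and texts are placeholders, not the actual test split):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("disham993/electrical-embeddinggemma-ir_finetune_16bit")

# Stand-ins for the 2,000-query test split; real ids/texts come from the dataset.
queries = {"q1": "How do transformers step up voltage?"}
corpus = {
    "d1": "A step-up transformer has more secondary turns than primary turns.",
    "d2": "Diodes allow current to pass in only one direction.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="eeir-test")
print(evaluator(model))  # MAP/NDCG/MRR/Recall metrics (a dict in recent versions)
```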
## Usage
```bash
# Install dependencies
pip install sentence-transformers torch
```
```python
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

# === SEMANTIC SEARCH EXAMPLE ===
if __name__ == "__main__":
    print("Downloading and booting engine...")
    # The repository loads directly with Sentence Transformers
    model = SentenceTransformer("disham993/electrical-embeddinggemma-ir_finetune_16bit")

    query = "How do transformers step up voltage?"

    # A miniature corpus of engineering documents
    documents = [
        "Ohm's law defines the relationship between voltage, current, and resistance.",
        "AC circuits use alternating current which changes direction periodically.",
        "A step-up transformer has more turns on its secondary coil than its primary, increasing voltage.",
        "Capacitors store electrical energy in an electric field.",
        "Inductors resist changes in electric current passing through them.",
        "Transformers operate on Faraday's law of induction to transfer energy between circuits.",
        "Diodes allow current to pass in only one direction.",
        "Voltage is the electric potential difference between two points.",
    ]

    print("Extracting embeddings...")
    # Encode texts directly to PyTorch tensors
    query_emb = model.encode(query, convert_to_tensor=True)
    doc_embs = model.encode(documents, convert_to_tensor=True)

    # Cosine similarity between the query and every document
    similarities = F.cosine_similarity(query_emb.unsqueeze(0), doc_embs)

    # Retrieve the top-3 highest-scoring documents
    top_3_idx = torch.topk(similarities, k=3).indices.tolist()

    print(f"\n--- Top 3 Documents for Query: '{query}' ---")
    for rank, idx in enumerate(top_3_idx, 1):
        print(f"Rank {rank} (Score: {similarities[idx]:.4f}) | {documents[idx]}")
```
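As a side note, recent Sentence Transformers releases also expose `model.similarity(query_emb, doc_embs)`, which applies the model's configured `similarity_fn_name` (cosine here), so the manual `F.cosine_similarity` call above is optional.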
## Limitations and Bias
While this model performs exceptionally well in the electrical and electronics engineering domain, it is not designed for use in other domains. Additionally, it may:
- Underperform on queries that mix electrical engineering with unrelated domains (e.g., biomedical, legal, financial)
- Show reduced performance on non-English text or highly colloquial phrasing
- Be slower and more memory-intensive than the GGUF variants (~1.2 GB vs ~236 MB for q4_k_m)
This model is intended for research, educational, and production IR applications in the electrical engineering domain.
## Training Infrastructure
For the complete fine-tuning and evaluation pipeline — from data loading to GGUF export — refer to the [GitHub repository](https://github.com/di37/electrical-embeddinggemma-ir-finetuning-evaluation) and the notebooks `Finetuning_EmbeddingGemma_EEIR_RTX_5090.ipynb` and `Evaluate_All_Models.ipynb`.
## Last Update
2026-04-18
## Citation
```bibtex
@misc{electrical-embeddinggemma-ir,
author = {disham993},
title = {Electrical \& Electronics Engineering Embedding Models},
year = {2026},
howpublished = {\url{https://huggingface.co/collections/disham993/electrical-and-electronics-engineering-embedding-models}},
}
```

added_tokens.json Normal file (+3)

@@ -0,0 +1,3 @@
{
"<image_soft_token>": 262144
}

config.json Normal file (+62)

@@ -0,0 +1,62 @@
{
"_sliding_window_pattern": 6,
"architectures": [
"Gemma3TextModel"
],
"attention_bias": false,
"attention_dropout": 0.0,
"attn_logit_softcapping": null,
"bos_token_id": 2,
"torch_dtype": "bfloat16",
"eos_token_id": 1,
"final_logit_softcapping": null,
"head_dim": 256,
"hidden_activation": "gelu_pytorch_tanh",
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 1152,
"layer_types": [
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"sliding_attention",
"full_attention"
],
"max_position_embeddings": 2048,
"model_name": "unsloth/embeddinggemma-300m",
"model_type": "gemma3_text",
"num_attention_heads": 3,
"num_hidden_layers": 24,
"num_key_value_heads": 1,
"pad_token_id": 0,
"query_pre_attn_scalar": 256,
"rms_norm_eps": 1e-06,
"rope_local_base_freq": 10000.0,
"rope_scaling": null,
"rope_theta": 1000000.0,
"sliding_window": 512,
"tokenizer_class": "GemmaTokenizerFast",
"unsloth_version": "2026.4.6",
"use_bidirectional_attention": true,
"use_cache": true,
"vocab_size": 262144
}
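The 24-entry `layer_types` list above is fully determined by `"_sliding_window_pattern": 6` (every sixth layer uses full attention, the rest sliding-window attention with window 512). A snippet that reproduces it:

```python
# Every 6th layer is full attention; the remaining layers use sliding-window attention.
pattern, n_layers = 6, 24
layer_types = ["full_attention" if (i + 1) % pattern == 0 else "sliding_attention"
               for i in range(n_layers)]
assert layer_types[5] == layer_types[23] == "full_attention"
```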

config_sentence_transformers.json Normal file (+14)

@@ -0,0 +1,14 @@
{
"model_type": "SentenceTransformer",
"__version__": {
"sentence_transformers": "5.3.0",
"transformers": "4.56.2",
"pytorch": "2.10.0+cu128"
},
"prompts": {
"query": "",
"document": ""
},
"default_prompt_name": null,
"similarity_fn_name": "cosine"
}

eeir-models-retrieval-comparison.png Normal file (+3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e4c5292f17104668aaefe797397515f723c31a4a5c24b7d3e824996c5834605
size 100559

model.safetensors Normal file (+3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3cd2fb3fe8231cef44e800759736a9a282c40b3d2278fe127578a7bd45287c74
size 1211486072

modules.json Normal file (+32)

@@ -0,0 +1,32 @@
[
{
"idx": 0,
"name": "0",
"path": "",
"type": "sentence_transformers.models.Transformer"
},
{
"idx": 1,
"name": "1",
"path": "1_Pooling",
"type": "sentence_transformers.models.Pooling"
},
{
"idx": 2,
"name": "2",
"path": "2_Dense",
"type": "sentence_transformers.models.Dense"
},
{
"idx": 3,
"name": "3",
"path": "3_Dense",
"type": "sentence_transformers.models.Dense"
},
{
"idx": 4,
"name": "4",
"path": "4_Normalize",
"type": "sentence_transformers.models.Normalize"
}
]
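The five modules form the `encode()` pipeline end to end: Transformer → mean Pooling → Dense(768→3072) → Dense(3072→768) → Normalize. A quick check that the stack loads and yields 768-dimensional unit vectors:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("disham993/electrical-embeddinggemma-ir_finetune_16bit")
print(model)  # prints the 5-module stack listed above
emb = model.encode("OTDR measurements locate faults in optical fibers.")
print(emb.shape)  # (768,), L2-normalized by the final Normalize module
```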

poster.png Normal file (+3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:322e1f2a54f78bde72c68c1ba597802cc43ea8dfee6b86202eeabfecb5c9d06f
size 379748

sentence_bert_config.json Normal file (+4)

@@ -0,0 +1,4 @@
{
"max_seq_length": 2048,
"do_lower_case": false
}

special_tokens_map.json Normal file (+33)

@@ -0,0 +1,33 @@
{
"boi_token": "<start_of_image>",
"bos_token": {
"content": "<bos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eoi_token": "<end_of_image>",
"eos_token": {
"content": "<eos>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"image_token": "<image_soft_token>",
"pad_token": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}

tokenizer.json Normal file (+3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:45732ebfacbc3b3da71bd8290d8c14c0df3b2d8d63d86970a953c59d71bd36d8
size 33385261

tokenizer.model Normal file (+3)

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1299c11d7cf632ef3b4e11937501358ada021bbdf7c47638d13c0ee982f2e79c
size 4689074

tokenizer_config.json Normal file (+51351)

File diff suppressed because it is too large