Initialize project; model provided by the ModelHub XC community.
Model: farbodtavakkoli/OTel-LLM-1.2B-IT (Source: Original Platform)
.gitattributes (vendored, new file, 35 lines)
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md (new file, 89 lines)
@@ -0,0 +1,89 @@
---
license: apache-2.0
language:
- en
base_model:
- LiquidAI/LFM2.5-1.2B-Instruct
tags:
- telecom
- telecommunications
- gsma
- fine-tuned
pipeline_tag: text-generation
---

# OTel-LLM-1.2B-IT

**OTel-LLM-1.2B-IT** is a telecom-specialized language model fine-tuned on telecommunications domain data. It is part of the [OTel Family of Models](https://huggingface.co/collections/farbodtavakkoli/otel-llm), an open-source initiative to build industry-standard AI models for the global telecommunications sector.
## Model Details

| Attribute | Value |
|-----------|-------|
| **Base Model** | [LiquidAI/LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) |
| **Parameters** | 1.2B |
| **Training Method** | Full parameter fine-tuning |
| **Language** | English |
| **License** | Apache 2.0 |
## Training Data

The model was trained on high-quality telecom-focused data curated by 100+ domain experts from organizations including AT&T, Microsoft, AMD, GSMA, RelationalAI, Essential AI, Purdue University, Khalifa University, University of Leeds, Yale University, The University of Texas at Dallas, NetoAI, and MantisNLP.

**Data Sources:**
- GSMA Permanent Reference Documents
- 3GPP Specifications
- O-RAN Documentation
- RFC Series
- eSIM, terminals, security, networks, roaming, APIs
- Industry whitepapers and telecom academic papers
## Intended Use

The OTel model family is designed to power end-to-end Retrieval-Augmented Generation (RAG) pipelines for telecommunications. The three model types serve complementary roles:

1. **Embedding** — Retrieve relevant chunks from telecom specifications, standards, and documentation.
2. **Reranker** — Re-score and prioritize the retrieved chunks for relevance.
3. **LLM** — Generate accurate responses grounded in the retrieved context.

Users can deploy the full pipeline or use individual models independently based on their needs.

**Note:** The LLMs include abstention training — if the model does not receive sufficient context, it will decline to answer rather than hallucinate. This means the models are optimized for context-grounded generation, not open-ended question answering.
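The three-stage flow described above can be sketched end to end. The snippet below is a toy illustration of the control flow only: the scoring functions, corpus, and query are all hypothetical stand-ins for the actual OTel embedding, reranker, and LLM models, and the abstention branch mirrors the behaviour noted above.

```python
# Toy sketch of the three-stage OTel RAG flow: retrieve -> rerank -> generate.
# Real deployments would call the OTel embedding, reranker, and LLM models;
# here simple word-overlap scores stand in so the pipeline shape is clear.

def embed_retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Stage 1: retrieve the k chunks with the highest term overlap."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return ranked[:k]

def rerank(query: str, chunks: list[str], top_n: int = 1) -> list[str]:
    """Stage 2: re-score retrieved chunks (here: overlap density)."""
    q = set(query.lower().split())
    def density(c: str) -> float:
        words = c.lower().split()
        return len(q & set(words)) / max(len(words), 1)
    return sorted(chunks, key=density, reverse=True)[:top_n]

def generate(query: str, context: list[str]) -> str:
    """Stage 3: answer grounded in context, abstaining when it is empty."""
    if not context:
        return "I don't have enough context to answer."
    return f"Based on the provided context: {context[0]}"

corpus = [
    "eSIM profiles are provisioned over the SM-DP+ interface.",
    "Roaming agreements are documented in GSMA PRDs.",
    "5G NR uses flexible numerology for the air interface.",
]
query = "How are eSIM profiles provisioned?"
answer = generate(query, rerank(query, embed_retrieve(query, corpus)))
print(answer)  # grounded answer built from the top reranked chunk
```

Swapping the toy scorers for the OTel embedding and reranker models, and the template response for an LLM call, yields the full pipeline.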
## Related Models

### Language Models
- [OTel LLM Collection](https://huggingface.co/collections/farbodtavakkoli/otel-llm)

### Embedding Models
- [OTel Embedding Collection](https://huggingface.co/collections/farbodtavakkoli/otel-embedding)

### Reranker Models
- [OTel Reranker Collection](https://huggingface.co/collections/farbodtavakkoli/otel-reranker)
## Related Datasets

- [OTel-Embedding](https://huggingface.co/datasets/farbodtavakkoli/OTel-Embedding)
- [OTel-Safety](https://huggingface.co/datasets/farbodtavakkoli/OTel-Safety)
- [OTel-LLM](https://huggingface.co/datasets/farbodtavakkoli/OTel-LLM)
- [OTel-Reranker](https://huggingface.co/datasets/farbodtavakkoli/OTel-Reranker)

## Training Infrastructure

- **Framework**: ScalarLM (GPU-agnostic)
- **Compute**: AMD and NVIDIA GPUs
## Citation

```bibtex
@misc{otel2026,
  title={OTel: Open Telco AI Models},
  author={Tavakkoli, Farbod and Diamos, Gregory and Paulk, Roderic and Terrazas, Jorden},
  year={2026},
  url={https://huggingface.co/farbodtavakkoli}
}
```

## Contact

If you have any technical questions, please feel free to reach out to farbod.tavakkoli@att.com or farbodtavakoli@gmail.com.
chat_template.jinja (new file, 45 lines)
@@ -0,0 +1,45 @@
{{- bos_token -}}
{%- set keep_past_thinking = keep_past_thinking | default(false) -%}
{%- set ns = namespace(system_prompt="") -%}
{%- if messages[0]["role"] == "system" -%}
    {%- set ns.system_prompt = messages[0]["content"] -%}
    {%- set messages = messages[1:] -%}
{%- endif -%}
{%- if tools -%}
    {%- set ns.system_prompt = ns.system_prompt + ("\n" if ns.system_prompt else "") + "List of tools: [" -%}
    {%- for tool in tools -%}
        {%- if tool is not string -%}
            {%- set tool = tool | tojson -%}
        {%- endif -%}
        {%- set ns.system_prompt = ns.system_prompt + tool -%}
        {%- if not loop.last -%}
            {%- set ns.system_prompt = ns.system_prompt + ", " -%}
        {%- endif -%}
    {%- endfor -%}
    {%- set ns.system_prompt = ns.system_prompt + "]" -%}
{%- endif -%}
{%- if ns.system_prompt -%}
    {{- "<|im_start|>system\n" + ns.system_prompt + "<|im_end|>\n" -}}
{%- endif -%}
{%- set ns.last_assistant_index = -1 -%}
{%- for message in messages -%}
    {%- if message["role"] == "assistant" -%}
        {%- set ns.last_assistant_index = loop.index0 -%}
    {%- endif -%}
{%- endfor -%}
{%- for message in messages -%}
    {{- "<|im_start|>" + message["role"] + "\n" -}}
    {%- set content = message["content"] -%}
    {%- if content is not string -%}
        {%- set content = content | tojson -%}
    {%- endif -%}
    {%- if message["role"] == "assistant" and not keep_past_thinking and loop.index0 != ns.last_assistant_index -%}
        {%- if "</think>" in content -%}
            {%- set content = content.split("</think>")[-1] | trim -%}
        {%- endif -%}
    {%- endif -%}
    {{- content + "<|im_end|>\n" -}}
{%- endfor -%}
{%- if add_generation_prompt -%}
    {{- "<|im_start|>assistant\n" -}}
{%- endif -%}
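To make the template's behaviour concrete, here is a plain-Python sketch of its core logic: system-prompt hoisting, stripping `<think>…</think>` reasoning from all past assistant turns while keeping it on the last one, and the optional generation prompt. Tool-list handling is omitted, and this is an illustration rather than the tokenizer's actual `apply_chat_template`; the `<|startoftext|>` default matches the bos token declared in special_tokens_map.json below.

```python
# Rough Python equivalent of the Jinja chat template above (tool-list
# handling omitted). Formats messages with <|im_start|>/<|im_end|> markers
# and drops <think>…</think> content from all but the last assistant turn.

def render_chat(messages, bos_token="<|startoftext|>",
                add_generation_prompt=True, keep_past_thinking=False):
    out = bos_token
    msgs = list(messages)
    system_prompt = ""
    if msgs and msgs[0]["role"] == "system":
        system_prompt = msgs[0]["content"]
        msgs = msgs[1:]
    if system_prompt:
        out += "<|im_start|>system\n" + system_prompt + "<|im_end|>\n"
    # Index of the final assistant message (whose thinking is preserved).
    last_assistant = max(
        (i for i, m in enumerate(msgs) if m["role"] == "assistant"),
        default=-1)
    for i, m in enumerate(msgs):
        content = m["content"]
        if (m["role"] == "assistant" and not keep_past_thinking
                and i != last_assistant and "</think>" in content):
            content = content.split("</think>")[-1].strip()
        out += "<|im_start|>" + m["role"] + "\n" + content + "<|im_end|>\n"
    if add_generation_prompt:
        out += "<|im_start|>assistant\n"
    return out

prompt = render_chat([
    {"role": "system", "content": "You are a telecom assistant."},
    {"role": "user", "content": "What is an eSIM?"},
])
print(prompt)
```

Rendering the two-message example produces the bos token, a system block, a user block, and a trailing `<|im_start|>assistant\n` generation prompt, exactly as the template emits them.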
config.json (new file, 57 lines)
@@ -0,0 +1,57 @@
{
  "architectures": [
    "Lfm2ForCausalLM"
  ],
  "block_auto_adjust_ff_dim": true,
  "block_dim": 2048,
  "block_ff_dim": 12288,
  "block_ffn_dim_multiplier": 1.0,
  "block_mlp_init_scale": 1.0,
  "block_multiple_of": 256,
  "block_norm_eps": 1e-05,
  "block_out_init_scale": 1.0,
  "block_use_swiglu": true,
  "block_use_xavier_init": true,
  "bos_token_id": 1,
  "conv_L_cache": 3,
  "conv_bias": false,
  "conv_dim": 2048,
  "conv_use_xavier_init": true,
  "dtype": "bfloat16",
  "eos_token_id": 7,
  "hidden_size": 2048,
  "initializer_range": 0.02,
  "intermediate_size": 12288,
  "layer_types": [
    "conv",
    "conv",
    "full_attention",
    "conv",
    "conv",
    "full_attention",
    "conv",
    "conv",
    "full_attention",
    "conv",
    "full_attention",
    "conv",
    "full_attention",
    "conv",
    "full_attention",
    "conv"
  ],
  "max_position_embeddings": 128000,
  "model_type": "lfm2",
  "norm_eps": 1e-05,
  "num_attention_heads": 32,
  "num_heads": 32,
  "num_hidden_layers": 16,
  "num_key_value_heads": 8,
  "pad_token_id": 0,
  "rope_theta": 1000000.0,
  "tie_embedding": true,
  "transformers_version": "4.57.6",
  "use_cache": false,
  "use_pos_enc": true,
  "vocab_size": 65536
}
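The `layer_types` list above defines the hybrid stack of short-convolution and full-attention blocks that characterizes the lfm2 architecture. A quick sanity check of the mix, with the values copied verbatim from the config:

```python
# Count the layer mix declared in config.json above: 16 layers total
# (matching num_hidden_layers), interleaving conv and full_attention blocks.
layer_types = [
    "conv", "conv", "full_attention", "conv",
    "conv", "full_attention", "conv", "conv",
    "full_attention", "conv", "full_attention", "conv",
    "full_attention", "conv", "full_attention", "conv",
]
counts = {t: layer_types.count(t) for t in sorted(set(layer_types))}
print(counts)  # → {'conv': 10, 'full_attention': 6}
```

So ten of the sixteen layers are convolutional and six use full attention, which is part of why the model stays light at 1.2B parameters with a 128k position window.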
generation_config.json (new file, 10 lines)
@@ -0,0 +1,10 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "do_sample": true,
  "eos_token_id": [
    7
  ],
  "pad_token_id": 0,
  "transformers_version": "4.57.6"
}
pytorch_model.bin (new file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bf3e6c77415507e49788f276fecc26b32c53b954c4f253c0b14e918fea4609af
size 4681411139
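The Git LFS pointer above records only the checkpoint's hash and byte size. As a rough plausibility check against the advertised 1.2B parameter count, assume the weights are serialized at 4 bytes per parameter (fp32 serialization is an assumption on my part, not something the repo states; the config's `"dtype": "bfloat16"` describes the compute dtype):

```python
# Back-of-the-envelope: relate the LFS-reported checkpoint size to the
# advertised ~1.2B parameters. ASSUMPTION: fp32 serialization (4 bytes
# per parameter); this is a plausibility check, not ground truth.
size_bytes = 4_681_411_139       # "size" line from the LFS pointer above
approx_params = size_bytes / 4   # fp32 assumption
print(f"~{approx_params / 1e9:.2f}B parameters")  # → ~1.17B parameters
```

Under that assumption the file size works out to roughly 1.17B parameters, consistent with the 1.2B figure in the model card.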
special_tokens_map.json (new file, 23 lines)
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<|startoftext|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|pad|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json (new file, 323835 lines; diff suppressed: file too large)
tokenizer_config.json (new file, 4095 lines; diff suppressed: file too large)