---
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation
- metadata-localization
- global
- 1b
- with-metadata
- pretraining
- intermediate-checkpoint
---
# combined_with_metadata_1b_step2k

## Summary

This repository contains the global combined model exported from the 2k-step checkpoint of the metadata localization project. It was trained from scratch on the project corpus, using the Llama 3.2 tokenizer and vocabulary.
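As a minimal usage sketch, the checkpoint can be loaded with `transformers`. The Hub repository id below is an assumption inferred from the model name and collection owner, not confirmed by this card; adjust it to the actual path.

```python
# Hypothetical Hub id, inferred from the model name and collection owner.
REPO_ID = "iamshnoo/combined_with_metadata_1b_step2k"

def generate(prompt: str, max_new_tokens: int = 32) -> str:
    """Load the checkpoint and continue the prompt greedily."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(REPO_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("The news today:"))
```

Note that this is an intermediate pretraining checkpoint (2k of 10k steps), so completions reflect a partially trained base model, not an instruction-tuned one.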
## Variant Metadata

- Stage: `pretrain`
- Family: `global`
- Size: `1b`
- Metadata condition: `with_metadata`
- Checkpoint export: `2k`
- Base model lineage: `Trained from scratch; tokenizer/vocabulary from meta-llama/Llama-3.2-1B`
## Weights & Biases Provenance

- Run name: `13/11/2025_21:36:45_combined_with_metadata_1b`
- Internal run URL: `https://wandb.ai/iamshnoo/nanotron/runs/zcm25bay`
- Note: the Weights & Biases workspace is private; public readers should use the summarized metrics and configuration below.
- State: `finished`
- Runtime: `114h 26m 23s`
## Run Summary

- `KPI/train_lm_loss`: `2.0574`
- `KPI/train_perplexity`: `7.8256`
- `KPI/val_loss`: `2.0958`
- `KPI/val_perplexity`: `8.1322`
- `KPI/consumed_tokens/train`: `41,943,040,000`
- `_step`: `10,000`
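As a consistency check on the metrics above, the reported perplexities are (up to logging precision) the exponentials of the corresponding cross-entropy losses:

```python
import math

# Reported KPIs from the run summary.
train_loss, train_ppl = 2.0574, 7.8256
val_loss, val_ppl = 2.0958, 8.1322

# Perplexity = exp(loss); rounding in the logged values leaves a tiny residual.
assert math.isclose(math.exp(train_loss), train_ppl, rel_tol=1e-3)
assert math.isclose(math.exp(val_loss), val_ppl, rel_tol=1e-3)
```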
## Training Configuration

- `train_steps`: `10,000`
- `sequence_length`: `2,048`
- `micro_batch_size`: `8`
- `batch_accumulation_per_replica`: `64`
- `learning_rate`: `0.003`
- `min_decay_lr`: `0.0003`
- `checkpoint_interval`: `1,000`
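The configuration above, combined with the consumed-token count in the run summary, pins down the effective batch size. A sketch of the arithmetic; the data-parallel degree of 4 is inferred from the token total, not logged on this card:

```python
# Logged configuration values.
sequence_length = 2_048
micro_batch_size = 8
batch_accumulation_per_replica = 64
train_steps = 10_000

# Inferred: the consumed-token total only matches with 4 data-parallel replicas.
dp_replicas = 4

tokens_per_step = (
    sequence_length * micro_batch_size * batch_accumulation_per_replica * dp_replicas
)
total_tokens = tokens_per_step * train_steps

print(tokens_per_step)   # 4194304 tokens per optimizer step
assert total_tokens == 41_943_040_000  # matches KPI/consumed_tokens/train
```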
## Training Curves

The static plots below were exported from the private Weights & Biases run and are embedded here for public access.
### Train Loss

![Train loss](plots/loss.png)

### Validation Perplexity

![Validation perplexity](plots/val_perplexity.png)

### Throughput

![Throughput](plots/throughput.png)
## Project Context

This model is part of the metadata localization release. Related checkpoints and variants are grouped in the public Hugging Face collection [Metadata Conditioned LLMs](https://huggingface.co/collections/iamshnoo/metadata-conditioned-llms).

- Training data source: [News on the Web (NOW) Corpus](https://www.english-corpora.org/now/)
- Project repository: [https://github.com/iamshnoo/metadata_localization](https://github.com/iamshnoo/metadata_localization)
- Paper: [https://arxiv.org/abs/2601.15236](https://arxiv.org/abs/2601.15236)
Last synced: `2026-04-02 14:39:21 UTC`