ModelHub XC 5e7ba7f537: project initialized; model provided by the ModelHub XC community
Model: RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_share_gpt-gguf
Source: Original Platform
2026-04-21 17:55:36 +08:00

Quantization made by Richard Erkhov.

  • GitHub
  • Discord
  • Request more models

llama3-1_8b_oh_v3.1_wo_share_gpt - GGUF

Name Quant method Size
llama3-1_8b_oh_v3.1_wo_share_gpt.Q2_K.gguf Q2_K 2.96GB
llama3-1_8b_oh_v3.1_wo_share_gpt.IQ3_XS.gguf IQ3_XS 3.28GB
llama3-1_8b_oh_v3.1_wo_share_gpt.IQ3_S.gguf IQ3_S 3.43GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q3_K_S.gguf Q3_K_S 3.41GB
llama3-1_8b_oh_v3.1_wo_share_gpt.IQ3_M.gguf IQ3_M 3.52GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q3_K.gguf Q3_K 3.74GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q3_K_M.gguf Q3_K_M 3.74GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q3_K_L.gguf Q3_K_L 4.03GB
llama3-1_8b_oh_v3.1_wo_share_gpt.IQ4_XS.gguf IQ4_XS 4.18GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q4_0.gguf Q4_0 4.34GB
llama3-1_8b_oh_v3.1_wo_share_gpt.IQ4_NL.gguf IQ4_NL 4.38GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q4_K_S.gguf Q4_K_S 4.37GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q4_K.gguf Q4_K 4.58GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q4_K_M.gguf Q4_K_M 4.58GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q4_1.gguf Q4_1 4.78GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q5_0.gguf Q5_0 5.21GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q5_K_S.gguf Q5_K_S 5.21GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q5_K.gguf Q5_K 5.34GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q5_K_M.gguf Q5_K_M 5.34GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q5_1.gguf Q5_1 5.65GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q6_K.gguf Q6_K 6.14GB
llama3-1_8b_oh_v3.1_wo_share_gpt.Q8_0.gguf Q8_0 7.95GB
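
To try one of these files locally, here is a minimal sketch assuming llama-cpp-python and huggingface_hub are installed; the Q4_K_M file is picked arbitrarily from the table above, and the context size and GPU offload settings are illustrative rather than prescribed by this card.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant file from the source repo listed in the header above.
model_path = hf_hub_download(
    repo_id="RichardErkhov/mlfoundations-dev_-_llama3-1_8b_oh_v3.1_wo_share_gpt-gguf",
    filename="llama3-1_8b_oh_v3.1_wo_share_gpt.Q4_K_M.gguf",
)

# Load the GGUF file; n_ctx and n_gpu_layers are illustrative, tune them for your hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF quantization is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants (Q2_K, Q3_K_*) trade quality for memory; Q4_K_M and above are the usual starting points when the hardware allows it.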

Original model description:

library_name: transformers
license: llama3.1
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
  • llama-factory
  • full
  • generated_from_trainer
model-index:
  • name: llama3-1_8b_oh_v3.1_wo_share_gpt
    results: []

llama3-1_8b_oh_v3.1_wo_share_gpt

This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B on the mlfoundations-dev/oh_v3.1_wo_share_gpt dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6453
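
For reference, a hedged sketch of loading the unquantized fine-tune with Transformers; the repo id mlfoundations-dev/llama3-1_8b_oh_v3.1_wo_share_gpt is an assumption inferred from the mirror name above, and device_map="auto" requires accelerate.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for the original (non-GGUF) checkpoint; adjust if it differs.
repo = "mlfoundations-dev/llama3-1_8b_oh_v3.1_wo_share_gpt"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```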

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 32
  • total_train_batch_size: 512
  • total_eval_batch_size: 256
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.1
  • lr_scheduler_warmup_steps: 1738
  • num_epochs: 3.0
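
As a rough guide, the list above maps onto transformers.TrainingArguments as in the sketch below. This is not the actual LLaMA-Factory configuration; the output_dir and bf16 flag are assumptions, and in Transformers warmup_steps takes precedence over warmup_ratio when both are set.

```python
from transformers import TrainingArguments

# Hedged sketch mirroring the hyperparameter list above.
# Per-device sizes correspond to the reported 32-GPU run: 16 * 32 = 512 train, 8 * 32 = 256 eval.
args = TrainingArguments(
    output_dir="llama3-1_8b_oh_v3.1_wo_share_gpt",  # assumed output path
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3.0,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    warmup_ratio=0.1,
    warmup_steps=1738,   # overrides warmup_ratio when both are given
    bf16=True,           # assumption: full fine-tunes of 8B models are typically run in bf16
)
```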

Training results

Training Loss Epoch Step Validation Loss
0.6517 1.0 422 0.6541
0.6045 2.0 844 0.6440
0.5731 3.0 1266 0.6453

Framework versions

  • Transformers 4.46.1
  • Pytorch 2.4.0
  • Datasets 3.0.2
  • Tokenizers 0.20.3