Initialize project; model provided by the ModelHub XC community
Model: yam-peleg/Experiment28-7B Source: Original Platform
README.md (new file, 25 lines)
---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- chat
---
**Experiment28-7B**

An experiment for testing and refining a specific training and evaluation pipeline research framework.

This experiment aims to identify potential optimizations, focusing on data engineering, architecture efficiency, and evaluation performance.

The goal is to evaluate the effectiveness of a new training/evaluation pipeline for LLMs.

The experiment will explore adjustments to data preprocessing, model training algorithms, and evaluation metrics to test methods for improvement.

More details will be shared in future experiments.
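The card's frontmatter declares `library_name: transformers` and `pipeline_tag: text-generation`, so the model should be loadable through the standard `transformers` pipeline API. A minimal usage sketch follows; the `generate` helper and its parameters are illustrative, not part of this repository, and running it requires the `transformers` library plus network access to download the weights.

```python
MODEL_ID = "yam-peleg/Experiment28-7B"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a completion for `prompt` using the model's text-generation pipeline."""
    # Imported lazily so this sketch can be read (and the helper defined)
    # without transformers installed; weights download on first use.
    from transformers import pipeline
    generator = pipeline("text-generation", model=MODEL_ID)
    return generator(prompt, max_new_tokens=max_new_tokens)[0]["generated_text"]
```

Since the tags list `chat`, the model may also be used with a chat template via `AutoTokenizer.apply_chat_template`, but plain text-generation as above is the baseline implied by the pipeline tag.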