Initialize the project; model provided by the ModelHub XC community
Model: croissantllm/CroissantLLMBase-GGUF
Source: Original Platform
This commit is contained in:
.gitattributes (vendored, Normal file, 38 lines)
@@ -0,0 +1,38 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
croissantllmbase.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
croissantllmbase.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
croissantllmbase.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
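Each rule above routes matching files through the Git LFS filter, so the repository itself stores only lightweight pointer files for large artifacts. As a rough illustration (the parser below is ours, not an official Git tool; the sample rules are copied from this file), such rules can be read programmatically:

```python
# Minimal sketch: extract the path patterns that request the Git LFS
# filter from .gitattributes-style text. Illustrative only.

def parse_lfs_patterns(text):
    """Return path patterns whose attribute list contains filter=lfs."""
    patterns = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

rules = """\
*.gz filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
croissantllmbase.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
"""

print(parse_lfs_patterns(rules))
# -> ['*.gz', '*.safetensors', 'croissantllmbase.Q4_K_M.gguf']
```

In practice these lines are typically generated with `git lfs track "<pattern>"` rather than edited by hand.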
README.md (Normal file, 75 lines)
@@ -0,0 +1,75 @@
---
license: mit
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
- croissantllm/croissant_dataset
language:
- fr
- en
pipeline_tag: text-generation
tags:
- legal
- code
- text-generation-inference
- art
---

# CroissantLLM - Base GGUF (190k steps, Final version)

This model is part of the CroissantLLM initiative and corresponds to the checkpoint after 190k steps (2.99T tokens).

To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.

Paper: https://arxiv.org/abs/2402.00786

## Abstract

We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware.

To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources.

To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks, covering various orthogonal aspects of model performance in the French language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models and strong translation models. We evaluate our model through the FMTI framework and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives.

This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.

## Citation

Our work can be cited as:

```bibtex
@misc{faysse2024croissantllm,
  title={CroissantLLM: A Truly Bilingual French-English Language Model},
  author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},
  year={2024},
  eprint={2402.00786},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

|
||||
## Usage

This is a base model: it is not finetuned for chat and works best with few-shot prompting strategies. Note that the snippet below loads the full-precision `croissantllm/CroissantLLMBase` checkpoint with `transformers`; the GGUF files in this repository are intended for GGUF-compatible runtimes such as llama.cpp.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "croissantllm/CroissantLLMBase"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# English -> French translation with a few-shot prompt
inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.\nHe is heading to the market. -> Il va au marché.\nWe are running on the beach. ->", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.3)
print(tokenizer.decode(tokens[0]))

# add_special_tokens=True keeps the BOS token for this few-shot prompt
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=True).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
```

croissantllmbase.Q4_K_M.gguf (Normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0c20ca12a501191bf382f63b9428f5cebe3b788746146f63982b472911ba9905
size 872313024
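The three lines above are a Git LFS pointer file (spec v1): a version URL, the SHA-256 digest of the real blob, and its size in bytes. A minimal sketch of reading such a pointer, using the Q4_K_M values from this commit (the helper name is ours, not part of Git LFS):

```python
# Illustrative parser for a Git LFS pointer file (spec v1):
# each line is "<key> <value>"; oid carries a "sha256:" prefix.
def parse_lfs_pointer(text):
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),
    }

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:0c20ca12a501191bf382f63b9428f5cebe3b788746146f63982b472911ba9905
size 872313024
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # -> 872313024
```

This is why cloning without `git lfs` installed yields tiny text files instead of the multi-hundred-megabyte GGUF weights.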
croissantllmbase.Q5_K_M.gguf (Normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59ed4d4447418b9aac6343e100cd0158ebe4915e4dce73f80714859784c61f4a
size 1000632512
croissantllmbase.Q8_0.gguf (Normal file, 3 lines)
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dce83a2949b5cacda4413b2993345bb344bb63dda355d4fc5ae9372730f7a0f5
size 1430560960
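Since the abstract describes a 1.3B-parameter model, the pointer sizes above give a back-of-the-envelope bits-per-weight figure for each quantization. This is only an approximation: the 1.3B count is the paper's rounded figure, and GGUF metadata plus non-quantized tensors are ignored, so the numbers come out slightly above the nominal rates for these formats.

```python
# Approximate bits per weight for each GGUF file in this commit,
# assuming ~1.3e9 parameters (rounded figure from the paper abstract;
# file overhead and mixed-precision tensors are ignored).
PARAMS = 1.3e9

sizes = {
    "Q4_K_M": 872_313_024,
    "Q5_K_M": 1_000_632_512,
    "Q8_0": 1_430_560_960,
}

for name, size_bytes in sizes.items():
    bpw = size_bytes * 8 / PARAMS
    print(f"{name}: ~{bpw:.1f} bits/weight")
```

As expected, the heavier quantizations trade disk and memory footprint for fidelity, with Q8_0 roughly 1.6x the size of Q4_K_M.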