Model: RichardErkhov/Equall_-_Saul-7B-Instruct-v1-gguf

Quantization made by Richard Erkhov.

Github

Discord

Request more models

Saul-7B-Instruct-v1 - GGUF

Name Quant method Size
Saul-7B-Instruct-v1.Q2_K.gguf Q2_K 2.53GB
Saul-7B-Instruct-v1.Q3_K_S.gguf Q3_K_S 2.95GB
Saul-7B-Instruct-v1.Q3_K.gguf Q3_K 3.28GB
Saul-7B-Instruct-v1.Q3_K_M.gguf Q3_K_M 3.28GB
Saul-7B-Instruct-v1.Q3_K_L.gguf Q3_K_L 3.56GB
Saul-7B-Instruct-v1.IQ4_XS.gguf IQ4_XS 3.67GB
Saul-7B-Instruct-v1.Q4_0.gguf Q4_0 3.83GB
Saul-7B-Instruct-v1.IQ4_NL.gguf IQ4_NL 3.87GB
Saul-7B-Instruct-v1.Q4_K_S.gguf Q4_K_S 3.86GB
Saul-7B-Instruct-v1.Q4_K.gguf Q4_K 4.07GB
Saul-7B-Instruct-v1.Q4_K_M.gguf Q4_K_M 4.07GB
Saul-7B-Instruct-v1.Q4_1.gguf Q4_1 4.24GB
Saul-7B-Instruct-v1.Q5_0.gguf Q5_0 4.65GB
Saul-7B-Instruct-v1.Q5_K_S.gguf Q5_K_S 4.65GB
Saul-7B-Instruct-v1.Q5_K.gguf Q5_K 4.78GB
Saul-7B-Instruct-v1.Q5_K_M.gguf Q5_K_M 4.78GB
Saul-7B-Instruct-v1.Q5_1.gguf Q5_1 5.07GB
Saul-7B-Instruct-v1.Q6_K.gguf Q6_K 5.53GB
Saul-7B-Instruct-v1.Q8_0.gguf Q8_0 7.17GB
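Choosing a quantization is mostly a memory trade-off: lower-bit files in the table are smaller but lose more quality. As a rough illustration, here is a hypothetical helper (not part of the release) that picks the largest quant fitting a RAM budget, using the file sizes from the table above and an assumed fixed headroom for KV cache and runtime overhead:

```python
# Hypothetical helper: pick the largest quantization that fits a RAM budget.
# Sizes (GB) are copied from the table above; the function name, headroom
# default, and selection rule are illustrative assumptions, not official guidance.
QUANT_SIZES_GB = {
    "Q2_K": 2.53, "Q3_K_S": 2.95, "Q3_K_M": 3.28, "Q3_K_L": 3.56,
    "IQ4_XS": 3.67, "Q4_0": 3.83, "Q4_K_S": 3.86, "IQ4_NL": 3.87,
    "Q4_K_M": 4.07, "Q4_1": 4.24, "Q5_0": 4.65, "Q5_K_S": 4.65,
    "Q5_K_M": 4.78, "Q5_1": 5.07, "Q6_K": 5.53, "Q8_0": 7.17,
}

def pick_quant(ram_gb: float, headroom_gb: float = 1.0):
    """Return the largest quant whose file fits in ram_gb minus headroom,
    or None if nothing fits."""
    budget = ram_gb - headroom_gb
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items() if size <= budget]
    return max(fitting)[1] if fitting else None
```

For example, with 8 GB of RAM and the default 1 GB headroom this would suggest Q6_K; real memory use also depends on context length and the runtime, so treat this as a starting point only.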

Original model description:

library_name: transformers
tags:
  - legal
license: mit
language:
  - en

Equall/Saul-Instruct-v1

This is the instruct model for Equall/Saul-Instruct-v1, a large instruct language model tailored to the legal domain. It was obtained by continued pretraining of Mistral-7B.

Check out our website and register at https://equall.ai/


Model Details

Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

  • Developed by: Equall.ai in collaboration with CentraleSupelec, Sorbonne Université, Instituto Superior Técnico and NOVA School of Law
  • Model type: 7B
  • Language(s) (NLP): English
  • License: MIT


Uses

You can use it for legal use cases that involve text generation.

Here's how you can run the model using the pipeline() function from 🤗 Transformers:


# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="Equall/Saul-Instruct-v1", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizers chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "[YOUR QUERY GOES HERE]"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=False)
print(outputs[0]["generated_text"])
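The call to `apply_chat_template` above wraps each user turn in the model's instruction markers. For intuition only, here is a minimal hand-rolled sketch of a Mistral-style template; in practice always use `pipe.tokenizer.apply_chat_template`, since the tokenizer's template is authoritative and this simplified version (function name and exact token placement are assumptions) only handles alternating user/assistant turns:

```python
# Illustrative sketch of a Mistral-style chat template.
# Do NOT use this in place of tokenizer.apply_chat_template; it is a
# simplified assumption that only covers alternating user/assistant turns.
def format_mistral_prompt(messages):
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            # User turns are wrapped in [INST] ... [/INST] markers.
            prompt += f"[INST] {msg['content']} [/INST]"
        elif msg["role"] == "assistant":
            # Assistant turns are closed with an end-of-sequence token.
            prompt += f"{msg['content']}</s>"
    return prompt
```

A single-turn query such as `[{"role": "user", "content": "Hi"}]` would be rendered as `<s>[INST] Hi [/INST]`, after which the model generates the assistant reply.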

Bias, Risks, and Limitations

This model is built on LLM technology, which comes with inherent limitations. It may occasionally generate inaccurate or nonsensical outputs. Furthermore, as a 7B model, it is expected to exhibit less robust performance than larger models, such as a 70B variant.

Citation

BibTeX:

@misc{colombo2024saullm7b,
      title={SaulLM-7B: A pioneering Large Language Model for Law}, 
      author={Pierre Colombo and Telmo Pessoa Pires and Malik Boudiaf and Dominic Culver and Rui Melo and Caio Corro and Andre F. T. Martins and Fabrizio Esposito and Vera Lúcia Raposo and Sofia Morgado and Michael Desa},
      year={2024},
      eprint={2403.03883},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}