This model was released on 2024-04-16 and added to Hugging Face Transformers on 2024-10-04.
Zamba
Zamba (blog post) is a large language model (LLM) trained by Zyphra, and made available under an Apache 2.0 license. Please see the Zyphra Hugging Face repository for model weights.
This model was contributed by pglo.
Model details
Zamba-7B-v1 is a hybrid of state-space models (specifically Mamba) and transformers, and was trained using next-token prediction. Zamba uses a single shared transformer layer after every 6 Mamba blocks. It uses the Mistral v0.1 tokenizer. We arrived at this architecture after a series of ablations at small scale. Zamba-7B-v1 was pre-trained on 1T tokens of text and code data.
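The hybrid layer pattern described above can be sketched schematically (illustrative only; the names below are invented for this sketch, not the transformers implementation). The key point is that one transformer block is shared, i.e. the same weights are reused after every 6 Mamba blocks:

```python
def layer_pattern(num_mamba_blocks: int, period: int = 6) -> list[str]:
    """Schematic Zamba layer stack: a single *shared* transformer block
    (same module/weights every time) interleaved after every `period`
    Mamba blocks."""
    layers = []
    for i in range(1, num_mamba_blocks + 1):
        layers.append(f"mamba_{i}")
        if i % period == 0:
            layers.append("shared_transformer")  # same module each time
    return layers

print(layer_pattern(12))
```

For 12 Mamba blocks this yields two insertions of the shared transformer block, one after block 6 and one after block 12.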
Quick start
Prerequisites
Zamba requires transformers version 4.46.0 or higher:
pip install "transformers>=4.46.0"
To run the optimized Mamba implementations, you first need to install mamba-ssm and causal-conv1d:
pip install mamba-ssm "causal-conv1d>=1.2.0"
The model must also be on a CUDA device.
You can run the model without the optimized Mamba kernels, but this is not recommended, as it results in significantly higher latency. To do so, specify use_mamba_kernels=False when loading the model.
Inference
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("Zyphra/Zamba-7B-v1")
model = AutoModelForCausalLM.from_pretrained("Zyphra/Zamba-7B-v1", device_map="auto", dtype=torch.bfloat16)
input_text = "A funny prompt would be "
input_ids = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
Model card
The model card can be found in the Zyphra Hugging Face repository.
Issues
For issues with model output, or community discussion, please use the Hugging Face community forum.
License
The model weights are released under the Apache 2.0 license.
ZambaConfig
autodoc ZambaConfig
ZambaModel
autodoc ZambaModel - forward
ZambaForCausalLM
autodoc ZambaForCausalLM - forward
ZambaForSequenceClassification
autodoc transformers.ZambaForSequenceClassification - forward