base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
inference: true
model_type: Llama
hardware_tag: Intel Xeon
license: apache-2.0
license_name: apache-2.0
license_link: https://choosealicense.com/licenses/apache-2.0/
name: RedHatAI/TinyLlama-1.1B-Chat-v1.0-pruned2.4
description: Pruned TinyLlama model.
readme: https://huggingface.co/RedHatAI/TinyLlama-1.1B-Chat-v1.0-pruned2.4/blob/main/README.md
tags: nm-vllm, sparse

TinyLlama-1.1B-Chat-v1.0-pruned2.4

This repo contains model files for TinyLlama-1.1B-Chat-v1.0 optimized for NM-vLLM, a high-throughput serving engine for compressed LLMs.

This model was pruned with SparseGPT, using SparseML.
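
The "pruned2.4" suffix refers to 2:4 semi-structured sparsity: within every group of four consecutive weights, at most two are nonzero, which is the pattern nm-vllm's semi-structured sparse kernels exploit. As a minimal, illustrative sketch (not part of the model's actual pipeline), the following checks whether a weight tensor satisfies the 2:4 pattern:

import torch

def satisfies_2_of_4(weight: torch.Tensor) -> bool:
    """Check that each group of 4 consecutive weights has at most 2 nonzeros."""
    flat = weight.reshape(-1)
    # Trim so the length is divisible by 4, then group into blocks of 4
    groups = flat[: flat.numel() // 4 * 4].reshape(-1, 4)
    nonzeros_per_group = (groups != 0).sum(dim=1)
    return bool((nonzeros_per_group <= 2).all())

# Example: every block of 4 has exactly 2 nonzero entries
w = torch.tensor([1.0, 0.0, -0.5, 0.0, 0.0, 2.0, 0.0, 3.0])
print(satisfies_2_of_4(w))  # True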

Inference

Install NM-vLLM for fast inference and low memory usage:

pip install "nm-vllm[sparse]"

Run in a Python pipeline for local inference:

from vllm import LLM, SamplingParams

# Load the 2:4 pruned checkpoint with nm-vllm's semi-structured sparse kernels
model = LLM("nm-testing/TinyLlama-1.1B-Chat-v1.0-pruned2.4", sparsity="semi_structured_sparse_w16a16")

prompt = "How to make banana bread?"
formatted_prompt = f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

# Greedy decoding with a repetition penalty, capped at 100 new tokens
sampling_params = SamplingParams(max_tokens=100, temperature=0, repetition_penalty=1.3)
outputs = model.generate(formatted_prompt, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
"""
Banana bread is a delicious dessert that is made with bananas. Here is how to make banana bread:

1. Firstly, you need to cut bananas into small pieces.
2. Then, you need to slice the bananas into small pieces
"""

Prompt template

<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
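
The markers follow the ChatML convention (<|im_start|> / <|im_end|>). As a small sketch, a hypothetical helper like the one below keeps the formatting consistent instead of rebuilding the f-string for every prompt:

def format_chatml_prompt(prompt: str) -> str:
    """Wrap a single user message in the ChatML-style template shown above."""
    return f"<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"

print(format_chatml_prompt("How to make banana bread?"))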

Sparsification

For details on how this model was sparsified, see the recipe.yaml in this repo and follow the instructions below.

Install SparseML:

git clone https://github.com/neuralmagic/sparseml
pip install -e "sparseml[transformers]"

Adjust the recipe as needed, then run this one-shot compression script to apply SparseGPT:

import sparseml.transformers

original_model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
calibration_dataset = "open_platypus"
output_directory = "output/"

# Raw string so the regex escape in `targets` is preserved as written
recipe = r"""
test_stage:
  obcq_modifiers:
    SparseGPTModifier:
      sparsity: 0.5
      sequential_update: true
      mask_structure: '2:4'
      targets: ['re:model.layers.\d*$']
"""

# Apply SparseGPT to the model in one shot and save the result
sparseml.transformers.oneshot(
    model=original_model_name,
    dataset=calibration_dataset,
    recipe=recipe,
    output_dir=output_directory,
)
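
After the one-shot run finishes, a quick sanity check is to confirm that roughly half of the weights in the targeted layers are exactly zero. A minimal sketch, assuming the checkpoint written to output/ loads with standard Hugging Face transformers:

import torch
from transformers import AutoModelForCausalLM

# Load the sparsified checkpoint written by the one-shot run above
model = AutoModelForCausalLM.from_pretrained("output/", torch_dtype=torch.float16)

total, zeros = 0, 0
for name, param in model.named_parameters():
    # Only the decoder layers were targeted by the recipe
    if "model.layers" in name and param.dim() == 2:
        total += param.numel()
        zeros += (param == 0).sum().item()

print(f"Zero fraction in targeted weights: {zeros / total:.2%}")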

Slack

For further support, and for discussion of these models and AI in general, join Neural Magic's Slack Community.