```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: Intel/neural-chat-7b-v3-3
    parameters:
      density: 0.6
      weight: 0.2
  - model: openaccess-ai-collective/DPOpenHermes-7B-v2
    parameters:
      density: 0.6
      weight: 0.1
  - model: fblgit/una-cybertron-7b-v2-bf16
    parameters:
      density: 0.6
      weight: 0.2
  - model: openchat/openchat-3.5-0106
    parameters:
      density: 0.6
      weight: 0.15
  - model: OpenPipe/mistral-ft-optimized-1227
    parameters:
      density: 0.6
      weight: 0.25
  - model: mlabonne/NeuralHermes-2.5-Mistral-7B
    parameters:
      density: 0.6
      weight: 0.1
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
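A merge like this can be reproduced with mergekit. The following is a minimal sketch, assuming the configuration above is saved as `config.yaml`; the output directory name and the `--copy-tokenizer` flag are illustrative choices, not necessarily the exact command used to build this model:

```python
# Sketch: run the DARE-TIES merge from the YAML above with mergekit.
# Assumes the config is saved as config.yaml; the output path is arbitrary.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./Darewin-7B --copy-tokenizer
```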
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Darewin-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
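If you prefer to call `generate()` directly rather than going through the pipeline helper, a roughly equivalent sketch (same model ID and sampling settings as above) looks like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "mlabonne/Darewin-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Build the chat-formatted prompt and tokenize it in one step
messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```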