---
license: apache-2.0
language:
- en
- ja
tags:
- finetuned
library_name: transformers
pipeline_tag: text-generation
---
<img src="./wabisabi-logo.jpg" width="100%" height="20%" alt="">
## Model Card for Wabisabi-v1.0
Wabisabi is a Large Language Model (LLM) based on Mistral-7B: a version of Mistral-7B-v0.1 fine-tuned on a novel dataset.
Wabisabi has the following changes compared to Mistral-7B-v0.1:
- 128k context window (8k context in v0.1)
- High-quality generation in both Japanese and English
- Can generate NSFW content
- Retains earlier context even during long-context generation
This model was created with the help of GPUs provided at the first LocalAI hackathon.
We would like to take this opportunity to thank them.
## Creation Methods
- Chat Vector applied across multiple models
- Simple linear merging of the resulting models (see the sketch after this list)
- Domain and sentence enhancement with LoRA
- Context expansion
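
As a rough illustration of the first two steps, here is a minimal PyTorch sketch that applies a Chat Vector and then linearly merges two result models. The model IDs, the Chat Vector scale of 1.0, and the 0.5/0.5 merge ratio are placeholder assumptions; the card does not disclose the exact recipe.

```python
# Hypothetical sketch of "Chat Vector + linear merge". Model names and the
# 0.5/0.5 merge ratio are illustrative only, not the actual recipe.
import torch
from transformers import AutoModelForCausalLM

def load(name):
    return AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)

base = load("mistralai/Mistral-7B-v0.1")    # common ancestor
chat = load("chat-tuned-mistral-7b")        # placeholder chat-tuned model
target = load("japanese-tuned-mistral-7b")  # placeholder target model

# Chat Vector: the weight delta (chat - base) captures instruction-following
# behaviour; adding it to another Mistral-7B derivative transfers that skill.
with torch.no_grad():
    base_sd, chat_sd = base.state_dict(), chat.state_dict()
    for name, param in target.named_parameters():
        param.add_(chat_sd[name] - base_sd[name])

# Simple linear merge: average two result models parameter by parameter.
other = load("another-result-model-7b")     # placeholder second result model
with torch.no_grad():
    other_sd = other.state_dict()
    for name, param in target.named_parameters():
        param.mul_(0.5).add_(0.5 * other_sd[name])

target.save_pretrained("./merged-model")
```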
## Instruction format
This model follows the Vicuna-v1.1 prompt format.
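
For reference, here is a minimal sketch of building a prompt in this format. The system preamble below is the standard Vicuna-v1.1 one; whether Wabisabi expects exactly this wording is an assumption.

```python
# Minimal Vicuna-v1.1 prompt construction. The system preamble is the
# standard Vicuna-v1.1 text; its exact wording for Wabisabi is an assumption.
SYSTEM = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions.")

def build_prompt(user_message: str) -> str:
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("こんにちは。自己紹介をしてください。"))
```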
## Other points to keep in mind
- The training data may be biased; review generated text with care.
- Memory usage can be high for long-context inference.
- If possible, we recommend running inference with llama.cpp rather than Transformers (a minimal example follows this list).
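
A minimal sketch of running the model through llama-cpp-python, assuming the weights have first been converted to GGUF with llama.cpp's conversion tools (the file name and context size are placeholders):

```python
# Sketch of llama.cpp inference via llama-cpp-python. The GGUF file name is a
# placeholder; convert and quantize the model with llama.cpp's tools first.
from llama_cpp import Llama

llm = Llama(
    model_path="./wabisabi-v1.Q4_K_M.gguf",  # hypothetical converted weights
    n_ctx=8192,                              # raise toward 128k as RAM allows
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: こんにちは。自己紹介をしてください。 ASSISTANT:"
)
out = llm(prompt, max_tokens=256, stop=["USER:"])
print(out["choices"][0]["text"])
```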