---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-Nemo-Base-2407
tags:
- text adventure
- roleplay
library_name: transformers
---
![image/jpeg](Wayfarer-2-12B.jpg)
# Wayfarer-2-12B
We've heard over and over from AI Dungeon players that modern AI models are too nice, never letting them fail or die. While it may be good for a chatbot to be nice and helpful, great stories and games aren't all rainbows and unicorns. They have conflict, tension, and even death. These create real stakes and consequences for characters and the journeys they go on. We created Wayfarer as a response, and after much testing, feedback, and refining, we've developed a worthy sequel.
Wayfarer 2 further refines the formula that made the original Wayfarer so popular: slowing the pacing, increasing the length and detail of responses, and making death a distinct possibility for all characters—not just the user. The stakes have never been higher!
If you want to try this model for free, you can do so at [https://aidungeon.com](https://aidungeon.com/).
We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Wayfarer was created.
[Quantized GGUF weights can be downloaded here.](https://huggingface.co/LatitudeGames/Wayfarer-2-12B-GGUF)
## Model details
Wayfarer 2 12B received SFT training with a simple three-ingredient recipe: the Wayfarer 2 dataset itself, a series of sentiment-balanced roleplay transcripts, and a small instruct core to help retain its instruction-following capabilities.
## How It Was Made
Wayfarer's text adventure data was generated by simulating playthroughs of published character creator scenarios from AI Dungeon. Five distinct user archetypes played through each scenario, each with a different character start (varying in faction, location, etc.), yielding five unique samples per scenario.
One language model played the role of narrator, with the other playing the user. They were blind to each other's underlying logic, so the user was actually capable of surprising the narrator with their choices. Each simulation was allowed to run for 8k tokens or until the main character died.
Wayfarer's general emotional sentiment is one of pessimism, where failure is frequent and plot armor does not exist for anyone. This serves to counter the positivity bias inherent in today's language models.
## Inference
The Nemo architecture is known for being sensitive to higher temperatures, so the following settings are recommended as a baseline. Nothing stops you from experimenting with these, of course.
```json
{
  "temperature": 0.8,
  "repetition_penalty": 1.05,
  "min_p": 0.025
}
```
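To illustrate what the temperature and min-p settings do, here is a minimal NumPy sketch of the two sampling steps (the function names are illustrative, not from any library; the repetition penalty is omitted for brevity):

```python
import numpy as np

def min_p_filter(probs, min_p=0.025):
    """Zero out tokens whose probability is below min_p times the
    probability of the most likely token, then renormalize."""
    threshold = min_p * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()

def sample_with_settings(logits, temperature=0.8, min_p=0.025, rng=None):
    """Sample a token id: temperature scaling -> softmax -> min-p filter."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature            # lower temp = sharper distribution
    probs = np.exp(scaled - scaled.max())    # numerically stable softmax
    probs /= probs.sum()
    probs = min_p_filter(probs, min_p)       # prune unlikely tokens
    return int(rng.choice(len(probs), p=probs))
```

Min-p keeps the candidate pool adaptive: when the model is confident, only a few tokens survive the cutoff; when the distribution is flat, more tokens remain eligible.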
## Limitations
Wayfarer was trained exclusively on second-person present tense data (using "you") in a narrative style. Other perspectives may still work but can produce suboptimal results.
## Prompt Format
ChatML was used for both finetuning stages.
```
<|im_start|>system
You're a masterful storyteller and gamemaster. Write in second person present tense (You are), crafting vivid, engaging narratives with authority and confidence.<|im_end|>
<|im_start|>user
> You peer into the darkness.<|im_end|>
<|im_start|>assistant
You have been eaten by a grue.
GAME OVER<|im_end|>
```
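As a sketch, messages can be assembled into this format with a small helper (the function below is illustrative; with the transformers library, `tokenizer.apply_chat_template` handles this automatically):

```python
def to_chatml(messages):
    """Format a list of {"role", "content"} dicts into a ChatML prompt,
    ending with an open assistant header so the model continues from there."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You're a masterful storyteller and gamemaster."},
    {"role": "user", "content": "> You peer into the darkness."},
])
```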
## Credits
Thanks to [Gryphe Padar](https://huggingface.co/Gryphe) for collaborating on this finetune with us!