Project initialized; model provided by the ModelHub XC community
Model: Undi95/Nethena-MLewd-Xwin-23B Source: Original Platform
README.md (new file, 31 lines)
---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
Undi doing chemistry again.

The Xwin-Mlewd layer was added in a different way than I've done before. The result seems good, but I'm a VRAMlet, so I can only run the Q2 at 2k context for now.

Need to see if it really works well or if I was just lucky with my prompt.

OG model: [NeverSleep/Nethena-13B](https://huggingface.co/NeverSleep/Nethena-13B)
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
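The Alpaca template above can be filled in programmatically before sending text to the model. A minimal sketch (the `build_prompt` helper is mine, not part of this repo):

```python
# Fill the Alpaca prompt template shown in this card.
# build_prompt is a hypothetical helper, not part of the model's files.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Return the full Alpaca-formatted prompt for a given instruction."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Write a haiku about autumn."))
```

The model's generation should then be sampled as a continuation of the text after `### Response:`.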
LimaRP is always kicking in, so it can be used for more control over the size of the output.
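In LimaRP's usual convention, output length is steered by appending a length modifier to the response header. Assuming this merge follows that convention (the modifier syntax comes from LimaRP's own documentation, not this card), a sketch:

```python
# Sketch of LimaRP-style length control: append "(length = ...)" to the
# "### Response:" header. That this model honors these modifiers is an
# assumption based on LimaRP conventions, not something stated in this card.
def build_prompt_with_length(instruction: str, length: str = "medium") -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response: (length = {length})\n"
    )

print(build_prompt_with_length("Describe the tavern.", length="short"))
```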

Thanks Ikari.