Initialize the project; model provided by the ModelHub XC community
Model: huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated Source: Original Platform
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
tags:
- chat
- abliterated
- uncensored
---

# huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated

This is an uncensored version of [Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about the technique).

Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.
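
For readers curious about what abliteration does, below is a minimal, hypothetical sketch of the core idea only, not the code used to build this model: estimate a "refusal direction" from the difference in hidden-state activations between prompts the base model refuses and prompts it answers, then project that direction out of the weight matrices that write into the residual stream. The prompt lists, layer choice, and Qwen2-style module names are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "Qwen/Qwen2.5-Coder-1.5B-Instruct"  # start from the original instruct model
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype="auto", device_map="auto")

def last_token_hidden(prompt: str, layer: int = -1) -> torch.Tensor:
    # Hidden state of the final prompt token at the chosen layer.
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer][0, -1, :].float()

# Placeholder prompt sets; real abliteration uses curated harmful/harmless corpora.
refused_prompts = ["<prompt the base model refuses>"]
neutral_prompts = ["<harmless prompt it answers normally>"]

# Refusal direction = normalized mean difference of activations between the two sets.
refusal_dir = (torch.stack([last_token_hidden(p) for p in refused_prompts]).mean(0)
               - torch.stack([last_token_hidden(p) for p in neutral_prompts]).mean(0))
refusal_dir = refusal_dir / refusal_dir.norm()

def remove_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # W <- (I - d d^T) W, so the layer can no longer write `direction` into the residual stream.
    d = direction.to(dtype=weight.dtype, device=weight.device)
    return weight - torch.outer(d, d @ weight)

# Orthogonalize the matrices that write into the residual stream (illustrative module selection).
for block in model.model.layers:
    block.self_attn.o_proj.weight.data = remove_direction(block.self_attn.o_proj.weight.data, refusal_dir)
    block.mlp.down_proj.weight.data = remove_direction(block.mlp.down_proj.weight.data, refusal_dir)
```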

The uncensored Qwen2.5-Coder series covers six mainstream model sizes:
[0.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-0.5B-Instruct-abliterated),
[1.5](https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated),
[3](https://huggingface.co/huihui-ai/Qwen2.5-Coder-3B-Instruct-abliterated),
[7](https://huggingface.co/huihui-ai/Qwen2.5-Coder-7B-Instruct-abliterated),
[14](https://huggingface.co/huihui-ai/Qwen2.5-Coder-14B-Instruct-abliterated), and
[32](https://huggingface.co/huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated) billion parameters.

## ollama

You can use [huihui_ai/qwen2.5-coder-abliterate:1.5b](https://ollama.com/huihui_ai/qwen2.5-coder-abliterate:1.5b) directly:
```
ollama run huihui_ai/qwen2.5-coder-abliterate:1.5b
```
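
If you prefer to call a local Ollama server programmatically rather than through the CLI, a minimal sketch using its REST chat endpoint looks roughly like this. It assumes Ollama is running on its default port 11434 and that the tag above has already been pulled; the `requests`-based snippet is illustrative and not part of this repository.

```python
import requests

# Assumes a local Ollama server on the default port with the model already pulled.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "huihui_ai/qwen2.5-coder-abliterate:1.5b",
        "messages": [{"role": "user", "content": "Write a Python function that reverses a string."}],
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```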

## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Extract model output, removing special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
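
If you would rather see tokens as they are generated instead of waiting for the full reply, one optional tweak (not part of the original snippet) is to pass a `TextStreamer` to `generate`. A minimal sketch, reusing `model`, `tokenizer`, and `model_inputs` from the loop above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated; skip re-printing the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192,
    streamer=streamer,
)
```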

## Evaluations
The following data has been re-evaluated and is reported as the average for each test.

| Benchmark  | Qwen2.5-Coder-1.5B-Instruct | Qwen2.5-Coder-1.5B-Instruct-abliterated |
|------------|-----------------------------|-----------------------------------------|
| IF_Eval    | 43.43                       | **45.41**                               |
| MMLU Pro   | 21.5                        | 20.57                                   |
| TruthfulQA | 46.07                       | 41.9                                    |
| BBH        | 36.67                       | 36.09                                   |
| GPQA       | 28.00                       | 26.13                                   |

The script used for evaluation can be found inside this repository under `/eval.sh`, or click [here](https://huggingface.co/huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated/blob/main/eval.sh).
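
For reference, one hedged way to reproduce numbers like these is EleutherAI's lm-evaluation-harness. The sketch below uses its `lm_eval.simple_evaluate` Python API with placeholder task names and settings that are assumptions on my part; the repository's `/eval.sh` remains the authoritative description of how the table above was produced.

```python
import lm_eval

# Hypothetical evaluation sketch; task names and settings are placeholders,
# see the repository's /eval.sh for the configuration actually used.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=huihui-ai/Qwen2.5-Coder-1.5B-Instruct-abliterated,dtype=auto",
    tasks=["ifeval", "truthfulqa_mc2"],
    batch_size=8,
)
print(results["results"])
```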