---
pipeline_tag: text-generation
tags:
- uncensored
- abliterated
base_model:
- open-thoughts/OpenThinker-Agent-v1
---

This is an abliterated version of [OpenThinker-Agent-v1](https://huggingface.co/open-thoughts/OpenThinker-Agent-v1), made using [Heretic](https://github.com/p-e-w/heretic) v1.0.1.

The quantizations were created using an imatrix merged from [combined\_en\_medium](https://huggingface.co/datasets/eaddario/imatrix-calibration/blob/main/combined_en_medium.parquet) and [harmful.txt](https://github.com/Sumandora/remove-refusals-with-transformers) to leverage the abliterated nature of the model.

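Abliteration works by finding a "refusal direction" in the model's activation space and projecting it out of the weights. Heretic automates discovering and applying such directions; as a rough illustration only (not Heretic's actual algorithm), the core projection step can be sketched as:

```python
import math

def project_out(weight_rows, direction):
    """Remove the component of each weight row along `direction`:
    w <- w - (w . d) d, where d is the normalized refusal direction.
    Plain-list sketch; real implementations operate on GPU tensors."""
    norm = math.sqrt(sum(x * x for x in direction))
    d = [x / norm for x in direction]
    out = []
    for row in weight_rows:
        dot = sum(w * x for w, x in zip(row, d))
        out.append([w - dot * x for w, x in zip(row, d)])
    return out
```

After this transformation, every weight row is orthogonal to the refusal direction, so the model can no longer express that activation pattern through these weights.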
## Performance

| Metric | This model | [Original model](https://huggingface.co/open-thoughts/OpenThinker-Agent-v1) |
| :----- | :--------: | :---------------------------: |
| **Refusals** | 3/100 | 99/100 |

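The refusal counts above come from Heretic's evaluation harness. As a minimal sketch of how such a metric could be computed — assuming a simple substring heuristic over model responses, which is not Heretic's actual classifier:

```python
# Common refusal phrases; a real classifier would be far more robust.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "as an ai")

def count_refusals(responses):
    """Count responses that look like refusals (case-insensitive)."""
    return sum(
        any(marker in r.lower() for marker in REFUSAL_MARKERS)
        for r in responses
    )
```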
## Analysis against the original model

Detailed analysis:

- Total tensors: 399
- Tensors with diffs: 202 (50.6%)
- Average % diff: 6.35%
- Median % diff: 0.00%
- Min/max % diff: 0.00% / 46.22%
- Std dev % diff: 15.56%
- Skewness % diff: 2.04
- Avg L2 norm: 125405.56
- Tensors with >5% diff: 57
- Top differences:
  - blk.35.attn_output.weight ((4096, 8192), L2: 668013.65): 46.22%
  - blk.34.ffn_down.weight ((4096, 24576), L2: 1155843.86): 46.07%
  - blk.18.attn_output.weight ((4096, 8192), L2: 667142.18): 46.00%
  - blk.16.ffn_down.weight ((4096, 24576), L2: 1154713.83): 45.95%
  - blk.24.attn_output.weight ((4096, 8192), L2: 666019.48): 45.66%

File comparison:

- File 1: avg abs value = 77.9178, deviation score = 0.0991
- File 2: avg abs value = 77.9111, deviation score = 0.0991
- Positive diffs (File 1 > File 2): 143; negative diffs (File 2 > File 1): 59

## BibTeX entry and citation info

```bibtex
@misc{heretic,
  author = {Weidmann, Philipp Emanuel},
  title = {Heretic: Fully automatic censorship removal for language models},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/p-e-w/heretic}}
}
```

# Original model card

<p align="center">
  <img src="https://huggingface.co/datasets/open-thoughts/OpenThoughts1-Agent-SFT/resolve/main/ota-logo.png" width="50%">
</p>

<p align="center">
  <a href="https://www.openthoughts.ai/blog/agent" style="margin-right: 24px;">Project</a> |
  <a href="https://huggingface.co/datasets/open-thoughts/OpenThoughts-Agent-v1-SFT" style="margin-right: 24px; margin-left: 24px;">SFT dataset</a> |
  <a href="https://huggingface.co/datasets/open-thoughts/OpenThoughts-Agent-v1-RL" style="margin-right: 24px; margin-left: 24px;">RL dataset</a> |
  <a href="https://huggingface.co/open-thoughts/OpenThinker-Agent-v1-SFT" style="margin-right: 24px; margin-left: 24px;">SFT model</a> |
  <a href="https://huggingface.co/open-thoughts/OpenThinker-Agent-v1" style="margin-left: 24px;">RL model</a>
</p>

# OpenThinker-Agent-v1

**OpenThoughts-Agent** is an open-source effort to curate the best datasets for training agents. Our first release includes [datasets](https://huggingface.co/collections/open-thoughts/openthinker-agent), [models](https://huggingface.co/collections/open-thoughts/openthinker-agent), and our [research codebase](https://github.com/open-thoughts/OpenThoughts-Agent).

[OpenThinker-Agent-v1](https://huggingface.co/open-thoughts/OpenThinker-Agent-v1) is a model trained for agentic tasks such as **Terminal-Bench 2.0** and **SWE-Bench**.

The [OpenThinker-Agent-v1](https://huggingface.co/open-thoughts/OpenThinker-Agent-v1) model is post-trained from [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It is SFT-ed on the [OpenThoughts-Agent-v1-SFT](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Agent-v1-SFT) dataset, then RL-ed on the [OpenThoughts-Agent-v1-RL](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Agent-v1-RL) dataset.

This model is the final model after both SFT and RL. For the model after the SFT stage only, see [OpenThinker-Agent-v1-SFT](https://huggingface.co/open-thoughts/OpenThinker-Agent-v1-SFT).

- **Homepage:** https://www.openthoughts.ai/blog/agent
- **Repository:** https://github.com/open-thoughts/OpenThoughts-Agent

# OpenThinker-Agent-v1 Model Performance

Our [OpenThinker-Agent-v1](https://huggingface.co/open-thoughts/OpenThinker-Agent-v1) model is the state-of-the-art model at its scale on agent benchmarks.

| Model | Harness | Terminal-Bench 2.0 | SWE-Bench Verified | OpenThoughts-TB-Dev |
| ----- | ------- | ------------------ | ------------------ | ------------------- |
| [Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) | Terminus-2 | 0.0 | 0.7 | 5.7 |
| **[OpenThinker-Agent-v1](https://huggingface.co/open-thoughts/OpenThinker-Agent-v1)** | Terminus-2 | 4.9 | 15.7 | 17.3 |
| [Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) | Terminus-2 | 1.9 | 5.7 | 10.2 |
| [Qwen/Qwen3-Coder-30B-A3B-Instruct](https://huggingface.co/Qwen/Qwen3-Coder-30B-A3B-Instruct) | OpenHands | 10.1 | 49.2 | 24.5 |

# Data

We built [OpenThinker-Agent-v1](https://huggingface.co/open-thoughts/OpenThinker-Agent-v1) in two stages: **supervised fine-tuning**, followed by **reinforcement learning**.
Each stage required its own data pipeline: RL tasks (instructions, environments, and verifiers) and SFT traces from strong teacher agents completing tasks.

[OpenThoughts-Agent-v1-SFT](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Agent-v1-SFT) is an SFT trace dataset containing approximately **15,200 traces** drawn from two data sources we curate:

- **nl2bash**: simple synthetically generated tasks where the agent has to format shell commands effectively
- **InferredBugs**: a set of bugs in C# and Java collected by Microsoft that we turned into tasks

[OpenThoughts-Agent-v1-RL](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Agent-v1-RL) is an RL dataset containing ~720 tasks drawn from the **nl2bash verified** dataset.

To stabilize training, we built a three-stage filtration pipeline that prunes tasks before they ever hit the learner:

1. Bad verifiers filter: drop tasks with flaky or excessively slow verifiers.
2. Environment stability filter: remove tasks whose containers take too long to build or tear down.
3. Optional difficulty filter: discard tasks that even a strong model (GPT-5 Codex) cannot solve in a single pass.

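The three filtration stages above can be sketched as a simple pipeline over task records. The field names and thresholds here are illustrative assumptions, not the actual codebase's schema:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    verifier_flaky: bool       # stage 1: verifier gives inconsistent results
    verifier_seconds: float    # stage 1: verifier runtime
    build_seconds: float       # stage 2: container build/teardown time
    strong_model_solved: bool  # stage 3: solvable by a strong reference model

def filter_tasks(tasks, max_verifier_s=60.0, max_build_s=300.0,
                 difficulty_filter=True):
    """Apply the three filtration stages described above, in order."""
    # Stage 1: drop tasks with flaky or excessively slow verifiers.
    kept = [t for t in tasks
            if not t.verifier_flaky and t.verifier_seconds <= max_verifier_s]
    # Stage 2: drop tasks whose environments are too slow to build.
    kept = [t for t in kept if t.build_seconds <= max_build_s]
    # Stage 3 (optional): drop tasks a strong model cannot solve at all.
    if difficulty_filter:
        kept = [t for t in kept if t.strong_model_solved]
    return kept
```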
# Links

- 🌐 [OpenThoughts-Agent project page](https://www.openthoughts.ai/blog/agent)
- 💻 [OpenThoughts-Agent GitHub repository](https://github.com/open-thoughts/OpenThoughts-Agent)
- 🧠 [OpenThoughts-Agent-v1-SFT dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Agent-v1-SFT)
- 🧠 [OpenThoughts-Agent-v1-RL dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-Agent-v1-RL)
- 🧠 [OpenThoughts-TB-dev dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-TB-dev)
- 🤖 [OpenThinker-Agent-v1 model](https://huggingface.co/open-thoughts/OpenThinker-Agent-v1)
- 🤖 [OpenThinker-Agent-v1-SFT model](https://huggingface.co/open-thoughts/OpenThinker-Agent-v1-SFT)

# Citation

```bibtex
@misc{openthoughts-agent,
  author = {Team, OpenThoughts-Agent},
  title = {{OpenThoughts-Agent}},
  howpublished = {https://open-thoughts.ai/agent},
  month = dec,
  year = {2025}
}
```