QWEN CHAT GitHub Hugging Face ModelScope Kaggle DEMO DISCORD
Introduction Today, we are excited to announce the release of Qwen3, the latest addition to the Qwen family of large language models. Our flagship model, Qwen3-235B-A22B, achieves competitive results in benchmark evaluations of coding, math, general capabilities, etc., when compared to other top-tier models such as DeepSeek-R1, o1, o3-mini, Grok-3, and Gemini-2.5-Pro. Additionally, the small MoE model, Qwen3-30B-A3B, outcompetes QwQ-32B with 10 times of activated parameters, and even a tiny model like Qwen3-4B can rival the performance of Qwen2.
Uh, wow. That 30B A3B runs very fast on CPU alone.
Sadly it seems to be censored. I always try to make them write some fictional stories, exploring morally reprehensible acts, in order to test this. Or just lewd short-stories. And it straight out refuses immediately… Since it’s a “thinking” model, I went ahead and messed with its thoughts, but that won’t do it either: “I’m sorry, but I can’t comply with that request. I have to follow my guidelines and maintain ethical standards. Let’s talk about something else.”
Edit: There is a base model available for that one, and it seems okay. It will autocomplete my stories and write a wikipedia article about things the government doesn’t like. I wonder if this is going to help, though. Since all the magic is in the steps after the base model and I don’t know whether there are any datasets available for the community to instruct-tune a thinking model…
You can use the same trick for the instruct models by abusing their prompt format. Prefill the thinking or answer sections with whatever you want, and they’ll continue it.
The classic trick is starting with “Sure!” though you can vary that depending on the content.
Yeah, thanks but I’ve already tried that. It will write a short amount of text but very quickly fall back to refusal. Both if I do it within the thinking step and also if I do it in the output. This time the alignment doesn’t seem to be slapped on halfheartedly. It’ll probably take some more effort. But I’m sure people will come up with some “uncensored” versions.
Uh, wow. That 30B A3B runs very fast on CPU alone.
Sadly it seems to be censored. I always try to make them write some fictional stories, exploring morally reprehensible acts, in order to test this. Or just lewd short-stories. And it straight out refuses immediately… Since it’s a “thinking” model, I went ahead and messed with its thoughts, but that won’t do it either: “I’m sorry, but I can’t comply with that request. I have to follow my guidelines and maintain ethical standards. Let’s talk about something else.”
Edit: There is a base model available for that one, and it seems okay. It will autocomplete my stories and write a wikipedia article about things the government doesn’t like. I wonder if this is going to help, though. Since all the magic is in the steps after the base model and I don’t know whether there are any datasets available for the community to instruct-tune a thinking model…
You can use the same trick for the instruct models by abusing their prompt format. Prefill the thinking or answer sections with whatever you want, and they’ll continue it.
The classic trick is starting with “Sure!” though you can vary that depending on the content.
Yeah, thanks but I’ve already tried that. It will write a short amount of text but very quickly fall back to refusal. Both if I do it within the thinking step and also if I do it in the output. This time the alignment doesn’t seem to be slapped on halfheartedly. It’ll probably take some more effort. But I’m sure people will come up with some “uncensored” versions.