---
tags:
- unsloth
- qwen3
- qwen
base_model:
- Qwen/Qwen3-Coder-480B-A35B-Instruct
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct/blob/main/LICENSE
---
See our collection for all versions of Qwen3 including GGUF, 4-bit & 16-bit formats.
Learn to run Qwen3-Coder correctly - Read our Guide.
See Unsloth Dynamic 2.0 GGUFs for our quantization benchmarks.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Qwen3 (14B) | ▶️ Start on Colab | 3x faster | 70% less |
| GRPO with Qwen3 (8B) | ▶️ Start on Colab | 3x faster | 80% less |
| Llama-3.2 (3B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.2 (11B vision) | ▶️ Start on Colab | 2x faster | 60% less |
| Qwen2.5 (7B) | ▶️ Start on Colab | 2x faster | 60% less |
Today, we're announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we're excited to introduce its most powerful variant first: Qwen3-Coder-480B-A35B-Instruct.

Qwen3-Coder-480B-A35B-Instruct has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 480B in total and 35B activated
- Number of Layers: 62
- Number of Attention Heads (GQA): 96 for Q and 8 for KV
- Number of Experts: 160
- Number of Activated Experts: 8
- Context Length: 262,144 natively
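As a quick sanity check of these numbers, you can read them from the model configuration without downloading the weights. A minimal sketch, assuming the Qwen3-MoE config field names used by recent `transformers` releases (these names are an assumption and may differ across versions):

```python
from transformers import AutoConfig

# Load only the config (a small JSON file), not the 480B weights.
config = AutoConfig.from_pretrained("Qwen/Qwen3-Coder-480B-A35B-Instruct")

print("layers:", config.num_hidden_layers)                   # expected: 62
print("attention heads (Q):", config.num_attention_heads)    # expected: 96
print("KV heads:", config.num_key_value_heads)               # expected: 8
print("experts:", config.num_experts)                        # expected: 160
print("active experts/token:", config.num_experts_per_tok)   # expected: 8
print("context length:", config.max_position_embeddings)     # expected: 262144
```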
NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Meanwhile, specifying `enable_thinking=False` is no longer required.
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.
We advise you to use the latest version of `transformers`.

With `transformers<4.51.0`, you will encounter the following error:

```
KeyError: 'qwen3_moe'
```
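If you want to fail fast before loading anything, a minimal version-check sketch (assuming only that `packaging` is installed, which ships with `pip`):

```python
import transformers
from packaging.version import Version

# Guard against the KeyError: 'qwen3_moe' raised by older releases.
if Version(transformers.__version__) < Version("4.51.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen3-MoE; "
        "please upgrade: pip install -U transformers"
    )
```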
The following code snippet illustrates how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-Coder-480B-A35B-Instruct"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Prepare the model input
prompt = "Write a quick sort algorithm."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=65536
)
# Strip the prompt tokens, keeping only the newly generated ones
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```
Note: If you encounter out-of-memory (OOM) issues, consider reducing the context length to a shorter value, such as `32,768`.
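One way to enforce such a cap on the prompt side is to truncate at tokenization time. A minimal sketch, reusing the `tokenizer`, `text`, and `model` from the snippet above (this trims the prompt only; serving frameworks usually have their own max-context setting):

```python
# Truncate the prompt to at most 32,768 tokens so that prompt plus
# generated tokens stay inside the available memory budget.
model_inputs = tokenizer(
    [text],
    return_tensors="pt",
    truncation=True,
    max_length=32768,
).to(model.device)
```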
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3-Coder.
Qwen3-Coder excels at tool calling. You can simply define or use any tools, as in the following example.
```python
from openai import OpenAI

# Your tool implementation
def square_the_number(num: float) -> float:
    return num ** 2

# Define tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "square_the_number",
            "description": "Output the square of the number.",
            "parameters": {
                "type": "object",
                "required": ["input_num"],
                "properties": {
                    "input_num": {
                        "type": "number",
                        "description": "input_num is a number that will be squared"
                    }
                },
            }
        }
    }
]

# Define LLM client
client = OpenAI(
    # Use a custom endpoint compatible with the OpenAI API
    base_url="http://localhost:8000/v1",  # api_base
    api_key="EMPTY",
)

messages = [{"role": "user", "content": "square the number 1024"}]

completion = client.chat.completions.create(
    messages=messages,
    model="Qwen3-Coder-480B-A35B-Instruct",
    max_tokens=65536,
    tools=tools,
)

print(completion.choices[0])
```
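The completion above only returns the model's tool-call request; actually running the tool and feeding the result back is up to you. A minimal sketch of that round trip, assuming the server returns standard OpenAI-style `tool_calls` (a hypothetical continuation, not part of the original example):

```python
import json

message = completion.choices[0].message

if message.tool_calls:
    tool_call = message.tool_calls[0]
    # Dispatch to our local implementation based on the requested name.
    args = json.loads(tool_call.function.arguments)
    result = square_the_number(args["input_num"])

    # Append the assistant's tool call and the tool result, then ask again.
    messages.append(message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": str(result),
    })
    final = client.chat.completions.create(
        messages=messages,
        model="Qwen3-Coder-480B-A35B-Instruct",
        tools=tools,
    )
    print(final.choices[0].message.content)
```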
To achieve optimal performance, we recommend the following settings:

- Sampling Parameters: `temperature=0.7`, `top_p=0.8`, `top_k=20`, `repetition_penalty=1.05`.
- Adequate Output Length: We recommend using an output length of 65,536 tokens for most queries, which is adequate for instruct models.
If you find our work helpful, feel free to cite us.
```bibtex
@misc{qwen3technicalreport,
      title={Qwen3 Technical Report},
      author={Qwen Team},
      year={2025},
      eprint={2505.09388},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.09388},
}
```