Compared with GLM-4.5, this generation brings several key improvements:
- Longer context window: the context window has been expanded from 128K to 200K tokens, enabling the model to handle more complex agentic tasks.
- Superior coding performance: the model achieves higher scores on code benchmarks and demonstrates better real-world performance in applications such as Claude Code, Cline, Roo Code, and Kilo Code, including improvements in generating visually polished front-end pages.
- Advanced reasoning: GLM-4.6 shows a clear improvement in reasoning performance and supports tool use during inference, leading to stronger overall capability.
- More capable agents: GLM-4.6 exhibits stronger performance in tool use and in search-based agents, and integrates more effectively within agent frameworks.
- Refined writing: better aligns with human preferences in style and readability, and performs more naturally in role-playing scenarios.
Quality Index: 30.2 (102nd of 444, Top 23%)
Coding Index: 30.2 (80th of 354, Top 23%)
Math Index: 44.3 (148th of 268, Top 55%)
Price: $1.00 per 1M tokens, blended (487th cheapest, 233% above median, Top 72%)
Speed: 80 tok/s (Top 33%)
TTFT: 2.09s
Context Window: 205K tokens (103rd largest, Top 29%)
Input: $0.60 per 1M tokens
Output: $2.20 per 1M tokens
Blended: $1.00 per 1M tokens
Cheaper than 28% of models; the median price is $0.30/1M tokens.
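The blended figure is consistent with a 3:1 input:output token weighting, a common convention for blended prices; the page does not state its ratio, so treat the weighting here as an assumption:

```python
# Sketch of how the $1.00 blended price can be derived from the
# per-direction prices, assuming a 3:1 input:output token ratio.
input_price = 0.60   # $ per 1M input tokens (from the listing)
output_price = 2.20  # $ per 1M output tokens (from the listing)

# Weighted average: 3 parts input, 1 part output (assumed ratio).
blended = (3 * input_price + output_price) / 4
print(f"${blended:.2f}")  # → $1.00, matching the listed blended price
```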
Daily: $1.00
Monthly: $30.00
Throughput: 80 tokens/sec (faster than 67% of models)
TTFT: 2.09 seconds (faster than 14% of models)
Market Median Speed: 45 tok/s (GLM-4.6 is 76% faster)
Median TTFT: 0.42s (GLM-4.6 is ~400% slower)
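The relative-speed figures follow from the medians; recomputing them from the rounded numbers shown above lands close to the listed values (the small gap on the speed figure presumably comes from unrounded measurements):

```python
# Recompute the relative-performance percentages from the listed medians.
speed, median_speed = 80.0, 45.0   # tokens/sec
ttft, median_ttft = 2.09, 0.42     # seconds

faster_pct = (speed - median_speed) / median_speed * 100
slower_pct = (ttft - median_ttft) / median_ttft * 100

print(round(faster_pct))  # → 78 (listing shows 76%, likely from unrounded inputs)
print(round(slower_pct))  # → 398 (listing shows ~400%)
```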
Throughput/Dollar: 80 tok/s per $/1M
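Throughput-per-dollar here is simply the output speed divided by the blended price (variable names are illustrative):

```python
# Value metric: how many tokens/sec you get per blended dollar per 1M tokens.
speed_tok_s = 80.0    # measured output speed, tokens/sec
blended_price = 1.00  # blended $ per 1M tokens

value = speed_tok_s / blended_price
print(value)  # → 80.0, matching the listed 80 tok/s per $/1M
```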
Context Window: 205K tokens (larger than 71% of models)
Max Output: 205K tokens (100% of context)