Best LLMs for Cantonese 2026 | Cantonese Language AI Leaderboard

The top LLMs for the Cantonese language, ranked on both spoken and written performance.
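
The quickest way to sanity-check any entry below is to send the same Cantonese prompt to each model and compare replies. Here is a minimal sketch, assuming an OpenAI-compatible chat completions endpoint; the base URL, API key, and model slug are placeholders, not values taken from this leaderboard.

```python
# Minimal sketch: probe a model's Cantonese ability through an
# OpenAI-compatible chat completions endpoint. API_BASE, API_KEY, and
# the model slug are hypothetical placeholders, not values from this page.
import requests

API_BASE = "https://api.example.com/v1"  # hypothetical gateway URL
API_KEY = "sk-..."                       # your own key

def ask_cantonese(model: str, prompt: str) -> str:
    """Send one Cantonese prompt and return the model's reply text."""
    resp = requests.post(
        f"{API_BASE}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": model,
            "messages": [
                # "Please answer in spoken (colloquial) Cantonese."
                {"role": "system", "content": "請用廣東話（口語）回答。"},
                {"role": "user", "content": prompt},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # "Explain in Cantonese what a large language model is."
    # The slug "minimax/minimax-m2.7" is an assumed example, not confirmed.
    print(ask_cantonese("minimax/minimax-m2.7", "用廣東話解釋咩係大型語言模型。"))
```

Swapping the `model` string is all it takes to re-run the same probe against every entry in the ranking.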

1

MiniMax: MiniMax M2.7

by MiniMax

196.61K tokens

MiniMax-M2.7 is a next-generation large language model designed for autonomous, real-world productivity and continuous improvement. Built to actively participate in its own evolution, M2.7 integrates agentic capabilities through multi-agent collaboration, enabling it to plan, execute, and refine complex tasks across dynamic environments. Trained for production-grade performance, it handles workflows such as live debugging, root cause analysis, financial modeling, and full document generation across Word, Excel, and PowerPoint. It posts strong benchmark results: 56.2% on SWE-Pro, 57.0% on Terminal Bench 2, and a 1495 Elo on GDPval-AA, setting a new standard for multi-agent systems in real-world digital workflows.
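
The plan-execute-refine loop described above is only sketched at a high level. In generic terms, it looks like the snippet below; the function and prompts are hypothetical and are not MiniMax's actual agent API.

```python
# Generic plan/execute/refine loop of the kind the description above
# alludes to. `llm` is any text-in/text-out completion callable; none of
# this is MiniMax's actual agent API.
from typing import Callable

def agent_loop(llm: Callable[[str], str], task: str, max_rounds: int = 3) -> str:
    # Plan: decompose the task before acting on it.
    plan = llm(f"Break this task into numbered steps:\n{task}")
    # Execute: carry out the plan in one pass.
    result = llm(f"Execute this plan and report the outcome:\n{plan}")
    # Refine: self-critique and revise until the model signs off.
    for _ in range(max_rounds):
        critique = llm(
            f"Task: {task}\nResult: {result}\n"
            "List concrete problems with the result, or reply exactly OK."
        )
        if critique.strip() == "OK":
            break
        result = llm(f"Fix these problems:\n{critique}\n\nOriginal result:\n{result}")
    return result
```

The key design point is the critique step: the model re-reads its own output against the task before the loop accepts it.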

2

Qwen: Qwen3.6 Plus

by Qwen

1M tokens

Qwen 3.6 Plus builds on a hybrid architecture that combines efficient linear attention with sparse mixture-of-experts routing, enabling strong scalability and high-performance inference. Compared to the 3.5 series, it delivers major gains in agentic coding, front-end development, and overall reasoning, with a significantly improved “vibe coding” experience. The model excels at complex tasks such as 3D scenes, games, and repository-level problem solving, achieving a 78.8 score on SWE-bench Verified. It represents a substantial leap in both pure-text and multimodal capabilities, performing at the level of leading state-of-the-art models.
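
The "sparse mixture-of-experts routing" in that hybrid architecture is easiest to see in a toy example. The numpy sketch below uses arbitrary sizes and is not Qwen's implementation; it only shows why routing keeps inference cheap: each token activates just `top_k` of the experts.

```python
# Toy illustration of sparse mixture-of-experts routing: a router scores
# all experts per token, but only the top-k experts actually run, so
# compute stays roughly constant as the expert count grows.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

W_router = rng.standard_normal((d_model, n_experts)) / np.sqrt(d_model)
experts = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
           for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """x: (tokens, d_model) -> (tokens, d_model), each token routed to top_k experts."""
    logits = x @ W_router                             # (tokens, n_experts)
    chosen = np.argsort(logits, axis=-1)[:, -top_k:]  # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # Softmax over only the selected experts' logits gives mixing weights.
        sel = logits[t, chosen[t]]
        w = np.exp(sel - sel.max()); w /= w.sum()
        for weight, e in zip(w, chosen[t]):
            out[t] += weight * (x[t] @ experts[e])
    return out

print(moe_layer(rng.standard_normal((4, d_model))).shape)  # (4, 64)
```

Linear attention plays the complementary role on the sequence axis, keeping attention cost from growing quadratically with context length; together the two techniques are what make a 1M-token context practical to serve.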

3

Z.ai: GLM 4.7 Flash

by Z.ai

202.75K tokens

As a 30B-class SOTA model, GLM-4.7-Flash offers a new option that balances performance and efficiency. It is further optimized for agentic coding, strengthening coding ability, long-horizon task planning, and tool collaboration. Among open-source models of its size, it achieves leading results on several current public benchmark leaderboards.

4

Google: Gemini 2.5 Flash

by Google

1.05M tokens

5

Google: Gemini 3.1 Pro Preview

by Google

1.05M tokens

6

MiniMax: MiniMax M2.5 (free)

by MiniMax

196.61K tokens

7

OpenAI: gpt-oss-20b (free)

by OpenAI

131.07K tokens

8

DeepSeek: DeepSeek V3 0324

by DeepSeek

163.84K tokens

9

Qwen2.5 72B Instruct

by Qwen

32.77K tokens