Best LLMs for Persian 2026 | Persian (Farsi) Language AI Leaderboard
Real-time rankings of the best LLMs for the Persian (Farsi) language.

1
Qwen: Qwen3.5-Flash
by Qwen
1M tokens
The Qwen3.5 native vision-language Flash models are built on a hybrid architecture that combines a linear attention mechanism with a sparse mixture-of-experts design for higher inference efficiency. Compared to the Qwen3 series, they deliver a leap in performance on both pure-text and multimodal tasks, offering fast response times while balancing inference speed against overall quality.

2
Google: Gemini 2.5 Flash Lite
by Google
1.05M tokens
Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.
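As a minimal sketch of the opt-in described above: the snippet below builds a request body for OpenRouter's OpenAI-compatible chat completions endpoint with the `reasoning` parameter turned on. The model slug, the Persian prompt, and the token budget are illustrative assumptions; only the `reasoning` parameter itself comes from the linked docs.

```python
import json

# Request body for OpenRouter's chat completions endpoint
# (POST https://openrouter.ai/api/v1/chat/completions, with an
# Authorization: Bearer <key> header). Model slug and values below
# are illustrative assumptions, not tested defaults.
payload = {
    "model": "google/gemini-2.5-flash-lite",
    "messages": [
        # Persian prompt: "Translate this sentence into English."
        {"role": "user", "content": "این جمله را به انگلیسی ترجمه کن."}
    ],
    # Enable multi-pass "thinking" (off by default on Flash-Lite)
    # and cap the tokens spent on it.
    "reasoning": {
        "enabled": True,
        "max_tokens": 1024,
    },
}

print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Leaving `reasoning` out keeps the default low-latency behavior, so the trade-off stays per-request rather than per-deployment.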

3
DeepSeek: DeepSeek V4 Flash
by DeepSeek
1.05M tokens
DeepSeek V4 Flash is an efficiency-optimized Mixture-of-Experts model from DeepSeek with 284B total parameters and 13B activated parameters, supporting a 1M-token context window. It is designed for fast inference and high-throughput workloads, while maintaining strong reasoning and coding performance. The model includes hybrid attention for efficient long-context processing. Reasoning efforts `high` and `xhigh` are supported; `xhigh` maps to max reasoning. It is well suited for applications such as coding assistants, chat systems, and agent workflows where responsiveness and cost efficiency are important.
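The `high`/`xhigh` efforts mentioned above can be selected per request in an OpenAI-style body; a hedged sketch follows. The model slug and the `build_request` helper are assumptions for illustration, and the restriction to `high`/`xhigh` simply mirrors the description above.

```python
import json

def build_request(prompt: str, effort: str = "high") -> dict:
    """Build an OpenAI-style request body selecting a reasoning effort.

    Per the model description, only "high" and "xhigh" are supported,
    with "xhigh" mapping to the model's maximum reasoning budget.
    The model slug is an illustrative assumption.
    """
    if effort not in ("high", "xhigh"):
        raise ValueError("DeepSeek V4 Flash supports only 'high' and 'xhigh'")
    return {
        "model": "deepseek/deepseek-v4-flash",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"effort": effort},
    }

# Persian prompt: "Summarize this text." -- sent with maximum reasoning.
print(json.dumps(build_request("این متن را خلاصه کن.", effort="xhigh"),
                 ensure_ascii=False, indent=2))
```

For latency-sensitive chat or agent loops, keeping the default `high` and reserving `xhigh` for hard coding or reasoning turns matches the cost-efficiency framing above.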

4
Google: Gemma 4 26B A4B (free)
by Google
262.14K tokens
5
xAI: Grok 4 Fast
by xAI
2M tokens
6
Google: Gemini 3 Flash Preview
by Google
1.05M tokens
7
Google: Gemini 2.5 Flash
by Google
1.05M tokens
8
OpenAI: GPT-5.4
by OpenAI
1.05M tokens