Best LLMs for Burmese 2026 | Burmese Language AI Rankings

Real-time leaderboard of the best LLMs for Burmese (Myanmar) language.

1

OpenAI: GPT-4o-mini

by OpenAI

128K tokens

GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting both text and image inputs with text outputs. As their most advanced small model, it is many times more affordable than other recent frontier models, and more than 60% cheaper than [GPT-3.5 Turbo](/models/openai/gpt-3.5-turbo). It maintains SOTA intelligence while being significantly more cost-effective. GPT-4o mini scores 82% on MMLU and currently ranks above GPT-4 on [common chat-preference leaderboards](https://arena.lmsys.org/). Check out the [launch announcement](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/) to learn more. #multimodal
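Since GPT-4o mini accepts mixed text and image input, a Burmese-language request can combine both in one message. A minimal sketch of the request body, using the OpenAI Chat Completions message format (the image URL is a placeholder):

```python
# Sketch: a multimodal chat-completion payload for GPT-4o mini.
# Follows the documented OpenAI content-parts format; no request is sent here.

def build_multimodal_payload(prompt: str, image_url: str) -> dict:
    """Build a request body mixing a text prompt with an image input."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_multimodal_payload(
    "ဒီပုံထဲမှာ ဘာတွေပါလဲ။",  # Burmese: "What is in this picture?"
    "https://example.com/photo.jpg",
)
```

The same body can then be POSTed to the chat-completions endpoint with any HTTP client.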

2

Google: Gemini 2.5 Flash Lite

by Google

1.05M tokens

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.
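Because thinking is off by default for Flash-Lite, a request has to opt in explicitly. A minimal sketch of an OpenRouter-style request body with the reasoning parameter enabled; the exact field shape follows OpenRouter's reasoning-tokens docs and should be treated as an assumption:

```python
# Sketch: opting in to multi-pass reasoning on Gemini 2.5 Flash-Lite via
# OpenRouter's reasoning parameter. Omitting the "reasoning" field keeps
# the model's default low-latency (no-thinking) behavior.

def build_reasoning_payload(prompt: str, effort: str = "low") -> dict:
    """Chat-completion body that selectively enables thinking."""
    return {
        "model": "google/gemini-2.5-flash-lite",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"effort": effort},  # assumed field name per OpenRouter docs
    }

payload = build_reasoning_payload(
    "မြန်မာစာကို အင်္ဂလိပ်လို ဘာသာပြန်ပါ။"  # Burmese: "Translate this Burmese text into English."
)
```

Raising `effort` trades latency and cost for answer quality on harder prompts.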

3

OpenAI: gpt-oss-120b (free)

by OpenAI

131.07K tokens

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
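The native tool-use support described above can be exercised with the standard OpenAI-style `tools` array. A minimal sketch, where the `translate_to_burmese` function is a hypothetical tool invented for illustration:

```python
# Sketch: a function-calling request body for gpt-oss-120b using the
# OpenAI-style "tools" schema. The tool itself is hypothetical.

def build_tool_call_payload(prompt: str) -> dict:
    """Chat-completion body advertising one callable function to the model."""
    return {
        "model": "openai/gpt-oss-120b",
        "messages": [{"role": "user", "content": prompt}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "translate_to_burmese",  # hypothetical example tool
                    "description": "Translate English text into Burmese.",
                    "parameters": {
                        "type": "object",
                        "properties": {"text": {"type": "string"}},
                        "required": ["text"],
                    },
                },
            }
        ],
    }

payload = build_tool_call_payload("Translate 'hello' for a Yangon audience.")
```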


4

Anthropic: Claude Sonnet 4.6

by Anthropic

1M tokens

5

Google: Gemini 2.5 Pro

by Google

1.05M tokens

6

OpenAI: GPT-4.1 Mini

by OpenAI

1.05M tokens

7

Google: Gemini 3 Flash Preview

by Google

1.05M tokens

8

DeepSeek: DeepSeek V3

by DeepSeek

163.84K tokens

9

Qwen2.5 72B Instruct

by Qwen

32.77K tokens