Best LLMs for Korean 2026 | Korean Language AI Rankings

Top-performing LLMs for Korean-language tasks, including Hangul support.

1

Qwen: Qwen3 235B A22B Instruct 2507

by Qwen

262.14K tokens

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.

2

Google: Gemini 3 Flash Preview

by Google

1.05M tokens

Gemini 3 Flash Preview is a high-speed, high-value thinking model designed for agentic workflows, multi-turn chat, and coding assistance. It delivers near-Pro-level reasoning and tool-use performance with substantially lower latency than larger Gemini variants, making it well suited for interactive development, long-running agent loops, and collaborative coding tasks. Compared to Gemini 2.5 Flash, it provides broad quality improvements across reasoning, multimodal understanding, and reliability. The model supports a 1M-token context window and multimodal inputs including text, images, audio, video, and PDFs, with text output. It includes configurable reasoning via thinking levels (minimal, low, medium, high), structured output, tool use, and automatic context caching. Gemini 3 Flash Preview is optimized for users who want strong reasoning and agentic behavior without the cost or latency of full-scale frontier models.
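As a sketch of how a thinking level might be selected through a unified OpenRouter-style `reasoning` parameter: the model slug `google/gemini-3-flash-preview` and the mapping of the generic `effort` field onto Gemini 3 thinking levels are assumptions, not confirmed by this page.

```python
import json

# Thinking levels named in the model description above.
THINKING_LEVELS = {"minimal", "low", "medium", "high"}

def build_request(prompt: str, effort: str = "low") -> dict:
    """Build a chat-completions payload that requests a given thinking level."""
    if effort not in THINKING_LEVELS:
        raise ValueError(f"unknown thinking level: {effort!r}")
    return {
        "model": "google/gemini-3-flash-preview",  # assumed slug
        "messages": [{"role": "user", "content": prompt}],
        # OpenRouter's unified reasoning control; assumed to map onto
        # this model's thinking levels.
        "reasoning": {"effort": effort},
    }

payload = build_request("Translate to English: 좋은 아침입니다", effort="medium")
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

The payload would be POSTed as JSON to `https://openrouter.ai/api/v1/chat/completions` with a bearer API key.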

3

Google: Gemini 2.5 Flash Lite

by Google

1.05M tokens

Gemini 2.5 Flash-Lite is a lightweight reasoning model in the Gemini 2.5 family, optimized for ultra-low latency and cost efficiency. It offers improved throughput, faster token generation, and better performance across common benchmarks compared to earlier Flash models. By default, "thinking" (i.e. multi-pass reasoning) is disabled to prioritize speed, but developers can enable it via the [Reasoning API parameter](https://openrouter.ai/docs/use-cases/reasoning-tokens) to selectively trade off cost for intelligence.
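Because thinking is disabled by default on Flash-Lite, a request has to opt in explicitly. A minimal sketch of the request body, assuming the slug `google/gemini-2.5-flash-lite` and the `reasoning` object described in the linked OpenRouter docs:

```python
def flash_lite_request(prompt: str, enable_thinking: bool = False) -> dict:
    """Payload for Gemini 2.5 Flash-Lite; thinking stays off unless requested."""
    body = {
        "model": "google/gemini-2.5-flash-lite",  # assumed slug
        "messages": [{"role": "user", "content": prompt}],
    }
    if enable_thinking:
        # Opt in to multi-pass reasoning, trading latency and cost for quality.
        body["reasoning"] = {"enabled": True}
    return body

# Default: fast path, no reasoning field at all.
fast = flash_lite_request("안녕하세요를 영어로 번역해 주세요.")
# Opt-in: the same request with thinking enabled.
smart = flash_lite_request("안녕하세요를 영어로 번역해 주세요.", enable_thinking=True)
```

Either body would be sent as JSON to OpenRouter's `/api/v1/chat/completions` endpoint with an `Authorization: Bearer <key>` header.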

4

Google: Gemma 4 26B A4B (free)

by Google

262.14K tokens

5

Google: Gemini 2.5 Pro

by Google

1.05M tokens

6

MoonshotAI: Kimi K2.6

by MoonshotAI

262.14K tokens

7

OpenAI: GPT-4o-mini

by OpenAI

128K tokens

8

Anthropic: Claude Sonnet 4.6

by Anthropic

1M tokens

9

OpenAI: GPT-5.4 Mini

by OpenAI

400K tokens