Best LLMs for Turkish 2026 | Turkish Language AI Leaderboard

Top LLMs for Turkish language performance and accuracy.

Google: Gemini 3 Flash Preview

by Google

1.05M tokens

Gemini 3 Flash Preview is a high-speed, high-value thinking model designed for agentic workflows, multi-turn chat, and coding assistance. It delivers near-Pro-level reasoning and tool-use performance with substantially lower latency than larger Gemini variants, making it well suited to interactive development, long-running agent loops, and collaborative coding tasks. Compared with Gemini 2.5 Flash, it provides broad quality improvements across reasoning, multimodal understanding, and reliability. The model supports a 1M-token context window and multimodal inputs, including text, images, audio, video, and PDFs, with text output. It includes configurable reasoning via thinking levels (minimal, low, medium, high), structured output, tool use, and automatic context caching. Gemini 3 Flash Preview is optimized for users who want strong reasoning and agentic behavior without the cost or latency of full-scale frontier models.
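The configurable thinking levels described above can be sketched as a request payload in OpenAI-compatible style. This is a hypothetical illustration only: the model slug (`google/gemini-3-flash-preview`) and the shape of the `reasoning` field are assumptions, not confirmed API details.

```python
import json

def build_request(prompt: str, thinking_level: str = "low") -> dict:
    """Assemble a chat-completions-style payload with a thinking level
    (one of minimal/low/medium/high, per the model description above).

    NOTE: the model slug and the "reasoning" field name are assumed for
    illustration; check your provider's API reference before use.
    """
    assert thinking_level in {"minimal", "low", "medium", "high"}
    return {
        "model": "google/gemini-3-flash-preview",   # assumed slug
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"effort": thinking_level},    # assumed field shape
    }

payload = build_request("Summarize this Turkish news article.", "medium")
print(json.dumps(payload, indent=2))
```

Building the payload separately from sending it makes the thinking-level choice easy to unit-test without a network call.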

MoonshotAI: Kimi K2.6

by MoonshotAI

262.14K tokens

Kimi K2.6 is Moonshot AI's next-generation multimodal model, designed for long-horizon coding, coding-driven UI/UX generation, and multi-agent orchestration. It handles complex end-to-end coding tasks across Python, Rust, and Go, and can convert prompts and visual inputs into production-ready interfaces. Its agent-swarm architecture scales to hundreds of parallel sub-agents for autonomous task decomposition, delivering documents, websites, and spreadsheets in a single run without human oversight.

OpenAI: gpt-oss-120b (free)

by OpenAI

131.07K tokens

gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
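The single-H100 claim follows from back-of-envelope arithmetic on the figures above. A rough sketch, assuming MXFP4 costs about 4 bits per parameter (ignoring scale-factor overhead):

```python
# Figures from the model description: 117B total parameters (MoE),
# ~5.1B activated per forward pass.
total_params = 117e9
active_params = 5.1e9

# Sparsity: fraction of weights touched per token.
active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%}")   # roughly 4.4%

# Weight memory at ~4 bits/param (MXFP4, scale overhead ignored):
weight_bytes = total_params * 4 / 8
print(f"Weights: ~{weight_bytes / 1e9:.1f} GB")     # ~58.5 GB

# An H100 has 80 GB of HBM, so the quantized weights fit on one GPU,
# leaving headroom for activations and KV cache.
print("Fits on one 80 GB H100:", weight_bytes < 80e9)
```

Only a few percent of the weights participate in each forward pass, which is why the model can offer large-model quality at small-model inference cost.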


4

Google: Gemini 2.5 Flash

by Google

1.05M tokens

5

Anthropic: Claude Sonnet 4.6

by Anthropic

1M tokens

6

OpenAI: GPT-5 Chat

by OpenAI

128K tokens

7

DeepSeek: DeepSeek V4 Flash

by DeepSeek

1.05M tokens

8

DeepSeek: DeepSeek V4 Pro

by DeepSeek

1.05M tokens