
Tencent-Hunyuan/Hy3-preview


Hy3 preview (295B A21B), a leading reasoning and agent model in its size class, with great cost efficiency

From the README

🖥️ Official Website | 💬 GitHub


Model Introduction

Hy3 preview is a 295B-parameter Mixture-of-Experts (MoE) model with 21B active parameters and 3.8B MTP layer parameters, developed by the Tencent Hy Team. Hy3 preview is the first model trained on our rebuilt infrastructure, and the strongest we've shipped so far. It improves significantly on complex reasoning, instruction following, context learning, coding, and agent tasks.

| Property | Value |
|:---|:---|
| Architecture | Mixture-of-Experts (MoE) |
| Total Parameters | 295B |
| Activated Parameters | 21B |
| MTP Layer Parameters | 3.8B |
| Number of Layers (excluding MTP layer) | 80 |
| Number of MTP Layers | 1 |
| Attention Heads | 64 (GQA, 8 KV heads, head dim 128) |
| Hidden Size | 4096 |
| Intermediate Size | 13312 |
| Context Length | 256K |
| Vocabulary Size | 120832 |
| Number of Experts | 192 experts, top-8 activated |
| Supported Precisions | BF16 |
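As a rough sanity check, a few back-of-envelope figures can be derived from the spec table above. This sketch assumes BF16 storage (2 bytes per value) and a standard GQA KV-cache layout; the derived numbers are illustrative, not official figures.

```python
# Values taken directly from the spec table above.
NUM_LAYERS = 80        # excluding the MTP layer
NUM_KV_HEADS = 8       # GQA
HEAD_DIM = 128
BYTES_BF16 = 2         # assumption: BF16 cache, 2 bytes/value

TOTAL_PARAMS = 295e9
ACTIVE_PARAMS = 21e9
EXPERTS_TOTAL = 192
EXPERTS_ACTIVE = 8

# KV cache per token: K and V tensors, per layer, per KV head.
kv_bytes_per_token = 2 * NUM_LAYERS * NUM_KV_HEADS * HEAD_DIM * BYTES_BF16
print(f"KV cache per token: {kv_bytes_per_token / 1024:.0f} KiB")  # 320 KiB

# Fraction of parameters active in each forward pass.
print(f"Active parameter ratio: {ACTIVE_PARAMS / TOTAL_PARAMS:.1%}")  # 7.1%

# Fraction of experts routed per token.
print(f"Experts per token: {EXPERTS_ACTIVE}/{EXPERTS_TOTAL} "
      f"= {EXPERTS_ACTIVE / EXPERTS_TOTAL:.1%}")  # 4.2%
```

The 320 KiB/token KV cache (small relative to dense models of similar total size, thanks to GQA with 8 KV heads) and the ~7% active-parameter ratio are where the card's cost-efficiency claim comes from.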

Highlights

  • STEM & Reasoning — Complex reasoning underpins everything else. Hy3 preview performs well on challenging STEM benchmarks like FrontierScience-Olympiad and IMOAnswerBench, and achieved excellent results in the Tsinghua Qiuzhen College Math PhD qualifying exam (Spring '26) and the China High School Biology Olympiad (CHSBO 2025), demonstrating generalizable reasoning capacity.

  • Context Learning & Instruction Following — Real-world tasks require the ability to parse messy, lengthy contexts and follow complex rules. We built CL-bench and CL-bench-Life from our own business scenarios to measure context learning ability. Hy3 preview shows solid gains in both context learning and instruction following.

  • Code & Agent — Coding and agents saw the biggest gains. With a rebuilt RL infrastructure and larger-scale training tasks, we posted competitive scores across mainstream coding agent benchmarks (SWE-bench Verified, Terminal-Bench 2.0) and search agent benchmarks (BrowseComp, WideSearch).

Benchmark Results

Pre-trained Model Performance

| Category | Benchmark (Metric) | # Shots | Kimi-K2 BASE | DeepSeek-V3 BASE | GLM-4.5 BASE | Hy3 preview-Base |
|---|---|---|---|---|---|---|
| | #ActivatedParams | - | 32B | 37B | 32B | 21B |
| | #TotalParams | - | 1043B | 671B | 355B | 295B |
| English | MMLU | 5-shot | | | | |