DeepSeek-V4 (April 2026) is a two-tier MoE family: V4-Pro (1.6T total parameters, 49B active) and V4-Flash (284B total, 13B active). Both offer a 1-million-token context window, MIT-licensed weights, and switchable thinking/non-thinking modes, making the family arguably the most cost-effective frontier option available.
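As a minimal sketch of how the thinking/non-thinking toggle might look from client code, the snippet below builds an OpenAI-compatible chat-completion payload. The model id `deepseek-v4-flash` and the `thinking` field are assumptions for illustration, not confirmed API parameters.

```python
def build_request(prompt: str, thinking: bool) -> dict:
    """Assemble a chat-completion request payload.

    Hypothetical: the model id and the `thinking` parameter name
    are placeholders; consult the provider's API docs for the
    actual fields.
    """
    return {
        "model": "deepseek-v4-flash",  # hypothetical model id
        "messages": [{"role": "user", "content": prompt}],
        # Hypothetical mode toggle between thinking and non-thinking.
        "thinking": {"type": "enabled" if thinking else "disabled"},
    }


payload = build_request("Summarize MoE routing in one sentence.", thinking=True)
print(payload["thinking"]["type"])  # -> enabled
```

In practice, an OpenAI-compatible client would POST this payload to the provider's `/chat/completions` endpoint; only the payload construction is shown here.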