Alibaba has released Wan2.2, the industry's first family of open-source large video generation models to incorporate a Mixture-of-Experts (MoE) architecture, which significantly elevates the ability of creators and developers to produce cinematic-style videos with a single click.
The Wan2.2 series features a text-to-video model, Wan2.2-T2V-A14B; an image-to-video model, Wan2.2-I2V-A14B; and a hybrid model, Wan2.2-TI2V-5B, which supports both text-to-video and image-to-video generation within a single unified framework.
MoE architecture
Wan2.2-T2V-A14B and Wan2.2-I2V-A14B generate videos with cinematic-grade quality
Built on the MoE architecture and trained on meticulously curated aesthetic data, Wan2.2-T2V-A14B and Wan2.2-I2V-A14B generate videos with cinematic-grade quality and aesthetics, offering creators precise control over key dimensions such as lighting, time of day, colour tone, camera angle, frame size, composition, and focal length.
The two MoE models also demonstrate significant enhancements in producing complex motions, including vivid facial expressions, dynamic hand gestures, and intricate sports movements.
Additionally, the models deliver realistic representations with enhanced instruction following and adherence to physical laws.
High computational consumption
To address the high computational cost of video generation caused by long token sequences, Wan2.2-T2V-A14B and Wan2.2-I2V-A14B implement a two-expert design in the denoising process of their diffusion models: a high-noise expert that focuses on overall scene layout, and a low-noise expert that refines details and textures.
Each model comprises 27 billion parameters in total, but only 14 billion are activated at each denoising step, reducing computational consumption by up to 50%.
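To make the hand-off concrete, here is a minimal sketch of a two-expert denoiser in Python; the module names and the timestep boundary are illustrative assumptions, not Wan2.2's actual implementation:

```python
import torch

class TwoExpertDenoiser(torch.nn.Module):
    """Routes each denoising step to one of two experts (illustrative only)."""

    def __init__(self, high_noise_expert, low_noise_expert, boundary=0.5):
        super().__init__()
        self.high_noise_expert = high_noise_expert  # shapes the overall scene layout
        self.low_noise_expert = low_noise_expert    # refines details and textures
        self.boundary = boundary                    # assumed hand-off point on the noise schedule

    def forward(self, latents, t, cond):
        # t runs from 1 (pure noise) down to 0 (clean video). Only one
        # expert runs per step, so although both experts together hold the
        # full parameter count, only one expert's share is active at a time.
        expert = self.high_noise_expert if t >= self.boundary else self.low_noise_expert
        return expert(latents, t, cond)
```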
Wan2.2 incorporates fine-grained aesthetic tuning through a cinematic-inspired prompt system that categorises key dimensions such as lighting, illumination, composition, and colour tone.
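As an illustration, an aesthetic-aware prompt along these lines might spell out each dimension explicitly; the keyword vocabulary below is an assumption, not Wan2.2's documented syntax:

```python
# Hypothetical prompt showing the kind of cinematic dimensions described above;
# the exact wording and structure are illustrative only.
prompt = (
    "A lone sailor walks across a wooden deck at dusk. "
    "Lighting: golden hour, soft rim light. "
    "Composition: rule of thirds, low camera angle. "
    "Colour tone: warm highlights, desaturated teal shadows. "
    "Focal length: 35mm with shallow depth of field."
)
```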
Generalisation capabilities
This approach enables Wan2.2 to accurately interpret and convey users' aesthetic intentions
This approach enables Wan2.2 to accurately interpret and convey users' aesthetic intentions during the generation process.
To enhance generalisation capabilities and creative diversity, Wan2.2 was trained on a substantially larger dataset, featuring a 65.6% increase in image data and an 83.2% increase in video data compared to Wan2.1.
As a result, Wan2.2 performs better at producing complex scenes and motions and shows a greater capacity for artistic expression.
Efficiency and scalability
The TI2V-5B can generate a 5-second 720P video in several minutes on a single consumer-grade GPU
Wan2.2 also introduces the hybrid model Wan2.2-TI2V-5B, a dense model built on a high-compression 3D VAE architecture that achieves a temporal and spatial compression ratio of 4×16×16, raising the overall information compression rate to 64×.
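The arithmetic behind that figure can be checked directly. In the sketch below, the 4×16×16 ratio comes from the text, while the 48-channel latent dimension is an assumption chosen so that the quoted 64× rate works out:

```python
# Back-of-the-envelope check of the compression figures quoted above.
frames, height, width, rgb_channels = 120, 720, 1280, 3   # example 720P clip
t_ratio, h_ratio, w_ratio = 4, 16, 16                     # from the text
latent_channels = 48                                      # assumed latent dimension

video_values = frames * height * width * rgb_channels
latent_values = (frames // t_ratio) * (height // h_ratio) * (width // w_ratio) * latent_channels

print(video_values / latent_values)  # 64.0 -> the overall information compression rate
```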
The TI2V-5B can generate a 5-second 720P video in several minutes on a single consumer-grade GPU, offering developers and content creators both efficiency and scalability.
Wan2.2 models are available to download on Hugging Face and GitHub, as well as Alibaba Cloud's open-source community, ModelScope. A major contributor to the global open-source community, Alibaba open-sourced four Wan2.1 models in February 2025 and Wan2.1-VACE (Video All-in-one Creation and Editing) in May 2025. To date, the models have attracted over 5.4 million downloads on Hugging Face and ModelScope.
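For reference, here is a minimal sketch of fetching the weights with the huggingface_hub library; the repo id is assumed from the naming used on the Wan-AI organisation page and should be verified there before use:

```python
from huggingface_hub import snapshot_download

# Downloads the full model repository to a local directory.
# The repo id follows the assumed Wan-AI naming convention; verify on Hugging Face.
snapshot_download(repo_id="Wan-AI/Wan2.2-TI2V-5B", local_dir="./Wan2.2-TI2V-5B")
```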