Foundation Models

Qwen3‑LiveTranslate: Real‑Time Multimodal Interpretation — See It, Hear It, Speak It!

This article introduces Qwen3‑LiveTranslate, a real‑time multimodal AI for fast, vision‑enhanced, high‑quality multilingual audio and video interpretation.

Qwen3-VL: Sharper Vision, Deeper Thought, Broader Action

This article introduces Qwen3‑VL, a next‑gen open‑source multimodal model with sharper vision, stronger reasoning, long context, and agentic tool use.

Qwen3Guard: Real-time Safety for Your Token Stream

This article introduces Qwen3Guard, a multilingual safety guardrail for LLMs that provides real-time, token-level moderation of prompts and responses.

Qwen3-Omni: Natively Omni-Modal Foundation Models!

This article introduces Qwen3‑Omni, an end‑to‑end multilingual, omni‑modal foundation model that delivers real‑time text and speech responses across text, image, audio, and video inputs.

Qwen2.5-LLM: Extending the Boundary of LLMs

In this blog, we delve into the details of the latest Qwen2.5 series of language models.

Qwen2.5: A Party of Foundation Models!

This article introduces the latest addition to the Qwen family, Qwen2.5, along with specialized models for coding and mathematics.

Introducing Alibaba Cloud for Generative AI

This article introduces Alibaba Cloud's role in providing a complete solution for generative AI.

Japanese-Language AI Models Based on Tongyi Qianwen (Qwen) Were Launched by rinna

This article introduces rinna's Nekomata model series, continually trained on Japanese-language data and built on Alibaba Cloud's Qwen-7B and Qwen-14B models.