vLLM

16× Performance Boost and 98% Cost Reduction: A Dive into the Upgraded SLS Vector Indexing Architecture

This article introduces the upgraded SLS vector indexing architecture, which achieves a 16× performance boost and a 98% cost reduction for semantic indexing of log data.

Qwen3-Next: Towards Ultimate Training & Inference Efficiency

This article introduces Qwen3-Next, an LLM architecture built for training and inference efficiency, along with its 80B-parameter models, benchmark results, and deployment guidance.

ACK Gateway with Inference Extension: A Practice for Optimizing Large Model Inference Services Deployed Across Multiple Nodes

This article explains how to use ACK Gateway with Inference Extension to optimize the performance of large model inference services deployed across multiple nodes.

ACK One Registered Clusters Help Solve GPU Resource Shortages in Data Centers

With the help of ACK One registered clusters, you can make full use of Alibaba Cloud's ACS GPU computing power to deploy the DeepSeek inference model efficiently.

Analyzing the Distributed Inference Process of vLLM and Ray from a Source Code Perspective

This article explores how to implement distributed inference with vLLM and Ray from a source code perspective.
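For readers who want to experiment before diving into the source, the sketch below shows the user-facing entry point that kicks off vLLM's Ray-backed distributed inference. It is a minimal illustration, not code from the article: the model name, GPU count, and prompt are placeholders, and the `distributed_executor_backend="ray"` argument assumes a vLLM release that exposes it.

```python
# Minimal sketch: offline inference with vLLM sharding a model across
# Ray workers via tensor parallelism. Assumes a Ray cluster is reachable
# (e.g. started with `ray start --head`) and that 4 GPUs are available.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",     # placeholder model ID
    tensor_parallel_size=4,               # shard weights over 4 GPUs
    distributed_executor_backend="ray",   # run the shards as Ray workers
)

outputs = llm.generate(
    ["Explain how vLLM distributes inference across GPUs."],
    SamplingParams(temperature=0.7, max_tokens=128),
)
for request_output in outputs:
    print(request_output.outputs[0].text)
```

Setting tensor_parallel_size alone is enough on a single node; the explicit Ray backend matters when the tensor-parallel workers must span multiple machines.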