This article introduces the architecture and implementation of Tair KVCache Manager, an open-source enterprise-grade global KVCache management service for scalable Agentic AI inference.
This article introduces a dual memory-pool inference framework that enables efficient execution of hybrid Transformer-Mamba models by reconciling their conflicting caching mechanisms.
This article introduces engineering optimizations to 3FS, the foundation layer of KVCache, spanning performance, productization, and cloud-native management for scalable AI inference.
Alibaba Cloud has unveiled its AI Lakebase architecture, alongside a suite of upgrades for its flagship database PolarDB, at the PolarDB Developer Conference in China.
This article introduces HiCache, a hierarchical KVCache infrastructure developed by Alibaba Cloud Tair and SGLang to optimize performance and memory capacity for long-context "agentic" LLM inference.