1 post tagged inference-acceleration.
A KV cache compression method that maintains full-attention quality whilst delivering 2.5× higher throughput for long-context inference.