1 post tagged "attention-mechanisms"
A KV cache compression method that maintains full attention quality whilst delivering 2.5× higher throughput for long-context inference.