Nvidia says it can shrink LLM memory 20x without changing model weights
Nvidia researchers have introduced a new technique that dramatically reduces how much memory large language models need to track conversation history — by as much as 20x — without modifying the model's weights.