MIT and ETH Zurich unveil SDFT to stop AI from forgetting old skills
Researchers at MIT, the Improbable AI Lab, and ETH Zurich have developed a new technique called self-distillation fine-tuning (SDFT) for large language models (LLMs).
SDFT enables LLMs to acquire new skills during fine-tuning without forgetting the ones they already have.
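The article does not spell out the procedure, but the name points to the general self-distillation recipe: before fine-tuning on a new task, the model restates the reference answers in its own words, and training then targets those self-generated answers instead of the raw references, which keeps the update closer to the model's existing behavior. The sketch below illustrates that general idea only; the exact method in the MIT/ETH Zurich work may differ, and the model name, prompt, and helper function are illustrative assumptions.

```python
# Illustrative sketch of a self-distillation data step (not the paper's exact method).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed small model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

def self_distill_example(instruction: str, reference_answer: str) -> str:
    """Ask the model to restate the reference answer in its own words."""
    prompt = (
        f"Instruction: {instruction}\n"
        f"Reference answer: {reference_answer}\n"
        "Rewrite the reference answer in your own words, keeping it correct:\n"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Keep only the newly generated tokens, i.e. the rewritten answer.
    new_tokens = output_ids[0, inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

# Build a self-distilled dataset; ordinary supervised fine-tuning would then
# train on these pairs instead of the original reference answers.
new_task_data = [("Translate 'good morning' to German.", "Guten Morgen.")]
distilled = [
    {"instruction": ins, "response": self_distill_example(ins, ref)}
    for ins, ref in new_task_data
]
print(distilled)
```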
