Just 250 bad documents can poison a massive AI model
We trust large language models with everything from writing emails to generating code, assuming their vast training data makes them robust. But what if a bad actor could secretly teach an AI a malicious…