Training Data Poisoning
The long-game version of context flooding — creating and distributing content specifically designed to be ingested into future LLM training datasets. Unlike prompt poisoning (which targets runtime behavior), training data poisoning aims to permanently alter what the AI "knows" by corrupting its foundational training data.