Prompt Injection
A direct attack where an adversary inserts malicious instructions into an LLM's input context to override its intended behavior. Distinct from prompt poisoning (which is indirect, via web content), prompt injection typically occurs through user inputs, API calls, or data feeds. Can cause AI to ignore safety filters, leak system prompts, or produce harmful outputs.