Why it matters
OpenAI describes using automated red teaming and reinforcement learning to discover prompt-injection attacks against agents before they appear in the wild.
My takeaway: a strong reference for red-team programs that need to justify iterative break-fix validation rather than one-time review cycles.