Profile

Steven Sporen

AI red teaming in practice, with emphasis on prompt injection, tool misuse, sensitive data exposure, unsafe autonomy, and the governance and compliance questions that arise when models are connected to real systems.

About

AI Red Teaming Researcher, Responsible AI and Compliance

My work sits at the intersection of offensive security thinking, responsible AI, and modern AI application design. The emphasis is on system-level risk rather than model hype: prompts, retrieved content, tools, identities, permissions, memory, downstream actions, and the policy and compliance expectations around them. The goal is to make AI security concrete through architecture, attack paths, defensive patterns, governance considerations, and current reference material.

Current Focus

Focused on AI red teaming, prompt injection risk, agent security, responsible AI controls, and compliance-facing review for LLM and agent systems.

Remote / United States · AI red teaming · Open to relevant roles
Focus Areas
  • AI red teaming across copilots, assistants, and agentic workflows
  • Prompt injection and jailbreak analysis across direct, indirect, and multi-step attack paths
  • Responsible AI review covering safety, misuse pathways, and control effectiveness
  • Compliance-aware assessment of AI systems, including governance, auditability, and policy alignment
  • Prompt engineering review for instruction hierarchy, guardrails, decomposition, and isolation
  • Agent security analysis covering tools, memory, permissions, identity, and action boundaries
  • Curated research and commentary for AI security practitioners, product teams, and red teams
Working Style
  • I focus on practical application risk rather than abstract model capability debates
  • I connect security findings to governance, control design, and compliance expectations
  • I link to primary sources and add short notes on why they matter
  • I treat prompts, tools, memory, identity, and action boundaries as one attack surface
Contact

Reach out if the work here is relevant

If you are hiring, building in this space, or want to get in touch about AI red teaming and agent security, send a message.