『Securing AI Agents with Niall Merrigan』のカバーアート

Securing AI Agents with Niall Merrigan

Securing AI Agents with Niall Merrigan

無料で聴く

ポッドキャストの詳細を見る

今ならプレミアムプランが3カ月 月額99円

2026年5月12日まで。4か月目以降は月額1,500円で自動更新します。

概要

AI Agents can be powerful tools for an organization - but are they a security risk? Richard talks to Niall Merrigan about his experiences dealing with the various ways that LLMs can be attacked, starting with prompt injection. While some attacks are humorous, others can be very serious, especially in the context of agents, where the right prompt can cause an agent to use its capabilities to access or affect data outside its expected behavior. This has already led to several well-publicized CVEs, including the ServiceNow Privilege Escalation advisory. New tools have emerged to help restrict prompts and keep agents on task - but as with all things security, this is another set of tools you need to get familiar with!

Links

  • AI Recommendation Poisoning
  • Detecting Prompt Injection Attacks
  • Mark Russinovich Crescendo Multi-Turn LLM Jailbreak Attack
  • Cross-Site Scripting (XSS)
  • Cameron Mattis LinkedIn
  • Privilege Escalation in ServiceNow AI Platform
  • Azure AI Content Safety Prompt Shields
  • Task Adherence
  • Simon Willison's Lethal Trifecta
  • Microsoft Agent 365
  • PyRIT
  • OWASP Securing Agentic Applications Guide

Recorded February 16, 2026

まだレビューはありません