公式動画ピックアップ
AAPL
ADBE
ADSK
AIG
AMGN
AMZN
BABA
BAC
BL
BOX
C
CHGG
CLDR
COKE
COUP
CRM
CROX
DDOG
DELL
DIS
DOCU
DOMO
ESTC
F
FIVN
GILD
GRUB
GS
GSK
H
HD
HON
HPE
HSBC
IBM
INST
INTC
INTU
IRBT
JCOM
JNJ
JPM
LLY
LMT
M
MA
MCD
MDB
MGM
MMM
MSFT
MSI
NCR
NEM
NEWR
NFLX
NKE
NOW
NTNX
NVDA
NYT
OKTA
ORCL
PD
PG
PLAN
PS
RHT
RNG
SAP
SBUX
SHOP
SMAR
SPLK
SQ
TDOC
TEAM
TSLA
TWOU
TWTR
TXN
UA
UAL
UL
UTX
V
VEEV
VZ
WDAY
WFC
WK
WMT
WORK
YELP
ZEN
ZM
ZS
ZUO
公式動画&関連する動画 [This Simple Firewall Breaks Today’s AI Security Benchmarks]
Welcome to AI Research Bites, a series of short, focused talks showcasing cutting-edge research from the ServiceNow AI Research team, aimed at anyone interested in the practical security of modern AI systems.
Modern AI agents don’t just follow user prompts—they execute instructions embedded in tool outputs, web content, APIs, and UI elements. This creates an overlooked attack surface known as indirect prompt injection, where agents can be hijacked without altering the user prompt at all. In this talk, Gabriel Huang demonstrates how a tool-using AI agent can be compromised in practice, and how a simple LLM-as-a-judge firewall can block these attacks in a live Colab demo.
But achieving near-perfect security with such a minimal defense isn’t necessarily good news. Drawing on results from our recent paper, we are showing how many current AI security benchmarks are saturated, reward shallow defenses, and give a false sense of robustness. We compare different defense strategies, discuss real-world examples of indirect prompt injections, and outline what meaningful evaluation of agent security should look like going forward.
📄 Indirect Prompt Injections: Are Firewalls All You Need, or Stronger Benchmarks?
Paper link: https://arxiv.org/pdf/2510.05244
🧪 Live demo (Colab notebook): https://tinyurl.com/firewall-vs-prompt-injection
🛠️ DoomArena framework: https://servicenow.github.io/DoomArena/
🔬 ServiceNow AI Research team: https://www.servicenow.com/research/
344
8