公式動画ピックアップ
AAPL
ADBE
ADSK
AIG
AMGN
AMZN
BABA
BAC
BL
BOX
C
CHGG
CLDR
COKE
COUP
CRM
CROX
DDOG
DELL
DIS
DOCU
DOMO
ESTC
F
FIVN
GILD
GRUB
GS
GSK
H
HD
HON
HPE
HSBC
IBM
INST
INTC
INTU
IRBT
JCOM
JNJ
JPM
LLY
LMT
M
MA
MCD
MDB
MGM
MMM
MSFT
MSI
NCR
NEM
NEWR
NFLX
NKE
NOW
NTNX
NVDA
NYT
OKTA
ORCL
PD
PG
PLAN
PS
RHT
RNG
SAP
SBUX
SHOP
SMAR
SPLK
SQ
TDOC
TEAM
TSLA
TWOU
TWTR
TXN
UA
UAL
UL
UTX
V
VEEV
VZ
WDAY
WFC
WK
WMT
WORK
YELP
ZEN
ZM
ZS
ZUO
公式動画&関連する動画 [Pop Goes the Stack | AI Red Teaming in Practice: Scores, guardrails, auto-remediation | AI]
AI in production isn’t just another feature to ship. It’s a non-deterministic system that can be socially engineered, fuzzed, and pushed into failure states you won’t find with traditional testing.
Recorded live in Las Vegas at #F5’s #AppWorld2026, this episode of Pop Goes the Stack brings Joel Moses together with Jimmy White, F5’s VP of AI Security (via the CalypsoAI acquisition), for a practical look at what AI red teaming actually is and how it works when the attacker is an agent.
Jimmy reframes #GenAI security as a permutation problem: if there are countless prompt combinations that could unlock sensitive data or trigger unsafe actions, you need genAI-powered red team agents to explore those paths at scale. The discussion covers custom intents, agentic “fingerprints” that reveal not just what was compromised but how it happened, and why that “how” is the key to building protections you can trust.
You’ll also hear how scoring and reporting translate into guardrails, how auto-remediation can be validated with positive and negative test cases before a human publishes changes, and why relying on models to internalize safety isn’t a realistic plan. The conversation closes on agentic AI risk, where tools and permissions matter more than the model’s reasoning, and introduces “thought injection” as a way to redirect unsafe actions without breaking the agent loop.
If you’re building AI apps, deploying MCP-connected systems, or worrying about agents becoming tomorrow’s service accounts, this episode gives you a sharper playbook for testing, governance, and resilience.
Chapters:
00:00 Welcome to Pop Goes the Stack
00:44 AI red teaming 101: non-determinism + prompt permutations
02:02 Real risk: LLMs connected to Confluence, SQL, MCP tools
03:13 Define attacker intent: steal salaries, source code, API tokens
04:01 Agentic red teamers: backtracking, new attacks, “fingerprints”
05:22 What’s different vs traditional red teams (AI fuzzing + context)
06:39 The key output: not just “what happened,” but “how”
07:29 CASI + ARS: scoring model safety and agentic resiliency
08:59 What teams miss: security is still an afterthought
11:03 Turning findings into guardrails: model choice + remediation
12:41 Auto-remediation with evidence: publish only after review
14:39 Guardrails: AI model development focuses on better not safer
16:27 What are the benefits of consistency across guardrail infrastructure?
18:08 What are the simplistic guardrails that enterprises forget?
20:29 Agentic AI security: tools are the danger + “thought injection”
Learn how you can stay ahead of the curve and keep your stack whole with additional insights on app security, multicloud, AI, and emerging tech: https://go.f5.net/89lxcphr
More about F5: https://go.f5.net/s08gngs4
Read our blog: https://go.f5.net/1hloysjv
Follow us on LinkedIn: https://go.f5.net/zkfyn68o
98
1