公式動画ピックアップ
AAPL
ADBE
ADSK
AIG
AMGN
AMZN
BABA
BAC
BL
BOX
C
CHGG
CLDR
COKE
COUP
CRM
CROX
DDOG
DELL
DIS
DOCU
DOMO
ESTC
F
FIVN
GILD
GRUB
GS
GSK
H
HD
HON
HPE
HSBC
IBM
INST
INTC
INTU
IRBT
JCOM
JNJ
JPM
LLY
LMT
M
MA
MCD
MDB
MGM
MMM
MSFT
MSI
NCR
NEM
NEWR
NFLX
NKE
NOW
NTNX
NVDA
NYT
OKTA
ORCL
PD
PG
PLAN
PS
RHT
RNG
SAP
SBUX
SHOP
SMAR
SPLK
SQ
TDOC
TEAM
TSLA
TWOU
TWTR
TXN
UA
UAL
UL
UTX
V
VEEV
VZ
WDAY
WFC
WK
WMT
WORK
YELP
ZEN
ZM
ZS
ZUO
公式動画&関連する動画 [Pop Goes the Stack | Crossing the streams | AI Security]
Prompt injection isn't some new exotic hack. It’s what happens when you throw your admin console and your users into the same text box and pray the intern doesn’t find the keys to production. Vendors keep chanting about “guardrails” like it’s a Harry Potter spell, but let’s be real—if your entire security model is “please don’t say ignore previous instructions,” you’re not doing security, you’re doing improv.
On this episode of Pop Goes the Stack, we're digging into what it actually takes to keep agentic #AI from dumpster-diving its own system prompts: deterministic policy engines, mediated tool use, and maybe—just maybe—admitting that your LLM is not a #CISO. Because at the end of the day, you can’t trust a probabilistic parrot to enforce your compliance framework. That’s how you end up with a fax machine defending against a DDoS—again.
The core premise here is that prompt injection is not actually injection, it's system prompt manipulation—but it's not a bug, it's by design. There's a GitHub repo full of system prompts extracted by folks and a number of articles on "exfiltration" of system prompts.
Join #F5's Lori MacVittie, Joel Moses, and Jason Williams as they explain why it's so easy, why it's hard to prevent, and possible mechanisms for constraining AI to minimize damage. Cause you can't stop it. At least not yet.
Chapters:
00:00 Welcome to Pop Goes the Stack
01:53 What Exactly Are System Prompts?
03:01 The Evolution of System Prompts
03:58 Why Prompt Injection Tops OWASP GenAI Threats
05:45 Tools and Function Calling: A Double-Edged Sword
07:05 Prompt Injection vs Jailbreaking
09:23 Exfiltration Risks: Insights from Leaked Prompts
12:03 Risks with AI Agents and Agentic Architectures
14:41 A Future Beyond System Prompts for Guardrails
16:42 System Prompt Growth: Operational Cost Implications
18:20 Key Takeaways: Security and Architecture Evolutions
Learn how you can stay ahead of the curve and keep your stack whole with additional insights on app security, multicloud, AI, and emerging tech: https://go.f5.net/944omoyr
More about F5: https://www.f5.com/
Read our blog: https://www.f5.com/company/blog
Follow us on LinkedIn: https://www.linkedin.com/company/F5
71
1