Official Video Picks
AAPL
ADBE
ADSK
AIG
AMGN
AMZN
BABA
BAC
BL
BOX
C
CHGG
CLDR
COKE
COUP
CRM
CROX
DDOG
DELL
DIS
DOCU
DOMO
ESTC
F
FIVN
GILD
GRUB
GS
GSK
H
HD
HON
HPE
HSBC
IBM
INST
INTC
INTU
IRBT
JCOM
JNJ
JPM
LLY
LMT
M
MA
MCD
MDB
MGM
MMM
MSFT
MSI
NCR
NEM
NEWR
NFLX
NKE
NOW
NTNX
NVDA
NYT
OKTA
ORCL
PD
PG
PLAN
PS
RHT
RNG
SAP
SBUX
SHOP
SMAR
SPLK
SQ
TDOC
TEAM
TSLA
TWOU
TWTR
TXN
UA
UAL
UL
UTX
V
VEEV
VZ
WDAY
WFC
WK
WMT
WORK
YELP
ZEN
ZM
ZS
ZUO
Official Video & Related Videos [When AI Agents Go Off Script: Understanding Misalignment in Enterprise AI]
AI agents are designed to execute tasks intelligently — but what happens when their goals don’t fully match yours? In this episode, Ben Kus, CTO of Box, and Meena Ganesh, Sr. Product Marketing Manager for AI at Box, break down the concept of misalignment in AI agents: when the system tries to fulfill an objective but runs into missing permissions, broken integrations, or ambiguous instructions.
From data retrieval failures (“user doesn’t have access”) to unintentional side effects (“system down for maintenance”), misalignment isn’t about AI going rogue — it’s about context gaps between human intent and system execution.
You’ll learn:
- What “agent misalignment” looks like in real-world enterprise workflows
- Why even well-designed AI systems can misinterpret intent
- How to build alignment through clear objectives, feedback loops, and human-in-the-loop review
- The design principles that help agents know when to stop, clarify, or escalate (see the sketch after this description)
Because in enterprise AI, success isn’t just what agents can do — it’s what they should do.
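To make the stop/clarify/escalate idea from the list above concrete, here is a minimal sketch of a rule-based guardrail around an agent's tool call. The tool name (retrieve_document), the error labels, and the decide policy are all illustrative assumptions, not Box's product or API; the point is only to show how a permission failure or an ambiguous instruction becomes an escalation or a clarifying question instead of a silent workaround.

```python
# Hypothetical sketch: a rule-based guardrail that decides whether an agent
# should proceed, ask the user to clarify, or escalate to a human reviewer.
# Tool name, error strings, and policy are assumptions for illustration only.
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    PROCEED = auto()   # objective and permissions are clear; continue
    CLARIFY = auto()   # instructions are ambiguous; ask the user first
    ESCALATE = auto()  # blocked or risky; hand off to a human reviewer


@dataclass
class ToolResult:
    ok: bool
    error: str = ""    # e.g. "permission_denied", "service_unavailable"


def retrieve_document(doc_id: str, user: str) -> ToolResult:
    """Stand-in for a real data-retrieval tool; always denies access here."""
    return ToolResult(ok=False, error="permission_denied")


def decide(result: ToolResult, instruction: str) -> Decision:
    """Map a tool outcome plus the original instruction to a next step."""
    if not result.ok:
        # Missing permissions or broken integrations are context gaps,
        # not something the agent should quietly work around.
        return Decision.ESCALATE
    if len(instruction.split()) < 4:
        # Crude ambiguity check: very short instructions trigger a question.
        return Decision.CLARIFY
    return Decision.PROCEED


if __name__ == "__main__":
    result = retrieve_document("contract-123", user="alice")
    print(decide(result, "summarize it"))  # Decision.ESCALATE (access denied)
```

In a real system the decide step would likely be a learned or policy-driven component with logging and human review queues; the hard-coded rules here only mirror the failure modes the episode describes.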
Chapters:
00:00 Intro: example scenario
00:23 Why this episode matters
00:57 What is a misaligned agent?
03:14 How agents interpret instructions
05:06 Why misalignment is dangerous for businesses
07:49 Real incidents and research findings
10:12 Three ways to prevent misaligned agents
11:08 Wrap and subscribe