Official Video Pickup
AAPL
ADBE
ADSK
AIG
AMGN
AMZN
BABA
BAC
BL
BOX
C
CHGG
CLDR
COKE
COUP
CRM
CROX
DDOG
DELL
DIS
DOCU
DOMO
ESTC
F
FIVN
GILD
GRUB
GS
GSK
H
HD
HON
HPE
HSBC
IBM
INST
INTC
INTU
IRBT
JCOM
JNJ
JPM
LLY
LMT
M
MA
MCD
MDB
MGM
MMM
MSFT
MSI
NCR
NEM
NEWR
NFLX
NKE
NOW
NTNX
NVDA
NYT
OKTA
ORCL
PD
PG
PLAN
PS
RHT
RNG
SAP
SBUX
SHOP
SMAR
SPLK
SQ
TDOC
TEAM
TSLA
TWOU
TWTR
TXN
UA
UAL
UL
UTX
V
VEEV
VZ
WDAY
WFC
WK
WMT
WORK
YELP
ZEN
ZM
ZS
ZUO
Official Video & Related Videos [Building AI workflows that actually work with micro1]
Most enterprise AI projects fail because teams focus on models instead of workflows.
Ben Kus, CTO of Box, sits down with Andrew Maas, VP of AI at micro1, to zoom in on how organizations are building reliable AI systems by combining existing models, structured workflows, and human expertise.
They dig into the shift from one-shot AI outputs to multi-step agent workflows, and why composing tools like OCR and LLMs delivers more value than training custom models from scratch.
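As a rough illustration of that composition pattern, here is a minimal Python sketch of a multi-step document workflow: OCR, then LLM extraction, then a deterministic validation layer, with files as the interface between steps. The step bodies (`ocr_step`, `llm_extract_step`) are hypothetical stand-ins, not an actual Box or micro1 API; a real pipeline would call an OCR engine and an LLM at those points.

```python
# A minimal sketch of a multi-step document workflow: OCR -> LLM extraction
# -> deterministic validation, with files as the interface between steps.
# The step bodies are hypothetical stand-ins, not a Box or micro1 API.
import json
from pathlib import Path

def ocr_step(doc_path: Path) -> str:
    """Stand-in OCR step: return the document's raw text."""
    return doc_path.read_text()  # real code would run OCR on a scanned file

def llm_extract_step(raw_text: str) -> dict:
    """Stand-in LLM step: extract structured fields from raw text."""
    # Real code would prompt an LLM to return JSON matching a schema.
    return {"vendor": raw_text.split()[0], "total": None}

def validate_step(record: dict) -> list[str]:
    """Deterministic check layer: flag records that need human review."""
    errors = []
    if record.get("total") is None:
        errors.append("missing total")
    return errors

def run_workflow(doc_path: Path, out_path: Path) -> None:
    raw = ocr_step(doc_path)
    record = llm_extract_step(raw)
    errors = validate_step(record)
    # The output lives as a structured file, so a human can audit or edit it.
    out_path.write_text(json.dumps({"record": record, "needs_review": errors}, indent=2))

if __name__ == "__main__":
    src = Path("invoice.txt")
    src.write_text("ACME Corp invoice, total due ...")
    run_workflow(src, Path("invoice.json"))
    print(Path("invoice.json").read_text())
```

Each step is swappable, which is the point of composing existing tools rather than training a custom model from scratch.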
The conversation highlights the critical role of human-led evals in validating domain-specific work, how teams design around non-deterministic AI behavior, and why files are becoming the core interface between humans and agents.
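To make the eval idea concrete, here is a minimal sketch of a human-led eval loop, assuming reviewed outputs are stored as simple records carrying a domain expert's verdict. All names and data are illustrative, not taken from the episode.

```python
# A minimal sketch of a human-led eval loop: domain experts accept or reject
# model outputs, and the accepted rate becomes the workflow's quality signal.
from dataclasses import dataclass

@dataclass
class EvalRecord:
    doc_id: str
    model_output: str
    expert_verdict: bool  # True if the domain expert accepted the output

def accuracy(records: list[EvalRecord]) -> float:
    """Share of outputs a domain expert accepted."""
    return sum(r.expert_verdict for r in records) / len(records)

reviews = [
    EvalRecord("doc-001", "total: $1,240", True),
    EvalRecord("doc-002", "total: $98", False),  # expert caught a misread
    EvalRecord("doc-003", "total: $310", True),
]
print(f"expert-accepted rate: {accuracy(reviews):.0%}")
```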
Key Moments:
- Start with workflows, not models: Enterprises move faster by building MVP workflows and iterating, instead of investing in custom model training upfront.
- Human-led evals add real value: Domain experts review critical steps, especially in complex workflows where accuracy matters most.
- Multi-step agents over one-shot answers: Real enterprise impact comes from chaining tasks together, not relying on single AI responses.
- Design for non-determinism: Reliable systems use checks, retries, and validation layers to handle variability in AI outputs (see the sketch after this list).
- Files become the interface: Inputs, outputs, and instructions live as structured files, making workflows easier to manage and audit.
- From prototype to production: Teams close gaps with evals and human feedback, turning early experiments into scalable, production-ready systems.
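Here is a minimal sketch of the retry-and-validate pattern from the non-determinism point above. `flaky_llm_step` is a hypothetical stand-in for a non-deterministic LLM call; the pattern is what matters: validate every output, retry until a check passes, and escalate to a human instead of failing silently.

```python
# A minimal sketch of retry-and-validate around a non-deterministic step.
import random

def flaky_llm_step(text: str) -> dict:
    # Stand-in for a non-deterministic LLM call.
    return {"total": random.choice([None, 42.0])}

def is_valid(record: dict) -> bool:
    return isinstance(record.get("total"), float)

def run_with_retries(text: str, max_attempts: int = 3) -> dict:
    for _ in range(max_attempts):
        record = flaky_llm_step(text)
        if is_valid(record):
            return record
    # Retries exhausted: route to human review rather than return bad data.
    return {"total": None, "needs_review": True}

print(run_with_retries("ACME invoice ..."))
```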
Jump into the conversation:
(00:00) Why enterprises should focus on composing AI workflows vs. training custom models
(00:33) Introduction of Andrew Maas from micro1
(02:54) What micro1 does and the role of human experts in AI systems
(04:13) Rise of multi-step agentic workflows and domain-specific AI capabilities
(04:39) Shift from unsupervised training to human-led evals in enterprise AI
(05:20) OCR and document AI as foundational building blocks in workflows
(06:42) Combining OCR, LLMs, and human review in real enterprise pipelines
(07:48) Limits of current models and the need for deeper domain expertise
(08:12) One-shot vs. multi-step AI reasoning and why it matters
(09:39) Difference between model knowledge and real-world conversational performance
(10:07) Composing multiple LLM steps to create reliable enterprise workflows
(11:11) Natural language instructions as the new interface for AI agents
(11:57) MVP-first approach to building AI workflows instead of training models
(13:22) Variability in LLM outputs and concerns about enterprise reliability
(14:18) Designing around non-determinism with checks, retries, and testing
(16:21) Measuring outcomes instead of worrying about how AI reaches them
(16:56) How to evaluate AI systems using meaningful “windows” into workflows
(18:35) Where human experts add value in validating domain-specific outputs
(18:54) Files as the new interface between humans and AI agents
(20:26) Structuring workflows with clear inputs, outputs, and editable artifacts
(21:23) How enterprises engage micro1 to move from prototype to production
(22:24) Using evals and human review to improve AI systems in production
(23:18) Real-world example of human-in-the-loop workflows in audits and compliance
(23:51) Evolution of regulated workflows with AI-assisted decision-making
(26:30) Experiment and challenge assumptions about AI limits