Official Video Pickup

AAPL   ADBE   ADSK   AIG   AMGN   AMZN   BABA   BAC   BL   BOX   C   CHGG   CLDR   COKE   COUP   CRM   CROX   DDOG   DELL   DIS   DOCU   DOMO   ESTC   F   FIVN   GILD   GRUB   GS   GSK   H   HD   HON   HPE   HSBC   IBM   INST   INTC   INTU   IRBT   JCOM   JNJ   JPM   LLY   LMT   M   MA   MCD   MDB   MGM   MMM   MSFT   MSI   NCR   NEM   NEWR   NFLX   NKE   NOW   NTNX   NVDA   NYT   OKTA   ORCL   PD   PG   PLAN   PS   RHT   RNG   SAP   SBUX   SHOP   SMAR   SPLK   SQ   TDOC   TEAM   TSLA   TWOU   TWTR   TXN   UA   UAL   UL   UTX   V   VEEV   VZ   WDAY   WFC   WK   WMT   WORK   YELP   ZEN   ZM   ZS   ZUO  

  Official video & related videos [Pop Goes the Stack | The Impact of Inference: Performance | AI]

Traditional performance meant deterministic response times: identical inputs produced near-identical execution times, optimizations reduced latency, and variance was minimal. Insert #AI inference, and performance engineering has been flipped upside down. Latency now depends on model size, tokenization, batching strategies, and generation settings, so identical inputs may produce different response times. The new dimension of performance is variance: not just how fast the system responds, but how response times distribute across requests, how many tokens per second are processed, and how efficient each response is relative to cost.

In this episode of #F5's Pop Goes the Stack, Lori MacVittie, Joel Moses, and special guest Nina Forsyth dive into the impact of AI inference on measuring performance. It's time to rethink performance observability, with a focus on infrastructure optimization, agent-to-agent interactions, and robust measurement techniques. Listen in to learn how traditional approaches must evolve to manage this multi-dimensional puzzle.

Chapters:
00:00 Welcome to Pop Goes the Stack
00:36 Once upon a time: Deterministic performance
02:27 Inference and the shift to non-deterministic performance
03:42 The human factor in AI latency tolerance
05:30 AI system variability: Performance measurement and cost optimization challenges
07:01 Optimizing for non-deterministic AI
08:51 Measuring AI performance: New metrics
10:41 Observability is key
13:37 Does performance management need a multi-layered infrastructure?
16:47 Key takeaways: New performance definition, start with infrastructure

Find out more in the blog, "How AI inference changes application delivery": https://go.f5.net/w9barr3j

Learn how you can stay ahead of the curve and keep your stack whole with additional insights on app security, multicloud, AI, and emerging tech: https://go.f5.net/ieoxk0fj

More about F5: https://go.f5.net/4c0zuulu
Read our blog: https://go.f5.net/sw5ktzmn
Follow us on LinkedIn: https://go.f5.net/hzhd02ai
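The variance-centric view described above — looking at how response times distribute across requests and how many tokens per second each request yields, rather than at a single average — can be made concrete with a small sketch. This is an illustration only, not the episode's own tooling; the record fields (`latency_s`, `output_tokens`) and the `inference_metrics` helper are hypothetical names chosen for the example.

```python
import statistics

def inference_metrics(requests):
    """Summarize per-request latency and throughput for AI inference.

    `requests` is a list of dicts with hypothetical fields:
    latency_s (wall-clock seconds) and output_tokens (tokens generated).
    """
    latencies = sorted(r["latency_s"] for r in requests)
    n = len(latencies)
    # Simple nearest-rank percentile over the sorted latencies.
    pct = lambda p: latencies[min(n - 1, int(p / 100 * n))]
    return {
        # Report the distribution, not just an average: tail latency matters.
        "p50_s": pct(50),
        "p95_s": pct(95),
        "p99_s": pct(99),
        # Spread of latencies captures how non-deterministic the system is.
        "stdev_s": statistics.stdev(latencies) if n > 1 else 0.0,
        # Per-request throughput: tokens generated per second.
        "tokens_per_s": [r["output_tokens"] / r["latency_s"] for r in requests],
    }

# Identical prompt class, very different latencies — the variance is the point.
sample = [
    {"latency_s": 0.8, "output_tokens": 120},
    {"latency_s": 1.4, "output_tokens": 300},
    {"latency_s": 3.9, "output_tokens": 900},
]
print(inference_metrics(sample))
```

In a deterministic system the p50 and p99 would be nearly identical; in an inference workload the gap between them, and the standard deviation, become first-class performance signals.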