Official Video Pickup


  Official video & related videos [Pop Goes the Stack | Logging for Giants: High-Speed Telemetry in an AI World | Observability]

When OpenAI discovered they could reclaim 30,000 CPU cores simply by tuning the log-forwarding agent Fluent Bit (disabling a single feature that consumed roughly 35% of one server's cycles), something large and systemic became undeniable. In this episode of Pop Goes the Stack, #F5's Lori MacVittie and Joel Moses are joined by observability expert Chris Hain to break down the hidden cost of telemetry in #AI-heavy architectures, why "logging is free" is a myth, and how modern systems demand a new breed of high-speed telemetry planes. Listen in to learn how Fluent Bit's file-watching overhead compounded at scale, why profiling matters, and what enterprises can do now to control AI #observability costs.

Chapters:
00:00 Welcome to Pop Goes the Stack
00:19 The most expensive lie: "logging is free"
02:25 Fluent Bit at scale: DaemonSet on every node (and the bill)
03:47 Inotify, memory churn, and the surprise CPU tax
04:43 Did turning off the feature actually solve the problem?
05:48 Finding it with profiling: System calls don't lie
07:44 AI multiplies telemetry: More data, more destinations
09:02 Modern reality: Observability can cost more than the app
10:07 LLM logs aren't "log lines" anymore (volume + metadata)
12:12 Reducing contention: Sampling, eBPF, buffering, offload
18:31 Enterprise key takeaways: Follow the money, drill down, optimize

Read more about OpenAI's CPU recovery: https://go.f5.net/dvl5fk5d
Learn how you can stay ahead of the curve and keep your stack whole with additional insights on app security, multicloud, AI, and emerging tech: https://go.f5.net/7cnnrgoi
More about F5: https://go.f5.net/twngukhv
Read our blog: https://go.f5.net/y9sdms2p
Follow us on LinkedIn: https://go.f5.net/4itvvip6
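For readers who want to experiment, the file-watching behavior discussed in the episode corresponds to a documented setting on Fluent Bit's `tail` input: `Inotify_Watcher` toggles between inotify-based watching and periodic stat-based polling. As a rough sketch only (the paths, flush interval, and buffer limit below are illustrative assumptions, not values from the episode or from OpenAI's tuning), a classic-mode config that disables inotify watching and caps input buffering might look like:

```ini
# Illustrative Fluent Bit classic-mode config; paths and limits are examples.
[SERVICE]
    Flush         5          # ship buffered records every 5 seconds
    Log_Level     info

[INPUT]
    Name             tail
    Path             /var/log/containers/*.log
    # Use periodic stat() polling instead of inotify watches; at very
    # high file counts, inotify bookkeeping can dominate agent CPU.
    Inotify_Watcher  false
    # Cap in-memory buffering for this input to limit memory churn.
    Mem_Buf_Limit    50MB

[OUTPUT]
    Name   stdout
    Match  *
```

Whether stat polling or inotify is cheaper depends on file counts and rotation rates, so a change like this should be validated with profiling, for example by comparing the agent's system-call counts before and after, as the episode emphasizes.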