Official Video Picks


Official video & related videos: [vLLM Office Hours #37] InferenceMAX & vLLM - November 13, 2025

We dig into InferenceMAX, an open-source continuous inference benchmarking framework that sweeps popular LLMs across hardware and software stacks to track real-world throughput, latency, and cost efficiency.

We start with our regular vLLM project update from Michael Goin, covering recent changes across the project and what they mean for production deployments. Then Kimbo Chen and Cam Quilici from SemiAnalysis break down InferenceMAX: how it works, what metrics it tracks, and how you can use it to compare and tune LLM performance across different accelerators and inference stacks.

If you would like to join future sessions live on Google Meet and take part in the discussion, request a calendar invite here: https://red.ht/office-hours
Session slides: https://docs.google.com/presentation/d/1DQDA_c8x89BLoYEn_rZt0IQi-4SGLHJR7m8oidtln0Q
Explore vLLM on GitHub: https://github.com/vllm-project/vllm

Timestamps:
0:00 Intro
3:00 What's new in the community
18:07 InferenceMAX deep dive
41:35 InferenceMAX UI demo
44:20 Q&A
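To make the metric categories above concrete, here is a minimal, hypothetical sketch (not InferenceMAX's actual code; the class, field names, and figures are all invented for illustration) of how throughput, latency, and cost efficiency can be derived from one benchmark run:

```python
# Hypothetical sketch of the three metric families a continuous inference
# benchmark tracks: throughput (tokens/s), request latency, and cost per
# million tokens. All names and numbers here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class BenchmarkRun:
    total_output_tokens: int          # tokens generated over the whole run
    wall_clock_seconds: float         # total run duration
    request_latencies: list[float]    # end-to-end latency per request (s)
    gpu_hourly_cost_usd: float        # assumed accelerator rental price

    def throughput_tok_per_s(self) -> float:
        """Aggregate generation throughput across the run."""
        return self.total_output_tokens / self.wall_clock_seconds

    def p50_latency_s(self) -> float:
        """Median end-to-end request latency."""
        ordered = sorted(self.request_latencies)
        return ordered[len(ordered) // 2]

    def cost_per_million_tokens_usd(self) -> float:
        """Accelerator cost normalized per million generated tokens."""
        hours = self.wall_clock_seconds / 3600
        return (self.gpu_hourly_cost_usd * hours) / self.total_output_tokens * 1_000_000


# Illustrative run: 1.2M tokens in 10 minutes on a $2.50/hr accelerator.
run = BenchmarkRun(
    total_output_tokens=1_200_000,
    wall_clock_seconds=600.0,
    request_latencies=[0.8, 1.1, 1.4, 2.0, 0.9],
    gpu_hourly_cost_usd=2.50,
)
print(run.throughput_tok_per_s())         # 2000.0 tokens/s
print(run.p50_latency_s())                # 1.1 s
print(run.cost_per_million_tokens_usd())  # ~$0.35 per million tokens
```

Comparing these normalized numbers across accelerators and inference stacks is what lets a sweep-style benchmark rank configurations on cost efficiency rather than raw speed alone.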