Official Video Pickup


  Official Video & Related Videos: [vLLM Office Hours #47] LLM Compressor Update - April 16, 2026

In this session, we covered the latest updates from the vLLM ecosystem, followed by a deep dive into recent improvements in LLM Compressor, including Distributed Data Parallel (DDP) support and enhanced offloading capabilities.

vLLM core committer Michael Goin kicked things off with a project update, sharing the latest developments across the community and ongoing work to improve performance and usability. We then heard from Red Hat AI’s Model Optimization team, who walked through what’s new in LLM Compressor, an open source library within the vLLM ecosystem designed for accurate quantization of LLMs to enable faster and more efficient inference. The session covered recent feature additions, including DDP support and improved offloading, along with practical guidance on how the community can take advantage of these capabilities.

Timestamps:
00:00 – Introduction and welcome
01:58 – vLLM community update
08:39 – Overview of LLM Compressor, DDP support and distributed workflows, enhanced offloading capabilities
34:19 – Q&A and discussion

Session slides: https://docs.google.com/presentation/d/1qTRzOXYJVRUamSkJtXq7Cr96ISP8zjSg_flIX_r392s
Explore and join our bi-weekly vLLM office hours: https://red.ht/office-hours