Official Video Pickup


  Official video & related videos: [vLLM Office Hours #46] Intro to vLLM-Omni - April 9, 2026

In this session, we covered the latest updates from the vLLM ecosystem, followed by a deep dive into vLLM-Omni, a new effort to make omni-modal serving easy, fast, and accessible for everyone. A vLLM core committer kicked things off with a project update, highlighting recent developments across the community and core system improvements released in vLLM v0.17.0, v0.18.0, and v0.19.0. We then heard from Gao Han, maintainer of vLLM-Omni, who introduced a unified approach to multimodal comprehension and generation within the vLLM project. vLLM-Omni extends vLLM's high-performance inference engine to support text, image, video, and audio workloads, covering both autoregressive and diffusion-based architectures. The session explored how the project works today, how to get started, and what's ahead for omni-modality serving in the vLLM ecosystem.

Timestamps:
00:00 – Introduction and welcome
03:24 – vLLM community update
13:03 – What's new in vLLM v0.17.0
16:27 – What's new in vLLM v0.18.0
18:49 – What's new in vLLM v0.19.0
22:05 – Intro to vLLM-Omni: what it is, how it works, how to get started, and more
1:10:25 – Q&A and discussion

Session slides: https://docs.google.com/presentation/d/1UcbvMcMM2SEr82OJLyALa43mwQy_uQmGooQEZRIEhmc/
Explore and join our bi-weekly vLLM office hours: https://red.ht/office-hours
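For context on the "how to get started" portion, here is a minimal sketch of vLLM's standard serving workflow, which vLLM-Omni extends: install the engine from PyPI and launch an OpenAI-compatible server. The model name below is an illustrative placeholder, and vLLM-Omni's own install package and entry point may differ from plain vLLM, so treat this as a sketch rather than the session's exact instructions.

```shell
# Install the core vLLM engine from PyPI.
# (vLLM-Omni builds on this engine; its own package name may differ --
# check the project docs or the session slides linked above.)
pip install vllm

# Launch an OpenAI-compatible server for a model.
# "Qwen/Qwen2.5-1.5B-Instruct" is an illustrative placeholder model.
vllm serve Qwen/Qwen2.5-1.5B-Instruct

# Query the server (default port 8000) using the OpenAI-style completions API.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-1.5B-Instruct", "prompt": "Hello", "max_tokens": 16}'
```

This is environment setup rather than an executable example, so treat it as a starting point and consult the vLLM documentation for the modality-specific (image, video, audio) options discussed in the session.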