Featured Official Videos


  Official Video & Related Videos [AI Model Inference with Red Hat AI | Red Hat Explains]

Everyone talks about training AI models, but what about inference? Cedric Clyburn, Senior Developer Advocate at Red Hat, pulls back the curtain on why AI model inference is often the real culprit behind performance bottlenecks and budget overruns, and how to tackle it.

In this video, Cedric covers:
● Why Inference is Costly: Understand common issues like using oversized models for specific tasks and suboptimal infrastructure that can inflate costs and slow down your AI.
● Quick Wins for Optimization: Quantization, batching, and caching techniques you can apply right away (illustrative code sketches of each follow below).
● Deeper Optimization Techniques: Explore methods like model pruning and distillation to further streamline your models (a toy distillation loop also follows below).
● A Practical Roadmap: Get a 4-step plan to measure, identify bottlenecks, implement quick wins, and plan larger optimizations for your AI inference workloads.

Understand how Red Hat AI's flexible, open source-powered deployment options and enterprise support can help streamline your journey to efficient and cost-effective AI inference. Stop overspending on AI inference and start optimizing for speed and efficiency.

Timestamps:
0:00 - The Real AI Challenge: Inference Costs & Performance
1:21 - Quick Win 1: Quantization (Reduce model precision)
2:02 - Quick Win 2: Proper Batching (Efficient processing with vLLM)
2:23 - Quick Win 3: Caching Strategies (Eliminate redundant computations)
2:35 - Deeper Dive: Model Pruning & Distillation
3:02 - Smart Hardware Optimization
3:11 - A Practical Roadmap to Optimize Inference
3:43 - Benefits of Optimization & How Red Hat AI Can Help

Learn more about optimizing AI with Red Hat:
🧠 Explore Red Hat AI solutions → https://www.redhat.com/en/products/ai
📖 Learn about vLLM for efficient LLM serving (used with OpenShift AI) → https://www.redhat.com/en/topics/ai/what-is-vllm
🛠️ Discover model quantization with Red Hat (LLM Compressor) → https://developers.redhat.com/articles/2025/02/19/multimodal-model-quantization-support-through-llm-compressor
📄 Red Hat OpenShift AI for model serving → https://www.redhat.com/en/products/ai/openshift-ai

#RedHat #AIinference #LLMOps #MLOps #RedHatAI #ModelOptimization #Quantization #AIperformance #AICost #TechExplained #OpenSourceAI
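
The first quick win named in the video is quantization, i.e. reducing model precision. Red Hat's own tooling for this is LLM Compressor (linked above); as a minimal, framework-agnostic sketch of the underlying idea, here is PyTorch's dynamic int8 quantization applied to a toy stand-in model (the architecture and sizes are invented for the example):

    import torch
    import torch.nn as nn

    # Toy stand-in for a much larger network; quantization pays off most on
    # the large linear layers that dominate transformer inference.
    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

    # Dynamic int8 quantization: weights are stored as int8 and activations
    # are quantized on the fly at inference time. No retraining required.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 512)
    print(quantized(x).shape)  # same interface, roughly 4x smaller weights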
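
For the second quick win, the video calls out proper batching with vLLM. A minimal sketch of offline batched generation with vLLM's Python API ("facebook/opt-125m" is just a small public model chosen for illustration):

    from vllm import LLM, SamplingParams

    # vLLM batches these prompts internally (continuous batching), keeping
    # the accelerator busy instead of serving requests one at a time.
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.7, max_tokens=32)

    prompts = [
        "Explain AI inference in one sentence.",
        "What does quantization do to a model?",
        "Why should inference requests be batched?",
    ]
    for out in llm.generate(prompts, params):
        print(out.outputs[0].text.strip())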
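
The third quick win is caching to eliminate redundant computation. One concrete form of this in vLLM is prefix caching, which reuses the KV cache computed for a shared prompt prefix (such as a common system prompt) across requests. A sketch, with the shared prefix and questions invented for the example; the savings grow with the length of the shared prefix:

    from vllm import LLM, SamplingParams

    # enable_prefix_caching lets vLLM reuse the KV-cache blocks computed for
    # a shared prompt prefix across requests, skipping repeated prefill work.
    llm = LLM(model="facebook/opt-125m", enable_prefix_caching=True)

    SYSTEM = "You are a concise assistant answering questions about AI inference. "
    questions = ["What is batching?", "What is pruning?", "What is caching?"]

    params = SamplingParams(max_tokens=32)
    for out in llm.generate([SYSTEM + q for q in questions], params):
        print(out.outputs[0].text.strip())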
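
Among the deeper techniques, the video mentions distillation: training a small "student" model to mimic a larger "teacher". A toy sketch of the standard softened-softmax (Hinton-style) distillation loss, with invented architectures and random data standing in for a real dataset:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # The student is far smaller than the teacher but learns to match the
    # teacher's softened output distribution rather than hard labels.
    teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
    student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    T = 2.0  # temperature: softens the teacher's distribution

    teacher.eval()
    for step in range(200):
        x = torch.randn(64, 32)  # random data standing in for real inputs
        with torch.no_grad():
            soft_targets = F.softmax(teacher(x) / T, dim=-1)
        loss = F.kl_div(
            F.log_softmax(student(x) / T, dim=-1),
            soft_targets,
            reduction="batchmean",
        ) * (T * T)  # T^2 rescales gradients, per the original paper
        opt.zero_grad()
        loss.backward()
        opt.step()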