Official Video Pickup


  Official Video & Related Videos [Who Holds Power When AI Compresses Decision Time?]

What if the choices we make about AI security today determine who holds power tomorrow? Erica L. Shoemate brings over a decade of experience from the FBI and U.S. Intelligence Community, followed by senior leadership roles at Twitter, Amazon, and Meta shaping AI policy, cyber strategy, and regulatory readiness. As founder of Lead with Intent Strategy, she operates at the intersection where national security, emerging technology, and human-centered design collide. In this episode, David Moulton and Erica explore how AI is fundamentally reshaping the security landscape, from compressed decision-making timelines and asymmetric threat capabilities to the erosion of trust that creates strategic vulnerabilities.

You'll learn:
- Why AI governance can't be an afterthought, and how building policy alongside innovation creates competitive advantage, not friction
- How the "new security order" is lowering disruption costs while amplifying ambiguity, enabling smaller actors to generate outsized impact
- Why human-centered design isn't about empathy as a value; it's about operational clarity that prevents cognitive overload from becoming a security risk
- The framework for balancing innovation and restraint: treating policy as guardrails, not brakes, while red-teaming AI systems before deployment
- How trust functions as a national security asset, and why overconfidence is the fastest way to lose it

Erica brings a rare perspective from both classified intelligence operations and private-sector AI deployment at scale. She challenges the assumption that speed and security are trade-offs, arguing instead that ethical AI systems are more durable, more resilient, and ultimately more profitable than those built without accountability. With AI compressing the timeline from detection to decision to response, the margin for error has never been smaller.

This conversation reveals why the choices security leaders make right now, about governance, diversity, transparency, and human oversight, will define who is protected, who is exposed, and who maintains strategic advantage in an AI-driven future.

This episode is essential listening if you're:
- A CISO or security leader deploying AI-enabled systems who needs to balance innovation velocity with governance rigor
- A policy professional struggling to keep pace with AI deployment timelines and seeking frameworks that enable rather than block
- Anyone responsible for building trust in AI systems, whether with users, regulators, or boards, who recognizes transparency as a competitive advantage

Full episode at https://www.paloaltonetworks.com/podcasts/threat-vector