The 'Social Driving' Trust Model
by Shweta Shrivastava • Senior Director of Product Management at Waymo
Shweta leads product management at Waymo, focusing on autonomous driving behavior, simulation tools, and ride-hailing commercialization. Previously, she was CPO at Nauto and held leadership roles at AWS and Cisco.
🎙️ Episode Context
Shweta discusses the unique challenges of product management for autonomous vehicles, contrasting 'move fast and break things' with safety-critical systems. She details how Waymo builds trust through 'natural' driving behaviors and shares leadership lessons on communication efficiency and career growth from her time at Amazon and Waymo.
Problem It Solves
Overcoming user fear and the 'uncanny valley' effect when interacting with autonomous agents or robots.
Framework Overview
Building trust by programming AI to adhere to social norms, not just strict rules. This involves creating digital 'body language' to communicate intent and ensuring the ride feels natural (e.g., not overly robotic).
🧠 Framework Structure
Digital Body Language: Use subtle movements (e.g., gently inching forward) to communicate the vehicle's intent to other road users.
Social Norm Adherence: Understand contextual driving conventions, not just formal traffic rules.
Naturalness over Strict Rule Following: Prioritize behavior that feels human and predictable over rigid rule compliance.
Transparency: Visualize what the AI 'sees' so riders understand its decisions.
When to Use
Designing HMI (Human-Machine Interfaces), robotics, or AI agents that interact physically or conversationally with humans.
Common Mistakes
Designing a system that follows rules perfectly but behaves unpredictably or aggressively to humans (being 'technically right' but 'socially wrong').
Real World Example
Waymo cars learned to slow down slightly when going downhill, even when already below the speed limit, because human drivers naturally do this and it makes the ride feel safer and more comfortable.
"If there's cars coming in that lane... the car just kind of subtly was inching its way out, communicating through this interesting body language thing."
— Shweta Shrivastava