Autonomous driving has been “five years away” for the past decade. But beyond the headlines and the hype cycles, something genuinely interesting is happening in the field.

The perception problem is not just technical

When we talk about self-driving cars, we often focus on the sensors: LiDAR, cameras, radar. But the real challenge isn’t collecting data — it’s understanding context. A child’s ball rolling into the street means something very different from a plastic bag blowing across the road. To a sensor they can look nearly identical, yet they demand completely different responses.
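To make the point concrete, here is a toy sketch of context-dependent response selection. Everything in it — the class names, the context flags, the response labels — is hypothetical and purely illustrative, not drawn from any real driving stack:

```python
# Illustrative only: the same detection ("small object entering lane") maps
# to different responses depending on inferred context. All names and rules
# here are hypothetical.

def respond(object_class, context):
    """Context-aware response: identical-looking detections diverge here."""
    if object_class == "ball" and context.get("near_playground"):
        # A rolling ball often precedes a child chasing it: brake hard.
        return "emergency_brake"
    if object_class == "plastic_bag":
        # Drivable debris: maintain course, no harsh intervention.
        return "continue"
    # Ambiguous cases default to a cautious posture.
    return "slow_and_monitor"

print(respond("ball", {"near_playground": True}))  # → emergency_brake
print(respond("plastic_bag", {}))                  # → continue
```

The interesting part is not the `if` statements themselves but where the context comes from — that inference is the hard, unsolved piece.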

This is where AI meets philosophy. What does it mean for a machine to understand a scene? And how do we build systems that are safe enough to trust with human lives?

What excites me

I’m particularly fascinated by the intersection of learned models and classical planning. End-to-end learning is powerful but opaque. Rule-based systems are interpretable but brittle. The future, I believe, lives in the space between — systems that can learn from data while maintaining the safety guarantees we need.
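One common shape this "space between" takes is a learned model proposing candidate plans while a small, interpretable rule layer vetoes anything that violates hard constraints. The sketch below is a minimal, assumption-laden version of that pattern — the constraints, thresholds, and plan representation are all invented for illustration:

```python
# Hypothetical sketch: learned proposals filtered by rule-based safety checks.
# The (speed, gap) plan representation and the numeric limits are illustrative.

SPEED_LIMIT = 15.0  # m/s, example hard constraint
MIN_GAP = 5.0       # m, minimum required gap to the vehicle ahead

def is_safe(plan):
    """Rule-based check: small, auditable, verifiable constraints."""
    speed, gap = plan
    return speed <= SPEED_LIMIT and gap >= MIN_GAP

def select_plan(candidates):
    """Keep only candidates that pass the rules; fall back to stopping."""
    safe = [p for p in candidates if is_safe(p)]
    if not safe:
        return (0.0, float("inf"))  # conservative fallback: stop
    # Among safe plans, prefer the fastest (a stand-in for a learned score).
    return max(safe, key=lambda p: p[0])

# `candidates` stands in for the output of a learned planner.
candidates = [(20.0, 10.0), (12.0, 8.0), (14.0, 3.0)]
print(select_plan(candidates))  # → (12.0, 8.0)
```

The learned component can be arbitrarily opaque because the safety argument rests entirely on the filter, which is small enough to inspect and test exhaustively.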

The human factor

Perhaps the most underappreciated aspect of autonomous driving is the human element. How do we build trust? How do we design handoff mechanisms between human and machine? These aren’t just engineering questions — they’re deeply human ones.

More on this in future posts.

— Zhipeng