Trust at Top Speed - AI Racing Coach powered by Gemini 🏁

"What you all have done in a few days, we haven’t seen in 40 years!"

That was the genuine, shell-shocked reaction from Matt Busby, CEO of Thunderhill Raceway Park. A huge data nerd himself, Matt couldn't pull himself away from the dashboard the team was coding in real time from a camper RV in the paddock.

He was staring at our app. Just a few days earlier, it had been an idea. Now it was a digital race coach capable of whispering physics-driven mentorship through a driver's headset, spotting errors with the sharp, predictive eyes of a seasoned veteran.


The spark came from Ajeet Mirwani, a passionate race enthusiast who brought his two worlds together. As our lead for the Google Developer Experts program, he challenged us with something that sounded impossible: The High-Velocity AI Field Test.

The prompt wasn't to "build a chatbot." It was to build a verifiable, real-time intelligence system that could survive in a high-stakes environment where cloud latency could be a safety risk. To make it happen, he turned his BMW E46 M3 into a rolling "Data Capture Unit." We picked the race track as our lab and called the whole experiment the "Data Crucible."

I'll be honest: I'm an ML engineer, and I've never worked with hardware before. My day job is training recommendation models for personalization and monetization, a world far removed from the physical constraints of a racecar. I'm far more at home with stochastic gradient descent in a loss landscape than with optimizing a friction circle at the limit of traction. But taking AI out of sterile cloud notebooks and into a vibrating, 100-degree cockpit at 150 mph was exactly why this challenge mattered!

Trust is Built on Ground Truth 🏎️

Early on, we hit a fundamental limitation of Large Language Models: in high-velocity physical systems, standard probabilistic outputs are a safety hazard. You can't ask a model to "innovate" a racing line at 150 mph; a single incorrect token isn't just a typo, it's a danger to the driver.

We realized trust had to be anchored in Ground Truth. We shifted our objective from "AI Innovation" to "Expert Scaling." To create a deterministic baseline, we needed a human benchmark. Huge thanks to Anthony Zwain from Edge Motorworks for laying down a 1:57 reference lap on Thunderhill's East Loop!

This became our "Golden Lap." Instead of asking the AI to guess the physics of a corner, we engineered a system that calculated the precise physics-delta between the user and the pro driver. By using Gemini 3.0 to codify the expert coach-to-driver interaction, we were able to extract pedagogical patterns rather than just raw telemetry.

This created a three-layer verification system: any AI advice that deviated significantly from the Golden Lap’s high-fidelity physics profile was suppressed by hard-coded safety gates. We weren't just prompting a model; we were scaling verified human intuition through a Human-in-the-Loop architecture.
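To make that gate concrete, here is a minimal sketch of the idea (the types, field names, and thresholds are illustrative, not our production code): look up the Golden Lap point closest to the driver's current track position, compute the delta, and drop any advice that contradicts the reference physics.

```typescript
// Sketch of the Golden Lap safety gate. Field names and thresholds are
// illustrative; the real profile comes from Anthony's 1:57 reference lap.
interface TelemetrySample {
  distanceM: number; // distance along the lap, in meters
  speedKph: number;
}

interface GoldenLapPoint {
  distanceM: number;
  speedKph: number;
  braking: boolean; // was the pro on the brakes at this point?
}

// Find the reference point closest to the driver's current track position
// (assumes the Golden Lap profile is non-empty).
function nearestReference(golden: GoldenLapPoint[], distanceM: number): GoldenLapPoint {
  return golden.reduce((best, p) =>
    Math.abs(p.distanceM - distanceM) < Math.abs(best.distanceM - distanceM) ? p : best
  );
}

// Hard-coded gate: advice that contradicts the Golden Lap physics is dropped.
function gateAdvice(
  advice: "PUSH" | "BRAKE_LATER" | "SMOOTH_RELEASE",
  sample: TelemetrySample,
  golden: GoldenLapPoint[]
): string | null {
  const ref = nearestReference(golden, sample.distanceM);
  const speedDelta = sample.speedKph - ref.speedKph; // positive = faster than the pro

  // Never say "PUSH" where the pro was braking and the driver is already faster.
  if (advice === "PUSH" && ref.braking && speedDelta > 0) return null;

  // Only suggest braking later if the driver is clearly slower than the reference.
  if (advice === "BRAKE_LATER" && speedDelta > -5) return null;

  return advice;
}
```

The cloud strategist described below works on essentially the same delta, just aggregated per sector across a whole session rather than sample by sample.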

The Architecture: Splitting the Brain 🧠

We quickly found that one model couldn't do it all. You can't have a deep reasoning model trying to shout "BRAKE!" in 50 milliseconds. It's too slow! So, we architected a Split-Brain System, treating the car like a biological organism with reflexes and reasoning.

  • The Reflexes <> Gemini Nano. For the "Hot Path" (the safety-critical alerts), we went to the Edge. We used Chrome's experimental window.ai API to run Gemini Nano directly in the browser.
    • To stop the model from hallucinating, we used Schema-Constrained Generation: we forced it to output strict JSON enums like ["COMMIT", "TRAIL_BRAKE"] (see the first sketch after this list).
    • This cut our latency to ~15ms. No network round-trips. No 5G dead zones. Just pure, local inference.
    • A sub-50ms decision is worthless if the audio synthesis takes 200ms. We bypassed slow browser TTS and used the Web Audio API with a library of pre-cached command tokens (also sketched below). This ensured sub-10ms "decision-to-ear" latency, delivering advice before the driver even touched the brake pedal.
  • The Strategist <> Gemini 3.0 Flash. While Nano watched the corners, Gemini 3.0 Flash watched the lap. Living in the cloud via Cloud Run on GCP, this layer acted as the "Race Engineer." It ingested the data stream (buffered via Redis and Pub/Sub) and looked for patterns over time, like identifying that a driver was consistently braking 0.2s earlier than Anthony's 1:57 reference in Sector 2.
  • Virtual Sensors & Heuristics. When hardware failed us, we used physics. We implemented Virtual Sensors to infer brake pressure from longitudinal and lateral G.
    • But we didn't stop at AI. We implemented a Heuristic Fallback layer, a rule-based system that acts as a hard safety gate. If the AI suggests an action that contradicts the telemetry's "Safe Set" (like "Push" during a heavy braking zone), the heuristics preempt the audio. Both pieces appear in the first sketch after this list.
  • Predictive Geofencing. The best coaching comes before you need it. We defined geofences around every corner using Haversine distance. This allowed the AI to trigger "Feed Forward" advice, "APPROACHING Turn 4: Smooth release", exactly 200 meters before the braking zone (see the Haversine sketch after this list).
    Credits: Vikram Tiwari
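To show what those constraints look like in code, here is a hedged sketch of the hot path (the command names and G thresholds are illustrative): the only thing we ever accept back from Nano is one of a fixed set of enum commands, and a rule-based gate checks that command against the live telemetry before it can reach the driver's ears.

```typescript
// Sketch of the hot-path guardrails: strict enum parsing plus a heuristic
// "Safe Set" gate. Command names and G thresholds are illustrative.
const COMMANDS = ["COMMIT", "TRAIL_BRAKE", "PUSH", "SMOOTH_RELEASE"] as const;
type Command = (typeof COMMANDS)[number];

interface LiveTelemetry {
  longitudinalG: number; // negative under braking
  lateralG: number;
}

// Schema-constrained parsing: anything that isn't exactly one of the enums is discarded.
function parseCommand(rawModelOutput: string): Command | null {
  try {
    const parsed = JSON.parse(rawModelOutput);
    return COMMANDS.includes(parsed?.command) ? (parsed.command as Command) : null;
  } catch {
    return null; // malformed JSON: stay silent rather than guess
  }
}

// Virtual sensor: infer heavy straight-line braking from the G trace alone
// (strong longitudinal deceleration with little lateral load).
function isHeavyBraking(t: LiveTelemetry): boolean {
  return t.longitudinalG < -0.6 && Math.abs(t.lateralG) < 0.3;
}

// Heuristic fallback: if the model's suggestion contradicts the telemetry's
// Safe Set, the rule preempts the audio and nothing is spoken.
function gateCommand(cmd: Command | null, t: LiveTelemetry): Command | null {
  if (cmd === null) return null;
  if (cmd === "PUSH" && isHeavyBraking(t)) return null;
  return cmd;
}
```

The bias everywhere is toward silence: at 150 mph, a missing cue is far cheaper than a wrong one.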
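The audio path deserves its own sketch, because it is where real-time systems quietly lose their latency budget. This is a minimal version of the pre-cached approach (the clip URLs below are placeholders): every command phrase is decoded into an AudioBuffer up front, so at decision time playback is just connecting a buffer source to the output.

```typescript
// Sketch of the pre-cached audio path using the Web Audio API.
// Clip URLs are placeholders; each clip is a short pre-recorded command phrase.
const audioCtx = new AudioContext();
const clips = new Map<string, AudioBuffer>();

// Decode every command phrase ahead of time, before the car leaves the pit lane.
async function preloadClips(urls: Record<string, string>): Promise<void> {
  for (const [command, url] of Object.entries(urls)) {
    const bytes = await (await fetch(url)).arrayBuffer();
    clips.set(command, await audioCtx.decodeAudioData(bytes));
  }
}

// Playback is just a buffer source wired to the destination: no TTS, no network,
// so the "decision-to-ear" delay is dominated by the audio hardware itself.
function speak(command: string): void {
  const buffer = clips.get(command);
  if (!buffer) return;
  const source = audioCtx.createBufferSource();
  source.buffer = buffer;
  source.connect(audioCtx.destination);
  source.start();
}

// Usage (placeholder paths):
// await preloadClips({ COMMIT: "/audio/commit.mp3", TRAIL_BRAKE: "/audio/trail_brake.mp3" });
// speak("TRAIL_BRAKE");
```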
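And here is roughly what the geofence trigger looks like (the corner data and the 200-meter radius are parameters, not gospel): Haversine distance from the latest GPS fix to each corner's braking point, with a one-shot flag so every feed-forward cue fires exactly once per lap.

```typescript
// Sketch of the predictive geofence: Haversine distance to each corner's
// braking point, firing a one-shot "Feed Forward" cue ~200 m out.
// Corner coordinates and cue text are illustrative.
interface Corner {
  name: string;
  lat: number;
  lon: number;
  cue: string;        // e.g. "APPROACHING Turn 4: Smooth release"
  triggered: boolean; // reset at the start/finish line each lap
}

const EARTH_RADIUS_M = 6_371_000;
const toRad = (deg: number) => (deg * Math.PI) / 180;

// Great-circle distance between two GPS fixes, in meters.
function haversineM(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Called on every GPS fix: returns a cue the first time the car enters a corner's geofence.
function feedForward(lat: number, lon: number, corners: Corner[], radiusM = 200): string | null {
  for (const corner of corners) {
    if (!corner.triggered && haversineM(lat, lon, corner.lat, corner.lon) <= radiusM) {
      corner.triggered = true;
      return corner.cue;
    }
  }
  return null;
}
```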

The Workflow: From Noob to Pro ⚡️

To be honest, this project would have taken months without the right (AI-powered!) tools. We used the Gemini ecosystem to "vibe code" the entire stack.

  • Discovery Phase: As newcomers to racing, we used Google AI Studio to anchor the entire project. We literally uploaded raw VBOX videos and telemetry logs to make sense of the domain and the data, design our "Coach Personas" (Tony, Rachel, AJ), and prototype the entire app experience in a sandbox before we got anywhere near a motor.
  • Build Phase: In production, Google Antigravity (the agentic IDE) became our pair programmer. It didn't just autocomplete syntax; it helped build the app from the ground up. We could describe physics in plain English - "If speed drops rapidly without lateral G, that's straight-line braking" - and Antigravity would generate the Python inference logic for our virtual sensors. When our hardware failed or the GPS units spat out gibberish, I'd paste the error logs into Antigravity, and it would instantly identify the baud-rate mismatch and rewrite our configuration.

Side Note: Since track laps were limited (and expensive!), we had to get creative. We literally ran around the paddock holding our GPS devices to capture live coordinates for some quick testing. And yes, our first version involved a laptop strapped to the passenger seat in a backpack, but that's a chaotic story for another day.

Lessons from the Asphalt 🏁

The track is a cruel testing ground. Here is what I actually learned when code met physics:

  • Servers Don't Shake. Cars Do.
    • In the cloud, your servers are air-conditioned and mounted to stable racks. On the track, it’s 100 degrees in the cabin, the car is vibrating at 1.5G, and your hardware is gasping for air. We faced a 50% failure rate with our hardware setup because standard USB-C cables wiggled loose, or the browser crashed. It taught us that "robustness" isn't just about code coverage. It's about building for the physical chaos of the real world.
    • We even implemented a "Store & Forward" architecture so that when a cable slipped or a connection dropped, the data wasn't lost; it just caught up later (see the sketch at the end of this list).
  • Augmentation > Automation.
    • We originally thought the "Killer App" was the real-time voice coach. But it turned out the drivers valued the Post-Session Dashboard just as much.
    • Why? Because at 150 mph, you survive. In the paddock, you learn.
    • Combining the magic of real-time voice (cue AirPods) with the deep truth of visual analysis (Dashboard) verified the AI's advice and built real trust.
  • The Era of the System Orchestrator.
    • I learned that being a "Prompt Engineer" isn't enough anymore. You need to be a System Orchestrator. Success came from knowing which intelligence to apply where - Nano for speed, Flash for analysis, and Pro for deep reasoning!
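For the curious, the "Store & Forward" idea mentioned above is simple enough to sketch (the endpoint and batch size are placeholders): every sample goes into a local queue first, an uploader drains the queue whenever it can, and anything that fails to send simply waits until the connection comes back.

```typescript
// Sketch of the Store & Forward uploader: telemetry is queued locally and
// drained opportunistically, so a dropped connection delays data instead of losing it.
// The endpoint URL and batch size are placeholders.
type Sample = Record<string, number>;

const queue: Sample[] = [];

function record(sample: Sample): void {
  queue.push(sample); // always store first; uploading is a separate concern
}

async function flush(endpoint: string, batchSize = 100): Promise<void> {
  while (queue.length > 0) {
    const batch = queue.slice(0, batchSize);
    try {
      const res = await fetch(endpoint, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(batch),
      });
      if (!res.ok) return;           // server unhappy: keep the batch, retry later
      queue.splice(0, batch.length); // acknowledged: drop it from the local store
    } catch {
      return; // cable slipped or 5G died: the data just waits in the queue
    }
  }
}

// e.g. call flush on a timer: setInterval(() => flush("https://example.test/telemetry"), 5_000);
```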
Bottom Line 💡

This experience proved something critical: you can trust AI in high-stakes environments, but only if you move beyond simple prompting and embrace rigorous orchestration.

The "Split-Brain" architecture we validated - Edge AI for reflexes, Cloud AI for strategy, and Human expertise as Ground Truth - applies far beyond the track. Whether it's precision robotics, autonomous drone navigation, industrial floor safety, emergency response coordination, or assistive technologies for the visually impaired, the philosophy is the same.

By weaving together Gemini Nano, Gemini 3.0 Flash, Google Cloud, and Antigravity, we didn't just build a smart app.

We built an AI system we trusted to ride along at 150 mph!

The data is in, the laps are logged, and we've only just glimpsed the future of augmented intelligence. Hammer time!






A huge shoutout to the GDEs and Google teams who made this possible: Ajeet Mirwani, Austin Bennett, Hemanth HM, Jesse Nowlin, Lynn Langit, Margaret Maynard-Reid, Rabimba Karanjai, Sebastian Gomez, Vikram Tiwari, Karen Acosta Parra, Alvaro H., and the entire Developer Ecosystem team ✨ #BuildWithAI #BuildWithGemini #BuildWithAntigravity #AISprint


Written on December 18, 2025