ZF asked us to design a diagnostic platform for automotive technicians, a challenger to AllData, Mitchell 1, and Identifix. I led naming, brand strategy, and end-to-end UX/UI. The real design problem wasn't how to build the product; it was understanding what mechanics actually needed versus what we assumed they needed, before committing to anything.
"Go to the user. Pay the cost." The single best decision on this project was committing to in-person research in three states before designing anything.
South Florida
Round 1. Four shops in Boca Raton and surrounding area. 10.8% cold outreach success rate, against an industry expectation of 1–5%.
Prototype v1 → Interviews → Synthesis → Prototype v2
St. Louis, Missouri
Round 2. ZF partner network shops, a very different repair-shop ecosystem than South Florida. Added the A/B test structure: same flow with and without AI assistance, run in the same session.
4 interviews + A/B structure introduced
Houston, Texas
Round 2 continued. Independent shops, family-owned shops, dealer shops, specialist shops (transmissions, European imports). Confirmed patterns from FL and MO.
4 interviews · 18 total across all rounds
Speed is the only feature that matters
A senior BMW mechanic in St. Louis put it bluntly: when his current system is slow, it's a paycheck problem. Flat-rate mechanics lose money every second waiting for a search result. I designed every search interaction around this single constraint: aggressive autocomplete, minimal path from "car in front of me" to "procedure on screen."
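A minimal sketch of what that constraint implies in code, assuming a hypothetical search endpoint and generic UI callbacks: debounce briefly, and abort in-flight requests so a slow response never paints over a newer keystroke.

```typescript
// Latency-first autocomplete sketch. Names (ProcedureHit, search) are
// illustrative assumptions; the cancel-stale-requests pattern is the point.
interface ProcedureHit {
  id: string;
  title: string;
}

function createAutocomplete(
  search: (query: string, signal: AbortSignal) => Promise<ProcedureHit[]>,
  onResults: (hits: ProcedureHit[]) => void,
  debounceMs = 100, // aggressive on purpose: flat-rate techs pay for idle time
) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let inflight: AbortController | undefined;

  return (query: string) => {
    clearTimeout(timer);
    timer = setTimeout(async () => {
      inflight?.abort(); // a stale response must never arrive after a newer one
      inflight = new AbortController();
      try {
        onResults(await search(query, inflight.signal));
      } catch (err) {
        if ((err as Error).name !== "AbortError") throw err; // ignore cancellations
      }
    }, debounceMs);
  };
}
```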
Interactive wiring diagrams are the killer feature
Came up at almost every shop, unprompted. The exact same feature description every time: click a circuit, the entire path through the schematic highlights. One mechanic said he'd switch systems for that single capability. I added it as a primary feature: research-driven, not assumption-driven.
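Under the hood, the interaction reduces to a connectivity query: model the schematic as a graph of components and wire segments, and on click, collect everything reachable on the same circuit. A sketch, assuming the diagram data exposes an adjacency map (the data shape is my assumption, not the shipped format):

```typescript
// The clicked element's circuit is its connected component in the schematic.
type NodeId = string;

function circuitFrom(
  adjacency: Map<NodeId, NodeId[]>, // node -> electrically connected neighbors
  clicked: NodeId,
): Set<NodeId> {
  const path = new Set<NodeId>([clicked]);
  const queue: NodeId[] = [clicked];
  while (queue.length > 0) {
    const node = queue.shift()!;
    for (const next of adjacency.get(node) ?? []) {
      if (!path.has(next)) {
        path.add(next); // mark before enqueueing to avoid revisits
        queue.push(next);
      }
    }
  }
  return path; // everything in this set gets the highlight style; the rest dims
}
```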
License plate search is underrated
Almost every shop preferred typing a plate to typing a VIN. Shorter, more error-tolerant, readable without leaving the car. VIN is the obvious feature to lead with. Plate is what mechanics actually asked for, so it became the first option in the picker, not the second.
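One way the plate-first default can show up in the picker itself: a single input that accepts either identifier, detects a VIN when it sees one, and otherwise assumes a plate. A sketch with illustrative type names; the VIN pattern itself is standard (17 characters, excluding I, O, and Q):

```typescript
// Plate-first input handling: plate is the default path, VIN the exception.
const VIN_PATTERN = /^[A-HJ-NPR-Z0-9]{17}$/; // VINs never contain I, O, or Q

type VehicleQuery =
  | { kind: "plate"; plate: string; state: string }
  | { kind: "vin"; vin: string };

function classifyInput(raw: string, state: string): VehicleQuery {
  const cleaned = raw.replace(/[\s-]/g, "").toUpperCase();
  return VIN_PATTERN.test(cleaned)
    ? { kind: "vin", vin: cleaned }
    : { kind: "plate", plate: cleaned, state }; // anything else: treat as a plate
}
```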
AI: genuine interest, legitimate concern
Mechanics were intrigued by AI but concerned that junior techs would lean on summaries instead of actually diagnosing. The A/B test surfaced this honestly: asking "what do you think of AI?" gets polite answers; showing the same flow with and without it at 4:45pm on a Friday gets the real answer. Result: AI became a contextual layer woven through search and document pages, not a destination.
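One architectural reading of "contextual layer, not a destination", sketched with assumed types: the AI summary hangs off the document it summarizes, and only renders when it can cite back into that document.

```typescript
// Illustrative data shape, not the shipped schema: AI output attaches to
// the artifact the tech is already viewing instead of living on its own page.
interface ProcedurePage {
  procedureId: string;
  body: string;
  aiLayer?: {
    summary: string;         // collapsed by default; the document leads
    citedSections: string[]; // each claim points back into the procedure
  };
}

// Render the layer only when it can cite the source, nudging a junior tech
// into the procedure rather than around it.
function aiLayerText(page: ProcedurePage): string | null {
  const layer = page.aiLayer;
  if (!layer || layer.citedSections.length === 0) return null;
  return `${layer.summary} (see: ${layer.citedSections.join(", ")})`;
}
```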
Pay structure shapes everything
Hourly-paid mechanics have time to engage with software. Flat-rate mechanics lose money any minute they're not turning a wrench. A product that requires any meaningful learning curve is unsellable to flat-rate shops. This shaped onboarding defaults, search latency targets, and every decision about progressive disclosure throughout the product.
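To make the constraint concrete, here is one illustrative way it could be encoded as product defaults. The values are invented for the sketch, not figures from the project:

```typescript
// Hypothetical encoding of the pay-structure constraint: defaults keyed to
// pay model, so flat-rate shops get zero onboarding friction by default.
type PayModel = "flat-rate" | "hourly";

interface ProductDefaults {
  onboardingTour: "none" | "inline-hints";
  searchLatencyBudgetMs: number;   // what the UI is designed and tested against
  advancedPanelsExpanded: boolean; // progressive-disclosure starting point
}

const DEFAULTS: Record<PayModel, ProductDefaults> = {
  "flat-rate": { onboardingTour: "none", searchLatencyBudgetMs: 300, advancedPanelsExpanded: false },
  "hourly": { onboardingTour: "inline-hints", searchLatencyBudgetMs: 800, advancedPanelsExpanded: true },
};
```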
Design Sprint methodology (Google Ventures)
Naming, brand strategy, competitive analysis
Personas, journey, prototype v1
South Florida · 4 shops
Prototype v2 + A/B structure
St. Louis + Houston · 8 shops
Recommendations → Delivered
The engagement closed after Phase 0. The research and prototype were the deliverable. A discovery phase that concludes with "not yet, and here's why" is a successful discovery phase.
Recruiting the right people is the hardest part of field research
Automotive technicians are difficult to recruit through traditional channels; they're busy, skeptical of software vendors, and don't respond to generic cold emails. I solved this by walking into shops directly with a working prototype and asking for 20 minutes. The 10.8% success rate on cold outreach looks low until you compare it to the 1–5% industry expectation for comparable research. The in-person approach also produced richer data than any remote session would have. You learn things sitting next to someone while a car is on a lift that you simply cannot learn over Zoom.
Designing an A/B test without lab conditions
Running a meaningful A/B test inside uncontrolled shop environments (different mechanics, different levels of experience, different states of distraction) required designing the test carefully to minimize variables I couldn't control. I standardized the task, the script, and the sequence. What I couldn't standardize was the environment, which is actually part of the point: if AI assistance adds value at 4:45pm in a loud shop, that's meaningful. If it only helps in quiet conditions, that matters too.
Presenting research that expanded the scope
The initial brief was narrower than what the research showed was actually needed. Presenting findings that implicitly challenged the project's original framing (and recommending a scope that changed the roadmap) meant the evidence had to be undeniable before the recommendation appeared. I structured the Phase 0 report to let the research speak first and the recommendation follow, rather than leading with the ask and defending it afterward.
On field research
Cold emails to mechanics have terrible response rates. Walking into a shop and demoing a prototype while a car sits on a lift gives you information you simply cannot get any other way. I traveled for it. I'd travel for it again.
On A/B in qualitative sessions
Asking "what do you think of AI?" gets polite answers. Showing two versions of the same task and asking which they'd use at 4:45pm on a Friday gets the real answer. Embedding A/B into qualitative sessions is something I'll apply wherever AI features are on the table.
On credit
I led naming and strategy. Another designer executed the visual identity. Calling that "I did the brand" would be cheap. The split made the work better, and being transparent about it costs me nothing.