Visual Rim Search for AllOEMRims
Most people who walk into a tire shop or onto a wheel website have absolutely no idea what a Hollander number is. They don’t know their bolt pattern. They couldn’t tell you the offset of their current wheels if you put them under oath. What they DO have is a phone in their pocket and a picture of the wheel they need.
So I’ve been building a feature that turns AllOEMRims into a camera-first store. You take a photo of any rim — the one on your car, the one in a friend’s driveway, a picture you saved off Facebook Marketplace — and the site identifies it and shows you matching OEM options from our inventory. No Hollander knowledge required, no spec sheets, no guessing.
The headline of the homepage now reads “Find Your Rim With Your Camera.” That’s the pitch.
Behind the scenes, the upload flow runs your photo through Google’s Cloud Vision API, which is surprisingly good at picking out wheels from real-world photos. If it can pull a Hollander number from visually similar pages on the web, we look that number up directly in our catalog and surface every used and refurbished variant we have in stock. If it can’t identify the exact rim, we fall back to a “best guess” mode that parses out general specs — color, material, approximate diameter, brand hints — and shows you similar rims we DO have. Either way, you never hit a dead end. You always see something useful.
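To make the lookup step concrete, here’s a minimal sketch of the “pull a Hollander number out of the web results” idea. The production code is PHP; this is Python for illustration, and the 4–6 digit pattern and the `guess_hollander` helper are assumptions about what a Hollander-shaped number looks like, not the real catalog rule.

```python
import re

# Assumption: Hollander numbers look like a standalone 4-6 digit run.
HOLLANDER_RE = re.compile(r"\b(\d{4,6})\b")

def guess_hollander(snippets):
    """Scan text snippets (e.g. Vision web-entity descriptions and page
    titles) and return the most frequently seen Hollander-looking number,
    or None if nothing matches."""
    counts = {}
    for text in snippets:
        for match in HOLLANDER_RE.findall(text):
            counts[match] = counts.get(match, 0) + 1
    if not counts:
        return None
    # Prefer the number that shows up on the most similar pages.
    return max(counts, key=counts.get)
```

Voting across all the snippets, rather than trusting the first hit, helps when one similar page happens to mention an unrelated part number.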
The results page is split: a sidebar with your uploaded photo and the specs we identified, and the main area with matching products from our inventory. If we genuinely have nothing close, the page tells you so honestly and offers a couple of paths forward.
One of the more interesting design decisions is that every search, successful or not, is captured as a training row in our database. The uploaded photo, after EXIF stripping for privacy, goes into our S3 bucket alongside our existing product photos. We record the detected Hollander number, the detected specs, the eventual click (which product the user actually picked from the results), and even the eventual purchase if the click became a sale. That’s an enormous labeled training set being built passively, every day, that we’ll use later to train a custom visual search model that doesn’t depend on the Cloud Vision API at all.
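The captured row is simple enough to sketch. This is a hypothetical shape for it, in Python rather than the PHP the site actually runs, and every field name here is my own illustration rather than the real column list.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class VisualSearchRow:
    """Hypothetical schema for one passively captured training row."""
    photo_s3_key: str                   # EXIF-stripped upload in the S3 bucket
    detected_hollander: Optional[str]   # exact match, when Vision found one
    detected_specs: dict                # color, material, diameter, brand hints
    clicked_product_id: Optional[int] = None  # filled in when the user clicks
    purchased: bool = False                   # filled in if the click converts
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

row = VisualSearchRow(
    photo_s3_key="uploads/abc123.jpg",
    detected_hollander="69514",
    detected_specs={"color": "silver", "diameter_in": 17},
)
```

The click and purchase columns start empty and get back-filled later, which is what turns a pile of uploads into labeled training data.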
Progress so far
- ✅ Spec written and committed
- ✅ Backend AJAX pipeline — Turnstile-protected, rate-limited, daily-budget-capped, S3-uploading, Vision-calling, training-data-capturing
- ✅ Animated hero deployed (then disabled in favor of a future Blender video loop)
- ✅ Results page with sidebar, main grid, and zero-result state
- ✅ Privacy policy updated — 12-month retention, EXIF stripping, hashed IPs
- ✅ Deployed to production at alloemrims.com
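The “daily-budget-capped” item above is worth a sketch, since it’s what keeps a burst of uploads from running up the Vision bill. The real implementation is PHP and presumably persists the counter in the database; this Python version, including the class name and the limit, is purely illustrative.

```python
from datetime import date

class DailyBudget:
    """In-memory sketch of a daily cap on paid Vision API calls."""

    def __init__(self, max_calls_per_day: int):
        self.max_calls = max_calls_per_day
        self._day = date.today()
        self._used = 0

    def try_spend(self) -> bool:
        """Count one call against today's budget; False means the caller
        should skip the Vision call and fall back to cheaper matching."""
        today = date.today()
        if today != self._day:          # new day: reset the counter
            self._day, self._used = today, 0
        if self._used >= self.max_calls:
            return False
        self._used += 1
        return True

budget = DailyBudget(max_calls_per_day=2)
```

Once Phase 2 lands, a budget refusal just means the perceptual-hash match becomes the only path for that request, which is the direction the whole pipeline is headed anyway.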
What’s next
Phase 2 is a background process that scans every product photo in our existing catalog and fingerprints it with a perceptual hash — a pure-PHP algorithm that turns each image into a 64-bit signature. Once that runs through the ~3,800 photos in inventory, every user search gets matched against our own pictures FIRST, meaning most queries will never need to call the Vision API at all. Faster, cheaper, more accurate for the rims we actually stock. The Vision call becomes a fallback for unknown wheels rather than the primary path.
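A common way to get a 64-bit signature like the one described above is a difference hash (dHash): shrink the image to a 9×8 grayscale grid, then record whether each pixel is darker or brighter than its right-hand neighbor. The post’s version is pure PHP; this Python sketch assumes the dHash variant specifically, which may differ in detail from what actually runs.

```python
def dhash(pixels):
    """Pack row-wise brightness gradients from a 9x8 grayscale grid
    (8 rows of 9 values, e.g. from a downscaled photo) into a 64-bit int."""
    assert len(pixels) == 8 and all(len(row) == 9 for row in pixels)
    h = 0
    for row in pixels:
        for x in range(8):
            # One bit per adjacent pair: is the left pixel darker?
            h = (h << 1) | (1 if row[x] < row[x + 1] else 0)
    return h

def hamming(a, b):
    """Bit distance between two hashes; small distance = similar images."""
    return bin(a ^ b).count("1")

# Two extreme grids: rows that brighten left-to-right set every bit,
# a flat black grid sets none.
bright = [[x * 10 for x in range(9)] for _ in range(8)]
dark = [[0] * 9 for _ in range(8)]
```

Matching a user photo then becomes a Hamming-distance scan over ~3,800 stored integers, which is cheap enough to do on every request before any Vision call.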
Phase 3, further out, is replacing the Vision fallback entirely with a custom-trained model. The training data is already accumulating from day one of Phase 1.
The hero is also getting a future upgrade. The current static layout will eventually be replaced with a pre-rendered 3D loop done in Blender, which I’ll either commission or build myself when I have a day to learn it.