That feedback from a client 3 years ago still changes how I approach training data
I had a client back in 2021 who told me my AI model kept suggesting weird product categories for their hardware store. He said 'your system thinks a hammer is closer to a garden hose than to a screwdriver.' That made me realize I was relying too much on raw text similarity instead of actual use case grouping. I ended up spending a weekend manually re-labeling about 500 training samples to separate tools by function instead of just material.

Now I always check my category trees by asking a non-technical person to walk through them first. Has anyone else found that their model's logic made perfect sense to them but failed with real users?
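A rough sketch of what that re-labeling pass can look like. Everything here is made up for illustration (the post doesn't share its actual data or taxonomy): the item names, the material-based labels, and the hand-written function taxonomy are all hypothetical.

```python
# Hypothetical example: re-label items by what they're FOR, not what
# they're made of or how their names read.

# The old labels that raw text/material similarity tends to produce.
material_labels = {
    "claw hammer": "steel",
    "screwdriver": "steel",
    "garden hose": "rubber",
    "work gloves": "rubber",
}

# Hand-written use-case taxonomy: hammer and screwdriver now land
# together, and the hose moves out of their neighborhood.
function_labels = {
    "claw hammer": "fastening",
    "screwdriver": "fastening",
    "garden hose": "watering",
    "work gloves": "safety",
}

def relabel(items, taxonomy):
    """Map each item to its new label; flag anything not yet reviewed."""
    return [(item, taxonomy.get(item, "UNREVIEWED")) for item in items]

inventory = ["claw hammer", "screwdriver", "garden hose", "drill bit"]
for item, label in relabel(inventory, function_labels):
    print(f"{item:12s} -> {label}")
```

The `UNREVIEWED` flag is the useful part in practice: anything missing from the taxonomy gets surfaced instead of silently inheriting a text-similarity guess, which is what a manual weekend re-labeling pass is really buying you.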
2 comments
the_piper · 5d ago
My wife walking through 50 item categories showed me 12 that made no sense to anyone but me. Now she gets 20 bucks and a beer to test every new grouping before I ship it.
reese551 · 5d ago
"12 that made no sense to anyone but me" - that part is spot on, and you said "50 item categories" with only 12 confusing ones. That's a pretty good hit rate honestly. Most people I know end up with closer to half their categories confusing a normal person. The real test is when you have someone who's never seen your system before try to find a specific item. If they can guess where a chainsaw goes in under 10 seconds, you're probably fine. But if they start clicking through random branches, you've still got work to do. The 20 bucks and a beer deal is a good system though, keeps the feedback honest without being too formal.