Content teams went from producing 25 posts/week to 250+, but much of that output was generic fluff: phrases like "in today's digital landscape," unsubstantiated claims, surface-level analysis.
Existing tools (Grammarly, Clearscope, etc.) focus on mechanics and SEO keywords, not substantive quality.
What FluffFilter Does:
* Analyzes content against 20+ specialized evaluators
* Automatically detects content type (blog post, case study, email, etc.)
* Returns 3-5 surgical fixes with exact locations and suggested replacements
* Batch processing for reviewing 50+ documents at once
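To make that concrete, here's a rough sketch of the core loop. Class names like ContentTypeDetector and Evaluator are illustrative, not the actual codebase:

    # Illustrative sketch only -- these class names are hypothetical.
    class AnalysisPipeline
      def initialize(document)
        @document = document
      end

      def run
        # Detect the content type so the right evaluators fire.
        content_type = ContentTypeDetector.classify(@document.text)

        # Run every evaluator registered for that content type.
        findings = Evaluator.for(content_type).flat_map do |evaluator|
          evaluator.analyze(@document.text)
        end

        # Keep only the highest-impact fixes, each carrying an exact
        # location and a suggested replacement.
        findings.max_by(5, &:severity)
      end
    end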
Technical Stack:
* Rails 8.0 + SQLite: Decided against starting with Postgres + Redis so I could launch with a zero-dependency stack. Deploys in 30 seconds and is cheap to host.
* Solid Queue replaced Redis: one less service to manage, and job processing works the same (see the sketch after this list).
* Claude Sonnet 4.5: ChatGPT did well too, but I found Claude's analyses consistently sharper.
* Turbo Streams for real-time updates
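Here's roughly how those last two pieces fit together: an ActiveJob that runs on Solid Queue (the Rails 8 default adapter) and pushes the finished analysis to the browser over Turbo Streams. Job, model, and partial names are made up for illustration:

    # app/jobs/analyze_document_job.rb -- illustrative names throughout.
    class AnalyzeDocumentJob < ApplicationJob
      queue_as :default  # backed by Solid Queue, no Redis required

      def perform(document)
        result = AnalysisPipeline.new(document).run
        document.update!(analysis: result)

        # Replace the placeholder in any open browser tab, no refresh.
        Turbo::StreamsChannel.broadcast_replace_to(
          document,
          target: "analysis_#{document.id}",
          partial: "documents/analysis",
          locals: { document: document }
        )
      end
    end

Batch mode then just fans out one of these jobs per document.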
Example Analysis:
I analyzed an AI-generated "marketing trends" blog post. It would score well on SEO tools (good keywords, readable), but FluffFilter found:
* Zero concrete examples or verifiable data
* A vague audience (unclear who it's for)
* An ending ("Would you like that?") that reads like a personal email
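For a sense of the output, a single finding comes back shaped roughly like this (illustrative, not the exact schema):

    # Illustrative only -- not FluffFilter's actual schema.
    {
      issue:      "Unsubstantiated claim",
      location:   "paragraph 3, sentence 2",
      excerpt:    "marketers everywhere are seeing results",
      suggestion: "Name a company, a number, or a source -- or cut it."
    }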
See the full analysis: https://flufffilter.com/examples/blog-post
More examples: https://flufffilter.com/examples
Try it yourself: 7-day trial with 15 analyses at https://flufffilter.com
Built this nights and weekends. Would love HN's thoughts on:
1. The technical approach
2. Whether the AI feedback is genuinely useful or just generic
3. What other content types would be valuable to evaluate
Launching on Product Hunt today as well, but genuinely curious what HN thinks.