I built Halal Food AI to solve a recurring problem my family faces: determining whether grocery products meet specific dietary requirements (Halal, allergen-free, and so on) by reading complex and often vague ingredient lists, especially when traveling.
While there are barcode scanners out there, most rely on static databases that are frequently incomplete for local or niche products. I wanted to see if LLMs could solve this data gap by parsing the actual ingredients on the fly.
How it works:
You can scan a barcode or snap a picture of the ingredients list. The app runs OCR and routes the text/image to Google Gemini for analysis. It breaks down hidden additives (like specific E-numbers), cross-references them against the selected dietary requirements, and builds a dietary profile for the product. Since it's LLM-based, it naturally handles 25+ languages, which has been extremely useful for picking up foreign products. Users can also save and share safe-product lists with their family network.

The technical challenge: the hardest part was getting consistent, structured JSON responses from the LLM for wildly varying international ingredient formats, and keeping latency low enough that you aren't standing in the grocery aisle waiting for 20 seconds.
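To give a flavor of the JSON-consistency problem: even with a strict prompt, models sometimes wrap their output in markdown fences or drop fields, so the response has to be cleaned and validated before it touches the UI. Below is a minimal sketch of that validation step; the schema keys (`status`, `flagged_ingredients`, `confidence`) are hypothetical placeholders, not the app's actual schema.

```python
import json
import re

# Hypothetical required fields for an ingredient-analysis response.
REQUIRED_KEYS = {"status", "flagged_ingredients", "confidence"}

def parse_ingredient_response(raw: str) -> dict:
    """Clean and validate a raw LLM response into a dict.

    Raises ValueError if the JSON is malformed or missing fields,
    so the caller can retry the request instead of showing bad data.
    """
    # LLMs often wrap JSON in ```json ... ``` fences; strip them first.
    cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip())
    try:
        data = json.loads(cleaned)
    except json.JSONDecodeError as e:
        raise ValueError(f"Response was not valid JSON: {e}") from e
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Response missing keys: {missing}")
    return data
```

In practice you would pair this with the model's native structured-output options (e.g. requesting a JSON MIME type) and treat the validator as a last line of defense that triggers a retry on failure.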
I would love to hear your feedback on the app, and I'm especially interested in hearing from anyone who has tackled real-time OCR + LLM data extraction on mobile. Happy to answer any technical questions!