I tried a couple of different approaches to this, but the one I currently like best is a MapReduce approach: take a query, do some pre-filtering, and map it out to the individual vector databases of all the churches. The reduce step then synthesizes the per-church results into the response returned to the user.
For example, if you ask it "LGBTQ-friendly church", it will loop through all the individual churches asking "Are you LGBTQ-friendly?" and then return a consolidated response built from the individual answers. Latency on TTFT (time to first token) is about 6-7 seconds currently.
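Roughly, the map/reduce step has the shape of the sketch below. This is a simplified illustration, not the actual code: the function names, prompts, and placeholder retrieval are all stand-ins.

    import asyncio

    # Placeholder LLM call -- in the real app this would be a chat-completion request.
    async def llm(prompt: str) -> str:
        return f"<llm answer for: {prompt[:40]}...>"

    # Placeholder retrieval against a single church's own vector DB.
    async def retrieve(church_id: str, question: str) -> str:
        return f"<top chunks for {church_id}>"

    async def ask_church(church_id: str, question: str) -> str:
        # Map step: retrieve from this church's index, then answer for this church only.
        context = await retrieve(church_id, question)
        return await llm(f"Context:\n{context}\n\nQuestion: {question}")

    async def map_reduce(user_query: str, church_ids: list[str]) -> str:
        # Query rewriting: turn "LGBTQ-friendly church" into a question to ask each church.
        question = await llm(f"Rewrite as a question to ask one church: {user_query}")

        # Map: fan the question out to every church's vector DB concurrently.
        answers = await asyncio.gather(*(ask_church(c, question) for c in church_ids))

        # Reduce: synthesize the per-church answers into one consolidated response.
        joined = "\n".join(f"{c}: {a}" for c, a in zip(church_ids, answers))
        return await llm(f"Given these per-church answers, answer '{user_query}':\n{joined}")

    print(asyncio.run(map_reduce("LGBTQ-friendly church", ["boulder", "denver"])))

Most of the TTFT comes from the fact that the reduce step can't start streaming until the slowest church in the map step has answered.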
Another approach we took was a Mixture of Experts. In this case, we run 3 separate retrievals - one for sermons, one for website content, and one for Google reviews - and then mix the results together to respond. This MoE approach is very slow right now and I'm trying to speed it up, but you can play with it as well: https://pastors.ai/churches/boulder?mode=moe
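A simplified sketch of that MoE-style path (again, names and prompts are placeholders, and the real version layers reranking and query rewriting on top of this):

    import asyncio

    # Placeholder retrievers -- each hits a separate index in the real app.
    async def search_sermons(query: str) -> list[str]:
        return [f"<sermon chunk for {query}>"]

    async def search_website(query: str) -> list[str]:
        return [f"<website chunk for {query}>"]

    async def search_reviews(query: str) -> list[str]:
        return [f"<google review chunk for {query}>"]

    async def llm(prompt: str) -> str:
        return f"<llm answer for: {prompt[:40]}...>"

    async def moe_answer(query: str) -> str:
        # Run the three retrievals concurrently.
        sermons, site, reviews = await asyncio.gather(
            search_sermons(query), search_website(query), search_reviews(query)
        )
        # Mix: combine (and ideally rerank) the candidates before handing them to the LLM.
        context = "\n".join(sermons + site + reviews)
        return await llm(f"Context:\n{context}\n\nQuestion: {query}")

    print(asyncio.run(moe_answer("What is the worship style like?")))

In the sketch the three retrievals run concurrently via asyncio.gather; that kind of parallelism plus trimming the mixed context before the final LLM call are the main levers I'm looking at for latency.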
I know the HN audience isn't made up of churchgoers, but I hope HN appreciates the AI engineering behind this project. There's a bunch of reranking, query rewriting, etc. behind the scenes for the different approaches. It's a hackathon project that's still a WIP, so I'm working on polishing it. Happy to answer any questions or hear technical suggestions/feedback!