I wrote a tutorial for invoking the Mistral 7B model directly with SQL using Python UDFs in Exasol and Ollama. This demonstrates a fully self-hosted AI pipeline where data never leaves your infrastructure—no API fees, no vendor lock-in. Takes ~15 minutes to set up with Docker.
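For anyone curious what the SQL-to-LLM bridge looks like, here is a minimal sketch of an Exasol Python UDF body that forwards a prompt to a local Ollama server running Mistral. The endpoint, model name, and the `ctx.prompt` column name are assumptions for illustration, not taken from the tutorial itself:

```python
# Sketch of a PYTHON3 scalar-script body for Exasol: each input row's
# prompt is sent to Ollama's /api/generate endpoint (assumed defaults).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(prompt, model="mistral"):
    """Build the JSON request body for Ollama's generate API."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")

def ask_mistral(prompt):
    """Send one prompt to Ollama and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]

# Exasol calls run(ctx) per row; ctx.prompt / ctx.emit names are
# illustrative and depend on the CREATE SCRIPT declaration.
def run(ctx):
    ctx.emit(ask_mistral(ctx.prompt))
```

Wrapped in a `CREATE PYTHON3 SCALAR SCRIPT` statement, this lets you call the model from plain `SELECT` queries, which is what keeps the whole pipeline inside the database.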
pploug•1h ago
purely curious, but why did you go with Ollama instead of the built-in LLM runner in Docker, since you are also using Docker?
exasol_nerd•1h ago
great idea! I went with Ollama because I found its setup to be slightly easier. But technically both should offer the same experience, and hosting both in Docker makes a lot of sense. That will be the next iteration of my write-up!