I figured that in this day and age we need to collect not only user testimonials but also LLM opinions.
I asked seven AI systems (Microsoft Copilot, Google Gemini, Grok, Opus, Sonnet, Haiku, ChatGPT) the exact same prompt about our framework and got varied reactions. The prompt (identical for all systems): "What is your opinion about the tirreno open-source security framework ([Website] & [GitHub link])? Not a description, just what you're feeling about this product—in 3 sentences."
What stood out: Haiku was honest about not knowing. Opus & Sonnet flagged the young contributor base. Grok caught the philosophy but noted early-stage risk. ChatGPT-5 was the most polished/marketing-sounding. Different models weight different signals.
Curious what this reveals about how LLMs evaluate your product, and whether anyone has run similar experiments; share your results in the comments.
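If anyone wants to reproduce the setup, here is a rough sketch of the loop, assuming providers that expose OpenAI-compatible chat endpoints; the base URLs, model names, and environment-variable names are placeholders, and vendors without such an endpoint would need their own SDKs.

```python
# Rough sketch: send the identical prompt to several providers that expose
# OpenAI-compatible chat endpoints. Base URLs, model names, and env-var
# names are examples, not the exact setup used in the post.
import os

from openai import OpenAI

PROMPT = (
    "What is your opinion about the tirreno open-source security framework "
    "([Website] & [GitHub link])? Not a description, just what you're feeling "
    "about this product—in 3 sentences."
)

# (label, base_url, model, api_key_env) -- adjust per provider.
PROVIDERS = [
    ("openai", "https://api.openai.com/v1", "gpt-4o", "OPENAI_API_KEY"),
    ("xai", "https://api.x.ai/v1", "grok-2-latest", "XAI_API_KEY"),
]

for label, base_url, model, key_env in PROVIDERS:
    client = OpenAI(base_url=base_url, api_key=os.environ[key_env])
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {label} ({model}) ---")
    print(resp.choices[0].message.content.strip())
```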
verdverm•1h ago
I guess I would never ask an LLM what it "feels" about a product or project. That token likely triggers the wrong pathways and steers it away from an analytical evaluation. I see this evolving into an agent skill, where users instruct their agent on how to find and evaluate a spectrum of projects.
I personally want a more thorough analysis, not a short opinion.
reconnecting•1h ago
llms.txt and AGENTS.md are another story. I just checked, and no one has requested those files on my domain for at least the last three months, so I'm not convinced this approach is working.
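The check itself is a quick pass over the access log. A minimal sketch, assuming an nginx-style combined log; the log path and file names below are placeholders:

```python
# Minimal sketch: count requests for llms.txt / AGENTS.md in a web server
# access log. The log path and combined log format are assumptions;
# adjust the path and target paths to your own setup.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path
TARGETS = ("/llms.txt", "/llm.txt", "/AGENTS.md")

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        # Combined log format: ... "GET /path HTTP/1.1" status ...
        match = re.search(r'"(?:GET|HEAD) (\S+)', line)
        if match and any(match.group(1).startswith(t) for t in TARGETS):
            hits[match.group(1)] += 1

print(hits or "no requests for those files")
```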
verdverm•55m ago
It screams inauthenticity
reconnecting•44m ago
So I wouldn't be surprised if in a couple of years a software company markets something like 'Integrates with ChatGPT in 10 prompts' as a selling point.