There is no built-in Liquid property to directly detect Shopify Collective fulfillment in email notifications.
You can use the Admin GraphQL API to programmatically detect fulfillment source.
In Liquid, you must rely on tags, metafields, or custom properties that you set up yourself to mark Collective items.
If you want to automate this, consider tagging products or orders associated with Shopify Collective, or using an app to set a metafield, and then check for that in your Liquid templates.
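To illustrate the API route: a rough sketch of inspecting an order over the Admin GraphQL API (purely illustrative; the shop domain, API version, token, and the heuristic of checking tags and fulfillment-order location names are my assumptions, not a documented Collective detector):

import requests

# Assumptions: placeholder shop domain, API version, and access token.
URL = "https://your-shop.myshopify.com/admin/api/2024-07/graphql.json"
TOKEN = "shpat_..."

QUERY = """
query orderFulfillment($id: ID!) {
  order(id: $id) {
    tags
    fulfillmentOrders(first: 10) {
      nodes {
        assignedLocation { name }
      }
    }
  }
}
"""

def looks_like_collective(order_gid: str) -> bool:
    resp = requests.post(
        URL,
        json={"query": QUERY, "variables": {"id": order_gid}},
        headers={"X-Shopify-Access-Token": TOKEN},
    )
    resp.raise_for_status()
    order = resp.json()["data"]["order"]
    # Heuristic only: check for the tag a flow may have added, then
    # fall back to scanning fulfillment-order location names.
    if "Shopify Collective" in order["tags"]:
        return True
    return any(
        "Collective" in (node["assignedLocation"]["name"] or "")
        for node in order["fulfillmentOrders"]["nodes"]
    )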
What you can do in Liquid (email notifications):
If Shopify exposes a tag, property, or metafield on the order or line item that marks it as a Shopify Collective item, you could check for that in Liquid. For example, if you tag orders or products with "Collective", you could use:
{% if order.tags contains "Collective" %}
  <!-- Show Collective-specific content -->
{% endif %}

or, for line items:

{% for line_item in line_items %}
  {% if line_item.product.tags contains "Collective" %}
    <!-- Show something for Collective items -->
  {% endif %}
{% endfor %}
In the author's 'wrong' vs. 'seems to work' answer, the only difference is whether the tag is checked on the line items or on the order. The flow (template? he refers to it as 'some other cryptic Shopify process') he uses in his tests does seem to add the 'Shopify Collective' tag to the line items, and potentially also to the order if the whole order is fulfilled through Shopify Collective, but without further info we can only guess at his setup. While using AI can always lead to imperfect results, I feel the evidence presented here does not support the conclusion.
P.S. Given the reference to 'cryptic Shopify processes', I wonder how far the author would get with 'just the docs'.
Besides, the answer is not even incorrect in the way he states it is. Whether it works depends entirely on how he added the tags in his flow, as the complete answer correctly stated. He speculates about a timing issue, some 'cryptic Shopify process' adding the tag at a later stage, but this is clearly wrong: his "working answer" (which also appears in the Assistant's reply) relies on the tag having been added at the same point in the process.
My purely speculative and deliberately exaggerated take: he blindly copied some flow template, then copy/pasted the first Liquid code box from the (same as I got?) Assistant's answer, tested it on one order, and found it not doing what he wanted, which suited his confirmation bias regarding AI. Later he tried pasting the second Liquid code box (or the same answer you get from Gemini through Google Search), found 'it worked' on his one test order, and still blamed the Assistant for being 'wrong'.
I just asked ChatGPT "what's the best database structure for a users table where you have users and admins?" in two different browser sessions. One gave me SQL with varchars and a role column using:

role VARCHAR(20) NOT NULL CHECK (role IN ('user', 'admin')),

The other session used text columns and first defined an enum to use:

CREATE TYPE user_role AS ENUM ('user', 'admin', 'superadmin');
-- other SQL snipped
role user_role NOT NULL DEFAULT 'user',
An AI assistant should be better tuned, but often isn't. That variance makes it feel wildly unhelpful as 'documentation' to me, since two people end up with quite different solutions. Your question is vague (a technical observation, not meant derogatorily). In which DBMS? By what metric of 'best'? For which size of database? Does it need to support internationalization? Will the roles be updated or extended in the future, etc.?
You could argue an AI assistant should ask for clarification when the question is vague rather than make a guess. But taken to the extreme this is not workable in practice. If every minute factor had to be answered by the user before getting a result, only the real experts would ever reach the stage of getting an answer.
This is not just an AI problem, but a problem (human) business and technical analysts face every day in their work. When do you switch to proposing a solution rather than asking for further details? It is BTW also why all those BPM or RPA platforms that promise to eliminate 'programming' and let the business analyst 'draw' a solution often fail miserably. They either have too-narrow defaults or keep needing to be fed detail long past the BA's comfort zone.
This is exactly the same problem coding assistants have when they hallucinate functions or cannot find the needed dependencies, etc.
There are better and more complex approaches that use multiple agents to summarize different smaller queries and then iteratively build up an answer. Internally, we, like a lot of companies, have them, but for external customer queries they are way too expensive. You can't spend 30 cents on every query.
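For flavor, a toy sketch of that multi-agent pattern (llm() is a hypothetical stand-in for whatever model API you use; nothing here is a production design):

# Toy pattern: decompose the query, answer sub-queries, then synthesize.
def llm(prompt: str) -> str:
    # Hypothetical stand-in: call your LLM provider here.
    raise NotImplementedError

def answer(question: str) -> str:
    # Agent 1: break the vague question into concrete sub-questions.
    subs = llm(f"Split into 3 specific sub-questions, one per line:\n{question}").splitlines()
    # Agent 2: answer each sub-question with a focused, cheaper call.
    partials = [llm(f"Answer briefly: {s}") for s in subs if s.strip()]
    # Agent 3: synthesize the partial answers into one response.
    return llm("Combine into one coherent answer:\n" + "\n".join(partials))

Note that each top-level question costs five model calls here, which is exactly the cost problem described above.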
Every time I land on help.shopify.com I get the feeling it's one of those "doc pages for sales people". It seems meant to show "We have great documentation and you can do all these things" but never actually explains how to do anything.
I tried that bot a couple of months ago and it was utterly useless:
question: When using discountRedeemCodeBulkAdd there's a limit to add 100 codes to a discount. Is this a limit on the API or on the discount? So can I add 100 codes to the same discount multiple times?
answer: I wasn't able to find any results for that. Can you tell me a little bit more about what you're looking for?
Telling it more did not help. To me it seemed like the bot didn't even have access to the technical documentation. I find it hard to believe that any search engine could miss a word like discountRedeemCodeBulkAdd if it actually were in the dataset: https://shopify.dev/docs/api/admin-graphql/latest/mutations/...
So it's a bit like asking sales people technical questions.
edit: Okay, I should have tried that before commenting. They seem to have updated it. When I ask the same question now it answers correctly (weirdly in German; translated here):

The limit of 100 codes when using discountRedeemCodeBulkAdd refers to the number of codes you can add in a single API call, not to the total number of codes that can be associated with a discount. A discount can contain up to 20,000,000 unique discount codes. You can therefore add 100 codes at a time to the same discount repeatedly until you reach the 20,000,000 limit. Note that third-party apps or custom solutions cannot bypass or increase this limit.

In short: it's a limit on the API call, and you can add up to 20M codes to a single discount.
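For illustration, a minimal sketch of 'adding 100 at a time' against the Admin GraphQL API (the shop domain, API version, and token are placeholders; the mutation name and 100-codes-per-call limit come from the answer quoted above, but verify the exact fields against the current schema):

import requests

SHOP = "your-shop.myshopify.com"  # placeholder shop domain
TOKEN = "shpat_..."               # placeholder Admin API access token
URL = f"https://{SHOP}/admin/api/2024-07/graphql.json"

MUTATION = """
mutation addCodes($discountId: ID!, $codes: [DiscountRedeemCodeInput!]!) {
  discountRedeemCodeBulkAdd(discountId: $discountId, codes: $codes) {
    bulkCreation { id }
    userErrors { field message }
  }
}
"""

def add_codes(discount_id: str, codes: list[str]) -> None:
    # The API accepts at most 100 codes per call, so chunk and loop.
    for i in range(0, len(codes), 100):
        batch = [{"code": c} for c in codes[i:i + 100]]
        resp = requests.post(
            URL,
            json={"query": MUTATION,
                  "variables": {"discountId": discount_id, "codes": batch}},
            headers={"X-Shopify-Access-Token": TOKEN},
        )
        resp.raise_for_status()
        errors = resp.json()["data"]["discountRedeemCodeBulkAdd"]["userErrors"]
        if errors:
            raise RuntimeError(f"Batch starting at {i} failed: {errors}")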
Maybe that's the best anthropomorphic analogy of LLMs. Like good sales people completely disconnected from reality, but finely tuned to give you just the answer you want.
Kind of like a bad salesperson, though; the best salespeople I've had the pleasure of knowing were not afraid to learn the technical background of their products.
I keep seeing bots wrongly prompted with both the browser language and the text "reply in the user's language". So I write to a bot in English and I get a Spanish answer.
You want grounded RAG systems like Shopify's here to rely strongly on the underlying documents, but also still sprinkle in a bit of the magic of the model's latent knowledge. The only way to get that balance right is evals. Lots of them. It gets even harder when you are dealing with a GraphQL schema like Shopify's, since most models struggle with that syntax more than with REST APIs.
FYI I'm biased: Founder of kapa.ai here (we build docs AI assistants for 200+ companies incl. Sentry, Grafana, Docker, the largest Apache projects, etc.).
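For a flavor of what such evals can look like (a deliberately minimal sketch; real suites are far larger and use LLM-as-judge scoring, citation checks, etc. rather than substring matches):

# Tiny eval harness: run known questions through the bot and check that
# facts the grounded answer must contain actually show up.
EVALS = [
    ("Is the 100-code limit per API call or per discount?", ["call"]),
    ("How many codes can one discount hold?", ["20,000,000"]),
]

def run_evals(bot) -> float:
    passed = 0
    for question, must_contain in EVALS:
        reply = bot(question)
        if all(term.lower() in reply.lower() for term in must_contain):
            passed += 1
    return passed / len(EVALS)  # pass rate to track across releases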
We concatenated all our docs and tutorials into a text file, piped it all into the AI right along with the question, and the answers are pretty great. Cost was, last I checked, roughly 50c per question. Probably scales linearly with how much documentation you have. This feels expensive but compared to a human writing an answer it's peanuts. Plus (assuming the customer can choose to use the AI or a human), it's great customer experience because the answer is there that much faster.
I feel like this is a no-brainer. Tbh with the context windows we have these days, I don't completely understand why RAG is a thing anymore for support tools.
Re cost though, you can usually reduce the cost significantly with context caching here.
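For example, with Anthropic's prompt caching you can mark the big docs blob as a cacheable prefix so repeat questions don't pay full price for it each time (a sketch assuming the anthropic Python SDK; check current docs for pricing and cache lifetime):

import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set
docs = open("all_docs_concatenated.txt").read()

def ask(question: str) -> str:
    resp = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        system=[{
            "type": "text",
            "text": docs,
            # Mark the docs prefix as cacheable: subsequent calls reusing
            # this exact prefix are billed at a reduced rate.
            "cache_control": {"type": "ephemeral"},
        }],
        messages=[{"role": "user", "content": question}],
    )
    return resp.content[0].text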
However, in general, I’ve been positively surprised with how effective Claude Code is at grep’ing through huge codebases.
Thus, I think just putting a Claude Code-like agent in a loop, with a grep tool on your docs, and a system prompt that contains just a brief overview of your product and brief summaries of all the docs pages, would likely be my go-to.
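Roughly this shape, as a sketch (llm() is again a placeholder for your model API; the point is the loop: the model asks for greps, you run them, and feed the results back):

import subprocess

def llm(messages: list[dict]) -> str:
    # Placeholder for your model API.
    raise NotImplementedError

def grep_docs(pattern: str) -> str:
    # Search the docs tree: -r recursive, -n line numbers, -i case-insensitive.
    out = subprocess.run(["grep", "-rni", pattern, "docs/"],
                         capture_output=True, text=True)
    return out.stdout[:4000] or "(no matches)"

def agent(question: str, max_steps: int = 5) -> str:
    messages = [
        {"role": "system", "content":
            "Brief product overview + per-page doc summaries go here. "
            "Reply GREP:<pattern> to search the docs, or ANSWER:<text> when done."},
        {"role": "user", "content": question},
    ]
    for _ in range(max_steps):
        reply = llm(messages)
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        if reply.startswith("GREP:"):
            messages += [
                {"role": "assistant", "content": reply},
                {"role": "user", "content": grep_docs(reply[len("GREP:"):].strip())},
            ]
    return "Could not find an answer in the docs."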
And it's inefficient in two ways:

- you're using extra tokens for every query, which adds up.

- you're making the LLM less precise by overloading it with potentially irrelevant extra info, making it harder for it to pull the specific relevant answer out of the haystack.
Filtering (e.g. embedding similarity & BM25) and re-ranking/pruning what you provide to RAG is an optimization. It saves tokens and processing time, and in an ideal world improves the answer too. Most LLMs are far more effective if your RAG context is limited to what is relevant to the question.
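E.g. a minimal BM25 pre-filter (using the rank_bm25 package; real systems combine this with embedding similarity and a re-ranker, and the chunks here are obviously toy data):

from rank_bm25 import BM25Okapi

# Toy corpus; in practice chunks come from your docs pipeline.
chunks = [
    "discountRedeemCodeBulkAdd adds up to 100 codes per API call",
    "orders can be tagged via the Admin API",
    "liquid templates power email notifications",
]
bm25 = BM25Okapi([c.lower().split() for c in chunks])

def top_k(query: str, k: int = 2) -> list[str]:
    # Keep only the k most relevant chunks so the LLM context stays small
    # and on-topic instead of a needle-in-a-haystack dump.
    scores = bm25.get_scores(query.lower().split())
    ranked = sorted(zip(scores, chunks), reverse=True)
    return [c for _, c in ranked[:k]]

print(top_k("limit for discountRedeemCodeBulkAdd"))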
(We tend to have far fewer evals for such humans though.)
This is doing some heavy lifting
If the training data is full of confident-sounding statements, you'll get confident-sounding statements coming out of the model too, even for things that are only similar, and for answers that are total bullshit.
Generally I don't trust most low-paid (at no fault of their own) customer service centers any more than I do random LLMs. Historically their advice for most things is either very biased, incredibly wrong, or often both.
(My domain is regulatory compliance, so maybe this goes beyond pure documentation, but I'm guessing that, pushed far enough, the same complexities arise.)
I feel like we aren't properly using AI in products yet.
It's great when you're looking to do creative stuff, but terrible when you're looking to confirm the correctness of an approach, or asking for support with something you didn't even know doesn't exist.
I've also had a lot of issues with CMake, where it just invents syntax and functions. Every new question has to be asked in a new chat context to clear the context poisoning.
It's the things that lack good docs I want to ask about. But that's where it's most likely to fail.
People seem more willing to ask an AI about certain things than to be judged for asking the same question of a human, so in that regard it does seem to surface slightly different feature requests than we hear when talking to customers directly.
We use inkeep.com (not affiliated, just a customer).
And what do you pay? It's crazy that none of these AI CSRs have public pricing. There should just be monthly subscription tiers, which include some number of queries, and a cost per query beyond that.
Very similar sentiment at the height of the crypto/digital currency mania
> What’s the syntax, in Liquid, to detect whether an order in an email notification contains items that will be fulfilled through Shopify Collective?
I suspect the best possible implementation of a documentation bot with respect to questions like this one would be an "agent" style bot that has the ability to spin up its own environment and actually test the code it's offering in the answer before confidently stating that it works.
That's really hard to do - Robin in this case could only test the result by placing and then refunding an order! - but the effort involved in providing a simulated environment for the bot to try things out in might make the difference in terms of producing more reliable results.
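A sketch of the loop shape (the 'sandbox' here is just a subprocess running a candidate Python snippet; an environment that could actually exercise Shopify's Liquid templates would be far more involved, as noted above):

import subprocess, tempfile

def llm(prompt: str) -> str:
    # Placeholder for your model API.
    raise NotImplementedError

def answer_with_verification(question: str, attempts: int = 3) -> str:
    feedback = ""
    for _ in range(attempts):
        code = llm(f"Write a Python script answering: {question}\n{feedback}")
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        # "Sandbox": actually run the candidate before presenting it.
        result = subprocess.run(["python", path], capture_output=True,
                                text=True, timeout=30)
        if result.returncode == 0:
            return code  # only answer confidently once the code actually ran
        feedback = f"Previous attempt failed with:\n{result.stderr}"
    return "Could not produce a verified answer."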
They take a screenshot and make fun of the rubbish bot on social media.
If that happens rarely it's still a worthwhile improvement over today. If it happens frequently then the documentation bot is junk and should be retired.
But that's not true! Docs are sometimes wrong, and even more so if you count errors of omission. From a user's perspective, dense / poorly structured docs are wrong, because they lead users to think the docs don't have the answer. If they're confusing enough, they may even mislead users.
There's always an error rate. DocBots are almost certainly wrong more frequently, but they're also almost certainly much much faster than reading the docs. Given that the standard recommendation is to test your code before jamming it in production, that seems like a reasonable tradeoff.
YMMV!
(One level down: the feedback loop for getting docbots corrected is _far_ worse. You can complain to support that the docs are wrong, and most orgs will at least try to fix it. We, as an industry, are not fully confident in how to fix a wrong LLM response reliably in the same way.)
- "Oh yeah just write this," except the person is not an expert and it's either wrong or not idiomatic
- An answer that is reliably correct enough of the time
- An answer in the form "read this page" or quotes the docs
The last one is so much better because it directly solves the problem, which is fundamentally a search problem. And it places the responsibility for accuracy where it belongs (on the written docs).
BossingAround•7h ago
I remember being taught that no docs is better (i.e. less frustrating to the user) than bad/incorrect docs.
pmg101•6h ago
After a certain number of years you learn that source code comments so often fall out of sync with the code itself that they're more of a liability than an asset.
taneq•4h ago
Although, “All datasheets are wrong. Some datasheets are useful.”
walthamstow•4h ago
My current place? It's in Confluence, miles away from code and with no review mechanism.