try:
    answer = chain.invoke(question)
    # print(answer)  # raw JSON output
    display_answer(answer)
except Exception as e:
    print(f"An error occurred: {e}")
    # Fall back to a chain without the parser to show the raw LLM output
    chain_no_parser = prompt | llm
    raw_output = chain_no_parser.invoke(question)
    print(f"Raw output:\n\n{raw_output}")
```
Wait, are you calling the LLM again if parsing fails, just to get what the LLM has already sent you?
The whole thing is not difficult to do if you call the API directly without LangChain; it'd also help you avoid this kind of inefficiency.
LangChain also has a way to return the raw output alongside the parsed one, as part of `with_structured_output`: https://python.langchain.com/docs/how_to/structured_output/#...
It's pretty common to use a cheaper model to fix these errors to match the schema if it fails with a tool call.
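A minimal sketch of that repair pattern, with the "cheaper model" stubbed out as a local function (in practice it would be an API call asking a small model to fix the output to match the schema):

```python
import json


def call_fixer_model(bad_output: str, schema_hint: str) -> str:
    """Stub for the cheaper-model call. A real version would prompt a small
    model with something like: 'Rewrite this so it matches the schema ...'"""
    # Toy repair: strip the markdown fence models often wrap JSON in.
    return bad_output.strip().removeprefix("```json").removesuffix("```").strip()


def parse_with_repair(raw: str, schema_hint: str = '{"answer": "str"}') -> dict:
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # One repair attempt with the cheaper model, then re-parse.
        return json.loads(call_fixer_model(raw, schema_hint))


print(parse_with_repair('```json\n{"answer": "42"}\n```'))
```

The key point is that the original raw output is reused; nothing re-invokes the expensive model.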
That hasn't been true for a while.
For open models there's no need for these kinds of hacks: libraries like XGrammar and Outlines (and several others) exist both as standalone solutions and as components used by a wide range of open source tools to ensure structured generation happens at the logit level. There's no need to add multiples to your inference cost when in some cases (XGrammar) they can actually reduce it.
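A toy illustration of what "at the logit level" means: before each sampling step, every token the grammar forbids is masked to -inf, so the output cannot violate the structure no matter what the model prefers. (The real libraries compile a JSON schema or CFG into these per-step masks; the tiny vocabulary and fixed target pattern below are made up for illustration.)

```python
import math

VOCAB = ['{', '}', '"', 'a', '1', ':', 'EOS']


def allowed_tokens(generated: str) -> set[str]:
    """Toy 'grammar': only permit tokens that keep the output a prefix of
    the fixed pattern {"a":1}. Outlines/XGrammar derive such masks from a
    schema or grammar instead of a hardcoded string."""
    target = '{"a":1}'
    if generated == target:
        return {'EOS'}
    return {target[len(generated)]}


def constrained_decode(logits_fn) -> str:
    out = ''
    while True:
        mask = allowed_tokens(out)
        # Mask forbidden tokens to -inf, then greedily pick the best
        # remaining one -- the structure is guaranteed by construction.
        scores = {t: (logits_fn(out, t) if t in mask else -math.inf)
                  for t in VOCAB}
        best = max(scores, key=scores.get)
        if best == 'EOS':
            return out
        out += best


# Even a logits function that strongly prefers the wrong token ('}')
# cannot break the format.
print(constrained_decode(lambda ctx, tok: 5.0 if tok == '}' else 1.0))
```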
For proprietary models, more and more providers are using proper structured generation (i.e. constrained decoding) under the hood. Most notably, OpenAI's current version of structured outputs uses logit-based methods to guarantee the structure of the output.
Think about why langchain has dozens of adapters that are all targeting services that describe themselves as OAI compatible, Llamafile included.
I'd bet you could point some of them at Llamafile and get structured outputs.
Note that they can be made 100% reliable when done properly. They're not done properly in this article.
Amen. See also: "Langchain is Pointless" https://news.ycombinator.com/item?id=36645575
... it's going to be September forever, isn't it?
anshumankmr•7h ago
But orgs think it's some sort of flagbearer of LLMs. As I am interviewing for other roles now, HR folks from other companies still ask how many years of experience I have with LangChain and "Agentic AI".
halyconWays•3h ago
Haha, really?
nilamo•5h ago
I've found dspy to work closer to how I think, which has made working with pipelines so much easier for me.
screye•3h ago
I would suggest against using their orchestration tooling, DSLs or default prompts. Those components are either underbaked or require deep adoption in a way that is harder to strip out later.
We change models, providers and search tooling quite often. Having consistent interfaces helps speed things up and reduce legacy buildup. Their stream callbacks, function calling integration, RAG primitives and logging solutions are nice.
One way or another, it is useful to have a LangChain-like solution for these needs. PydanticAI + Logfire seems like a better version of what I like about LangChain. Haven't tried it, but I bet it's good.