frontpage.

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
98•valyala•4h ago•16 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
43•zdw•3d ago•11 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
23•gnufx•2h ago•19 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
56•surprisetalk•3h ago•54 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
98•mellosouls•6h ago•176 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
144•AlexeyBrin•9h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
101•vinhnx•7h ago•13 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
851•klaussilveira•1d ago•258 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
139•valyala•4h ago•109 comments

First Proof

https://arxiv.org/abs/2602.05192
68•samasblack•6h ago•52 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1093•xnx•1d ago•618 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
7•mbitsnbites•3d ago•0 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
64•thelok•6h ago•10 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
235•jesperordrup•14h ago•80 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
519•theblazehen•3d ago•191 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
94•onurkanbkrc•9h ago•5 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
31•momciloo•4h ago•5 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
13•languid-photic•3d ago•4 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
259•alainrk•8h ago•425 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
49•rbanffy•4d ago•9 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
187•1vuio0pswjnm7•10h ago•267 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
615•nar001•8h ago•272 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
36•marklit•5d ago•6 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
348•ColinWright•3h ago•414 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
125•videotopia•4d ago•39 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
99•speckx•4d ago•117 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
33•sandGorgon•2d ago•15 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
211•limoce•4d ago•119 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
288•isitcontent•1d ago•38 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•5 comments

Structured Output with LangChain and Llamafile

https://blog.brakmic.com/structured-output-with-langchain-and-llamafile/
46•brakmic•7mo ago

Comments

dcreater•7mo ago
People still use langchain?
owebmaster•7mo ago
No
anshumankmr•7mo ago
It's good for quickly developing something, but for production I don't think so. We used it for a RAG application I built last year with a client, ended up removing it piece by piece, and found our app responded faster.

But orgs treat it as some sort of flagbearer of LLMs. As I'm interviewing for other roles now, HR at other companies still asks how many years of experience I have with LangChain and agentic AI.

zingababba•7mo ago
What should be used instead?
Hugsun•7mo ago
I gave up after it didn't let me see the prompt that went into the LLM, without using their proprietary service. I'd recommend just using the APIs directly; they're very simple. There might be some simpler wrapper library if you want all the providers and can't be bothered to implement support for each. Vercel's ai-sdk seems decent for JS.
halyconWays•7mo ago
>I gave up after it didn't let me see the prompt that went into the LLM, without using their proprietary service.

Haha, really?

ebonnafoux•7mo ago
Use httpx to make the call yourself, or, if you really want a wrapper, the OpenAI Python client: https://github.com/openai/openai-python.
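For reference, a minimal sketch of the httpx route (endpoint and payload follow the OpenAI chat completions format; the model name is a placeholder and the key comes from the environment):

```
# Minimal sketch: call the chat completions API directly with httpx,
# no client library or framework involved.
import os
import httpx

resp = httpx.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Reply with a short JSON object."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```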
Jimmc414•7mo ago
PydanticAI, DSPy, or deal directly with the provider SDKs.
dcreater•7mo ago
DSPy seems like the right, developed approach, but it's far too convoluted and I find the grammar ugly.
dcreater•7mo ago
Plain old HTTP requests and your own functions.

It's almost always the better choice.

codestank•7mo ago
i do because i don't know any better since i'm new to the AI space.
nilamo•7mo ago
My experience, as someone who is also new and trying to figure things out, is that langchain works great as long as everything you want to do has an adapter. Try to step off the path, and things get really complex really fast. After hitting that several times, I've found it's easier to just do things directly instead of trying to figure out the langchain way of doing things.

I've found dspy to work closer to how I think, which has made working with pipelines so much easier for me.
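For other newcomers, a rough sketch of the DSPy style being described (the configuration calls reflect recent DSPy versions and may differ in older ones):

```
# Declare a signature and let DSPy handle the prompting and parsing.
import dspy

dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

qa = dspy.Predict("question -> answer")
prediction = qa(question="What does structured output mean for LLMs?")
print(prediction.answer)
```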

screye•7mo ago
It is useful if you keep swapping things out. Langchain's wrappers stay stable and up to date because of their popularity. In production, it's ideal for startups that undergo a lot of flux.

I would suggest against using their orchestration tooling, DSLs or default prompts. Those components are either underbaked or require deep adoption in a way that is harder to strip out later.

We change models, providers and search tooling quite often. Having consistent interfaces helps speed things up and reduce legacy buildup. Their stream callbacks, function calling integration, RAG primitives and logging solutions are nice.

One way or another, it is useful to have a langchain-like solution for these needs. Pydanticai + logfire seems like a better version of what I like about langchain. Haven't tried it, but I bet it's good.
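To illustrate the consistent-interface point, a small sketch; package names follow the current langchain-openai / langchain-anthropic split and the model names are placeholders:

```
# Swapping providers behind the same interface: only the constructor
# changes, the call site stays identical.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

llm = ChatOpenAI(model="gpt-4o-mini")
# llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # drop-in swap

print(llm.invoke("One sentence on structured output.").content)
```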

reedlaw•7mo ago
The use case in the article is relatively simple. For more complex structures, BAML (https://www.boundaryml.com/) is a better option.
pcwelder•7mo ago
```
try:
    answer = chain.invoke(question)
    # print(answer)  # raw JSON output
    display_answer(answer)
except Exception as e:
    print(f"An error occurred: {e}")
    chain_no_parser = prompt | llm
    raw_output = chain_no_parser.invoke(question)
    print(f"Raw output:\n\n{raw_output}")
```

Wait, are you calling the LLM again if parsing fails, just to get what the LLM has already sent you?

The whole thing is not difficult to do if you call the API directly without LangChain, and that would also help you avoid this kind of inefficiency.
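A sketch of the cheaper alternative, reusing the article's prompt, llm and display_answer names, with plain json.loads standing in for LangChain's parser:

```
# Invoke the model once, keep the raw text, and parse locally; on failure
# the raw output is already in hand, so no second LLM call is needed.
import json

raw_message = (prompt | llm).invoke(question)         # single LLM call
text = getattr(raw_message, "content", raw_message)   # chat models return a message

try:
    answer = json.loads(text)
    display_answer(answer)
except json.JSONDecodeError as e:
    print(f"An error occurred: {e}")
    print(f"Raw output:\n\n{text}")
```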

moribunda•7mo ago
I don't get the langchain hate, but I agree that this "blog post" is bad.

Langchain has a way to return the raw output alongside "with structured output": https://python.langchain.com/docs/how_to/structured_output/#...

It's pretty common to use a cheaper model to fix these errors to match the schema if it fails with a tool call.
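Roughly what that looks like; AnswerSchema is a made-up Pydantic model and llm is any chat model that supports with_structured_output:

```
# include_raw=True returns a dict with "raw", "parsed" and "parsing_error",
# so the raw model output is available without re-invoking anything.
from pydantic import BaseModel

class AnswerSchema(BaseModel):
    answer: str
    confidence: float

structured_llm = llm.with_structured_output(AnswerSchema, include_raw=True)
result = structured_llm.invoke(question)
print(result["parsed"], result["parsing_error"])
```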

crystal_revenge•7mo ago
> It's pretty common to use a cheaper model to fix these errors to match the schema if it fails with a tool call.

This has not been true for a while.

For open models there's zero need for these kinds of hacks: libraries like XGrammar and Outlines (and several others) exist both as solutions on their own and are used by a wide range of open source tools to ensure structured generation happens at the logit level. There's no need to add multiples to your inference cost when, in some cases (XGrammar), they can actually reduce it.

For proprietary models, more and more providers are using proper structured generation (i.e. constrained decoding) under the hood. Most notably, OpenAI's current version of structured outputs uses logit-based methods to guarantee the structure of the output.
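As a sketch of the provider-side version with the OpenAI client (model name and schema are placeholders; strict mode expects additionalProperties false and every property marked required):

```
# Constrained decoding on the provider side: the response is guaranteed to
# match the supplied JSON schema, no retry or fixer model needed.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Name a prime number and say why it is prime."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "answer",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "value": {"type": "integer"},
                    "reason": {"type": "string"},
                },
                "required": ["value", "reason"],
                "additionalProperties": False,
            },
        },
    },
)
print(response.choices[0].message.content)
```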

Hugsun•7mo ago
The version of llama.cpp that Llamafile uses supports structured outputs. Don't waste your time with bloat like langchain.

Think about why langchain has dozens of adapters that are all targeting services that describe themselves as OAI compatible, Llamafile included.

I'd bet you could point some of them at Llamafile and get structured outputs.

Note that they can be made 100% reliable when done properly. They're not done properly in this article.
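Something like this should work against a local Llamafile; this is a sketch that assumes the default server on localhost:8080, and whether response_format is honored depends on the bundled llama.cpp version:

```
# Point the stock OpenAI client at Llamafile's OpenAI-compatible endpoint
# and ask for JSON output; no LangChain involved.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")
response = client.chat.completions.create(
    model="LLaMA_CPP",  # Llamafile doesn't require a real model name
    messages=[{"role": "user", "content": 'Reply with a JSON object like {"answer": "..."}.'}],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```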

halyconWays•7mo ago
>Don't waste your time with bloat like langchain.

Amen. See also: "Langchain is Pointless" https://news.ycombinator.com/item?id=36645575

kristjansson•7mo ago
It's right there. In the screenshot in the blog post. Grammar > 'JSON Schema + Convert'. That's what structured output is.

... it's going to be September forever, isn't it?