

A lightweight TypeScript library for assertion-based runtime data validation

https://github.com/nimeshnayaju/decode-kit
30•nayajunimesh•5mo ago

Comments

nayajunimesh•5mo ago
Most validation libraries like Zod create deep clones of your data during validation, which can impact performance in high-throughput applications. I built decode-kit to take a different approach: assertion-based validation that validates and narrows TypeScript types in-place, without any copying or transformation. Here's what the API looks like in practice:

import { object, string, number, validate } from "decode-kit";

// Example of untrusted data (e.g., from an API)
const input: unknown = { id: 123, name: "Alice" };

// Validate the data (throws if validation fails)
validate(input, object({ id: number(), name: string() }));

// `input` is now typed as { id: number; name: string }
console.log(input.id, input.name);
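For context, this kind of in-place narrowing is what TypeScript assertion functions enable; here is a minimal standalone sketch (the assertPerson helper below is illustrative, not part of decode-kit):

// Illustrative only: an assertion function whose `asserts` return type
// tells the compiler to narrow the argument's type in place, instead of
// returning a transformed copy of the data.
type Person = { id: number; name: string };

function assertPerson(value: unknown): asserts value is Person {
  if (
    typeof value !== "object" ||
    value === null ||
    typeof (value as Person).id !== "number" ||
    typeof (value as Person).name !== "string"
  ) {
    throw new TypeError("value does not match { id: number; name: string }");
  }
}

const data: unknown = JSON.parse('{"id": 123, "name": "Alice"}');
assertPerson(data);
// From here on, `data` is typed as Person, with no copying involved.
console.log(data.id, data.name);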

When validation fails, decode-kit is deliberately unopinionated. Rather than being prescriptive about error formatting, it exposes a structured error system with an AST-like path that precisely indicates where validation failed. It includes a sensible default error message for debugging, but you can also traverse the error path to build whatever error handling fits your application - from simple logging to sophisticated user-facing messages.

The library also follows a fail-fast approach, immediately throwing when validation fails, which provides both better performance and clearer error messages by focusing on the first issue encountered.

I'd love to hear your thoughts and feedback on this approach.

kenward•5mo ago
> fail-fast approach, immediately throwing when validation fails

would this mask any errors that would occur later in the validation?

nayajunimesh•5mo ago
With the fail-fast approach, yes - unless we introduce an option to collect all errors. In my own applications, I have found this to be a better default because the 'average' request is valid, and paying a constant overhead just to be thorough on rare invalid cases can be wasteful.

My overall takeaway has mostly been to not optimize for the worst case by default. Keep fail-fast as baseline for boundaries and hot paths, and selectively enable “collect all” where it demonstrably saves human time.

yencabulator•5mo ago
A useful counterexample, where one wants to see all errors, is validating form input. But it's very much ok to say your library is not meant for that use case!
nayajunimesh•5mo ago
That is a good point! How do you feel about fail-fast by default but being able to configure a validator to collect all errors (like Zod does by default)? Something like array(number(), { failFast: false });

We currently expose the error as a tree structure so that it's easy to map an error to a value or build custom error messages (we only provide a debug error message). I haven't yet come up with a satisfactory error API that accommodates multiple error paths, but you raise an excellent point. Thanks for pointing it out.

yencabulator•5mo ago
My instinct is to say keep it small, simple & focused.
scottmas•5mo ago
The whole benefit over zod seems to be perf, so could you do some benchmarking? I wonder if it’s worth it
nayajunimesh•5mo ago
Yes, the primary focus is memory efficiency; performance improvement is a side effect of that. From my own benchmarks, I have found that to be the case. If you're validating thousands of objects per second or working with memory constraints, the difference becomes quite significant. Happy to share the full benchmark code if you'd like to run them yourself!
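For anyone who wants to reproduce something similar locally, here is a rough sketch of a throughput micro-benchmark using only Node's built-in perf_hooks (the decode-kit imports mirror the example at the top of the thread; absolute numbers will depend entirely on hardware, Node version, and data shape):

// Rough micro-benchmark sketch: validate many small objects and report
// elapsed time. Not a rigorous benchmark (no warmup, single run), just a
// starting point for local comparisons.
import { performance } from "node:perf_hooks";
import { object, string, number, validate } from "decode-kit";

const schema = object({ id: number(), name: string() });
const samples: unknown[] = Array.from({ length: 100_000 }, (_, i) => ({
  id: i,
  name: `user-${i}`,
}));

const start = performance.now();
for (const sample of samples) {
  validate(sample, schema);
}
const elapsed = performance.now() - start;
console.log(`${samples.length} validations in ${elapsed.toFixed(1)} ms`);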
typeofhuman•5mo ago
You should include this benchmark in your repo and README if you want to build trust.

I think anything that declares itself as a performance improvement over the competition ought to prove it!

nayajunimesh•5mo ago
Hey, I did some benchmarks if you're interested - benchmark results are in the README.md.

https://github.com/nimeshnayaju/zod (Fork of Zod's repo which already included benchmarks comparing Zod 4 against Zod 3, so I simply integrated my validation library)

https://github.com/nimeshnayaju/valibot-benchmarks (An unofficial benchmark suggested in another comment comparing Valibot against Zod)

revskill•5mo ago
Love it. I hate zod.
esafak•5mo ago
Why?
jasonlernerman•5mo ago
arktype does this too!
matt-attack•5mo ago
Arktype is fantastic.
seabass•5mo ago
I was surprised by the source including a bunch of try/catch, which results in deopts for that code path as far as I understand, given that the stated benefit over Zod and other validators was that this should be run in performance critical code. I’d be curious to see benchmarks that show whether this is faster than zod, valibot, and zod4 mini in hot code paths.
CharlesW•5mo ago
OP, you could potentially leverage an existing benchmark suite: https://naruaway.github.io/valibot-benchmarks/
nayajunimesh•5mo ago
Thank you, this is quite helpful. Will take a detailed look very soon!
typeofhuman•5mo ago
How do you know your tool is more performant?
nayajunimesh•5mo ago
Hey, I did some benchmarks if you're interested - benchmark results are in the README.md.

https://github.com/nimeshnayaju/zod (Fork of Zod's repo which already included benchmarks comparing Zod 4 against Zod 3, so I simply integrated my validation library)

https://github.com/nimeshnayaju/valibot-benchmarks (An unofficial benchmark suggested in another comment comparing Valibot against Zod)

nayajunimesh•5mo ago
Hey, thanks for the link! I was able to add the library to the benchmark quite easily. Here's the fork if you're interested:

https://github.com/nimeshnayaju/valibot-benchmarks

I have included the results I obtained from running the benchmark in the README.md. I'd love for you to also take a look at the changed code to see if I may have missed something with the integration. Curious to hear your feedback too!

CharlesW•5mo ago
Nicely done, and thanks for following up!
nayajunimesh•5mo ago
We do use try/catch in a few places. However, in normal operations (valid input), no exceptions are thrown - the try/catch blocks are present in certain validators but do not execute their catch clauses. AFAIK, modern engines generally don’t impose large steady-state penalties merely for the presence of a try/catch when no exception is thrown; the measurable cost is usually when exceptions are actually thrown. When an element/property fails, the catch is used to construct precise error paths. That’s intentionally trading some failure-path overhead for better developer diagnostics.

In the next few days, I'll prepare benchmarks to compare with Zod and Valibot!

seabass•5mo ago
The information I have on this could be outdated, so take this with a grain of salt, but it used to be the case that in hot code paths the presence of a try/catch would force a deoptimization whether or not you throw. The optimizing compiler in V8, for example, would specifically not run on any functions containing try/catch due to its inability to speculatively inline the optimized code. If you're feeling up to it, you can check whether that is still the case with `d8 --allow-natives-syntax --trace-deopt ./your-script.js` and sprinkle in some `%OptimizeFunctionOnNextCall` in your code.

I did a quick search for `try {` in the Zod 4 source and didn't see anything, so I suspect that the performance issues surrounding try/catch are still at least somewhat around, unless they are simply avoiding try/catch for code cleanliness, which could totally be the case.

Regardless, I'd encourage you to look into whether plain old boolean return values in your validators would work for your project. Just include the `throw` part without all the `try/catch`, and the code itself will likely be simpler, faster, and easier for the JIT to optimize. Good luck on those benchmarks.
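To make that suggestion concrete, here is a rough sketch of the boolean-return pattern, with a single throw at the boundary instead of try/catch inside each validator (a sketch only, not decode-kit's actual internals):

// Sketch of the boolean-return approach: validators record the failing
// path and return false instead of throwing, so the hot path contains no
// try/catch. A single throw happens at the outer boundary on failure.
type Path = (string | number)[];
type Failure = { path?: Path };

function checkNumber(value: unknown, path: Path, failure: Failure): boolean {
  if (typeof value === "number") return true;
  failure.path = path;
  return false;
}

function checkNumberArray(value: unknown, failure: Failure): boolean {
  if (!Array.isArray(value)) {
    failure.path = [];
    return false;
  }
  for (let i = 0; i < value.length; i++) {
    if (!checkNumber(value[i], [i], failure)) return false;
  }
  return true;
}

function validateNumberArray(value: unknown): asserts value is number[] {
  const failure: Failure = {};
  if (!checkNumberArray(value, failure)) {
    throw new TypeError(`validation failed at path ${JSON.stringify(failure.path ?? [])}`);
  }
}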
nayajunimesh•5mo ago
Hey, this comment was actually very helpful. While running benchmarks, I noticed my union validator was consistently underperforming, and your insight about try/catch deoptimization made it much easier to pinpoint the issue. I ended up updating the validation logic to not use try/catch frivolously, and I saw a significant performance improvement.

For benchmarks, I forked Zod, which already included benchmarks comparing Zod 4 against Zod 3. Here are the results (in README.md) if you're interested:

https://github.com/nimeshnayaju/zod

I noticed significantly better performance than both Zod 3 and Zod 4 across most validation scenarios (especially validations involving rules like min/max length), with the exception of simple object parsing.

I also forked another benchmark suggested in a different comment if you're interested (I noticed similar results).

https://github.com/nimeshnayaju/valibot-benchmarks

sizediterable•5mo ago
Need it be said https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
ale•5mo ago
Libraries like these are meant for runtime validation. I agree though. I prefer to use the compiler itself (tsc --noEmit) rather than recreating the validation logic.
mirekrusin•5mo ago
It doesn't compete with the static type system, it complements it. The static type system in TypeScript can't do anything with unknown/any values crossing the I/O boundary - they require a runtime assertion to bring them into the statically typed, safer world.
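A tiny self-contained illustration of the point, using only a hand-rolled assertion (nothing library-specific):

// The compiler can't see into values crossing the I/O boundary:
// JSON.parse effectively gives back `any`, treated here as `unknown`.
const raw: unknown = JSON.parse('{"id": 1}');

// raw.id;  // compile error: 'raw' is of type 'unknown'

// Only a runtime assertion can bring it into the statically typed world.
function assertHasId(v: unknown): asserts v is { id: number } {
  if (typeof v !== "object" || v === null || typeof (v as { id?: unknown }).id !== "number") {
    throw new TypeError("expected an object with a numeric id");
  }
}

assertHasId(raw);
console.log(raw.id); // now typed as number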
tomjakubowski•5mo ago
Libraries like runtypes, zod, et al. market themselves as validation libraries, but they function as parsing libraries in the sense this article means: with them you "parse" untyped POJOs at the I/O boundary and get typed values (or a raised exception) out the other end.

TypeScript language features like branded types and private constructors can make it so those values can only be constructed through the parse method.

They're really not much different, in terms of type safety*, from something like Serde.

*: they are of course different in other important ways -- like that Serde can flexibly work with all kinds of serialized formats.
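A minimal sketch of that pattern with a branded type, so values can only be produced by the parse function (names here are illustrative, not taken from any of the libraries mentioned):

// Illustrative branded type: the brand exists only at compile time;
// at runtime an Email is a plain string, but the compiler will only
// accept values produced by parseEmail.
declare const emailBrand: unique symbol;
type Email = string & { readonly [emailBrand]: true };

function parseEmail(input: unknown): Email {
  if (typeof input !== "string" || !input.includes("@")) {
    throw new TypeError("not an email address");
  }
  return input as Email;
}

function sendWelcome(to: Email): void {
  console.log(`sending welcome mail to ${to}`);
}

sendWelcome(parseEmail("alice@example.com")); // ok
// sendWelcome("not-an-email");               // compile error: string is not Email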

nayajunimesh•5mo ago
We actually use the idea of branded types in one of the validators (iso8601), and I also understand that it doesn't replace fully transformed values.

https://github.com/nimeshnayaju/valleys?tab=readme-ov-file#i...

nayajunimesh•5mo ago
I understand completely, and the library is intentionally unopinionated in that regard. We simply ensure that the value passed matches the provided schema and ruleset and refine the type in-place.

In certain cases (like validating that an input is in ISO 8601 format), we refine the input type to a branded type (we have an Iso8601 branded type). At runtime it's just a string, but at compile time TypeScript treats it as a distinct type that can only be obtained through validation. But it is still not transforming or parsing the data in the way that the blog post intends, which is by design.

https://github.com/nimeshnayaju/valleys?tab=readme-ov-file#i...

orangee•5mo ago
assertion-based API is a neat concept
cluckindan•5mo ago
This would be way more powerful if there was a way to infer a validator from a TypeScript type.
gr4vityWall•5mo ago
It would be nice to provide some benchmarks comparing it to Zod, arktype, etc. Comparing it across different runtimes (Node.js, Bun, web browsers, etc.) would be great, too.
nayajunimesh•5mo ago
Hey, I did some benchmarks if you're interested - benchmark results are in the README.md.

https://github.com/nimeshnayaju/zod (Fork of Zod's repo which already included benchmarks comparing Zod 4 against Zod 3, so I simply integrated my validation library)

https://github.com/nimeshnayaju/valibot-benchmarks (An unofficial benchmark from another user comparing Valibot against Zod)

gr4vityWall•5mo ago
Thanks, I appreciate that you added them. :)

They show a lot of potential and promising results. Any guess why Valley is slower for parsing objects with primitive values?