frontpage.

Show HN: LocalGPT – A local-first AI assistant in Rust with persistent memory

https://github.com/localgpt-app/localgpt
47•yi_wang•2h ago•18 comments

Haskell for all: Beyond agentic coding

https://haskellforall.com/2026/02/beyond-agentic-coding
12•RebelPotato•1h ago•2 comments

SectorC: A C Compiler in 512 bytes (2023)

https://xorvoid.com/sectorc.html
227•valyala•9h ago•43 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
136•surprisetalk•9h ago•142 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
172•mellosouls•12h ago•326 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
56•gnufx•8h ago•54 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
22•chwtutha•29m ago•2 comments

Do you have a mathematically attractive face?

https://www.doimog.com
5•a_n•1h ago•8 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
151•vinhnx•12h ago•16 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
172•AlexeyBrin•15h ago•31 comments

IBM Beam Spring: The Ultimate Retro Keyboard

https://www.rs-online.com/designspark/ibm-beam-spring-the-ultimate-retro-keyboard
13•rbanffy•4d ago•4 comments

First Proof

https://arxiv.org/abs/2602.05192
118•samasblack•12h ago•74 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
91•randycupertino•5h ago•194 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
292•jesperordrup•20h ago•94 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
66•momciloo•9h ago•13 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
96•thelok•11h ago•21 comments

Show HN: Axiomeer – An open marketplace for AI agents

https://github.com/ujjwalredd/Axiomeer
7•ujjwalreddyks•5d ago•2 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
33•swah•4d ago•76 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
33•mbitsnbites•3d ago•2 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
563•theblazehen•3d ago•206 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
278•1vuio0pswjnm7•16h ago•457 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
118•josephcsible•7h ago•141 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
105•zdw•3d ago•54 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
178•valyala•9h ago•165 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
28•languid-photic•4d ago•9 comments

Eigen: Building a Workspace

https://reindernijhoff.net/2025/10/eigen-building-a-workspace/
10•todsacerdoti•4d ago•3 comments

The silent death of good code

https://amit.prasad.me/blog/rip-good-code
74•amitprasad•4h ago•75 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
115•onurkanbkrc•14h ago•5 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
897•klaussilveira•1d ago•274 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
224•limoce•4d ago•124 comments

Vortex: An extensible, state of the art columnar file format

https://github.com/vortex-data/vortex
115•tanelpoder•2mo ago

Comments

sys13•2mo ago
How does this compare with delta lake and iceberg?
oa335•2mo ago
Vortex is a file format, whereas Delta Lake and Iceberg are table formats, so it should be compared to Parquet rather than to Delta Lake or Iceberg. This guest lecture by a maintainer of Vortex provides a good overview of the file format, the motivations for its creation, and its key features.

https://www.youtube.com/watch?v=zyn_T5uragA

sys13•2mo ago
I think it would still make sense to compare with those table formats, or is the idea that you would only use this if you could not use a table format?
bz_bz_bz•2mo ago
That’s like comparing words with characters.

Vortex is, roughly, how you save data to files and Iceberg is the database-like manager of those files. You’ll soon be able to run Iceberg using Vortex because they are complementary, not competing, technologies.
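To make that concrete, here is a rough Python sketch of the split (the file names and the toy JSON "manifest" are invented for illustration; real Iceberg/Delta metadata is far richer): the file format decides how one file encodes columnar data, while the table format is metadata that tracks many such files.

    # Toy illustration only -- names and layout are made up for this sketch.
    import json
    import pyarrow as pa
    import pyarrow.parquet as pq

    # File format's job: how a single file encodes a batch of columnar data.
    batch = pa.table({"id": [1, 2, 3], "price": [9.5, 3.2, 7.1]})
    pq.write_table(batch, "data-00000.parquet")  # a Vortex writer would slot in here

    # Table format's job: track which files make up the table, plus schema,
    # snapshots, partitions, etc. It never looks inside the files' bytes.
    manifest = {
        "schema": [f.name for f in batch.schema],
        "snapshots": [{"id": 1, "files": ["data-00000.parquet"]}],
    }
    with open("toy-table-manifest.json", "w") as fh:
        json.dump(manifest, fh)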

ks2048•2mo ago
The website could use a comparison with, and motivation relative to, Parquet (beyond just stating it's 100x better).
3eb7988a1663•2mo ago
Agreed, this really needs a tl;dr, because Parquet is boring technology. It's going to require quite the sales pitch to get people to move. At minimum, I assume it will be years before I could expect native integration in pandas/polars/etc., which would make it low-effort enough to consider.

Parquet is... fine, I guess. It is good enough. Why invoke churn? Sell me on the vision.

frisbm•2mo ago
DuckDB just added support for Vortex in their last release using the Vortex Python package, so hopefully other tools won't be too far behind.
bsder•2mo ago
> Going to require quite the sales pitch to move.

Mutability would be one such pitch I would like to see ...

cpard•2mo ago
As others said, Vortex is complementary to the table formats you mentioned.

There are other formats though that it can be compared to.

The Lance columnar format is one: https://github.com/lancedb/lancedb

And Nimble from Meta is another: https://github.com/facebookincubator/nimble

Parquet is so core to data infra and so widespread that removing it from its throne is a really, really hard task.

The people behind these projects who are willing to try to do this have my total respect.

nahnahno•2mo ago
how does this compare to Arrow IPC / Feather v2?
rubenvanwyk•2mo ago
I've never understood why people say the Feather file format isn't meant for "long-term" storage and prefer Parquet for that. Access is much faster from Feather; compression is better with Parquet, but Feather is really good.
sheepscreek•2mo ago
Honestly, I think Arrow makes Feather redundant. To answer your question: Parquet is optimized for storage on disk - it can store with compression to take less space, and it might include clever tricks or some form of indices to query data from the file. Feather, on the other hand, is optimized for loading into memory. It uses the same representation on disk as it does in memory, with very little in the way of compression (if any), and it is not optimized for disk size at all. BUT you can memory-map a Feather file and randomly access any part of it in O(1) time (I believe, but do your own due diligence :)
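A quick pyarrow sketch of that trade-off (file names and data are arbitrary): Parquet gets encoded and compressed, while Feather/Arrow IPC can be memory-mapped and sliced without a full load.

    import pyarrow as pa
    import pyarrow.feather as feather
    import pyarrow.parquet as pq

    table = pa.table({"x": list(range(1_000_000))})

    # Parquet: encoded and compressed, smaller on disk, must be decoded to read.
    pq.write_table(table, "data.parquet", compression="zstd")

    # Feather (Arrow IPC): same layout on disk as in memory, so it can be
    # memory-mapped and accessed without decoding everything first.
    feather.write_feather(table, "data.feather", compression="uncompressed")

    with pa.memory_map("data.feather") as source:
        mapped = pa.ipc.open_file(source).read_all()  # zero-copy view over the mmap
        print(mapped.slice(500_000, 5).to_pydict())   # random access into the middle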
ozgrakkurt•2mo ago
It is wildly more complex.
kipukun•2mo ago
The cuDF interop in the roadmap [1] will be huge for my workloads. XGBoost has the fastest inference time on GPUs, so a fast path straight from these Vortex files to GPU memory seems promising.

[1] https://github.com/vortex-data/vortex/issues/2116

reactordev•2mo ago
Can you explain how it's faster? GPU memory is just a blob with an address. Is it because the loading algorithms for Vortex align better with XGBoost, or is it just plain uploading to the GPU?
robert3005•2mo ago
What you can do if you have a GPU-friendly format is send compressed data over PCI-E and then decompress it on the GPU. Your overall throughput will increase, since PCI-E bandwidth is the limiting factor of the overall system.
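Back-of-envelope version of that argument (the bandwidth and compression ratio below are assumptions, not measurements):

    # If the PCI-E link is the bottleneck, decoded bytes delivered per second is
    # roughly link bandwidth times compression ratio.
    pcie_gb_s = 16            # assumed usable PCI-E bandwidth, GB/s
    compression_ratio = 3.0   # assumed decoded bytes per byte sent over the bus

    send_decoded = pcie_gb_s                          # 16 GB/s of usable data
    send_compressed = pcie_gb_s * compression_ratio   # 48 GB/s of usable data
    print(send_decoded, send_compressed)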
reactordev•2mo ago
That doesn't explain how Vortex is faster. Yes, you should send compressed data to the GPU and let it decompress. You should maximize your PCI-E throughput to minimize latency in execution, but what does Vortex bring? Other than "Parquet bad, Vortex good".
kipukun•2mo ago
XGBoost is just faster on the GPU, regardless of the file format. A sibling post also pointed out compression helping out on bandwidth.
xigoi•2mo ago
Can we stop with the cringe emojis at the start of every heading?
mrbluecoat•2mo ago
I guess not surprising from a project that combines Polars & Vortex
kh_hk•2mo ago
I tend to agree, but I don't see this one as one of the worst offenders, unless I am missing something.

This readme has what, two or three emojis max? Compare that to most LLM-generated readmes, with a zillion emojis for every single feature.

xigoi•2mo ago
They seem to have removed the emojis since I posted my comment: https://github.com/vortex-data/vortex/commit/8294dd665869a72...
kh_hk•2mo ago
Thanks
rubenvanwyk•2mo ago
Vortex and Lance both seem really cool but will have to infiltrate either the Delta or Iceberg specs to become mainstream.
robert3005•2mo ago
Can’t wait for https://github.com/apache/iceberg/issues/12225 to merge so there’s an api to integrate against
meehai•2mo ago
Can you append new columns to a file stored on disk without reading it all into memory? Somehow this is beyond Parquet's capabilities.
robert3005•2mo ago
The default writer will decompress the values; however, right now you can implement your own write strategy that avoids doing that. We plan on adding that as an option since it's quite common.
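For contrast, the usual Parquet route for adding a column is a full read-modify-rewrite, which is exactly what the question above is trying to avoid (a small pyarrow sketch; the file name and column are arbitrary):

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Parquet has no in-place column append: the whole file is read back,
    # widened in memory, and rewritten.
    table = pq.read_table("data.parquet")                                # full read
    table = table.append_column("discount", pa.array([0.1] * len(table)))
    pq.write_table(table, "data.parquet")                                # full rewrite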
andyferris•2mo ago
One thing I found interesting is the logical type system doesn't seem to include sum types or unions, unlike Arrow etc.

I'd generally encourage new type systems to include sum types as a first-class concept.

infogulch•2mo ago
I wonder if a columnar storage format should implement sum types with a struct of arrays where only one array has a non-null value at each index.
ozgrakkurt•2mo ago
Arrow has two variants of this, and that is one of them. The other variant has a separate offsets array that you use to index into the active "field" array, so it is slower to process in most cases but more compact.
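The two Arrow layouts described above, sketched with pyarrow (a toy Int | String union; the values are made up):

    import pyarrow as pa

    # Which child each row uses: 0 = int child, 1 = string child.
    type_ids = pa.array([0, 1, 0], type=pa.int8())

    # Sparse union: every child array is row-aligned with the parent, so unused
    # slots are just nulls -- simple to process, not compact.
    sparse = pa.UnionArray.from_sparse(
        type_ids,
        [pa.array([1, None, 2]), pa.array([None, "a", None])],
    )

    # Dense union: each child stores only its own values, and a separate offsets
    # array indexes into the active child -- more compact, one extra indirection.
    offsets = pa.array([0, 0, 1], type=pa.int32())
    dense = pa.UnionArray.from_dense(
        type_ids,
        offsets,
        [pa.array([1, 2]), pa.array(["a"])],
    )
    print(sparse.to_pylist(), dense.to_pylist())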