
Ask HN: Distributed SQL engine for ultra-wide tables

23•synsqlbythesea•3w ago
I ran into a practical limitation while working on ML feature engineering and multi-omics data.

At some point, the problem stops being “how many rows” and becomes “how many columns”. Thousands, then tens of thousands, sometimes more.

What I observed in practice:

- Standard SQL databases usually cap out around ~1,000–1,600 columns.
- Columnar formats like Parquet can handle width, but typically require Spark or Python pipelines.
- OLAP engines are fast, but tend to assume relatively narrow schemas.
- Feature stores often work around this by exploding data into joins or multiple tables.

At extreme width, metadata handling, query planning, and even SQL parsing become bottlenecks.

I experimented with a different approach:

- no joins
- no transactions
- columns distributed instead of rows
- SELECT as the primary operation

With this design, it’s possible to run native SQL selects on tables with hundreds of thousands to millions of columns, with predictable (sub-second) latency when accessing a subset of columns.

On a small cluster (2 servers, AMD EPYC, 128 GB RAM each), rough numbers look like:

- creating a 1M-column table: ~6 minutes
- inserting a single column with 1M values: ~2 seconds
- selecting ~60 columns over ~5,000 rows: ~1 second
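
As a back-of-envelope check on those numbers (a rough sketch assuming 8-byte numeric values; illustrative arithmetic, not additional measurements):

    # Rough throughput implied by the numbers above (illustrative only).
    create_cols, create_secs = 1_000_000, 6 * 60
    insert_vals, insert_secs = 1_000_000, 2
    select_cells, select_secs = 60 * 5_000, 1

    print(f"table creation: {create_cols / create_secs:,.0f} columns/s of DDL")   # ~2,800 columns/s
    print(f"column insert:  {insert_vals / insert_secs:,.0f} values/s")           # ~500,000 values/s
    print(f"select:         {select_cells / select_secs:,.0f} cells/s "
          f"(~{select_cells * 8 / 1e6:.1f} MB at 8 bytes/value)")                 # ~300,000 cells/s, ~2.4 MB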

I’m curious how others here approach ultra-wide datasets. Have you seen architectures that work cleanly at this width without resorting to heavy ETL or complex joins?

Comments

icsa•3w ago
> With this design, it’s possible to run native SQL selects on tables with hundreds of thousands to millions of columns, with predictable (sub-second) latency when accessing a subset of columns.

What is the design?

synsqlbythesea•3w ago
In a few words: table data is stored on hundreds of MariaDB servers. Each table has user-designed hash key columns (1 to 32) to manage automatic partitioning. Wide tables are split into chunks: 1 chunk = the hash key + a subset of columns = one MariaDB server. The data dictionary is stored on mirrored, dedicated MariaDB servers. The engine itself uses a massive fork policy. In my lab, the k1000 table is stored on 500 chunks. I used a small trick: where I say 1 MariaDB server, you can instead use one database within a MariaDB server. So I have only 20 VMware Linux servers, each containing 25 databases.
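
A minimal sketch of that layout (hypothetical names, not the actual engine code): the data dictionary maps each column to the chunk that holds it, a SELECT fans out only to the chunks holding the requested columns, and the partial results are merged on the hash key.

    from collections import defaultdict

    # Hypothetical data dictionary: column name -> chunk id.
    # One chunk = the hash key plus a slice of columns, living in one MariaDB database.
    data_dictionary = {f"col_{i}": i // 200 for i in range(100_000)}   # 500 chunks

    def plan_select(columns, where_sql):
        """Group the requested columns by chunk and emit one SELECT per chunk."""
        by_chunk = defaultdict(list)
        for col in columns:
            by_chunk[data_dictionary[col]].append(col)
        return {chunk: f"SELECT hash_key, {', '.join(cols)} "
                       f"FROM chunk_{chunk}.wide_table WHERE {where_sql}"
                for chunk, cols in by_chunk.items()}

    # Only the chunks that actually hold the requested columns are touched;
    # the partial result sets are then merged on hash_key.
    for chunk, sql in plan_select(["col_42", "col_99999", "col_12345"],
                                  "hash_key IN ('p1', 'p2')").items():
        print(chunk, "->", sql)
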
remywang•3w ago
What are the columns and why are there so many of them? The standard approach is to explode into many tables and introduce joins as you said. Why don’t you want joins?
anotherpaul•3w ago
I am speculating here, but as it's genomics data I assume it's information such as gene counts and epigenetic information (methylation, histones, etc.). Once you multiply 20k genes by a few post-translational modifications, you get to a lot of columns quickly.

Usually this would be stored in a sparse long form though. So I might be wrong.

hobs•3w ago
If you want to do that, why not just do an EAV pattern or something else that can translate rows to columns?
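
For reference, an EAV (entity-attribute-value) layout stores one row per (entity, attribute) pair and pivots the attributes of interest back into columns at query time. A minimal sqlite3 sketch with illustrative names:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE features (entity_id TEXT, attribute TEXT, value REAL)")
    con.executemany("INSERT INTO features VALUES (?, ?, ?)",
                    [("p1", "gene_TP53", 4.2), ("p1", "snp_rs42", 1.0),
                     ("p2", "gene_TP53", 3.1)])

    # Pivot only the attributes of interest back into columns at query time;
    # the table itself never needs one column per feature.
    rows = con.execute("""
        SELECT entity_id,
               MAX(CASE WHEN attribute = 'gene_TP53' THEN value END) AS gene_TP53,
               MAX(CASE WHEN attribute = 'snp_rs42'  THEN value END) AS snp_rs42
        FROM features
        GROUP BY entity_id
    """).fetchall()
    print(rows)   # e.g. [('p1', 4.2, 1.0), ('p2', 3.1, None)]
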
jamesblonde•3w ago
If they are exploding categorical variables using OHE and storing the columns - that is the wrong thing to do. You should only ever store untransformed feature data in tables. You apply the feature transformations, like OHE, on reading from the tables, as those transformations are parameterized by the data you read (the training data subset you select).

Reference: https://www.hopsworks.ai/post/a-taxonomy-for-data-transforma...
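
A small sketch of that distinction (hypothetical column names, scikit-learn used purely for illustration): the table stores the raw categorical value, and the one-hot encoding is fitted on whatever training subset is read, rather than being persisted as extra columns.

    import pandas as pd
    from sklearn.preprocessing import OneHotEncoder

    # Stored, untransformed feature data: one raw categorical column, not N dummy columns.
    stored = pd.DataFrame({"patient_id": ["p1", "p2", "p3"],
                           "smoking_status": ["never", "former", "current"]})

    # The transformation is parameterized by the subset read for training ...
    train = stored[stored.patient_id.isin(["p1", "p2"])]
    enc = OneHotEncoder(handle_unknown="ignore")
    enc.fit(train[["smoking_status"]])

    # ... and applied on read, so the wide one-hot columns never live in the table.
    print(enc.get_feature_names_out())
    print(enc.transform(stored[["smoking_status"]]).toarray())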

minitoar•3w ago
ClickHouse and Scuba address this. The core idea is the data layout on disk only requires the scan to open files or otherwise access data for the columns referenced in that query.
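
A toy illustration of that layout idea (not ClickHouse's or Scuba's actual on-disk format): one file per column, so a scan opens only the files for the columns a query references.

    import json, pathlib

    base = pathlib.Path("toy_table")
    base.mkdir(exist_ok=True)

    # One file per column: width is cheap to store, and nothing ever reads it all.
    table = {"age": [34, 51], "gene_TP53": [4.2, 3.1], "gene_BRCA1": [0.0, 2.7]}
    for name, values in table.items():
        (base / f"{name}.col").write_text(json.dumps(values))

    def scan(columns):
        # Only the files for the referenced columns are opened.
        return {c: json.loads((base / f"{c}.col").read_text()) for c in columns}

    print(scan(["age", "gene_TP53"]))   # gene_BRCA1.col is never touched
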
synsqlbythesea•3w ago
Thanks — both are great systems.

ClickHouse and Scuba are extremely good at what they’re designed for: fast OLAP over relatively narrow schemas (dozens to hundreds of columns) with heavy aggregation.

The issue I kept running into was extreme width: tens or hundreds of thousands of columns per row, where metadata handling, query planning, and even column enumeration start to dominate.

In those cases, I found that pushing width this far forces very different tradeoffs (e.g. giving up joins and transactions, distributing columns instead of rows, and making SELECT projection part of the contract).

If you’ve seen ClickHouse or Scuba used successfully at that kind of width, I’d genuinely be interested in the details.

minitoar•3w ago
Scuba could handle 100,000 columns, probably more. But yes, the model is that you have one table, you can only do self-joins, it's more or less append-only, and you typically access only maybe dozens of columns in a single query.

Feel free to email if you want to chat more.

kentm•3w ago
What engine and data format were you using for your experiment?

You mention Parquet and Spark, but I’m wondering if you tried any of the “Lakehouse” formats that are basically Parquet + a metadata layer (i.e. Iceberg). I’d probably at least give Trino or Presto a shot, although I suspect that you’ll have similar metadata issues with those engines.

mamcx•3w ago
Yeah, this is a hard problem, in particular because standard SQL databases only partially implement the relational model, have no good way to deal with relations-in-relations, and lack ways to build your own storage in user space (all things I dream of tackling).

I think one possible answer is to try to "compress" columns with custom datatypes. It can require touching part of the innards of the SQL engine (in PostgreSQL you need to solve it with C), but it is a viable option in many cases: what you would express in JSON, for example, is in fact a custom type that could be stored efficiently if there is a way to translate it into more primitive types, and once that is solved the indexes will work.
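
A rough user-space sketch of that idea (hypothetical panel and names, Python instead of PostgreSQL C): a block of related numeric columns is packed into one binary value and unpacked on read, so the table sees one column instead of thousands.

    import struct

    # Pack a fixed group of related numeric "columns" into one binary blob.
    GENE_PANEL = ["TP53", "BRCA1", "EGFR"]          # in reality: thousands of names
    fmt = f"<{len(GENE_PANEL)}d"                    # little-endian doubles

    def pack_panel(values_by_gene):
        return struct.pack(fmt, *(values_by_gene[g] for g in GENE_PANEL))

    def unpack_gene(blob, gene):
        # Random access into the blob without unpacking everything.
        offset = GENE_PANEL.index(gene) * 8
        return struct.unpack_from("<d", blob, offset)[0]

    blob = pack_panel({"TP53": 4.2, "BRCA1": 0.0, "EGFR": 2.7})   # stored as one column
    print(len(blob), unpack_gene(blob, "EGFR"))                    # 24 2.7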

The second option is to hide part of the join complexity with views.

pedrini210•3w ago
Check out the Vortex file format (https://vortex.dev/). If you are interested in a distributed SQL engine, you can check SpiralDB (https://spiraldb.com/); I haven’t used that one personally, but they created Vortex.

If you can drop the “distributed” part, then plug in DuckDB (https://duckdb.org/) and query Parquet (out of the box) or Vortex (https://duckdb.org/docs/stable/core_extensions/vortex.html) with it.
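
A quick sketch of the non-distributed route (hypothetical file and column names; assumes pyarrow is installed for Parquet writing): DuckDB reads Parquet column by column, so selecting a handful of columns from a very wide file only touches those column chunks.

    import duckdb
    import numpy as np
    import pandas as pd

    # A moderately wide Parquet file for illustration: 1,000 rows x 10,000 columns.
    cols = [f"col_{i}" for i in range(10_000)]
    pd.DataFrame(np.random.rand(1_000, 10_000), columns=cols).to_parquet("wide.parquet")

    # Parquet is columnar, so this query only reads the two referenced column chunks.
    print(duckdb.sql("SELECT col_42, col_9999 FROM 'wide.parquet' LIMIT 5"))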

didgetmaster•3w ago
Is there really a market for these kinds of relational tables?

I created a system to support my custom object store where the metadata tags are stored within key-value stores. I can use them to create relational tables and query them just like conventional row stores used by many popular database engines.

My 'columnar store database' can handle many thousands of columns within a single table. So far, I have only tested it out to 10,000 columns, but it should handle many more.

I can get sub-second query times against it running on a single desktop. I haven't promoted this feature, since no one I have talked to about it has ever had a compelling use for it.

synsqlbythesea•3w ago
That’s a fair question!

A concrete case where this comes up is multi-omics research. A single study routinely combines ~20k gene expression values, 100k–1M SNPs, thousands of proteins and metabolites, plus clinical metadata — all per patient.

Today, this data is almost never stored in relational tables. It lives in files and in-memory matrices, and a large part of the work is repeatedly rebuilding wide matrices just to explore subsets of features or cohorts.

In that context, a “wide table” isn’t about transactions or joins — it’s about having a persistent, queryable representation of a matrix that already exists conceptually. Integration becomes “load patients”, and exploration becomes SELECT statements.

I’m not claiming this fits every workload, but based on how much time is currently spent on data reshaping in multi-omics, I’m confident there is a real need for this kind of model.

didgetmaster•3w ago
Interesting. Are you willing to try out some 'experimental' software?

As I indicated in my previous post, I have a unique kind of data management system that I have built over the years as a hobby project.

It was originally designed to be a replacement for conventional file systems. It is an object store where you could store millions or billions of files in a single container and attach metadata tags to each one. Searches for data could be based on these tags. I had to design a whole new kind of metadata manager to handle these tags.

Since thousands or millions of different kinds of tags could be defined, each with thousands or millions of unique values within them, the whole system started to look like a very wide, sparse relational table.

I found that I could use the individual 'columnar stores' that I built, to also build conventional database tables. I was actually surprised at how well it worked when I started benchmarking it against popular database engines.

I would test my code by downloading and importing various public datasets and then doing analytics against that data. My system does both analytic and transactional operations pretty well.

Most of the datasets only had a few dozen columns and many had millions of rows, but I didn't find any with over a thousand columns.

As I said before, I had previously only tested it out to 10,000 columns. But since reading your original question, I started to play with large numbers of columns.

After tweaking the code, I got it to create tables with up to a million columns and add some random test data to them. A 'SELECT *' query against such a table can take a long time, but doing some queries where only a few dozen of the columns were returned, worked very fast.

How many patients were represented in your dataset? I assume that most rows did not have a value in every column.

didip•3w ago
Try StarRocks. I am totally not affiliated with them but I have investigated them deeply in the past.

That said, I have never seen 1 million columns.

jinjin2•3w ago
Exasol is another MPP database that easily handles super-wide tables, and does all the distribution across nodes for you.

It used to only be available for big enterprises, but now there is a totally free version you can try out: https://www.exasol.com/personal

synsqlbythesea•3w ago
From what I understand, Exasol is a very fast analytical database for traditional data warehouses. My engine doesn't replace a data warehouse; it solves a type of table that data warehouses simply can't handle: tables with hundreds of thousands or millions of columns with an access model that guarantees interactive response times even in these extreme cases.
bnprks•3w ago
With genomics, your data is probably write ~once, almost entirely numeric, and is most likely used for single-client offline analysis. This differs a lot from what most SQL databases are optimizing for.

My best experience has been ignoring SQL and using (sparse) matrix formats for the genomic data itself, possibly combined with some small metadata tables that can fit easily in existing solutions (often even in memory). Sparse matrix formats like CSC/CSR can store numeric data at ~12 bytes per non-zero entry, so a single one of your servers should handle 10B data points in RAM and another 10x that comfortably on a local SSD. Maybe no need to pay the cost of going distributed?
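
To make the ~12 bytes per non-zero concrete, a small scipy sketch (the exact overhead depends on dtypes): CSR stores one value and one column index per non-zero entry, plus a small per-row pointer array.

    import numpy as np
    from scipy import sparse

    # 10k rows x 100k columns at 0.1% density: only non-zero entries cost memory.
    m = sparse.random(10_000, 100_000, density=0.001, format="csr", dtype=np.float64)

    nbytes = m.data.nbytes + m.indices.nbytes + m.indptr.nbytes
    print(f"{m.nnz:,} non-zeros, {nbytes / m.nnz:.1f} bytes each, {nbytes / 1e6:.0f} MB")
    # ~12 bytes per non-zero (8-byte float64 value + 4-byte int32 column index),
    # so 10B non-zeros is roughly 120 GB, i.e. about one of the 128 GB servers above.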

Self plug: if you're in the single cell space, I wrote a paper on my project BPCells which has some storage format benchmarks up to a 60k column, 44M row RNA-seq matrix.

perrohunter•3w ago
I think this is where Array Databases shine, like https://github.com/TileDB-Inc/TileDB