
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
576•klaussilveira•10h ago•167 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
889•xnx•16h ago•540 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
91•matheusalmeida•1d ago•20 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
18•helloplanets•4d ago•10 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
21•videotopia•4d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
197•isitcontent•11h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
199•dmpetrov•11h ago•91 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
307•vecti•13h ago•136 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
352•aktau•17h ago•175 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
350•ostacke•17h ago•91 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
453•todsacerdoti•19h ago•228 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
20•romes•4d ago•2 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
79•quibono•4d ago•18 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
52•kmm•4d ago•3 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
253•eljojo•13h ago•153 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
388•lstoll•17h ago•263 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
5•bikenaga•3d ago•1 comment

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
231•i5heu•13h ago•175 comments

Zlob.h: 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
12•neogoose•3h ago•7 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
68•phreda4•10h ago•12 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
24•gmays•6h ago•6 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
116•SerCe•7h ago•94 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
135•vmatsiiako•16h ago•59 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
43•gfortaine•8h ago•13 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
268•surprisetalk•3d ago•36 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
168•limoce•3d ago•87 comments

I now assume that all ads on Apple News are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1039•cdrnsf•20h ago•431 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
60•rescrv•18h ago•22 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
88•antves•1d ago•63 comments

Generalized K-Means Clustering

https://github.com/derrickburns/generalized-kmeans-clustering
41•derrickrburns•3mo ago

Comments

derrickrburns•3mo ago
*Generalized K-Means Clustering for Apache Spark with Bregman Divergences*

I've built a production-ready K-Means library for Apache Spark that supports multiple distance functions beyond Euclidean.

*Why use this instead of Spark MLlib?*

MLlib's KMeans is hard-coded to Euclidean distance, which is mathematically wrong for many data types:

- *Probability distributions* (topic models, histograms): KL divergence is the natural metric. Euclidean treats [0.5, 0.3, 0.2] and [0.49, 0.31, 0.2] as similar even though they represent different distributions.
- *Audio/spectral data*: Itakura-Saito respects multiplicative power spectra. Euclidean incorrectly treats -20dB and -10dB as closer than -10dB and 0dB.
- *Count data* (traffic, sales): Generalized-I divergence for Poisson-distributed data.
- *Outlier robustness*: L1/Manhattan gives median-based clustering vs mean-based (L2).

Using the wrong divergence yields mathematically valid but semantically meaningless clusters.
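The probability-distribution point is easy to see numerically. Here is a plain-Python sketch (independent of the library; `kl` and `sq_euclidean` are illustrative helpers, not its API) comparing two pairs of distributions that look equally "close" to Euclidean distance but not to KL:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def sq_euclidean(p, q):
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

# Both pairs differ by moving 0.01 of probability mass between the first
# two components, so their squared-Euclidean gaps are identical...
a, b = [0.50, 0.30, 0.20], [0.49, 0.31, 0.20]
c, d = [0.02, 0.78, 0.20], [0.01, 0.79, 0.20]

# ...but KL sees the second pair as far more different, because halving
# a component's mass (0.02 -> 0.01) is a large relative change.
print(sq_euclidean(a, b), sq_euclidean(c, d))
print(kl(a, b), kl(c, d))
```

The same per-coordinate perturbation can be negligible or drastic depending on where the mass sits, which is exactly what a divergence matched to the data type captures.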

*Available divergences:* KL, Itakura-Saito, L1/Manhattan, Generalized-I, Logistic Loss, Squared Euclidean

*What's included:*

- 6 algorithms: GeneralizedKMeans, BisectingKMeans, XMeans (auto k), SoftKMeans (fuzzy), StreamingKMeans, KMedoids
- Drop-in MLlib replacement (same DataFrame API)
- 740 tests, deterministic behavior, cross-version persistence (Spark 3.4↔3.5, Scala 2.12↔2.13)
- Automatic optimization (broadcast vs crossJoin based on k×dim to avoid OOM)
- Python and Scala APIs

*Example:*

```scala
// Clustering topic distributions from LDA
val topics: DataFrame = // probability vectors

// WRONG: MLlib with Euclidean
new org.apache.spark.ml.clustering.KMeans()
  .setK(10)
  .fit(topics)

// CORRECT: KL divergence for probabilities
new GeneralizedKMeans()
  .setK(10)
  .setDivergence("kl")
  .fit(topics)

// For standard data, drop-in replacement:
new GeneralizedKMeans()
  .setDivergence("squaredEuclidean")
  .fit(numericData)
```

*Quick comparison:*

| Use Case | MLlib | This Library |
|----------|-------|--------------|
| General numeric | L2 | L2 (compatible) |
| Probability distributions | Wrong | KL divergence |
| Outlier-robust | | L1 or KMedoids |
| Auto k selection | | XMeans (BIC/AIC) |
| Fuzzy clustering | | SoftKMeans |

*Performance:* ~870 pts/sec (squared Euclidean), ~3,400 pts/sec (KL) on modest hardware. Scales to billions of points with automatic strategy selection.

*Production-ready:*

- Cross-version model persistence
- Scalability guardrails (chunked assignment)
- Determinism tests (same seed → identical results)
- Performance regression detection
- Executable documentation

GitHub: https://github.com/derrickburns/generalized-kmeans-clusterin...

This started as an experiment to understand Bregman divergences. Surprisingly, KL divergence is often faster than Euclidean for probability data. Open to feedback!

seanhunter•3mo ago
For people who are unfamiliar, k-means is a partitioning algorithm that aims to group observations into a specific number (k) of clusters in such a way that each observation ends up in the cluster with the "nearest" mean. So if you ask for 5 groups, it makes five groups such that every observation sits in the group whose mean it is nearest to.

And so that raises the question of what "nearest" means, and here this allows you to replace Euclidean distance with things like Kullback-Leibler divergence (that's the KL below), which make more sense than Euclidean distance if you're trying to measure how close two probability distributions are to each other.
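To make that concrete, here is a toy pure-Python Lloyd's iteration with a pluggable "nearest" measure (an illustrative sketch, not the library's implementation; all names are made up). A convenient fact from the Bregman-divergence literature (Banerjee et al.) is that for any Bregman divergence the arithmetic mean remains the optimal cluster center, so only the assignment step changes:

```python
import random

def kmeans(points, k, dist, iters=20, seed=0):
    """Lloyd's algorithm with a pluggable divergence for assignment.
    The update step (take the mean) is unchanged, since the mean is the
    divergence-minimizing center for every Bregman divergence."""
    centers = random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist(p, centers[j]))
            clusters[nearest].append(p)
        centers = [tuple(sum(xs) / len(c) for xs in zip(*c)) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

def sq_euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
       (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
centers = kmeans(pts, 2, sq_euclidean)  # recovers the two obvious blobs
```

Swapping `sq_euclidean` for a KL-style divergence changes only the assignment criterion, which is the essence of what the library generalizes.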

nurettin•3mo ago
> And so that raises the question of what “nearest” means

To me, the definition of "nearest" is just a technicality.

The real question is: what is K?

mentalgear•3mo ago
Have you tried HDBSCAN (DBSCAN variant) or Hierarchical Clustering (HAC) ?
nurettin•3mo ago
Me? I have probably tried every clustering algorithm and their hierarchical variants. I still think "What is K?" is a profound question.
keeeba•3mo ago
I agree it is a profound question. My thesis is fairly boring.

For any given clustering task of interest, there is no single value of K.

Clustering & unsupervised machine learning is as much about creating meaning and structure as it is about discovering or revealing it.

Take the case of biological taxonomy, what K will best segment the animal kingdom?

There is no true value of K. If your answer is for a child, maybe it's six, corresponding to what we're taught in school: mammals, birds, reptiles, amphibians, fish, and invertebrates.

If your answer is for a zoologist, obviously this won’t do.

Every clustering task of interest is like this. And I say of interest because clustering things like digits in the classic MNIST dataset is better posed as a classification problem - the categories are defined analytically.

seanhunter•3mo ago
K is whatever you want it to be. You want 5 clusters, k=5. If you don't know the right number of clusters, try a few different values of k and see which partitions your sample in a way that's good for your problem.
Spivak•3mo ago
The total number of clusters. Determining this algorithmically is a fun open problem https://en.wikipedia.org/wiki/Determining_the_number_of_clus....

For the data I work with at $dayjob I've found the Silhouette algorithm to perform best but I assume it will be extremely field specific. Clustering your data and taking a representative sample of each cluster is such a powerful trick to make big data small but finding an appropriate K is an art more than a science.
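For reference, the silhouette score mentioned here is easy to sketch in plain Python (a toy, not tied to any particular library): each point gets s = (b - a) / max(a, b), where a is its mean distance to its own cluster and b its lowest mean distance to another cluster, and the mean s over all points scores a candidate partition (and hence a candidate K):

```python
import math

def mean_dist(p, pts):
    return sum(math.dist(p, q) for q in pts) / len(pts)

def silhouette(clusters):
    """Mean silhouette over all points: s(p) = (b - a) / max(a, b)."""
    scores = []
    for i, c in enumerate(clusters):
        for p in c:
            if len(c) == 1:
                scores.append(0.0)  # usual convention for singleton clusters
                continue
            a = sum(math.dist(p, q) for q in c if q is not p) / (len(c) - 1)
            b = min(mean_dist(p, o) for j, o in enumerate(clusters) if j != i)
            scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

blob1 = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
blob2 = [(9.0, 9.0), (9.1, 9.0), (9.0, 9.1)]

good = silhouette([blob1, blob2])  # the natural 2-way split, close to 1
bad = silhouette([blob1[:1] + blob2[:1], blob1[1:] + blob2[1:]])  # a mis-split
```

Scores near 1 mean tight, well-separated clusters; scores near 0 or below mean the partition (or the choice of K) is fighting the data.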

dcl•3mo ago
At a previous $dayjob at a very large financial institution, it's however many clusters are present in the strategy that was agreed to by the exec team and their highly paid consultants.

You find that many clusters and shoehorn the consultant provided categories on to the k clusters you obtain.

3abiton•3mo ago
To be fair, finding K is highly domain dependent, and I would argue it should not be for the analyst (solely) to decide, but settled with feedback from domain experts.
apwheele•3mo ago
Can folks comment on what applications they use k-means for? It was a basic technique I learned in school, but honestly I am not really familiar with a single use case that is very clearly motivated besides "pretty pictures".

So I do a bit of work in geospatial analysis, and hotspots are better represented by DBSCAN (it does not need to assign every point to a cluster). I just do not use clustering very often in my gig (supervised ML and anomaly detection are much more prevalent in the rest of my work).

atiedebee•3mo ago
It's used for vector quantization, which can be used for color quantization.
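A minimal sketch of that use case, assuming nothing beyond the standard library (all names illustrative): run k-means over the pixels of a toy "image" to pick a small palette, then snap every pixel to its nearest palette color:

```python
import random

def sq(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=25):
    # Deterministic farthest-point initialization, then Lloyd's updates.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(sq(p, c) for c in centers)))
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: sq(p, centers[j]))].append(p)
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

# A toy "image": pixels scattered around three dominant colors.
rng = random.Random(42)
base = [(250, 20, 20), (20, 250, 20), (20, 20, 250)]
pixels = [tuple(c + rng.randint(-10, 10) for c in b)
          for b in base for _ in range(50)]

palette = kmeans(pixels, 3)
# Quantize: replace every pixel with its nearest palette color.
quantized = [min(palette, key=lambda c: sq(p, c)) for p in pixels]
```

The quantized image needs only an index into the 3-entry palette per pixel instead of a full RGB triple, which is the compression win vector quantization is after.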