
Tiny C Compiler

https://bellard.org/tcc/
59•guerrilla•1h ago•22 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
151•valyala•5h ago•25 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
81•zdw•3d ago•32 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
86•surprisetalk•5h ago•91 comments

LLMs as the new high level language

https://federicopereiro.com/llm-high/
26•swah•4d ago•19 comments

GitBlack: Tracing America's Foundation

https://gitblack.vercel.app/
19•martialg•58m ago•3 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
120•mellosouls•8h ago•236 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
159•AlexeyBrin•11h ago•28 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
866•klaussilveira•1d ago•266 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
115•vinhnx•8h ago•14 comments

FDA intends to take action against non-FDA-approved GLP-1 drugs

https://www.fda.gov/news-events/press-announcements/fda-intends-take-action-against-non-fda-appro...
33•randycupertino•1h ago•33 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
73•thelok•7h ago•13 comments

Show HN: A luma dependent chroma compression algorithm (image compression)

https://www.bitsnbites.eu/a-spatial-domain-variable-block-size-luma-dependent-chroma-compression-...
22•mbitsnbites•3d ago•1 comment

First Proof

https://arxiv.org/abs/2602.05192
76•samasblack•8h ago•57 comments

I write games in C (yes, C) (2016)

https://jonathanwhiting.com/writing/blog/games_in_c/
157•valyala•5h ago•136 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
253•jesperordrup•15h ago•82 comments

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
36•gnufx•4h ago•41 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
535•theblazehen•3d ago•197 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
100•onurkanbkrc•10h ago•5 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
39•momciloo•5h ago•5 comments

Selection rather than prediction

https://voratiq.com/blog/selection-rather-than-prediction/
19•languid-photic•4d ago•5 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
213•1vuio0pswjnm7•12h ago•325 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
42•marklit•5d ago•6 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
276•alainrk•10h ago•454 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
129•videotopia•4d ago•41 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
52•rbanffy•4d ago•14 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
52•josephcsible•3h ago•67 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
650•nar001•9h ago•284 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
41•sandGorgon•2d ago•17 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
109•speckx•4d ago•149 comments

A linear-time alternative for Dimensionality Reduction and fast visualisation

https://medium.com/@roman.f/a-linear-time-alternative-to-t-sne-for-dimensionality-reduction-and-fast-visualisation-5cd1a7219d6f
118•romanfll•1mo ago

Comments

romanfll•1mo ago
Author here. I built this because I needed to run dimensionality reduction entirely in the browser (client-side) for an interactive tool. The standard options (UMAP, t-SNE) were either too heavy for JS/WASM or required a GPU backend to run at acceptable speeds for interactive use.

This approach ("Sine Landmark Reduction") uses linearised trilateration—similar to GPS positioning—against a synthetic "sine skeleton" of landmarks.

The main trade-offs:

It is O(N) and deterministic (solves Ax=b instead of iterative gradient descent).

It forces the topology onto a loop structure, so it is less accurate than UMAP for complex manifolds (like Swiss Rolls), but it guarantees a clean layout for user interfaces.

It can project ~9k points (50 dims) to 3D in about 2 seconds on a laptop CPU. Python implementation and math details are in the post. Happy to answer questions!
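
For readers skimming the thread, here is a rough, self-contained sketch of this kind of pipeline. To be clear, it is a reconstruction, not the author's implementation: the every-Nth landmark choice, the particular sine loop, and the use of input-space distances as a stand-in for target-space distances (as in Landmark MDS) are all illustrative assumptions.

    import numpy as np

    def sine_skeleton(k):
        # Deterministic landmark targets along a closed 3D sine loop (illustrative).
        t = np.linspace(0.0, 2.0 * np.pi, k, endpoint=False)
        return np.stack([np.cos(t), np.sin(t), np.sin(2.0 * t)], axis=1)  # (k, 3)

    def reduce_linear(X, k=100):
        # One closed-form pass from (n, d) to 3D: O(n * k * d), no iterations.
        n = len(X)
        idx = np.linspace(0, n - 1, k).astype(int)   # every-Nth landmarks, deterministic
        L_hi = X[idx]                                # landmark anchors in input space
        L_lo = sine_skeleton(k)                      # their assigned 3D positions
        # Squared input-space distances to every landmark, shape (n, k).
        D2 = (X**2).sum(1)[:, None] + (L_hi**2).sum(1)[None, :] - 2.0 * X @ L_hi.T
        # Linearised trilateration: subtracting landmark 0's sphere equation from
        # landmark j's leaves a linear system in the unknown position y:
        #   2 (L_j - L_0) . y = (|L_j|^2 - |L_0|^2) - (d_j^2 - d_0^2)
        A = 2.0 * (L_lo[1:] - L_lo[0])               # (k-1, 3), shared by all points
        norms = (L_lo**2).sum(1)
        B = (norms[1:] - norms[0])[None, :] - (D2[:, 1:] - D2[:, :1])  # (n, k-1)
        Y, *_ = np.linalg.lstsq(A, B.T, rcond=None)  # single least-squares solve
        return Y.T                                   # (n, 3)

    Y = reduce_linear(np.random.rand(9000, 50))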

aoeusnth1•1mo ago
This is really cool! Are you considering publishing a paper on it? This seems conceptually similar to landmark MDS / Isomap, except using PCA on the landmark matrix instead of MDS. (https://cannoodt.dev/2019/11/lmds-landmark-multi-dimensional...)
romanfll•1mo ago
Thanks! You nailed the intuition! Yes, it shares DNA with Landmark MDS, but we needed something strictly deterministic for the UI. Re: publishing: we don't have a paper planned for this specific visualisation technique yet. I just wanted to open-source it because it solved a major bottleneck for our dashboard. However, our main research focus at Thingbook is DriftMind (a cold-start streaming forecaster and anomaly detector; preprint here: https://www.researchgate.net/publication/398142288_DriftMind...). That paper is currently under peer review! It shares the same 'efficiency-first' philosophy as this visualisation tool.
lmeyerov•1mo ago
Fwiw, we are heavy UMAP users (pygraphistry), and find UMAP on CPU fine for interactive use up to 30K rows and on GPU up to 100K rows, then generally switch to a trained mode beyond 100K rows. Our use case is often highly visual: see correlations, and link similar entities together into explorable, interactive network diagrams. For headless use, like daily anomaly detection, we do this at much larger scales.

We see a lot of wide social, log, and cyber data where this works, anywhere from 5-200 dim. Our bio users are trickier, as we can have 1K+ dimensions pretty fast. We find success there too, and mostly get into preconditioning tricks for those.

At the same time, I'm increasingly thinking of learning neural embeddings in general for these instead of traditional clustering algorithms. As scales go up, the performance argument here goes up too.

abhgh•1mo ago
I was not aware this existed and it looks cool! I am definitely going to set aside some time to explore it further.

I have a couple of questions for now: (1) I am confused by your last sentence. It seems you're saying embeddings are a substitute for clustering. My understanding is that you usually apply a clustering algorithm over embeddings - good embeddings just ensure that the grouping produced by the clustering algo "makes sense".

(2) Have you tried PaCMAP [1]? I found it to produce high-quality and quick results when I tried it. I haven't tried it in a while though, and I vaguely remember that it wouldn't install properly on my machine (a Mac) the last time I reached for it. Their group has some new stuff coming out too (on the linked page).

[1] https://github.com/YingfanWang/PaCMAP

lmeyerov•1mo ago
We generally run UMAP on regular semi-structured data like database query results. We automatically feature encode that for dates, bools, low-cardinality vals, etc. If there is text, and the right libs available, we may also use text embeddings for those columns. (cucat is our GPU port of dirtycat/skrub, and pygraphistry's .featurize() wraps around that).

My last sentence was about the more valuable problems: we are finding it makes sense to go straight to GNNs, LLMs, etc., and embed multidimensional data that way vs. via UMAP dim reductions. We can still use UMAP as a generic hammer to control further dimensionality reductions, but the 'hard' part would be handled by the model. With neural graph layouts, we can potentially even skip UMAP for that too.

Re: PaCMAP, we have been eyeing several new tools here, but so far haven't felt the need internally to move from UMAP to them. We'd need to see significant improvements, given that the quality engineering in UMAP has set the bar high. In theory I can imagine some tools doing better in the future, but the creators haven't made the engineering investment, so internally we'd rather stay with UMAP. We make our API pluggable, so you can pass in results from other tools, and we haven't heard much from that path from others.
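
For concreteness, the kind of automatic encoding described above might look roughly like the sketch below. This is an illustration of the idea only, not pygraphistry's actual .featurize() internals; the cardinality threshold and the helper name are made up.

    import pandas as pd

    def rough_featurize(df):
        # Illustrative mixed-type encoding ahead of a UMAP pass.
        out = {}
        for col in df.columns:
            s = df[col]
            if pd.api.types.is_datetime64_any_dtype(s):
                out[col] = s.astype("int64") / 1e9   # seconds since epoch
            elif pd.api.types.is_bool_dtype(s) or pd.api.types.is_numeric_dtype(s):
                out[col] = s.astype(float)
            elif s.nunique() <= 20:                  # low-cardinality: one-hot
                for name, dummy in pd.get_dummies(s, prefix=col).items():
                    out[name] = dummy.astype(float)
            # high-cardinality text would go to a text embedding instead
        return pd.DataFrame(out)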

abhgh•1mo ago
Thank you. Your comment about using LLMs to semantically parse diverse data, as a first step, makes sense. In fact, come to think of it, the same happens in prompt optimization - in MIPROv2 [1], for example, the LLM is used to create initial prompt guesses based on its understanding of the data. And I agree that UMAP still works well out of the box and has been pretty much like this since its introduction.

[1] Section C.1 in the Appendix here https://arxiv.org/pdf/2406.11695

nighthawk454•1mo ago
I’m working on a new UMAP alternative - curious what kinds of improvements you’d be interested in?
lmeyerov•1mo ago
A few things

Table stakes for our bigger users:

- parity or improvement on perf, for both CPU & GPU mode

- better support for learning (fit->transform) so we can embed billion+ scale data

- expose inferred similarity edges so we can do interactive and human-optimized graph viz, vs overplotted scatterplots

New frontiers:

- alignment tooling is fascinating, as we increasingly want to re-fit->embed over time as our envs change and compare, e.g., day-over-day analysis. This area is not well-defined yet, but it is common for anyone operational, so it seems ripe for innovation

- maybe better support for mixing input embeddings. This seems increasingly common in practice, and seems worth examining as special cases

Always happy to pair with folks in getting new plugins into the pygraphistry / graphistry community, so if/when ready, happy to help push a PR & demo through!

lmcinnes•1mo ago
> alignment tooling is fascinating, as we increasingly want to re-fit->embed over time as our envs change and compare, e.g., day-over-day analysis. This area is not well-defined yet, but it is common for anyone operational, so it seems ripe for innovation

It is probably not all the things you want, but AlignedUMAP can do some of this right now: https://umap-learn.readthedocs.io/en/latest/aligned_umap_bas...

If you want to do better than that, I would suggest that the quite new landmarked parametric UMAP options are actually very good for this: https://umap-learn.readthedocs.io/en/latest/transform_landma...

Training the parametric UMAP is a little more expensive, but the new landmark-based updating really does allow you to steadily update with new data and have new clusters appear as required. Happy to chat as always, so reach out if you haven't already looked at this and it seems interesting.
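
Basic AlignedUMAP usage, for anyone who wants to try the first link, looks something like the following (a sketch following the linked docs; the data and parameter values here are illustrative):

    import numpy as np
    from umap import AlignedUMAP

    # Two daily snapshots of the same 1,000 entities (rows correspond across days).
    day1 = np.random.rand(1000, 40)
    day2 = np.random.rand(1000, 40)

    # relations[i] maps row indices in slice i to row indices in slice i+1.
    relations = [{i: i for i in range(1000)}]

    mapper = AlignedUMAP(n_neighbors=15, alignment_regularisation=0.01)
    mapper.fit([day1, day2], relations=relations)
    emb_day1, emb_day2 = mapper.embeddings_  # aligned layouts, comparable day-over-day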

romanfll•1mo ago
The shift from Explicit Reduction to GNNs/Embeddings is where the high-end is going in my view… We hit this exact fork in the road with our forecasting/anomaly detection engine (DriftMind). We considered heavy embedding models but realised that for edge streams, we couldn't afford the inference cost or the latency of round-tripping to a GPU server. It feels like the domain is splitting into 'Massive Server-Side Intelligence' (I am a big fan of Graphistry) and 'Hyper-Optimized Edge Intelligence' (where we are focused).
lmeyerov•1mo ago
Interesting, mind sharing the context here?

My experience has been that as workloads get heavier, it's "cheaper" to push to an accelerated & dedicated inferencing server. This doesn't always work though; e.g., there's a world of difference between realtime video on phones vs an interactive chat app.

Re: edge embedding, I've been curious about the push by a few toward 'foundation GNNs', and it may be fun to compare UMAP on property-rich edges to those. So far we focus on custom models, but the success of neural graph-drawing NNs & newer tabular NNs suggests something pretrained can replace UMAP as a generic hammer here too...

threeducks•1mo ago
Without looking at the code, O(N * k) with N = 9000 points and k = 50 dimensions should take in the order of milliseconds, not seconds. Did you profile your code to see whether there is perhaps something that takes an unexpected amount of time?
donkeybeer•1mo ago
If he wrote the for loop in pure Python instead of NumPy or C or whatever, it could be a plausible runtime.
yorwba•1mo ago
Each of the N data points is processed through several expensive linear algebra operations. O(N * k) just expresses that if you double N, the runtime also at most doubles. It doesn't mean it has to be fast in an absolute sense for any particular value of N and k.
akoboldfrying•1mo ago
Didn't read TFA, but it's hard to think of a linear algebra operation that is both that slow and takes time independent of n and k.
romanfll•1mo ago
The '2 seconds' figure is the end-to-end time on a standard laptop. I quoted 2s to set realistic expectations for the user experience, not the CPU cycle count. You are right that the core linear algebra (Ax=b) takes milliseconds; the bottleneck is the DOM/rendering overhead. Strictly speaking, the math itself is blazing fast.
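
That split is easy to sanity-check. Timing just the solve at the shapes discussed in this thread (the shapes below are my assumption: k = 100 landmarks, 9k points, one shared (k-1, 3) system):

    import time
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((99, 3))      # shared landmark system, k - 1 = 99 rows
    B = rng.standard_normal((99, 9000))   # right-hand sides for all 9k points

    t0 = time.perf_counter()
    np.linalg.lstsq(A, B, rcond=None)
    print(f"solve: {(time.perf_counter() - t0) * 1e3:.1f} ms")  # typically single-digit ms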
moralestapia•1mo ago
This is great Roman, congrats on the amazing work :)!
jdhwosnhw•1mo ago
That's not how big-O notation works. You don't know what proportionality constants are being hidden by the notation, so you can't make any assertions about absolute runtimes.
threeducks•1mo ago
It is true that big-O notation does not necessarily tell you anything about the actual runtime, but if the hidden constant appears suspiciously large, one should double-check whether something else is going on.
yxhuvud•1mo ago
FWIW, there are iterative SVD implementations out there that could potentially be useful as well in certain contexts when you get more data over time in a streamed manner.
leecarraher•1mo ago
Are you referring to this paper: https://arxiv.org/abs/1501.01711? I believe it won best paper at ICML or another high-impact venue. I recall the published paper and algorithm being compact and succinct; something that took less than a day to implement.
yxhuvud•1mo ago
I was referring to even older stuff that I happened to see while doing my masters back in 2007-2008 or so. But that one looks more approachable.
memming•1mo ago
first subsample a fixed number of random landmark points from data, then...
romanfll•1mo ago
Thanks for your comment. You are spot on, that is effectively the standard Nyström/Landmark MDS approach.

The technique actually supports both modes in the implementation (synthetic skeleton or random subsampling). However, for this browser visualisation, we default to the synthetic sine skeleton for two reasons:

1. Determinism: Random landmarks produce a different layout every time you calculate the projection. For a user interface, we needed the layout to be identical every time the user loads the data, without needing to cache a random seed.

2. Topology Forcing: By using a fixed sine/loop skeleton, we implicitly 'unroll' the high-dimensional data onto a clean reduced structure. We found this easier for users to visually navigate compared to the unpredictable geometry that comes from a random subset.

HelloNurse•1mo ago
You don't need a "proper" random selection: if your points are sorted deterministically and not too adversarially, any reasonably unbiased selection (e.g. every Nth point) is pseudorandom.
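
In code, that seed-free determinism is a one-liner after any stable ordering (a sketch; np.lexsort is just one deterministic choice of ordering):

    import numpy as np

    def deterministic_landmarks(X, k):
        # Every-Nth selection after a deterministic sort: reproducible
        # "pseudorandom" landmarks without caching a random seed.
        order = np.lexsort(X.T)          # stable, data-dependent row ordering
        step = max(len(X) // k, 1)
        return X[order[::step][:k]]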
jmpeax•1mo ago
> They typically need to compare many or all points to each other, leading to O(N²) complexity.

UMAP is not O(n²); it is O(n log n).

romanfll•1mo ago
Thanks for your comment! You are right: approximate nearest-neighbour search brings UMAP down to O(N log N) (as Barnes-Hut does for t-SNE). I should have been more precise in the document. The main point is that even O(N log N) can be too much if you run this in a browser. Thanks for clarifying!
emil-lp•1mo ago
If k=50, then I'm pretty sure O(n log n) beats O(nk).
romanfll•1mo ago
You are strictly correct for a single pass! log2(9000) ≈ 13, which is indeed much smaller than k = 50. The missing variable in that comparison is the number of iterations: t-SNE and UMAP are iterative optimisation algorithms, and they repeat that O(N log N) step hundreds of times to converge. My approach is a closed-form linear solution (Ax=b) that runs exactly once. So the wall-clock comparison is effectively iterations * (N log N) vs 1 * (N * k). That need for convergence is where the speedup comes from, not the complexity class per se.
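
With illustrative numbers (the iteration budget below is assumed, and real per-step constants differ a lot between the two approaches):

    from math import log2

    N, k, iters = 9000, 50, 500          # iters: a typical optimisation budget (assumed)
    iterative = iters * N * log2(N)      # ~ iterations * (N log N)
    closed_form = N * k                  # one linearised solve
    print(f"{iterative / closed_form:.0f}x")  # ~131x under these constants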
benob•1mo ago
Is there a pip installable version?
romanfll•1mo ago
Not yet, but coming...
zipy124•1mo ago
Something seems off here. t-SNE should not be taking 15-25 seconds for only 5k points and 20 dimensions, but rather somewhere around 1-2 seconds. Also, since the given alternative is not as good, you could probably reduce the iteration count with t-SNE if speed is wanted, at the risk of quality. Alternatively, UMAP for this would be milliseconds, bordering on real-time with aggressive tuning.
rundev•1mo ago
The claim of linear runtime is only true if K is independent of the dataset size, so it would have been nice to see an exploration of how different values of K impact results. I.e., does clustering get better for larger K, and if so, by how much? The values 50 and 100 seem arbitrary, and even suspiciously close to sqrt(N) for the 9K dataset.
romanfll•1mo ago
Thanks for your comment.

To clarify: K is a fixed hyperparameter in this implementation, strictly independent of N. Whether we process 9k points or 90k points, we keep K at ~100. We found that increasing K yields diminishing returns very quickly. Since the landmarks are generated along a fixed synthetic topology, increasing K essentially just increases resolution along that specific curve, but once you have enough landmarks to define the curve's structure, adding more doesn't reveal new topology… it just adds computational cost to the distance matrix calculation. Re: sqrt(N): That is purely a coincidence!

trgn•1mo ago
Glad to see 2D mapping is still of interest. 20 years ago, information visualization, data cartography, exploratory analytics, etc. were pretty alive, but the field never really took off and found a reliable niche in the industry, or a real end-user application. Why map it, when the machine can just tell you?

Would be nice to see it come back. Would love to browse for books and movies on maps again, rather than getting lists regurgitated at me.