
Ask HN: I quit my job over weaponized robots to start my own venture

2•barratia•3m ago•0 comments

Building a Real-Time Routing System for Payment Success at Cashfree Payments

https://tech.cashfree.com/building-a-real-time-routing-system-for-high-volume-payment-success-at-...
1•shritama_saha•5m ago•0 comments

Read the Friendly Manual

https://plo.fyi/blog/read-the-friendly-manual/
1•ploMP4•5m ago•0 comments

Ada Lovelace and the First Computer Algorithm

https://www.101computing.net/ada-lovelace-and-the-first-computer-algorithm/
1•birdculture•5m ago•0 comments

We Tracked Every Congressional Bill to Its Prediction Market

https://simplefunctions.dev/blog/legislation-tracker-congress-prediction-markets
1•patrickliu0077•6m ago•1 comment

China Imposes New Rules to Block Foreign Companies from 'Decoupling'

https://www.nytimes.com/2026/04/14/business/china-foreign-companies-supply-chain.html
1•thelastgallon•7m ago•0 comments

The Case Against Gameplay Loops

https://blog.joeyschutz.com/the-case-against-gameplay-loops/
1•coinfused•8m ago•0 comments

Optimizing Chained Strcmp Calls for Speed and Clarity

https://medium.com/@yair.lenga/optimizing-chained-strcmp-calls-for-speed-and-clarity-without-refa...
1•yairlenga•10m ago•1 comment

Architecture Catas – A collection of anti-patterns

https://github.com/Bellangelo/architecture-catas
2•bellangelo•10m ago•0 comments

Mastodon gets Sovereign Tech Agency funding

https://blog.joinmastodon.org/2026/04/sovereign-tech-agency-funding/
2•edent•12m ago•0 comments

Rubens Menin's 150 Years "Old" Port Wine

https://neofeed.com.br/finde/o-vinho-do-porto-very-very-old-de-rubens-menin/en/
1•Anon84•13m ago•0 comments

Computational 'time machine' shows solar and wind power on track for 2°C target

https://techxplore.com/news/2026-04-machine-solar-power-track-2c.html
1•geox•13m ago•0 comments

NimConf 2026: Dates Announced, Registrations Open

https://nim-lang.org/blog/2026/04/07/nimconf-2026.html
2•moigagoo•15m ago•0 comments

Thucydides Trap

https://en.wikipedia.org/wiki/Thucydides_Trap
1•nomilk•16m ago•0 comments

1% Vacancy, 81% Preleased: Where Midmarket Compute Deploys in 2026

1•jaynamburi•20m ago•0 comments

Ask HN: Preferred pricing model for sound effects libraries?

2•CSP_LIBRARY•28m ago•1 comment

Energy-Guard OS – A 411MB CPU-Native AI Security Gateway (4ms Latency)

https://github.com/almoizsaad/Energy-Guard-OS-Security-Benchmark
1•ALMOIZ_MOHMED•32m ago•0 comments

Why it's so hard to innovate in the email space (2014)

https://collinmathilde.medium.com/why-its-so-hard-to-innovate-in-the-e-mail-space-9874e08e3426
2•downbad_•34m ago•1 comment

PHP 8.6 Closure Optimizations

https://wiki.php.net/rfc/closure-optimizations
1•moebrowne•35m ago•1 comment

The Folly of SEO

https://yadin.com/notes/seo-folly/
1•dryadin•36m ago•0 comments

US officials underwhelmed by French far-right's plans for economy

https://www.reuters.com/world/europe/us-officials-underwhelmed-by-french-far-rights-plans-economy...
3•vrganj•37m ago•2 comments

AdVersa: Adversarially-Robust and Practical Ad and Tracker Blocking in the Wild

https://github.com/SKKU-SecLab/AdVersa
1•grac3•45m ago•1 comment

The Etymological Problem with Apples

https://dannybate.com/2026/04/08/the-etymological-problem-with-apples/
1•Anon84•49m ago•0 comments

Why My WordPress?

https://alex.kirk.at/2026/04/14/why-my-wordpress/
2•akirk•51m ago•1 comment

Show HN: VibeDrift – Measure drift in AI-generated codebases

https://www.vibedrift.ai/
1•samiahmadkhan•51m ago•2 comments

Post-Slop Stress Disorder (PSSD)

https://github.com/mikemasam/pssd
1•mikemasam•53m ago•0 comments

I trained an AI to do my LinkedIn outreach, it books more meetings than me

https://mimikflow.com
1•moalani_•53m ago•1 comment

The Origins of GPU Computing

https://cacm.acm.org/federal-funding-of-academic-research/the-origins-of-gpu-computing/
1•MasterScrat•53m ago•0 comments

On hacker mindset

https://www.henrikkarlsson.xyz/p/hacker-mindset
2•jger15•55m ago•0 comments

The Crypto Social Arena

https://blockarena.live
1•memalama•59m ago•0 comments

Introspective Diffusion Language Models

https://introspective-diffusion.github.io/
67•zagwdt•3h ago

Comments

andsoitis•3h ago
Is anyone here experimenting seriously with Diffusion for text generation? I’d love to learn about your experiences!
moostee•3h ago
I have. It requires a distinct intuition compared to a normal language model. Very well suited to certain problems.
andsoitis•2h ago
Can you tell us more?
recsv-heredoc•3h ago
https://www.inceptionlabs.ai/

This startup seems to have been at it a while.

From our look into it - amazing speed, but challenges remain around time-to-first-token user experience and overall answer quality.

Can absolutely see this working if we can get the speed and accuracy up to that “good enough” position for cheaper models - or non-user facing async work.

One other question I’ve had: is it possible to set a huge amount of text to diffuse as the output - using a larger body to mechanically force greater levels of reasoning? I’m sure there’s some incredibly interesting research taking place in the big labs on this.

IanCal•2h ago
The overall speed rather than TTFT might start to be more relevant as the caller moves from being a human to another model.

However quality is really important. I tried that site and clicked one of their examples, "create a javascript animation". Fast response, but while it starts like this

``` Below is a self‑contained HTML + CSS + JavaScript example that creates a simple, smooth animation: a colorful ball bounces around the browser window while leaving a fading trail behind it.

<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>JavaScript Bounce Animation</title> <style> body, html { margin: 0; padding: 0;

```

the answer then degrades to

``` radius: BALL_RADIUS, color: BALL_COLOR, traivD O] // array of previous {x,y} positions }; ```

Then more things start creeping in

``` // 3⃣ Bounce off walls if (ball.G 0 ball.radius < 0 || ball.x + ball.radius > _7{nas.width) { ball.vx *= -1; ibSl.x = Math.max(ball.radius, Math.min(ball.x, canvbbF4idth - ball.radius)); } if

```

and the more it goes on the worse it gets

``` Ho7 J3 Works 0 Atep | Description | ```

and

``` • prwrZ8}E6on 5 jdF wVuJg Ar touc> 2ysteners ,2 Ppawn \?) balls w>SFu the 8b$] cliM#]9 ```

This is for the demo on the front page, so I expect this is a pretty good outcome compared to what else you might ask.

cataflutter•1h ago
Weird; I clicked through out of curiosity and didn't get any corruption of the sort in the end result.

I also asked it some technical details about how diffusion LLMs could work and it provided grammatically-correct plausible answers in a very short time (I don't know the tech to say if it's correct or not).

girvo•2h ago
It's being explored right now for speculative decoding in the local-LLM space, which I think is quite interesting as a use case.

https://www.emergentmind.com/topics/dflash-block-diffusion-f...

roger_•4m ago
DFlash immediately came to my mind.

There are several Mac implementations of it that show > 2x faster Qwen3.5 already.

LoganDark•1h ago
I've been playing with a Swift implementation of a diffusion language model (WeDLM), but performance is not yet acceptable and it still generates roughly from left-to-right like a language model (just within a sliding window rather than strictly token-by-token... but that doesn't matter when the sliding window is only like 16 tokens.)
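The sliding-window decoding described above can be sketched in a few lines. This is a toy illustration only: `toy_denoise`, `sliding_window_decode`, and the confidence scores are all invented stand-ins, not WeDLM's actual implementation.

```python
# Toy sketch of sliding-window block-diffusion decoding.
MASK = None

def toy_denoise(window_tokens, positions):
    """Stand-in for one diffusion denoising step: returns (position, token,
    confidence) guesses for every still-masked slot in the window."""
    return [(p, f"tok{p}", 1.0 / (1 + p))
            for p, t in zip(positions, window_tokens) if t is MASK]

def sliding_window_decode(seq_len, window=16, per_step=4):
    seq = [MASK] * seq_len
    left = 0
    while left < seq_len:
        right = min(left + window, seq_len)
        guesses = toy_denoise(seq[left:right], list(range(left, right)))
        # Commit the most confident guesses first -- the order is *not*
        # strictly left-to-right inside the window.
        for pos, tok, _ in sorted(guesses, key=lambda g: -g[2])[:per_step]:
            seq[pos] = tok
        # Slide the window past any prefix that is now fully decoded.
        while left < seq_len and seq[left] is not MASK:
            left += 1
    return seq
```

With a small window (16 here), the prefix fills in almost left-to-right, which matches the observation above that a narrow window behaves much like ordinary token-by-token generation.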
thepasch•1h ago
If I’m reading this right, this is pretty wild. They turned a Qwen autoregressor into a diffuser by using a bunch of really clever techniques, and they vastly outperform any “native diffuser,” actually being competitive with the base model they were trained from. The obvious upside here is the massive speedup in generation.

And then through a LoRA adapter, you can ground the diffuser on the base model’s distribution (essentially have it “compare” its proposals against what the base model would’ve generated), which effectively means: exact same byte-for-byte output for the same seed, just roughly twice as fast (which should improve even more for batched tasks).

I’m not an expert, more of a “practicing enthusiast,” so I might be missing something, but at first glance, this reads super exciting to me.
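What's described above sounds a lot like speculative decoding with a diffusion drafter. Here is a toy sketch of why the output can be byte-for-byte identical to the base model; every function here (`base_next`, `diffusion_draft`, `draft_and_verify`) is an invented stand-in, not the actual I-DLM code.

```python
# Toy draft-and-verify loop: a fast "diffusion" drafter proposes a block,
# the base autoregressor checks the block in one (conceptually parallel)
# pass, and only agreeing tokens are kept -- so the final sequence is
# exactly what the base model alone would have produced greedily.

def base_next(prefix):
    """Stand-in for the base model's greedy next token."""
    return (sum(prefix) + 1) % 7

def diffusion_draft(prefix, k):
    """Stand-in drafter: right most of the time, wrong now and then."""
    out, p = [], list(prefix)
    for _ in range(k):
        t = base_next(p)
        if (len(p) + 1) % 5 == 0:   # inject an occasional disagreement
            t = (t + 1) % 7
        out.append(t)
        p.append(t)
    return out

def draft_and_verify(prompt, length, k=8):
    seq = list(prompt)
    while len(seq) < len(prompt) + length:
        draft = diffusion_draft(seq, k)
        for t in draft:
            if t == base_next(seq):
                seq.append(t)               # accept the matching token
            else:
                seq.append(base_next(seq))  # correct it, restart drafting
                break
    return seq[len(prompt):len(prompt) + length]
```

Every committed token passes the same greedy check the base model alone would apply, so agreement is exact; the speedup comes from verifying a whole drafted block per base-model pass instead of generating one token per pass.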

awestroke•40m ago
I don't understand how you can compare against the base model output without generating with the base model, in which case what's the point?
a1j9o94•1m ago
You would only use the base model during training. This is a distillation technique
ramon156•1h ago
> 2025-04-12: Initial code release with training and inference support.

> 2025-04-12: Released I-DLM-8B, I-DLM-32B, and I-DLM-8B-LoRA on HuggingFace.

Is this old already? Not saying that's a bad thing, since it seems very sophisticated. Just curious if there's an update

oersted•1h ago
It's clearly a typo in the year; April 12 was two days ago, and a quick check on HuggingFace shows they were uploaded five days ago.
simianwords•1h ago
Can diffusion models have reasoning steps where they generate a block, introspect and then generate another until the output is satisfactory?
moeadham•56m ago
Well, you can take the output of a first pass and pass it back through the model like AR “reasoning” models do at inference time.
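That first-pass-then-refine loop can be sketched as follows. `toy_generate` and `toy_score` are invented stand-ins for a diffusion pass and an introspection/critic step; no real model is involved.

```python
# Toy multi-pass refinement loop: draft an answer, feed it back in as
# context, and stop once a critic scores it well enough.

def toy_generate(prompt, previous=None):
    # Stand-in diffusion pass: each round "improves" on the last draft.
    n = 0 if previous is None else previous.count("!") + 1
    return f"draft of {prompt!r}" + "!" * n

def toy_score(text):
    # Stand-in introspection/critic: more '!' == better, capped at 3.
    return min(text.count("!"), 3)

def refine(prompt, threshold=3, max_passes=10):
    draft = toy_generate(prompt)
    for _ in range(max_passes):
        if toy_score(draft) >= threshold:
            break
        draft = toy_generate(prompt, previous=draft)  # pass output back in
    return draft
```

The `max_passes` cap matters in practice: without an external stopping criterion, a self-refinement loop has no natural termination point.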
simianwords•46m ago
Yes, and has this been tried?