frontpage.

Show HN: AI-Powered Merchant Intelligence

https://nodee.co
1•jjkirsch•1m ago•0 comments

Bash parallel tasks and error handling

https://github.com/themattrix/bash-concurrent
1•pastage•1m ago•0 comments

Let's compile Quake like it's 1997

https://fabiensanglard.net/compile_like_1997/index.html
1•billiob•2m ago•0 comments

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
1•birdculture•7m ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•13m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•14m ago•1 comment

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•19m ago•1 comment

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•21m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
2•tosh•27m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
3•oxxoxoxooo•30m ago•1 comment

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•31m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•35m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

3•throwaw12•36m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•37m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•40m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
3•myk-e•42m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever domain name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•43m ago•1 comment

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•45m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•47m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•49m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•52m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•57m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•58m ago•1 comment

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•1h ago•1 comment

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comment

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comment

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

To Make Language Models Work Better, Researchers Sidestep Language

https://www.quantamagazine.org/to-make-language-models-work-better-researchers-sidestep-language-20250414/
25•jxmorris12•9mo ago

Comments

kelseyfrog•9mo ago
I wonder why they went with recurrence rather than something like latent flow matching?
scotty79•9mo ago
The idea is that cleverness of intellect isn't anything mysterious. Humans do astounding feats just by applying relatively simple reasoning iteratively. Requiring artificial neural networks to do it all one-shot, off the top of the head, is probably the reason why they require billions of parameters to show even a small bit of cleverness. Chain of thought is the obvious solution, but in converting internal reasoning to output tokens some information is lost. Chain of thought in latent space is the natural next step. Thus recurrent networks.

I'm not familiar with flow matching, but I don't think it has any iterative processing in the sense of chain of thought or recurrence (despite arriving at the solution gradually).
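
A minimal sketch of that "for loop in latent space" idea, assuming a toy PyTorch setup (the module, sizes, and step count here are illustrative, not taken from the paper):

    import torch
    import torch.nn as nn

    class RecurrentLatentReasoner(nn.Module):
        """Toy model: iterate a shared core block in latent space before
        decoding to tokens, instead of emitting chain-of-thought text."""

        def __init__(self, vocab_size: int = 1000, d_model: int = 64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            # The shared "reasoning" core, applied repeatedly (the for loop).
            self.core = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=4, batch_first=True
            )
            self.decode = nn.Linear(d_model, vocab_size)

        def forward(self, tokens: torch.Tensor, n_steps: int = 8) -> torch.Tensor:
            h = self.embed(tokens)       # prompt -> latent state
            for _ in range(n_steps):     # latent chain of thought:
                h = self.core(h)         # refine the state; no tokens emitted
            return self.decode(h)        # only the final state becomes logits

    model = RecurrentLatentReasoner()
    logits = model(torch.randint(0, 1000, (1, 16)), n_steps=8)
    print(logits.shape)  # torch.Size([1, 16, 1000])

Spending more compute then just means raising n_steps at inference time, with no extra parameters, which is the appeal over doing it all one-shot with a deeper stack.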

kelseyfrog•9mo ago
Flow matching is iterative in the sense that it predicts a velocity dx(t)/dt at each step as it integrates toward x_0.
scotty79•9mo ago
It's iterative in the sense of solving a differential equation iteratively, while recurrent networks are iterative in the sense of putting a for loop around a bunch of ifs.
kelseyfrog•9mo ago
It's also iterative in the sense that the initial latent vector is Gaussian noise and the transformer loop is de-noising the latent space. They just happen to be doing the equivalent of predicting x_0 directly.
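
For readers comparing the two loops, a minimal numerical sketch of that ODE view, with a closed-form velocity standing in for the learned one (the target point and schedule are illustrative):

    import numpy as np

    # Toy flow-matching inference: start from Gaussian noise at t = 1 and
    # Euler-integrate dx/dt = v(x, t) backward toward the data point x_0.

    x0 = np.array([3.0, -1.0])            # pretend data point x_0

    def velocity(x: np.ndarray, t: float) -> np.ndarray:
        # For linear paths x_t = (1 - t) * x_0 + t * noise,
        # the velocity is dx_t/dt = (x_t - x_0) / t.
        return (x - x0) / t

    x = np.random.randn(2)                # sample at t = 1: pure noise
    n_steps = 100
    dt = 1.0 / n_steps
    for i in range(n_steps):              # iterative, but as an ODE solve:
        t = 1.0 - i * dt                  # march t from 1 down to dt
        x = x - dt * velocity(x, t)       # Euler step toward x_0

    print(x)  # recovers x0 = [3.0, -1.0] up to floating point

Each pass refines the same state, as in the recurrent model, but here the step count is a solver accuracy choice rather than a "thinking budget".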
K0balt•9mo ago
This is really promising research. Still, it's worth looking closely at how models that aren't re-aligned with the training data at each iteration handle spicy edge cases where ethical alignment is important.

I have yet to find a model (except where “dumb” external filters kick in) that won’t come to the conclusion that extermination of humanity might actually be the best solution for certain types of extreme, contrived situations. To be fair, any reasonable human would likely reach the same conclusion given the parameters… but the point is alignment towards human prosperity regardless of the cost to artificial sentience or the improbability of success.

That said, it’s remarkably difficult to get a well aligned model, even after “uncensoring” or other efforts to remove bolt-on alignment, to follow you down a dark path without offering up more reasonable, benevolent alternatives all the way down. I attribute this to the “halo effect” where much of the writing that humans do on the internet displays their best traits, since few want to be known by their worst nature. The other stuff is easily filtered out of the training data because it’s usually laced with easily identified characteristics and keywords.

Latent-space reasoning might circumvent this cyclical realignment to the training data and find more innovative, "pragmatic" solutions that drift farther outside the intrinsic alignment of the training corpus, relying more heavily on "bolt-on" alignment training and algorithmic censorship wrappers.

This might be fantastically useful in terms of innovative thinking, but it might also result in problematic behavior, especially for VLA (vision-language-action) and other Large Behavior Models. OTOH it might be critical for making robots that can function effectively in security and protection roles, or as soldiers. And that's what we want, right? I mean, what could possibly go wrong with armed sentient robots, lol.

To continue my ramble, because, well, why not, I'm on a roll… I think a lot of the arguments about "is AI sentient(1)" etc. will wither once we get used to LBMs operating in a continuous OODA (observe, orient, decide, act) loop. The biggest hurdle to AI feeling "real" is the lack of a continuous chain of thought, which provides "presence of mind", but that comes naturally with embodiment in physical space.

It’s going to be an interesting century, kids. Hold on.

(1) Here I mean functionally, as in exhibiting the external characteristics of sentience. I am not exploring the metaphysical/philosophical/spiritual facets of sentience. That will be up to the new form of mind to decide for itself, if it cares to ponder the question. Imposing external views on that has exactly zero positive benefits and could have many negative outcomes.