
Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
1•0y•37s ago•0 comments

Internationalization and Localization in the Age of Agents

https://myblog.ru/internationalization-and-localization-in-the-age-of-agents
1•xenator•55s ago•0 comments

Building a Custom Clawdbot Workflow to Automate Website Creation

https://seedance2api.org/
1•pekingzcc•3m ago•1 comments

Why the "Taiwan Dome" won't survive a Chinese attack

https://www.lowyinstitute.org/the-interpreter/why-taiwan-dome-won-t-survive-chinese-attack
1•ryan_j_naughton•3m ago•0 comments

Xkcd: Game AIs

https://xkcd.com/1002/
1•ravenical•5m ago•0 comments

Windows 11 is finally killing off legacy printer drivers in 2026

https://www.windowscentral.com/microsoft/windows-11/windows-11-finally-pulls-the-plug-on-legacy-p...
1•ValdikSS•5m ago•0 comments

From Offloading to Engagement (Study on Generative AI)

https://www.mdpi.com/2306-5729/10/11/172
1•boshomi•7m ago•1 comments

AI for People

https://justsitandgrin.im/posts/ai-for-people/
1•dive•8m ago•0 comments

Rome is studded with cannon balls (2022)

https://essenceofrome.com/rome-is-studded-with-cannon-balls
1•thomassmith65•14m ago•0 comments

8-piece tablebase development on Lichess (op1 partial)

https://lichess.org/@/Lichess/blog/op1-partial-8-piece-tablebase-available/1ptPBDpC
2•somethingp•15m ago•0 comments

US to bankroll far-right think tanks in Europe against digital laws

https://www.brusselstimes.com/1957195/us-to-fund-far-right-forces-in-europe-tbtb
3•saubeidl•16m ago•0 comments

Ask HN: Have AI companies replaced their own SaaS usage with agents?

1•tuxpenguine•19m ago•0 comments

pi-nes

https://twitter.com/thomasmustier/status/2018362041506132205
1•tosh•21m ago•0 comments

Show HN: Crew – Multi-agent orchestration tool for AI-assisted development

https://github.com/garnetliu/crew
1•gl2334•21m ago•0 comments

New hire fixed a problem so fast, their boss left to become a yoga instructor

https://www.theregister.com/2026/02/06/on_call/
1•Brajeshwar•23m ago•0 comments

Four horsemen of the AI-pocalypse line up capex bigger than Israel's GDP

https://www.theregister.com/2026/02/06/ai_capex_plans/
1•Brajeshwar•23m ago•0 comments

A free Dynamic QR Code generator (no expiring links)

https://free-dynamic-qr-generator.com/
1•nookeshkarri7•24m ago•1 comments

nextTick but for React.js

https://suhaotian.github.io/use-next-tick/
1•jeremy_su•26m ago•0 comments

Show HN: I Built an AI-Powered Pull Request Review Tool

https://github.com/HighGarden-Studio/HighReview
1•highgarden•26m ago•0 comments

Git-am applies commit message diffs

https://lore.kernel.org/git/bcqvh7ahjjgzpgxwnr4kh3hfkksfruf54refyry3ha7qk7dldf@fij5calmscvm/
1•rkta•29m ago•0 comments

ClawEmail: 1min setup for OpenClaw agents with Gmail, Docs

https://clawemail.com
1•aleks5678•35m ago•1 comments

UnAutomating the Economy: More Labor but at What Cost?

https://www.greshm.org/blog/unautomating-the-economy/
1•Suncho•42m ago•1 comments

Show HN: Gettorr – Stream magnet links in the browser via WebRTC (no install)

https://gettorr.com/
1•BenaouidateMed•43m ago•0 comments

Statin drugs safer than previously thought

https://www.semafor.com/article/02/06/2026/statin-drugs-safer-than-previously-thought
1•stareatgoats•45m ago•0 comments

Handy when you just want to distract yourself for a moment

https://d6.h5go.life/
1•TrendSpotterPro•47m ago•0 comments

More States Are Taking Aim at a Controversial Early Reading Method

https://www.edweek.org/teaching-learning/more-states-are-taking-aim-at-a-controversial-early-read...
2•lelanthran•48m ago•0 comments

AI will not save developer productivity

https://www.infoworld.com/article/4125409/ai-will-not-save-developer-productivity.html
1•indentit•53m ago•0 comments

How I do and don't use agents

https://twitter.com/jessfraz/status/2019975917863661760
1•tosh•59m ago•0 comments

BTDUex Safe? The Back End Withdrawal Anomalies

1•aoijfoqfw•1h ago•0 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
7•michaelchicory•1h ago•1 comments

Show HN: I vibe-coded some unusual transformer models

https://github.com/killerstorm/expere/tree/master/non_autoregressive_transformer
1•killerstorm•9mo ago
Goals:

* demonstrate that LLMs are smart enough to conduct ML experiments pretty much on their own
  * specifically, vibe-coding isn't just for web stuff
* encourage people to conduct these small experiments
  * in particular, to get a better understanding of the concepts
Background: I had a linear algebra course in university, but no proper ML training. Nevertheless, 5 years ago things like AI Dungeon and GPT-3 got me really interested and I started watching Yannic Kilcher videos to understand how it works. I even got some ideas for experiments with transformer architecture, but actually performing them seemed a bit too tedious.

Enter vibe coding. Specifically, Claude Code. Is it smart enough to organize an experiment: prepare a data set, build a model, write the training code, debug it, etc.?

Basically, yes. It takes some effort to describe what you want and make sure it does not cheat, but Claude is smart enough to write model code from scratch.

Other models, like Gemini 2.5 Pro or o3, might be even better.

A lot of people believe that LLMs cannot write new code, only rehash existing code. I don't think that's true. It's hard to say with certainty that the code was 100% unique, but it was at least rather unusual.

Anyway, here's what I did:

1. Encoder-only non-autoregressive transformer.

Pretty much all generative LLMs are based on the decoder-only autoregressive transformer architecture, which generates one token at a time. (I.e. to generate token (n+1) it relies on data from tokens 1..n.) This type of transformer can be trained efficiently (the causal mask gives a training signal for each token using only a single forward pass), but the generation process is slow and inefficient. Gemini 2.5 Flash allows 1M tokens of input but only a 65k-token output. You can't really transform a large amount of text.
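
To make the training-efficiency point concrete: with a causal mask, one forward pass produces a prediction (and a loss term) for every position at once. A minimal PyTorch sketch of that idea (my own illustration with arbitrary toy sizes, not code from the repo):

    import torch
    import torch.nn as nn

    vocab, d_model, seq_len = 100, 64, 32
    emb = nn.Embedding(vocab, d_model)
    layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
    head = nn.Linear(d_model, vocab)

    tokens = torch.randint(0, vocab, (8, seq_len))  # a batch of token ids
    # Causal mask: -inf above the diagonal so position i only attends to 1..i.
    causal = torch.triu(torch.full((seq_len - 1, seq_len - 1), float("-inf")), diagonal=1)

    # One pass over tokens[:, :-1] predicts tokens[:, 1:] at every position,
    # so every token contributes a training signal simultaneously.
    hidden = layer(emb(tokens[:, :-1]), src_mask=causal)
    logits = head(hidden)  # (8, seq_len - 1, vocab)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1)
    )

Generation, by contrast, has to run a pass like this once per new token.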

But what if we directly generate the target sequence using just a single forward pass? I.e. instead of predicting the next token, we predict the tokens of the output sequence directly. There's no fundamental reason it can't work, but it's more challenging, as the NN has to keep track of both input and output token positions, etc.

And, well, the experiment shows it can work, at least for simple languages: in this example the transformer learned how to expand parentheses, e.g. for input "a*(b+c)" it generates "a*b+a*c". https://github.com/killerstorm/expere/tree/master/non_autore...

I'm sure there's a better way to do it, but at least it's enough to confirm there's no fundamental reason it can't work. It took ~20 minutes to write the code, and the example trains in 2 minutes on an RTX 4070.
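
For readers who want the shape of the idea without digging through the repo, here is a rough, hypothetical sketch of an encoder-only non-autoregressive model (not the author's actual implementation; it assumes input and target are padded to the same fixed length):

    import torch
    import torch.nn as nn

    class NonAutoregressiveTransformer(nn.Module):
        """Predict the whole output sequence in a single forward pass."""
        def __init__(self, vocab=128, d_model=64, nhead=4, layers=2, max_len=64):
            super().__init__()
            self.emb = nn.Embedding(vocab, d_model)
            self.pos = nn.Embedding(max_len, d_model)
            enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
            self.head = nn.Linear(d_model, vocab)

        def forward(self, x):  # x: (batch, seq_len) input token ids
            pos = torch.arange(x.size(1), device=x.device)
            # No causal mask: every position sees the full input,
            # and every position emits a prediction for one output token.
            h = self.encoder(self.emb(x) + self.pos(pos))
            return self.head(h)  # (batch, seq_len, vocab)

    model = NonAutoregressiveTransformer()
    x = torch.randint(0, 128, (4, 32))   # e.g. tokenized "a*(b+c)" inputs, padded
    y = torch.randint(0, 128, (4, 32))   # padded target sequences
    logits = model(x)                    # entire output predicted at once
    loss = nn.functional.cross_entropy(logits.reshape(-1, 128), y.reshape(-1))

Training then just minimizes cross-entropy between the predicted and target sequences, position by position.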

I tried a few more experiments:

2. Try to improve attention by adding a small MLP on top of per-head attention scores.

3. Make a hybrid between RWKV and a transformer.

Both also worked well enough to start training and produce a plausible loss curve, although Claude had a bit more difficulty here and it took me >30 minutes to get it to fix the code. Training a real language model takes a beefier GPU and more time, though, so I didn't wait for it to finish.
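
As an aside, for experiment 2, one plausible reading of "a small MLP on top of per-head attention scores" looks roughly like this (my own hypothetical sketch, not the repo's code): the raw scores of shape (batch, heads, seq, seq) are mixed across the head dimension by a tiny MLP before the softmax.

    import torch
    import torch.nn as nn

    class MLPAttention(nn.Module):
        """Self-attention whose per-head scores are mixed by a small MLP before softmax."""
        def __init__(self, d_model=64, nhead=4):
            super().__init__()
            self.nhead, self.dk = nhead, d_model // nhead
            self.qkv = nn.Linear(d_model, 3 * d_model)
            self.proj = nn.Linear(d_model, d_model)
            # The MLP acts across the head dimension for each (query, key) pair.
            self.score_mlp = nn.Sequential(
                nn.Linear(nhead, 2 * nhead), nn.GELU(), nn.Linear(2 * nhead, nhead)
            )

        def forward(self, x):  # x: (batch, seq, d_model)
            b, s, _ = x.shape
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            q, k, v = (t.view(b, s, self.nhead, self.dk).transpose(1, 2) for t in (q, k, v))
            scores = q @ k.transpose(-2, -1) / self.dk ** 0.5       # (b, nhead, s, s)
            scores = self.score_mlp(scores.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
            attn = scores.softmax(dim=-1)
            out = (attn @ v).transpose(1, 2).reshape(b, s, self.nhead * self.dk)
            return self.proj(out)

    print(MLPAttention()(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])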

I think that with slightly better prompts and better models it could conduct experiments fully autonomously, and that could happen this year.

Comments

Disposal8433•9mo ago
I would hate to fix that. Did you use Ruff on the result? That would help a lot.