
Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•2m ago•1 comment

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•3m ago•1 comment

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
1•1vuio0pswjnm7•5m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
1•1vuio0pswjnm7•7m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•9m ago•0 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•11m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•16m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•18m ago•1 comment

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•21m ago•1 comment

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•33m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•35m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•36m ago•1 comment

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•49m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•52m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•54m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1h ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
2•basilikum•1h ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•1h ago•1 comment

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•1h ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•1h ago•2 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•1h ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•1h ago•1 comment

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•1h ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•1h ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•1h ago•1 comment

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•1h ago•1 comment

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Show HN: I vibe-coded some unusual transformer models

https://github.com/killerstorm/expere/tree/master/non_autoregressive_transformer
1•killerstorm•9mo ago
Goals:

  * demonstrate that LLMs are smart enough to conduct ML experiments pretty much on their own
    * specifically, vibe-coding isn't just for web stuff
  * encourage people to conduct these small experiments
    * in particular, to get a better understanding of the concepts
Background: I had a linear algebra course in university, but no proper ML training. Nevertheless, five years ago things like AI Dungeon and GPT-3 got me really interested, and I started watching Yannic Kilcher videos to understand how they work. I even got some ideas for experiments with the transformer architecture, but actually performing them seemed a bit too tedious.

Enter vibe coding. Specifically, Claude Code. Is it smart enough to organize an experiment: prepare a data set, build a model, write the training code, debug it, etc.?

Basically, yes. It takes some effort to describe what you want and make sure it does not cheat, but Claude is smart enough to write model code from scratch.

Other models, like Gemini 2.5 Pro and o3, might be even better.

A lot of people believe that LLMs cannot write new code, only rehash existing code. I don't think that's true. It's hard to say with certainty that the code here was 100% unique, but it was at least rather unusual.

Anyway, here's what I did:

1. Encoder-only non-autoregressive transformer.

Pretty much all generative LLMs are based on the decoder-only autoregressive transformer architecture, which generates one token at a time. (I.e. to generate token n+1 it relies on data from tokens 1..n.) This type of transformer can be trained efficiently (the causal mask gives a training signal for each token using only a single forward pass), but the generation process is slow and inefficient. Gemini 2.5 Flash allows 1M tokens of input but only 65k tokens of output. You can't really transform a large amount of text.
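To make the asymmetry concrete, here is a minimal PyTorch sketch (my illustration, not code from the repo; it assumes a decoder-only model mapping (batch, seq) token ids to (batch, seq, vocab) logits). Training gets a loss at every position from one forward pass; generation pays one forward pass per emitted token:

    import torch
    import torch.nn.functional as F

    def training_step(model, tokens):
        # One forward pass scores every position: the causal mask inside
        # the model means position i only sees tokens 0..i, so logits[:, i]
        # is a prediction of tokens[:, i + 1].
        logits = model(tokens[:, :-1])            # (batch, seq-1, vocab)
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),  # flatten positions
            tokens[:, 1:].reshape(-1),            # shifted targets
        )

    @torch.no_grad()
    def generate(model, prompt, n_new):
        # Generation is inherently serial: one full forward pass per new
        # token (KV caching cuts the cost per step, not the serial loop).
        tokens = prompt
        for _ in range(n_new):
            logits = model(tokens)
            next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)
            tokens = torch.cat([tokens, next_tok], dim=1)
        return tokens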

But what if we directly generate the target sequence using just a single forward pass? I.e. instead of predicting the next token, we predict all the tokens of the output sequence at once. There's no fundamental reason it can't work, but it's more challenging, as the network has to keep track of both input and output token positions, etc.

And, well, the experiment shows it can work, at least for simple languages: in this example the transformer learned how to expand parentheses, e.g. for input "a*(b+c)" it generates "a*b+a*c". https://github.com/killerstorm/expere/tree/master/non_autore...
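For reference, a hypothetical minimal version of such a model in PyTorch (my sketch of the idea, not the repo's actual code; names and sizes are made up). The point is the forward pass: full bidirectional attention, no causal mask, and logits for every output position at once:

    import torch
    import torch.nn as nn

    class NonAutoregressiveTransformer(nn.Module):
        def __init__(self, vocab_size, max_len, d_model=128, n_heads=4, n_layers=4):
            super().__init__()
            self.tok_emb = nn.Embedding(vocab_size, d_model)
            self.pos_emb = nn.Embedding(max_len, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.head = nn.Linear(d_model, vocab_size)  # per-position logits

        def forward(self, src):                  # src: (batch, seq) token ids
            pos = torch.arange(src.size(1), device=src.device)
            h = self.tok_emb(src) + self.pos_emb(pos)
            h = self.encoder(h)                  # bidirectional, no causal mask
            return self.head(h)                  # (batch, seq, vocab) in one pass

Training is plain cross-entropy between these logits and the target sequence (input and target padded to a common length), so a single forward pass yields the whole output.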

I'm sure there's a better way to do it, but at least it's enough to confirm there's no fundamental reason it can't work. It took ~20 minutes to write the code, and the example trains in 2 minutes on an RTX 4070.

I tried a few more experiments:

2. Trying to improve attention by adding a small MLP on top of the per-head attention scores (a sketch of the idea follows below).

3. Making a hybrid between RWKV and a transformer.
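For experiment 2, the change would look roughly like this (again a hypothetical sketch of the idea, not the repo's code; the causal mask is omitted for brevity). A tiny MLP mixes the n_heads raw scores at each (query, key) pair before the softmax:

    import torch
    import torch.nn as nn

    class MLPScoreAttention(nn.Module):
        def __init__(self, d_model, n_heads, hidden=16):
            super().__init__()
            self.n_heads, self.d_head = n_heads, d_model // n_heads
            self.qkv = nn.Linear(d_model, 3 * d_model)
            self.out = nn.Linear(d_model, d_model)
            # Maps the vector of n_heads raw scores for each (query, key)
            # pair to n_heads adjusted scores.
            self.score_mlp = nn.Sequential(
                nn.Linear(n_heads, hidden), nn.GELU(), nn.Linear(hidden, n_heads)
            )

        def forward(self, x):                    # x: (batch, seq, d_model)
            b, t, _ = x.shape
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
                       for z in (q, k, v))       # (b, heads, t, d_head)
            scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5  # (b, h, t, t)
            # Heads to the last axis, mix them with the MLP, move back.
            scores = self.score_mlp(scores.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
            attn = scores.softmax(dim=-1)
            y = (attn @ v).transpose(1, 2).reshape(b, t, -1)
            return self.out(y)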

Both also worked well enough to start training and produce a plausible loss curve. (Though it took me >30 minutes to get Claude to fix the code; it had a bit more difficulty here.) Training a real language model takes a beefier GPU and more time, though, and I didn't wait for it to finish.

I think that with somewhat better prompts and better models, it could conduct experiments fully autonomously, and that could happen this year.

Comments

Disposal8433•9mo ago
I would hate to fix that. Did you use Ruff on the result? That would help a lot.