frontpage.

Reverse Engineering Medium.com's Editor: How Copy, Paste, and Images Work

https://app.writtte.com/read/gP0H6W5
1•birdculture•52s ago•0 comments

Go 1.22, SQLite, and Next.js: The "Boring" Back End

https://mohammedeabdelaziz.github.io/articles/go-next-pt-2
1•mohammede•6m ago•0 comments

Laibach the Whistleblowers [video]

https://www.youtube.com/watch?v=c6Mx2mxpaCY
1•KnuthIsGod•8m ago•1 comments

I replaced the front page with AI slop and honestly it's an improvement

https://slop-news.pages.dev/slop-news
1•keepamovin•12m ago•1 comments

Economists vs. Technologists on AI

https://ideasindevelopment.substack.com/p/economists-vs-technologists-on-ai
1•econlmics•14m ago•0 comments

Life at the Edge

https://asadk.com/p/edge
2•tosh•20m ago•0 comments

RISC-V Vector Primer

https://github.com/simplex-micro/riscv-vector-primer/blob/main/index.md
2•oxxoxoxooo•24m ago•1 comments

Show HN: Invoxo – Invoicing with automatic EU VAT for cross-border services

2•InvoxoEU•24m ago•0 comments

A Tale of Two Standards, POSIX and Win32 (2005)

https://www.samba.org/samba/news/articles/low_point/tale_two_stds_os2.html
2•goranmoomin•28m ago•0 comments

Ask HN: Has the Downfall of SaaS Started?

3•throwaw12•29m ago•0 comments

Flirt: The Native Backend

https://blog.buenzli.dev/flirt-native-backend/
2•senekor•31m ago•0 comments

OpenAI's Latest Platform Targets Enterprise Customers

https://aibusiness.com/agentic-ai/openai-s-latest-platform-targets-enterprise-customers
1•myk-e•33m ago•0 comments

Goldman Sachs taps Anthropic's Claude to automate accounting, compliance roles

https://www.cnbc.com/2026/02/06/anthropic-goldman-sachs-ai-model-accounting.html
2•myk-e•36m ago•5 comments

Ai.com bought by Crypto.com founder for $70M in biggest-ever website name deal

https://www.ft.com/content/83488628-8dfd-4060-a7b0-71b1bb012785
1•1vuio0pswjnm7•37m ago•1 comments

Big Tech's AI Push Is Costing More Than the Moon Landing

https://www.wsj.com/tech/ai/ai-spending-tech-companies-compared-02b90046
4•1vuio0pswjnm7•39m ago•0 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
2•1vuio0pswjnm7•40m ago•0 comments

Suno, AI Music, and the Bad Future [video]

https://www.youtube.com/watch?v=U8dcFhF0Dlk
1•askl•42m ago•2 comments

Ask HN: How are researchers using AlphaFold in 2026?

1•jocho12•45m ago•0 comments

Running the "Reflections on Trusting Trust" Compiler

https://spawn-queue.acm.org/doi/10.1145/3786614
1•devooops•50m ago•0 comments

Watermark API – $0.01/image, 10x cheaper than Cloudinary

https://api-production-caa8.up.railway.app/docs
1•lembergs•52m ago•1 comments

Now send your marketing campaigns directly from ChatGPT

https://www.mail-o-mail.com/
1•avallark•55m ago•1 comments

Queueing Theory v2: DORA metrics, queue-of-queues, chi-alpha-beta-sigma notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•1h ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
5•o8vm•1h ago•1 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•1h ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•1h ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•1h ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
2•helloplanets•1h ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•1h ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•1h ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1h ago•0 comments

Ask HN: Why do LLMs struggle with word count?

2•rishikeshs•5mo ago
I've noticed that most LLMs struggle to generate within a set word count. Any reason for this?

What is causing this limitation? If a basic online word count tool can do this, why can't these big companies do this?

Comments

viraptor•5mo ago
> Any reason for this?

They're not trained for that. And there's no good reason to improve it if you can instead rerun the paragraph saying "make this slightly shorter".

> If a basic online word count tool can do this

It's an entirely different technology and not comparable at all. If you want an actual word counter involved, that's not hard to integrate: a basic loop measures the output and feeds the count back so the LLM can shorten or lengthen the text automatically before returning it to you.
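A minimal sketch of such a loop, assuming an OpenAI-style Python client; the model name, prompts, and tolerance are just placeholders:

```python
# Rough sketch of the measure-and-feed-back loop: the word counting happens
# outside the model, and the measured count is fed back as a rewrite request.
# Assumes the openai Python package; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def generate_with_word_count(prompt, target, tolerance=5, max_rounds=5):
    instruction = f"{prompt}\n\nWrite roughly {target} words."
    text = ""
    for _ in range(max_rounds):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": instruction}],
        )
        text = response.choices[0].message.content
        count = len(text.split())  # the actual word counter
        if abs(count - target) <= tolerance:
            break
        direction = "shorter" if count > target else "longer"
        instruction = (
            f"This text is {count} words; rewrite it to be about {target} "
            f"words (make it {direction}):\n\n{text}"
        )
    return text
```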

nivertech•5mo ago
they don't see words, only tokens

and even with tokens, they can't count them at the LLM completion layer

they have to be trained with something like RLHF to handle word counting at the question-answering / instruction-following layer

or it can be handled at the application layer (so-called "agentic workflows"), e.g. by writing Python code to count words, or by calling a function or a CLI tool like "wc"
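For the application-layer version, the counter itself is trivial; a sketch of both options mentioned above (plain Python, or shelling out to "wc"), with the tool-registration details left to whatever agent framework is in use:

```python
# Word counting done outside the model, as something an agent could call.
# Two interchangeable implementations: pure Python, or the Unix "wc" CLI.
import subprocess

def count_words(text: str) -> int:
    """Count whitespace-separated words in Python."""
    return len(text.split())

def count_words_wc(text: str) -> int:
    """Count words by piping the text to `wc -w`."""
    result = subprocess.run(
        ["wc", "-w"], input=text, capture_output=True, text=True, check=True
    )
    return int(result.stdout.strip())
```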

geophph•5mo ago
The M stands for Model, not Math.

giveita•5mo ago
Same reason Pavlov's dog can't count either.

gobdovan•5mo ago
For LLMs, it's a meta-cognition task. Before they see anything, all text gets cut into pieces called tokens. Tokens can contain letters, spaces, and punctuation, but LLMs never see the actual characters; they only see the tokens. And by "seeing" tokens, I mean the tokenizer effectively says: I have a dictionary from text to tokens, and I won't even show you the token's text, just its position in the dictionary. For example, instead of showing "cat;", it just hands over entry #48712. The model has to deal with the rest.
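You can see this dictionary-lookup view directly; a small illustration assuming the tiktoken library (the exact IDs and splits depend on the tokenizer):

```python
# Shows what the model actually receives: integer token IDs, not words.
# Assumes the tiktoken package; cl100k_base is just an example encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "The cat; sat on the mat."
token_ids = enc.encode(text)
pieces = [enc.decode([tid]) for tid in token_ids]

print(token_ids)   # a list of integers, one per token
print(pieces)      # the text fragments those IDs stand for
print(len(text.split()), "words,", len(token_ids), "tokens")
```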

So to count words accurately, they'd need to do complex recall over the language structure they were trained on.

My picture of LLMs is this: I like to imagine that what they do is close to us trying to learn an alien language from nothing but a dictionary. We couldn't ground anything in reality, and we might not even know where words start or end in the definitions, but we could pattern-match enough to be useful to an alien sending us text queries.

I also asked GPT for a metaphor, and it came back with these:

- It’s like trying to clap to music and being asked, “Make it 100 words worth of claps.” You’re working with rhythm, not actual word units, so your sense of count is fuzzy.

- LLMs are excellent at flowing language but bad at rigid constraints — like a jazz musician who can improvise beautifully but can’t stop exactly on the 137th note without counting.