Ask HN: Why do LLMs struggle with word count?

2•rishikeshs•5mo ago
I've noticed that most LLMs struggle to generate within a set word count. Any reason for this?

What is causing this limitation? If a basic online word count tool can do this, why can't these big companies?

Comments

viraptor•5mo ago
> Any reason for this?

They're not trained for that. And there's no good reason to improve it if you can instead rerun the paragraph saying "make this slightly shorter".

> If a basic online word count tool can do this

It's an entirely different technology and not comparable at all. If you want an actual word counter involved, it's not hard to integrate: a basic loop measures the output and feeds the count back, so the LLM can shorten or lengthen the text automatically before returning it to you.
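
A minimal sketch of that loop in Python, assuming a hypothetical llm_complete(prompt) function standing in for whatever LLM client you actually use:

    # Count-and-retry loop around an LLM call.
    # llm_complete(prompt) is a hypothetical stand-in for your real client.
    def generate_with_word_count(topic, target, tolerance=10, max_tries=5):
        text = llm_complete(f"Write about {topic} in roughly {target} words.")
        for _ in range(max_tries):
            count = len(text.split())  # same arithmetic as any word-count tool
            if abs(count - target) <= tolerance:
                break
            direction = "Shorten" if count > target else "Lengthen"
            text = llm_complete(
                f"{direction} the following text to about {target} words "
                f"(it is currently {count} words):\n\n{text}"
            )
        return text  # best effort after max_tries

The LLM never counts anything itself; the exact count comes from ordinary string splitting, and the model only has to follow a "make it shorter/longer" instruction, which it is good at.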

nivertech•5mo ago
they don't see words, only tokens

and even with tokens they don't know how to count them at the LLM completion layer

they have to be trained, with something like RLHF, to count words at the question-answering / instruction-following layer

or it has to be handled at the application layer (so-called "agentic workflows"), e.g. writing Python code to count words, or calling a function or a CLI tool like "wc"
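
For illustration, a hedged sketch of that application-layer idea, with a made-up count_words tool the model could call instead of counting itself (the second variant shells out to the real "wc" CLI tool):

    import subprocess

    def count_words(text):
        # What a word-count website does: split on whitespace.
        return len(text.split())

    def count_words_wc(text):
        # Or delegate to the classic CLI tool.
        result = subprocess.run(["wc", "-w"], input=text,
                                capture_output=True, text=True, check=True)
        return int(result.stdout.strip())

Either function gives the agent an exact number it can feed back into the prompt, which sidesteps the token-level blindness entirely.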

geophph•5mo ago
The M stands for Model, not Math.

giveita•5mo ago
Same reason Pavlov's dog can't count either.

gobdovan•5mo ago
For LLMs, it's a meta-cognition task. Before they see anything, all text gets cut into pieces called tokens. Tokens can contain letters, spaces, and punctuation. LLMs never see the actual punctuation or spaces, only these tokens. And by "seeing" tokens, I mean the tokenizer effectively says: I have a dictionary from text to tokens, and I won't even show you the token's text, just its position in the dictionary. For example, instead of showing "cat;", it just hands over entry #48712. The model has to deal with the rest.

So to count words accurately, they'd need to do complex recall over the language structures they were trained on.

My picture of LLMs is this: I like to imagine that what they do is close to us trying to learn an alien language from a dictionary alone. We couldn't ground anything in reality, and we might not even know where words start or end in the definitions, but we could pattern-match enough to be useful to an alien sending us text queries.
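
You can see the dictionary-entry effect directly with OpenAI's tiktoken library (a small sketch; the exact ids and splits depend on the encoding, so treat the numbers as illustrative):

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    text = "The cat; sat on the mat."

    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]

    print(len(text.split()), "words")  # what a word counter sees: 6
    print(len(ids), "tokens")          # what the model sees
    print(list(zip(ids, pieces)))      # ids paired with their text

The word count and the token count generally disagree, and punctuation often rides along inside a word's token, which is exactly why "count the words" is not a native operation for the model.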

I also asked GPT for a metaphor, and it came back with these:

- It’s like trying to clap along to music and being asked, “Make it 100 words’ worth of claps.” You’re working with rhythm, not actual word units, so your sense of count is fuzzy.

- LLMs are excellent at flowing language but bad at rigid constraints — like a jazz musician who can improvise beautifully but can’t stop exactly on the 137th note without counting.