frontpage.

Warcraftcn/UI – UI component library inspired by classic Warcraft III aesthetics

https://www.warcraftcn.com/
1•vyrotek•46s ago•0 comments

Trump Vodka Becomes Available for Pre-Orders

https://www.forbes.com/sites/kirkogunrinde/2025/12/01/trump-vodka-becomes-available-for-pre-order...
1•stopbulying•2m ago•0 comments

Velocity of Money

https://en.wikipedia.org/wiki/Velocity_of_money
1•gurjeet•4m ago•0 comments

Stop building automations. Start running your business

https://www.fluxtopus.com/automate-your-business
1•valboa•8m ago•1 comment

You can't QA your way to the frontier

https://www.scorecard.io/blog/you-cant-qa-your-way-to-the-frontier
1•gk1•9m ago•0 comments

Show HN: PalettePoint – AI color palette generator from text or images

https://palettepoint.com
1•latentio•10m ago•0 comments

Robust and Interactable World Models in Computer Vision [video]

https://www.youtube.com/watch?v=9B4kkaGOozA
1•Anon84•14m ago•0 comments

Nestlé couldn't crack Japan's coffee market. Then they hired a child psychologist

https://twitter.com/BigBrainMkting/status/2019792335509541220
1•rmason•15m ago•0 comments

Notes for February 2-7

https://taoofmac.com/space/notes/2026/02/07/2000
2•rcarmo•17m ago•0 comments

Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
2•Willingham•24m ago•0 comments

The Big Hunger by Walter J Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
2•shervinafshar•25m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•30m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
9•mooreds•30m ago•2 comments

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•32m ago•0 comments

Ask HN: How to Reduce Time Spent Crimping?

2•pinkmuffinere•33m ago•0 comments

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•38m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•39m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•39m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•40m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
4•archb•42m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•42m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•43m ago•0 comments

Show HN: RMA Dashboard – fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•43m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•49m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
4•dragandj•50m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•51m ago•1 comment

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•52m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•53m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•53m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•56m ago•0 comments

LLMZip: Lossless Text Compression Using Large Language Models

https://arxiv.org/abs/2306.04050
2•jfantl•3mo ago

Comments

hamsic•3mo ago
"Lossless" does not mean that the LLM can accurately reconstruct human-written sentences. Rather, it means that the LLM generates a fully reproducible bitstream based on its own predicted probability distribution.

Reconstructing human-written sentences accurately is impossible because it requires modeling the "true source"—the human brain state (memory, emotion, etc.)—rather than the LLM itself.

Instead, a practical approach is to reconstruct the LLM output itself based on seeds or to store it in a compressible probabilistic structure.

DoctorOetker•3mo ago
It's unclear what you're claiming lossless compression does or doesn't do, especially since you bring up storing an RNG seed at the end of your comment.

"LLMZip: Lossless Text Compression Using Large Language Models"

This implies they use the LLM's next-token probability distribution to sort the candidate tokens by likelihood. The higher the actual next token from the input stream (human-generated or not) ranks in that sorted list, counting from the top, the fewer bits are needed to encode its position; so the better the LLM predicts the true probability of the next token, the better it will compress human-generated text in general.

Do you deny LLMs can be used this way for lossless compression?

Such a system can accurately reconstruct the uncompressed original input text (say generated by a human) from its compressed form.
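
To make the rank-coding idea concrete, here is a minimal sketch in Python. A tiny bigram frequency model stands in for the LLM (which would supply much sharper next-token rankings), and Elias-gamma codes stand in for a proper entropy coder; both are illustrative choices for this thread, not LLMZip's actual pipeline.

    # Rank-based lossless compression driven by a shared predictive model.
    from collections import Counter

    ALPHABET = sorted(set("abcdefghijklmnopqrstuvwxyz "))

    def build_model(training_text):
        # Bigram counts shared by encoder and decoder (the "LLM" stand-in).
        return Counter(zip(training_text, training_text[1:]))

    def ranked_candidates(model, prev_char):
        # All symbols, most likely first, given the previous character.
        return sorted(ALPHABET, key=lambda c: (-model.get((prev_char, c), 0), c))

    def gamma_encode(n):
        # Elias gamma code for n >= 1: small numbers cost few bits.
        b = bin(n)[2:]
        return "0" * (len(b) - 1) + b

    def compress(text, model):
        bits, prev = "", " "                 # fixed starting context
        for ch in text:
            rank = ranked_candidates(model, prev).index(ch)   # 0 = top pick
            bits += gamma_encode(rank + 1)
            prev = ch
        return bits

    def decompress(bits, model, length):
        out, prev, i = [], " ", 0
        for _ in range(length):
            zeros = 0
            while bits[i] == "0":            # read the gamma-code prefix
                zeros += 1
                i += 1
            rank = int(bits[i:i + zeros + 1], 2) - 1
            i += zeros + 1
            ch = ranked_candidates(model, prev)[rank]
            out.append(ch)
            prev = ch
        return "".join(out)

    model = build_model("the quick brown fox jumps over the lazy dog " * 20)
    original = "the lazy fox jumps over the dog"
    encoded = compress(original, model)
    assert decompress(encoded, model, len(original)) == original   # lossless
    print(len(original) * 8, "raw bits ->", len(encoded), "coded bits")

Because encoder and decoder replay the same model with the same context, the ranks decode back to exactly the original text, which is all that "lossless" promises here; a better predictor only changes how few bits the ranks cost.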

hamsic•3mo ago
Sure, a model-based coder can losslessly compress any token stream. I just meant that for human-written text, the model’s prediction diverges from how the text was actually produced — so the compression is formally lossless, but not semantically faithful or efficient.
DoctorOetker•3mo ago
This is from 2023 (not a complaint, just observing that the result might be stale and even lower upper bounds may have been achieved).

It's quite curious to consider the connection between compression and intelligence. It's hard to quantify comprehension, i.e. how do you tell whether a system actually comprehends some data? Lossless compression rates are very attractive, since the task is to lose no data while squeezing it as close as possible to its information content.

It does raise other questions though: which corpus is considered representative? A base model without finetuning might be more vulgar but also more effective at compressing the comparatively vulgar human corpus. An RLHF (or similar) reinforced, pretty-prompted chatbot, however, will be very good at compressing its own outputs but less good at compressing the actual, vile human corpus. Both the base model and the aligned model will still be relatively good at compressing each other's output, but each will excel at compressing its own implicit corpus.

Another question: as the bits-per-character upper bound falls monotonically, it will suffer diminishing returns. How does one square that with the proposal that lossless compression corresponds to intelligence? It would clearly not be a linear correspondence, and it suggests that one would need exponentially larger and larger corpora to beat the prior compression rates.

How long can it write before repeating itself?

====

It also raises lots of societal questions: at less than 1 bit per character, how many characters are in Library Genesis / Anna's Archive, etc.?
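
A rough, purely illustrative calculation for that last question, assuming a made-up round corpus size of 10^14 characters (not a real measurement of Library Genesis or Anna's Archive); only the sub-1-bit-per-character framing comes from the thread:

    # Back-of-the-envelope storage at different compression rates.
    CORPUS_CHARS = 1e14                      # hypothetical corpus size
    for bpc in (8.0, 1.0, 0.7):              # raw ASCII vs. two model rates
        terabytes = CORPUS_CHARS * bpc / 8 / 1e12
        print(f"{bpc:>4} bits/char -> {terabytes:,.2f} TB")

Going from 8 to 1 bits per character saves far more in absolute terms than going from 1 to 0.7, which is the diminishing-returns effect raised above.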